Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'dt' of git://github.com/hzhuang1/linux into next/dt

* 'dt' of git://github.com/hzhuang1/linux:
Documentation: update docs for mmp dt
ARM: dts: refresh dts file for arch mmp
ARM: mmp: support pxa910 with device tree
ARM: mmp: support mmp2 with device tree
gpio: pxa: parse gpio from DTS file
ARM: mmp: support DT in timer
ARM: mmp: support DT in irq
ARM: mmp: append CONFIG_MACH_MMP2_DT
ARM: mmp: fix build issue on mmp with device tree

Includes an update to v3.4-rc5

Signed-off-by: Arnd Bergmann <arnd@arndb.de>

+3845 -1898
+19
Documentation/ABI/testing/sysfs-bus-hsi
···
+ What:		/sys/bus/hsi
+ Date:		April 2012
+ KernelVersion:	3.4
+ Contact:	Carlos Chinea <carlos.chinea@nokia.com>
+ Description:
+		High Speed Synchronous Serial Interface (HSI) is a
+		serial interface mainly used for connecting application
+		engines (APE) with cellular modem engines (CMT) in cellular
+		handsets.
+		The bus will be populated with devices (hsi_clients) representing
+		the protocols available in the system. Bus drivers implement
+		those protocols.
+
+ What:		/sys/bus/hsi/devices/.../modalias
+ Date:		April 2012
+ KernelVersion:	3.4
+ Contact:	Carlos Chinea <carlos.chinea@nokia.com>
+ Description:	Stores the same MODALIAS value emitted by uevent
+		Format: hsi:<hsi_client device name>
+8
Documentation/devicetree/bindings/arm/mrvl.txt Documentation/devicetree/bindings/arm/mrvl/mrvl.txt
···
  PXA168 Aspenite Board
  Required root node properties:
   - compatible = "mrvl,pxa168-aspenite", "mrvl,pxa168";
+
+ PXA910 DKB Board
+ Required root node properties:
+  - compatible = "mrvl,pxa910-dkb";
+
+ MMP2 Brownstone Board
+ Required root node properties:
+  - compatible = "mrvl,mmp2-brownstone";
+40
Documentation/devicetree/bindings/arm/mrvl/intc.txt
···
+ * Marvell MMP Interrupt controller
+
+ Required properties:
+ - compatible : Should be "mrvl,mmp-intc", "mrvl,mmp2-intc" or
+   "mrvl,mmp2-mux-intc"
+ - reg : Address and length of the register set of the interrupt controller.
+   For an intc, the address and length cover the range of the whole interrupt
+   controller. For a mux-intc, the address and length cover one register;
+   since the address of a mux-intc falls within the range of the intc, a
+   mux-intc is a secondary interrupt controller.
+ - reg-names : Name of the register set of the interrupt controller. Only
+   required for a mux-intc interrupt controller.
+ - interrupts : Should be the port interrupt shared by mux interrupts. Only
+   required for a mux-intc interrupt controller.
+ - interrupt-controller : Identifies the node as an interrupt controller.
+ - #interrupt-cells : Specifies the number of cells needed to encode an
+   interrupt source.
+ - mrvl,intc-nr-irqs : Specifies the number of interrupts in the interrupt
+   controller.
+ - mrvl,clr-mfp-irq : Specifies the interrupt that needs to clear MFP edge
+   detection first.
+
+ Example:
+	intc: interrupt-controller@d4282000 {
+		compatible = "mrvl,mmp2-intc";
+		interrupt-controller;
+		#interrupt-cells = <1>;
+		reg = <0xd4282000 0x1000>;
+		mrvl,intc-nr-irqs = <64>;
+	};
+
+	intcmux4@d4282150 {
+		compatible = "mrvl,mmp2-mux-intc";
+		interrupts = <4>;
+		interrupt-controller;
+		#interrupt-cells = <1>;
+		reg = <0x150 0x4>, <0x168 0x4>;
+		reg-names = "mux status", "mux mask";
+		mrvl,intc-nr-irqs = <2>;
+	};
+13
Documentation/devicetree/bindings/arm/mrvl/timer.txt
···
+ * Marvell MMP Timer controller
+
+ Required properties:
+ - compatible : Should be "mrvl,mmp-timer".
+ - reg : Address and length of the register set of the timer controller.
+ - interrupts : Should be the interrupt number.
+
+ Example:
+	timer0: timer@d4014000 {
+		compatible = "mrvl,mmp-timer";
+		reg = <0xd4014000 0x100>;
+		interrupts = <13>;
+	};
+12 -6
Documentation/devicetree/bindings/gpio/mrvl-gpio.txt
···
  Required properties:
  - compatible : Should be "mrvl,pxa-gpio" or "mrvl,mmp-gpio"
  - reg : Address and length of the register set for the device
- - interrupts : Should be the port interrupt shared by all gpio pins, if
- - interrupt-name : Should be the name of irq resource.
-   one number.
+ - interrupts : Should be the port interrupt shared by all gpio pins.
+   There are three gpio interrupts in arch-pxa: gpio0, gpio1 and
+   gpio_mux. There is only one gpio interrupt in arch-mmp: gpio_mux.
+ - interrupt-name : Should be the name of the irq resource. Each interrupt
+   is bound to its interrupt-name.
+ - interrupt-controller : Identifies the node as an interrupt controller.
+ - #interrupt-cells : Specifies the number of cells needed to encode an
+   interrupt source.
  - gpio-controller : Marks the device node as a gpio controller.
  - #gpio-cells : Should be one. It is the pin number.

  Example:

	gpio: gpio@d4019000 {
-		compatible = "mrvl,mmp-gpio", "mrvl,pxa-gpio";
+		compatible = "mrvl,mmp-gpio";
		reg = <0xd4019000 0x1000>;
-		interrupts = <49>, <17>, <18>;
-		interrupt-name = "gpio_mux", "gpio0", "gpio1";
+		interrupts = <49>;
+		interrupt-name = "gpio_mux";
		gpio-controller;
		#gpio-cells = <1>;
		interrupt-controller;
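The binding's example only covers the arch-mmp case with a single gpio_mux interrupt. For the arch-pxa case described above, all three shared interrupts would be listed with their names; a sketch, where the register address and interrupt numbers are hypothetical placeholders (they vary per SoC and are not taken from this patch set):

```dts
	gpio: gpio@40e00000 {
		compatible = "mrvl,pxa-gpio";
		reg = <0x40e00000 0x1000>;
		/* hypothetical IRQ numbers; check the SoC's interrupt map */
		interrupts = <8>, <9>, <10>;
		interrupt-name = "gpio0", "gpio1", "gpio_mux";
		gpio-controller;
		#gpio-cells = <1>;
		interrupt-controller;
		#interrupt-cells = <1>;
	};
```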
+6 -9
Documentation/devicetree/bindings/i2c/mrvl-i2c.txt
···
  Required properties :

  - reg : Offset and length of the register set for the device
- - compatible : should be "mrvl,mmp-twsi" where CHIP is the name of a
+ - compatible : should be "mrvl,mmp-twsi" where mmp is the name of a
    compatible processor, e.g. pxa168, pxa910, mmp2, mmp3.
    For the pxa2xx/pxa3xx, an additional node "mrvl,pxa-i2c" is required
    as shown in the example below.

  Recommended properties :

- - interrupts : <a b> where a is the interrupt number and b is a
-   field that represents an encoding of the sense and level
-   information for the interrupt. This should be encoded based on
-   the information in section 2) depending on the type of interrupt
-   controller you have.
+ - interrupts : the interrupt number
  - interrupt-parent : the phandle for the interrupt controller that
-   services interrupts for this device.
+   services interrupts for this device. If the parent is the default
+   interrupt controller in device tree, it could be ignored.
  - mrvl,i2c-polling : Disable interrupt of i2c controller. Polling
    status register of i2c controller instead.
  - mrvl,i2c-fast-mode : Enable fast mode of i2c controller.

  Examples:
	twsi1: i2c@d4011000 {
-		compatible = "mrvl,mmp-twsi", "mrvl,pxa-i2c";
+		compatible = "mrvl,mmp-twsi";
		reg = <0xd4011000 0x1000>;
		interrupts = <7>;
		mrvl,i2c-fast-mode;
	};

	twsi2: i2c@d4025000 {
-		compatible = "mrvl,mmp-twsi", "mrvl,pxa-i2c";
+		compatible = "mrvl,mmp-twsi";
		reg = <0xd4025000 0x1000>;
		interrupts = <58>;
	};
+19 -18
Documentation/power/freezing-of-tasks.txt
···
  II. How does it work?

- There are four per-task flags used for that, PF_NOFREEZE, PF_FROZEN, TIF_FREEZE
+ There are three per-task flags used for that, PF_NOFREEZE, PF_FROZEN
  and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have
  PF_NOFREEZE unset (all user space processes and some kernel threads) are
  regarded as 'freezable' and treated in a special way before the system enters a
···
  we only consider hibernation, but the description also applies to suspend).

  Namely, as the first step of the hibernation procedure the function
- freeze_processes() (defined in kernel/power/process.c) is called. It executes
- try_to_freeze_tasks() that sets TIF_FREEZE for all of the freezable tasks and
- either wakes them up, if they are kernel threads, or sends fake signals to them,
- if they are user space processes. A task that has TIF_FREEZE set, should react
- to it by calling the function called __refrigerator() (defined in
- kernel/freezer.c), which sets the task's PF_FROZEN flag, changes its state
- to TASK_UNINTERRUPTIBLE and makes it loop until PF_FROZEN is cleared for it.
- Then, we say that the task is 'frozen' and therefore the set of functions
- handling this mechanism is referred to as 'the freezer' (these functions are
- defined in kernel/power/process.c, kernel/freezer.c & include/linux/freezer.h).
- User space processes are generally frozen before kernel threads.
+ freeze_processes() (defined in kernel/power/process.c) is called. A system-wide
+ variable system_freezing_cnt (as opposed to a per-task flag) is used to indicate
+ whether the system is to undergo a freezing operation. And freeze_processes()
+ sets this variable. After this, it executes try_to_freeze_tasks() that sends a
+ fake signal to all user space processes, and wakes up all the kernel threads.
+ All freezable tasks must react to that by calling try_to_freeze(), which
+ results in a call to __refrigerator() (defined in kernel/freezer.c), which sets
+ the task's PF_FROZEN flag, changes its state to TASK_UNINTERRUPTIBLE and makes
+ it loop until PF_FROZEN is cleared for it. Then, we say that the task is
+ 'frozen' and therefore the set of functions handling this mechanism is referred
+ to as 'the freezer' (these functions are defined in kernel/power/process.c,
+ kernel/freezer.c & include/linux/freezer.h). User space processes are generally
+ frozen before kernel threads.

  __refrigerator() must not be called directly. Instead, use the
  try_to_freeze() function (defined in include/linux/freezer.h), that checks
- the task's TIF_FREEZE flag and makes the task enter __refrigerator() if the
- flag is set.
+ if the task is to be frozen and makes the task enter __refrigerator().

  For user space processes try_to_freeze() is called automatically from the
  signal-handling code, but the freezable kernel threads need to call it
  explicitly in suitable places or use the wait_event_freezable() or
  wait_event_freezable_timeout() macros (defined in include/linux/freezer.h)
- that combine interruptible sleep with checking if TIF_FREEZE is set and calling
- try_to_freeze(). The main loop of a freezable kernel thread may look like the
- following one:
+ that combine interruptible sleep with checking if the task is to be frozen and
+ calling try_to_freeze(). The main loop of a freezable kernel thread may look
+ like the following one:

	set_freezable();
	do {
···
  (from drivers/usb/core/hub.c::hub_thread()).

  If a freezable kernel thread fails to call try_to_freeze() after the freezer has
- set TIF_FREEZE for it, the freezing of tasks will fail and the entire
+ initiated a freezing operation, the freezing of tasks will fail and the entire
  hibernation operation will be cancelled. For this reason, freezable kernel
  threads must call try_to_freeze() somewhere or use one of the
  wait_event_freezable() and wait_event_freezable_timeout() macros.
+13 -1
Documentation/security/keys.txt
···
  The key service provides a number of features besides keys:

- (*) The key service defines two special key types:
+ (*) The key service defines three special key types:

      (+) "keyring"
···
	 A key of this type has a description and a payload that are arbitrary
	 blobs of data. These can be created, updated and read by userspace,
	 and aren't intended for use by kernel services.
+
+     (+) "logon"
+
+	 Like a "user" key, a "logon" key has a payload that is an arbitrary
+	 blob of data. It is intended as a place to store secrets which are
+	 accessible to the kernel but not to userspace programs.
+
+	 The description can be arbitrary, but must be prefixed with a non-zero
+	 length string that describes the key "subclass". The subclass is
+	 separated from the rest of the description by a ':'. "logon" keys can
+	 be created and updated from userspace, but the payload is only
+	 readable from kernel space.

  (*) Each process subscribes to three keyrings: a thread-specific keyring, a
      process-specific keyring, and a session-specific keyring.
+3 -2
MAINTAINERS
···
  F:	drivers/net/wireless/iwlegacy/

  INTEL WIRELESS WIFI LINK (iwlwifi)
+ M:	Johannes Berg <johannes.berg@intel.com>
  M:	Wey-Yi Guy <wey-yi.w.guy@intel.com>
  M:	Intel Linux Wireless <ilw@linux.intel.com>
  L:	linux-wireless@vger.kernel.org
···
  F:	fs/xfs/

  XILINX AXI ETHERNET DRIVER
- M:	Ariane Keller <ariane.keller@tik.ee.ethz.ch>
- M:	Daniel Borkmann <daniel.borkmann@tik.ee.ethz.ch>
+ M:	Anirudha Sarangi <anirudh@xilinx.com>
+ M:	John Linn <John.Linn@xilinx.com>
  S:	Maintained
  F:	drivers/net/ethernet/xilinx/xilinx_axienet*
+1 -1
Makefile
···
  VERSION = 3
  PATCHLEVEL = 4
  SUBLEVEL = 0
- EXTRAVERSION = -rc4
+ EXTRAVERSION = -rc5
  NAME = Saber-toothed Squirrel

  # *DOCUMENTATION*
+1
arch/arm/Kconfig
···
	select CLKDEV_LOOKUP
	select GENERIC_CLOCKEVENTS
	select GPIO_PXA
+	select IRQ_DOMAIN
	select TICK_ONESHOT
	select PLAT_PXA
	select SPARSE_IRQ
+38
arch/arm/boot/dts/mmp2-brownstone.dts
···
+ /*
+  * Copyright (C) 2012 Marvell Technology Group Ltd.
+  * Author: Haojian Zhuang <haojian.zhuang@marvell.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
+
+ /dts-v1/;
+ /include/ "mmp2.dtsi"
+
+ / {
+	model = "Marvell MMP2 Brownstone Development Board";
+	compatible = "mrvl,mmp2-brownstone", "mrvl,mmp2";
+
+	chosen {
+		bootargs = "console=ttyS2,38400 root=/dev/nfs nfsroot=192.168.1.100:/nfsroot/ ip=192.168.1.101:192.168.1.100::255.255.255.0::eth0:on";
+	};
+
+	memory {
+		reg = <0x00000000 0x04000000>;
+	};
+
+	soc {
+		apb@d4000000 {
+			uart3: uart@d4018000 {
+				status = "okay";
+			};
+			twsi1: i2c@d4011000 {
+				status = "okay";
+			};
+			rtc: rtc@d4010000 {
+				status = "okay";
+			};
+		};
+	};
+ };
+220
arch/arm/boot/dts/mmp2.dtsi
···
+ /*
+  * Copyright (C) 2012 Marvell Technology Group Ltd.
+  * Author: Haojian Zhuang <haojian.zhuang@marvell.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
+
+ /include/ "skeleton.dtsi"
+
+ / {
+	aliases {
+		serial0 = &uart1;
+		serial1 = &uart2;
+		serial2 = &uart3;
+		serial3 = &uart4;
+		i2c0 = &twsi1;
+		i2c1 = &twsi2;
+	};
+
+	soc {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		compatible = "simple-bus";
+		interrupt-parent = <&intc>;
+		ranges;
+
+		axi@d4200000 {	/* AXI */
+			compatible = "mrvl,axi-bus", "simple-bus";
+			#address-cells = <1>;
+			#size-cells = <1>;
+			reg = <0xd4200000 0x00200000>;
+			ranges;
+
+			intc: interrupt-controller@d4282000 {
+				compatible = "mrvl,mmp2-intc";
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0xd4282000 0x1000>;
+				mrvl,intc-nr-irqs = <64>;
+			};
+
+			intcmux4@d4282150 {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <4>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x150 0x4>, <0x168 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <2>;
+			};
+
+			intcmux5: interrupt-controller@d4282154 {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <5>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x154 0x4>, <0x16c 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <2>;
+				mrvl,clr-mfp-irq = <1>;
+			};
+
+			intcmux9: interrupt-controller@d4282180 {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <9>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x180 0x4>, <0x17c 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <3>;
+			};
+
+			intcmux17: interrupt-controller@d4282158 {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <17>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x158 0x4>, <0x170 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <5>;
+			};
+
+			intcmux35: interrupt-controller@d428215c {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <35>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x15c 0x4>, <0x174 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <15>;
+			};
+
+			intcmux51: interrupt-controller@d4282160 {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <51>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x160 0x4>, <0x178 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <2>;
+			};
+
+			intcmux55: interrupt-controller@d4282188 {
+				compatible = "mrvl,mmp2-mux-intc";
+				interrupts = <55>;
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0x188 0x4>, <0x184 0x4>;
+				reg-names = "mux status", "mux mask";
+				mrvl,intc-nr-irqs = <2>;
+			};
+		};
+
+		apb@d4000000 {	/* APB */
+			compatible = "mrvl,apb-bus", "simple-bus";
+			#address-cells = <1>;
+			#size-cells = <1>;
+			reg = <0xd4000000 0x00200000>;
+			ranges;
+
+			timer0: timer@d4014000 {
+				compatible = "mrvl,mmp-timer";
+				reg = <0xd4014000 0x100>;
+				interrupts = <13>;
+			};
+
+			uart1: uart@d4030000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4030000 0x1000>;
+				interrupts = <27>;
+				status = "disabled";
+			};
+
+			uart2: uart@d4017000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4017000 0x1000>;
+				interrupts = <28>;
+				status = "disabled";
+			};
+
+			uart3: uart@d4018000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4018000 0x1000>;
+				interrupts = <24>;
+				status = "disabled";
+			};
+
+			uart4: uart@d4016000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4016000 0x1000>;
+				interrupts = <46>;
+				status = "disabled";
+			};
+
+			gpio@d4019000 {
+				compatible = "mrvl,mmp-gpio";
+				#address-cells = <1>;
+				#size-cells = <1>;
+				reg = <0xd4019000 0x1000>;
+				gpio-controller;
+				#gpio-cells = <2>;
+				interrupts = <49>;
+				interrupt-names = "gpio_mux";
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				ranges;
+
+				gcb0: gpio@d4019000 {
+					reg = <0xd4019000 0x4>;
+				};
+
+				gcb1: gpio@d4019004 {
+					reg = <0xd4019004 0x4>;
+				};
+
+				gcb2: gpio@d4019008 {
+					reg = <0xd4019008 0x4>;
+				};
+
+				gcb3: gpio@d4019100 {
+					reg = <0xd4019100 0x4>;
+				};
+
+				gcb4: gpio@d4019104 {
+					reg = <0xd4019104 0x4>;
+				};
+
+				gcb5: gpio@d4019108 {
+					reg = <0xd4019108 0x4>;
+				};
+			};
+
+			twsi1: i2c@d4011000 {
+				compatible = "mrvl,mmp-twsi";
+				reg = <0xd4011000 0x1000>;
+				interrupts = <7>;
+				mrvl,i2c-fast-mode;
+				status = "disabled";
+			};
+
+			twsi2: i2c@d4025000 {
+				compatible = "mrvl,mmp-twsi";
+				reg = <0xd4025000 0x1000>;
+				interrupts = <58>;
+				status = "disabled";
+			};
+
+			rtc: rtc@d4010000 {
+				compatible = "mrvl,mmp-rtc";
+				reg = <0xd4010000 0x1000>;
+				interrupts = <1 0>;
+				interrupt-names = "rtc 1Hz", "rtc alarm";
+				interrupt-parent = <&intcmux5>;
+				status = "disabled";
+			};
+		};
+	};
+ };
+2 -2
arch/arm/boot/dts/msm8660-surf.dts
···
	intc: interrupt-controller@02080000 {
		compatible = "qcom,msm-8660-qgic";
		interrupt-controller;
-		#interrupt-cells = <1>;
+		#interrupt-cells = <3>;
		reg = < 0x02080000 0x1000 >,
		      < 0x02081000 0x1000 >;
	};
···
		compatible = "qcom,msm-hsuart", "qcom,msm-uart";
		reg = <0x19c40000 0x1000>,
		      <0x19c00000 0x1000>;
-		interrupts = <195>;
+		interrupts = <0 195 0x0>;
	};
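The change above moves the qgic node to the standard three-cell ARM GIC interrupt encoding, where each specifier is <type number flags>: type 0 is a shared peripheral interrupt (SPI), type 1 a private peripheral interrupt (PPI), and the flags cell encodes the trigger sense. A commented sketch of the new uart specifier:

```dts
	serial@19c40000 {
		/* <type number flags>: 0 = SPI, IRQ 195, 0x0 = default sense */
		interrupts = <0 195 0x0>;
	};
```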
+51 -16
arch/arm/boot/dts/pxa168.dtsi
···
		i2c1 = &twsi2;
	};

-	intc: intc-interrupt-controller@d4282000 {
-		compatible = "mrvl,mmp-intc", "mrvl,intc";
-		interrupt-controller;
-		#interrupt-cells = <1>;
-		reg = <0xd4282000 0x1000>;
-	};
-
	soc {
		#address-cells = <1>;
		#size-cells = <1>;
		compatible = "simple-bus";
		interrupt-parent = <&intc>;
		ranges;
+
+		axi@d4200000 {	/* AXI */
+			compatible = "mrvl,axi-bus", "simple-bus";
+			#address-cells = <1>;
+			#size-cells = <1>;
+			reg = <0xd4200000 0x00200000>;
+			ranges;
+
+			intc: interrupt-controller@d4282000 {
+				compatible = "mrvl,mmp-intc";
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0xd4282000 0x1000>;
+				mrvl,intc-nr-irqs = <64>;
+			};
+
+		};

		apb@d4000000 {	/* APB */
			compatible = "mrvl,apb-bus", "simple-bus";
···
			reg = <0xd4000000 0x00200000>;
			ranges;

+			timer0: timer@d4014000 {
+				compatible = "mrvl,mmp-timer";
+				reg = <0xd4014000 0x100>;
+				interrupts = <13>;
+			};
+
			uart1: uart@d4017000 {
-				compatible = "mrvl,mmp-uart", "mrvl,pxa-uart";
+				compatible = "mrvl,mmp-uart";
				reg = <0xd4017000 0x1000>;
				interrupts = <27>;
				status = "disabled";
			};

			uart2: uart@d4018000 {
-				compatible = "mrvl,mmp-uart", "mrvl,pxa-uart";
+				compatible = "mrvl,mmp-uart";
				reg = <0xd4018000 0x1000>;
				interrupts = <28>;
				status = "disabled";
			};

			uart3: uart@d4026000 {
-				compatible = "mrvl,mmp-uart", "mrvl,pxa-uart";
+				compatible = "mrvl,mmp-uart";
				reg = <0xd4026000 0x1000>;
				interrupts = <29>;
				status = "disabled";
			};

-			gpio: gpio@d4019000 {
-				compatible = "mrvl,mmp-gpio", "mrvl,pxa-gpio";
+			gpio@d4019000 {
+				compatible = "mrvl,mmp-gpio";
+				#address-cells = <1>;
+				#size-cells = <1>;
				reg = <0xd4019000 0x1000>;
+				gpio-controller;
+				#gpio-cells = <2>;
				interrupts = <49>;
				interrupt-names = "gpio_mux";
-				gpio-controller;
-				#gpio-cells = <1>;
				interrupt-controller;
				#interrupt-cells = <1>;
+				ranges;
+
+				gcb0: gpio@d4019000 {
+					reg = <0xd4019000 0x4>;
+				};
+
+				gcb1: gpio@d4019004 {
+					reg = <0xd4019004 0x4>;
+				};
+
+				gcb2: gpio@d4019008 {
+					reg = <0xd4019008 0x4>;
+				};
+
+				gcb3: gpio@d4019100 {
+					reg = <0xd4019100 0x4>;
+				};
			};

			twsi1: i2c@d4011000 {
-				compatible = "mrvl,mmp-twsi", "mrvl,pxa-i2c";
+				compatible = "mrvl,mmp-twsi";
				reg = <0xd4011000 0x1000>;
				interrupts = <7>;
				mrvl,i2c-fast-mode;
···
			};

			twsi2: i2c@d4025000 {
-				compatible = "mrvl,mmp-twsi", "mrvl,pxa-i2c";
+				compatible = "mrvl,mmp-twsi";
				reg = <0xd4025000 0x1000>;
				interrupts = <58>;
				status = "disabled";
+38
arch/arm/boot/dts/pxa910-dkb.dts
···
+ /*
+  * Copyright (C) 2012 Marvell Technology Group Ltd.
+  * Author: Haojian Zhuang <haojian.zhuang@marvell.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
+
+ /dts-v1/;
+ /include/ "pxa910.dtsi"
+
+ / {
+	model = "Marvell PXA910 DKB Development Board";
+	compatible = "mrvl,pxa910-dkb", "mrvl,pxa910";
+
+	chosen {
+		bootargs = "console=ttyS0,115200 root=/dev/nfs nfsroot=192.168.1.100:/nfsroot/ ip=192.168.1.101:192.168.1.100::255.255.255.0::eth0:on";
+	};
+
+	memory {
+		reg = <0x00000000 0x10000000>;
+	};
+
+	soc {
+		apb@d4000000 {
+			uart1: uart@d4017000 {
+				status = "okay";
+			};
+			twsi1: i2c@d4011000 {
+				status = "okay";
+			};
+			rtc: rtc@d4010000 {
+				status = "okay";
+			};
+		};
+	};
+ };
+140
arch/arm/boot/dts/pxa910.dtsi
···
+ /*
+  * Copyright (C) 2012 Marvell Technology Group Ltd.
+  * Author: Haojian Zhuang <haojian.zhuang@marvell.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of the GNU General Public License version 2 as
+  * published by the Free Software Foundation.
+  */
+
+ /include/ "skeleton.dtsi"
+
+ / {
+	aliases {
+		serial0 = &uart1;
+		serial1 = &uart2;
+		serial2 = &uart3;
+		i2c0 = &twsi1;
+		i2c1 = &twsi2;
+	};
+
+	soc {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		compatible = "simple-bus";
+		interrupt-parent = <&intc>;
+		ranges;
+
+		axi@d4200000 {	/* AXI */
+			compatible = "mrvl,axi-bus", "simple-bus";
+			#address-cells = <1>;
+			#size-cells = <1>;
+			reg = <0xd4200000 0x00200000>;
+			ranges;
+
+			intc: interrupt-controller@d4282000 {
+				compatible = "mrvl,mmp-intc";
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				reg = <0xd4282000 0x1000>;
+				mrvl,intc-nr-irqs = <64>;
+			};
+
+		};
+
+		apb@d4000000 {	/* APB */
+			compatible = "mrvl,apb-bus", "simple-bus";
+			#address-cells = <1>;
+			#size-cells = <1>;
+			reg = <0xd4000000 0x00200000>;
+			ranges;
+
+			timer0: timer@d4014000 {
+				compatible = "mrvl,mmp-timer";
+				reg = <0xd4014000 0x100>;
+				interrupts = <13>;
+			};
+
+			timer1: timer@d4016000 {
+				compatible = "mrvl,mmp-timer";
+				reg = <0xd4016000 0x100>;
+				interrupts = <29>;
+				status = "disabled";
+			};
+
+			uart1: uart@d4017000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4017000 0x1000>;
+				interrupts = <27>;
+				status = "disabled";
+			};
+
+			uart2: uart@d4018000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4018000 0x1000>;
+				interrupts = <28>;
+				status = "disabled";
+			};
+
+			uart3: uart@d4036000 {
+				compatible = "mrvl,mmp-uart";
+				reg = <0xd4036000 0x1000>;
+				interrupts = <59>;
+				status = "disabled";
+			};
+
+			gpio@d4019000 {
+				compatible = "mrvl,mmp-gpio";
+				#address-cells = <1>;
+				#size-cells = <1>;
+				reg = <0xd4019000 0x1000>;
+				gpio-controller;
+				#gpio-cells = <2>;
+				interrupts = <49>;
+				interrupt-names = "gpio_mux";
+				interrupt-controller;
+				#interrupt-cells = <1>;
+				ranges;
+
+				gcb0: gpio@d4019000 {
+					reg = <0xd4019000 0x4>;
+				};
+
+				gcb1: gpio@d4019004 {
+					reg = <0xd4019004 0x4>;
+				};
+
+				gcb2: gpio@d4019008 {
+					reg = <0xd4019008 0x4>;
+				};
+
+				gcb3: gpio@d4019100 {
+					reg = <0xd4019100 0x4>;
+				};
+			};
+
+			twsi1: i2c@d4011000 {
+				compatible = "mrvl,mmp-twsi";
+				reg = <0xd4011000 0x1000>;
+				interrupts = <7>;
+				mrvl,i2c-fast-mode;
+				status = "disabled";
+			};
+
+			twsi2: i2c@d4037000 {
+				compatible = "mrvl,mmp-twsi";
+				reg = <0xd4037000 0x1000>;
+				interrupts = <54>;
+				status = "disabled";
+			};
+
+			rtc: rtc@d4010000 {
+				compatible = "mrvl,mmp-rtc";
+				reg = <0xd4010000 0x1000>;
+				interrupts = <5 6>;
+				interrupt-names = "rtc 1Hz", "rtc alarm";
+				status = "disabled";
+			};
+		};
+	};
+ };
+2
arch/arm/configs/mini2440_defconfig
···
  # CONFIG_BLK_DEV_BSG is not set
  CONFIG_BLK_DEV_INTEGRITY=y
  CONFIG_ARCH_S3C24XX=y
+ # CONFIG_CPU_S3C2410 is not set
+ CONFIG_CPU_S3C2440=y
  CONFIG_S3C_ADC=y
  CONFIG_S3C24XX_PWM=y
  CONFIG_MACH_MINI2440=y
+1 -5
arch/arm/kernel/smp_twd.c
···
	 * The twd clock events must be reprogrammed to account for the new
	 * frequency.  The timer is local to a cpu, so cross-call to the
	 * changing cpu.
-	 *
-	 * Only wait for it to finish, if the cpu is active to avoid
-	 * deadlock when cpu1 is spinning on while(!cpu_active(cpu1)) during
-	 * booting of that cpu.
	 */
	if (state == CPUFREQ_POSTCHANGE || state == CPUFREQ_RESUMECHANGE)
		smp_call_function_single(freqs->cpu, twd_update_frequency,
-			NULL, cpu_active(freqs->cpu));
+			NULL, 1);

	return NOTIFY_OK;
  }
+12 -12
arch/arm/mach-exynos/clock-exynos4.c
···
		.ctrlbit	= (1 << 3),
	}, {
		.name		= "hsmmc",
-		.devname	= "s3c-sdhci.0",
+		.devname	= "exynos4-sdhci.0",
		.parent		= &exynos4_clk_aclk_133.clk,
		.enable		= exynos4_clk_ip_fsys_ctrl,
		.ctrlbit	= (1 << 5),
	}, {
		.name		= "hsmmc",
-		.devname	= "s3c-sdhci.1",
+		.devname	= "exynos4-sdhci.1",
		.parent		= &exynos4_clk_aclk_133.clk,
		.enable		= exynos4_clk_ip_fsys_ctrl,
		.ctrlbit	= (1 << 6),
	}, {
		.name		= "hsmmc",
-		.devname	= "s3c-sdhci.2",
+		.devname	= "exynos4-sdhci.2",
		.parent		= &exynos4_clk_aclk_133.clk,
		.enable		= exynos4_clk_ip_fsys_ctrl,
		.ctrlbit	= (1 << 7),
	}, {
		.name		= "hsmmc",
-		.devname	= "s3c-sdhci.3",
+		.devname	= "exynos4-sdhci.3",
		.parent		= &exynos4_clk_aclk_133.clk,
		.enable		= exynos4_clk_ip_fsys_ctrl,
		.ctrlbit	= (1 << 8),
···
  static struct clksrc_clk exynos4_clk_sclk_mmc0 = {
	.clk	= {
		.name		= "sclk_mmc",
-		.devname	= "s3c-sdhci.0",
+		.devname	= "exynos4-sdhci.0",
		.parent		= &exynos4_clk_dout_mmc0.clk,
		.enable		= exynos4_clksrc_mask_fsys_ctrl,
		.ctrlbit	= (1 << 0),
···
  static struct clksrc_clk exynos4_clk_sclk_mmc1 = {
	.clk	= {
		.name		= "sclk_mmc",
-		.devname	= "s3c-sdhci.1",
+		.devname	= "exynos4-sdhci.1",
		.parent		= &exynos4_clk_dout_mmc1.clk,
		.enable		= exynos4_clksrc_mask_fsys_ctrl,
		.ctrlbit	= (1 << 4),
···
  static struct clksrc_clk exynos4_clk_sclk_mmc2 = {
	.clk	= {
		.name		= "sclk_mmc",
-		.devname	= "s3c-sdhci.2",
+		.devname	= "exynos4-sdhci.2",
		.parent		= &exynos4_clk_dout_mmc2.clk,
		.enable		= exynos4_clksrc_mask_fsys_ctrl,
		.ctrlbit	= (1 << 8),
···
  static struct clksrc_clk exynos4_clk_sclk_mmc3 = {
	.clk	= {
		.name		= "sclk_mmc",
-		.devname	= "s3c-sdhci.3",
+		.devname	= "exynos4-sdhci.3",
		.parent		= &exynos4_clk_dout_mmc3.clk,
		.enable		= exynos4_clksrc_mask_fsys_ctrl,
		.ctrlbit	= (1 << 12),
···
	CLKDEV_INIT("exynos4210-uart.1", "clk_uart_baud0", &exynos4_clk_sclk_uart1.clk),
	CLKDEV_INIT("exynos4210-uart.2", "clk_uart_baud0", &exynos4_clk_sclk_uart2.clk),
	CLKDEV_INIT("exynos4210-uart.3", "clk_uart_baud0", &exynos4_clk_sclk_uart3.clk),
-	CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &exynos4_clk_sclk_mmc0.clk),
-	CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &exynos4_clk_sclk_mmc1.clk),
-	CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &exynos4_clk_sclk_mmc2.clk),
-	CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &exynos4_clk_sclk_mmc3.clk),
+	CLKDEV_INIT("exynos4-sdhci.0", "mmc_busclk.2", &exynos4_clk_sclk_mmc0.clk),
+	CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos4_clk_sclk_mmc1.clk),
+	CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos4_clk_sclk_mmc2.clk),
+	CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos4_clk_sclk_mmc3.clk),
	CLKDEV_INIT("exynos4-fb.0", "lcd", &exynos4_clk_fimd0),
	CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos4_clk_pdma0),
	CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos4_clk_pdma1),
+12 -12
arch/arm/mach-exynos/clock-exynos5.c
··· 455 455 .ctrlbit = (1 << 20), 456 456 }, { 457 457 .name = "hsmmc", 458 - .devname = "s3c-sdhci.0", 458 + .devname = "exynos4-sdhci.0", 459 459 .parent = &exynos5_clk_aclk_200.clk, 460 460 .enable = exynos5_clk_ip_fsys_ctrl, 461 461 .ctrlbit = (1 << 12), 462 462 }, { 463 463 .name = "hsmmc", 464 - .devname = "s3c-sdhci.1", 464 + .devname = "exynos4-sdhci.1", 465 465 .parent = &exynos5_clk_aclk_200.clk, 466 466 .enable = exynos5_clk_ip_fsys_ctrl, 467 467 .ctrlbit = (1 << 13), 468 468 }, { 469 469 .name = "hsmmc", 470 - .devname = "s3c-sdhci.2", 470 + .devname = "exynos4-sdhci.2", 471 471 .parent = &exynos5_clk_aclk_200.clk, 472 472 .enable = exynos5_clk_ip_fsys_ctrl, 473 473 .ctrlbit = (1 << 14), 474 474 }, { 475 475 .name = "hsmmc", 476 - .devname = "s3c-sdhci.3", 476 + .devname = "exynos4-sdhci.3", 477 477 .parent = &exynos5_clk_aclk_200.clk, 478 478 .enable = exynos5_clk_ip_fsys_ctrl, 479 479 .ctrlbit = (1 << 15), ··· 813 813 static struct clksrc_clk exynos5_clk_sclk_mmc0 = { 814 814 .clk = { 815 815 .name = "sclk_mmc", 816 - .devname = "s3c-sdhci.0", 816 + .devname = "exynos4-sdhci.0", 817 817 .parent = &exynos5_clk_dout_mmc0.clk, 818 818 .enable = exynos5_clksrc_mask_fsys_ctrl, 819 819 .ctrlbit = (1 << 0), ··· 824 824 static struct clksrc_clk exynos5_clk_sclk_mmc1 = { 825 825 .clk = { 826 826 .name = "sclk_mmc", 827 - .devname = "s3c-sdhci.1", 827 + .devname = "exynos4-sdhci.1", 828 828 .parent = &exynos5_clk_dout_mmc1.clk, 829 829 .enable = exynos5_clksrc_mask_fsys_ctrl, 830 830 .ctrlbit = (1 << 4), ··· 835 835 static struct clksrc_clk exynos5_clk_sclk_mmc2 = { 836 836 .clk = { 837 837 .name = "sclk_mmc", 838 - .devname = "s3c-sdhci.2", 838 + .devname = "exynos4-sdhci.2", 839 839 .parent = &exynos5_clk_dout_mmc2.clk, 840 840 .enable = exynos5_clksrc_mask_fsys_ctrl, 841 841 .ctrlbit = (1 << 8), ··· 846 846 static struct clksrc_clk exynos5_clk_sclk_mmc3 = { 847 847 .clk = { 848 848 .name = "sclk_mmc", 849 - .devname = "s3c-sdhci.3", 849 + .devname = 
"exynos4-sdhci.3", 850 850 .parent = &exynos5_clk_dout_mmc3.clk, 851 851 .enable = exynos5_clksrc_mask_fsys_ctrl, 852 852 .ctrlbit = (1 << 12), ··· 990 990 CLKDEV_INIT("exynos4210-uart.1", "clk_uart_baud0", &exynos5_clk_sclk_uart1.clk), 991 991 CLKDEV_INIT("exynos4210-uart.2", "clk_uart_baud0", &exynos5_clk_sclk_uart2.clk), 992 992 CLKDEV_INIT("exynos4210-uart.3", "clk_uart_baud0", &exynos5_clk_sclk_uart3.clk), 993 - CLKDEV_INIT("s3c-sdhci.0", "mmc_busclk.2", &exynos5_clk_sclk_mmc0.clk), 994 - CLKDEV_INIT("s3c-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk), 995 - CLKDEV_INIT("s3c-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk), 996 - CLKDEV_INIT("s3c-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk), 993 + CLKDEV_INIT("exynos4-sdhci.0", "mmc_busclk.2", &exynos5_clk_sclk_mmc0.clk), 994 + CLKDEV_INIT("exynos4-sdhci.1", "mmc_busclk.2", &exynos5_clk_sclk_mmc1.clk), 995 + CLKDEV_INIT("exynos4-sdhci.2", "mmc_busclk.2", &exynos5_clk_sclk_mmc2.clk), 996 + CLKDEV_INIT("exynos4-sdhci.3", "mmc_busclk.2", &exynos5_clk_sclk_mmc3.clk), 997 997 CLKDEV_INIT("dma-pl330.0", "apb_pclk", &exynos5_clk_pdma0), 998 998 CLKDEV_INIT("dma-pl330.1", "apb_pclk", &exynos5_clk_pdma1), 999 999 CLKDEV_INIT("dma-pl330.2", "apb_pclk", &exynos5_clk_mdma1),
+13 -1
arch/arm/mach-exynos/common.c
··· 326 326 s3c_fimc_setname(2, "exynos4-fimc"); 327 327 s3c_fimc_setname(3, "exynos4-fimc"); 328 328 329 + s3c_sdhci_setname(0, "exynos4-sdhci"); 330 + s3c_sdhci_setname(1, "exynos4-sdhci"); 331 + s3c_sdhci_setname(2, "exynos4-sdhci"); 332 + s3c_sdhci_setname(3, "exynos4-sdhci"); 333 + 329 334 /* The I2C bus controllers are directly compatible with s3c2440 */ 330 335 s3c_i2c0_setname("s3c2440-i2c"); 331 336 s3c_i2c1_setname("s3c2440-i2c"); ··· 348 343 s3c_device_i2c0.resource[0].end = EXYNOS5_PA_IIC(0) + SZ_4K - 1; 349 344 s3c_device_i2c0.resource[1].start = EXYNOS5_IRQ_IIC; 350 345 s3c_device_i2c0.resource[1].end = EXYNOS5_IRQ_IIC; 346 + 347 + s3c_sdhci_setname(0, "exynos4-sdhci"); 348 + s3c_sdhci_setname(1, "exynos4-sdhci"); 349 + s3c_sdhci_setname(2, "exynos4-sdhci"); 350 + s3c_sdhci_setname(3, "exynos4-sdhci"); 351 351 352 352 /* The I2C bus controllers are directly compatible with s3c2440 */ 353 353 s3c_i2c0_setname("s3c2440-i2c"); ··· 547 537 { 548 538 int irq; 549 539 550 - gic_init(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU); 540 + #ifdef CONFIG_OF 541 + of_irq_init(exynos4_dt_irq_match); 542 + #endif 551 543 552 544 for (irq = 0; irq < EXYNOS5_MAX_COMBINER_NR; irq++) { 553 545 combiner_init(irq, (void __iomem *)S5P_VA_COMBINER(irq),
+3 -10
arch/arm/mach-exynos/dev-dwmci.c
··· 16 16 #include <linux/dma-mapping.h> 17 17 #include <linux/platform_device.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/ioport.h> 19 20 #include <linux/mmc/dw_mmc.h> 20 21 21 22 #include <plat/devs.h> ··· 34 33 } 35 34 36 35 static struct resource exynos4_dwmci_resource[] = { 37 - [0] = { 38 - .start = EXYNOS4_PA_DWMCI, 39 - .end = EXYNOS4_PA_DWMCI + SZ_4K - 1, 40 - .flags = IORESOURCE_MEM, 41 - }, 42 - [1] = { 43 - .start = IRQ_DWMCI, 44 - .end = IRQ_DWMCI, 45 - .flags = IORESOURCE_IRQ, 46 - } 36 + [0] = DEFINE_RES_MEM(EXYNOS4_PA_DWMCI, SZ_4K), 37 + [1] = DEFINE_RES_IRQ(EXYNOS4_IRQ_DWMCI), 47 38 }; 48 39 49 40 static struct dw_mci_board exynos4_dwci_pdata = {
+1
arch/arm/mach-exynos/mach-nuri.c
··· 112 112 .host_caps = (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA | 113 113 MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | 114 114 MMC_CAP_ERASE), 115 + .host_caps2 = MMC_CAP2_BROKEN_VOLTAGE, 115 116 .cd_type = S3C_SDHCI_CD_PERMANENT, 116 117 .clk_type = S3C_SDHCI_CLK_DIV_EXTERNAL, 117 118 };
+1
arch/arm/mach-exynos/mach-universal_c210.c
··· 747 747 .max_width = 8, 748 748 .host_caps = (MMC_CAP_8_BIT_DATA | MMC_CAP_4_BIT_DATA | 749 749 MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED), 750 + .host_caps2 = MMC_CAP2_BROKEN_VOLTAGE, 750 751 .cd_type = S3C_SDHCI_CD_PERMANENT, 751 752 .clk_type = S3C_SDHCI_CLK_DIV_EXTERNAL, 752 753 };
+19 -10
arch/arm/mach-mmp/Kconfig
··· 2 2 3 3 menu "Marvell PXA168/910/MMP2 Implementations" 4 4 5 - config MACH_MMP_DT 6 - bool "Support MMP2 platforms from device tree" 7 - select CPU_PXA168 8 - select CPU_PXA910 9 - select USE_OF 10 - help 11 - Include support for Marvell MMP2 based platforms using 12 - the device tree. Needn't select any other machine while 13 - MACH_MMP_DT is enabled. 14 - 15 5 config MACH_ASPENITE 16 6 bool "Marvell's PXA168 Aspenite Development Board" 17 7 select CPU_PXA168 ··· 83 93 help 84 94 Say 'Y' here if you want to support the Marvell PXA168-based 85 95 GuruPlug Display (gplugD) Board 96 + 97 + config MACH_MMP_DT 98 + bool "Support MMP (ARMv5) platforms from device tree" 99 + select CPU_PXA168 100 + select CPU_PXA910 101 + select USE_OF 102 + help 103 + Include support for Marvell PXA168/PXA910 based platforms 104 + using the device tree. There is no need to select any other 105 + machine config while MACH_MMP_DT is enabled. 106 + 107 + config MACH_MMP2_DT 108 + bool "Support MMP2 (ARMv7) platforms from device tree" 109 + depends on !CPU_MOHAWK 110 + select CPU_MMP2 111 + select USE_OF 112 + help 113 + Include support for Marvell MMP2 based platforms using 114 + the device tree. 86 115 87 116 endmenu 88 117
+5 -4
arch/arm/mach-mmp/Makefile
··· 2 2 # Makefile for Marvell's PXA168 processors line 3 3 # 4 4 5 - obj-y += common.o clock.o devices.o time.o 5 + obj-y += common.o clock.o devices.o time.o irq.o 6 6 7 7 # SoC support 8 - obj-$(CONFIG_CPU_PXA168) += pxa168.o irq-pxa168.o 9 - obj-$(CONFIG_CPU_PXA910) += pxa910.o irq-pxa168.o 10 - obj-$(CONFIG_CPU_MMP2) += mmp2.o irq-mmp2.o sram.o 8 + obj-$(CONFIG_CPU_PXA168) += pxa168.o 9 + obj-$(CONFIG_CPU_PXA910) += pxa910.o 10 + obj-$(CONFIG_CPU_MMP2) += mmp2.o sram.o 11 11 12 12 # board support 13 13 obj-$(CONFIG_MACH_ASPENITE) += aspenite.o ··· 19 19 obj-$(CONFIG_MACH_FLINT) += flint.o 20 20 obj-$(CONFIG_MACH_MARVELL_JASPER) += jasper.o 21 21 obj-$(CONFIG_MACH_MMP_DT) += mmp-dt.o 22 + obj-$(CONFIG_MACH_MMP2_DT) += mmp2-dt.o 22 23 obj-$(CONFIG_MACH_TETON_BGA) += teton_bga.o 23 24 obj-$(CONFIG_MACH_GPLUGD) += gplugd.o
+3 -1
arch/arm/mach-mmp/include/mach/entry-macro.S
··· 6 6 * published by the Free Software Foundation. 7 7 */ 8 8 9 + #include <asm/irq.h> 9 10 #include <mach/regs-icu.h> 10 11 11 12 .macro get_irqnr_preamble, base, tmp 12 13 mrc p15, 0, \tmp, c0, c0, 0 @ CPUID 13 14 and \tmp, \tmp, #0xff00 14 15 cmp \tmp, #0x5800 15 - ldr \base, =ICU_VIRT_BASE 16 + ldr \base, =mmp_icu_base 17 + ldr \base, [\base, #0] 16 18 addne \base, \base, #0x10c @ PJ1 AP INT SEL register 17 19 addeq \base, \base, #0x104 @ PJ4 IRQ SEL register 18 20 .endm
+19 -8
arch/arm/mach-mmp/include/mach/irqs.h
··· 125 125 #define IRQ_MMP2_RTC_MUX 5 126 126 #define IRQ_MMP2_TWSI1 7 127 127 #define IRQ_MMP2_GPU 8 128 - #define IRQ_MMP2_KEYPAD 9 128 + #define IRQ_MMP2_KEYPAD_MUX 9 129 129 #define IRQ_MMP2_ROTARY 10 130 130 #define IRQ_MMP2_TRACKBALL 11 131 131 #define IRQ_MMP2_ONEWIRE 12 ··· 163 163 #define IRQ_MMP2_DMA_FIQ 47 164 164 #define IRQ_MMP2_DMA_RIQ 48 165 165 #define IRQ_MMP2_GPIO 49 166 - #define IRQ_MMP2_SSP_MUX 51 166 + #define IRQ_MMP2_MIPI_HSI1_MUX 51 167 167 #define IRQ_MMP2_MMC2 52 168 168 #define IRQ_MMP2_MMC3 53 169 169 #define IRQ_MMP2_MMC4 54 170 - #define IRQ_MMP2_MIPI_HSI 55 170 + #define IRQ_MMP2_MIPI_HSI0_MUX 55 171 171 #define IRQ_MMP2_MSP 58 172 172 #define IRQ_MMP2_MIPI_SLIM_DMA 59 173 173 #define IRQ_MMP2_PJ4_FREQ_CHG 60 ··· 186 186 #define IRQ_MMP2_RTC_ALARM (IRQ_MMP2_RTC_BASE + 0) 187 187 #define IRQ_MMP2_RTC (IRQ_MMP2_RTC_BASE + 1) 188 188 189 + /* secondary interrupt of INT #9 */ 190 + #define IRQ_MMP2_KEYPAD_BASE (IRQ_MMP2_RTC_BASE + 2) 191 + #define IRQ_MMP2_KPC (IRQ_MMP2_KEYPAD_BASE + 0) 192 + #define IRQ_MMP2_ROTORY (IRQ_MMP2_KEYPAD_BASE + 1) 193 + #define IRQ_MMP2_TBALL (IRQ_MMP2_KEYPAD_BASE + 2) 194 + 189 195 /* secondary interrupt of INT #17 */ 190 - #define IRQ_MMP2_TWSI_BASE (IRQ_MMP2_RTC_BASE + 2) 196 + #define IRQ_MMP2_TWSI_BASE (IRQ_MMP2_KEYPAD_BASE + 3) 191 197 #define IRQ_MMP2_TWSI2 (IRQ_MMP2_TWSI_BASE + 0) 192 198 #define IRQ_MMP2_TWSI3 (IRQ_MMP2_TWSI_BASE + 1) 193 199 #define IRQ_MMP2_TWSI4 (IRQ_MMP2_TWSI_BASE + 2) ··· 218 212 #define IRQ_MMP2_COMMRX (IRQ_MMP2_MISC_BASE + 14) 219 213 220 214 /* secondary interrupt of INT #51 */ 221 - #define IRQ_MMP2_SSP_BASE (IRQ_MMP2_MISC_BASE + 15) 222 - #define IRQ_MMP2_SSP1_SRDY (IRQ_MMP2_SSP_BASE + 0) 223 - #define IRQ_MMP2_SSP3_SRDY (IRQ_MMP2_SSP_BASE + 1) 215 + #define IRQ_MMP2_MIPI_HSI1_BASE (IRQ_MMP2_MISC_BASE + 15) 216 + #define IRQ_MMP2_HSI1_CAWAKE (IRQ_MMP2_MIPI_HSI1_BASE + 0) 217 + #define IRQ_MMP2_MIPI_HSI_INT1 (IRQ_MMP2_MIPI_HSI1_BASE + 1) 224 218 225 - #define 
IRQ_MMP2_MUX_END (IRQ_MMP2_SSP_BASE + 2) 219 + /* secondary interrupt of INT #55 */ 220 + #define IRQ_MMP2_MIPI_HSI0_BASE (IRQ_MMP2_MIPI_HSI1_BASE + 2) 221 + #define IRQ_MMP2_HSI0_CAWAKE (IRQ_MMP2_MIPI_HSI0_BASE + 0) 222 + #define IRQ_MMP2_MIPI_HSI_INT0 (IRQ_MMP2_MIPI_HSI0_BASE + 1) 223 + 224 + #define IRQ_MMP2_MUX_END (IRQ_MMP2_MIPI_HSI0_BASE + 2) 226 225 227 226 #define IRQ_GPIO_START 128 228 227 #define MMP_NR_BUILTIN_GPIO 192
-158
arch/arm/mach-mmp/irq-mmp2.c
··· 1 - /* 2 - * linux/arch/arm/mach-mmp/irq-mmp2.c 3 - * 4 - * Generic IRQ handling, GPIO IRQ demultiplexing, etc. 5 - * 6 - * Author: Haojian Zhuang <haojian.zhuang@marvell.com> 7 - * Copyright: Marvell International Ltd. 8 - * 9 - * This program is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License version 2 as 11 - * published by the Free Software Foundation. 12 - */ 13 - 14 - #include <linux/init.h> 15 - #include <linux/irq.h> 16 - #include <linux/io.h> 17 - 18 - #include <mach/irqs.h> 19 - #include <mach/regs-icu.h> 20 - #include <mach/mmp2.h> 21 - 22 - #include "common.h" 23 - 24 - static void icu_mask_irq(struct irq_data *d) 25 - { 26 - uint32_t r = __raw_readl(ICU_INT_CONF(d->irq)); 27 - 28 - r &= ~ICU_INT_ROUTE_PJ4_IRQ; 29 - __raw_writel(r, ICU_INT_CONF(d->irq)); 30 - } 31 - 32 - static void icu_unmask_irq(struct irq_data *d) 33 - { 34 - uint32_t r = __raw_readl(ICU_INT_CONF(d->irq)); 35 - 36 - r |= ICU_INT_ROUTE_PJ4_IRQ; 37 - __raw_writel(r, ICU_INT_CONF(d->irq)); 38 - } 39 - 40 - static struct irq_chip icu_irq_chip = { 41 - .name = "icu_irq", 42 - .irq_mask = icu_mask_irq, 43 - .irq_mask_ack = icu_mask_irq, 44 - .irq_unmask = icu_unmask_irq, 45 - }; 46 - 47 - static void pmic_irq_ack(struct irq_data *d) 48 - { 49 - if (d->irq == IRQ_MMP2_PMIC) 50 - mmp2_clear_pmic_int(); 51 - } 52 - 53 - #define SECOND_IRQ_MASK(_name_, irq_base, prefix) \ 54 - static void _name_##_mask_irq(struct irq_data *d) \ 55 - { \ 56 - uint32_t r; \ 57 - r = __raw_readl(prefix##_MASK) | (1 << (d->irq - irq_base)); \ 58 - __raw_writel(r, prefix##_MASK); \ 59 - } 60 - 61 - #define SECOND_IRQ_UNMASK(_name_, irq_base, prefix) \ 62 - static void _name_##_unmask_irq(struct irq_data *d) \ 63 - { \ 64 - uint32_t r; \ 65 - r = __raw_readl(prefix##_MASK) & ~(1 << (d->irq - irq_base)); \ 66 - __raw_writel(r, prefix##_MASK); \ 67 - } 68 - 69 - #define SECOND_IRQ_DEMUX(_name_, irq_base, prefix) \ 70 - static void 
_name_##_irq_demux(unsigned int irq, struct irq_desc *desc) \ 71 - { \ 72 - unsigned long status, mask, n; \ 73 - mask = __raw_readl(prefix##_MASK); \ 74 - while (1) { \ 75 - status = __raw_readl(prefix##_STATUS) & ~mask; \ 76 - if (status == 0) \ 77 - break; \ 78 - n = find_first_bit(&status, BITS_PER_LONG); \ 79 - while (n < BITS_PER_LONG) { \ 80 - generic_handle_irq(irq_base + n); \ 81 - n = find_next_bit(&status, BITS_PER_LONG, n+1); \ 82 - } \ 83 - } \ 84 - } 85 - 86 - #define SECOND_IRQ_CHIP(_name_, irq_base, prefix) \ 87 - SECOND_IRQ_MASK(_name_, irq_base, prefix) \ 88 - SECOND_IRQ_UNMASK(_name_, irq_base, prefix) \ 89 - SECOND_IRQ_DEMUX(_name_, irq_base, prefix) \ 90 - static struct irq_chip _name_##_irq_chip = { \ 91 - .name = #_name_, \ 92 - .irq_mask = _name_##_mask_irq, \ 93 - .irq_unmask = _name_##_unmask_irq, \ 94 - } 95 - 96 - SECOND_IRQ_CHIP(pmic, IRQ_MMP2_PMIC_BASE, MMP2_ICU_INT4); 97 - SECOND_IRQ_CHIP(rtc, IRQ_MMP2_RTC_BASE, MMP2_ICU_INT5); 98 - SECOND_IRQ_CHIP(twsi, IRQ_MMP2_TWSI_BASE, MMP2_ICU_INT17); 99 - SECOND_IRQ_CHIP(misc, IRQ_MMP2_MISC_BASE, MMP2_ICU_INT35); 100 - SECOND_IRQ_CHIP(ssp, IRQ_MMP2_SSP_BASE, MMP2_ICU_INT51); 101 - 102 - static void init_mux_irq(struct irq_chip *chip, int start, int num) 103 - { 104 - int irq; 105 - 106 - for (irq = start; num > 0; irq++, num--) { 107 - struct irq_data *d = irq_get_irq_data(irq); 108 - 109 - /* mask and clear the IRQ */ 110 - chip->irq_mask(d); 111 - if (chip->irq_ack) 112 - chip->irq_ack(d); 113 - 114 - irq_set_chip(irq, chip); 115 - set_irq_flags(irq, IRQF_VALID); 116 - irq_set_handler(irq, handle_level_irq); 117 - } 118 - } 119 - 120 - void __init mmp2_init_icu(void) 121 - { 122 - int irq; 123 - 124 - for (irq = 0; irq < IRQ_MMP2_MUX_BASE; irq++) { 125 - icu_mask_irq(irq_get_irq_data(irq)); 126 - irq_set_chip(irq, &icu_irq_chip); 127 - set_irq_flags(irq, IRQF_VALID); 128 - 129 - switch (irq) { 130 - case IRQ_MMP2_PMIC_MUX: 131 - case IRQ_MMP2_RTC_MUX: 132 - case IRQ_MMP2_TWSI_MUX: 133 - case 
IRQ_MMP2_MISC_MUX: 134 - case IRQ_MMP2_SSP_MUX: 135 - break; 136 - default: 137 - irq_set_handler(irq, handle_level_irq); 138 - break; 139 - } 140 - } 141 - 142 - /* NOTE: IRQ_MMP2_PMIC requires the PMIC MFPR register 143 - * to be written to clear the interrupt 144 - */ 145 - pmic_irq_chip.irq_ack = pmic_irq_ack; 146 - 147 - init_mux_irq(&pmic_irq_chip, IRQ_MMP2_PMIC_BASE, 2); 148 - init_mux_irq(&rtc_irq_chip, IRQ_MMP2_RTC_BASE, 2); 149 - init_mux_irq(&twsi_irq_chip, IRQ_MMP2_TWSI_BASE, 5); 150 - init_mux_irq(&misc_irq_chip, IRQ_MMP2_MISC_BASE, 15); 151 - init_mux_irq(&ssp_irq_chip, IRQ_MMP2_SSP_BASE, 2); 152 - 153 - irq_set_chained_handler(IRQ_MMP2_PMIC_MUX, pmic_irq_demux); 154 - irq_set_chained_handler(IRQ_MMP2_RTC_MUX, rtc_irq_demux); 155 - irq_set_chained_handler(IRQ_MMP2_TWSI_MUX, twsi_irq_demux); 156 - irq_set_chained_handler(IRQ_MMP2_MISC_MUX, misc_irq_demux); 157 - irq_set_chained_handler(IRQ_MMP2_SSP_MUX, ssp_irq_demux); 158 - }
-54
arch/arm/mach-mmp/irq-pxa168.c
··· 1 - /* 2 - * linux/arch/arm/mach-mmp/irq.c 3 - * 4 - * Generic IRQ handling, GPIO IRQ demultiplexing, etc. 5 - * 6 - * Author: Bin Yang <bin.yang@marvell.com> 7 - * Created: Sep 30, 2008 8 - * Copyright: Marvell International Ltd. 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License version 2 as 12 - * published by the Free Software Foundation. 13 - */ 14 - 15 - #include <linux/init.h> 16 - #include <linux/irq.h> 17 - #include <linux/io.h> 18 - 19 - #include <mach/regs-icu.h> 20 - 21 - #include "common.h" 22 - 23 - #define IRQ_ROUTE_TO_AP (ICU_INT_CONF_AP_INT | ICU_INT_CONF_IRQ) 24 - 25 - #define PRIORITY_DEFAULT 0x1 26 - #define PRIORITY_NONE 0x0 /* means IRQ disabled */ 27 - 28 - static void icu_mask_irq(struct irq_data *d) 29 - { 30 - __raw_writel(PRIORITY_NONE, ICU_INT_CONF(d->irq)); 31 - } 32 - 33 - static void icu_unmask_irq(struct irq_data *d) 34 - { 35 - __raw_writel(IRQ_ROUTE_TO_AP | PRIORITY_DEFAULT, ICU_INT_CONF(d->irq)); 36 - } 37 - 38 - static struct irq_chip icu_irq_chip = { 39 - .name = "icu_irq", 40 - .irq_ack = icu_mask_irq, 41 - .irq_mask = icu_mask_irq, 42 - .irq_unmask = icu_unmask_irq, 43 - }; 44 - 45 - void __init icu_init_irq(void) 46 - { 47 - int irq; 48 - 49 - for (irq = 0; irq < 64; irq++) { 50 - icu_mask_irq(irq_get_irq_data(irq)); 51 - irq_set_chip_and_handler(irq, &icu_irq_chip, handle_level_irq); 52 - set_irq_flags(irq, IRQF_VALID); 53 - } 54 - }
+445
arch/arm/mach-mmp/irq.c
··· 1 + /* 2 + * linux/arch/arm/mach-mmp/irq.c 3 + * 4 + * Generic IRQ handling, GPIO IRQ demultiplexing, etc. 5 + * Copyright (C) 2008 - 2012 Marvell Technology Group Ltd. 6 + * 7 + * Author: Bin Yang <bin.yang@marvell.com> 8 + * Haojian Zhuang <haojian.zhuang@gmail.com> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #include <linux/module.h> 16 + #include <linux/init.h> 17 + #include <linux/irq.h> 18 + #include <linux/irqdomain.h> 19 + #include <linux/io.h> 20 + #include <linux/ioport.h> 21 + #include <linux/of_address.h> 22 + #include <linux/of_irq.h> 23 + 24 + #include <mach/irqs.h> 25 + 26 + #include "common.h" 27 + 28 + #define MAX_ICU_NR 16 29 + 30 + struct icu_chip_data { 31 + int nr_irqs; 32 + unsigned int virq_base; 33 + unsigned int cascade_irq; 34 + void __iomem *reg_status; 35 + void __iomem *reg_mask; 36 + unsigned int conf_enable; 37 + unsigned int conf_disable; 38 + unsigned int conf_mask; 39 + unsigned int clr_mfp_irq_base; 40 + unsigned int clr_mfp_hwirq; 41 + struct irq_domain *domain; 42 + }; 43 + 44 + struct mmp_intc_conf { 45 + unsigned int conf_enable; 46 + unsigned int conf_disable; 47 + unsigned int conf_mask; 48 + }; 49 + 50 + void __iomem *mmp_icu_base; 51 + static struct icu_chip_data icu_data[MAX_ICU_NR]; 52 + static int max_icu_nr; 53 + 54 + extern void mmp2_clear_pmic_int(void); 55 + 56 + static void icu_mask_ack_irq(struct irq_data *d) 57 + { 58 + struct irq_domain *domain = d->domain; 59 + struct icu_chip_data *data = (struct icu_chip_data *)domain->host_data; 60 + int hwirq; 61 + u32 r; 62 + 63 + hwirq = d->irq - data->virq_base; 64 + if (data == &icu_data[0]) { 65 + r = readl_relaxed(mmp_icu_base + (hwirq << 2)); 66 + r &= ~data->conf_mask; 67 + r |= data->conf_disable; 68 + writel_relaxed(r, mmp_icu_base + (hwirq << 2)); 69 + } else { 70 + #ifdef 
CONFIG_CPU_MMP2 71 + if ((data->virq_base == data->clr_mfp_irq_base) 72 + && (hwirq == data->clr_mfp_hwirq)) 73 + mmp2_clear_pmic_int(); 74 + #endif 75 + r = readl_relaxed(data->reg_mask) | (1 << hwirq); 76 + writel_relaxed(r, data->reg_mask); 77 + } 78 + } 79 + 80 + static void icu_mask_irq(struct irq_data *d) 81 + { 82 + struct irq_domain *domain = d->domain; 83 + struct icu_chip_data *data = (struct icu_chip_data *)domain->host_data; 84 + int hwirq; 85 + u32 r; 86 + 87 + hwirq = d->irq - data->virq_base; 88 + if (data == &icu_data[0]) { 89 + r = readl_relaxed(mmp_icu_base + (hwirq << 2)); 90 + r &= ~data->conf_mask; 91 + r |= data->conf_disable; 92 + writel_relaxed(r, mmp_icu_base + (hwirq << 2)); 93 + } else { 94 + r = readl_relaxed(data->reg_mask) | (1 << hwirq); 95 + writel_relaxed(r, data->reg_mask); 96 + } 97 + } 98 + 99 + static void icu_unmask_irq(struct irq_data *d) 100 + { 101 + struct irq_domain *domain = d->domain; 102 + struct icu_chip_data *data = (struct icu_chip_data *)domain->host_data; 103 + int hwirq; 104 + u32 r; 105 + 106 + hwirq = d->irq - data->virq_base; 107 + if (data == &icu_data[0]) { 108 + r = readl_relaxed(mmp_icu_base + (hwirq << 2)); 109 + r &= ~data->conf_mask; 110 + r |= data->conf_enable; 111 + writel_relaxed(r, mmp_icu_base + (hwirq << 2)); 112 + } else { 113 + r = readl_relaxed(data->reg_mask) & ~(1 << hwirq); 114 + writel_relaxed(r, data->reg_mask); 115 + } 116 + } 117 + 118 + static struct irq_chip icu_irq_chip = { 119 + .name = "icu_irq", 120 + .irq_mask = icu_mask_irq, 121 + .irq_mask_ack = icu_mask_ack_irq, 122 + .irq_unmask = icu_unmask_irq, 123 + }; 124 + 125 + static void icu_mux_irq_demux(unsigned int irq, struct irq_desc *desc) 126 + { 127 + struct irq_domain *domain; 128 + struct icu_chip_data *data; 129 + int i; 130 + unsigned long mask, status, n; 131 + 132 + for (i = 1; i < max_icu_nr; i++) { 133 + if (irq == icu_data[i].cascade_irq) { 134 + domain = icu_data[i].domain; 135 + data = (struct icu_chip_data 
*)domain->host_data; 136 + break; 137 + } 138 + } 139 + if (i >= max_icu_nr) { 140 + pr_err("Spurious irq %d in MMP INTC\n", irq); 141 + return; 142 + } 143 + 144 + mask = readl_relaxed(data->reg_mask); 145 + while (1) { 146 + status = readl_relaxed(data->reg_status) & ~mask; 147 + if (status == 0) 148 + break; 149 + n = find_first_bit(&status, BITS_PER_LONG); 150 + while (n < BITS_PER_LONG) { 151 + generic_handle_irq(icu_data[i].virq_base + n); 152 + n = find_next_bit(&status, BITS_PER_LONG, n + 1); 153 + } 154 + } 155 + } 156 + 157 + static int mmp_irq_domain_map(struct irq_domain *d, unsigned int irq, 158 + irq_hw_number_t hw) 159 + { 160 + irq_set_chip_and_handler(irq, &icu_irq_chip, handle_level_irq); 161 + set_irq_flags(irq, IRQF_VALID); 162 + return 0; 163 + } 164 + 165 + static int mmp_irq_domain_xlate(struct irq_domain *d, struct device_node *node, 166 + const u32 *intspec, unsigned int intsize, 167 + unsigned long *out_hwirq, 168 + unsigned int *out_type) 169 + { 170 + *out_hwirq = intspec[0]; 171 + return 0; 172 + } 173 + 174 + const struct irq_domain_ops mmp_irq_domain_ops = { 175 + .map = mmp_irq_domain_map, 176 + .xlate = mmp_irq_domain_xlate, 177 + }; 178 + 179 + static struct mmp_intc_conf mmp_conf = { 180 + .conf_enable = 0x51, 181 + .conf_disable = 0x0, 182 + .conf_mask = 0x7f, 183 + }; 184 + 185 + static struct mmp_intc_conf mmp2_conf = { 186 + .conf_enable = 0x20, 187 + .conf_disable = 0x0, 188 + .conf_mask = 0x7f, 189 + }; 190 + 191 + /* MMP (ARMv5) */ 192 + void __init icu_init_irq(void) 193 + { 194 + int irq; 195 + 196 + max_icu_nr = 1; 197 + mmp_icu_base = ioremap(0xd4282000, 0x1000); 198 + icu_data[0].conf_enable = mmp_conf.conf_enable; 199 + icu_data[0].conf_disable = mmp_conf.conf_disable; 200 + icu_data[0].conf_mask = mmp_conf.conf_mask; 201 + icu_data[0].nr_irqs = 64; 202 + icu_data[0].virq_base = 0; 203 + icu_data[0].domain = irq_domain_add_legacy(NULL, 64, 0, 0, 204 + &irq_domain_simple_ops, 205 + &icu_data[0]); 206 + for (irq = 0; 
irq < 64; irq++) { 207 + icu_mask_irq(irq_get_irq_data(irq)); 208 + irq_set_chip_and_handler(irq, &icu_irq_chip, handle_level_irq); 209 + set_irq_flags(irq, IRQF_VALID); 210 + } 211 + irq_set_default_host(icu_data[0].domain); 212 + } 213 + 214 + /* MMP2 (ARMv7) */ 215 + void __init mmp2_init_icu(void) 216 + { 217 + int irq; 218 + 219 + max_icu_nr = 8; 220 + mmp_icu_base = ioremap(0xd4282000, 0x1000); 221 + icu_data[0].conf_enable = mmp2_conf.conf_enable; 222 + icu_data[0].conf_disable = mmp2_conf.conf_disable; 223 + icu_data[0].conf_mask = mmp2_conf.conf_mask; 224 + icu_data[0].nr_irqs = 64; 225 + icu_data[0].virq_base = 0; 226 + icu_data[0].domain = irq_domain_add_legacy(NULL, 64, 0, 0, 227 + &irq_domain_simple_ops, 228 + &icu_data[0]); 229 + icu_data[1].reg_status = mmp_icu_base + 0x150; 230 + icu_data[1].reg_mask = mmp_icu_base + 0x168; 231 + icu_data[1].clr_mfp_irq_base = IRQ_MMP2_PMIC_BASE; 232 + icu_data[1].clr_mfp_hwirq = IRQ_MMP2_PMIC - IRQ_MMP2_PMIC_BASE; 233 + icu_data[1].nr_irqs = 2; 234 + icu_data[1].virq_base = IRQ_MMP2_PMIC_BASE; 235 + icu_data[1].domain = irq_domain_add_legacy(NULL, icu_data[1].nr_irqs, 236 + icu_data[1].virq_base, 0, 237 + &irq_domain_simple_ops, 238 + &icu_data[1]); 239 + icu_data[2].reg_status = mmp_icu_base + 0x154; 240 + icu_data[2].reg_mask = mmp_icu_base + 0x16c; 241 + icu_data[2].nr_irqs = 2; 242 + icu_data[2].virq_base = IRQ_MMP2_RTC_BASE; 243 + icu_data[2].domain = irq_domain_add_legacy(NULL, icu_data[2].nr_irqs, 244 + icu_data[2].virq_base, 0, 245 + &irq_domain_simple_ops, 246 + &icu_data[2]); 247 + icu_data[3].reg_status = mmp_icu_base + 0x180; 248 + icu_data[3].reg_mask = mmp_icu_base + 0x17c; 249 + icu_data[3].nr_irqs = 3; 250 + icu_data[3].virq_base = IRQ_MMP2_KEYPAD_BASE; 251 + icu_data[3].domain = irq_domain_add_legacy(NULL, icu_data[3].nr_irqs, 252 + icu_data[3].virq_base, 0, 253 + &irq_domain_simple_ops, 254 + &icu_data[3]); 255 + icu_data[4].reg_status = mmp_icu_base + 0x158; 256 + icu_data[4].reg_mask = 
mmp_icu_base + 0x170; 257 + icu_data[4].nr_irqs = 5; 258 + icu_data[4].virq_base = IRQ_MMP2_TWSI_BASE; 259 + icu_data[4].domain = irq_domain_add_legacy(NULL, icu_data[4].nr_irqs, 260 + icu_data[4].virq_base, 0, 261 + &irq_domain_simple_ops, 262 + &icu_data[4]); 263 + icu_data[5].reg_status = mmp_icu_base + 0x15c; 264 + icu_data[5].reg_mask = mmp_icu_base + 0x174; 265 + icu_data[5].nr_irqs = 15; 266 + icu_data[5].virq_base = IRQ_MMP2_MISC_BASE; 267 + icu_data[5].domain = irq_domain_add_legacy(NULL, icu_data[5].nr_irqs, 268 + icu_data[5].virq_base, 0, 269 + &irq_domain_simple_ops, 270 + &icu_data[5]); 271 + icu_data[6].reg_status = mmp_icu_base + 0x160; 272 + icu_data[6].reg_mask = mmp_icu_base + 0x178; 273 + icu_data[6].nr_irqs = 2; 274 + icu_data[6].virq_base = IRQ_MMP2_MIPI_HSI1_BASE; 275 + icu_data[6].domain = irq_domain_add_legacy(NULL, icu_data[6].nr_irqs, 276 + icu_data[6].virq_base, 0, 277 + &irq_domain_simple_ops, 278 + &icu_data[6]); 279 + icu_data[7].reg_status = mmp_icu_base + 0x188; 280 + icu_data[7].reg_mask = mmp_icu_base + 0x184; 281 + icu_data[7].nr_irqs = 2; 282 + icu_data[7].virq_base = IRQ_MMP2_MIPI_HSI0_BASE; 283 + icu_data[7].domain = irq_domain_add_legacy(NULL, icu_data[7].nr_irqs, 284 + icu_data[7].virq_base, 0, 285 + &irq_domain_simple_ops, 286 + &icu_data[7]); 287 + for (irq = 0; irq < IRQ_MMP2_MUX_END; irq++) { 288 + icu_mask_irq(irq_get_irq_data(irq)); 289 + switch (irq) { 290 + case IRQ_MMP2_PMIC_MUX: 291 + case IRQ_MMP2_RTC_MUX: 292 + case IRQ_MMP2_KEYPAD_MUX: 293 + case IRQ_MMP2_TWSI_MUX: 294 + case IRQ_MMP2_MISC_MUX: 295 + case IRQ_MMP2_MIPI_HSI1_MUX: 296 + case IRQ_MMP2_MIPI_HSI0_MUX: 297 + irq_set_chip(irq, &icu_irq_chip); 298 + irq_set_chained_handler(irq, icu_mux_irq_demux); 299 + break; 300 + default: 301 + irq_set_chip_and_handler(irq, &icu_irq_chip, 302 + handle_level_irq); 303 + break; 304 + } 305 + set_irq_flags(irq, IRQF_VALID); 306 + } 307 + irq_set_default_host(icu_data[0].domain); 308 + } 309 + 310 + #ifdef CONFIG_OF 311 + 
static const struct of_device_id intc_ids[] __initconst = { 312 + { .compatible = "mrvl,mmp-intc", .data = &mmp_conf }, 313 + { .compatible = "mrvl,mmp2-intc", .data = &mmp2_conf }, 314 + {} 315 + }; 316 + 317 + static const struct of_device_id mmp_mux_irq_match[] __initconst = { 318 + { .compatible = "mrvl,mmp2-mux-intc" }, 319 + {} 320 + }; 321 + 322 + int __init mmp2_mux_init(struct device_node *parent) 323 + { 324 + struct device_node *node; 325 + const struct of_device_id *of_id; 326 + struct resource res; 327 + int i, irq_base, ret, irq; 328 + u32 nr_irqs, mfp_irq; 329 + 330 + node = parent; 331 + max_icu_nr = 1; 332 + for (i = 1; i < MAX_ICU_NR; i++) { 333 + node = of_find_matching_node(node, mmp_mux_irq_match); 334 + if (!node) 335 + break; 336 + of_id = of_match_node(&mmp_mux_irq_match[0], node); 337 + ret = of_property_read_u32(node, "mrvl,intc-nr-irqs", 338 + &nr_irqs); 339 + if (ret) { 340 + pr_err("Not found mrvl,intc-nr-irqs property\n"); 341 + ret = -EINVAL; 342 + goto err; 343 + } 344 + ret = of_address_to_resource(node, 0, &res); 345 + if (ret < 0) { 346 + pr_err("Not found reg property\n"); 347 + ret = -EINVAL; 348 + goto err; 349 + } 350 + icu_data[i].reg_status = mmp_icu_base + res.start; 351 + ret = of_address_to_resource(node, 1, &res); 352 + if (ret < 0) { 353 + pr_err("Not found reg property\n"); 354 + ret = -EINVAL; 355 + goto err; 356 + } 357 + icu_data[i].reg_mask = mmp_icu_base + res.start; 358 + icu_data[i].cascade_irq = irq_of_parse_and_map(node, 0); 359 + if (!icu_data[i].cascade_irq) { 360 + ret = -EINVAL; 361 + goto err; 362 + } 363 + 364 + irq_base = irq_alloc_descs(-1, 0, nr_irqs, 0); 365 + if (irq_base < 0) { 366 + pr_err("Failed to allocate IRQ numbers for mux intc\n"); 367 + ret = irq_base; 368 + goto err; 369 + } 370 + if (!of_property_read_u32(node, "mrvl,clr-mfp-irq", 371 + &mfp_irq)) { 372 + icu_data[i].clr_mfp_irq_base = irq_base; 373 + icu_data[i].clr_mfp_hwirq = mfp_irq; 374 + } 375 + 
irq_set_chained_handler(icu_data[i].cascade_irq, 376 + icu_mux_irq_demux); 377 + icu_data[i].nr_irqs = nr_irqs; 378 + icu_data[i].virq_base = irq_base; 379 + icu_data[i].domain = irq_domain_add_legacy(node, nr_irqs, 380 + irq_base, 0, 381 + &mmp_irq_domain_ops, 382 + &icu_data[i]); 383 + for (irq = irq_base; irq < irq_base + nr_irqs; irq++) 384 + icu_mask_irq(irq_get_irq_data(irq)); 385 + } 386 + max_icu_nr = i; 387 + return 0; 388 + err: 389 + of_node_put(node); 390 + max_icu_nr = i; 391 + return ret; 392 + } 393 + 394 + void __init mmp_dt_irq_init(void) 395 + { 396 + struct device_node *node; 397 + const struct of_device_id *of_id; 398 + struct mmp_intc_conf *conf; 399 + int nr_irqs, irq_base, ret, irq; 400 + 401 + node = of_find_matching_node(NULL, intc_ids); 402 + if (!node) { 403 + pr_err("Failed to find interrupt controller in arch-mmp\n"); 404 + return; 405 + } 406 + of_id = of_match_node(intc_ids, node); 407 + conf = of_id->data; 408 + 409 + ret = of_property_read_u32(node, "mrvl,intc-nr-irqs", &nr_irqs); 410 + if (ret) { 411 + pr_err("Not found mrvl,intc-nr-irqs property\n"); 412 + return; 413 + } 414 + 415 + mmp_icu_base = of_iomap(node, 0); 416 + if (!mmp_icu_base) { 417 + pr_err("Failed to get interrupt controller register\n"); 418 + return; 419 + } 420 + 421 + irq_base = irq_alloc_descs(-1, 0, nr_irqs - NR_IRQS_LEGACY, 0); 422 + if (irq_base < 0) { 423 + pr_err("Failed to allocate IRQ numbers\n"); 424 + goto err; 425 + } else if (irq_base != NR_IRQS_LEGACY) { 426 + pr_err("ICU's irqbase should be started from 0\n"); 427 + goto err; 428 + } 429 + icu_data[0].conf_enable = conf->conf_enable; 430 + icu_data[0].conf_disable = conf->conf_disable; 431 + icu_data[0].conf_mask = conf->conf_mask; 432 + icu_data[0].nr_irqs = nr_irqs; 433 + icu_data[0].virq_base = 0; 434 + icu_data[0].domain = irq_domain_add_legacy(node, nr_irqs, 0, 0, 435 + &mmp_irq_domain_ops, 436 + &icu_data[0]); 437 + irq_set_default_host(icu_data[0].domain); 438 + for (irq = 0; irq < 
nr_irqs; irq++) 439 + icu_mask_irq(irq_get_irq_data(irq)); 440 + mmp2_mux_init(node); 441 + return; 442 + err: 443 + iounmap(mmp_icu_base); 444 + } 445 + #endif
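The cascade handler `icu_mux_irq_demux()` added above uses a common demux pattern: snapshot the mask register once, then repeatedly read the unmasked status bits and dispatch each set bit until the status reads clear. A minimal user-space sketch of that loop, with plain variables standing in for the `reg_mask`/`reg_status` registers and a recording function in place of `generic_handle_irq()` (all names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated mux-intc registers (stand-ins for reg_mask/reg_status). */
static uint32_t reg_mask;
static uint32_t reg_status;

/* Record which virtual IRQs were "handled", in dispatch order. */
static int handled[32];
static int nhandled;

/* Stand-in for generic_handle_irq(): record the virq and clear its
 * status bit (virq_base is 0 in this sketch, so bit index == virq). */
static void handle_virq(int virq)
{
    handled[nhandled++] = virq;
    reg_status &= ~(1u << virq);
}

/* Mirrors the structure of icu_mux_irq_demux(): keep servicing
 * unmasked status bits until the status register reads clear. */
static void demux(int virq_base)
{
    uint32_t mask = reg_mask;   /* mask is read once, like the kernel code */

    for (;;) {
        uint32_t status = reg_status & ~mask;
        if (status == 0)
            break;
        for (int n = 0; n < 32; n++)
            if (status & (1u << n))
                handle_virq(virq_base + n);
    }
}
```

With mask bit 2 set and status bits 0-2 pending, only bits 0 and 1 are dispatched; the masked bit stays latched in status, exactly as a masked secondary interrupt would remain pending in the mux status register.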
+38 -30
arch/arm/mach-mmp/mmp-dt.c
··· 14 14 #include <linux/of_irq.h> 15 15 #include <linux/of_platform.h> 16 16 #include <asm/mach/arch.h> 17 + #include <asm/mach/time.h> 17 18 #include <mach/irqs.h> 18 19 19 20 #include "common.h" 20 21 21 - extern struct sys_timer pxa168_timer; 22 - extern void __init icu_init_irq(void); 22 + extern void __init mmp_dt_irq_init(void); 23 + extern void __init mmp_dt_init_timer(void); 23 24 24 - static const struct of_dev_auxdata mmp_auxdata_lookup[] __initconst = { 25 + static struct sys_timer mmp_dt_timer = { 26 + .init = mmp_dt_init_timer, 27 + }; 28 + 29 + static const struct of_dev_auxdata pxa168_auxdata_lookup[] __initconst = { 25 30 OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4017000, "pxa2xx-uart.0", NULL), 26 31 OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4018000, "pxa2xx-uart.1", NULL), 27 32 OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4026000, "pxa2xx-uart.2", NULL), ··· 37 32 {} 38 33 }; 39 34 40 - static int __init mmp_intc_add_irq_domain(struct device_node *np, 41 - struct device_node *parent) 42 - { 43 - irq_domain_add_simple(np, 0); 44 - return 0; 45 - } 46 - 47 - static int __init mmp_gpio_add_irq_domain(struct device_node *np, 48 - struct device_node *parent) 49 - { 50 - irq_domain_add_simple(np, IRQ_GPIO_START); 51 - return 0; 52 - } 53 - 54 - static const struct of_device_id mmp_irq_match[] __initconst = { 55 - { .compatible = "mrvl,mmp-intc", .data = mmp_intc_add_irq_domain, }, 56 - { .compatible = "mrvl,mmp-gpio", .data = mmp_gpio_add_irq_domain, }, 35 + static const struct of_dev_auxdata pxa910_auxdata_lookup[] __initconst = { 36 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4017000, "pxa2xx-uart.0", NULL), 37 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4018000, "pxa2xx-uart.1", NULL), 38 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4036000, "pxa2xx-uart.2", NULL), 39 + OF_DEV_AUXDATA("mrvl,mmp-twsi", 0xd4011000, "pxa2xx-i2c.0", NULL), 40 + OF_DEV_AUXDATA("mrvl,mmp-twsi", 0xd4037000, "pxa2xx-i2c.1", NULL), 41 + OF_DEV_AUXDATA("mrvl,mmp-gpio", 0xd4019000, "pxa-gpio", NULL),
42 + OF_DEV_AUXDATA("mrvl,mmp-rtc", 0xd4010000, "sa1100-rtc", NULL), 57 43 {} 58 44 }; 59 45 60 - static void __init mmp_dt_init(void) 46 + static void __init pxa168_dt_init(void) 61 47 { 62 - 63 - of_irq_init(mmp_irq_match); 64 - 65 48 of_platform_populate(NULL, of_default_bus_match_table, 66 - mmp_auxdata_lookup, NULL); 49 + pxa168_auxdata_lookup, NULL); 67 50 } 68 51 69 - static const char *pxa168_dt_board_compat[] __initdata = { 52 + static void __init pxa910_dt_init(void) 53 + { 54 + of_platform_populate(NULL, of_default_bus_match_table, 55 + pxa910_auxdata_lookup, NULL); 56 + } 57 + 58 + static const char *mmp_dt_board_compat[] __initdata = { 70 59 "mrvl,pxa168-aspenite", 60 + "mrvl,pxa910-dkb", 71 61 NULL, 72 62 }; 73 63 74 64 DT_MACHINE_START(PXA168_DT, "Marvell PXA168 (Device Tree Support)") 75 65 .map_io = mmp_map_io, 76 - .init_irq = icu_init_irq, 77 - .timer = &pxa168_timer, 78 - .init_machine = mmp_dt_init, 79 - .dt_compat = pxa168_dt_board_compat, 66 + .init_irq = mmp_dt_irq_init, 67 + .timer = &mmp_dt_timer, 68 + .init_machine = pxa168_dt_init, 69 + .dt_compat = mmp_dt_board_compat, 70 + MACHINE_END 71 + 72 + DT_MACHINE_START(PXA910_DT, "Marvell PXA910 (Device Tree Support)") 73 + .map_io = mmp_map_io, 74 + .init_irq = mmp_dt_irq_init, 75 + .timer = &mmp_dt_timer, 76 + .init_machine = pxa910_dt_init, 77 + .dt_compat = mmp_dt_board_compat, 80 78 MACHINE_END
+60
arch/arm/mach-mmp/mmp2-dt.c
··· 1 + /* 2 + * linux/arch/arm/mach-mmp/mmp2-dt.c 3 + * 4 + * Copyright (C) 2012 Marvell Technology Group Ltd. 5 + * Author: Haojian Zhuang <haojian.zhuang@marvell.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * publishhed by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/io.h> 13 + #include <linux/irq.h> 14 + #include <linux/irqdomain.h> 15 + #include <linux/of_irq.h> 16 + #include <linux/of_platform.h> 17 + #include <asm/mach/arch.h> 18 + #include <asm/mach/time.h> 19 + #include <mach/irqs.h> 20 + #include <mach/regs-apbc.h> 21 + 22 + #include "common.h" 23 + 24 + extern void __init mmp_dt_irq_init(void); 25 + extern void __init mmp_dt_init_timer(void); 26 + 27 + static struct sys_timer mmp_dt_timer = { 28 + .init = mmp_dt_init_timer, 29 + }; 30 + 31 + static const struct of_dev_auxdata mmp2_auxdata_lookup[] __initconst = { 32 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4030000, "pxa2xx-uart.0", NULL), 33 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4017000, "pxa2xx-uart.1", NULL), 34 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4018000, "pxa2xx-uart.2", NULL), 35 + OF_DEV_AUXDATA("mrvl,mmp-uart", 0xd4016000, "pxa2xx-uart.3", NULL), 36 + OF_DEV_AUXDATA("mrvl,mmp-twsi", 0xd4011000, "pxa2xx-i2c.0", NULL), 37 + OF_DEV_AUXDATA("mrvl,mmp-twsi", 0xd4025000, "pxa2xx-i2c.1", NULL), 38 + OF_DEV_AUXDATA("mrvl,mmp-gpio", 0xd4019000, "pxa-gpio", NULL), 39 + OF_DEV_AUXDATA("mrvl,mmp-rtc", 0xd4010000, "sa1100-rtc", NULL), 40 + {} 41 + }; 42 + 43 + static void __init mmp2_dt_init(void) 44 + { 45 + of_platform_populate(NULL, of_default_bus_match_table, 46 + mmp2_auxdata_lookup, NULL); 47 + } 48 + 49 + static const char *mmp2_dt_board_compat[] __initdata = { 50 + "mrvl,mmp2-brownstone", 51 + NULL, 52 + }; 53 + 54 + DT_MACHINE_START(MMP2_DT, "Marvell MMP2 (Device Tree Support)") 55 + .map_io = mmp_map_io, 56 + .init_irq = mmp_dt_irq_init, 57 + .timer = &mmp_dt_timer,
58 + .init_machine = mmp2_dt_init, 59 + .dt_compat = mmp2_dt_board_compat, 60 + MACHINE_END
+60 -21
arch/arm/mach-mmp/time.c
··· 25 25 26 26 #include <linux/io.h> 27 27 #include <linux/irq.h> 28 + #include <linux/of.h> 29 + #include <linux/of_address.h> 30 + #include <linux/of_irq.h> 28 31 29 32 #include <asm/sched_clock.h> 30 33 #include <mach/addr-map.h> ··· 44 41 #define MAX_DELTA (0xfffffffe) 45 42 #define MIN_DELTA (16) 46 43 44 + static void __iomem *mmp_timer_base = TIMERS_VIRT_BASE; 45 + 47 46 /* 48 47 * FIXME: the timer needs some delay to stablize the counter capture 49 48 */ ··· 53 48 { 54 49 int delay = 100; 55 50 56 - __raw_writel(1, TIMERS_VIRT_BASE + TMR_CVWR(1)); 51 + __raw_writel(1, mmp_timer_base + TMR_CVWR(1)); 57 52 58 53 while (delay--) 59 54 cpu_relax(); 60 55 61 - return __raw_readl(TIMERS_VIRT_BASE + TMR_CVWR(1)); 56 + return __raw_readl(mmp_timer_base + TMR_CVWR(1)); 62 57 } 63 58 64 59 static u32 notrace mmp_read_sched_clock(void) ··· 73 68 /* 74 69 * Clear pending interrupt status. 75 70 */ 76 - __raw_writel(0x01, TIMERS_VIRT_BASE + TMR_ICR(0)); 71 + __raw_writel(0x01, mmp_timer_base + TMR_ICR(0)); 77 72 78 73 /* 79 74 * Disable timer 0. 80 75 */ 81 - __raw_writel(0x02, TIMERS_VIRT_BASE + TMR_CER); 76 + __raw_writel(0x02, mmp_timer_base + TMR_CER); 82 77 83 78 c->event_handler(c); 84 79 ··· 95 90 /* 96 91 * Disable timer 0. 97 92 */ 98 - __raw_writel(0x02, TIMERS_VIRT_BASE + TMR_CER); 93 + __raw_writel(0x02, mmp_timer_base + TMR_CER); 99 94 100 95 /* 101 96 * Clear and enable timer match 0 interrupt. 102 97 */ 103 - __raw_writel(0x01, TIMERS_VIRT_BASE + TMR_ICR(0)); 104 - __raw_writel(0x01, TIMERS_VIRT_BASE + TMR_IER(0)); 98 + __raw_writel(0x01, mmp_timer_base + TMR_ICR(0)); 99 + __raw_writel(0x01, mmp_timer_base + TMR_IER(0)); 105 100 106 101 /* 107 102 * Setup new clockevent timer value. 108 103 */ 109 - __raw_writel(delta - 1, TIMERS_VIRT_BASE + TMR_TN_MM(0, 0)); 104 + __raw_writel(delta - 1, mmp_timer_base + TMR_TN_MM(0, 0)); 110 105 111 106 /* 112 107 * Enable timer 0. 
113 108 */ 114 - __raw_writel(0x03, TIMERS_VIRT_BASE + TMR_CER); 109 + __raw_writel(0x03, mmp_timer_base + TMR_CER); 115 110 116 111 local_irq_restore(flags); 117 112 ··· 129 124 case CLOCK_EVT_MODE_UNUSED: 130 125 case CLOCK_EVT_MODE_SHUTDOWN: 131 126 /* disable the matching interrupt */ 132 - __raw_writel(0x00, TIMERS_VIRT_BASE + TMR_IER(0)); 127 + __raw_writel(0x00, mmp_timer_base + TMR_IER(0)); 133 128 break; 134 129 case CLOCK_EVT_MODE_RESUME: 135 130 case CLOCK_EVT_MODE_PERIODIC: ··· 162 157 163 158 static void __init timer_config(void) 164 159 { 165 - uint32_t ccr = __raw_readl(TIMERS_VIRT_BASE + TMR_CCR); 160 - uint32_t ccr = __raw_readl(mmp_timer_base + TMR_CCR); 166 161 167 - __raw_writel(0x0, TIMERS_VIRT_BASE + TMR_CER); /* disable */ 162 + __raw_writel(0x0, mmp_timer_base + TMR_CER); /* disable */ 168 163 169 164 ccr &= (cpu_is_mmp2()) ? (TMR_CCR_CS_0(0) | TMR_CCR_CS_1(0)) : 170 165 (TMR_CCR_CS_0(3) | TMR_CCR_CS_1(3)); 171 - __raw_writel(ccr, TIMERS_VIRT_BASE + TMR_CCR); 166 + __raw_writel(ccr, mmp_timer_base + TMR_CCR); 172 167 173 168 /* set timer 0 to periodic mode, and timer 1 to free-running mode */ 174 - __raw_writel(0x2, TIMERS_VIRT_BASE + TMR_CMR); 169 + __raw_writel(0x2, mmp_timer_base + TMR_CMR); 175 170 176 - __raw_writel(0x1, TIMERS_VIRT_BASE + TMR_PLCR(0)); /* periodic */ 177 - __raw_writel(0x7, TIMERS_VIRT_BASE + TMR_ICR(0)); /* clear status */ 178 - __raw_writel(0x0, TIMERS_VIRT_BASE + TMR_IER(0)); 171 + __raw_writel(0x1, mmp_timer_base + TMR_PLCR(0)); /* periodic */ 172 + __raw_writel(0x7, mmp_timer_base + TMR_ICR(0)); /* clear status */ 173 + __raw_writel(0x0, mmp_timer_base + TMR_IER(0)); 179 174 180 - __raw_writel(0x0, TIMERS_VIRT_BASE + TMR_PLCR(1)); /* free-running */ 181 - __raw_writel(0x7, TIMERS_VIRT_BASE + TMR_ICR(1)); /* clear status */ 182 - __raw_writel(0x0, TIMERS_VIRT_BASE + TMR_IER(1)); 175 + __raw_writel(0x0, mmp_timer_base + TMR_PLCR(1)); /* free-running */
176 + __raw_writel(0x7, mmp_timer_base + TMR_ICR(1)); /* clear status */ 177 + __raw_writel(0x0, mmp_timer_base + TMR_IER(1)); 183 178 184 179 /* enable timer 1 counter */ 185 - __raw_writel(0x2, TIMERS_VIRT_BASE + TMR_CER); 180 + __raw_writel(0x2, mmp_timer_base + TMR_CER); 186 181 } 187 182 188 183 static struct irqaction timer_irq = { ··· 208 203 clocksource_register_hz(&cksrc, CLOCK_TICK_RATE); 209 204 clockevents_register_device(&ckevt); 210 205 } 206 + 207 + #ifdef CONFIG_OF 208 + static struct of_device_id mmp_timer_dt_ids[] = { 209 + { .compatible = "mrvl,mmp-timer", }, 210 + {} 211 + }; 212 + 213 + void __init mmp_dt_init_timer(void) 214 + { 215 + struct device_node *np; 216 + int irq, ret; 217 + 218 + np = of_find_matching_node(NULL, mmp_timer_dt_ids); 219 + if (!np) { 220 + ret = -ENODEV; 221 + goto out; 222 + } 223 + 224 + irq = irq_of_parse_and_map(np, 0); 225 + if (!irq) { 226 + ret = -EINVAL; 227 + goto out; 228 + } 229 + mmp_timer_base = of_iomap(np, 0); 230 + if (!mmp_timer_base) { 231 + ret = -ENOMEM; 232 + goto out; 233 + } 234 + timer_init(irq); 235 + return; 236 + out: 237 + pr_err("Failed to get timer from device tree with error:%d\n", ret); 238 + } 239 + #endif
+15 -10
arch/arm/mach-msm/board-msm8x60.c
··· 17 17 #include <linux/irqdomain.h> 18 18 #include <linux/of.h> 19 19 #include <linux/of_address.h> 20 + #include <linux/of_irq.h> 20 21 #include <linux/of_platform.h> 21 22 #include <linux/memblock.h> 22 23 ··· 50 49 msm_map_msm8x60_io(); 51 50 } 52 51 52 + #ifdef CONFIG_OF 53 + static struct of_device_id msm_dt_gic_match[] __initdata = { 54 + { .compatible = "qcom,msm-8660-qgic", .data = gic_of_init }, 55 + {} 56 + }; 57 + #endif 58 + 53 59 static void __init msm8x60_init_irq(void) 54 60 { 55 - gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE, 56 - (void *)MSM_QGIC_CPU_BASE); 61 + if (!of_have_populated_dt()) 62 + gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE, 63 + (void *)MSM_QGIC_CPU_BASE); 64 + #ifdef CONFIG_OF 65 + else 66 + of_irq_init(msm_dt_gic_match); 67 + #endif 57 68 58 69 /* Edge trigger PPIs except AVS_SVICINT and AVS_SVICINTSWDONE */ 59 70 writel(0xFFFFD7FF, MSM_QGIC_DIST_BASE + GIC_DIST_CONFIG + 4); ··· 86 73 {} 87 74 }; 88 75 89 - static struct of_device_id msm_dt_gic_match[] __initdata = { 90 - { .compatible = "qcom,msm-8660-qgic", }, 91 - {} 92 - }; 93 - 94 76 static void __init msm8x60_dt_init(void) 95 77 { 96 - irq_domain_generate_simple(msm_dt_gic_match, MSM8X60_QGIC_DIST_PHYS, 97 - GIC_SPI_START); 98 - 99 78 if (of_machine_is_compatible("qcom,msm8660-surf")) { 100 79 printk(KERN_INFO "Init surf UART registers\n"); 101 80 msm8x60_init_uart12dm();
+7
arch/arm/mach-pxa/include/mach/mfp-pxa2xx.h
··· 17 17 * 18 18 * bit 23 - Input/Output (PXA2xx specific) 19 19 * bit 24 - Wakeup Enable(PXA2xx specific) 20 + * bit 25 - Keep Output (PXA2xx specific) 20 21 */ 21 22 22 23 #define MFP_DIR_IN (0x0 << 23) ··· 26 25 #define MFP_DIR(x) (((x) >> 23) & 0x1) 27 26 28 27 #define MFP_LPM_CAN_WAKEUP (0x1 << 24) 28 + 29 + /* 30 + * MFP_LPM_KEEP_OUTPUT must be specified for pins that need to 31 + * retain their last output level (low or high). 32 + * Note: MFP_LPM_KEEP_OUTPUT has no effect on pins configured for input. 33 + */ 29 34 #define MFP_LPM_KEEP_OUTPUT (0x1 << 25) 30 35 31 36 #define WAKEUP_ON_EDGE_RISE (MFP_LPM_CAN_WAKEUP | MFP_LPM_EDGE_RISE)
+19 -2
arch/arm/mach-pxa/mfp-pxa2xx.c
··· 33 33 #define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2)) 34 34 #define GPLR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5)) 35 35 #define GPDR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x0c) 36 + #define GPSR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x18) 37 + #define GPCR(x) __REG2(0x40E00000, BANK_OFF((x) >> 5) + 0x24) 36 38 37 39 #define PWER_WE35 (1 << 24) 38 40 ··· 350 348 #ifdef CONFIG_PM 351 349 static unsigned long saved_gafr[2][4]; 352 350 static unsigned long saved_gpdr[4]; 351 + static unsigned long saved_gplr[4]; 353 352 static unsigned long saved_pgsr[4]; 354 353 355 354 static int pxa2xx_mfp_suspend(void) ··· 369 366 } 370 367 371 368 for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++) { 372 - 373 369 saved_gafr[0][i] = GAFR_L(i); 374 370 saved_gafr[1][i] = GAFR_U(i); 375 371 saved_gpdr[i] = GPDR(i * 32); 372 + saved_gplr[i] = GPLR(i * 32); 376 373 saved_pgsr[i] = PGSR(i); 377 374 378 - GPDR(i * 32) = gpdr_lpm[i]; 375 + GPSR(i * 32) = PGSR(i); 376 + GPCR(i * 32) = ~PGSR(i); 379 377 } 378 + 379 + /* set GPDR bits taking into account MFP_LPM_KEEP_OUTPUT */ 380 + for (i = 0; i < pxa_last_gpio; i++) { 381 + if ((gpdr_lpm[gpio_to_bank(i)] & GPIO_bit(i)) || 382 + ((gpio_desc[i].config & MFP_LPM_KEEP_OUTPUT) && 383 + (saved_gpdr[gpio_to_bank(i)] & GPIO_bit(i)))) 384 + GPDR(i) |= GPIO_bit(i); 385 + else 386 + GPDR(i) &= ~GPIO_bit(i); 387 + } 388 + 380 389 return 0; 381 390 } 382 391 ··· 399 384 for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++) { 400 385 GAFR_L(i) = saved_gafr[0][i]; 401 386 GAFR_U(i) = saved_gafr[1][i]; 387 + GPSR(i * 32) = saved_gplr[i]; 388 + GPCR(i * 32) = ~saved_gplr[i]; 402 389 GPDR(i * 32) = saved_gpdr[i]; 403 390 PGSR(i) = saved_pgsr[i]; 404 391 }
+5 -1
arch/arm/mach-pxa/pxa27x.c
··· 421 421 pxa_register_device(&pxa27x_device_i2c_power, info); 422 422 } 423 423 424 + static struct pxa_gpio_platform_data pxa27x_gpio_info __initdata = { 425 + .gpio_set_wake = gpio_set_wake, 426 + }; 427 + 424 428 static struct platform_device *devices[] __initdata = { 425 - &pxa_device_gpio, 426 429 &pxa27x_device_udc, 427 430 &pxa_device_pmu, 428 431 &pxa_device_i2s, ··· 461 458 register_syscore_ops(&pxa2xx_mfp_syscore_ops); 462 459 register_syscore_ops(&pxa2xx_clock_syscore_ops); 463 460 461 + pxa_register_device(&pxa_device_gpio, &pxa27x_gpio_info); 464 462 ret = platform_add_devices(devices, ARRAY_SIZE(devices)); 465 463 } 466 464
+4 -4
arch/arm/mach-s3c24xx/Kconfig
··· 111 111 help 112 112 Compile in platform device definition for Samsung TouchScreen. 113 113 114 - # cpu-specific sections 115 - 116 - if CPU_S3C2410 117 - 118 114 config S3C2410_DMA 119 115 bool 120 116 depends on S3C24XX_DMA && (CPU_S3C2410 || CPU_S3C2442) ··· 122 126 bool 123 127 help 124 128 Power Management code common to S3C2410 and better 129 + 130 + # cpu-specific sections 131 + 132 + if CPU_S3C2410 125 133 126 134 config S3C24XX_SIMTEC_NOR 127 135 bool
+2
arch/arm/mach-s5pv210/mach-goni.c
··· 25 25 #include <linux/gpio_keys.h> 26 26 #include <linux/input.h> 27 27 #include <linux/gpio.h> 28 + #include <linux/mmc/host.h> 28 29 #include <linux/interrupt.h> 29 30 30 31 #include <asm/hardware/vic.h> ··· 766 765 /* MoviNAND */ 767 766 static struct s3c_sdhci_platdata goni_hsmmc0_data __initdata = { 768 767 .max_width = 4, 768 + .host_caps2 = MMC_CAP2_BROKEN_VOLTAGE, 769 769 .cd_type = S3C_SDHCI_CD_PERMANENT, 770 770 }; 771 771
+1 -1
arch/arm/mach-sa1100/generic.c
··· 306 306 } 307 307 308 308 static struct resource sa1100_rtc_resources[] = { 309 - DEFINE_RES_MEM(0x90010000, 0x9001003f), 309 + DEFINE_RES_MEM(0x90010000, 0x40), 310 310 DEFINE_RES_IRQ_NAMED(IRQ_RTC1Hz, "rtc 1Hz"), 311 311 DEFINE_RES_IRQ_NAMED(IRQ_RTCAlrm, "rtc alarm"), 312 312 };
+4 -2
arch/arm/mach-u300/core.c
··· 1667 1667 1668 1668 for (i = 0; i < U300_VIC_IRQS_END; i++) 1669 1669 set_bit(i, (unsigned long *) &mask[0]); 1670 - vic_init((void __iomem *) U300_INTCON0_VBASE, 0, mask[0], mask[0]); 1671 - vic_init((void __iomem *) U300_INTCON1_VBASE, 32, mask[1], mask[1]); 1670 + vic_init((void __iomem *) U300_INTCON0_VBASE, IRQ_U300_INTCON0_START, 1671 + mask[0], mask[0]); 1672 + vic_init((void __iomem *) U300_INTCON1_VBASE, IRQ_U300_INTCON1_START, 1673 + mask[1], mask[1]); 1672 1674 } 1673 1675 1674 1676
+1 -8
arch/arm/mach-u300/i2c.c
··· 146 146 .min_uV = 1800000, 147 147 .max_uV = 1800000, 148 148 .valid_modes_mask = REGULATOR_MODE_NORMAL, 149 - .valid_ops_mask = 150 - REGULATOR_CHANGE_VOLTAGE | 151 - REGULATOR_CHANGE_STATUS, 152 149 .always_on = 1, 153 150 .boot_on = 1, 154 151 }, ··· 157 160 .min_uV = 2500000, 158 161 .max_uV = 2500000, 159 162 .valid_modes_mask = REGULATOR_MODE_NORMAL, 160 - .valid_ops_mask = 161 - REGULATOR_CHANGE_VOLTAGE | 162 - REGULATOR_CHANGE_STATUS, 163 163 .always_on = 1, 164 164 .boot_on = 1, 165 165 }, ··· 224 230 .max_uV = 1800000, 225 231 .valid_modes_mask = REGULATOR_MODE_NORMAL, 226 232 .valid_ops_mask = 227 - REGULATOR_CHANGE_VOLTAGE | 228 - REGULATOR_CHANGE_STATUS, 233 + REGULATOR_CHANGE_VOLTAGE, 229 234 .always_on = 1, 230 235 .boot_on = 1, 231 236 },
+75 -75
arch/arm/mach-u300/include/mach/irqs.h
··· 12 12 #ifndef __MACH_IRQS_H 13 13 #define __MACH_IRQS_H 14 14 15 - #define IRQ_U300_INTCON0_START 0 16 - #define IRQ_U300_INTCON1_START 32 15 + #define IRQ_U300_INTCON0_START 1 16 + #define IRQ_U300_INTCON1_START 33 17 17 /* These are on INTCON0 - 30 lines */ 18 - #define IRQ_U300_IRQ0_EXT 0 19 - #define IRQ_U300_IRQ1_EXT 1 20 - #define IRQ_U300_DMA 2 21 - #define IRQ_U300_VIDEO_ENC_0 3 22 - #define IRQ_U300_VIDEO_ENC_1 4 23 - #define IRQ_U300_AAIF_RX 5 24 - #define IRQ_U300_AAIF_TX 6 25 - #define IRQ_U300_AAIF_VGPIO 7 26 - #define IRQ_U300_AAIF_WAKEUP 8 27 - #define IRQ_U300_PCM_I2S0_FRAME 9 28 - #define IRQ_U300_PCM_I2S0_FIFO 10 29 - #define IRQ_U300_PCM_I2S1_FRAME 11 30 - #define IRQ_U300_PCM_I2S1_FIFO 12 31 - #define IRQ_U300_XGAM_GAMCON 13 32 - #define IRQ_U300_XGAM_CDI 14 33 - #define IRQ_U300_XGAM_CDICON 15 18 + #define IRQ_U300_IRQ0_EXT 1 19 + #define IRQ_U300_IRQ1_EXT 2 20 + #define IRQ_U300_DMA 3 21 + #define IRQ_U300_VIDEO_ENC_0 4 22 + #define IRQ_U300_VIDEO_ENC_1 5 23 + #define IRQ_U300_AAIF_RX 6 24 + #define IRQ_U300_AAIF_TX 7 25 + #define IRQ_U300_AAIF_VGPIO 8 26 + #define IRQ_U300_AAIF_WAKEUP 9 27 + #define IRQ_U300_PCM_I2S0_FRAME 10 28 + #define IRQ_U300_PCM_I2S0_FIFO 11 29 + #define IRQ_U300_PCM_I2S1_FRAME 12 30 + #define IRQ_U300_PCM_I2S1_FIFO 13 31 + #define IRQ_U300_XGAM_GAMCON 14 32 + #define IRQ_U300_XGAM_CDI 15 33 + #define IRQ_U300_XGAM_CDICON 16 34 34 #if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) 35 35 /* MMIACC not used on the DB3210 or DB3350 chips */ 36 - #define IRQ_U300_XGAM_MMIACC 16 36 + #define IRQ_U300_XGAM_MMIACC 17 37 37 #endif 38 - #define IRQ_U300_XGAM_PDI 17 39 - #define IRQ_U300_XGAM_PDICON 18 40 - #define IRQ_U300_XGAM_GAMEACC 19 41 - #define IRQ_U300_XGAM_MCIDCT 20 42 - #define IRQ_U300_APEX 21 43 - #define IRQ_U300_UART0 22 44 - #define IRQ_U300_SPI 23 45 - #define IRQ_U300_TIMER_APP_OS 24 46 - #define IRQ_U300_TIMER_APP_DD 25 47 - #define IRQ_U300_TIMER_APP_GP1 26
48 - #define IRQ_U300_TIMER_APP_GP2 27 49 - #define IRQ_U300_TIMER_OS 28 50 - #define IRQ_U300_TIMER_MS 29 51 - #define IRQ_U300_KEYPAD_KEYBF 30 52 - #define IRQ_U300_KEYPAD_KEYBR 31 38 + #define IRQ_U300_XGAM_PDI 18 39 + #define IRQ_U300_XGAM_PDICON 19 40 + #define IRQ_U300_XGAM_GAMEACC 20 41 + #define IRQ_U300_XGAM_MCIDCT 21 42 + #define IRQ_U300_APEX 22 43 + #define IRQ_U300_UART0 23 44 + #define IRQ_U300_SPI 24 45 + #define IRQ_U300_TIMER_APP_OS 25 46 + #define IRQ_U300_TIMER_APP_DD 26 47 + #define IRQ_U300_TIMER_APP_GP1 27 48 + #define IRQ_U300_TIMER_APP_GP2 28 49 + #define IRQ_U300_TIMER_OS 29 50 + #define IRQ_U300_TIMER_MS 30 51 + #define IRQ_U300_KEYPAD_KEYBF 31 52 + #define IRQ_U300_KEYPAD_KEYBR 32 53 53 /* These are on INTCON1 - 32 lines */ 54 - #define IRQ_U300_GPIO_PORT0 32 55 - #define IRQ_U300_GPIO_PORT1 33 56 - #define IRQ_U300_GPIO_PORT2 34 54 + #define IRQ_U300_GPIO_PORT0 33 55 + #define IRQ_U300_GPIO_PORT1 34 56 + #define IRQ_U300_GPIO_PORT2 35 57 57 58 58 #if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) || \ 59 59 defined(CONFIG_MACH_U300_BS335) 60 60 /* These are for DB3150, DB3200 and DB3350 */ 61 - #define IRQ_U300_WDOG 35 62 - #define IRQ_U300_EVHIST 36 63 - #define IRQ_U300_MSPRO 37 64 - #define IRQ_U300_MMCSD_MCIINTR0 38 65 - #define IRQ_U300_MMCSD_MCIINTR1 39 66 - #define IRQ_U300_I2C0 40 67 - #define IRQ_U300_I2C1 41 68 - #define IRQ_U300_RTC 42 69 - #define IRQ_U300_NFIF 43 70 - #define IRQ_U300_NFIF2 44 61 + #define IRQ_U300_WDOG 36 62 + #define IRQ_U300_EVHIST 37 63 + #define IRQ_U300_MSPRO 38 64 + #define IRQ_U300_MMCSD_MCIINTR0 39 65 + #define IRQ_U300_MMCSD_MCIINTR1 40 66 + #define IRQ_U300_I2C0 41 67 + #define IRQ_U300_I2C1 42 68 + #define IRQ_U300_RTC 43 69 + #define IRQ_U300_NFIF 44 70 + #define IRQ_U300_NFIF2 45 71 71 #endif 72 72 73 73 /* DB3150 and DB3200 have only 45 IRQs */ 74 74 #if defined(CONFIG_MACH_U300_BS2X) || defined(CONFIG_MACH_U300_BS330) 75 - #define U300_VIC_IRQS_END 45 75 + #define U300_VIC_IRQS_END 46
76 76 #endif 77 77 78 78 /* The DB3350-specific interrupt lines */ 79 79 #ifdef CONFIG_MACH_U300_BS335 80 - #define IRQ_U300_ISP_F0 45 81 - #define IRQ_U300_ISP_F1 46 82 - #define IRQ_U300_ISP_F2 47 83 - #define IRQ_U300_ISP_F3 48 84 - #define IRQ_U300_ISP_F4 49 85 - #define IRQ_U300_GPIO_PORT3 50 86 - #define IRQ_U300_SYSCON_PLL_LOCK 51 87 - #define IRQ_U300_UART1 52 88 - #define IRQ_U300_GPIO_PORT4 53 89 - #define IRQ_U300_GPIO_PORT5 54 90 - #define IRQ_U300_GPIO_PORT6 55 91 - #define U300_VIC_IRQS_END 56 80 + #define IRQ_U300_ISP_F0 46 81 + #define IRQ_U300_ISP_F1 47 82 + #define IRQ_U300_ISP_F2 48 83 + #define IRQ_U300_ISP_F3 49 84 + #define IRQ_U300_ISP_F4 50 85 + #define IRQ_U300_GPIO_PORT3 51 86 + #define IRQ_U300_SYSCON_PLL_LOCK 52 87 + #define IRQ_U300_UART1 53 88 + #define IRQ_U300_GPIO_PORT4 54 89 + #define IRQ_U300_GPIO_PORT5 55 90 + #define IRQ_U300_GPIO_PORT6 56 91 + #define U300_VIC_IRQS_END 57 92 92 #endif 93 93 94 94 /* The DB3210-specific interrupt lines */ 95 95 #ifdef CONFIG_MACH_U300_BS365 96 - #define IRQ_U300_GPIO_PORT3 35 97 - #define IRQ_U300_GPIO_PORT4 36 98 - #define IRQ_U300_WDOG 37 99 - #define IRQ_U300_EVHIST 38 100 - #define IRQ_U300_MSPRO 39 101 - #define IRQ_U300_MMCSD_MCIINTR0 40 102 - #define IRQ_U300_MMCSD_MCIINTR1 41 103 - #define IRQ_U300_I2C0 42 104 - #define IRQ_U300_I2C1 43 105 - #define IRQ_U300_RTC 44 106 - #define IRQ_U300_NFIF 45 107 - #define IRQ_U300_NFIF2 46 108 - #define IRQ_U300_SYSCON_PLL_LOCK 47 109 - #define U300_VIC_IRQS_END 48 96 + #define IRQ_U300_GPIO_PORT3 36 97 + #define IRQ_U300_GPIO_PORT4 37 98 + #define IRQ_U300_WDOG 38 99 + #define IRQ_U300_EVHIST 39 100 + #define IRQ_U300_MSPRO 40 101 + #define IRQ_U300_MMCSD_MCIINTR0 41 102 + #define IRQ_U300_MMCSD_MCIINTR1 42 103 + #define IRQ_U300_I2C0 43 104 + #define IRQ_U300_I2C1 44 105 + #define IRQ_U300_RTC 45 106 + #define IRQ_U300_NFIF 46 107 + #define IRQ_U300_NFIF2 47 108 + #define IRQ_U300_SYSCON_PLL_LOCK 48 109 + #define U300_VIC_IRQS_END 49
110 110 #endif 111 111 112 112 /* Maximum 8*7 GPIO lines */ ··· 117 117 #define IRQ_U300_GPIO_END (U300_VIC_IRQS_END) 118 118 #endif 119 119 120 - #define NR_IRQS (IRQ_U300_GPIO_END) 120 + #define NR_IRQS (IRQ_U300_GPIO_END - IRQ_U300_INTCON0_START) 121 121 122 122 #endif
+1 -1
arch/arm/mach-ux500/mbox-db5500.c
··· 168 168 return sprintf(buf, "0x%X\n", mbox_value); 169 169 } 170 170 171 - static DEVICE_ATTR(fifo, S_IWUGO | S_IRUGO, mbox_read_fifo, mbox_write_fifo); 171 + static DEVICE_ATTR(fifo, S_IWUSR | S_IRUGO, mbox_read_fifo, mbox_write_fifo); 172 172 173 173 static int mbox_show(struct seq_file *s, void *data) 174 174 {
+28
arch/arm/plat-samsung/include/plat/sdhci.h
··· 18 18 #ifndef __PLAT_S3C_SDHCI_H 19 19 #define __PLAT_S3C_SDHCI_H __FILE__ 20 20 21 + #include <plat/devs.h> 22 + 21 23 struct platform_device; 22 24 struct mmc_host; 23 25 struct mmc_card; ··· 357 355 static inline void exynos4_default_sdhci3(void) { } 358 356 359 357 #endif /* CONFIG_EXYNOS4_SETUP_SDHCI */ 358 + 359 + static inline void s3c_sdhci_setname(int id, char *name) 360 + { 361 + switch (id) { 362 + #ifdef CONFIG_S3C_DEV_HSMMC 363 + case 0: 364 + s3c_device_hsmmc0.name = name; 365 + break; 366 + #endif 367 + #ifdef CONFIG_S3C_DEV_HSMMC1 368 + case 1: 369 + s3c_device_hsmmc1.name = name; 370 + break; 371 + #endif 372 + #ifdef CONFIG_S3C_DEV_HSMMC2 373 + case 2: 374 + s3c_device_hsmmc2.name = name; 375 + break; 376 + #endif 377 + #ifdef CONFIG_S3C_DEV_HSMMC3 378 + case 3: 379 + s3c_device_hsmmc3.name = name; 380 + break; 381 + #endif 382 + } 383 + } 360 384 361 385 #endif /* __PLAT_S3C_SDHCI_H */
+26 -27
arch/blackfin/mach-bf538/boards/ezkit.c
··· 38 38 .name = "rtc-bfin", 39 39 .id = -1, 40 40 }; 41 - #endif 41 + #endif /* CONFIG_RTC_DRV_BFIN */ 42 42 43 43 #if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE) 44 44 #ifdef CONFIG_SERIAL_BFIN_UART0 ··· 100 100 .platform_data = &bfin_uart0_peripherals, /* Passed to driver */ 101 101 }, 102 102 }; 103 - #endif 103 + #endif /* CONFIG_SERIAL_BFIN_UART0 */ 104 104 #ifdef CONFIG_SERIAL_BFIN_UART1 105 105 static struct resource bfin_uart1_resources[] = { 106 106 { ··· 148 148 .platform_data = &bfin_uart1_peripherals, /* Passed to driver */ 149 149 }, 150 150 }; 151 - #endif 151 + #endif /* CONFIG_SERIAL_BFIN_UART1 */ 152 152 #ifdef CONFIG_SERIAL_BFIN_UART2 153 153 static struct resource bfin_uart2_resources[] = { 154 154 { ··· 196 196 .platform_data = &bfin_uart2_peripherals, /* Passed to driver */ 197 197 }, 198 198 }; 199 - #endif 200 - #endif 199 + #endif /* CONFIG_SERIAL_BFIN_UART2 */ 200 + #endif /* CONFIG_SERIAL_BFIN */ 201 201 202 202 #if defined(CONFIG_BFIN_SIR) || defined(CONFIG_BFIN_SIR_MODULE) 203 203 #ifdef CONFIG_BFIN_SIR0 ··· 224 224 .num_resources = ARRAY_SIZE(bfin_sir0_resources), 225 225 .resource = bfin_sir0_resources, 226 226 }; 227 - #endif 227 + #endif /* CONFIG_BFIN_SIR0 */ 228 228 #ifdef CONFIG_BFIN_SIR1 229 229 static struct resource bfin_sir1_resources[] = { 230 230 { ··· 249 249 .num_resources = ARRAY_SIZE(bfin_sir1_resources), 250 250 .resource = bfin_sir1_resources, 251 251 }; 252 - #endif 252 + #endif /* CONFIG_BFIN_SIR1 */ 253 253 #ifdef CONFIG_BFIN_SIR2 254 254 static struct resource bfin_sir2_resources[] = { 255 255 { ··· 274 274 .num_resources = ARRAY_SIZE(bfin_sir2_resources), 275 275 .resource = bfin_sir2_resources, 276 276 }; 277 - #endif 278 - #endif 277 + #endif /* CONFIG_BFIN_SIR2 */ 278 + #endif /* CONFIG_BFIN_SIR */ 279 279 280 280 #if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE) 281 281 #ifdef CONFIG_SERIAL_BFIN_SPORT0_UART
··· 311 311 .platform_data = &bfin_sport0_peripherals, /* Passed to driver */ 312 312 }, 313 313 }; 314 - #endif 314 + #endif /* CONFIG_SERIAL_BFIN_SPORT0_UART */ 315 315 #ifdef CONFIG_SERIAL_BFIN_SPORT1_UART 316 316 static struct resource bfin_sport1_uart_resources[] = { 317 317 { ··· 345 345 .platform_data = &bfin_sport1_peripherals, /* Passed to driver */ 346 346 }, 347 347 }; 348 - #endif 348 + #endif /* CONFIG_SERIAL_BFIN_SPORT1_UART */ 349 349 #ifdef CONFIG_SERIAL_BFIN_SPORT2_UART 350 350 static struct resource bfin_sport2_uart_resources[] = { 351 351 { ··· 379 379 .platform_data = &bfin_sport2_peripherals, /* Passed to driver */ 380 380 }, 381 381 }; 382 - #endif 382 + #endif /* CONFIG_SERIAL_BFIN_SPORT2_UART */ 383 383 #ifdef CONFIG_SERIAL_BFIN_SPORT3_UART 384 384 static struct resource bfin_sport3_uart_resources[] = { 385 385 { ··· 413 413 .platform_data = &bfin_sport3_peripherals, /* Passed to driver */ 414 414 }, 415 415 }; 416 - #endif 417 - #endif 416 + #endif /* CONFIG_SERIAL_BFIN_SPORT3_UART */ 417 + #endif /* CONFIG_SERIAL_BFIN_SPORT */ 418 418 419 419 #if defined(CONFIG_CAN_BFIN) || defined(CONFIG_CAN_BFIN_MODULE) 420 420 static unsigned short bfin_can_peripherals[] = { ··· 452 452 .platform_data = &bfin_can_peripherals, /* Passed to driver */ 453 453 }, 454 454 }; 455 - #endif 455 + #endif /* CONFIG_CAN_BFIN */ 456 456 457 457 /* 458 458 * USB-LAN EzExtender board ··· 488 488 .platform_data = &smc91x_info, 489 489 }, 490 490 }; 491 - #endif 491 + #endif /* CONFIG_SMC91X */ 492 492 493 493 #if defined(CONFIG_SPI_BFIN5XX) || defined(CONFIG_SPI_BFIN5XX_MODULE) 494 494 /* all SPI peripherals info goes here */ ··· 518 518 static struct bfin5xx_spi_chip spi_flash_chip_info = { 519 519 .enable_dma = 0, /* use dma transfer with this chip*/ 520 520 }; 521 - #endif 521 + #endif /* CONFIG_MTD_M25P80 */ 522 + #endif /* CONFIG_SPI_BFIN5XX */ 522 523 523 524 #if defined(CONFIG_TOUCHSCREEN_AD7879) || defined(CONFIG_TOUCHSCREEN_AD7879_MODULE) 524 525 #include <linux/spi/ad7879.h>
··· 536 535 .gpio_export = 1, /* Export GPIO to gpiolib */ 537 536 .gpio_base = -1, /* Dynamic allocation */ 538 537 }; 539 - #endif 538 + #endif /* CONFIG_TOUCHSCREEN_AD7879 */ 540 539 541 540 #if defined(CONFIG_FB_BFIN_LQ035Q1) || defined(CONFIG_FB_BFIN_LQ035Q1_MODULE) 542 541 #include <asm/bfin-lq035q1.h> ··· 565 564 .platform_data = &bfin_lq035q1_data, 566 565 }, 567 566 }; 568 - #endif 567 + #endif /* CONFIG_FB_BFIN_LQ035Q1 */ 569 568 570 569 static struct spi_board_info bf538_spi_board_info[] __initdata = { 571 570 #if defined(CONFIG_MTD_M25P80) \ ··· 580 579 .controller_data = &spi_flash_chip_info, 581 580 .mode = SPI_MODE_3, 582 581 }, 583 - #endif 582 + #endif /* CONFIG_MTD_M25P80 */ 584 583 #if defined(CONFIG_TOUCHSCREEN_AD7879_SPI) || defined(CONFIG_TOUCHSCREEN_AD7879_SPI_MODULE) 585 584 { 586 585 .modalias = "ad7879", ··· 591 590 .chip_select = 1, 592 591 .mode = SPI_CPHA | SPI_CPOL, 593 592 }, 594 - #endif 593 + #endif /* CONFIG_TOUCHSCREEN_AD7879_SPI */ 595 594 #if defined(CONFIG_FB_BFIN_LQ035Q1) || defined(CONFIG_FB_BFIN_LQ035Q1_MODULE) 596 595 { 597 596 .modalias = "bfin-lq035q1-spi", ··· 600 599 .chip_select = 2, 601 600 .mode = SPI_CPHA | SPI_CPOL, 602 601 }, 603 - #endif 602 + #endif /* CONFIG_FB_BFIN_LQ035Q1 */ 604 603 #if defined(CONFIG_SPI_SPIDEV) || defined(CONFIG_SPI_SPIDEV_MODULE) 605 604 { 606 605 .modalias = "spidev", ··· 608 607 .bus_num = 0, 609 608 .chip_select = 1, 610 609 }, 611 - #endif 610 + #endif /* CONFIG_SPI_SPIDEV */ 612 611 }; 613 612 614 613 /* SPI (0) */ ··· 717 716 }, 718 717 }; 719 718 720 - #endif /* spi master and devices */ 721 - 722 719 #if defined(CONFIG_I2C_BLACKFIN_TWI) || defined(CONFIG_I2C_BLACKFIN_TWI_MODULE) 723 720 static struct resource bfin_twi0_resource[] = { 724 721 [0] = { ··· 758 759 .num_resources = ARRAY_SIZE(bfin_twi1_resource), 759 760 .resource = bfin_twi1_resource, 760 761 }; 761 - #endif 762 - #endif 762 + #endif /* CONFIG_BF542 */ 763 + #endif /* CONFIG_I2C_BLACKFIN_TWI */
763 764 #if defined(CONFIG_KEYBOARD_GPIO) || defined(CONFIG_KEYBOARD_GPIO_MODULE) 765 766 #include <linux/gpio_keys.h>
+1
arch/hexagon/kernel/dma.c
··· 22 22 #include <linux/bootmem.h> 23 23 #include <linux/genalloc.h> 24 24 #include <asm/dma-mapping.h> 25 + #include <linux/module.h> 25 26 26 27 struct dma_map_ops *dma_ops; 27 28 EXPORT_SYMBOL(dma_ops);
+3 -3
arch/hexagon/kernel/process.c
··· 1 1 /* 2 2 * Process creation support for Hexagon 3 3 * 4 - * Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved. 4 + * Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License version 2 and ··· 88 88 void cpu_idle(void) 89 89 { 90 90 while (1) { 91 - tick_nohz_stop_sched_tick(1); 91 + tick_nohz_idle_enter(); 92 92 local_irq_disable(); 93 93 while (!need_resched()) { 94 94 idle_sleep(); ··· 97 97 local_irq_disable(); 98 98 } 99 99 local_irq_enable(); 100 - tick_nohz_restart_sched_tick(); 100 + tick_nohz_idle_exit(); 101 101 schedule(); 102 102 } 103 103 }
+1
arch/hexagon/kernel/ptrace.c
··· 28 28 #include <linux/ptrace.h> 29 29 #include <linux/regset.h> 30 30 #include <linux/user.h> 31 + #include <linux/elf.h> 31 32 32 33 #include <asm/user.h> 33 34
+7 -1
arch/hexagon/kernel/smp.c
··· 1 1 /* 2 2 * SMP support for Hexagon 3 3 * 4 - * Copyright (c) 2010-2011, Code Aurora Forum. All rights reserved. 4 + * Copyright (c) 2010-2012, Code Aurora Forum. All rights reserved. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License version 2 and ··· 28 28 #include <linux/sched.h> 29 29 #include <linux/smp.h> 30 30 #include <linux/spinlock.h> 31 + #include <linux/cpu.h> 31 32 32 33 #include <asm/time.h> /* timer_interrupt */ 33 34 #include <asm/hexagon_vm.h> ··· 178 177 179 178 printk(KERN_INFO "%s cpu %d\n", __func__, current_thread_info()->cpu); 180 179 180 + notify_cpu_starting(cpu); 181 + 182 + ipi_call_lock(); 181 183 set_cpu_online(cpu, true); 184 + ipi_call_unlock(); 185 + 182 186 local_irq_enable(); 183 187 184 188 cpu_idle();
+1
arch/hexagon/kernel/time.c
··· 28 28 #include <linux/of.h> 29 29 #include <linux/of_address.h> 30 30 #include <linux/of_irq.h> 31 + #include <linux/module.h> 31 32 32 33 #include <asm/timer-regs.h> 33 34 #include <asm/hexagon_vm.h>
+1
arch/hexagon/kernel/vdso.c
··· 21 21 #include <linux/err.h> 22 22 #include <linux/mm.h> 23 23 #include <linux/vmalloc.h> 24 + #include <linux/binfmts.h> 24 25 25 26 #include <asm/vdso.h> 26 27
+43
arch/powerpc/boot/dts/fsl/pq3-mpic-message-B.dtsi
··· 1 + /* 2 + * PQ3 MPIC Message (Group B) device tree stub [ controller @ offset 0x42400 ] 3 + * 4 + * Copyright 2012 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + message@42400 { 36 + compatible = "fsl,mpic-v3.1-msgr"; 37 + reg = <0x42400 0x200>; 38 + interrupts = < 39 + 0xb4 2 0 0 40 + 0xb5 2 0 0 41 + 0xb6 2 0 0 42 + 0xb7 2 0 0>; 43 + };
+10
arch/powerpc/boot/dts/fsl/pq3-mpic.dtsi
··· 53 53 3 0 3 0>; 54 54 }; 55 55 56 + message@41400 { 57 + compatible = "fsl,mpic-v3.1-msgr"; 58 + reg = <0x41400 0x200>; 59 + interrupts = < 60 + 0xb0 2 0 0 61 + 0xb1 2 0 0 62 + 0xb2 2 0 0 63 + 0xb3 2 0 0>; 64 + }; 65 + 56 66 msi@41600 { 57 67 compatible = "fsl,mpic-msi"; 58 68 reg = <0x41600 0x80>;
-18
arch/powerpc/include/asm/mpic.h
··· 275 275 unsigned int isu_mask; 276 276 /* Number of sources */ 277 277 unsigned int num_sources; 278 - /* default senses array */ 279 - unsigned char *senses; 280 - unsigned int senses_count; 281 278 282 279 /* vector numbers used for internal sources (ipi/timers) */ 283 280 unsigned int ipi_vecs[4]; ··· 411 414 */ 412 415 extern void mpic_assign_isu(struct mpic *mpic, unsigned int isu_num, 413 416 phys_addr_t phys_addr); 414 - 415 - /* Set default sense codes 416 - * 417 - * @mpic: controller 418 - * @senses: array of sense codes 419 - * @count: size of above array 420 - * 421 - * Optionally provide an array (indexed on hardware interrupt numbers 422 - * for this MPIC) of default sense codes for the chip. Those are linux 423 - * sense codes IRQ_TYPE_* 424 - * 425 - * The driver gets ownership of the pointer, don't dispose of it or 426 - * anything like that. __init only. 427 - */ 428 - extern void mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count); 429 417 430 418 431 419 /* Initialize the controller. After this has been called, none of the above
+1
arch/powerpc/include/asm/mpic_msgr.h
··· 13 13 14 14 #include <linux/types.h> 15 15 #include <linux/spinlock.h> 16 + #include <asm/smp.h> 16 17 17 18 struct mpic_msgr { 18 19 u32 __iomem *base;
-5
arch/powerpc/include/asm/reg_booke.h
··· 15 15 #ifndef __ASM_POWERPC_REG_BOOKE_H__ 16 16 #define __ASM_POWERPC_REG_BOOKE_H__ 17 17 18 - #ifdef CONFIG_BOOKE_WDT 19 - extern u32 booke_wdt_enabled; 20 - extern u32 booke_wdt_period; 21 - #endif /* CONFIG_BOOKE_WDT */ 22 - 23 18 /* Machine State Register (MSR) Fields */ 24 19 #define MSR_GS (1<<28) /* Guest state */ 25 20 #define MSR_UCLE (1<<26) /* User-mode cache lock enable */
+3
arch/powerpc/kernel/setup_32.c
··· 150 150 } 151 151 152 152 #ifdef CONFIG_BOOKE_WDT 153 + extern u32 booke_wdt_enabled; 154 + extern u32 booke_wdt_period; 155 + 153 156 /* Checks wdt=x and wdt_period=xx command-line option */ 154 157 notrace int __init early_parse_wdt(char *p) 155 158 {
+6
arch/powerpc/platforms/85xx/common.c
··· 21 21 { .compatible = "fsl,qe", }, 22 22 { .compatible = "fsl,cpm2", }, 23 23 { .compatible = "fsl,srio", }, 24 + /* So that the DMA channel nodes can be probed individually: */ 25 + { .compatible = "fsl,eloplus-dma", }, 26 + /* For the PMC driver */ 27 + { .compatible = "fsl,mpc8548-guts", }, 28 + /* Probably unnecessary? */ 29 + { .compatible = "gpio-leds", }, 24 30 {}, 25 31 }; 26 32
+1 -10
arch/powerpc/platforms/85xx/mpc85xx_mds.c
··· 399 399 machine_arch_initcall(mpc8568_mds, board_fixups); 400 400 machine_arch_initcall(mpc8569_mds, board_fixups); 401 401 402 - static struct of_device_id mpc85xx_ids[] = { 403 - { .compatible = "fsl,mpc8548-guts", }, 404 - { .compatible = "gpio-leds", }, 405 - {}, 406 - }; 407 - 408 402 static int __init mpc85xx_publish_devices(void) 409 403 { 410 404 if (machine_is(mpc8568_mds)) ··· 406 412 if (machine_is(mpc8569_mds)) 407 413 simple_gpiochip_init("fsl,mpc8569mds-bcsr-gpio"); 408 414 409 - mpc85xx_common_publish_devices(); 410 - of_platform_bus_probe(NULL, mpc85xx_ids, NULL); 411 - 412 - return 0; 415 + return mpc85xx_common_publish_devices(); 413 416 } 414 417 415 418 machine_device_initcall(mpc8568_mds, mpc85xx_publish_devices);
+1 -12
arch/powerpc/platforms/85xx/p1022_ds.c
··· 460 460 pr_info("Freescale P1022 DS reference board\n"); 461 461 } 462 462 463 - static struct of_device_id __initdata p1022_ds_ids[] = { 464 - /* So that the DMA channel nodes can be probed individually: */ 465 - { .compatible = "fsl,eloplus-dma", }, 466 - {}, 467 - }; 468 - 469 - static int __init p1022_ds_publish_devices(void) 470 - { 471 - mpc85xx_common_publish_devices(); 472 - return of_platform_bus_probe(NULL, p1022_ds_ids, NULL); 473 - } 474 - machine_device_initcall(p1022_ds, p1022_ds_publish_devices); 463 + machine_device_initcall(p1022_ds, mpc85xx_common_publish_devices); 475 464 476 465 machine_arch_initcall(p1022_ds, swiotlb_setup_bus_notifier); 477 466
+9
arch/powerpc/platforms/powermac/low_i2c.c
··· 366 366 unsigned long flags; 367 367 368 368 spin_lock_irqsave(&host->lock, flags); 369 + 370 + /* 371 + * If the timer is pending, that means we raced with the 372 + * irq, in which case we just return 373 + */ 374 + if (timer_pending(&host->timeout_timer)) 375 + goto skip; 376 + 369 377 kw_i2c_handle_interrupt(host, kw_read_reg(reg_isr)); 370 378 if (host->state != state_idle) { 371 379 host->timeout_timer.expires = jiffies + KW_POLL_TIMEOUT; 372 380 add_timer(&host->timeout_timer); 373 381 } 382 + skip: 374 383 spin_unlock_irqrestore(&host->lock, flags); 375 384 } 376 385
+1 -1
arch/powerpc/platforms/pseries/eeh.c
··· 1076 1076 pr_debug("EEH: Adding device %s\n", pci_name(dev)); 1077 1077 1078 1078 dn = pci_device_to_OF_node(dev); 1079 - edev = pci_dev_to_eeh_dev(dev); 1079 + edev = of_node_to_eeh_dev(dn); 1080 1080 if (edev->pdev == dev) { 1081 1081 pr_debug("EEH: Already referenced !\n"); 1082 1082 return;
+35 -21
arch/powerpc/sysdev/mpic.c
··· 604 604 } 605 605 606 606 /* Determine if the linux irq is an IPI */ 607 - static unsigned int mpic_is_ipi(struct mpic *mpic, unsigned int irq) 607 + static unsigned int mpic_is_ipi(struct mpic *mpic, unsigned int src) 608 608 { 609 - unsigned int src = virq_to_hw(irq); 610 - 611 609 return (src >= mpic->ipi_vecs[0] && src <= mpic->ipi_vecs[3]); 612 610 } 613 611 614 612 /* Determine if the linux irq is a timer */ 615 - static unsigned int mpic_is_tm(struct mpic *mpic, unsigned int irq) 613 + static unsigned int mpic_is_tm(struct mpic *mpic, unsigned int src) 616 614 { 617 - unsigned int src = virq_to_hw(irq); 618 - 619 615 return (src >= mpic->timer_vecs[0] && src <= mpic->timer_vecs[7]); 620 616 } 621 617 ··· 872 876 if (src >= mpic->num_sources) 873 877 return -EINVAL; 874 878 875 - if (flow_type == IRQ_TYPE_NONE) 876 - if (mpic->senses && src < mpic->senses_count) 877 - flow_type = mpic->senses[src]; 878 - if (flow_type == IRQ_TYPE_NONE) 879 - flow_type = IRQ_TYPE_LEVEL_LOW; 879 + vold = mpic_irq_read(src, MPIC_INFO(IRQ_VECTOR_PRI)); 880 880 881 + /* We don't support "none" type */ 882 + if (flow_type == IRQ_TYPE_NONE) 883 + flow_type = IRQ_TYPE_DEFAULT; 884 + 885 + /* Default: read HW settings */ 886 + if (flow_type == IRQ_TYPE_DEFAULT) { 887 + switch(vold & (MPIC_INFO(VECPRI_POLARITY_MASK) | 888 + MPIC_INFO(VECPRI_SENSE_MASK))) { 889 + case MPIC_INFO(VECPRI_SENSE_EDGE) | 890 + MPIC_INFO(VECPRI_POLARITY_POSITIVE): 891 + flow_type = IRQ_TYPE_EDGE_RISING; 892 + break; 893 + case MPIC_INFO(VECPRI_SENSE_EDGE) | 894 + MPIC_INFO(VECPRI_POLARITY_NEGATIVE): 895 + flow_type = IRQ_TYPE_EDGE_FALLING; 896 + break; 897 + case MPIC_INFO(VECPRI_SENSE_LEVEL) | 898 + MPIC_INFO(VECPRI_POLARITY_POSITIVE): 899 + flow_type = IRQ_TYPE_LEVEL_HIGH; 900 + break; 901 + case MPIC_INFO(VECPRI_SENSE_LEVEL) | 902 + MPIC_INFO(VECPRI_POLARITY_NEGATIVE): 903 + flow_type = IRQ_TYPE_LEVEL_LOW; 904 + break; 905 + } 906 + } 907 + 908 + /* Apply to irq desc */ 881 909 irqd_set_trigger_type(d, flow_type); 882 910 911 + /* Apply to HW */ 883 912 if (mpic_is_ht_interrupt(mpic, src)) 884 913 vecpri = MPIC_VECPRI_POLARITY_POSITIVE | 885 914 MPIC_VECPRI_SENSE_EDGE; 886 915 else 887 916 vecpri = mpic_type_to_vecpri(mpic, flow_type); 888 917 889 - vold = mpic_irq_read(src, MPIC_INFO(IRQ_VECTOR_PRI)); 890 918 vnew = vold & ~(MPIC_INFO(VECPRI_POLARITY_MASK) | 891 919 MPIC_INFO(VECPRI_SENSE_MASK)); 892 920 vnew |= vecpri; ··· 1046 1026 irq_set_chip_and_handler(virq, chip, handle_fasteoi_irq); 1047 1027 1048 1028 /* Set default irq type */ 1049 - irq_set_irq_type(virq, IRQ_TYPE_NONE); 1029 + irq_set_irq_type(virq, IRQ_TYPE_DEFAULT); 1050 1030 1051 1031 /* If the MPIC was reset, then all vectors have already been 1052 1032 * initialized. Otherwise, a per source lazy initialization ··· 1437 1417 mpic->num_sources = isu_first + mpic->isu_size; 1438 1418 } 1439 1419 1440 - void __init mpic_set_default_senses(struct mpic *mpic, u8 *senses, int count) 1441 - { 1442 - mpic->senses = senses; 1443 - mpic->senses_count = count; 1444 - } 1445 - 1446 1420 void __init mpic_init(struct mpic *mpic) 1447 1421 { 1448 1422 int i, cpu; ··· 1569 1555 return; 1570 1556 1571 1557 raw_spin_lock_irqsave(&mpic_lock, flags); 1572 - if (mpic_is_ipi(mpic, irq)) { 1558 + if (mpic_is_ipi(mpic, src)) { 1573 1559 reg = mpic_ipi_read(src - mpic->ipi_vecs[0]) & 1574 1560 ~MPIC_VECPRI_PRIORITY_MASK; 1575 1561 mpic_ipi_write(src - mpic->ipi_vecs[0], 1576 1562 reg | (pri << MPIC_VECPRI_PRIORITY_SHIFT)); 1577 - } else if (mpic_is_tm(mpic, irq)) { 1563 + } else if (mpic_is_tm(mpic, src)) { 1578 1564 reg = mpic_tm_read(src - mpic->timer_vecs[0]) & 1579 1565 ~MPIC_VECPRI_PRIORITY_MASK; 1580 1566 mpic_tm_write(src - mpic->timer_vecs[0],
+6 -6
arch/powerpc/sysdev/mpic_msgr.c
··· 27 27 28 28 static struct mpic_msgr **mpic_msgrs; 29 29 static unsigned int mpic_msgr_count; 30 + static DEFINE_RAW_SPINLOCK(msgrs_lock); 30 31 31 32 static inline void _mpic_msgr_mer_write(struct mpic_msgr *msgr, u32 value) 32 33 { ··· 57 56 if (reg_num >= mpic_msgr_count) 58 57 return ERR_PTR(-ENODEV); 59 58 60 - raw_spin_lock_irqsave(&msgr->lock, flags); 61 - if (mpic_msgrs[reg_num]->in_use == MSGR_FREE) { 62 - msgr = mpic_msgrs[reg_num]; 59 + raw_spin_lock_irqsave(&msgrs_lock, flags); 60 + msgr = mpic_msgrs[reg_num]; 61 + if (msgr->in_use == MSGR_FREE) 63 62 msgr->in_use = MSGR_INUSE; 64 - } 65 - raw_spin_unlock_irqrestore(&msgr->lock, flags); 63 + raw_spin_unlock_irqrestore(&msgrs_lock, flags); 66 64 67 65 return msgr; 68 66 } ··· 228 228 229 229 reg_number = block_number * MPIC_MSGR_REGISTERS_PER_BLOCK + i; 230 230 msgr->base = msgr_block_addr + i * MPIC_MSGR_STRIDE; 231 - msgr->mer = msgr->base + MPIC_MSGR_MER_OFFSET; 231 + msgr->mer = (u32 *)((u8 *)msgr->base + MPIC_MSGR_MER_OFFSET); 232 232 msgr->in_use = MSGR_FREE; 233 233 msgr->num = i; 234 234 raw_spin_lock_init(&msgr->lock);
+1
arch/powerpc/sysdev/scom.c
··· 22 22 #include <linux/debugfs.h> 23 23 #include <linux/slab.h> 24 24 #include <linux/export.h> 25 + #include <asm/debug.h> 25 26 #include <asm/prom.h> 26 27 #include <asm/scom.h> 27 28
+1 -1
arch/sh/include/asm/atomic.h
··· 11 11 #include <linux/types.h> 12 12 #include <asm/cmpxchg.h> 13 13 14 - #define ATOMIC_INIT(i) ( (atomic_t) { (i) } ) 14 + #define ATOMIC_INIT(i) { (i) } 15 15 16 16 #define atomic_read(v) (*(volatile int *)&(v)->counter) 17 17 #define atomic_set(v,i) ((v)->counter = (i))
+1 -1
arch/sh/mm/fault_32.c
··· 86 86 pte_t *pte_k; 87 87 88 88 /* Make sure we are in vmalloc/module/P3 area: */ 89 - if (!(address >= VMALLOC_START && address < P3_ADDR_MAX)) 89 + if (!(address >= P3SEG && address < P3_ADDR_MAX)) 90 90 return -1; 91 91 92 92 /*
+2 -2
arch/tile/include/asm/pci.h
··· 47 47 */ 48 48 #define PCI_DMA_BUS_IS_PHYS 1 49 49 50 - int __devinit tile_pci_init(void); 51 - int __devinit pcibios_init(void); 50 + int __init tile_pci_init(void); 51 + int __init pcibios_init(void); 52 52 53 53 static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr) {} 54 54
+2 -2
arch/tile/kernel/pci.c
··· 141 141 * 142 142 * Returns the number of controllers discovered. 143 143 */ 144 - int __devinit tile_pci_init(void) 144 + int __init tile_pci_init(void) 145 145 { 146 146 int i; 147 147 ··· 287 287 * The controllers have been set up by the time we get here, by a call to 288 288 * tile_pci_init. 289 289 */ 290 - int __devinit pcibios_init(void) 290 + int __init pcibios_init(void) 291 291 { 292 292 int i; 293 293
+11 -3
arch/x86/boot/compressed/head_32.S
··· 33 33 __HEAD 34 34 ENTRY(startup_32) 35 35 #ifdef CONFIG_EFI_STUB 36 + jmp preferred_addr 37 + 38 + .balign 0x10 36 39 /* 37 40 * We don't need the return address, so set up the stack so 38 41 * efi_main() can find its arguments. ··· 44 41 45 42 call efi_main 46 43 cmpl $0, %eax 47 - je preferred_addr 48 44 movl %eax, %esi 49 - call 1f 45 + jne 2f 50 46 1: 47 + /* EFI init failed, so hang. */ 48 + hlt 49 + jmp 1b 50 + 2: 51 + call 3f 52 + 3: 51 53 popl %eax 52 - subl $1b, %eax 54 + subl $3b, %eax 53 55 subl BP_pref_address(%esi), %eax 54 56 add BP_code32_start(%esi), %eax 55 57 leal preferred_addr(%eax), %eax
+16 -6
arch/x86/boot/compressed/head_64.S
··· 200 200 * entire text+data+bss and hopefully all of memory. 201 201 */ 202 202 #ifdef CONFIG_EFI_STUB 203 - pushq %rsi 203 + /* 204 + * The entry point for the PE/COFF executable is 0x210, so only 205 + * legacy boot loaders will execute this jmp. 206 + */ 207 + jmp preferred_addr 208 + 209 + .org 0x210 204 210 mov %rcx, %rdi 205 211 mov %rdx, %rsi 206 212 call efi_main 207 - popq %rsi 208 - cmpq $0,%rax 209 - je preferred_addr 210 213 movq %rax,%rsi 211 - call 1f 214 + cmpq $0,%rax 215 + jne 2f 212 216 1: 217 + /* EFI init failed, so hang. */ 218 + hlt 219 + jmp 1b 220 + 2: 221 + call 3f 222 + 3: 213 223 popq %rax 214 - subq $1b, %rax 224 + subq $3b, %rax 215 225 subq BP_pref_address(%rsi), %rax 216 226 add BP_code32_start(%esi), %eax 217 227 leaq preferred_addr(%rax), %rax
+11 -4
arch/x86/boot/tools/build.c
··· 205 205 put_unaligned_le32(file_sz, &buf[pe_header + 0x50]); 206 206 207 207 #ifdef CONFIG_X86_32 208 - /* Address of entry point */ 209 - put_unaligned_le32(i, &buf[pe_header + 0x28]); 208 + /* 209 + * Address of entry point. 210 + * 211 + * The EFI stub entry point is +16 bytes from the start of 212 + * the .text section. 213 + */ 214 + put_unaligned_le32(i + 16, &buf[pe_header + 0x28]); 210 215 211 216 /* .text size */ 212 217 put_unaligned_le32(file_sz, &buf[pe_header + 0xb0]); ··· 222 217 /* 223 218 * Address of entry point. startup_32 is at the beginning and 224 219 * the 64-bit entry point (startup_64) is always 512 bytes 225 - * after. 220 + * after. The EFI stub entry point is 16 bytes after that, as 221 + * the first instruction allows legacy loaders to jump over 222 + * the EFI stub initialisation 226 223 */ 227 - put_unaligned_le32(i + 512, &buf[pe_header + 0x28]); 224 + put_unaligned_le32(i + 528, &buf[pe_header + 0x28]); 228 225 229 226 /* .text size */ 230 227 put_unaligned_le32(file_sz, &buf[pe_header + 0xc0]);
+3 -3
arch/x86/include/asm/posix_types.h
··· 7 7 #else 8 8 # ifdef __i386__ 9 9 # include "posix_types_32.h" 10 - # elif defined(__LP64__) 11 - # include "posix_types_64.h" 12 - # else 10 + # elif defined(__ILP32__) 13 11 # include "posix_types_x32.h" 12 + # else 13 + # include "posix_types_64.h" 14 14 # endif 15 15 #endif
+1 -1
arch/x86/include/asm/sigcontext.h
··· 257 257 __u64 oldmask; 258 258 __u64 cr2; 259 259 struct _fpstate __user *fpstate; /* zero when no FPU context */ 260 - #ifndef __LP64__ 260 + #ifdef __ILP32__ 261 261 __u32 __fpstate_pad; 262 262 #endif 263 263 __u64 reserved1[8];
+7 -1
arch/x86/include/asm/siginfo.h
··· 2 2 #define _ASM_X86_SIGINFO_H 3 3 4 4 #ifdef __x86_64__ 5 - # define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int)) 5 + # ifdef __ILP32__ /* x32 */ 6 + typedef long long __kernel_si_clock_t __attribute__((aligned(4))); 7 + # define __ARCH_SI_CLOCK_T __kernel_si_clock_t 8 + # define __ARCH_SI_ATTRIBUTES __attribute__((aligned(8))) 9 + # else /* x86-64 */ 10 + # define __ARCH_SI_PREAMBLE_SIZE (4 * sizeof(int)) 11 + # endif 6 12 #endif 7 13 8 14 #include <asm-generic/siginfo.h>
+3 -3
arch/x86/include/asm/unistd.h
··· 63 63 #else 64 64 # ifdef __i386__ 65 65 # include <asm/unistd_32.h> 66 - # elif defined(__LP64__) 67 - # include <asm/unistd_64.h> 68 - # else 66 + # elif defined(__ILP32__) 69 67 # include <asm/unistd_x32.h> 68 + # else 69 + # include <asm/unistd_64.h> 70 70 # endif 71 71 #endif 72 72
-1
arch/x86/include/asm/x86_init.h
··· 195 195 196 196 extern void x86_init_noop(void); 197 197 extern void x86_init_uint_noop(unsigned int unused); 198 - extern void x86_default_fixup_cpu_id(struct cpuinfo_x86 *c, int node); 199 198 200 199 #endif
+4
arch/x86/kernel/acpi/sleep.c
··· 24 24 static char temp_stack[4096]; 25 25 #endif 26 26 27 + asmlinkage void acpi_enter_s3(void) 28 + { 29 + acpi_enter_sleep_state(3, wake_sleep_flags); 30 + } 27 31 /** 28 32 * acpi_suspend_lowlevel - save kernel state 29 33 *
+4
arch/x86/kernel/acpi/sleep.h
··· 3 3 */ 4 4 5 5 #include <asm/trampoline.h> 6 + #include <linux/linkage.h> 6 7 7 8 extern unsigned long saved_video_mode; 8 9 extern long saved_magic; 9 10 10 11 extern int wakeup_pmode_return; 12 + 13 + extern u8 wake_sleep_flags; 14 + extern asmlinkage void acpi_enter_s3(void); 11 15 12 16 extern unsigned long acpi_copy_wakeup_routine(unsigned long); 13 17 extern void wakeup_long64(void);
+1 -3
arch/x86/kernel/acpi/wakeup_32.S
··· 74 74 ENTRY(do_suspend_lowlevel) 75 75 call save_processor_state 76 76 call save_registers 77 - pushl $3 78 - call acpi_enter_sleep_state 79 - addl $4, %esp 77 + call acpi_enter_s3 80 78 81 79 # In case of S3 failure, we'll emerge here. Jump 82 80 # to ret_point to recover
+1 -3
arch/x86/kernel/acpi/wakeup_64.S
··· 71 71 movq %rsi, saved_rsi 72 72 73 73 addq $8, %rsp 74 - movl $3, %edi 75 - xorl %eax, %eax 76 - call acpi_enter_sleep_state 74 + call acpi_enter_s3 77 75 /* in case something went wrong, restore the machine status and go on */ 78 76 jmp resume_point 79 77
+20 -14
arch/x86/kernel/apic/apic.c
··· 1637 1637 mp_lapic_addr = APIC_DEFAULT_PHYS_BASE; 1638 1638 1639 1639 /* The BIOS may have set up the APIC at some other address */ 1640 - rdmsr(MSR_IA32_APICBASE, l, h); 1641 - if (l & MSR_IA32_APICBASE_ENABLE) 1642 - mp_lapic_addr = l & MSR_IA32_APICBASE_BASE; 1640 + if (boot_cpu_data.x86 >= 6) { 1641 + rdmsr(MSR_IA32_APICBASE, l, h); 1642 + if (l & MSR_IA32_APICBASE_ENABLE) 1643 + mp_lapic_addr = l & MSR_IA32_APICBASE_BASE; 1644 + } 1643 1645 1644 1646 pr_info("Found and enabled local APIC!\n"); 1645 1647 return 0; ··· 1659 1657 * MSR. This can only be done in software for Intel P6 or later 1660 1658 * and AMD K7 (Model > 1) or later. 1661 1659 */ 1662 - rdmsr(MSR_IA32_APICBASE, l, h); 1663 - if (!(l & MSR_IA32_APICBASE_ENABLE)) { 1664 - pr_info("Local APIC disabled by BIOS -- reenabling.\n"); 1665 - l &= ~MSR_IA32_APICBASE_BASE; 1666 - l |= MSR_IA32_APICBASE_ENABLE | addr; 1667 - wrmsr(MSR_IA32_APICBASE, l, h); 1668 - enabled_via_apicbase = 1; 1660 + if (boot_cpu_data.x86 >= 6) { 1661 + rdmsr(MSR_IA32_APICBASE, l, h); 1662 + if (!(l & MSR_IA32_APICBASE_ENABLE)) { 1663 + pr_info("Local APIC disabled by BIOS -- reenabling.\n"); 1664 + l &= ~MSR_IA32_APICBASE_BASE; 1665 + l |= MSR_IA32_APICBASE_ENABLE | addr; 1666 + wrmsr(MSR_IA32_APICBASE, l, h); 1667 + enabled_via_apicbase = 1; 1668 + } 1669 1669 } 1670 1670 return apic_verify(); 1671 1671 } ··· 2213 2209 * FIXME! This will be wrong if we ever support suspend on 2214 2210 * SMP! We'll need to do this as part of the CPU restore! 2215 2211 */ 2216 - rdmsr(MSR_IA32_APICBASE, l, h); 2217 - l &= ~MSR_IA32_APICBASE_BASE; 2218 - l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr; 2219 - wrmsr(MSR_IA32_APICBASE, l, h); 2212 + if (boot_cpu_data.x86 >= 6) { 2213 + rdmsr(MSR_IA32_APICBASE, l, h); 2214 + l &= ~MSR_IA32_APICBASE_BASE; 2215 + l |= MSR_IA32_APICBASE_ENABLE | mp_lapic_addr; 2216 + wrmsr(MSR_IA32_APICBASE, l, h); 2217 + } 2220 2218 } 2221 2219 2222 2220 maxlvt = lapic_get_maxlvt();
+5 -2
arch/x86/kernel/apic/apic_numachip.c
··· 207 207 208 208 static void fixup_cpu_id(struct cpuinfo_x86 *c, int node) 209 209 { 210 - c->phys_proc_id = node; 211 - per_cpu(cpu_llc_id, smp_processor_id()) = node; 210 + 211 + if (c->phys_proc_id != node) { 212 + c->phys_proc_id = node; 213 + per_cpu(cpu_llc_id, smp_processor_id()) = node; 214 + } 212 215 } 213 216 214 217 static int __init numachip_system_init(void)
+6
arch/x86/kernel/apic/x2apic_phys.c
··· 24 24 { 25 25 if (x2apic_phys) 26 26 return x2apic_enabled(); 27 + else if ((acpi_gbl_FADT.header.revision >= FADT2_REVISION_ID) && 28 + (acpi_gbl_FADT.flags & ACPI_FADT_APIC_PHYSICAL) && 29 + x2apic_enabled()) { 30 + printk(KERN_DEBUG "System requires x2apic physical mode\n"); 31 + return 1; 32 + } 27 33 else 28 34 return 0; 29 35 }
+6 -5
arch/x86/kernel/cpu/amd.c
··· 26 26 * contact AMD for precise details and a CPU swap. 27 27 * 28 28 * See http://www.multimania.com/poulot/k6bug.html 29 - * http://www.amd.com/K6/k6docs/revgd.html 29 + * and section 2.6.2 of "AMD-K6 Processor Revision Guide - Model 6" 30 + * (Publication # 21266 Issue Date: August 1998) 30 31 * 31 32 * The following test is erm.. interesting. AMD neglected to up 32 33 * the chip setting when fixing the bug but they also tweaked some ··· 95 94 "system stability may be impaired when more than 32 MB are used.\n"); 96 95 else 97 96 printk(KERN_CONT "probably OK (after B9730xxxx).\n"); 98 - printk(KERN_INFO "Please see http://membres.lycos.fr/poulot/k6bug.html\n"); 99 97 } 100 98 101 99 /* K6 with old style WHCR */ ··· 353 353 node = per_cpu(cpu_llc_id, cpu); 354 354 355 355 /* 356 - * If core numbers are inconsistent, it's likely a multi-fabric platform, 357 - * so invoke platform-specific handler 356 + * On multi-fabric platform (e.g. Numascale NumaChip) a 357 + * platform-specific handler needs to be called to fixup some 358 + * IDs of the CPU. 358 359 */ 359 - if (c->phys_proc_id != node) 360 + if (x86_cpuinit.fixup_cpu_id) 360 361 x86_cpuinit.fixup_cpu_id(c, node); 361 362 362 363 if (!node_online(node)) {
-9
arch/x86/kernel/cpu/common.c
··· 1163 1163 #endif /* ! CONFIG_KGDB */ 1164 1164 1165 1165 /* 1166 - * Prints an error where the NUMA and configured core-number mismatch and the 1167 - * platform didn't override this to fix it up 1168 - */ 1169 - void __cpuinit x86_default_fixup_cpu_id(struct cpuinfo_x86 *c, int node) 1170 - { 1171 - pr_err("NUMA core number %d differs from configured core number %d\n", node, c->phys_proc_id); 1172 - } 1173 - 1174 - /* 1175 1166 * cpu_init() initializes state that is per-CPU. Some data is already 1176 1167 * initialized (naturally) in the bootstrap process, such as the GDT 1177 1168 * and IDT. We reload them nevertheless, this function acts as a
+4 -4
arch/x86/kernel/cpu/intel_cacheinfo.c
··· 433 433 /* check if @slot is already used or the index is already disabled */ 434 434 ret = amd_get_l3_disable_slot(nb, slot); 435 435 if (ret >= 0) 436 - return -EINVAL; 436 + return -EEXIST; 437 437 438 438 if (index > nb->l3_cache.indices) 439 439 return -EINVAL; 440 440 441 441 /* check whether the other slot has disabled the same index already */ 442 442 if (index == amd_get_l3_disable_slot(nb, !slot)) 443 - return -EINVAL; 443 + return -EEXIST; 444 444 445 445 amd_l3_disable_index(nb, cpu, slot, index); 446 446 ··· 468 468 err = amd_set_l3_disable_slot(this_leaf->base.nb, cpu, slot, val); 469 469 if (err) { 470 470 if (err == -EEXIST) 471 - printk(KERN_WARNING "L3 disable slot %d in use!\n", 472 - slot); 471 + pr_warning("L3 slot %d in use/index already disabled!\n", 472 + slot); 473 473 return err; 474 474 } 475 475 return count;
+1
arch/x86/kernel/i387.c
··· 235 235 if (tsk_used_math(tsk)) { 236 236 if (HAVE_HWFP && tsk == current) 237 237 unlazy_fpu(tsk); 238 + tsk->thread.fpu.last_cpu = ~0; 238 239 return 0; 239 240 } 240 241
+7 -5
arch/x86/kernel/microcode_amd.c
··· 82 82 { 83 83 struct cpuinfo_x86 *c = &cpu_data(cpu); 84 84 85 - if (c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) { 86 - pr_warning("CPU%d: family %d not supported\n", cpu, c->x86); 87 - return -1; 88 - } 89 - 90 85 csig->rev = c->microcode; 91 86 pr_info("CPU%d: patch_level=0x%08x\n", cpu, csig->rev); 92 87 ··· 375 380 376 381 struct microcode_ops * __init init_amd_microcode(void) 377 382 { 383 + struct cpuinfo_x86 *c = &cpu_data(0); 384 + 385 + if (c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) { 386 + pr_warning("AMD CPU family 0x%x not supported\n", c->x86); 387 + return NULL; 388 + } 389 + 378 390 patch = (void *)get_zeroed_page(GFP_KERNEL); 379 391 if (!patch) 380 392 return NULL;
+4 -6
arch/x86/kernel/microcode_core.c
··· 419 419 if (err) 420 420 return err; 421 421 422 - if (microcode_init_cpu(cpu) == UCODE_ERROR) { 423 - sysfs_remove_group(&dev->kobj, &mc_attr_group); 422 + if (microcode_init_cpu(cpu) == UCODE_ERROR) 424 423 return -EINVAL; 425 - } 426 424 427 425 return err; 428 426 } ··· 526 528 microcode_ops = init_intel_microcode(); 527 529 else if (c->x86_vendor == X86_VENDOR_AMD) 528 530 microcode_ops = init_amd_microcode(); 529 - 530 - if (!microcode_ops) { 531 + else 531 532 pr_err("no support for this CPU vendor\n"); 533 + 534 + if (!microcode_ops) 532 535 return -ENODEV; 533 - } 534 536 535 537 microcode_pdev = platform_device_register_simple("microcode", -1, 536 538 NULL, 0);
-1
arch/x86/kernel/x86_init.c
··· 93 93 struct x86_cpuinit_ops x86_cpuinit __cpuinitdata = { 94 94 .early_percpu_clock_init = x86_init_noop, 95 95 .setup_percpu_clockev = setup_secondary_APIC_clock, 96 - .fixup_cpu_id = x86_default_fixup_cpu_id, 97 96 }; 98 97 99 98 static void default_nmi_init(void) { };
+2 -2
arch/x86/platform/mrst/mrst.c
··· 805 805 } else 806 806 i2c_register_board_info(i2c_bus[i], i2c_devs[i], 1); 807 807 } 808 - intel_scu_notifier_post(SCU_AVAILABLE, 0L); 808 + intel_scu_notifier_post(SCU_AVAILABLE, NULL); 809 809 } 810 810 EXPORT_SYMBOL_GPL(intel_scu_devices_create); 811 811 ··· 814 814 { 815 815 int i; 816 816 817 - intel_scu_notifier_post(SCU_DOWN, 0L); 817 + intel_scu_notifier_post(SCU_DOWN, NULL); 818 818 819 819 for (i = 0; i < ipc_next_dev; i++) 820 820 platform_device_del(ipc_devs[i]);
+2 -2
arch/x86/xen/enlighten.c
··· 261 261 262 262 static bool __init xen_check_mwait(void) 263 263 { 264 - #ifdef CONFIG_ACPI 264 + #if defined(CONFIG_ACPI) && !defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR) && \ 265 + !defined(CONFIG_ACPI_PROCESSOR_AGGREGATOR_MODULE) 265 266 struct xen_platform_op op = { 266 267 .cmd = XENPF_set_processor_pminfo, 267 268 .u.set_pminfo.id = -1, ··· 350 349 /* Xen will set CR4.OSXSAVE if supported and not disabled by force */ 351 350 if ((cx & xsave_mask) != xsave_mask) 352 351 cpuid_leaf1_ecx_mask &= ~xsave_mask; /* disable XSAVE & OSXSAVE */ 353 - 354 352 if (xen_check_mwait()) 355 353 cpuid_leaf1_ecx_set_mask = (1 << (X86_FEATURE_MWAIT % 32)); 356 354 }
+15
arch/x86/xen/smp.c
··· 178 178 static void __init xen_filter_cpu_maps(void) 179 179 { 180 180 int i, rc; 181 + unsigned int subtract = 0; 181 182 182 183 if (!xen_initial_domain()) 183 184 return; ··· 193 192 } else { 194 193 set_cpu_possible(i, false); 195 194 set_cpu_present(i, false); 195 + subtract++; 196 196 } 197 197 } 198 + #ifdef CONFIG_HOTPLUG_CPU 199 + /* This is akin to using 'nr_cpus' on the Linux command line. 200 + * Which is OK as when we use 'dom0_max_vcpus=X' we can only 201 + * have up to X, while nr_cpu_ids is greater than X. This 202 + * normally is not a problem, except when CPU hotplugging 203 + * is involved and then there might be more than X CPUs 204 + * in the guest - which will not work as there is no 205 + * hypercall to expand the max number of VCPUs an already 206 + * running guest has. So cap it up to X. */ 207 + if (subtract) 208 + nr_cpu_ids = nr_cpu_ids - subtract; 209 + #endif 210 + 198 211 } 199 212 200 213 static void __init xen_smp_prepare_boot_cpu(void)
+1 -1
arch/x86/xen/xen-asm.S
··· 96 96 97 97 /* check for unmasked and pending */ 98 98 cmpw $0x0001, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_pending 99 - jz 1f 99 + jnz 1f 100 100 2: call check_events 101 101 1: 102 102 ENDPATCH(xen_restore_fl_direct)
-3
arch/xtensa/include/asm/hardirq.h
··· 11 11 #ifndef _XTENSA_HARDIRQ_H 12 12 #define _XTENSA_HARDIRQ_H 13 13 14 - void ack_bad_irq(unsigned int irq); 15 - #define ack_bad_irq ack_bad_irq 16 - 17 14 #include <asm-generic/hardirq.h> 18 15 19 16 #endif /* _XTENSA_HARDIRQ_H */
+1
arch/xtensa/include/asm/io.h
··· 14 14 #ifdef __KERNEL__ 15 15 #include <asm/byteorder.h> 16 16 #include <asm/page.h> 17 + #include <linux/bug.h> 17 18 #include <linux/kernel.h> 18 19 19 20 #include <linux/types.h>
+1
arch/xtensa/kernel/signal.c
··· 496 496 signr = get_signal_to_deliver(&info, &ka, regs, NULL); 497 497 498 498 if (signr > 0) { 499 + int ret; 499 500 500 501 /* Are we from a system call? */ 501 502
+30 -26
drivers/acpi/sleep.c
··· 28 28 #include "internal.h" 29 29 #include "sleep.h" 30 30 31 + u8 wake_sleep_flags = ACPI_NO_OPTIONAL_METHODS; 31 32 static unsigned int gts, bfs; 32 - module_param(gts, uint, 0644); 33 - module_param(bfs, uint, 0644); 33 + static int set_param_wake_flag(const char *val, struct kernel_param *kp) 34 + { 35 + int ret = param_set_int(val, kp); 36 + 37 + if (ret) 38 + return ret; 39 + 40 + if (kp->arg == (const char *)&gts) { 41 + if (gts) 42 + wake_sleep_flags |= ACPI_EXECUTE_GTS; 43 + else 44 + wake_sleep_flags &= ~ACPI_EXECUTE_GTS; 45 + } 46 + if (kp->arg == (const char *)&bfs) { 47 + if (bfs) 48 + wake_sleep_flags |= ACPI_EXECUTE_BFS; 49 + else 50 + wake_sleep_flags &= ~ACPI_EXECUTE_BFS; 51 + } 52 + return ret; 53 + } 54 + module_param_call(gts, set_param_wake_flag, param_get_int, &gts, 0644); 55 + module_param_call(bfs, set_param_wake_flag, param_get_int, &bfs, 0644); 34 56 MODULE_PARM_DESC(gts, "Enable evaluation of _GTS on suspend."); 35 57 MODULE_PARM_DESC(bfs, "Enable evaluation of _BFS on resume".); 36 - 37 - static u8 wake_sleep_flags(void) 38 - { 39 - u8 flags = ACPI_NO_OPTIONAL_METHODS; 40 - 41 - if (gts) 42 - flags |= ACPI_EXECUTE_GTS; 43 - if (bfs) 44 - flags |= ACPI_EXECUTE_BFS; 45 - 46 - return flags; 47 - } 48 58 49 59 static u8 sleep_states[ACPI_S_STATE_COUNT]; 50 60 ··· 273 263 { 274 264 acpi_status status = AE_OK; 275 265 u32 acpi_state = acpi_target_sleep_state; 276 - u8 flags = wake_sleep_flags(); 277 266 int error; 278 267 279 268 ACPI_FLUSH_CPU_CACHE(); ··· 280 271 switch (acpi_state) { 281 272 case ACPI_STATE_S1: 282 273 barrier(); 283 - status = acpi_enter_sleep_state(acpi_state, flags); 274 + status = acpi_enter_sleep_state(acpi_state, wake_sleep_flags); 284 275 break; 285 276 286 277 case ACPI_STATE_S3: ··· 295 286 acpi_write_bit_register(ACPI_BITREG_SCI_ENABLE, 1); 296 287 297 288 /* Reprogram control registers and execute _BFS */ 298 - acpi_leave_sleep_state_prep(acpi_state, flags); 289 + acpi_leave_sleep_state_prep(acpi_state, 
wake_sleep_flags); 299 290 300 291 /* ACPI 3.0 specs (P62) says that it's the responsibility 301 292 * of the OSPM to clear the status bit [ implying that the ··· 559 550 560 551 static int acpi_hibernation_enter(void) 561 552 { 562 - u8 flags = wake_sleep_flags(); 563 553 acpi_status status = AE_OK; 564 554 565 555 ACPI_FLUSH_CPU_CACHE(); 566 556 567 557 /* This shouldn't return. If it returns, we have a problem */ 568 - status = acpi_enter_sleep_state(ACPI_STATE_S4, flags); 558 + status = acpi_enter_sleep_state(ACPI_STATE_S4, wake_sleep_flags); 569 559 /* Reprogram control registers and execute _BFS */ 570 - acpi_leave_sleep_state_prep(ACPI_STATE_S4, flags); 560 + acpi_leave_sleep_state_prep(ACPI_STATE_S4, wake_sleep_flags); 571 561 572 562 return ACPI_SUCCESS(status) ? 0 : -EFAULT; 573 563 } 574 564 575 565 static void acpi_hibernation_leave(void) 576 566 { 577 - u8 flags = wake_sleep_flags(); 578 - 579 567 /* 580 568 * If ACPI is not enabled by the BIOS and the boot kernel, we need to 581 569 * enable it here. 582 570 */ 583 571 acpi_enable(); 584 572 /* Reprogram control registers and execute _BFS */ 585 - acpi_leave_sleep_state_prep(ACPI_STATE_S4, flags); 573 + acpi_leave_sleep_state_prep(ACPI_STATE_S4, wake_sleep_flags); 586 574 /* Check the hardware signature */ 587 575 if (facs && s4_hardware_signature != facs->hardware_signature) { 588 576 printk(KERN_EMERG "ACPI: Hardware changed while hibernated, " ··· 834 828 835 829 static void acpi_power_off(void) 836 830 { 837 - u8 flags = wake_sleep_flags(); 838 - 839 831 /* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */ 840 832 printk(KERN_DEBUG "%s called\n", __func__); 841 833 local_irq_disable(); 842 - acpi_enter_sleep_state(ACPI_STATE_S5, flags); 834 + acpi_enter_sleep_state(ACPI_STATE_S5, wake_sleep_flags); 843 835 } 844 836 845 837 /*
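The sleep.c hunk above replaces the per-call `wake_sleep_flags()` helper with a global byte that a `module_param_call` setter keeps in sync whenever `gts` or `bfs` is written. A minimal userspace sketch of that update logic (the flag values below are illustrative stand-ins, not guaranteed to match the real ACPICA definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the ACPICA optional-method flags */
#define ACPI_NO_OPTIONAL_METHODS 0x00
#define ACPI_EXECUTE_GTS         0x01
#define ACPI_EXECUTE_BFS         0x02

static uint8_t wake_sleep_flags = ACPI_NO_OPTIONAL_METHODS;

/* Mirror of the param setter: fold the current gts/bfs values into
 * the cached mask, setting or clearing each bit independently. */
static uint8_t update_wake_flags(int gts, int bfs)
{
    if (gts)
        wake_sleep_flags |= ACPI_EXECUTE_GTS;
    else
        wake_sleep_flags &= (uint8_t)~ACPI_EXECUTE_GTS;

    if (bfs)
        wake_sleep_flags |= ACPI_EXECUTE_BFS;
    else
        wake_sleep_flags &= (uint8_t)~ACPI_EXECUTE_BFS;

    return wake_sleep_flags;
}
```

Caching the mask at parameter-write time means the sleep path no longer recomputes it on every suspend, hibernate, or power-off entry.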
+5 -2
drivers/bcma/sprom.c
··· 404 404 return -EOPNOTSUPP; 405 405 406 406 if (!bcma_sprom_ext_available(bus)) { 407 + bool sprom_onchip; 408 + 407 409 /* 408 410 * External SPROM takes precedence so check 409 411 * on-chip OTP only when no external SPROM 410 412 * is present. 411 413 */ 412 - if (bcma_sprom_onchip_available(bus)) { 414 + sprom_onchip = bcma_sprom_onchip_available(bus); 415 + if (sprom_onchip) { 413 416 /* determine offset */ 414 417 offset = bcma_sprom_onchip_offset(bus); 415 418 } 416 - if (!offset) { 419 + if (!offset || !sprom_onchip) { 417 420 /* 418 421 * Maybe there is no SPROM on the device? 419 422 * Now we ask the arch code if there is some sprom
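The sprom.c change makes the arch-SPROM fallback fire not only when no offset was found but also when the on-chip OTP is absent altogether. A small sketch of that decision, with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>

/* Fall back to an arch-provided SPROM when either the on-chip OTP is
 * not present at all, or it is present but yielded no usable offset.
 * The old code only tested the offset. */
static bool need_arch_fallback(bool sprom_onchip, unsigned int offset)
{
    return !offset || !sprom_onchip;
}
```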
+1
drivers/dma/amba-pl08x.c
··· 1429 1429 * signal 1430 1430 */ 1431 1431 release_phy_channel(plchan); 1432 + plchan->phychan_hold = 0; 1432 1433 } 1433 1434 /* Dequeue jobs and free LLIs */ 1434 1435 if (plchan->at) {
-4
drivers/dma/at_hdmac.c
··· 221 221 222 222 vdbg_dump_regs(atchan); 223 223 224 - /* clear any pending interrupt */ 225 - while (dma_readl(atdma, EBCISR)) 226 - cpu_relax(); 227 - 228 224 channel_writel(atchan, SADDR, 0); 229 225 channel_writel(atchan, DADDR, 0); 230 226 channel_writel(atchan, CTRLA, 0);
+6 -3
drivers/dma/imx-dma.c
··· 571 571 if (desc->desc.callback) 572 572 desc->desc.callback(desc->desc.callback_param); 573 573 574 - dma_cookie_complete(&desc->desc); 575 - 576 - /* If we are dealing with a cyclic descriptor keep it on ld_active */ 574 + /* If we are dealing with a cyclic descriptor keep it on ld_active 575 + * and dont mark the descripor as complete. 576 + * Only in non-cyclic cases it would be marked as complete 577 + */ 577 578 if (imxdma_chan_is_doing_cyclic(imxdmac)) 578 579 goto out; 580 + else 581 + dma_cookie_complete(&desc->desc); 579 582 580 583 /* Free 2D slot if it was an interleaved transfer */ 581 584 if (imxdmac->enabled_2d) {
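The imx-dma hunk moves `dma_cookie_complete()` behind the cyclic check: a cyclic descriptor restarts forever, so it stays on `ld_active` and its cookie is never completed. The rule reduces to this sketch (names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

enum desc_state { DESC_ACTIVE, DESC_COMPLETE };

/* Cyclic transfers keep running, so their descriptor must remain
 * active; only one-shot descriptors are marked complete. */
static enum desc_state finish_descriptor(bool cyclic)
{
    return cyclic ? DESC_ACTIVE : DESC_COMPLETE;
}
```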
+3 -7
drivers/dma/mxs-dma.c
··· 201 201 202 202 static dma_cookie_t mxs_dma_tx_submit(struct dma_async_tx_descriptor *tx) 203 203 { 204 - struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(tx->chan); 205 - 206 - mxs_dma_enable_chan(mxs_chan); 207 - 208 204 return dma_cookie_assign(tx); 209 205 } 210 206 ··· 554 558 555 559 static void mxs_dma_issue_pending(struct dma_chan *chan) 556 560 { 557 - /* 558 - * Nothing to do. We only have a single descriptor. 559 - */ 561 + struct mxs_dma_chan *mxs_chan = to_mxs_dma_chan(chan); 562 + 563 + mxs_dma_enable_chan(mxs_chan); 560 564 } 561 565 562 566 static int __init mxs_dma_init(struct mxs_dma_engine *mxs_dma)
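The mxs-dma patch restores the dmaengine contract: `tx_submit` only assigns a cookie, and the hardware is kicked in `issue_pending`. A toy model of that split, assuming a simplified channel structure:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct chan_model {
    int32_t last_cookie;
    bool hw_running;
};

/* Submit only queues work and hands back a cookie; it must not
 * touch the hardware. */
static int32_t tx_submit(struct chan_model *c)
{
    return ++c->last_cookie;
}

/* issue_pending is the single point where the engine is started. */
static void issue_pending(struct chan_model *c)
{
    c->hw_running = true;
}
```

Keeping the hardware start out of `tx_submit` lets a client batch several descriptors before anything runs, which is what the framework's API promises.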
+15 -10
drivers/dma/pl330.c
··· 2225 2225 { 2226 2226 struct dma_pl330_dmac *pdmac; 2227 2227 struct dma_pl330_desc *desc; 2228 - struct dma_pl330_chan *pch; 2228 + struct dma_pl330_chan *pch = NULL; 2229 2229 unsigned long flags; 2230 - 2231 - if (list_empty(list)) 2232 - return; 2233 2230 2234 2231 /* Finish off the work list */ 2235 2232 list_for_each_entry(desc, list, node) { ··· 2244 2247 desc->pchan = NULL; 2245 2248 } 2246 2249 2250 + /* pch will be unset if list was empty */ 2251 + if (!pch) 2252 + return; 2253 + 2247 2254 pdmac = pch->dmac; 2248 2255 2249 2256 spin_lock_irqsave(&pdmac->pool_lock, flags); ··· 2258 2257 static inline void handle_cyclic_desc_list(struct list_head *list) 2259 2258 { 2260 2259 struct dma_pl330_desc *desc; 2261 - struct dma_pl330_chan *pch; 2260 + struct dma_pl330_chan *pch = NULL; 2262 2261 unsigned long flags; 2263 - 2264 - if (list_empty(list)) 2265 - return; 2266 2262 2267 2263 list_for_each_entry(desc, list, node) { 2268 2264 dma_async_tx_callback callback; ··· 2271 2273 if (callback) 2272 2274 callback(desc->txd.callback_param); 2273 2275 } 2276 + 2277 + /* pch will be unset if list was empty */ 2278 + if (!pch) 2279 + return; 2274 2280 2275 2281 spin_lock_irqsave(&pch->lock, flags); 2276 2282 list_splice_tail_init(list, &pch->work_list); ··· 2928 2926 INIT_LIST_HEAD(&pd->channels); 2929 2927 2930 2928 /* Initialize channel parameters */ 2931 - num_chan = max(pdat ? pdat->nr_valid_peri : (u8)pi->pcfg.num_peri, 2932 - (u8)pi->pcfg.num_chan); 2929 + if (pdat) 2930 + num_chan = max_t(int, pdat->nr_valid_peri, pi->pcfg.num_chan); 2931 + else 2932 + num_chan = max_t(int, pi->pcfg.num_peri, pi->pcfg.num_chan); 2933 + 2933 2934 pdmac->peripherals = kzalloc(num_chan * sizeof(*pch), GFP_KERNEL); 2934 2935 2935 2936 for (i = 0; i < num_chan; i++) {
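The pl330 `num_chan` hunk drops the `(u8)` casts in favour of `max_t(int, ...)`: casting a count to `u8` before the comparison silently wraps values above 255. This standalone sketch reproduces the difference (the count 300 is a made-up value for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define max_op(a, b) ((a) > (b) ? (a) : (b))

/* Old behaviour: operands truncated to u8 before the comparison,
 * so a peripheral count above 255 wraps (300 -> 44). */
static int num_chan_old(unsigned int nr_peri, unsigned int nr_chan)
{
    return max_op((uint8_t)nr_peri, (uint8_t)nr_chan);
}

/* Fixed behaviour: compare as int, like max_t(int, ...) does. */
static int num_chan_new(unsigned int nr_peri, unsigned int nr_chan)
{
    return max_op((int)nr_peri, (int)nr_chan);
}
```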
+204 -125
drivers/dma/ste_dma40.c
··· 18 18 #include <linux/pm_runtime.h> 19 19 #include <linux/err.h> 20 20 #include <linux/amba/bus.h> 21 + #include <linux/regulator/consumer.h> 21 22 22 23 #include <plat/ste_dma40.h> 23 24 ··· 67 66 D40_DMA_RUN = 1, 68 67 D40_DMA_SUSPEND_REQ = 2, 69 68 D40_DMA_SUSPENDED = 3 69 + }; 70 + 71 + /* 72 + * enum d40_events - The different Event Enables for the event lines. 73 + * 74 + * @D40_DEACTIVATE_EVENTLINE: De-activate Event line, stopping the logical chan. 75 + * @D40_ACTIVATE_EVENTLINE: Activate the Event line, to start a logical chan. 76 + * @D40_SUSPEND_REQ_EVENTLINE: Requesting for suspending a event line. 77 + * @D40_ROUND_EVENTLINE: Status check for event line. 78 + */ 79 + 80 + enum d40_events { 81 + D40_DEACTIVATE_EVENTLINE = 0, 82 + D40_ACTIVATE_EVENTLINE = 1, 83 + D40_SUSPEND_REQ_EVENTLINE = 2, 84 + D40_ROUND_EVENTLINE = 3 70 85 }; 71 86 72 87 /* ··· 887 870 } 888 871 #endif 889 872 890 - static int d40_channel_execute_command(struct d40_chan *d40c, 891 - enum d40_command command) 873 + static int __d40_execute_command_phy(struct d40_chan *d40c, 874 + enum d40_command command) 892 875 { 893 876 u32 status; 894 877 int i; ··· 896 879 int ret = 0; 897 880 unsigned long flags; 898 881 u32 wmask; 882 + 883 + if (command == D40_DMA_STOP) { 884 + ret = __d40_execute_command_phy(d40c, D40_DMA_SUSPEND_REQ); 885 + if (ret) 886 + return ret; 887 + } 899 888 900 889 spin_lock_irqsave(&d40c->base->execmd_lock, flags); 901 890 ··· 996 973 } 997 974 998 975 d40c->pending_tx = 0; 999 - d40c->busy = false; 1000 976 } 1001 977 1002 - static void __d40_config_set_event(struct d40_chan *d40c, bool enable, 1003 - u32 event, int reg) 978 + static void __d40_config_set_event(struct d40_chan *d40c, 979 + enum d40_events event_type, u32 event, 980 + int reg) 1004 981 { 1005 982 void __iomem *addr = chan_base(d40c) + reg; 1006 983 int tries; 984 + u32 status; 1007 985 1008 - if (!enable) { 986 + switch (event_type) { 987 + 988 + case D40_DEACTIVATE_EVENTLINE: 989 + 1009 990 
writel((D40_DEACTIVATE_EVENTLINE << D40_EVENTLINE_POS(event)) 1010 991 | ~D40_EVENTLINE_MASK(event), addr); 1011 - return; 1012 - } 992 + break; 1013 993 994 + case D40_SUSPEND_REQ_EVENTLINE: 995 + status = (readl(addr) & D40_EVENTLINE_MASK(event)) >> 996 + D40_EVENTLINE_POS(event); 997 + 998 + if (status == D40_DEACTIVATE_EVENTLINE || 999 + status == D40_SUSPEND_REQ_EVENTLINE) 1000 + break; 1001 + 1002 + writel((D40_SUSPEND_REQ_EVENTLINE << D40_EVENTLINE_POS(event)) 1003 + | ~D40_EVENTLINE_MASK(event), addr); 1004 + 1005 + for (tries = 0 ; tries < D40_SUSPEND_MAX_IT; tries++) { 1006 + 1007 + status = (readl(addr) & D40_EVENTLINE_MASK(event)) >> 1008 + D40_EVENTLINE_POS(event); 1009 + 1010 + cpu_relax(); 1011 + /* 1012 + * Reduce the number of bus accesses while 1013 + * waiting for the DMA to suspend. 1014 + */ 1015 + udelay(3); 1016 + 1017 + if (status == D40_DEACTIVATE_EVENTLINE) 1018 + break; 1019 + } 1020 + 1021 + if (tries == D40_SUSPEND_MAX_IT) { 1022 + chan_err(d40c, 1023 + "unable to stop the event_line chl %d (log: %d)" 1024 + "status %x\n", d40c->phy_chan->num, 1025 + d40c->log_num, status); 1026 + } 1027 + break; 1028 + 1029 + case D40_ACTIVATE_EVENTLINE: 1014 1030 /* 1015 1031 * The hardware sometimes doesn't register the enable when src and dst 1016 1032 * event lines are active on the same logical channel. Retry to ensure 1017 1033 * it does. Usually only one retry is sufficient. 
1018 1034 */ 1019 - tries = 100; 1020 - while (--tries) { 1021 - writel((D40_ACTIVATE_EVENTLINE << D40_EVENTLINE_POS(event)) 1022 - | ~D40_EVENTLINE_MASK(event), addr); 1035 + tries = 100; 1036 + while (--tries) { 1037 + writel((D40_ACTIVATE_EVENTLINE << 1038 + D40_EVENTLINE_POS(event)) | 1039 + ~D40_EVENTLINE_MASK(event), addr); 1023 1040 1024 - if (readl(addr) & D40_EVENTLINE_MASK(event)) 1025 - break; 1041 + if (readl(addr) & D40_EVENTLINE_MASK(event)) 1042 + break; 1043 + } 1044 + 1045 + if (tries != 99) 1046 + dev_dbg(chan2dev(d40c), 1047 + "[%s] workaround enable S%cLNK (%d tries)\n", 1048 + __func__, reg == D40_CHAN_REG_SSLNK ? 'S' : 'D', 1049 + 100 - tries); 1050 + 1051 + WARN_ON(!tries); 1052 + break; 1053 + 1054 + case D40_ROUND_EVENTLINE: 1055 + BUG(); 1056 + break; 1057 + 1026 1058 } 1027 - 1028 - if (tries != 99) 1029 - dev_dbg(chan2dev(d40c), 1030 - "[%s] workaround enable S%cLNK (%d tries)\n", 1031 - __func__, reg == D40_CHAN_REG_SSLNK ? 'S' : 'D', 1032 - 100 - tries); 1033 - 1034 - WARN_ON(!tries); 1035 1059 } 1036 1060 1037 - static void d40_config_set_event(struct d40_chan *d40c, bool do_enable) 1061 + static void d40_config_set_event(struct d40_chan *d40c, 1062 + enum d40_events event_type) 1038 1063 { 1039 - unsigned long flags; 1040 - 1041 - spin_lock_irqsave(&d40c->phy_chan->lock, flags); 1042 - 1043 1064 /* Enable event line connected to device (or memcpy) */ 1044 1065 if ((d40c->dma_cfg.dir == STEDMA40_PERIPH_TO_MEM) || 1045 1066 (d40c->dma_cfg.dir == STEDMA40_PERIPH_TO_PERIPH)) { 1046 1067 u32 event = D40_TYPE_TO_EVENT(d40c->dma_cfg.src_dev_type); 1047 1068 1048 - __d40_config_set_event(d40c, do_enable, event, 1069 + __d40_config_set_event(d40c, event_type, event, 1049 1070 D40_CHAN_REG_SSLNK); 1050 1071 } 1051 1072 1052 1073 if (d40c->dma_cfg.dir != STEDMA40_PERIPH_TO_MEM) { 1053 1074 u32 event = D40_TYPE_TO_EVENT(d40c->dma_cfg.dst_dev_type); 1054 1075 1055 - __d40_config_set_event(d40c, do_enable, event, 1076 + 
__d40_config_set_event(d40c, event_type, event, 1056 1077 D40_CHAN_REG_SDLNK); 1057 1078 } 1058 - 1059 - spin_unlock_irqrestore(&d40c->phy_chan->lock, flags); 1060 1079 } 1061 1080 1062 1081 static u32 d40_chan_has_events(struct d40_chan *d40c) ··· 1110 1045 val |= readl(chanbase + D40_CHAN_REG_SDLNK); 1111 1046 1112 1047 return val; 1048 + } 1049 + 1050 + static int 1051 + __d40_execute_command_log(struct d40_chan *d40c, enum d40_command command) 1052 + { 1053 + unsigned long flags; 1054 + int ret = 0; 1055 + u32 active_status; 1056 + void __iomem *active_reg; 1057 + 1058 + if (d40c->phy_chan->num % 2 == 0) 1059 + active_reg = d40c->base->virtbase + D40_DREG_ACTIVE; 1060 + else 1061 + active_reg = d40c->base->virtbase + D40_DREG_ACTIVO; 1062 + 1063 + 1064 + spin_lock_irqsave(&d40c->phy_chan->lock, flags); 1065 + 1066 + switch (command) { 1067 + case D40_DMA_STOP: 1068 + case D40_DMA_SUSPEND_REQ: 1069 + 1070 + active_status = (readl(active_reg) & 1071 + D40_CHAN_POS_MASK(d40c->phy_chan->num)) >> 1072 + D40_CHAN_POS(d40c->phy_chan->num); 1073 + 1074 + if (active_status == D40_DMA_RUN) 1075 + d40_config_set_event(d40c, D40_SUSPEND_REQ_EVENTLINE); 1076 + else 1077 + d40_config_set_event(d40c, D40_DEACTIVATE_EVENTLINE); 1078 + 1079 + if (!d40_chan_has_events(d40c) && (command == D40_DMA_STOP)) 1080 + ret = __d40_execute_command_phy(d40c, command); 1081 + 1082 + break; 1083 + 1084 + case D40_DMA_RUN: 1085 + 1086 + d40_config_set_event(d40c, D40_ACTIVATE_EVENTLINE); 1087 + ret = __d40_execute_command_phy(d40c, command); 1088 + break; 1089 + 1090 + case D40_DMA_SUSPENDED: 1091 + BUG(); 1092 + break; 1093 + } 1094 + 1095 + spin_unlock_irqrestore(&d40c->phy_chan->lock, flags); 1096 + return ret; 1097 + } 1098 + 1099 + static int d40_channel_execute_command(struct d40_chan *d40c, 1100 + enum d40_command command) 1101 + { 1102 + if (chan_is_logical(d40c)) 1103 + return __d40_execute_command_log(d40c, command); 1104 + else 1105 + return __d40_execute_command_phy(d40c, 
command); 1113 1106 } 1114 1107 1115 1108 static u32 d40_get_prmo(struct d40_chan *d40c) ··· 1272 1149 spin_lock_irqsave(&d40c->lock, flags); 1273 1150 1274 1151 res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ); 1275 - if (res == 0) { 1276 - if (chan_is_logical(d40c)) { 1277 - d40_config_set_event(d40c, false); 1278 - /* Resume the other logical channels if any */ 1279 - if (d40_chan_has_events(d40c)) 1280 - res = d40_channel_execute_command(d40c, 1281 - D40_DMA_RUN); 1282 - } 1283 - } 1152 + 1284 1153 pm_runtime_mark_last_busy(d40c->base->dev); 1285 1154 pm_runtime_put_autosuspend(d40c->base->dev); 1286 1155 spin_unlock_irqrestore(&d40c->lock, flags); ··· 1289 1174 1290 1175 spin_lock_irqsave(&d40c->lock, flags); 1291 1176 pm_runtime_get_sync(d40c->base->dev); 1292 - if (d40c->base->rev == 0) 1293 - if (chan_is_logical(d40c)) { 1294 - res = d40_channel_execute_command(d40c, 1295 - D40_DMA_SUSPEND_REQ); 1296 - goto no_suspend; 1297 - } 1298 1177 1299 1178 /* If bytes left to transfer or linked tx resume job */ 1300 - if (d40_residue(d40c) || d40_tx_is_linked(d40c)) { 1301 - 1302 - if (chan_is_logical(d40c)) 1303 - d40_config_set_event(d40c, true); 1304 - 1179 + if (d40_residue(d40c) || d40_tx_is_linked(d40c)) 1305 1180 res = d40_channel_execute_command(d40c, D40_DMA_RUN); 1306 - } 1307 1181 1308 - no_suspend: 1309 1182 pm_runtime_mark_last_busy(d40c->base->dev); 1310 1183 pm_runtime_put_autosuspend(d40c->base->dev); 1311 1184 spin_unlock_irqrestore(&d40c->lock, flags); 1312 1185 return res; 1313 - } 1314 - 1315 - static int d40_terminate_all(struct d40_chan *chan) 1316 - { 1317 - unsigned long flags; 1318 - int ret = 0; 1319 - 1320 - ret = d40_pause(chan); 1321 - if (!ret && chan_is_physical(chan)) 1322 - ret = d40_channel_execute_command(chan, D40_DMA_STOP); 1323 - 1324 - spin_lock_irqsave(&chan->lock, flags); 1325 - d40_term_all(chan); 1326 - spin_unlock_irqrestore(&chan->lock, flags); 1327 - 1328 - return ret; 1329 1186 } 1330 1187 1331 1188 static 
dma_cookie_t d40_tx_submit(struct dma_async_tx_descriptor *tx) ··· 1319 1232 1320 1233 static int d40_start(struct d40_chan *d40c) 1321 1234 { 1322 - if (d40c->base->rev == 0) { 1323 - int err; 1324 - 1325 - if (chan_is_logical(d40c)) { 1326 - err = d40_channel_execute_command(d40c, 1327 - D40_DMA_SUSPEND_REQ); 1328 - if (err) 1329 - return err; 1330 - } 1331 - } 1332 - 1333 - if (chan_is_logical(d40c)) 1334 - d40_config_set_event(d40c, true); 1335 - 1336 1235 return d40_channel_execute_command(d40c, D40_DMA_RUN); 1337 1236 } 1338 1237 ··· 1331 1258 d40d = d40_first_queued(d40c); 1332 1259 1333 1260 if (d40d != NULL) { 1334 - if (!d40c->busy) 1261 + if (!d40c->busy) { 1335 1262 d40c->busy = true; 1336 - 1337 - pm_runtime_get_sync(d40c->base->dev); 1263 + pm_runtime_get_sync(d40c->base->dev); 1264 + } 1338 1265 1339 1266 /* Remove from queue */ 1340 1267 d40_desc_remove(d40d); ··· 1461 1388 1462 1389 return; 1463 1390 1464 - err: 1465 - /* Rescue manoeuvre if receiving double interrupts */ 1391 + err: 1392 + /* Rescue manouver if receiving double interrupts */ 1466 1393 if (d40c->pending_tx > 0) 1467 1394 d40c->pending_tx--; 1468 1395 spin_unlock_irqrestore(&d40c->lock, flags); ··· 1843 1770 return 0; 1844 1771 } 1845 1772 1846 - 1847 1773 static int d40_free_dma(struct d40_chan *d40c) 1848 1774 { 1849 1775 ··· 1878 1806 } 1879 1807 1880 1808 pm_runtime_get_sync(d40c->base->dev); 1881 - res = d40_channel_execute_command(d40c, D40_DMA_SUSPEND_REQ); 1882 - if (res) { 1883 - chan_err(d40c, "suspend failed\n"); 1884 - goto out; 1885 - } 1886 - 1887 - if (chan_is_logical(d40c)) { 1888 - /* Release logical channel, deactivate the event line */ 1889 - 1890 - d40_config_set_event(d40c, false); 1891 - d40c->base->lookup_log_chans[d40c->log_num] = NULL; 1892 - 1893 - /* 1894 - * Check if there are more logical allocation 1895 - * on this phy channel. 
1896 - */ 1897 - if (!d40_alloc_mask_free(phy, is_src, event)) { 1898 - /* Resume the other logical channels if any */ 1899 - if (d40_chan_has_events(d40c)) { 1900 - res = d40_channel_execute_command(d40c, 1901 - D40_DMA_RUN); 1902 - if (res) 1903 - chan_err(d40c, 1904 - "Executing RUN command\n"); 1905 - } 1906 - goto out; 1907 - } 1908 - } else { 1909 - (void) d40_alloc_mask_free(phy, is_src, 0); 1910 - } 1911 - 1912 - /* Release physical channel */ 1913 1809 res = d40_channel_execute_command(d40c, D40_DMA_STOP); 1914 1810 if (res) { 1915 - chan_err(d40c, "Failed to stop channel\n"); 1811 + chan_err(d40c, "stop failed\n"); 1916 1812 goto out; 1917 1813 } 1814 + 1815 + d40_alloc_mask_free(phy, is_src, chan_is_logical(d40c) ? event : 0); 1816 + 1817 + if (chan_is_logical(d40c)) 1818 + d40c->base->lookup_log_chans[d40c->log_num] = NULL; 1819 + else 1820 + d40c->base->lookup_phy_chans[phy->num] = NULL; 1918 1821 1919 1822 if (d40c->busy) { 1920 1823 pm_runtime_mark_last_busy(d40c->base->dev); ··· 1899 1852 d40c->busy = false; 1900 1853 d40c->phy_chan = NULL; 1901 1854 d40c->configured = false; 1902 - d40c->base->lookup_phy_chans[phy->num] = NULL; 1903 1855 out: 1904 1856 1905 1857 pm_runtime_mark_last_busy(d40c->base->dev); ··· 2116 2070 if (sg_next(&sg_src[sg_len - 1]) == sg_src) 2117 2071 desc->cyclic = true; 2118 2072 2119 - if (direction != DMA_NONE) { 2073 + if (direction != DMA_TRANS_NONE) { 2120 2074 dma_addr_t dev_addr = d40_get_dev_addr(chan, direction); 2121 2075 2122 2076 if (direction == DMA_DEV_TO_MEM) ··· 2417 2371 spin_unlock_irqrestore(&d40c->lock, flags); 2418 2372 } 2419 2373 2374 + static void d40_terminate_all(struct dma_chan *chan) 2375 + { 2376 + unsigned long flags; 2377 + struct d40_chan *d40c = container_of(chan, struct d40_chan, chan); 2378 + int ret; 2379 + 2380 + spin_lock_irqsave(&d40c->lock, flags); 2381 + 2382 + pm_runtime_get_sync(d40c->base->dev); 2383 + ret = d40_channel_execute_command(d40c, D40_DMA_STOP); 2384 + if (ret) 2385 + 
chan_err(d40c, "Failed to stop channel\n"); 2386 + 2387 + d40_term_all(d40c); 2388 + pm_runtime_mark_last_busy(d40c->base->dev); 2389 + pm_runtime_put_autosuspend(d40c->base->dev); 2390 + if (d40c->busy) { 2391 + pm_runtime_mark_last_busy(d40c->base->dev); 2392 + pm_runtime_put_autosuspend(d40c->base->dev); 2393 + } 2394 + d40c->busy = false; 2395 + 2396 + spin_unlock_irqrestore(&d40c->lock, flags); 2397 + } 2398 + 2420 2399 static int 2421 2400 dma40_config_to_halfchannel(struct d40_chan *d40c, 2422 2401 struct stedma40_half_channel_info *info, ··· 2622 2551 2623 2552 switch (cmd) { 2624 2553 case DMA_TERMINATE_ALL: 2625 - return d40_terminate_all(d40c); 2554 + d40_terminate_all(chan); 2555 + return 0; 2626 2556 case DMA_PAUSE: 2627 2557 return d40_pause(d40c); 2628 2558 case DMA_RESUME: ··· 2980 2908 dev_info(&pdev->dev, "hardware revision: %d @ 0x%x\n", 2981 2909 rev, res->start); 2982 2910 2911 + if (rev < 2) { 2912 + d40_err(&pdev->dev, "hardware revision: %d is not supported", 2913 + rev); 2914 + goto failure; 2915 + } 2916 + 2983 2917 plat_data = pdev->dev.platform_data; 2984 2918 2985 2919 /* Count the number of logical channels in use */ ··· 3076 2998 3077 2999 if (base) { 3078 3000 kfree(base->lcla_pool.alloc_map); 3001 + kfree(base->reg_val_backup_chan); 3079 3002 kfree(base->lookup_log_chans); 3080 3003 kfree(base->lookup_phy_chans); 3081 3004 kfree(base->phy_res);
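The ste_dma40 rework above introduces a suspend-request state for event lines and polls, with a bounded retry count, until the line reports deactivated. The shape of that loop can be sketched against a simulated status register (the bound and status codes here are illustrative, not the driver's real `D40_SUSPEND_MAX_IT` semantics):

```c
#include <assert.h>
#include <stdbool.h>

#define SUSPEND_MAX_IT 500   /* illustrative iteration bound */

/* Simulated status source: reports "suspend requested" (2) for a
 * configurable number of reads, then "deactivated" (0). */
static int fake_reads_left;
static int read_status(void)
{
    return fake_reads_left-- > 0 ? 2 : 0;
}

/* Poll until the event line reports deactivated or the bound
 * expires; returns true on success, false on timeout. */
static bool wait_eventline_stopped(void)
{
    int tries;

    for (tries = 0; tries < SUSPEND_MAX_IT; tries++)
        if (read_status() == 0)
            return true;
    return false;
}
```

The real driver additionally backs off with `udelay()` between reads to cut bus traffic while waiting, and logs a channel error on timeout rather than returning a boolean.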
-2
drivers/dma/ste_dma40_ll.h
··· 62 62 #define D40_SREG_ELEM_LOG_LIDX_MASK (0xFF << D40_SREG_ELEM_LOG_LIDX_POS) 63 63 64 64 /* Link register */ 65 - #define D40_DEACTIVATE_EVENTLINE 0x0 66 - #define D40_ACTIVATE_EVENTLINE 0x1 67 65 #define D40_EVENTLINE_POS(i) (2 * i) 68 66 #define D40_EVENTLINE_MASK(i) (0x3 << D40_EVENTLINE_POS(i)) 69 67
+118 -21
drivers/gpio/gpio-pxa.c
··· 11 11 * it under the terms of the GNU General Public License version 2 as 12 12 * published by the Free Software Foundation. 13 13 */ 14 + #include <linux/module.h> 14 15 #include <linux/clk.h> 15 16 #include <linux/err.h> 16 17 #include <linux/gpio.h> 17 18 #include <linux/gpio-pxa.h> 18 19 #include <linux/init.h> 19 20 #include <linux/irq.h> 21 + #include <linux/irqdomain.h> 20 22 #include <linux/io.h> 23 + #include <linux/of.h> 24 + #include <linux/of_device.h> 21 25 #include <linux/platform_device.h> 22 26 #include <linux/syscore_ops.h> 23 27 #include <linux/slab.h> ··· 60 56 61 57 int pxa_last_gpio; 62 58 59 + #ifdef CONFIG_OF 60 + static struct irq_domain *domain; 61 + #endif 62 + 63 63 struct pxa_gpio_chip { 64 64 struct gpio_chip chip; 65 65 void __iomem *regbase; ··· 72 64 unsigned long irq_mask; 73 65 unsigned long irq_edge_rise; 74 66 unsigned long irq_edge_fall; 67 + int (*set_wake)(unsigned int gpio, unsigned int on); 75 68 76 69 #ifdef CONFIG_PM 77 70 unsigned long saved_gplr; ··· 89 80 PXA3XX_GPIO, 90 81 PXA93X_GPIO, 91 82 MMP_GPIO = 0x10, 92 - MMP2_GPIO, 93 83 }; 94 84 95 85 static DEFINE_SPINLOCK(gpio_lock); ··· 277 269 (value ? 
GPSR_OFFSET : GPCR_OFFSET)); 278 270 } 279 271 280 - static int __devinit pxa_init_gpio_chip(int gpio_end) 272 + static int __devinit pxa_init_gpio_chip(int gpio_end, 273 + int (*set_wake)(unsigned int, unsigned int)) 281 274 { 282 275 int i, gpio, nbanks = gpio_to_bank(gpio_end) + 1; 283 276 struct pxa_gpio_chip *chips; ··· 294 285 295 286 sprintf(chips[i].label, "gpio-%d", i); 296 287 chips[i].regbase = gpio_reg_base + BANK_OFF(i); 288 + chips[i].set_wake = set_wake; 297 289 298 290 c->base = gpio; 299 291 c->label = chips[i].label; ··· 422 412 writel_relaxed(gfer, c->regbase + GFER_OFFSET); 423 413 } 424 414 415 + static int pxa_gpio_set_wake(struct irq_data *d, unsigned int on) 416 + { 417 + int gpio = pxa_irq_to_gpio(d->irq); 418 + struct pxa_gpio_chip *c = gpio_to_pxachip(gpio); 419 + 420 + if (c->set_wake) 421 + return c->set_wake(gpio, on); 422 + else 423 + return 0; 424 + } 425 + 425 426 static void pxa_unmask_muxed_gpio(struct irq_data *d) 426 427 { 427 428 int gpio = pxa_irq_to_gpio(d->irq); ··· 448 427 .irq_mask = pxa_mask_muxed_gpio, 449 428 .irq_unmask = pxa_unmask_muxed_gpio, 450 429 .irq_set_type = pxa_gpio_irq_type, 430 + .irq_set_wake = pxa_gpio_set_wake, 451 431 }; 452 432 453 433 static int pxa_gpio_nums(void) ··· 482 460 gpio_type = MMP_GPIO; 483 461 } else if (cpu_is_mmp2()) { 484 462 count = 191; 485 - gpio_type = MMP2_GPIO; 463 + gpio_type = MMP_GPIO; 486 464 } 487 465 #endif /* CONFIG_ARCH_MMP */ 488 466 return count; 489 467 } 468 + 469 + static struct of_device_id pxa_gpio_dt_ids[] = { 470 + { .compatible = "mrvl,pxa-gpio" }, 471 + { .compatible = "mrvl,mmp-gpio", .data = (void *)MMP_GPIO }, 472 + {} 473 + }; 474 + 475 + static int pxa_irq_domain_map(struct irq_domain *d, unsigned int irq, 476 + irq_hw_number_t hw) 477 + { 478 + irq_set_chip_and_handler(irq, &pxa_muxed_gpio_chip, 479 + handle_edge_irq); 480 + set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 481 + return 0; 482 + } 483 + 484 + const struct irq_domain_ops pxa_irq_domain_ops = { 
485 + .map = pxa_irq_domain_map, 486 + }; 487 + 488 + #ifdef CONFIG_OF 489 + static int __devinit pxa_gpio_probe_dt(struct platform_device *pdev) 490 + { 491 + int ret, nr_banks, nr_gpios, irq_base; 492 + struct device_node *prev, *next, *np = pdev->dev.of_node; 493 + const struct of_device_id *of_id = 494 + of_match_device(pxa_gpio_dt_ids, &pdev->dev); 495 + 496 + if (!of_id) { 497 + dev_err(&pdev->dev, "Failed to find gpio controller\n"); 498 + return -EFAULT; 499 + } 500 + gpio_type = (int)of_id->data; 501 + 502 + next = of_get_next_child(np, NULL); 503 + prev = next; 504 + if (!next) { 505 + dev_err(&pdev->dev, "Failed to find child gpio node\n"); 506 + ret = -EINVAL; 507 + goto err; 508 + } 509 + for (nr_banks = 1; ; nr_banks++) { 510 + next = of_get_next_child(np, prev); 511 + if (!next) 512 + break; 513 + prev = next; 514 + } 515 + of_node_put(prev); 516 + nr_gpios = nr_banks << 5; 517 + pxa_last_gpio = nr_gpios - 1; 518 + 519 + irq_base = irq_alloc_descs(-1, 0, nr_gpios, 0); 520 + if (irq_base < 0) { 521 + dev_err(&pdev->dev, "Failed to allocate IRQ numbers\n"); 522 + goto err; 523 + } 524 + domain = irq_domain_add_legacy(np, nr_gpios, irq_base, 0, 525 + &pxa_irq_domain_ops, NULL); 526 + return 0; 527 + err: 528 + iounmap(gpio_reg_base); 529 + return ret; 530 + } 531 + #else 532 + #define pxa_gpio_probe_dt(pdev) (-1) 533 + #endif 490 534 491 535 static int __devinit pxa_gpio_probe(struct platform_device *pdev) 492 536 { 493 537 struct pxa_gpio_chip *c; 494 538 struct resource *res; 495 539 struct clk *clk; 496 - int gpio, irq, ret; 540 + struct pxa_gpio_platform_data *info; 541 + int gpio, irq, ret, use_of = 0; 497 542 int irq0 = 0, irq1 = 0, irq_mux, gpio_offset = 0; 498 543 499 - pxa_last_gpio = pxa_gpio_nums(); 544 + ret = pxa_gpio_probe_dt(pdev); 545 + if (ret < 0) 546 + pxa_last_gpio = pxa_gpio_nums(); 547 + else 548 + use_of = 1; 500 549 if (!pxa_last_gpio) 501 550 return -EINVAL; 502 551 ··· 609 516 } 610 517 611 518 /* Initialize GPIO chips */ 612 - 
pxa_init_gpio_chip(pxa_last_gpio); 519 + info = dev_get_platdata(&pdev->dev); 520 + pxa_init_gpio_chip(pxa_last_gpio, info ? info->gpio_set_wake : NULL); 613 521 614 522 /* clear all GPIO edge detects */ 615 523 for_each_gpio_chip(gpio, c) { ··· 622 528 writel_relaxed(~0, c->regbase + ED_MASK_OFFSET); 623 529 } 624 530 531 + if (!use_of) { 625 532 #ifdef CONFIG_ARCH_PXA 626 - irq = gpio_to_irq(0); 627 - irq_set_chip_and_handler(irq, &pxa_muxed_gpio_chip, 628 - handle_edge_irq); 629 - set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 630 - irq_set_chained_handler(IRQ_GPIO0, pxa_gpio_demux_handler); 631 - 632 - irq = gpio_to_irq(1); 633 - irq_set_chip_and_handler(irq, &pxa_muxed_gpio_chip, 634 - handle_edge_irq); 635 - set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 636 - irq_set_chained_handler(IRQ_GPIO1, pxa_gpio_demux_handler); 637 - #endif 638 - 639 - for (irq = gpio_to_irq(gpio_offset); 640 - irq <= gpio_to_irq(pxa_last_gpio); irq++) { 533 + irq = gpio_to_irq(0); 641 534 irq_set_chip_and_handler(irq, &pxa_muxed_gpio_chip, 642 535 handle_edge_irq); 643 536 set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 537 + irq_set_chained_handler(IRQ_GPIO0, pxa_gpio_demux_handler); 538 + 539 + irq = gpio_to_irq(1); 540 + irq_set_chip_and_handler(irq, &pxa_muxed_gpio_chip, 541 + handle_edge_irq); 542 + set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 543 + irq_set_chained_handler(IRQ_GPIO1, pxa_gpio_demux_handler); 544 + #endif 545 + 546 + for (irq = gpio_to_irq(gpio_offset); 547 + irq <= gpio_to_irq(pxa_last_gpio); irq++) { 548 + irq_set_chip_and_handler(irq, &pxa_muxed_gpio_chip, 549 + handle_edge_irq); 550 + set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 551 + } 644 552 } 645 553 646 554 irq_set_chained_handler(irq_mux, pxa_gpio_demux_handler); ··· 653 557 .probe = pxa_gpio_probe, 654 558 .driver = { 655 559 .name = "pxa-gpio", 560 + .of_match_table = pxa_gpio_dt_ids, 656 561 }, 657 562 }; 658 563
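In the DT probe above, each child node is one GPIO bank and every MMP/PXA bank covers 32 lines, hence `nr_gpios = nr_banks << 5` and `pxa_last_gpio = nr_gpios - 1`. The arithmetic in isolation:

```c
#include <assert.h>

/* One bank = 32 GPIO lines, so shift the bank count left by 5. */
static int nr_gpios_from_banks(int nr_banks)
{
    return nr_banks << 5;
}

/* GPIO numbering is zero-based, so the last valid index is count-1. */
static int last_gpio_from_banks(int nr_banks)
{
    return nr_gpios_from_banks(nr_banks) - 1;
}
```

With six banks this gives 192 lines and a last index of 191, which lines up with the legacy MMP2 count seen in `pxa_gpio_nums()`.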
+5 -25
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 149 149 unsigned long pfn; 150 150 151 151 if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) { 152 - unsigned long usize = buf->size; 153 - 154 152 if (!buf->pages) 155 153 return -EINTR; 156 154 157 - while (usize > 0) { 158 - pfn = page_to_pfn(buf->pages[page_offset++]); 159 - vm_insert_mixed(vma, f_vaddr, pfn); 160 - f_vaddr += PAGE_SIZE; 161 - usize -= PAGE_SIZE; 162 - } 163 - 164 - return 0; 165 - } 166 - 167 - pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset; 155 + pfn = page_to_pfn(buf->pages[page_offset++]); 156 + } else 157 + pfn = (buf->dma_addr >> PAGE_SHIFT) + page_offset; 168 158 169 159 return vm_insert_mixed(vma, f_vaddr, pfn); 170 160 } ··· 514 524 if (!buffer->pages) 515 525 return -EINVAL; 516 526 527 + vma->vm_flags |= VM_MIXEDMAP; 528 + 517 529 do { 518 530 ret = vm_insert_page(vma, uaddr, buffer->pages[i++]); 519 531 if (ret) { ··· 702 710 int exynos_drm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf) 703 711 { 704 712 struct drm_gem_object *obj = vma->vm_private_data; 705 - struct exynos_drm_gem_obj *exynos_gem_obj = to_exynos_gem_obj(obj); 706 713 struct drm_device *dev = obj->dev; 707 714 unsigned long f_vaddr; 708 715 pgoff_t page_offset; ··· 713 722 714 723 mutex_lock(&dev->struct_mutex); 715 724 716 - /* 717 - * allocate all pages as desired size if user wants to allocate 718 - * physically non-continuous memory. 719 - */ 720 - if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG) { 721 - ret = exynos_drm_gem_get_pages(obj); 722 - if (ret < 0) 723 - goto err; 724 - } 725 - 726 725 ret = exynos_drm_gem_map_pages(obj, vma, f_vaddr, page_offset); 727 726 if (ret < 0) 728 727 DRM_ERROR("failed to map pages.\n"); 729 728 730 - err: 731 729 mutex_unlock(&dev->struct_mutex); 732 730 733 731 return convert_to_vm_err_msg(ret);
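The exynos fault-handler change collapses the two buffer layouts into a single pfn lookup: non-contiguous buffers index a page array, contiguous ones offset the base DMA address. A simplified model of that selection (names and the page-shift value are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Non-contiguous buffers resolve the faulting page through a page
 * array; contiguous buffers derive the pfn from the DMA base address
 * plus the page offset into the mapping. */
static uint64_t fault_pfn(int noncontig, const uint64_t *pfns,
                          uint64_t dma_addr, uint64_t page_offset)
{
    if (noncontig)
        return pfns[page_offset];
    return (dma_addr >> PAGE_SHIFT) + page_offset;
}
```

Inserting only the single faulting page (instead of the old loop that mapped the whole buffer up front) keeps the fault path O(1) per fault.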
+7 -1
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1133 1133 return -EINVAL; 1134 1134 } 1135 1135 1136 + if (args->num_cliprects > UINT_MAX / sizeof(*cliprects)) { 1137 + DRM_DEBUG("execbuf with %u cliprects\n", 1138 + args->num_cliprects); 1139 + return -EINVAL; 1140 + } 1136 1141 cliprects = kmalloc(args->num_cliprects * sizeof(*cliprects), 1137 1142 GFP_KERNEL); 1138 1143 if (cliprects == NULL) { ··· 1409 1404 struct drm_i915_gem_exec_object2 *exec2_list = NULL; 1410 1405 int ret; 1411 1406 1412 - if (args->buffer_count < 1) { 1407 + if (args->buffer_count < 1 || 1408 + args->buffer_count > UINT_MAX / sizeof(*exec2_list)) { 1413 1409 DRM_DEBUG("execbuf2 with %d buffers\n", args->buffer_count); 1414 1410 return -EINVAL; 1415 1411 }
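Both execbuffer hunks guard a `count * sizeof(elem)` allocation against integer overflow by bounding the count at `UINT_MAX / sizeof(elem)` before multiplying. The check in isolation, modeled on 32-bit arithmetic as in the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A count is only safe to multiply by the element size if it does
 * not exceed the limit divided by that size; checking after the
 * multiply would be too late, since the product may already have
 * wrapped. */
static bool alloc_size_ok(uint32_t count, size_t elem_size)
{
    return count <= UINT32_MAX / elem_size;
}
```

Without this bound, a crafted `num_cliprects` or `buffer_count` could wrap the byte size to a small number, making `kmalloc` succeed with an undersized buffer that the subsequent copy overruns.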
+1
drivers/gpu/drm/i915/i915_reg.h
··· 568 568 #define CM0_MASK_SHIFT 16 569 569 #define CM0_IZ_OPT_DISABLE (1<<6) 570 570 #define CM0_ZR_OPT_DISABLE (1<<5) 571 + #define CM0_STC_EVICT_DISABLE_LRA_SNB (1<<5) 571 572 #define CM0_DEPTH_EVICT_DISABLE (1<<4) 572 573 #define CM0_COLOR_EVICT_DISABLE (1<<3) 573 574 #define CM0_DEPTH_WRITE_DISABLE (1<<1)
+11 -18
drivers/gpu/drm/i915/intel_crt.c
··· 430 430 { 431 431 struct drm_device *dev = connector->dev; 432 432 struct intel_crt *crt = intel_attached_crt(connector); 433 - struct drm_crtc *crtc; 434 433 enum drm_connector_status status; 434 + struct intel_load_detect_pipe tmp; 435 435 436 436 if (I915_HAS_HOTPLUG(dev)) { 437 437 if (intel_crt_detect_hotplug(connector)) { ··· 450 450 return connector->status; 451 451 452 452 /* for pre-945g platforms use load detect */ 453 - crtc = crt->base.base.crtc; 454 - if (crtc && crtc->enabled) { 455 - status = intel_crt_load_detect(crt); 456 - } else { 457 - struct intel_load_detect_pipe tmp; 458 - 459 - if (intel_get_load_detect_pipe(&crt->base, connector, NULL, 460 - &tmp)) { 461 - if (intel_crt_detect_ddc(connector)) 462 - status = connector_status_connected; 463 - else 464 - status = intel_crt_load_detect(crt); 465 - intel_release_load_detect_pipe(&crt->base, connector, 466 - &tmp); 467 - } else 468 - status = connector_status_unknown; 469 - } 453 + if (intel_get_load_detect_pipe(&crt->base, connector, NULL, 454 + &tmp)) { 455 + if (intel_crt_detect_ddc(connector)) 456 + status = connector_status_connected; 457 + else 458 + status = intel_crt_load_detect(crt); 459 + intel_release_load_detect_pipe(&crt->base, connector, 460 + &tmp); 461 + } else 462 + status = connector_status_unknown; 470 463 471 464 return status; 472 465 }
+8
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 401 401 if (INTEL_INFO(dev)->gen >= 6) { 402 402 I915_WRITE(INSTPM, 403 403 INSTPM_FORCE_ORDERING << 16 | INSTPM_FORCE_ORDERING); 404 + 405 + /* From the Sandybridge PRM, volume 1 part 3, page 24: 406 + * "If this bit is set, STCunit will have LRA as replacement 407 + * policy. [...] This bit must be reset. LRA replacement 408 + * policy is not supported." 409 + */ 410 + I915_WRITE(CACHE_MODE_0, 411 + CM0_STC_EVICT_DISABLE_LRA_SNB << CM0_MASK_SHIFT); 404 412 } 405 413 406 414 return ret;
+18 -16
drivers/gpu/drm/i915/intel_sdvo.c
··· 731 731 uint16_t width, height; 732 732 uint16_t h_blank_len, h_sync_len, v_blank_len, v_sync_len; 733 733 uint16_t h_sync_offset, v_sync_offset; 734 + int mode_clock; 734 735 735 736 width = mode->crtc_hdisplay; 736 737 height = mode->crtc_vdisplay; ··· 746 745 h_sync_offset = mode->crtc_hsync_start - mode->crtc_hblank_start; 747 746 v_sync_offset = mode->crtc_vsync_start - mode->crtc_vblank_start; 748 747 749 - dtd->part1.clock = mode->clock / 10; 748 + mode_clock = mode->clock; 749 + mode_clock /= intel_mode_get_pixel_multiplier(mode) ?: 1; 750 + mode_clock /= 10; 751 + dtd->part1.clock = mode_clock; 752 + 750 753 dtd->part1.h_active = width & 0xff; 751 754 dtd->part1.h_blank = h_blank_len & 0xff; 752 755 dtd->part1.h_high = (((width >> 8) & 0xf) << 4) | ··· 1001 996 struct intel_sdvo *intel_sdvo = to_intel_sdvo(encoder); 1002 997 u32 sdvox; 1003 998 struct intel_sdvo_in_out_map in_out; 1004 - struct intel_sdvo_dtd input_dtd; 999 + struct intel_sdvo_dtd input_dtd, output_dtd; 1005 1000 int pixel_multiplier = intel_mode_get_pixel_multiplier(adjusted_mode); 1006 1001 int rate; 1007 1002 ··· 1026 1021 intel_sdvo->attached_output)) 1027 1022 return; 1028 1023 1029 - /* We have tried to get input timing in mode_fixup, and filled into 1030 - * adjusted_mode. 1031 - */ 1032 - if (intel_sdvo->is_tv || intel_sdvo->is_lvds) { 1033 - input_dtd = intel_sdvo->input_dtd; 1034 - } else { 1035 - /* Set the output timing to the screen */ 1036 - if (!intel_sdvo_set_target_output(intel_sdvo, 1037 - intel_sdvo->attached_output)) 1038 - return; 1039 - 1040 - intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode); 1041 - (void) intel_sdvo_set_output_timing(intel_sdvo, &input_dtd); 1042 - } 1024 + /* lvds has a special fixed output timing. */
1025 + if (intel_sdvo->is_lvds) 1026 + intel_sdvo_get_dtd_from_mode(&output_dtd, 1027 + intel_sdvo->sdvo_lvds_fixed_mode); 1028 + else 1029 + intel_sdvo_get_dtd_from_mode(&output_dtd, mode); 1030 + (void) intel_sdvo_set_output_timing(intel_sdvo, &output_dtd); 1043 1031 1044 1032 /* Set the input timing to the screen. Assume always input 0. */ 1045 1033 if (!intel_sdvo_set_target_input(intel_sdvo)) ··· 1050 1052 !intel_sdvo_set_tv_format(intel_sdvo)) 1051 1053 return; 1052 1054 1055 + /* We have tried to get input timing in mode_fixup, and filled into 1056 + * adjusted_mode. 1057 + */ 1058 + intel_sdvo_get_dtd_from_mode(&input_dtd, adjusted_mode); 1053 1059 (void) intel_sdvo_set_input_timing(intel_sdvo, &input_dtd); 1054 1060 1055 1061 switch (pixel_multiplier) {
+5 -2
drivers/gpu/drm/radeon/atombios_crtc.c
··· 575 575 576 576 if (rdev->family < CHIP_RV770) 577 577 pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP; 578 + /* use frac fb div on APUs */ 579 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev)) 580 + pll->flags |= RADEON_PLL_USE_FRAC_FB_DIV; 578 581 } else { 579 582 pll->flags |= RADEON_PLL_LEGACY; 580 583 ··· 958 955 break; 959 956 } 960 957 961 - if (radeon_encoder->active_device & 962 - (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) { 958 + if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || 959 + (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)) { 963 960 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 964 961 struct drm_connector *connector = 965 962 radeon_get_connector_for_encoder(encoder);
+2 -1
drivers/gpu/drm/radeon/radeon_display.c
··· 533 533 radeon_legacy_init_crtc(dev, radeon_crtc); 534 534 } 535 535 536 - static const char *encoder_names[36] = { 536 + static const char *encoder_names[37] = { 537 537 "NONE", 538 538 "INTERNAL_LVDS", 539 539 "INTERNAL_TMDS1", ··· 570 570 "INTERNAL_UNIPHY2", 571 571 "NUTMEG", 572 572 "TRAVIS", 573 + "INTERNAL_VCE" 573 574 }; 574 575 575 576 static const char *connector_names[15] = {
+1 -1
drivers/hsi/clients/hsi_char.c
··· 123 123 static unsigned int hsc_major; 124 124 /* Maximum buffer size that hsi_char will accept from userspace */ 125 125 static unsigned int max_data_size = 0x1000; 126 - module_param(max_data_size, uint, S_IRUSR | S_IWUSR); 126 + module_param(max_data_size, uint, 0); 127 127 MODULE_PARM_DESC(max_data_size, "max read/write data size [4,8..65536] (^2)"); 128 128 129 129 static void hsc_add_tail(struct hsc_channel *channel, struct hsi_msg *msg,
+118 -105
drivers/hsi/hsi.c
··· 21 21 */ 22 22 #include <linux/hsi/hsi.h> 23 23 #include <linux/compiler.h> 24 - #include <linux/rwsem.h> 25 24 #include <linux/list.h> 26 - #include <linux/spinlock.h> 27 25 #include <linux/kobject.h> 28 26 #include <linux/slab.h> 29 27 #include <linux/string.h> 28 + #include <linux/notifier.h> 30 29 #include "hsi_core.h" 31 - 32 - static struct device_type hsi_ctrl = { 33 - .name = "hsi_controller", 34 - }; 35 - 36 - static struct device_type hsi_cl = { 37 - .name = "hsi_client", 38 - }; 39 - 40 - static struct device_type hsi_port = { 41 - .name = "hsi_port", 42 - }; 43 30 44 31 static ssize_t modalias_show(struct device *dev, 45 32 struct device_attribute *a __maybe_unused, char *buf) ··· 41 54 42 55 static int hsi_bus_uevent(struct device *dev, struct kobj_uevent_env *env) 43 56 { 44 - if (dev->type == &hsi_cl) 45 - add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev)); 57 + add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev)); 46 58 47 59 return 0; 48 60 } ··· 66 80 static void hsi_new_client(struct hsi_port *port, struct hsi_board_info *info) 67 81 { 68 82 struct hsi_client *cl; 69 - unsigned long flags; 70 83 71 84 cl = kzalloc(sizeof(*cl), GFP_KERNEL); 72 85 if (!cl) 73 86 return; 74 - cl->device.type = &hsi_cl; 75 87 cl->tx_cfg = info->tx_cfg; 76 88 cl->rx_cfg = info->rx_cfg; 77 89 cl->device.bus = &hsi_bus_type; ··· 77 93 cl->device.release = hsi_client_release; 78 94 dev_set_name(&cl->device, info->name); 79 95 cl->device.platform_data = info->platform_data; 80 - spin_lock_irqsave(&port->clock, flags); 81 - list_add_tail(&cl->link, &port->clients); 82 - spin_unlock_irqrestore(&port->clock, flags); 83 96 if (info->archdata) 84 97 cl->device.archdata = *info->archdata; 85 98 if (device_register(&cl->device) < 0) { 86 99 pr_err("hsi: failed to register client: %s\n", info->name); 87 - kfree(cl); 100 + put_device(&cl->device); 88 101 } 89 102 } 90 103 ··· 101 120 102 121 static int hsi_remove_client(struct device *dev, void *data __maybe_unused) 103 122 {
104 - struct hsi_client *cl = to_hsi_client(dev); 105 - struct hsi_port *port = to_hsi_port(dev->parent); 106 - unsigned long flags; 107 - 108 - spin_lock_irqsave(&port->clock, flags); 109 - list_del(&cl->link); 110 - spin_unlock_irqrestore(&port->clock, flags); 111 123 device_unregister(dev); 112 124 113 125 return 0; ··· 114 140 return 0; 115 141 } 116 142 117 - static void hsi_controller_release(struct device *dev __maybe_unused) 143 + static void hsi_controller_release(struct device *dev) 118 144 { 145 + struct hsi_controller *hsi = to_hsi_controller(dev); 146 + 147 + kfree(hsi->port); 148 + kfree(hsi); 119 149 } 120 150 121 - static void hsi_port_release(struct device *dev __maybe_unused) 151 + static void hsi_port_release(struct device *dev) 122 152 { 153 + kfree(to_hsi_port(dev)); 123 154 } 124 155 125 156 /** ··· 149 170 unsigned int i; 150 171 int err; 151 172 152 - hsi->device.type = &hsi_ctrl; 153 - hsi->device.bus = &hsi_bus_type; 154 - hsi->device.release = hsi_controller_release; 155 - err = device_register(&hsi->device); 173 + err = device_add(&hsi->device); 156 174 if (err < 0) 157 175 return err; 158 176 for (i = 0; i < hsi->num_ports; i++) { 159 - hsi->port[i].device.parent = &hsi->device; 160 - hsi->port[i].device.bus = &hsi_bus_type; 161 - hsi->port[i].device.release = hsi_port_release; 162 - hsi->port[i].device.type = &hsi_port; 163 - INIT_LIST_HEAD(&hsi->port[i].clients); 164 - spin_lock_init(&hsi->port[i].clock); 165 - err = device_register(&hsi->port[i].device); 177 + hsi->port[i]->device.parent = &hsi->device; 178 + err = device_add(&hsi->port[i]->device); 166 179 if (err < 0) 167 180 goto out; 168 181 } ··· 163 192 164 193 return 0; 165 194 out: 166 - hsi_unregister_controller(hsi); 195 + while (i-- > 0) 196 + device_del(&hsi->port[i]->device); 197 + device_del(&hsi->device); 167 198 168 199 return err; 169 200 } ··· 196 223 } 197 224 198 225 /** 226 + * hsi_put_controller - Free an HSI controller 227 + * 228 + * @hsi: Pointer to the HSI controller to freed 229 + * 230 + * HSI controller drivers should only use this function if they need 231 + * to free their allocated hsi_controller structures before a successful 232 + * call to hsi_register_controller. Other use is not allowed. 233 + */ 234 + void hsi_put_controller(struct hsi_controller *hsi) 235 + { 236 + unsigned int i; 237 + 238 + if (!hsi) 239 + return; 240 + 241 + for (i = 0; i < hsi->num_ports; i++) 242 + if (hsi->port && hsi->port[i]) 243 + put_device(&hsi->port[i]->device); 244 + put_device(&hsi->device); 245 + } 246 + EXPORT_SYMBOL_GPL(hsi_put_controller); 247 + 248 + /** 199 249 * hsi_alloc_controller - Allocate an HSI controller and its ports 200 250 * @n_ports: Number of ports on the HSI controller 201 251 * @flags: Kernel allocation flags ··· 228 232 struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags) 229 233 { 230 234 struct hsi_controller *hsi; 231 - struct hsi_port *port; 235 + struct hsi_port **port; 232 236 unsigned int i; 233 237 234 238 if (!n_ports) 235 239 return NULL; 236 240 237 - port = kzalloc(sizeof(*port)*n_ports, flags); 238 - if (!port) 239 - return NULL; 240 241 hsi = kzalloc(sizeof(*hsi), flags); 241 242 if (!hsi) 242 - goto out; 243 - for (i = 0; i < n_ports; i++) { 244 - dev_set_name(&port[i].device, "port%d", i); 245 - port[i].num = i; 246 - port[i].async = hsi_dummy_msg; 247 - port[i].setup = hsi_dummy_cl; 248 - port[i].flush = hsi_dummy_cl; 249 - port[i].start_tx = hsi_dummy_cl; 250 - port[i].stop_tx = hsi_dummy_cl; 251 - port[i].release = hsi_dummy_cl; 252 - mutex_init(&port[i].lock); 243 + return NULL; 244 + port = kzalloc(sizeof(*port)*n_ports, flags); 245 + if (!port) { 246 + kfree(hsi); 247 + return NULL; 253 248 } 254 249 hsi->num_ports = n_ports; 255 250 hsi->port = port; 251 + hsi->device.release = hsi_controller_release; 252 + device_initialize(&hsi->device); 253 + 254 + for (i = 0; i < n_ports; i++) { 255 + port[i] = kzalloc(sizeof(**port), flags); 256 + if (port[i] == NULL)
257 + goto out; 258 + port[i]->num = i; 259 + port[i]->async = hsi_dummy_msg; 260 + port[i]->setup = hsi_dummy_cl; 261 + port[i]->flush = hsi_dummy_cl; 262 + port[i]->start_tx = hsi_dummy_cl; 263 + port[i]->stop_tx = hsi_dummy_cl; 264 + port[i]->release = hsi_dummy_cl; 265 + mutex_init(&port[i]->lock); 266 + ATOMIC_INIT_NOTIFIER_HEAD(&port[i]->n_head); 267 + dev_set_name(&port[i]->device, "port%d", i); 268 + hsi->port[i]->device.release = hsi_port_release; 269 + device_initialize(&hsi->port[i]->device); 270 + } 256 271 257 272 return hsi; 258 273 out: 259 - kfree(port); 274 + hsi_put_controller(hsi); 260 275 261 276 return NULL; 262 277 } 263 278 EXPORT_SYMBOL_GPL(hsi_alloc_controller); 264 - 265 - /** 266 - * hsi_free_controller - Free an HSI controller 267 - * @hsi: Pointer to HSI controller 268 - */ 269 - void hsi_free_controller(struct hsi_controller *hsi) 270 - { 271 - if (!hsi) 272 - return; 273 - 274 - kfree(hsi->port); 275 - kfree(hsi); 276 - } 277 - EXPORT_SYMBOL_GPL(hsi_free_controller); 278 279 279 280 /** 280 281 * hsi_free_msg - Free an HSI message ··· 407 414 } 408 415 EXPORT_SYMBOL_GPL(hsi_release_port); 409 416 410 - static int hsi_start_rx(struct hsi_client *cl, void *data __maybe_unused) 417 + static int hsi_event_notifier_call(struct notifier_block *nb, 418 + unsigned long event, void *data __maybe_unused) 411 419 { 412 - if (cl->hsi_start_rx) 413 - (*cl->hsi_start_rx)(cl); 420 + struct hsi_client *cl = container_of(nb, struct hsi_client, nb); 421 + 422 + (*cl->ehandler)(cl, event); 414 423 415 424 return 0; 416 425 } 417 426 418 - static int hsi_stop_rx(struct hsi_client *cl, void *data __maybe_unused) 427 + /** 428 + * hsi_register_port_event - Register a client to receive port events 429 + * @cl: HSI client that wants to receive port events 430 + * @cb: Event handler callback 431 + * 432 + * Clients should register a callback to be able to receive 433 + * events from the ports. Registration should happen after 434 + * claiming the port.
435 + * The handler can be called in interrupt context. 436 + * 437 + * Returns -errno on error, or 0 on success. 438 + */ 439 + int hsi_register_port_event(struct hsi_client *cl, 440 + void (*handler)(struct hsi_client *, unsigned long)) 419 441 { 420 - if (cl->hsi_stop_rx) 421 - (*cl->hsi_stop_rx)(cl); 442 + struct hsi_port *port = hsi_get_port(cl); 422 443 423 - return 0; 444 + if (!handler || cl->ehandler) 445 + return -EINVAL; 446 + if (!hsi_port_claimed(cl)) 447 + return -EACCES; 448 + cl->ehandler = handler; 449 + cl->nb.notifier_call = hsi_event_notifier_call; 450 + 451 + return atomic_notifier_chain_register(&port->n_head, &cl->nb); 424 452 } 453 + EXPORT_SYMBOL_GPL(hsi_register_port_event); 425 454 426 - static int hsi_port_for_each_client(struct hsi_port *port, void *data, 427 - int (*fn)(struct hsi_client *cl, void *data)) 455 + /** 456 + * hsi_unregister_port_event - Stop receiving port events for a client 457 + * @cl: HSI client that wants to stop receiving port events 458 + * 459 + * Clients should call this function before releasing their associated 460 + * port. 461 + * 462 + * Returns -errno on error, or 0 on success. 
463 + */ 464 + int hsi_unregister_port_event(struct hsi_client *cl) 428 465 { 429 - struct hsi_client *cl; 466 + struct hsi_port *port = hsi_get_port(cl); 467 + int err; 430 468 431 - spin_lock(&port->clock); 432 - list_for_each_entry(cl, &port->clients, link) { 433 - spin_unlock(&port->clock); 434 - (*fn)(cl, data); 435 - spin_lock(&port->clock); 436 - } 437 - spin_unlock(&port->clock); 469 + WARN_ON(!hsi_port_claimed(cl)); 438 470 439 - return 0; 471 + err = atomic_notifier_chain_unregister(&port->n_head, &cl->nb); 472 + if (!err) 473 + cl->ehandler = NULL; 474 + 475 + return err; 440 476 } 477 + EXPORT_SYMBOL_GPL(hsi_unregister_port_event); 441 478 442 479 /** 443 480 * hsi_event -Notifies clients about port events ··· 481 458 * Events: 482 459 * HSI_EVENT_START_RX - Incoming wake line high 483 460 * HSI_EVENT_STOP_RX - Incoming wake line down 461 + * 462 + * Returns -errno on error, or 0 on success. 484 463 */ 485 - void hsi_event(struct hsi_port *port, unsigned int event) 464 + int hsi_event(struct hsi_port *port, unsigned long event) 486 465 { 487 - int (*fn)(struct hsi_client *cl, void *data); 488 - 489 - switch (event) { 490 - case HSI_EVENT_START_RX: 491 - fn = hsi_start_rx; 492 - break; 493 - case HSI_EVENT_STOP_RX: 494 - fn = hsi_stop_rx; 495 - break; 496 - default: 497 - return; 498 - } 499 - hsi_port_for_each_client(port, NULL, fn); 466 + return atomic_notifier_call_chain(&port->n_head, event, NULL); 500 467 } 501 468 EXPORT_SYMBOL_GPL(hsi_event); 502 469
+5 -7
drivers/hwmon/ad7314.c
··· 47 47 u16 rx ____cacheline_aligned; 48 48 }; 49 49 50 - static int ad7314_spi_read(struct ad7314_data *chip, s16 *data) 50 + static int ad7314_spi_read(struct ad7314_data *chip) 51 51 { 52 52 int ret; 53 53 ··· 57 57 return ret; 58 58 } 59 59 60 - *data = be16_to_cpu(chip->rx); 61 - 62 - return ret; 60 + return be16_to_cpu(chip->rx); 63 61 } 64 62 65 63 static ssize_t ad7314_show_temperature(struct device *dev, ··· 68 70 s16 data; 69 71 int ret; 70 72 71 - ret = ad7314_spi_read(chip, &data); 73 + ret = ad7314_spi_read(chip); 72 74 if (ret < 0) 73 75 return ret; 74 76 switch (spi_get_device_id(chip->spi_dev)->driver_data) { 75 77 case ad7314: 76 - data = (data & AD7314_TEMP_MASK) >> AD7314_TEMP_OFFSET; 78 + data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_OFFSET; 77 79 data = (data << 6) >> 6; 78 80 79 81 return sprintf(buf, "%d\n", 250 * data); ··· 84 86 * with a sign bit - which is a 14 bit 2's complement 85 87 * register. 1lsb - 31.25 milli degrees centigrade 86 88 */ 87 - data &= ADT7301_TEMP_MASK; 89 + data = ret & ADT7301_TEMP_MASK; 88 90 data = (data << 2) >> 2; 89 91 90 92 return sprintf(buf, "%d\n",
+6 -3
drivers/hwmon/fam15h_power.c
··· 128 128 * counter saturations resulting in bogus power readings. 129 129 * We correct this value ourselves to cope with older BIOSes. 130 130 */ 131 + static DEFINE_PCI_DEVICE_TABLE(affected_device) = { 132 + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) }, 133 + { 0 } 134 + }; 135 + 131 136 static void __devinit tweak_runavg_range(struct pci_dev *pdev) 132 137 { 133 138 u32 val; 134 - const struct pci_device_id affected_device = { 135 - PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) }; 136 139 137 140 /* 138 141 * let this quirk apply only to the current version of the 139 142 * northbridge, since future versions may change the behavior 140 143 */ 141 - if (!pci_match_id(&affected_device, pdev)) 144 + if (!pci_match_id(affected_device, pdev)) 142 145 return; 143 146 144 147 pci_bus_read_config_dword(pdev->bus,
+5 -3
drivers/infiniband/core/mad.c
··· 1854 1854 response->mad.mad.mad_hdr.method = IB_MGMT_METHOD_GET_RESP; 1855 1855 response->mad.mad.mad_hdr.status = 1856 1856 cpu_to_be16(IB_MGMT_MAD_STATUS_UNSUPPORTED_METHOD_ATTRIB); 1857 + if (recv->mad.mad.mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) 1858 + response->mad.mad.mad_hdr.status |= IB_SMP_DIRECTION; 1857 1859 1858 1860 return true; 1859 1861 } else { ··· 1871 1869 struct ib_mad_list_head *mad_list; 1872 1870 struct ib_mad_agent_private *mad_agent; 1873 1871 int port_num; 1872 + int ret = IB_MAD_RESULT_SUCCESS; 1874 1873 1875 1874 mad_list = (struct ib_mad_list_head *)(unsigned long)wc->wr_id; 1876 1875 qp_info = mad_list->mad_queue->qp_info; ··· 1955 1952 local: 1956 1953 /* Give driver "right of first refusal" on incoming MAD */ 1957 1954 if (port_priv->device->process_mad) { 1958 - int ret; 1959 - 1960 1955 ret = port_priv->device->process_mad(port_priv->device, 0, 1961 1956 port_priv->port_num, 1962 1957 wc, &recv->grh, ··· 1982 1981 * or via recv_handler in ib_mad_complete_recv() 1983 1982 */ 1984 1983 recv = NULL; 1985 - } else if (generate_unmatched_resp(recv, response)) { 1984 + } else if ((ret & IB_MAD_RESULT_SUCCESS) && 1985 + generate_unmatched_resp(recv, response)) { 1986 1986 agent_send_response(&response->mad.mad, &recv->grh, wc, 1987 1987 port_priv->device, port_num, qp_info->qp->qp_num); 1988 1988 }
+1 -1
drivers/infiniband/hw/mlx4/main.c
··· 247 247 err = mlx4_MAD_IFC(to_mdev(ibdev), 1, 1, port, 248 248 NULL, NULL, in_mad, out_mad); 249 249 if (err) 250 - return err; 250 + goto out; 251 251 252 252 /* Checking LinkSpeedActive for FDR-10 */ 253 253 if (out_mad->data[15] & 0x1)
+2 -2
drivers/md/dm-raid.c
··· 859 859 int ret; 860 860 unsigned redundancy = 0; 861 861 struct raid_dev *dev; 862 - struct md_rdev *rdev, *freshest; 862 + struct md_rdev *rdev, *tmp, *freshest; 863 863 struct mddev *mddev = &rs->md; 864 864 865 865 switch (rs->raid_type->level) { ··· 877 877 } 878 878 879 879 freshest = NULL; 880 - rdev_for_each(rdev, mddev) { 880 + rdev_for_each_safe(rdev, tmp, mddev) { 881 881 if (!rdev->meta_bdev) 882 882 continue; 883 883
+4 -3
drivers/md/md.c
··· 7560 7560 * any transients in the value of "sync_action". 7561 7561 */ 7562 7562 set_bit(MD_RECOVERY_RUNNING, &mddev->recovery); 7563 - clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery); 7564 7563 /* Clear some bits that don't mean anything, but 7565 7564 * might be left set 7566 7565 */ 7567 7566 clear_bit(MD_RECOVERY_INTR, &mddev->recovery); 7568 7567 clear_bit(MD_RECOVERY_DONE, &mddev->recovery); 7569 7568 7570 - if (test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) 7569 + if (!test_and_clear_bit(MD_RECOVERY_NEEDED, &mddev->recovery) || 7570 + test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) 7571 7571 goto unlock; 7572 7572 /* no recovery is running. 7573 7573 * remove any failed drives, then ··· 8140 8140 8141 8141 for_each_mddev(mddev, tmp) { 8142 8142 if (mddev_trylock(mddev)) { 8143 - __md_stop_writes(mddev); 8143 + if (mddev->pers) 8144 + __md_stop_writes(mddev); 8144 8145 mddev->safemode = 2; 8145 8146 mddev_unlock(mddev); 8146 8147 }
+3
drivers/mmc/host/mxs-mmc.c
··· 363 363 goto out; 364 364 365 365 dmaengine_submit(desc); 366 + dma_async_issue_pending(host->dmach); 366 367 return; 367 368 368 369 out: ··· 404 403 goto out; 405 404 406 405 dmaengine_submit(desc); 406 + dma_async_issue_pending(host->dmach); 407 407 return; 408 408 409 409 out: ··· 533 531 goto out; 534 532 535 533 dmaengine_submit(desc); 534 + dma_async_issue_pending(host->dmach); 536 535 return; 537 536 out: 538 537 dev_warn(mmc_dev(host->mmc),
+1
drivers/mtd/nand/gpmi-nand/gpmi-nand.c
··· 266 266 desc->callback = dma_irq_callback; 267 267 desc->callback_param = this; 268 268 dmaengine_submit(desc); 269 + dma_async_issue_pending(get_dma_chan(this)); 269 270 270 271 /* Wait for the interrupt from the DMA block. */ 271 272 err = wait_for_completion_timeout(dma_c, msecs_to_jiffies(1000));
+4 -4
drivers/net/arcnet/arc-rimi.c
··· 89 89 BUGLVL(D_NORMAL) printk(VERSION); 90 90 BUGLVL(D_NORMAL) printk("E-mail me if you actually test the RIM I driver, please!\n"); 91 91 92 - BUGMSG(D_NORMAL, "Given: node %02Xh, shmem %lXh, irq %d\n", 92 + BUGLVL(D_NORMAL) printk("Given: node %02Xh, shmem %lXh, irq %d\n", 93 93 dev->dev_addr[0], dev->mem_start, dev->irq); 94 94 95 95 if (dev->mem_start <= 0 || dev->irq <= 0) { 96 - BUGMSG(D_NORMAL, "No autoprobe for RIM I; you " 96 + BUGLVL(D_NORMAL) printk("No autoprobe for RIM I; you " 97 97 "must specify the shmem and irq!\n"); 98 98 return -ENODEV; 99 99 } 100 100 if (dev->dev_addr[0] == 0) { 101 - BUGMSG(D_NORMAL, "You need to specify your card's station " 101 + BUGLVL(D_NORMAL) printk("You need to specify your card's station " 102 102 "ID!\n"); 103 103 return -ENODEV; 104 104 } ··· 109 109 * will be taken. 110 110 */ 111 111 if (!request_mem_region(dev->mem_start, MIRROR_SIZE, "arcnet (90xx)")) { 112 - BUGMSG(D_NORMAL, "Card memory already allocated\n"); 112 + BUGLVL(D_NORMAL) printk("Card memory already allocated\n"); 113 113 return -ENODEV; 114 114 } 115 115 return arcrimi_found(dev);
+5 -4
drivers/net/caif/caif_hsi.c
··· 744 744 size_t fifo_occupancy = 0; 745 745 746 746 /* Wakeup timeout */ 747 - dev_err(&cfhsi->ndev->dev, "%s: Timeout.\n", 747 + dev_dbg(&cfhsi->ndev->dev, "%s: Timeout.\n", 748 748 __func__); 749 749 750 750 /* Check FIFO to check if modem has sent something. */ 751 751 WARN_ON(cfhsi->dev->cfhsi_fifo_occupancy(cfhsi->dev, 752 752 &fifo_occupancy)); 753 753 754 - dev_err(&cfhsi->ndev->dev, "%s: Bytes in FIFO: %u.\n", 754 + dev_dbg(&cfhsi->ndev->dev, "%s: Bytes in FIFO: %u.\n", 755 755 __func__, (unsigned) fifo_occupancy); 756 756 757 757 /* Check if we misssed the interrupt. */ ··· 1210 1210 1211 1211 static void cfhsi_shutdown(struct cfhsi *cfhsi) 1212 1212 { 1213 - u8 *tx_buf, *rx_buf; 1213 + u8 *tx_buf, *rx_buf, *flip_buf; 1214 1214 1215 1215 /* Stop TXing */ 1216 1216 netif_tx_stop_all_queues(cfhsi->ndev); ··· 1234 1234 /* Store bufferes: will be freed later. */ 1235 1235 tx_buf = cfhsi->tx_buf; 1236 1236 rx_buf = cfhsi->rx_buf; 1237 - 1237 + flip_buf = cfhsi->rx_flip_buf; 1238 1238 /* Flush transmit queues. */ 1239 1239 cfhsi_abort_tx(cfhsi); 1240 1240 ··· 1247 1247 /* Free buffers. */ 1248 1248 kfree(tx_buf); 1249 1249 kfree(rx_buf); 1250 + kfree(flip_buf); 1250 1251 } 1251 1252 1252 1253 int cfhsi_remove(struct platform_device *pdev)
+2
drivers/net/can/usb/peak_usb/pcan_usb_pro.c
··· 875 875 PCAN_USBPRO_INFO_FW, 876 876 &fi, sizeof(fi)); 877 877 if (err) { 878 + kfree(usb_if); 878 879 dev_err(dev->netdev->dev.parent, 879 880 "unable to read %s firmware info (err %d)\n", 880 881 pcan_usb_pro.name, err); ··· 886 885 PCAN_USBPRO_INFO_BL, 887 886 &bi, sizeof(bi)); 888 887 if (err) { 888 + kfree(usb_if); 889 889 dev_err(dev->netdev->dev.parent, 890 890 "unable to read %s bootloader info (err %d)\n", 891 891 pcan_usb_pro.name, err);
+3 -3
drivers/net/dummy.c
··· 107 107 return 0; 108 108 } 109 109 110 - static void dummy_dev_free(struct net_device *dev) 110 + static void dummy_dev_uninit(struct net_device *dev) 111 111 { 112 112 free_percpu(dev->dstats); 113 - free_netdev(dev); 114 113 } 115 114 116 115 static const struct net_device_ops dummy_netdev_ops = { 117 116 .ndo_init = dummy_dev_init, 117 + .ndo_uninit = dummy_dev_uninit, 118 118 .ndo_start_xmit = dummy_xmit, 119 119 .ndo_validate_addr = eth_validate_addr, 120 120 .ndo_set_rx_mode = set_multicast_list, ··· 128 128 129 129 /* Initialize the device structure. */ 130 130 dev->netdev_ops = &dummy_netdev_ops; 131 - dev->destructor = dummy_dev_free; 131 + dev->destructor = free_netdev; 132 132 133 133 /* Fill in device structure with ethernet-generic values. */ 134 134 dev->tx_queue_len = 0;
+5 -7
drivers/net/ethernet/atheros/atlx/atl1.c
··· 2476 2476 "pcie phy link down %x\n", status); 2477 2477 if (netif_running(adapter->netdev)) { /* reset MAC */ 2478 2478 iowrite32(0, adapter->hw.hw_addr + REG_IMR); 2479 - schedule_work(&adapter->pcie_dma_to_rst_task); 2479 + schedule_work(&adapter->reset_dev_task); 2480 2480 return IRQ_HANDLED; 2481 2481 } 2482 2482 } ··· 2488 2488 "pcie DMA r/w error (status = 0x%x)\n", 2489 2489 status); 2490 2490 iowrite32(0, adapter->hw.hw_addr + REG_IMR); 2491 - schedule_work(&adapter->pcie_dma_to_rst_task); 2491 + schedule_work(&adapter->reset_dev_task); 2492 2492 return IRQ_HANDLED; 2493 2493 } 2494 2494 ··· 2633 2633 atl1_clean_rx_ring(adapter); 2634 2634 } 2635 2635 2636 - static void atl1_tx_timeout_task(struct work_struct *work) 2636 + static void atl1_reset_dev_task(struct work_struct *work) 2637 2637 { 2638 2638 struct atl1_adapter *adapter = 2639 - container_of(work, struct atl1_adapter, tx_timeout_task); 2639 + container_of(work, struct atl1_adapter, reset_dev_task); 2640 2640 struct net_device *netdev = adapter->netdev; 2641 2641 2642 2642 netif_device_detach(netdev); ··· 3038 3038 (unsigned long)adapter); 3039 3039 adapter->phy_timer_pending = false; 3040 3040 3041 - INIT_WORK(&adapter->tx_timeout_task, atl1_tx_timeout_task); 3041 + INIT_WORK(&adapter->reset_dev_task, atl1_reset_dev_task); 3042 3042 3043 3043 INIT_WORK(&adapter->link_chg_task, atlx_link_chg_task); 3044 - 3045 - INIT_WORK(&adapter->pcie_dma_to_rst_task, atl1_tx_timeout_task); 3046 3044 3047 3045 err = register_netdev(netdev); 3048 3046 if (err)
+1 -2
drivers/net/ethernet/atheros/atlx/atl1.h
··· 758 758 u16 link_speed; 759 759 u16 link_duplex; 760 760 spinlock_t lock; 761 - struct work_struct tx_timeout_task; 761 + struct work_struct reset_dev_task; 762 762 struct work_struct link_chg_task; 763 - struct work_struct pcie_dma_to_rst_task; 764 763 765 764 struct timer_list phy_config_timer; 766 765 bool phy_timer_pending;
+1 -1
drivers/net/ethernet/atheros/atlx/atlx.c
··· 194 194 { 195 195 struct atlx_adapter *adapter = netdev_priv(netdev); 196 196 /* Do the reset outside of interrupt context */ 197 - schedule_work(&adapter->tx_timeout_task); 197 + schedule_work(&adapter->reset_dev_task); 198 198 } 199 199 200 200 /*
+6 -6
drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
··· 942 942 const u8 max_num_of_cos = (port) ? DCBX_E3B0_MAX_NUM_COS_PORT1 : 943 943 DCBX_E3B0_MAX_NUM_COS_PORT0; 944 944 945 + if (pri >= max_num_of_cos) { 946 + DP(NETIF_MSG_LINK, "bnx2x_ets_e3b0_sp_pri_to_cos_set invalid " 947 + "parameter Illegal strict priority\n"); 948 + return -EINVAL; 949 + } 950 + 945 951 if (sp_pri_to_cos[pri] != DCBX_INVALID_COS) { 946 952 DP(NETIF_MSG_LINK, "bnx2x_ets_e3b0_sp_pri_to_cos_set invalid " 947 953 "parameter There can't be two COS's with " 948 954 "the same strict pri\n"); 949 955 return -EINVAL; 950 - } 951 - 952 - if (pri > max_num_of_cos) { 953 - DP(NETIF_MSG_LINK, "bnx2x_ets_e3b0_sp_pri_to_cos_set invalid " 954 - "parameter Illegal strict priority\n"); 955 - return -EINVAL; 956 956 } 957 957 958 958 sp_pri_to_cos[pri] = cos_entry;
+10 -5
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 1310 1310 1311 1311 if (mac_reg & E1000_PHY_CTRL_D0A_LPLU) 1312 1312 oem_reg |= HV_OEM_BITS_LPLU; 1313 - 1314 - /* Set Restart auto-neg to activate the bits */ 1315 - if (!hw->phy.ops.check_reset_block(hw)) 1316 - oem_reg |= HV_OEM_BITS_RESTART_AN; 1317 1313 } else { 1318 1314 if (mac_reg & (E1000_PHY_CTRL_GBE_DISABLE | 1319 1315 E1000_PHY_CTRL_NOND0A_GBE_DISABLE)) ··· 1319 1323 E1000_PHY_CTRL_NOND0A_LPLU)) 1320 1324 oem_reg |= HV_OEM_BITS_LPLU; 1321 1325 } 1326 + 1327 + /* Set Restart auto-neg to activate the bits */ 1328 + if ((d0_state || (hw->mac.type != e1000_pchlan)) && 1329 + !hw->phy.ops.check_reset_block(hw)) 1330 + oem_reg |= HV_OEM_BITS_RESTART_AN; 1322 1331 1323 1332 ret_val = hw->phy.ops.write_reg_locked(hw, HV_OEM_BITS, oem_reg); 1324 1333 ··· 3683 3682 3684 3683 if (hw->mac.type >= e1000_pchlan) { 3685 3684 e1000_oem_bits_config_ich8lan(hw, false); 3686 - e1000_phy_hw_reset_ich8lan(hw); 3685 + 3686 + /* Reset PHY to activate OEM bits on 82577/8 */ 3687 + if (hw->mac.type == e1000_pchlan) 3688 + e1000e_phy_hw_reset_generic(hw); 3689 + 3687 3690 ret_val = hw->phy.ops.acquire(hw); 3688 3691 if (ret_val) 3689 3692 return;
+10
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
··· 622 622 if (adapter->hw.mac.type == ixgbe_mac_82599EB) 623 623 set_bit(__IXGBE_RX_CSUM_UDP_ZERO_ERR, &ring->state); 624 624 625 + #ifdef IXGBE_FCOE 626 + if (adapter->netdev->features & NETIF_F_FCOE_MTU) { 627 + struct ixgbe_ring_feature *f; 628 + f = &adapter->ring_feature[RING_F_FCOE]; 629 + if ((rxr_idx >= f->mask) && 630 + (rxr_idx < f->mask + f->indices)) 631 + set_bit(__IXGBE_RX_FCOE_BUFSZ, &ring->state); 632 + } 633 + 634 + #endif /* IXGBE_FCOE */ 625 635 /* apply Rx specific ring traits */ 626 636 ring->count = adapter->rx_ring_count; 627 637 ring->queue_index = rxr_idx;
+12 -8
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 3154 3154 set_ring_rsc_enabled(rx_ring); 3155 3155 else 3156 3156 clear_ring_rsc_enabled(rx_ring); 3157 - #ifdef IXGBE_FCOE 3158 - if (netdev->features & NETIF_F_FCOE_MTU) { 3159 - struct ixgbe_ring_feature *f; 3160 - f = &adapter->ring_feature[RING_F_FCOE]; 3161 - if ((i >= f->mask) && (i < f->mask + f->indices)) 3162 - set_bit(__IXGBE_RX_FCOE_BUFSZ, &rx_ring->state); 3163 - } 3164 - #endif /* IXGBE_FCOE */ 3165 3157 } 3166 3158 } 3167 3159 ··· 4828 4836 4829 4837 pci_wake_from_d3(pdev, false); 4830 4838 4839 + rtnl_lock(); 4831 4840 err = ixgbe_init_interrupt_scheme(adapter); 4841 + rtnl_unlock(); 4832 4842 if (err) { 4833 4843 e_dev_err("Cannot initialize interrupts for device\n"); 4834 4844 return err; ··· 4886 4892 #endif 4887 4893 if (wufc) { 4888 4894 ixgbe_set_rx_mode(netdev); 4895 + 4896 + /* 4897 + * enable the optics for both mult-speed fiber and 4898 + * 82599 SFP+ fiber as we can WoL. 4899 + */ 4900 + if (hw->mac.ops.enable_tx_laser && 4901 + (hw->phy.multispeed_fiber || 4902 + (hw->mac.ops.get_media_type(hw) == ixgbe_media_type_fiber && 4903 + hw->mac.type == ixgbe_mac_82599EB))) 4904 + hw->mac.ops.enable_tx_laser(hw); 4889 4905 4890 4906 /* turn on all-multi mode if wake on multicast is enabled */ 4891 4907 if (wufc & IXGBE_WUFC_MC) {
+11 -10
drivers/net/ethernet/micrel/ks8851.c
··· 889 889 netif_stop_queue(dev); 890 890 891 891 mutex_lock(&ks->lock); 892 + /* turn off the IRQs and ack any outstanding */ 893 + ks8851_wrreg16(ks, KS_IER, 0x0000); 894 + ks8851_wrreg16(ks, KS_ISR, 0xffff); 895 + mutex_unlock(&ks->lock); 892 896 893 897 /* stop any outstanding work */ 894 898 flush_work(&ks->irq_work); 895 899 flush_work(&ks->tx_work); 896 900 flush_work(&ks->rxctrl_work); 897 901 898 - /* turn off the IRQs and ack any outstanding */ 899 - ks8851_wrreg16(ks, KS_IER, 0x0000); 900 - ks8851_wrreg16(ks, KS_ISR, 0xffff); 901 - 902 + mutex_lock(&ks->lock); 902 903 /* shutdown RX process */ 903 904 ks8851_wrreg16(ks, KS_RXCR1, 0x0000); 904 905 ··· 908 907 909 908 /* set powermode to soft power down to save power */ 910 909 ks8851_set_powermode(ks, PMECR_PM_SOFTDOWN); 910 + mutex_unlock(&ks->lock); 911 911 912 912 /* ensure any queued tx buffers are dumped */ 913 913 while (!skb_queue_empty(&ks->txq)) { ··· 920 918 dev_kfree_skb(txb); 921 919 } 922 920 923 - mutex_unlock(&ks->lock); 924 921 return 0; 925 922 } 926 923 ··· 1419 1418 struct net_device *ndev; 1420 1419 struct ks8851_net *ks; 1421 1420 int ret; 1421 + unsigned cider; 1422 1422 1423 1423 ndev = alloc_etherdev(sizeof(struct ks8851_net)); 1424 1424 if (!ndev) ··· 1486 1484 ks8851_soft_reset(ks, GRR_GSR); 1487 1485 1488 1486 /* simple check for a valid chip being connected to the bus */ 1489 - 1490 - if ((ks8851_rdreg16(ks, KS_CIDER) & ~CIDER_REV_MASK) != CIDER_ID) { 1487 + cider = ks8851_rdreg16(ks, KS_CIDER); 1488 + if ((cider & ~CIDER_REV_MASK) != CIDER_ID) { 1491 1489 dev_err(&spi->dev, "failed to read device ID\n"); 1492 1490 ret = -ENODEV; 1493 1491 goto err_id; ··· 1518 1516 } 1519 1517 1520 1518 netdev_info(ndev, "revision %d, MAC %pM, IRQ %d, %s EEPROM\n", 1521 - CIDER_REV_GET(ks8851_rdreg16(ks, KS_CIDER)), 1522 - ndev->dev_addr, ndev->irq, 1519 + CIDER_REV_GET(cider), ndev->dev_addr, ndev->irq, 1523 1520 ks->rc_ccr & CCR_EEPROM ? "has" : "no"); 1524 1521 1525 1522 return 0; 1526 1523 1527 1524 1528 1525 err_netdev: 1529 - free_irq(ndev->irq, ndev); 1526 + free_irq(ndev->irq, ks); 1530 1527 1531 1528 err_id: 1532 1529 err_irq:
+1 -1
drivers/net/ethernet/micrel/ks8851_mll.c
··· 40 40 #define DRV_NAME "ks8851_mll" 41 41 42 42 static u8 KS_DEFAULT_MAC_ADDRESS[] = { 0x00, 0x10, 0xA1, 0x86, 0x95, 0x11 }; 43 - #define MAX_RECV_FRAMES 32 43 + #define MAX_RECV_FRAMES 255 44 44 #define MAX_BUF_SIZE 2048 45 45 #define TX_BUF_SIZE 2000 46 46 #define RX_BUF_SIZE 2000
+1 -1
drivers/net/ethernet/micrel/ksz884x.c
··· 5675 5675 memcpy(hw->override_addr, mac->sa_data, ETH_ALEN); 5676 5676 } 5677 5677 5678 - memcpy(dev->dev_addr, mac->sa_data, MAX_ADDR_LEN); 5678 + memcpy(dev->dev_addr, mac->sa_data, ETH_ALEN); 5679 5679 5680 5680 interrupt = hw_block_intr(hw); 5681 5681
+8 -2
drivers/net/ethernet/realtek/8139cp.c
··· 958 958 cpw8(Cmd, RxOn | TxOn); 959 959 } 960 960 961 + static void cp_enable_irq(struct cp_private *cp) 962 + { 963 + cpw16_f(IntrMask, cp_intr_mask); 964 + } 965 + 961 966 static void cp_init_hw (struct cp_private *cp) 962 967 { 963 968 struct net_device *dev = cp->dev; ··· 1001 996 cpw32_f(TxRingAddr + 4, (ring_dma >> 16) >> 16); 1002 997 1003 998 cpw16(MultiIntr, 0); 1004 - 1005 - cpw16_f(IntrMask, cp_intr_mask); 1006 999 1007 1000 cpw8_f(Cfg9346, Cfg9346_Lock); 1008 1001 } ··· 1132 1129 rc = request_irq(dev->irq, cp_interrupt, IRQF_SHARED, dev->name, dev); 1133 1130 if (rc) 1134 1131 goto err_out_hw; 1132 + 1133 + cp_enable_irq(cp); 1135 1134 1136 1135 netif_carrier_off(dev); 1137 1136 mii_check_media(&cp->mii_if, netif_msg_link(cp), true); ··· 2036 2031 /* FIXME: sh*t may happen if the Rx ring buffer is depleted */ 2037 2032 cp_init_rings_index (cp); 2038 2033 cp_init_hw (cp); 2034 + cp_enable_irq(cp); 2039 2035 netif_start_queue (dev); 2040 2036 2041 2037 spin_lock_irqsave (&cp->lock, flags);
+6 -11
drivers/net/ethernet/smsc/smsc911x.c
··· 1166 1166 1167 1167 /* Quickly dumps bad packets */ 1168 1168 static void 1169 - smsc911x_rx_fastforward(struct smsc911x_data *pdata, unsigned int pktbytes) 1169 + smsc911x_rx_fastforward(struct smsc911x_data *pdata, unsigned int pktwords) 1170 1170 { 1171 - unsigned int pktwords = (pktbytes + NET_IP_ALIGN + 3) >> 2; 1172 - 1173 1171 if (likely(pktwords >= 4)) { 1174 1172 unsigned int timeout = 500; 1175 1173 unsigned int val; ··· 1231 1233 continue; 1232 1234 } 1233 1235 1234 - skb = netdev_alloc_skb(dev, pktlength + NET_IP_ALIGN); 1236 + skb = netdev_alloc_skb(dev, pktwords << 2); 1235 1237 if (unlikely(!skb)) { 1236 1238 SMSC_WARN(pdata, rx_err, 1237 1239 "Unable to allocate skb for rx packet"); ··· 1241 1243 break; 1242 1244 } 1243 1245 1244 - skb->data = skb->head; 1245 - skb_reset_tail_pointer(skb); 1246 + pdata->ops->rx_readfifo(pdata, 1247 + (unsigned int *)skb->data, pktwords); 1246 1248 1247 1249 /* Align IP on 16B boundary */ 1248 1250 skb_reserve(skb, NET_IP_ALIGN); 1249 1251 skb_put(skb, pktlength - 4); 1250 - pdata->ops->rx_readfifo(pdata, 1251 - (unsigned int *)skb->head, pktwords); 1252 1252 skb->protocol = eth_type_trans(skb, dev); 1253 1253 skb_checksum_none_assert(skb); 1254 1254 netif_receive_skb(skb); ··· 1561 1565 smsc911x_reg_write(pdata, FIFO_INT, temp); 1562 1566 1563 1567 /* set RX Data offset to 2 bytes for alignment */ 1564 - smsc911x_reg_write(pdata, RX_CFG, (2 << 8)); 1568 + smsc911x_reg_write(pdata, RX_CFG, (NET_IP_ALIGN << 8)); 1565 1569 1566 1570 /* enable NAPI polling before enabling RX interrupts */ 1567 1571 napi_enable(&pdata->napi); ··· 2378 2382 SET_NETDEV_DEV(dev, &pdev->dev); 2379 2383 2380 2384 pdata = netdev_priv(dev); 2381 - 2382 2385 dev->irq = irq_res->start; 2383 2386 irq_flags = irq_res->flags & IRQF_TRIGGER_MASK; 2384 2387 pdata->ioaddr = ioremap_nocache(res->start, res_size); ··· 2441 2446 if (retval) { 2442 2447 SMSC_WARN(pdata, probe, 2443 2448 "Unable to claim requested irq: %d", dev->irq); 2444 - goto out_free_irq; 2449 + goto out_disable_resources; 2445 2450 } 2446 2451 2447 2452 retval = register_netdev(dev);
+5
drivers/net/ethernet/ti/davinci_mdio.c
··· 181 181 __davinci_mdio_reset(data); 182 182 return -EAGAIN; 183 183 } 184 + 185 + reg = __raw_readl(&regs->user[0].access); 186 + if ((reg & USERACCESS_GO) == 0) 187 + return 0; 188 + 184 189 dev_err(data->dev, "timed out waiting for user access\n"); 185 190 return -ETIMEDOUT; 186 191 }
+1 -3
drivers/net/ethernet/xilinx/xilinx_axienet.h
··· 2 2 * Definitions for Xilinx Axi Ethernet device driver. 3 3 * 4 4 * Copyright (c) 2009 Secret Lab Technologies, Ltd. 5 - * Copyright (c) 2010 Xilinx, Inc. All rights reserved. 6 - * Copyright (c) 2012 Daniel Borkmann, <daniel.borkmann@tik.ee.ethz.ch> 7 - * Copyright (c) 2012 Ariane Keller, <ariane.keller@tik.ee.ethz.ch> 5 + * Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved. 8 6 */ 9 7 10 8 #ifndef XILINX_AXIENET_H
+3 -3
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 4 4 * Copyright (c) 2008 Nissin Systems Co., Ltd., Yoshio Kashiwagi 5 5 * Copyright (c) 2005-2008 DLA Systems, David H. Lynch Jr. <dhlii@dlasys.net> 6 6 * Copyright (c) 2008-2009 Secret Lab Technologies Ltd. 7 - * Copyright (c) 2010 Xilinx, Inc. All rights reserved. 8 - * Copyright (c) 2012 Daniel Borkmann, <daniel.borkmann@tik.ee.ethz.ch> 9 - * Copyright (c) 2012 Ariane Keller, <ariane.keller@tik.ee.ethz.ch> 7 + * Copyright (c) 2010 - 2011 Michal Simek <monstr@monstr.eu> 8 + * Copyright (c) 2010 - 2011 PetaLogix 9 + * Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved. 10 10 * 11 11 * This is a driver for the Xilinx Axi Ethernet which is used in the Virtex6 12 12 * and Spartan6.
+3 -3
drivers/net/ethernet/xilinx/xilinx_axienet_mdio.c
··· 2 2 * MDIO bus driver for the Xilinx Axi Ethernet device 3 3 * 4 4 * Copyright (c) 2009 Secret Lab Technologies, Ltd. 5 - * Copyright (c) 2010 Xilinx, Inc. All rights reserved. 6 - * Copyright (c) 2012 Daniel Borkmann, <daniel.borkmann@tik.ee.ethz.ch> 7 - * Copyright (c) 2012 Ariane Keller, <ariane.keller@tik.ee.ethz.ch> 5 + * Copyright (c) 2010 - 2011 Michal Simek <monstr@monstr.eu> 6 + * Copyright (c) 2010 - 2011 PetaLogix 7 + * Copyright (c) 2010 - 2012 Xilinx, Inc. All rights reserved. 8 8 */ 9 9 10 10 #include <linux/of_address.h>
+14 -24
drivers/net/hyperv/netvsc_drv.c
··· 44 44 /* point back to our device context */ 45 45 struct hv_device *device_ctx; 46 46 struct delayed_work dwork; 47 + struct work_struct work; 47 48 }; 48 49 49 50 ··· 52 51 module_param(ring_size, int, S_IRUGO); 53 52 MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)"); 54 53 55 - struct set_multicast_work { 56 - struct work_struct work; 57 - struct net_device *net; 58 - }; 59 - 60 54 static void do_set_multicast(struct work_struct *w) 61 55 { 62 - struct set_multicast_work *swk = 63 - container_of(w, struct set_multicast_work, work); 64 - struct net_device *net = swk->net; 65 - 66 - struct net_device_context *ndevctx = netdev_priv(net); 56 + struct net_device_context *ndevctx = 57 + container_of(w, struct net_device_context, work); 67 58 struct netvsc_device *nvdev; 68 59 struct rndis_device *rdev; 69 60 70 61 nvdev = hv_get_drvdata(ndevctx->device_ctx); 71 - if (nvdev == NULL) 72 - goto out; 62 + if (nvdev == NULL || nvdev->ndev == NULL) 63 + return; 73 64 74 65 rdev = nvdev->extension; 75 66 if (rdev == NULL) 76 - goto out; 67 + return; 77 68 78 - if (net->flags & IFF_PROMISC) 69 + if (nvdev->ndev->flags & IFF_PROMISC) 79 70 rndis_filter_set_packet_filter(rdev, 80 71 NDIS_PACKET_TYPE_PROMISCUOUS); 81 72 else ··· 75 82 NDIS_PACKET_TYPE_BROADCAST | 76 83 NDIS_PACKET_TYPE_ALL_MULTICAST | 77 84 NDIS_PACKET_TYPE_DIRECTED); 78 - 79 - out: 80 - kfree(w); 81 85 } 82 86 83 87 static void netvsc_set_multicast_list(struct net_device *net) 84 88 { 85 - struct set_multicast_work *swk = 86 - kmalloc(sizeof(struct set_multicast_work), GFP_ATOMIC); 87 - if (swk == NULL) 88 - return; 89 + struct net_device_context *net_device_ctx = netdev_priv(net); 89 90 90 - swk->net = net; 91 - INIT_WORK(&swk->work, do_set_multicast); 92 - schedule_work(&swk->work); 91 + schedule_work(&net_device_ctx->work); 93 92 } 94 93 95 94 static int netvsc_open(struct net_device *net) ··· 110 125 111 126 netif_tx_disable(net); 112 127 128 + /* Make sure netvsc_set_multicast_list doesn't re-enable filter! */ 129 + cancel_work_sync(&net_device_ctx->work); 113 130 ret = rndis_filter_close(device_obj); 114 131 if (ret != 0) 115 132 netdev_err(net, "unable to close device (ret %d).\n", ret); ··· 322 335 323 336 nvdev->start_remove = true; 324 337 cancel_delayed_work_sync(&ndevctx->dwork); 338 + cancel_work_sync(&ndevctx->work); 325 339 netif_tx_disable(ndev); 326 340 rndis_filter_device_remove(hdev); 327 341 ··· 391 403 net_device_ctx->device_ctx = dev; 392 404 hv_set_drvdata(dev, net); 393 405 INIT_DELAYED_WORK(&net_device_ctx->dwork, netvsc_send_garp); 406 + INIT_WORK(&net_device_ctx->work, do_set_multicast); 394 407 395 408 net->netdev_ops = &device_ops; 396 409 ··· 445 456 446 457 ndev_ctx = netdev_priv(net); 447 458 cancel_delayed_work_sync(&ndev_ctx->dwork); 459 + cancel_work_sync(&ndev_ctx->work); 448 460 449 461 /* Stop outbound asap */ 450 462 netif_tx_disable(net);
+11 -1
drivers/net/phy/icplus.c
··· 40 40 #define IP1001_PHASE_SEL_MASK 3 /* IP1001 RX/TXPHASE_SEL */ 41 41 #define IP1001_APS_ON 11 /* IP1001 APS Mode bit */ 42 42 #define IP101A_G_APS_ON 2 /* IP101A/G APS Mode bit */ 43 + #define IP101A_G_IRQ_CONF_STATUS 0x11 /* Conf Info IRQ & Status Reg */ 43 44 44 45 static int ip175c_config_init(struct phy_device *phydev) 45 46 { ··· 186 185 return 0; 187 186 } 188 187 188 + static int ip101a_g_ack_interrupt(struct phy_device *phydev) 189 + { 190 + int err = phy_read(phydev, IP101A_G_IRQ_CONF_STATUS); 191 + if (err < 0) 192 + return err; 193 + 194 + return 0; 195 + } 196 + 189 197 static struct phy_driver ip175c_driver = { 190 198 .phy_id = 0x02430d80, 191 199 .name = "ICPlus IP175C", ··· 214 204 .phy_id_mask = 0x0ffffff0, 215 205 .features = PHY_GBIT_FEATURES | SUPPORTED_Pause | 216 206 SUPPORTED_Asym_Pause, 217 - .flags = PHY_HAS_INTERRUPT, 218 207 .config_init = &ip1001_config_init, 219 208 .config_aneg = &genphy_config_aneg, 220 209 .read_status = &genphy_read_status, ··· 229 220 .features = PHY_BASIC_FEATURES | SUPPORTED_Pause | 230 221 SUPPORTED_Asym_Pause, 231 222 .flags = PHY_HAS_INTERRUPT, 223 + .ack_interrupt = ip101a_g_ack_interrupt, 232 224 .config_init = &ip101a_g_config_init, 233 225 .config_aneg = &genphy_config_aneg, 234 226 .read_status = &genphy_read_status,
+6 -9
drivers/net/ppp/ppp_generic.c
··· 235 235 /* Prototypes. */ 236 236 static int ppp_unattached_ioctl(struct net *net, struct ppp_file *pf, 237 237 struct file *file, unsigned int cmd, unsigned long arg); 238 - static int ppp_xmit_process(struct ppp *ppp); 238 + static void ppp_xmit_process(struct ppp *ppp); 239 239 static void ppp_send_frame(struct ppp *ppp, struct sk_buff *skb); 240 240 static void ppp_push(struct ppp *ppp); 241 241 static void ppp_channel_push(struct channel *pch); ··· 969 969 put_unaligned_be16(proto, pp); 970 970 971 971 skb_queue_tail(&ppp->file.xq, skb); 972 - if (!ppp_xmit_process(ppp)) 973 - netif_stop_queue(dev); 972 + ppp_xmit_process(ppp); 974 973 return NETDEV_TX_OK; 975 974 976 975 outf: ··· 1047 1048 * Called to do any work queued up on the transmit side 1048 1049 * that can now be done. 1049 1050 */ 1050 - static int 1051 + static void 1051 1052 ppp_xmit_process(struct ppp *ppp) 1052 1053 { 1053 1054 struct sk_buff *skb; 1054 - int ret = 0; 1055 1055 1056 1056 ppp_xmit_lock(ppp); 1057 1057 if (!ppp->closing) { ··· 1060 1062 ppp_send_frame(ppp, skb); 1061 1063 /* If there's no work left to do, tell the core net 1062 1064 code that we can accept some more. */ 1063 - if (!ppp->xmit_pending && !skb_peek(&ppp->file.xq)) { 1065 + if (!ppp->xmit_pending && !skb_peek(&ppp->file.xq)) 1064 1066 netif_wake_queue(ppp->dev); 1065 - ret = 1; 1066 - } 1067 + else 1068 + netif_stop_queue(ppp->dev); 1067 1069 } 1068 1070 ppp_xmit_unlock(ppp); 1069 - return ret; 1070 1071 } 1071 1072 1072 1073 static inline struct sk_buff *
+30
drivers/net/usb/qmi_wwan.c
··· 365 365 .data = BIT(4), /* interface whitelist bitmap */ 366 366 }; 367 367 368 + /* Sierra Wireless provide equally useless interface descriptors 369 + * Devices in QMI mode can be switched between two different 370 + * configurations: 371 + * a) USB interface #8 is QMI/wwan 372 + * b) USB interfaces #8, #19 and #20 are QMI/wwan 373 + * 374 + * Both configurations provide a number of other interfaces (serial++), 375 + * some of which have the same endpoint configuration as we expect, so 376 + * a whitelist or blacklist is necessary. 377 + * 378 + * FIXME: The below whitelist should include BIT(20). It does not 379 + * because I cannot get it to work... 380 + */ 381 + static const struct driver_info qmi_wwan_sierra = { 382 + .description = "Sierra Wireless wwan/QMI device", 383 + .flags = FLAG_WWAN, 384 + .bind = qmi_wwan_bind_gobi, 385 + .unbind = qmi_wwan_unbind_shared, 386 + .manage_power = qmi_wwan_manage_power, 387 + .data = BIT(8) | BIT(19), /* interface whitelist bitmap */ 388 + }; 368 389 369 390 #define HUAWEI_VENDOR_ID 0x12D1 370 391 #define QMI_GOBI_DEVICE(vend, prod) \ ··· 465 444 .bInterfaceSubClass = 0xff, 466 445 .bInterfaceProtocol = 0xff, 467 446 .driver_info = (unsigned long)&qmi_wwan_force_int4, 447 + }, 448 + { /* Sierra Wireless MC77xx in QMI mode */ 449 + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO, 450 + .idVendor = 0x1199, 451 + .idProduct = 0x68a2, 452 + .bInterfaceClass = 0xff, 453 + .bInterfaceSubClass = 0xff, 454 + .bInterfaceProtocol = 0xff, 455 + .driver_info = (unsigned long)&qmi_wwan_sierra, 468 456 }, 469 457 {QMI_GOBI_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */ 470 458 {QMI_GOBI_DEVICE(0x03f0, 0x1f1d)}, /* HP un2400 Gobi Modem Device */
+1
drivers/net/usb/smsc75xx.c
··· 1051 1051 dev->net->ethtool_ops = &smsc75xx_ethtool_ops; 1052 1052 dev->net->flags |= IFF_MULTICAST; 1053 1053 dev->net->hard_header_len += SMSC75XX_TX_OVERHEAD; 1054 + dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; 1054 1055 return 0; 1055 1056 } 1056 1057
+2 -3
drivers/net/virtio_net.c
··· 626 626 /* This can happen with OOM and indirect buffers. */ 627 627 if (unlikely(capacity < 0)) { 628 628 if (likely(capacity == -ENOMEM)) { 629 - if (net_ratelimit()) { 629 + if (net_ratelimit()) 630 630 dev_warn(&dev->dev, 631 631 "TX queue failure: out of memory\n"); 632 - } else { 632 + } else { 633 633 dev->stats.tx_fifo_errors++; 634 634 if (net_ratelimit()) 635 635 dev_warn(&dev->dev, 636 636 "Unexpected TX queue failure: %d\n", 637 637 capacity); 638 - } 639 638 } 640 639 dev->stats.tx_dropped++; 641 640 kfree_skb(skb);
+1
drivers/net/wan/farsync.c
··· 2483 2483 pr_err("Control memory remap failed\n"); 2484 2484 pci_release_regions(pdev); 2485 2485 pci_disable_device(pdev); 2486 + iounmap(card->mem); 2486 2487 kfree(card); 2487 2488 return -ENODEV; 2488 2489 }
+5 -2
drivers/net/wireless/ath/ath5k/ahb.c
··· 19 19 #include <linux/nl80211.h> 20 20 #include <linux/platform_device.h> 21 21 #include <linux/etherdevice.h> 22 + #include <linux/export.h> 22 23 #include <ar231x_platform.h> 23 24 #include "ath5k.h" 24 25 #include "debug.h" ··· 120 119 if (res == NULL) { 121 120 dev_err(&pdev->dev, "no IRQ resource found\n"); 122 121 ret = -ENXIO; 123 - goto err_out; 122 + goto err_iounmap; 124 123 } 125 124 126 125 irq = res->start; ··· 129 128 if (hw == NULL) { 130 129 dev_err(&pdev->dev, "no memory for ieee80211_hw\n"); 131 130 ret = -ENOMEM; 132 - goto err_out; 131 + goto err_iounmap; 133 132 } 134 133 135 134 ah = hw->priv; ··· 186 185 err_free_hw: 187 186 ieee80211_free_hw(hw); 188 187 platform_set_drvdata(pdev, NULL); 188 + err_iounmap: 189 + iounmap(mem); 189 190 err_out: 190 191 return ret; 191 192 }
+8 -1
drivers/net/wireless/ath/ath9k/main.c
··· 1548 1548 struct ath_hw *ah = sc->sc_ah; 1549 1549 struct ath_common *common = ath9k_hw_common(ah); 1550 1550 struct ieee80211_conf *conf = &hw->conf; 1551 + bool reset_channel = false; 1551 1552 1552 1553 ath9k_ps_wakeup(sc); 1553 1554 mutex_lock(&sc->mutex); ··· 1557 1556 sc->ps_idle = !!(conf->flags & IEEE80211_CONF_IDLE); 1558 1557 if (sc->ps_idle) 1559 1558 ath_cancel_work(sc); 1559 + else 1560 + /* 1561 + * The chip needs a reset to properly wake up from 1562 + * full sleep 1563 + */ 1564 + reset_channel = ah->chip_fullsleep; 1560 1565 } 1561 1566 1562 1567 /* ··· 1591 1584 } 1592 1585 } 1593 1586 1594 - if (changed & IEEE80211_CONF_CHANGE_CHANNEL) { 1587 + if ((changed & IEEE80211_CONF_CHANGE_CHANNEL) || reset_channel) { 1595 1588 struct ieee80211_channel *curchan = hw->conf.channel; 1596 1589 int pos = curchan->hw_value; 1597 1590 int old_pos = -1;
+9 -1
drivers/net/wireless/ath/ath9k/xmit.c
··· 1820 1820 struct ath_frame_info *fi = get_frame_info(skb); 1821 1821 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 1822 1822 struct ath_buf *bf; 1823 + int fragno; 1823 1824 u16 seqno; 1824 1825 1825 1826 bf = ath_tx_get_buffer(sc); ··· 1832 1831 ATH_TXBUF_RESET(bf); 1833 1832 1834 1833 if (tid) { 1834 + fragno = le16_to_cpu(hdr->seq_ctrl) & IEEE80211_SCTL_FRAG; 1835 1835 seqno = tid->seq_next; 1836 1836 hdr->seq_ctrl = cpu_to_le16(tid->seq_next << IEEE80211_SEQ_SEQ_SHIFT); 1837 - INCR(tid->seq_next, IEEE80211_SEQ_MAX); 1837 + 1838 + if (fragno) 1839 + hdr->seq_ctrl |= cpu_to_le16(fragno); 1840 + 1841 + if (!ieee80211_has_morefrags(hdr->frame_control)) 1842 + INCR(tid->seq_next, IEEE80211_SEQ_MAX); 1843 + 1838 1844 bf->bf_state.seqno = seqno; 1839 1845 } 1840 1846
+8
drivers/net/wireless/brcm80211/brcmsmac/main.c
··· 7614 7614 { 7615 7615 int len_mpdu; 7616 7616 struct ieee80211_rx_status rx_status; 7617 + struct ieee80211_hdr *hdr; 7617 7618 7618 7619 memset(&rx_status, 0, sizeof(rx_status)); 7619 7620 prep_mac80211_status(wlc, rxh, p, &rx_status); ··· 7623 7622 len_mpdu = p->len - D11_PHY_HDR_LEN - FCS_LEN; 7624 7623 skb_pull(p, D11_PHY_HDR_LEN); 7625 7624 __skb_trim(p, len_mpdu); 7625 + 7626 + /* unmute transmit */ 7627 + if (wlc->hw->suspended_fifos) { 7628 + hdr = (struct ieee80211_hdr *)p->data; 7629 + if (ieee80211_is_beacon(hdr->frame_control)) 7630 + brcms_b_mute(wlc->hw, false); 7631 + } 7626 7632 7627 7633 memcpy(IEEE80211_SKB_RXCB(p), &rx_status, sizeof(rx_status)); 7628 7634 ieee80211_rx_irqsafe(wlc->pub->ieee_hw, p);
+7 -2
drivers/net/wireless/libertas/cfg.c
··· 103 103 * Convert NL80211's auth_type to the one from Libertas, see chapter 5.9.1 104 104 * in the firmware spec 105 105 */ 106 - static u8 lbs_auth_to_authtype(enum nl80211_auth_type auth_type) 106 + static int lbs_auth_to_authtype(enum nl80211_auth_type auth_type) 107 107 { 108 108 int ret = -ENOTSUPP; 109 109 ··· 1411 1411 goto done; 1412 1412 } 1413 1413 1414 - lbs_set_authtype(priv, sme); 1414 + ret = lbs_set_authtype(priv, sme); 1415 + if (ret == -ENOTSUPP) { 1416 + wiphy_err(wiphy, "unsupported authtype 0x%x\n", sme->auth_type); 1417 + goto done; 1418 + } 1419 + 1415 1420 lbs_set_radio(priv, preamble, 1); 1416 1421 1417 1422 /* Do the actual association */
+9 -9
drivers/net/wireless/mwifiex/pcie.h
··· 48 48 #define PCIE_HOST_INT_STATUS_MASK 0xC3C 49 49 #define PCIE_SCRATCH_2_REG 0xC40 50 50 #define PCIE_SCRATCH_3_REG 0xC44 51 - #define PCIE_SCRATCH_4_REG 0xCC0 52 - #define PCIE_SCRATCH_5_REG 0xCC4 53 - #define PCIE_SCRATCH_6_REG 0xCC8 54 - #define PCIE_SCRATCH_7_REG 0xCCC 55 - #define PCIE_SCRATCH_8_REG 0xCD0 56 - #define PCIE_SCRATCH_9_REG 0xCD4 57 - #define PCIE_SCRATCH_10_REG 0xCD8 58 - #define PCIE_SCRATCH_11_REG 0xCDC 59 - #define PCIE_SCRATCH_12_REG 0xCE0 51 + #define PCIE_SCRATCH_4_REG 0xCD0 52 + #define PCIE_SCRATCH_5_REG 0xCD4 53 + #define PCIE_SCRATCH_6_REG 0xCD8 54 + #define PCIE_SCRATCH_7_REG 0xCDC 55 + #define PCIE_SCRATCH_8_REG 0xCE0 56 + #define PCIE_SCRATCH_9_REG 0xCE4 57 + #define PCIE_SCRATCH_10_REG 0xCE8 58 + #define PCIE_SCRATCH_11_REG 0xCEC 59 + #define PCIE_SCRATCH_12_REG 0xCF0 60 60 61 61 #define CPU_INTR_DNLD_RDY BIT(0) 62 62 #define CPU_INTR_DOOR_BELL BIT(1)
+1
drivers/pci/Makefile
··· 42 42 obj-$(CONFIG_PARISC) += setup-bus.o 43 43 obj-$(CONFIG_SUPERH) += setup-bus.o setup-irq.o 44 44 obj-$(CONFIG_PPC) += setup-bus.o 45 + obj-$(CONFIG_FRV) += setup-bus.o 45 46 obj-$(CONFIG_MIPS) += setup-bus.o setup-irq.o 46 47 obj-$(CONFIG_X86_VISWS) += setup-irq.o 47 48 obj-$(CONFIG_MN10300) += setup-bus.o
+45 -22
drivers/platform/x86/acerhdf.c
··· 50 50 */ 51 51 #undef START_IN_KERNEL_MODE 52 52 53 - #define DRV_VER "0.5.24" 53 + #define DRV_VER "0.5.26" 54 54 55 55 /* 56 56 * According to the Atom N270 datasheet, ··· 83 83 #endif 84 84 85 85 static unsigned int interval = 10; 86 - static unsigned int fanon = 63000; 87 - static unsigned int fanoff = 58000; 86 + static unsigned int fanon = 60000; 87 + static unsigned int fanoff = 53000; 88 88 static unsigned int verbose; 89 89 static unsigned int fanstate = ACERHDF_FAN_AUTO; 90 90 static char force_bios[16]; ··· 150 150 {"Acer", "AOA150", "v0.3308", 0x55, 0x58, {0x20, 0x00} }, 151 151 {"Acer", "AOA150", "v0.3309", 0x55, 0x58, {0x20, 0x00} }, 152 152 {"Acer", "AOA150", "v0.3310", 0x55, 0x58, {0x20, 0x00} }, 153 + /* LT1005u */ 154 + {"Acer", "LT-10Q", "v0.3310", 0x55, 0x58, {0x20, 0x00} }, 153 155 /* Acer 1410 */ 154 156 {"Acer", "Aspire 1410", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 155 157 {"Acer", "Aspire 1410", "v0.3113", 0x55, 0x58, {0x9e, 0x00} }, ··· 163 161 {"Acer", "Aspire 1410", "v1.3303", 0x55, 0x58, {0x9e, 0x00} }, 164 162 {"Acer", "Aspire 1410", "v1.3308", 0x55, 0x58, {0x9e, 0x00} }, 165 163 {"Acer", "Aspire 1410", "v1.3310", 0x55, 0x58, {0x9e, 0x00} }, 164 + {"Acer", "Aspire 1410", "v1.3314", 0x55, 0x58, {0x9e, 0x00} }, 166 165 /* Acer 1810xx */ 167 166 {"Acer", "Aspire 1810TZ", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 168 167 {"Acer", "Aspire 1810T", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, ··· 186 183 {"Acer", "Aspire 1810TZ", "v1.3310", 0x55, 0x58, {0x9e, 0x00} }, 187 184 {"Acer", "Aspire 1810T", "v1.3310", 0x55, 0x58, {0x9e, 0x00} }, 188 185 {"Acer", "Aspire 1810TZ", "v1.3314", 0x55, 0x58, {0x9e, 0x00} }, 186 + {"Acer", "Aspire 1810T", "v1.3314", 0x55, 0x58, {0x9e, 0x00} }, 189 187 /* Acer 531 */ 188 + {"Acer", "AO531h", "v0.3104", 0x55, 0x58, {0x20, 0x00} }, 190 189 {"Acer", "AO531h", "v0.3201", 0x55, 0x58, {0x20, 0x00} }, 190 + {"Acer", "AO531h", "v0.3304", 0x55, 0x58, {0x20, 0x00} }, 191 + /* Acer 751 */ 192 + {"Acer", "AO751h", "V0.3212", 0x55, 0x58, {0x21, 0x00} }, 193 + /* Acer 1825 */ 194 + {"Acer", "Aspire 1825PTZ", "V1.3118", 0x55, 0x58, {0x9e, 0x00} }, 195 + {"Acer", "Aspire 1825PTZ", "V1.3127", 0x55, 0x58, {0x9e, 0x00} }, 196 + /* Acer TravelMate 7730 */ 197 + {"Acer", "TravelMate 7730G", "v0.3509", 0x55, 0x58, {0xaf, 0x00} }, 191 198 /* Gateway */ 192 - {"Gateway", "AOA110", "v0.3103", 0x55, 0x58, {0x21, 0x00} }, 193 - {"Gateway", "AOA150", "v0.3103", 0x55, 0x58, {0x20, 0x00} }, 194 - {"Gateway", "LT31", "v1.3103", 0x55, 0x58, {0x9e, 0x00} }, 195 - {"Gateway", "LT31", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 196 - {"Gateway", "LT31", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 199 + {"Gateway", "AOA110", "v0.3103", 0x55, 0x58, {0x21, 0x00} }, 200 + {"Gateway", "AOA150", "v0.3103", 0x55, 0x58, {0x20, 0x00} }, 201 + {"Gateway", "LT31", "v1.3103", 0x55, 0x58, {0x9e, 0x00} }, 202 + {"Gateway", "LT31", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 203 + {"Gateway", "LT31", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 204 + {"Gateway", "LT31", "v1.3303t", 0x55, 0x58, {0x9e, 0x00} }, 197 205 /* Packard Bell */ 198 - {"Packard Bell", "DOA150", "v0.3104", 0x55, 0x58, {0x21, 0x00} }, 199 - {"Packard Bell", "DOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 200 - {"Packard Bell", "AOA110", "v0.3105", 0x55, 0x58, {0x21, 0x00} }, 201 - {"Packard Bell", "AOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 202 - {"Packard Bell", "DOTMU", "v1.3303", 0x55, 0x58, {0x9e, 0x00} }, 203 - {"Packard Bell", "DOTMU", "v0.3120", 0x55, 0x58, {0x9e, 0x00} }, 204 - {"Packard Bell", "DOTMU", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 205 - {"Packard Bell", "DOTMU", "v0.3113", 0x55, 0x58, {0x9e, 0x00} }, 206 - {"Packard Bell", "DOTMU", "v0.3115", 0x55, 0x58, {0x9e, 0x00} }, 207 - {"Packard Bell", "DOTMU", "v0.3117", 0x55, 0x58, {0x9e, 0x00} }, 208 - {"Packard Bell", "DOTMU", "v0.3119", 0x55, 0x58, {0x9e, 0x00} }, 209 - {"Packard Bell", "DOTMU", "v1.3204", 0x55, 0x58, {0x9e, 0x00} }, 210 - {"Packard Bell", "DOTMA", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 211 - {"Packard Bell", "DOTMA", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 206 + {"Packard Bell", "DOA150", "v0.3104", 0x55, 0x58, {0x21, 0x00} }, 207 + {"Packard Bell", "DOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 208 + {"Packard Bell", "AOA110", "v0.3105", 0x55, 0x58, {0x21, 0x00} }, 209 + {"Packard Bell", "AOA150", "v0.3105", 0x55, 0x58, {0x20, 0x00} }, 210 + {"Packard Bell", "ENBFT", "V1.3118", 0x55, 0x58, {0x9e, 0x00} }, 211 + {"Packard Bell", "ENBFT", "V1.3127", 0x55, 0x58, {0x9e, 0x00} }, 212 + {"Packard Bell", "DOTMU", "v1.3303", 0x55, 0x58, {0x9e, 0x00} }, 213 + {"Packard Bell", "DOTMU", "v0.3120", 0x55, 0x58, {0x9e, 0x00} }, 214 + {"Packard Bell", "DOTMU", "v0.3108", 0x55, 0x58, {0x9e, 0x00} }, 215 + {"Packard Bell", "DOTMU", "v0.3113", 0x55, 0x58, {0x9e, 0x00} }, 216 + {"Packard Bell", "DOTMU", "v0.3115", 0x55, 0x58, {0x9e, 0x00} }, 217 + {"Packard Bell", "DOTMU", "v0.3117", 0x55, 0x58, {0x9e, 0x00} }, 218 + {"Packard Bell", "DOTMU", "v0.3119", 0x55, 0x58, {0x9e, 0x00} }, 219 + {"Packard Bell", "DOTMU", "v1.3204", 0x55, 0x58, {0x9e, 0x00} }, 220 + {"Packard Bell", "DOTMA", "v1.3201", 0x55, 0x58, {0x9e, 0x00} }, 221 + {"Packard Bell", "DOTMA", "v1.3302", 0x55, 0x58, {0x9e, 0x00} }, 222 + {"Packard Bell", "DOTMA", "v1.3303t", 0x55, 0x58, {0x9e, 0x00} }, 223 + {"Packard Bell", "DOTVR46", "v1.3308", 0x55, 0x58, {0x9e, 0x00} }, 212 224 /* pewpew-terminator */ 213 225 {"", "", "", 0, 0, {0, 0} } 214 226 }; ··· 719 701 MODULE_AUTHOR("Peter Feuerer"); 720 702 MODULE_DESCRIPTION("Aspire One temperature and fan driver"); 721 703 MODULE_ALIAS("dmi:*:*Acer*:pnAOA*:"); 704 + MODULE_ALIAS("dmi:*:*Acer*:pnAO751h*:"); 722 705 MODULE_ALIAS("dmi:*:*Acer*:pnAspire*1410*:"); 723 706 MODULE_ALIAS("dmi:*:*Acer*:pnAspire*1810*:"); 707 + MODULE_ALIAS("dmi:*:*Acer*:pnAspire*1825PTZ:"); 724 708 MODULE_ALIAS("dmi:*:*Acer*:pnAO531*:"); 709 + MODULE_ALIAS("dmi:*:*Acer*:TravelMate*7730G:"); 725 710 MODULE_ALIAS("dmi:*:*Gateway*:pnAOA*:"); 726 711 MODULE_ALIAS("dmi:*:*Gateway*:pnLT31*:"); 727 712 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnAOA*:"); 728 713 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOA*:"); 729 714 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOTMU*:"); 715 + MODULE_ALIAS("dmi:*:*Packard*Bell*:pnENBFT*:"); 730 716 MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOTMA*:"); 717 + MODULE_ALIAS("dmi:*:*Packard*Bell*:pnDOTVR46*:"); 731 718 732 719 module_init(acerhdf_init); 733 720 module_exit(acerhdf_exit);
+1
drivers/platform/x86/dell-laptop.c
··· 212 212 }, 213 213 .driver_data = &quirk_dell_vostro_v130, 214 214 }, 215 + { } 215 216 }; 216 217 217 218 static struct calling_interface_buffer *buffer;
+1 -1
drivers/platform/x86/intel_ips.c
··· 1565 1565 ips->poll_turbo_status = true; 1566 1566 1567 1567 if (!ips_get_i915_syms(ips)) { 1568 - dev_err(&dev->dev, "failed to get i915 symbols, graphics turbo disabled\n"); 1568 + dev_info(&dev->dev, "failed to get i915 symbols, graphics turbo disabled until i915 loads\n"); 1569 1569 ips->gpu_turbo_enabled = false; 1570 1570 } else { 1571 1571 dev_dbg(&dev->dev, "graphics turbo enabled\n");
+1
drivers/rtc/rtc-ds1307.c
··· 902 902 } 903 903 ds1307->nvram->attr.name = "nvram"; 904 904 ds1307->nvram->attr.mode = S_IRUGO | S_IWUSR; 905 + sysfs_bin_attr_init(ds1307->nvram); 905 906 ds1307->nvram->read = ds1307_nvram_read, 906 907 ds1307->nvram->write = ds1307_nvram_write, 907 908 ds1307->nvram->size = chip->nvram_size;
+1 -1
drivers/spi/Kconfig
··· 74 74 This selects a driver for the Atmel SPI Controller, present on 75 75 many AT32 (AVR32) and AT91 (ARM) chips. 76 76 77 - config SPI_BFIN 77 + config SPI_BFIN5XX 78 78 tristate "SPI controller driver for ADI Blackfin5xx" 79 79 depends on BLACKFIN 80 80 help
+1 -1
drivers/spi/Makefile
··· 15 15 obj-$(CONFIG_SPI_ATH79) += spi-ath79.o 16 16 obj-$(CONFIG_SPI_AU1550) += spi-au1550.o 17 17 obj-$(CONFIG_SPI_BCM63XX) += spi-bcm63xx.o 18 - obj-$(CONFIG_SPI_BFIN) += spi-bfin5xx.o 18 + obj-$(CONFIG_SPI_BFIN5XX) += spi-bfin5xx.o 19 19 obj-$(CONFIG_SPI_BFIN_SPORT) += spi-bfin-sport.o 20 20 obj-$(CONFIG_SPI_BITBANG) += spi-bitbang.o 21 21 obj-$(CONFIG_SPI_BUTTERFLY) += spi-butterfly.o
+93 -72
drivers/spi/spi-bcm63xx.c
··· 1 1 /* 2 2 * Broadcom BCM63xx SPI controller support 3 3 * 4 - * Copyright (C) 2009-2011 Florian Fainelli <florian@openwrt.org> 4 + * Copyright (C) 2009-2012 Florian Fainelli <florian@openwrt.org> 5 5 * Copyright (C) 2010 Tanguy Bouzeloc <tanguy.bouzeloc@efixo.com> 6 6 * 7 7 * This program is free software; you can redistribute it and/or ··· 30 30 #include <linux/spi/spi.h> 31 31 #include <linux/completion.h> 32 32 #include <linux/err.h> 33 + #include <linux/workqueue.h> 34 + #include <linux/pm_runtime.h> 33 35 34 36 #include <bcm63xx_dev_spi.h> 35 37 ··· 39 37 #define DRV_VER "0.1.2" 40 38 41 39 struct bcm63xx_spi { 42 - spinlock_t lock; 43 - int stopping; 44 40 struct completion done; 45 41 46 42 void __iomem *regs; ··· 96 96 { 391000, SPI_CLK_0_391MHZ } 97 97 }; 98 98 99 - static int bcm63xx_spi_setup_transfer(struct spi_device *spi, 100 - struct spi_transfer *t) 99 + static int bcm63xx_spi_check_transfer(struct spi_device *spi, 100 + struct spi_transfer *t) 101 101 { 102 - struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 103 102 u8 bits_per_word; 104 - u8 clk_cfg, reg; 105 - u32 hz; 106 - int i; 107 103 108 104 bits_per_word = (t) ? t->bits_per_word : spi->bits_per_word; 109 - hz = (t) ? t->speed_hz : spi->max_speed_hz; 110 105 if (bits_per_word != 8) { 111 106 dev_err(&spi->dev, "%s, unsupported bits_per_word=%d\n", 112 107 __func__, bits_per_word); ··· 113 118 __func__, spi->chip_select); 114 119 return -EINVAL; 115 120 } 121 + 122 + return 0; 123 + } 124 + 125 + static void bcm63xx_spi_setup_transfer(struct spi_device *spi, 126 + struct spi_transfer *t) 127 + { 128 + struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 129 + u32 hz; 130 + u8 clk_cfg, reg; 131 + int i; 132 + 133 + hz = (t) ? t->speed_hz : spi->max_speed_hz; 116 134 117 135 /* Find the closest clock configuration */ 118 136 for (i = 0; i < SPI_CLK_MASK; i++) { ··· 147 139 bcm_spi_writeb(bs, reg, SPI_CLK_CFG); 148 140 dev_dbg(&spi->dev, "Setting clock register to %02x (hz %d)\n", 149 141 clk_cfg, hz); 150 - 151 - return 0; 152 142 } 153 143 154 144 /* the spi->mode bits understood by this driver: */ ··· 159 153 160 154 bs = spi_master_get_devdata(spi->master); 161 155 162 - if (bs->stopping) 163 - return -ESHUTDOWN; 164 - 165 156 if (!spi->bits_per_word) 166 157 spi->bits_per_word = 8; ··· 168 165 return -EINVAL; 169 166 } 170 167 171 - ret = bcm63xx_spi_setup_transfer(spi, NULL); 168 + ret = bcm63xx_spi_check_transfer(spi, NULL); 172 169 if (ret < 0) { 173 170 dev_err(&spi->dev, "setup: unsupported mode bits %x\n", 174 171 spi->mode & ~MODEBITS); ··· 193 190 bs->remaining_bytes -= size; 194 191 } 195 192 196 - static int bcm63xx_txrx_bufs(struct spi_device *spi, struct spi_transfer *t) 193 + static unsigned int bcm63xx_txrx_bufs(struct spi_device *spi, 194 + struct spi_transfer *t) 197 195 { 198 196 struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 199 197 u16 msg_ctl; 200 198 u16 cmd; 199 + 200 + /* Disable the CMD_DONE interrupt */ 201 + bcm_spi_writeb(bs, 0, SPI_INT_MASK); 201 202 202 203 dev_dbg(&spi->dev, "txrx: tx %p, rx %p, len %d\n", 203 204 t->tx_buf, t->rx_buf, t->len); ··· 209 202 /* Transmitter is inhibited */ 210 203 bs->tx_ptr = t->tx_buf; 211 204 bs->rx_ptr = t->rx_buf; 212 - init_completion(&bs->done); 213 205 214 206 if (t->tx_buf) { 215 207 bs->remaining_bytes = t->len; 216 208 bcm63xx_spi_fill_tx_fifo(bs); 217 209 } 218 210 219 - /* Enable the command done interrupt which 220 - * we use to determine completion of a command */ 221 - bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 211 + init_completion(&bs->done); 222 212 223 213 /* Fill in the Message control register */ 224 214 msg_ctl = (t->len << SPI_BYTE_CNT_SHIFT); ··· 234 230 cmd |= (0 << SPI_CMD_PREPEND_BYTE_CNT_SHIFT); 235 231 cmd |= (spi->chip_select << SPI_CMD_DEVICE_ID_SHIFT); 236 232 bcm_spi_writew(bs, cmd, SPI_CMD); 237 - wait_for_completion(&bs->done); 238 233 239 - /* Disable the CMD_DONE interrupt */ 240 - bcm_spi_writeb(bs, 0, SPI_INT_MASK); 234 + /* Enable the CMD_DONE interrupt */ 235 + bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 241 236 242 237 return t->len - bs->remaining_bytes; 243 238 } 244 239 245 - static int bcm63xx_transfer(struct spi_device *spi, struct spi_message *m) 240 + static int bcm63xx_spi_prepare_transfer(struct spi_master *master) 246 241 { 247 - struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 242 + struct bcm63xx_spi *bs = spi_master_get_devdata(master); 243 + 244 + pm_runtime_get_sync(&bs->pdev->dev); 245 + 246 + return 0; 247 + } 248 + 249 + static int bcm63xx_spi_unprepare_transfer(struct spi_master *master) 250 + { 251 + struct bcm63xx_spi *bs = spi_master_get_devdata(master); 252 + 253 + pm_runtime_put(&bs->pdev->dev); 254 + 255 + return 0; 256 + } 257 + 258 + static int bcm63xx_spi_transfer_one(struct spi_master *master, 259 + struct spi_message *m) 260 + { 261 + struct bcm63xx_spi *bs = spi_master_get_devdata(master); 248 262 struct spi_transfer *t; 249 - int ret = 0; 250 - 251 - if (unlikely(list_empty(&m->transfers))) 252 - return -EINVAL; 253 - 254 - if (bs->stopping) 255 - return -ESHUTDOWN; 263 + struct spi_device *spi = m->spi; 264 + int status = 0; 265 + unsigned int timeout = 0; 256 266 257 267 list_for_each_entry(t, &m->transfers, transfer_list) { 258 - ret += bcm63xx_txrx_bufs(spi, t); 268 + unsigned int len = t->len; 269 + u8 rx_tail; 270 + 271 + status = bcm63xx_spi_check_transfer(spi, t); 272 + if (status < 0) 273 + goto exit; 274 + 275 + /* configure adapter for a new transfer */ 276 + bcm63xx_spi_setup_transfer(spi, t); 277 + 278 + while (len) { 279 + /* send the data */ 280 + len -= bcm63xx_txrx_bufs(spi, t); 281 + 282 + timeout = wait_for_completion_timeout(&bs->done, 
HZ); 283 + if (!timeout) { 284 + status = -ETIMEDOUT; 285 + goto exit; 286 + } 287 + 288 + /* read out all data */ 289 + rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL); 290 + 291 + /* Read out all the data */ 292 + if (rx_tail) 293 + memcpy_fromio(bs->rx_ptr, bs->rx_io, rx_tail); 294 + } 295 + 296 + m->actual_length += t->len; 259 297 } 298 + exit: 299 + m->status = status; 300 + spi_finalize_current_message(master); 260 301 261 - m->complete(m->context); 262 - 263 - return ret; 302 + return 0; 264 303 } 265 304 266 305 /* This driver supports single master mode only. Hence ··· 314 267 struct spi_master *master = (struct spi_master *)dev_id; 315 268 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 316 269 u8 intr; 317 - u16 cmd; 318 270 319 271 /* Read interupts and clear them immediately */ 320 272 intr = bcm_spi_readb(bs, SPI_INT_STATUS); 321 273 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS); 322 274 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 323 275 324 - /* A tansfer completed */ 325 - if (intr & SPI_INTR_CMD_DONE) { 326 - u8 rx_tail; 327 - 328 - rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL); 329 - 330 - /* Read out all the data */ 331 - if (rx_tail) 332 - memcpy_fromio(bs->rx_ptr, bs->rx_io, rx_tail); 333 - 334 - /* See if there is more data to send */ 335 - if (bs->remaining_bytes > 0) { 336 - bcm63xx_spi_fill_tx_fifo(bs); 337 - 338 - /* Start the transfer */ 339 - bcm_spi_writew(bs, SPI_HD_W << SPI_MSG_TYPE_SHIFT, 340 - SPI_MSG_CTL); 341 - cmd = bcm_spi_readw(bs, SPI_CMD); 342 - cmd |= SPI_CMD_START_IMMEDIATE; 343 - cmd |= (0 << SPI_CMD_PREPEND_BYTE_CNT_SHIFT); 344 - bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 345 - bcm_spi_writew(bs, cmd, SPI_CMD); 346 - } else { 347 - complete(&bs->done); 348 - } 349 - } 276 + /* A transfer completed */ 277 + if (intr & SPI_INTR_CMD_DONE) 278 + complete(&bs->done); 350 279 351 280 return IRQ_HANDLED; 352 281 } ··· 368 345 } 369 346 370 347 bs = spi_master_get_devdata(master); 371 - init_completion(&bs->done); 372 
348 373 349 platform_set_drvdata(pdev, master); 374 350 bs->pdev = pdev; ··· 401 379 master->bus_num = pdata->bus_num; 402 380 master->num_chipselect = pdata->num_chipselect; 403 381 master->setup = bcm63xx_spi_setup; 404 - master->transfer = bcm63xx_transfer; 382 + master->prepare_transfer_hardware = bcm63xx_spi_prepare_transfer; 383 + master->unprepare_transfer_hardware = bcm63xx_spi_unprepare_transfer; 384 + master->transfer_one_message = bcm63xx_spi_transfer_one; 385 + master->mode_bits = MODEBITS; 405 386 bs->speed_hz = pdata->speed_hz; 406 - bs->stopping = 0; 407 387 bs->tx_io = (u8 *)(bs->regs + bcm63xx_spireg(SPI_MSG_DATA)); 408 388 bs->rx_io = (const u8 *)(bs->regs + bcm63xx_spireg(SPI_RX_DATA)); 409 - spin_lock_init(&bs->lock); 410 389 411 390 /* Initialize hardware */ 412 391 clk_enable(bs->clk); ··· 441 418 struct spi_master *master = platform_get_drvdata(pdev); 442 419 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 443 420 421 + spi_unregister_master(master); 422 + 444 423 /* reset spi block */ 445 424 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 446 - spin_lock(&bs->lock); 447 - bs->stopping = 1; 448 425 449 426 /* HW shutdown */ 450 427 clk_disable(bs->clk); 451 428 clk_put(bs->clk); 452 429 453 - spin_unlock(&bs->lock); 454 430 platform_set_drvdata(pdev, 0); 455 - spi_unregister_master(master); 456 431 457 432 return 0; 458 433 }
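The bcm63xx rework above splits each transfer into FIFO-sized chunks, with `bcm63xx_txrx_bufs()` returning how many bytes it consumed and the caller waiting on a completion (with a timeout) between chunks. A minimal user-space sketch of that chunking loop; `FIFO_SIZE`, `txrx_chunk` and `transfer` are illustrative names, not the driver's, and the completion wait is only indicated by a comment:

```c
#include <assert.h>

#define FIFO_SIZE 8  /* illustrative depth, not the real bcm63xx FIFO */

/* Each call "sends" up to one FIFO worth and reports how many bytes
 * it consumed, like bcm63xx_txrx_bufs() after the rework. */
static unsigned int txrx_chunk(unsigned int remaining)
{
    return remaining < FIFO_SIZE ? remaining : FIFO_SIZE;
}

/* Mirrors the new "while (len) len -= bcm63xx_txrx_bufs(...)" loop. */
static unsigned int transfer(unsigned int len, unsigned int *chunks)
{
    unsigned int total = 0;

    *chunks = 0;
    while (len) {
        unsigned int sent = txrx_chunk(len);

        /* The real driver does wait_for_completion_timeout(&bs->done,
         * HZ) here and bails out with -ETIMEDOUT on silence. */
        len -= sent;
        total += sent;
        (*chunks)++;
    }
    return total;
}
```

The per-chunk timeout is the key robustness change: the old code waited unboundedly for the CMD_DONE interrupt.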
+11 -10
drivers/spi/spi-bfin-sport.c
··· 252 252 bfin_sport_spi_restore_state(struct bfin_sport_spi_master_data *drv_data) 253 253 { 254 254 struct bfin_sport_spi_slave_data *chip = drv_data->cur_chip; 255 - unsigned int bits = (drv_data->ops == &bfin_sport_transfer_ops_u8 ? 7 : 15); 256 255 257 256 bfin_sport_spi_disable(drv_data); 258 257 dev_dbg(drv_data->dev, "restoring spi ctl state\n"); 259 258 260 259 bfin_write(&drv_data->regs->tcr1, chip->ctl_reg); 261 - bfin_write(&drv_data->regs->tcr2, bits); 262 260 bfin_write(&drv_data->regs->tclkdiv, chip->baud); 263 - bfin_write(&drv_data->regs->tfsdiv, bits); 264 261 SSYNC(); 265 262 266 263 bfin_write(&drv_data->regs->rcr1, chip->ctl_reg & ~(ITCLK | ITFS)); 267 - bfin_write(&drv_data->regs->rcr2, bits); 268 264 SSYNC(); 269 265 270 266 bfin_sport_spi_cs_active(chip); ··· 416 420 drv_data->cs_change = transfer->cs_change; 417 421 418 422 /* Bits per word setup */ 419 - bits_per_word = transfer->bits_per_word ? : message->spi->bits_per_word; 420 - if (bits_per_word == 8) 421 - drv_data->ops = &bfin_sport_transfer_ops_u8; 422 - else 423 + bits_per_word = transfer->bits_per_word ? : 424 + message->spi->bits_per_word ? : 8; 425 + if (bits_per_word % 16 == 0) 423 426 drv_data->ops = &bfin_sport_transfer_ops_u16; 427 + else 428 + drv_data->ops = &bfin_sport_transfer_ops_u8; 429 + bfin_write(&drv_data->regs->tcr2, bits_per_word - 1); 430 + bfin_write(&drv_data->regs->tfsdiv, bits_per_word - 1); 431 + bfin_write(&drv_data->regs->rcr2, bits_per_word - 1); 424 432 425 433 drv_data->state = RUNNING_STATE; 426 434 ··· 598 598 } 599 599 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 600 600 chip->idle_tx_val = chip_info->idle_tx_val; 601 - spi->bits_per_word = chip_info->bits_per_word; 602 601 } 603 602 } 604 603 605 - if (spi->bits_per_word != 8 && spi->bits_per_word != 16) { 604 + if (spi->bits_per_word % 8) { 605 + dev_err(&spi->dev, "%d bits_per_word is not supported\n", 606 + spi->bits_per_word); 606 607 ret = -EINVAL; 607 608 goto error; 608 609 }
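The bfin-sport hunk replaces the fixed 8-or-16 choice with a fallback chain for `bits_per_word` (per-transfer value, then the device default, then 8) and selects the 16-bit transfer ops only for multiples of 16. That selection logic can be sketched in plain C; `effective_bpw` and `use_u16_ops` are illustrative names:

```c
#include <assert.h>

/* Fallback chain mirroring "transfer->bits_per_word ?:
 * spi->bits_per_word ?: 8" in the driver. */
static int effective_bpw(int transfer_bpw, int device_bpw)
{
    if (transfer_bpw)
        return transfer_bpw;
    if (device_bpw)
        return device_bpw;
    return 8;
}

/* The SPORT FIFO is driven 16 bits at a time only when the word
 * size is a multiple of 16; everything else uses the 8-bit ops. */
static int use_u16_ops(int bits_per_word)
{
    return bits_per_word % 16 == 0;
}
```

Note the driver also programs `tcr2`/`tfsdiv`/`rcr2` with `bits_per_word - 1`, which is why those writes moved out of the state-restore path and into the per-transfer setup.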
+8 -6
drivers/spi/spi-bfin5xx.c
··· 396 396 /* last read */ 397 397 if (drv_data->rx) { 398 398 dev_dbg(&drv_data->pdev->dev, "last read\n"); 399 - if (n_bytes % 2) { 399 + if (!(n_bytes % 2)) { 400 400 u16 *buf = (u16 *)drv_data->rx; 401 401 for (loop = 0; loop < n_bytes / 2; loop++) 402 402 *buf++ = bfin_read(&drv_data->regs->rdbr); ··· 424 424 if (drv_data->rx && drv_data->tx) { 425 425 /* duplex */ 426 426 dev_dbg(&drv_data->pdev->dev, "duplex: write_TDBR\n"); 427 - if (n_bytes % 2) { 427 + if (!(n_bytes % 2)) { 428 428 u16 *buf = (u16 *)drv_data->rx; 429 429 u16 *buf2 = (u16 *)drv_data->tx; 430 430 for (loop = 0; loop < n_bytes / 2; loop++) { ··· 442 442 } else if (drv_data->rx) { 443 443 /* read */ 444 444 dev_dbg(&drv_data->pdev->dev, "read: write_TDBR\n"); 445 - if (n_bytes % 2) { 445 + if (!(n_bytes % 2)) { 446 446 u16 *buf = (u16 *)drv_data->rx; 447 447 for (loop = 0; loop < n_bytes / 2; loop++) { 448 448 *buf++ = bfin_read(&drv_data->regs->rdbr); ··· 458 458 } else if (drv_data->tx) { 459 459 /* write */ 460 460 dev_dbg(&drv_data->pdev->dev, "write: write_TDBR\n"); 461 - if (n_bytes % 2) { 461 + if (!(n_bytes % 2)) { 462 462 u16 *buf = (u16 *)drv_data->tx; 463 463 for (loop = 0; loop < n_bytes / 2; loop++) { 464 464 bfin_read(&drv_data->regs->rdbr); ··· 587 587 if (message->state == DONE_STATE) { 588 588 dev_dbg(&drv_data->pdev->dev, "transfer: all done!\n"); 589 589 message->status = 0; 590 + bfin_spi_flush(drv_data); 590 591 bfin_spi_giveback(drv_data); 591 592 return; 592 593 } ··· 871 870 message->actual_length += drv_data->len_in_bytes; 872 871 /* Move to next transfer of this msg */ 873 872 message->state = bfin_spi_next_transfer(drv_data); 874 - if (drv_data->cs_change) 873 + if (drv_data->cs_change && message->state != DONE_STATE) { 874 + bfin_spi_flush(drv_data); 875 875 bfin_spi_cs_deactive(drv_data, chip); 876 + } 876 877 } 877 878 878 879 /* Schedule next transfer tasklet */ ··· 1029 1026 chip->cs_chg_udelay = chip_info->cs_chg_udelay; 1030 1027 chip->idle_tx_val = 
chip_info->idle_tx_val; 1031 1028 chip->pio_interrupt = chip_info->pio_interrupt; 1032 - spi->bits_per_word = chip_info->bits_per_word; 1033 1029 } else { 1034 1030 /* force a default base state */ 1035 1031 chip->ctl_reg &= bfin_ctl_reg;
+10 -14
drivers/spi/spi-ep93xx.c
··· 545 545 * in case of failure. 546 546 */ 547 547 static struct dma_async_tx_descriptor * 548 - ep93xx_spi_dma_prepare(struct ep93xx_spi *espi, enum dma_data_direction dir) 548 + ep93xx_spi_dma_prepare(struct ep93xx_spi *espi, enum dma_transfer_direction dir) 549 549 { 550 550 struct spi_transfer *t = espi->current_msg->state; 551 551 struct dma_async_tx_descriptor *txd; 552 552 enum dma_slave_buswidth buswidth; 553 553 struct dma_slave_config conf; 554 - enum dma_transfer_direction slave_dirn; 555 554 struct scatterlist *sg; 556 555 struct sg_table *sgt; 557 556 struct dma_chan *chan; ··· 566 567 memset(&conf, 0, sizeof(conf)); 567 568 conf.direction = dir; 568 569 569 - if (dir == DMA_FROM_DEVICE) { 570 + if (dir == DMA_DEV_TO_MEM) { 570 571 chan = espi->dma_rx; 571 572 buf = t->rx_buf; 572 573 sgt = &espi->rx_sgt; 573 574 574 575 conf.src_addr = espi->sspdr_phys; 575 576 conf.src_addr_width = buswidth; 576 - slave_dirn = DMA_DEV_TO_MEM; 577 577 } else { 578 578 chan = espi->dma_tx; 579 579 buf = t->tx_buf; ··· 580 582 581 583 conf.dst_addr = espi->sspdr_phys; 582 584 conf.dst_addr_width = buswidth; 583 - slave_dirn = DMA_MEM_TO_DEV; 584 585 } 585 586 586 587 ret = dmaengine_slave_config(chan, &conf); ··· 630 633 if (!nents) 631 634 return ERR_PTR(-ENOMEM); 632 635 633 - txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, 634 - slave_dirn, DMA_CTRL_ACK); 636 + txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK); 635 637 if (!txd) { 636 638 dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir); 637 639 return ERR_PTR(-ENOMEM); ··· 647 651 * unmapped. 
648 652 */ 649 653 static void ep93xx_spi_dma_finish(struct ep93xx_spi *espi, 650 - enum dma_data_direction dir) 654 + enum dma_transfer_direction dir) 651 655 { 652 656 struct dma_chan *chan; 653 657 struct sg_table *sgt; 654 658 655 - if (dir == DMA_FROM_DEVICE) { 659 + if (dir == DMA_DEV_TO_MEM) { 656 660 chan = espi->dma_rx; 657 661 sgt = &espi->rx_sgt; 658 662 } else { ··· 673 677 struct spi_message *msg = espi->current_msg; 674 678 struct dma_async_tx_descriptor *rxd, *txd; 675 679 676 - rxd = ep93xx_spi_dma_prepare(espi, DMA_FROM_DEVICE); 680 + rxd = ep93xx_spi_dma_prepare(espi, DMA_DEV_TO_MEM); 677 681 if (IS_ERR(rxd)) { 678 682 dev_err(&espi->pdev->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd)); 679 683 msg->status = PTR_ERR(rxd); 680 684 return; 681 685 } 682 686 683 - txd = ep93xx_spi_dma_prepare(espi, DMA_TO_DEVICE); 687 + txd = ep93xx_spi_dma_prepare(espi, DMA_MEM_TO_DEV); 684 688 if (IS_ERR(txd)) { 685 - ep93xx_spi_dma_finish(espi, DMA_FROM_DEVICE); 689 + ep93xx_spi_dma_finish(espi, DMA_DEV_TO_MEM); 686 690 dev_err(&espi->pdev->dev, "DMA TX failed: %ld\n", PTR_ERR(rxd)); 687 691 msg->status = PTR_ERR(txd); 688 692 return; ··· 701 705 702 706 wait_for_completion(&espi->wait); 703 707 704 - ep93xx_spi_dma_finish(espi, DMA_TO_DEVICE); 705 - ep93xx_spi_dma_finish(espi, DMA_FROM_DEVICE); 708 + ep93xx_spi_dma_finish(espi, DMA_MEM_TO_DEV); 709 + ep93xx_spi_dma_finish(espi, DMA_DEV_TO_MEM); 706 710 } 707 711 708 712 /**
+34 -24
drivers/spi/spi-pl022.c
··· 1667 1667 /* cpsdvsr = 254 & scr = 255 */ 1668 1668 min_tclk = spi_rate(rate, CPSDVR_MAX, SCR_MAX); 1669 1669 1670 - if (!((freq <= max_tclk) && (freq >= min_tclk))) { 1670 + if (freq > max_tclk) 1671 + dev_warn(&pl022->adev->dev, 1672 + "Max speed that can be programmed is %d Hz, you requested %d\n", 1673 + max_tclk, freq); 1674 + 1675 + if (freq < min_tclk) { 1671 1676 dev_err(&pl022->adev->dev, 1672 - "controller data is incorrect: out of range frequency"); 1677 + "Requested frequency: %d Hz is less than minimum possible %d Hz\n", 1678 + freq, min_tclk); 1673 1679 return -EINVAL; 1674 1680 } 1675 1681 ··· 1687 1681 while (scr <= SCR_MAX) { 1688 1682 tmp = spi_rate(rate, cpsdvsr, scr); 1689 1683 1690 - if (tmp > freq) 1684 + if (tmp > freq) { 1685 + /* we need lower freq */ 1691 1686 scr++; 1687 + continue; 1688 + } 1689 + 1692 1690 /* 1693 - * If found exact value, update and break. 1694 - * If found more closer value, update and continue. 1691 + * If found exact value, mark found and break. 1692 + * If found more closer value, update and break. 
1695 1693 */ 1696 - else if ((tmp == freq) || (tmp > best_freq)) { 1694 + if (tmp > best_freq) { 1697 1695 best_freq = tmp; 1698 1696 best_cpsdvsr = cpsdvsr; 1699 1697 best_scr = scr; 1700 1698 1701 1699 if (tmp == freq) 1702 - break; 1700 + found = 1; 1703 1701 } 1704 - scr++; 1702 + /* 1703 + * increased scr will give lower rates, which are not 1704 + * required 1705 + */ 1706 + break; 1705 1707 } 1706 1708 cpsdvsr += 2; 1707 1709 scr = SCR_MIN; 1708 1710 } 1711 + 1712 + WARN(!best_freq, "pl022: Matching cpsdvsr and scr not found for %d Hz rate \n", 1713 + freq); 1709 1714 1710 1715 clk_freq->cpsdvsr = (u8) (best_cpsdvsr & 0xFF); 1711 1716 clk_freq->scr = (u8) (best_scr & 0xFF); ··· 1840 1823 } else 1841 1824 chip->cs_control = chip_info->cs_control; 1842 1825 1843 - if (bits <= 3) { 1844 - /* PL022 doesn't support less than 4-bits */ 1826 + /* Check bits per word with vendor specific range */ 1827 + if ((bits <= 3) || (bits > pl022->vendor->max_bpw)) { 1845 1828 status = -ENOTSUPP; 1829 + dev_err(&spi->dev, "illegal data size for this controller!\n"); 1830 + dev_err(&spi->dev, "This controller can only handle 4 <= n <= %d bit words\n", 1831 + pl022->vendor->max_bpw); 1846 1832 goto err_config_params; 1847 1833 } else if (bits <= 8) { 1848 1834 dev_dbg(&spi->dev, "4 <= n <=8 bits per word\n"); ··· 1858 1838 chip->read = READING_U16; 1859 1839 chip->write = WRITING_U16; 1860 1840 } else { 1861 - if (pl022->vendor->max_bpw >= 32) { 1862 - dev_dbg(&spi->dev, "17 <= n <= 32 bits per word\n"); 1863 - chip->n_bytes = 4; 1864 - chip->read = READING_U32; 1865 - chip->write = WRITING_U32; 1866 - } else { 1867 - dev_err(&spi->dev, 1868 - "illegal data size for this controller!\n"); 1869 - dev_err(&spi->dev, 1870 - "a standard pl022 can only handle " 1871 - "1 <= n <= 16 bit words\n"); 1872 - status = -ENOTSUPP; 1873 - goto err_config_params; 1874 - } 1841 + dev_dbg(&spi->dev, "17 <= n <= 32 bits per word\n"); 1842 + chip->n_bytes = 4; 1843 + chip->read = READING_U32; 1844 
+ chip->write = WRITING_U32; 1875 1845 } 1876 1846 1877 1847 /* Now Initialize all register settings required for this chip */
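The pl022 hunk restructures the cpsdvsr/scr search so that an exact rate match is flagged via `found` and larger `scr` values, which can only lower the rate further, are skipped with an early `break`. The fixed search can be sketched as a stand-alone function; the limit constants mirror the driver's, while `best_clk` is an illustrative name:

```c
#include <assert.h>

#define CPSDVR_MIN 2
#define CPSDVR_MAX 254
#define SCR_MIN    0
#define SCR_MAX    255

static unsigned int spi_rate(unsigned int rate, unsigned int cpsdvsr,
                             unsigned int scr)
{
    return rate / (cpsdvsr * (1 + scr));
}

/* For each even cpsdvsr, walk scr upward until the rate drops to
 * <= freq, record the closest match from below, and stop early on
 * an exact hit. Returns the best achievable frequency. */
static unsigned int best_clk(unsigned int rate, unsigned int freq,
                             unsigned int *cpsdvsr_out,
                             unsigned int *scr_out)
{
    unsigned int cpsdvsr = CPSDVR_MIN, scr = SCR_MIN;
    unsigned int best_freq = 0, best_cpsdvsr = 0, best_scr = 0;
    int found = 0;

    while ((cpsdvsr <= CPSDVR_MAX) && !found) {
        while (scr <= SCR_MAX) {
            unsigned int tmp = spi_rate(rate, cpsdvsr, scr);

            if (tmp > freq) {       /* still too fast: slow down */
                scr++;
                continue;
            }
            if (tmp > best_freq) {  /* closest so far from below */
                best_freq = tmp;
                best_cpsdvsr = cpsdvsr;
                best_scr = scr;
                if (tmp == freq)
                    found = 1;
            }
            break;                  /* larger scr only gets slower */
        }
        cpsdvsr += 2;
        scr = SCR_MIN;
    }
    *cpsdvsr_out = best_cpsdvsr;
    *scr_out = best_scr;
    return best_freq;
}
```

With a 24 MHz clock and a 1 MHz target this lands on cpsdvsr = 2, scr = 11 (divisor 24); the old code could `break` out of the inner loop on an exact match while still missing the guard against under-range frequencies.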
+1
drivers/staging/octeon/ethernet-rx.c
··· 36 36 #include <linux/prefetch.h> 37 37 #include <linux/ratelimit.h> 38 38 #include <linux/smp.h> 39 + #include <linux/interrupt.h> 39 40 #include <net/dst.h> 40 41 #ifdef CONFIG_XFRM 41 42 #include <linux/xfrm.h>
+1
drivers/staging/octeon/ethernet-tx.c
··· 32 32 #include <linux/ip.h> 33 33 #include <linux/ratelimit.h> 34 34 #include <linux/string.h> 35 + #include <linux/interrupt.h> 35 36 #include <net/dst.h> 36 37 #ifdef CONFIG_XFRM 37 38 #include <linux/xfrm.h>
+1
drivers/staging/octeon/ethernet.c
··· 31 31 #include <linux/etherdevice.h> 32 32 #include <linux/phy.h> 33 33 #include <linux/slab.h> 34 + #include <linux/interrupt.h> 34 35 35 36 #include <net/dst.h> 36 37
-2
drivers/staging/ozwpan/ozpd.c
··· 383 383 pd->tx_pool = &f->link; 384 384 pd->tx_pool_count++; 385 385 f = 0; 386 - } else { 387 - kfree(f); 388 386 } 389 387 spin_unlock_bh(&pd->tx_frame_lock); 390 388 if (f)
+12 -8
drivers/staging/tidspbridge/core/tiomap3430.c
··· 79 79 #define OMAP343X_CONTROL_IVA2_BOOTADDR (OMAP2_CONTROL_GENERAL + 0x0190) 80 80 #define OMAP343X_CONTROL_IVA2_BOOTMOD (OMAP2_CONTROL_GENERAL + 0x0194) 81 81 82 - #define OMAP343X_CTRL_REGADDR(reg) \ 83 - OMAP2_L4_IO_ADDRESS(OMAP343X_CTRL_BASE + (reg)) 84 - 85 - 86 82 /* Forward Declarations: */ 87 83 static int bridge_brd_monitor(struct bridge_dev_context *dev_ctxt); 88 84 static int bridge_brd_read(struct bridge_dev_context *dev_ctxt, ··· 414 418 415 419 /* Assert RST1 i.e only the RST only for DSP megacell */ 416 420 if (!status) { 421 + /* 422 + * XXX: ioremapping MUST be removed once ctrl 423 + * function is made available. 424 + */ 425 + void __iomem *ctrl = ioremap(OMAP343X_CTRL_BASE, SZ_4K); 426 + if (!ctrl) 427 + return -ENOMEM; 428 + 417 429 (*pdata->dsp_prm_rmw_bits)(OMAP3430_RST1_IVA2_MASK, 418 430 OMAP3430_RST1_IVA2_MASK, OMAP3430_IVA2_MOD, 419 431 OMAP2_RM_RSTCTRL); 420 432 /* Mask address with 1K for compatibility */ 421 433 __raw_writel(dsp_addr & OMAP3_IVA2_BOOTADDR_MASK, 422 - OMAP343X_CTRL_REGADDR( 423 - OMAP343X_CONTROL_IVA2_BOOTADDR)); 434 + ctrl + OMAP343X_CONTROL_IVA2_BOOTADDR); 424 435 /* 425 436 * Set bootmode to self loop if dsp_debug flag is true 426 437 */ 427 438 __raw_writel((dsp_debug) ? OMAP3_IVA2_BOOTMOD_IDLE : 0, 428 - OMAP343X_CTRL_REGADDR( 429 - OMAP343X_CONTROL_IVA2_BOOTMOD)); 439 + ctrl + OMAP343X_CONTROL_IVA2_BOOTMOD); 440 + 441 + iounmap(ctrl); 430 442 } 431 443 } 432 444 if (!status) {
+7 -1
drivers/staging/tidspbridge/core/wdt.c
··· 53 53 int ret = 0; 54 54 55 55 dsp_wdt.sm_wdt = NULL; 56 - dsp_wdt.reg_base = OMAP2_L4_IO_ADDRESS(OMAP34XX_WDT3_BASE); 56 + dsp_wdt.reg_base = ioremap(OMAP34XX_WDT3_BASE, SZ_4K); 57 + if (!dsp_wdt.reg_base) 58 + return -ENOMEM; 59 + 57 60 tasklet_init(&dsp_wdt.wdt3_tasklet, dsp_wdt_dpc, 0); 58 61 59 62 dsp_wdt.fclk = clk_get(NULL, "wdt3_fck"); ··· 102 99 dsp_wdt.fclk = NULL; 103 100 dsp_wdt.iclk = NULL; 104 101 dsp_wdt.sm_wdt = NULL; 102 + 103 + if (dsp_wdt.reg_base) 104 + iounmap(dsp_wdt.reg_base); 105 105 dsp_wdt.reg_base = NULL; 106 106 } 107 107
+1 -1
drivers/staging/zcache/Kconfig
··· 2 2 bool "Dynamic compression of swap pages and clean pagecache pages" 3 3 # X86 dependency is because zsmalloc uses non-portable pte/tlb 4 4 # functions 5 - depends on (CLEANCACHE || FRONTSWAP) && CRYPTO && X86 5 + depends on (CLEANCACHE || FRONTSWAP) && CRYPTO=y && X86 6 6 select ZSMALLOC 7 7 select CRYPTO_LZO 8 8 default n
+5 -2
drivers/usb/class/cdc-wdm.c
··· 157 157 spin_lock(&desc->iuspin); 158 158 desc->werr = urb->status; 159 159 spin_unlock(&desc->iuspin); 160 - clear_bit(WDM_IN_USE, &desc->flags); 161 160 kfree(desc->outbuf); 161 + desc->outbuf = NULL; 162 + clear_bit(WDM_IN_USE, &desc->flags); 162 163 wake_up(&desc->wait); 163 164 } 164 165 ··· 339 338 if (we < 0) 340 339 return -EIO; 341 340 342 - desc->outbuf = buf = kmalloc(count, GFP_KERNEL); 341 + buf = kmalloc(count, GFP_KERNEL); 343 342 if (!buf) { 344 343 rv = -ENOMEM; 345 344 goto outnl; ··· 407 406 req->wIndex = desc->inum; 408 407 req->wLength = cpu_to_le16(count); 409 408 set_bit(WDM_IN_USE, &desc->flags); 409 + desc->outbuf = buf; 410 410 411 411 rv = usb_submit_urb(desc->command, GFP_KERNEL); 412 412 if (rv < 0) { 413 413 kfree(buf); 414 + desc->outbuf = NULL; 414 415 clear_bit(WDM_IN_USE, &desc->flags); 415 416 dev_err(&desc->intf->dev, "Tx URB error: %d\n", rv); 416 417 } else {
+9
drivers/usb/core/hcd-pci.c
··· 493 493 494 494 pci_save_state(pci_dev); 495 495 496 + /* 497 + * Some systems crash if an EHCI controller is in D3 during 498 + * a sleep transition. We have to leave such controllers in D0. 499 + */ 500 + if (hcd->broken_pci_sleep) { 501 + dev_dbg(dev, "Staying in PCI D0\n"); 502 + return retval; 503 + } 504 + 496 505 /* If the root hub is dead rather than suspended, disallow remote 497 506 * wakeup. usb_hc_died() should ensure that both hosts are marked as 498 507 * dying, so we only need to check the primary roothub.
-1
drivers/usb/gadget/dummy_hcd.c
··· 927 927 928 928 dum->driver = NULL; 929 929 930 - dummy_pullup(&dum->gadget, 0); 931 930 return 0; 932 931 } 933 932
+1 -1
drivers/usb/gadget/f_mass_storage.c
··· 2189 2189 common->data_size_from_cmnd = 0; 2190 2190 sprintf(unknown, "Unknown x%02x", common->cmnd[0]); 2191 2191 reply = check_command(common, common->cmnd_size, 2192 - DATA_DIR_UNKNOWN, 0xff, 0, unknown); 2192 + DATA_DIR_UNKNOWN, ~0, 0, unknown); 2193 2193 if (reply == 0) { 2194 2194 common->curlun->sense_data = SS_INVALID_COMMAND; 2195 2195 reply = -EINVAL;
+1 -1
drivers/usb/gadget/file_storage.c
··· 2579 2579 fsg->data_size_from_cmnd = 0; 2580 2580 sprintf(unknown, "Unknown x%02x", fsg->cmnd[0]); 2581 2581 if ((reply = check_command(fsg, fsg->cmnd_size, 2582 - DATA_DIR_UNKNOWN, 0xff, 0, unknown)) == 0) { 2582 + DATA_DIR_UNKNOWN, ~0, 0, unknown)) == 0) { 2583 2583 fsg->curlun->sense_data = SS_INVALID_COMMAND; 2584 2584 reply = -EINVAL; 2585 2585 }
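Both mass-storage hunks widen the "any data size" mask for unknown commands from `0xff` to `~0`: the length being checked is 32 bits wide, and `0xff` only blesses values that fit in the low byte. A tiny illustration of why the mask width matters; `length_allowed` is a hypothetical helper, not the drivers' `check_command`:

```c
#include <assert.h>
#include <stdint.h>

/* Every set bit of the 32-bit length must be allowed by the mask.
 * With mask = 0xff any length >= 256 is (wrongly) rejected, while
 * mask = ~0 genuinely means "any size". */
static int length_allowed(uint32_t len, uint32_t mask)
{
    return (len & ~mask) == 0;
}
```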
+2 -2
drivers/usb/gadget/udc-core.c
··· 263 263 264 264 if (udc_is_newstyle(udc)) { 265 265 udc->driver->disconnect(udc->gadget); 266 - udc->driver->unbind(udc->gadget); 267 266 usb_gadget_disconnect(udc->gadget); 267 + udc->driver->unbind(udc->gadget); 268 268 usb_gadget_udc_stop(udc->gadget, udc->driver); 269 269 } else { 270 270 usb_gadget_stop(udc->gadget, udc->driver); ··· 415 415 usb_gadget_udc_start(udc->gadget, udc->driver); 416 416 usb_gadget_connect(udc->gadget); 417 417 } else if (sysfs_streq(buf, "disconnect")) { 418 + usb_gadget_disconnect(udc->gadget); 418 419 if (udc_is_newstyle(udc)) 419 420 usb_gadget_udc_stop(udc->gadget, udc->driver); 420 - usb_gadget_disconnect(udc->gadget); 421 421 } else { 422 422 dev_err(dev, "unsupported command '%s'\n", buf); 423 423 return -EINVAL;
+1 -1
drivers/usb/gadget/uvc.h
··· 28 28 29 29 struct uvc_request_data 30 30 { 31 - unsigned int length; 31 + __s32 length; 32 32 __u8 data[60]; 33 33 }; 34 34
+1 -1
drivers/usb/gadget/uvc_v4l2.c
··· 39 39 if (data->length < 0) 40 40 return usb_ep_set_halt(cdev->gadget->ep0); 41 41 42 - req->length = min(uvc->event_length, data->length); 42 + req->length = min_t(unsigned int, uvc->event_length, data->length); 43 43 req->zero = data->length < uvc->event_length; 44 44 req->dma = DMA_ADDR_INVALID; 45 45
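The uvc hunks make `length` signed (`__s32`) and switch the clamp to `min_t(unsigned int, ...)`: once the operands have mixed signedness, a plain comparison promotes the signed side to unsigned, so the comparison type must be forced explicitly. A user-space stand-in for the kernel macro:

```c
#include <assert.h>

/* The kernel's min_t() casts both operands to an explicit type
 * before comparing. */
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Mixed-type comparison: the signed operand converts to unsigned,
 * so a negative value looks enormous and the "minimum" silently
 * becomes the other operand. */
static unsigned int naive_min(unsigned int a, int b)
{
    return a < (unsigned int)b ? a : (unsigned int)b;
}
```

In the driver this is safe because negative lengths are rejected (with `usb_ep_set_halt()`) before the clamp, so `min_t(unsigned int, ...)` only ever sees non-negative values.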
+8
drivers/usb/host/ehci-pci.c
··· 144 144 hcd->has_tt = 1; 145 145 tdi_reset(ehci); 146 146 } 147 + if (pdev->subsystem_vendor == PCI_VENDOR_ID_ASUSTEK) { 148 + /* EHCI #1 or #2 on 6 Series/C200 Series chipset */ 149 + if (pdev->device == 0x1c26 || pdev->device == 0x1c2d) { 150 + ehci_info(ehci, "broken D3 during system sleep on ASUS\n"); 151 + hcd->broken_pci_sleep = 1; 152 + device_set_wakeup_capable(&pdev->dev, false); 153 + } 154 + } 147 155 break; 148 156 case PCI_VENDOR_ID_TDI: 149 157 if (pdev->device == PCI_DEVICE_ID_TDI_EHCI) {
+2 -1
drivers/usb/musb/davinci.c
··· 386 386 usb_nop_xceiv_register(); 387 387 musb->xceiv = usb_get_transceiver(); 388 388 if (!musb->xceiv) 389 - return -ENODEV; 389 + goto unregister; 390 390 391 391 musb->mregs += DAVINCI_BASE_OFFSET; 392 392 ··· 444 444 445 445 fail: 446 446 usb_put_transceiver(musb->xceiv); 447 + unregister: 447 448 usb_nop_xceiv_unregister(); 448 449 return -ENODEV; 449 450 }
+1 -1
drivers/usb/musb/musb_core.h
··· 449 449 * We added this flag to forcefully disable double 450 450 * buffering until we get it working. 451 451 */ 452 - unsigned double_buffer_not_ok:1 __deprecated; 452 + unsigned double_buffer_not_ok:1; 453 453 454 454 struct musb_hdrc_config *config; 455 455
+14 -1
drivers/usb/otg/gpio_vbus.c
··· 96 96 struct gpio_vbus_data *gpio_vbus = 97 97 container_of(work, struct gpio_vbus_data, work); 98 98 struct gpio_vbus_mach_info *pdata = gpio_vbus->dev->platform_data; 99 - int gpio; 99 + int gpio, status; 100 100 101 101 if (!gpio_vbus->phy.otg->gadget) 102 102 return; ··· 108 108 */ 109 109 gpio = pdata->gpio_pullup; 110 110 if (is_vbus_powered(pdata)) { 111 + status = USB_EVENT_VBUS; 111 112 gpio_vbus->phy.state = OTG_STATE_B_PERIPHERAL; 113 + gpio_vbus->phy.last_event = status; 112 114 usb_gadget_vbus_connect(gpio_vbus->phy.otg->gadget); 113 115 114 116 /* drawing a "unit load" is *always* OK, except for OTG */ ··· 119 117 /* optionally enable D+ pullup */ 120 118 if (gpio_is_valid(gpio)) 121 119 gpio_set_value(gpio, !pdata->gpio_pullup_inverted); 120 + 121 + atomic_notifier_call_chain(&gpio_vbus->phy.notifier, 122 + status, gpio_vbus->phy.otg->gadget); 122 123 } else { 123 124 /* optionally disable D+ pullup */ 124 125 if (gpio_is_valid(gpio)) ··· 130 125 set_vbus_draw(gpio_vbus, 0); 131 126 132 127 usb_gadget_vbus_disconnect(gpio_vbus->phy.otg->gadget); 128 + status = USB_EVENT_NONE; 133 129 gpio_vbus->phy.state = OTG_STATE_B_IDLE; 130 + gpio_vbus->phy.last_event = status; 131 + 132 + atomic_notifier_call_chain(&gpio_vbus->phy.notifier, 133 + status, gpio_vbus->phy.otg->gadget); 134 134 } 135 135 } 136 136 ··· 297 287 irq, err); 298 288 goto err_irq; 299 289 } 290 + 291 + ATOMIC_INIT_NOTIFIER_HEAD(&gpio_vbus->phy.notifier); 292 + 300 293 INIT_WORK(&gpio_vbus->work, gpio_vbus_work); 301 294 302 295 gpio_vbus->vbus_draw = regulator_get(&pdev->dev, "vbus_draw");
+1 -1
drivers/vhost/net.c
··· 238 238 239 239 vq->heads[vq->upend_idx].len = len; 240 240 ubuf->callback = vhost_zerocopy_callback; 241 - ubuf->arg = vq->ubufs; 241 + ubuf->ctx = vq->ubufs; 242 242 ubuf->desc = vq->upend_idx; 243 243 msg.msg_control = ubuf; 244 244 msg.msg_controllen = sizeof(ubuf);
+2 -3
drivers/vhost/vhost.c
··· 1598 1598 kfree(ubufs); 1599 1599 } 1600 1600 1601 - void vhost_zerocopy_callback(void *arg) 1601 + void vhost_zerocopy_callback(struct ubuf_info *ubuf) 1602 1602 { 1603 - struct ubuf_info *ubuf = arg; 1604 - struct vhost_ubuf_ref *ubufs = ubuf->arg; 1603 + struct vhost_ubuf_ref *ubufs = ubuf->ctx; 1605 1604 struct vhost_virtqueue *vq = ubufs->vq; 1606 1605 1607 1606 /* set len = 1 to mark this desc buffers done DMA */
+1 -1
drivers/vhost/vhost.h
··· 188 188 189 189 int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log, 190 190 unsigned int log_num, u64 len); 191 - void vhost_zerocopy_callback(void *arg); 191 + void vhost_zerocopy_callback(struct ubuf_info *); 192 192 int vhost_zerocopy_signal_used(struct vhost_virtqueue *vq); 193 193 194 194 #define vq_err(vq, fmt, ...) do { \
+1
drivers/video/bfin-lq035q1-fb.c
··· 13 13 #include <linux/errno.h> 14 14 #include <linux/string.h> 15 15 #include <linux/fb.h> 16 + #include <linux/gpio.h> 16 17 #include <linux/slab.h> 17 18 #include <linux/init.h> 18 19 #include <linux/types.h>
+3 -3
drivers/watchdog/hpwdt.c
··· 435 435 { 436 436 reload = SECS_TO_TICKS(soft_margin); 437 437 iowrite16(reload, hpwdt_timer_reg); 438 - iowrite16(0x85, hpwdt_timer_con); 438 + iowrite8(0x85, hpwdt_timer_con); 439 439 } 440 440 441 441 static void hpwdt_stop(void) 442 442 { 443 443 unsigned long data; 444 444 445 - data = ioread16(hpwdt_timer_con); 445 + data = ioread8(hpwdt_timer_con); 446 446 data &= 0xFE; 447 - iowrite16(data, hpwdt_timer_con); 447 + iowrite8(data, hpwdt_timer_con); 448 448 } 449 449 450 450 static void hpwdt_ping(void)
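The hpwdt fix narrows the timer-control accesses from 16 bits to 8: the control register is one byte wide, and a 16-bit write also clobbers whatever sits at the next address. A simulated little-endian register file shows the effect; offsets and values here are illustrative, not the hpwdt layout:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated register file: an 8-bit control register at offset 0
 * with an unrelated 8-bit register right behind it. */
static uint8_t regs[2];

/* Little-endian 16-bit store, standing in for iowrite16(). */
static void write16(unsigned int off, uint16_t v)
{
    regs[off] = v & 0xff;
    regs[off + 1] = v >> 8;
}

/* Byte store, standing in for iowrite8(). */
static void write8(unsigned int off, uint8_t v)
{
    regs[off] = v;
}
```

A 16-bit write of `0x85` stores `0x00` into the neighbouring register; the 8-bit write leaves it untouched.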
+1 -1
drivers/xen/events.c
··· 274 274 275 275 static bool pirq_check_eoi_map(unsigned irq) 276 276 { 277 - return test_bit(irq, pirq_eoi_map); 277 + return test_bit(pirq_from_irq(irq), pirq_eoi_map); 278 278 } 279 279 280 280 static bool pirq_needs_eoi_flag(unsigned irq)
+4 -1
drivers/xen/xen-acpi-processor.c
··· 128 128 pr_debug(" C%d: %s %d uS\n", 129 129 cx->type, cx->desc, (u32)cx->latency); 130 130 } 131 - } else 131 + } else if (ret != -EINVAL) 132 + /* EINVAL means the ACPI ID is incorrect - meaning the ACPI 133 + * table is referencing a non-existing CPU - which can happen 134 + * with broken ACPI tables. */ 132 135 pr_err(DRV_NAME "(CX): Hypervisor error (%d) for ACPI CPU%u\n", 133 136 ret, _pr->acpi_id); 134 137
+11 -1
fs/autofs4/autofs_i.h
··· 110 110 int sub_version; 111 111 int min_proto; 112 112 int max_proto; 113 - int compat_daemon; 114 113 unsigned long exp_timeout; 115 114 unsigned int type; 116 115 int reghost_enabled; ··· 268 269 int autofs4_fill_super(struct super_block *, void *, int); 269 270 struct autofs_info *autofs4_new_ino(struct autofs_sb_info *); 270 271 void autofs4_clean_ino(struct autofs_info *); 272 + 273 + static inline int autofs_prepare_pipe(struct file *pipe) 274 + { 275 + if (!pipe->f_op || !pipe->f_op->write) 276 + return -EINVAL; 277 + if (!S_ISFIFO(pipe->f_dentry->d_inode->i_mode)) 278 + return -EINVAL; 279 + /* We want a packet pipe */ 280 + pipe->f_flags |= O_DIRECT; 281 + return 0; 282 + } 271 283 272 284 /* Queue management functions */ 273 285
+1 -2
fs/autofs4/dev-ioctl.c
··· 376 376 err = -EBADF; 377 377 goto out; 378 378 } 379 - if (!pipe->f_op || !pipe->f_op->write) { 379 + if (autofs_prepare_pipe(pipe) < 0) { 380 380 err = -EPIPE; 381 381 fput(pipe); 382 382 goto out; ··· 385 385 sbi->pipefd = pipefd; 386 386 sbi->pipe = pipe; 387 387 sbi->catatonic = 0; 388 - sbi->compat_daemon = is_compat_task(); 389 388 } 390 389 out: 391 390 mutex_unlock(&sbi->wq_mutex);
+1 -3
fs/autofs4/inode.c
··· 19 19 #include <linux/parser.h> 20 20 #include <linux/bitops.h> 21 21 #include <linux/magic.h> 22 - #include <linux/compat.h> 23 22 #include "autofs_i.h" 24 23 #include <linux/module.h> 25 24 ··· 224 225 set_autofs_type_indirect(&sbi->type); 225 226 sbi->min_proto = 0; 226 227 sbi->max_proto = 0; 227 - sbi->compat_daemon = is_compat_task(); 228 228 mutex_init(&sbi->wq_mutex); 229 229 mutex_init(&sbi->pipe_mutex); 230 230 spin_lock_init(&sbi->fs_lock); ··· 290 292 printk("autofs: could not open pipe file descriptor\n"); 291 293 goto fail_dput; 292 294 } 293 - if (!pipe->f_op || !pipe->f_op->write) 295 + if (autofs_prepare_pipe(pipe) < 0) 294 296 goto fail_fput; 295 297 sbi->pipe = pipe; 296 298 sbi->pipefd = pipefd;
+3 -19
fs/autofs4/waitq.c
··· 91 91 92 92 return (bytes > 0); 93 93 } 94 - 95 - /* 96 - * The autofs_v5 packet was misdesigned. 97 - * 98 - * The packets are identical on x86-32 and x86-64, but have different 99 - * alignment. Which means that 'sizeof()' will give different results. 100 - * Fix it up for the case of running 32-bit user mode on a 64-bit kernel. 101 - */ 102 - static noinline size_t autofs_v5_packet_size(struct autofs_sb_info *sbi) 103 - { 104 - size_t pktsz = sizeof(struct autofs_v5_packet); 105 - #if defined(CONFIG_X86_64) && defined(CONFIG_COMPAT) 106 - if (sbi->compat_daemon > 0) 107 - pktsz -= 4; 108 - #endif 109 - return pktsz; 110 - } 111 - 94 + 112 95 static void autofs4_notify_daemon(struct autofs_sb_info *sbi, 113 96 struct autofs_wait_queue *wq, 114 97 int type) ··· 155 172 { 156 173 struct autofs_v5_packet *packet = &pkt.v5_pkt.v5_packet; 157 174 158 - pktsz = autofs_v5_packet_size(sbi); 175 + pktsz = sizeof(*packet); 176 + 159 177 packet->wait_queue_token = wq->wait_queue_token; 160 178 packet->len = wq->name.len; 161 179 memcpy(packet->name, wq->name.name, wq->name.len);
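The removed `autofs_v5_packet_size()` papered over the fact that the very same struct layout has a different `sizeof` on x86-32 and x86-64, because the whole struct is rounded up to the alignment of its widest member. A generic illustration with made-up field names; the packed variant mimics an ABI with 4-byte `u64` alignment, and the asserted sizes assume a typical x86-64 compiler:

```c
#include <assert.h>
#include <stdint.h>

/* 12 bytes of payload, but sizeof rounds up to the alignment of the
 * widest member (8 on x86-64), leaving 4 bytes of tail padding. */
struct with_u64 {
    uint64_t token;
    uint32_t len;
};

/* Byte packing approximates what a 32-bit ABI with 4-byte u64
 * alignment computes, producing the size mismatch the removed
 * compat helper adjusted for. */
struct packed_u64 {
    uint64_t token;
    uint32_t len;
} __attribute__((packed));
```

The new approach (a packetized pipe via `O_DIRECT` in `autofs_prepare_pipe()`) sidesteps the size question entirely: each packet arrives as one atomic record regardless of the daemon's word size.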
+20 -7
fs/btrfs/backref.c
··· 22 22 #include "ulist.h" 23 23 #include "transaction.h" 24 24 #include "delayed-ref.h" 25 + #include "locking.h" 25 26 26 27 /* 27 28 * this structure records all encountered refs on the way up to the root ··· 894 893 s64 bytes_left = size - 1; 895 894 struct extent_buffer *eb = eb_in; 896 895 struct btrfs_key found_key; 896 + int leave_spinning = path->leave_spinning; 897 897 898 898 if (bytes_left >= 0) 899 899 dest[bytes_left] = '\0'; 900 900 901 + path->leave_spinning = 1; 901 902 while (1) { 902 903 len = btrfs_inode_ref_name_len(eb, iref); 903 904 bytes_left -= len; 904 905 if (bytes_left >= 0) 905 906 read_extent_buffer(eb, dest + bytes_left, 906 907 (unsigned long)(iref + 1), len); 907 - if (eb != eb_in) 908 + if (eb != eb_in) { 909 + btrfs_tree_read_unlock_blocking(eb); 908 910 free_extent_buffer(eb); 911 + } 909 912 ret = inode_ref_info(parent, 0, fs_root, path, &found_key); 910 913 if (ret > 0) 911 914 ret = -ENOENT; ··· 924 919 slot = path->slots[0]; 925 920 eb = path->nodes[0]; 926 921 /* make sure we can use eb after releasing the path */ 927 - if (eb != eb_in) 922 + if (eb != eb_in) { 928 923 atomic_inc(&eb->refs); 924 + btrfs_tree_read_lock(eb); 925 + btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK); 926 + } 929 927 btrfs_release_path(path); 930 928 931 929 iref = btrfs_item_ptr(eb, slot, struct btrfs_inode_ref); ··· 939 931 } 940 932 941 933 btrfs_release_path(path); 934 + path->leave_spinning = leave_spinning; 942 935 943 936 if (ret) 944 937 return ERR_PTR(ret); ··· 1256 1247 struct btrfs_path *path, 1257 1248 iterate_irefs_t *iterate, void *ctx) 1258 1249 { 1259 - int ret; 1250 + int ret = 0; 1260 1251 int slot; 1261 1252 u32 cur; 1262 1253 u32 len; ··· 1268 1259 struct btrfs_inode_ref *iref; 1269 1260 struct btrfs_key found_key; 1270 1261 1271 - while (1) { 1262 + while (!ret) { 1263 + path->leave_spinning = 1; 1272 1264 ret = inode_ref_info(inum, parent ? 
? parent+1 : 0, fs_root, path, 1273 1265 &found_key);
+1 -1
fs/btrfs/ctree.h
··· 1078 1078 * is required instead of the faster short fsync log commits 1079 1079 */ 1080 1080 u64 last_trans_log_full_commit; 1081 - unsigned long mount_opt:21; 1081 + unsigned long mount_opt; 1082 1082 unsigned long compress_type:4; 1083 1083 u64 max_inline; 1084 1084 u64 alloc_start;
+11 -11
fs/btrfs/disk-io.c
··· 383 383 if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags)) 384 384 break; 385 385 386 - if (!failed_mirror) { 387 - failed = 1; 388 - printk(KERN_ERR "failed mirror was %d\n", eb->failed_mirror); 389 - failed_mirror = eb->failed_mirror; 390 - } 391 - 392 386 num_copies = btrfs_num_copies(&root->fs_info->mapping_tree, 393 387 eb->start, eb->len); 394 388 if (num_copies == 1) 395 389 break; 390 + 391 + if (!failed_mirror) { 392 + failed = 1; 393 + failed_mirror = eb->read_mirror; 394 + } 396 395 397 396 mirror_num++; 398 397 if (mirror_num == failed_mirror) ··· 563 564 } 564 565 565 566 static int btree_readpage_end_io_hook(struct page *page, u64 start, u64 end, 566 - struct extent_state *state) 567 + struct extent_state *state, int mirror) 567 568 { 568 569 struct extent_io_tree *tree; 569 570 u64 found_start; ··· 588 589 if (!reads_done) 589 590 goto err; 590 591 592 + eb->read_mirror = mirror; 591 593 if (test_bit(EXTENT_BUFFER_IOERR, &eb->bflags)) { 592 594 ret = -EIO; 593 595 goto err; ··· 652 652 653 653 eb = (struct extent_buffer *)page->private; 654 654 set_bit(EXTENT_BUFFER_IOERR, &eb->bflags); 655 - eb->failed_mirror = failed_mirror; 655 + eb->read_mirror = failed_mirror; 656 656 if (test_and_clear_bit(EXTENT_BUFFER_READAHEAD, &eb->bflags)) 657 657 btree_readahead_hook(root, eb, eb->start, -EIO); 658 658 return -EIO; /* we fixed nothing */ ··· 2254 2254 goto fail_sb_buffer; 2255 2255 } 2256 2256 2257 - if (sectorsize < PAGE_SIZE) { 2258 - printk(KERN_WARNING "btrfs: Incompatible sector size " 2259 - "found on %s\n", sb->s_id); 2257 + if (sectorsize != PAGE_SIZE) { 2258 + printk(KERN_WARNING "btrfs: Incompatible sector size(%lu) " 2259 + "found on %s\n", (unsigned long)sectorsize, sb->s_id); 2260 2260 goto fail_sb_buffer; 2261 2261 } 2262 2262
+7 -8
fs/btrfs/extent-tree.c
··· 2301 2301 2302 2302 if (ret) { 2303 2303 printk(KERN_DEBUG "btrfs: run_delayed_extent_op returned %d\n", ret); 2304 + spin_lock(&delayed_refs->lock); 2304 2305 return ret; 2305 2306 } 2306 2307 ··· 2332 2331 2333 2332 if (ret) { 2334 2333 printk(KERN_DEBUG "btrfs: run_one_delayed_ref returned %d\n", ret); 2334 + spin_lock(&delayed_refs->lock); 2335 2335 return ret; 2336 2336 } 2337 2337 ··· 3771 3769 */ 3772 3770 if (current->journal_info) 3773 3771 return -EAGAIN; 3774 - ret = wait_event_interruptible(space_info->wait, 3775 - !space_info->flush); 3776 - /* Must have been interrupted, return */ 3777 - if (ret) { 3778 - printk(KERN_DEBUG "btrfs: %s returning -EINTR\n", __func__); 3772 + ret = wait_event_killable(space_info->wait, !space_info->flush); 3773 + /* Must have been killed, return */ 3774 + if (ret) 3779 3775 return -EINTR; 3780 - } 3781 3776 3782 3777 spin_lock(&space_info->lock); 3783 3778 } ··· 4214 4215 4215 4216 num_bytes = calc_global_metadata_size(fs_info); 4216 4217 4217 - spin_lock(&block_rsv->lock); 4218 4218 spin_lock(&sinfo->lock); 4219 + spin_lock(&block_rsv->lock); 4219 4220 4220 4221 block_rsv->size = num_bytes; 4221 4222 ··· 4241 4242 block_rsv->full = 1; 4242 4243 } 4243 4244 4244 - spin_unlock(&sinfo->lock); 4245 4245 spin_unlock(&block_rsv->lock); 4246 + spin_unlock(&sinfo->lock); 4246 4247 } 4247 4248 4248 4249 static void init_global_block_rsv(struct btrfs_fs_info *fs_info)
+28 -28
fs/btrfs/extent_io.c
··· 402 402 return 0; 403 403 } 404 404 405 + static struct extent_state *next_state(struct extent_state *state) 406 + { 407 + struct rb_node *next = rb_next(&state->rb_node); 408 + if (next) 409 + return rb_entry(next, struct extent_state, rb_node); 410 + else 411 + return NULL; 412 + } 413 + 405 414 /* 406 415 * utility function to clear some bits in an extent state struct. 407 - * it will optionally wake up any one waiting on this state (wake == 1), or 408 - * forcibly remove the state from the tree (delete == 1). 416 + * it will optionally wake up any one waiting on this state (wake == 1) 409 417 * 410 418 * If no bits are set on the state struct after clearing things, the 411 419 * struct is freed and removed from the tree 412 420 */ 413 - static int clear_state_bit(struct extent_io_tree *tree, 414 - struct extent_state *state, 415 - int *bits, int wake) 421 + static struct extent_state *clear_state_bit(struct extent_io_tree *tree, 422 + struct extent_state *state, 423 + int *bits, int wake) 416 424 { 425 + struct extent_state *next; 417 426 int bits_to_clear = *bits & ~EXTENT_CTLBITS; 418 - int ret = state->state & bits_to_clear; 419 427 420 428 if ((bits_to_clear & EXTENT_DIRTY) && (state->state & EXTENT_DIRTY)) { 421 429 u64 range = state->end - state->start + 1; ··· 435 427 if (wake) 436 428 wake_up(&state->wq); 437 429 if (state->state == 0) { 430 + next = next_state(state); 438 431 if (state->tree) { 439 432 rb_erase(&state->rb_node, &tree->state); 440 433 state->tree = NULL; ··· 445 436 } 446 437 } else { 447 438 merge_state(tree, state); 439 + next = next_state(state); 448 440 } 449 - return ret; 441 + return next; 450 442 } 451 443 452 444 static struct extent_state * ··· 486 476 struct extent_state *state; 487 477 struct extent_state *cached; 488 478 struct extent_state *prealloc = NULL; 489 - struct rb_node *next_node; 490 479 struct rb_node *node; 491 480 u64 last_end; 492 481 int err; ··· 537 528 WARN_ON(state->end < start); 538 529 last_end = 
state->end; 539 530 540 - if (state->end < end && !need_resched()) 541 - next_node = rb_next(&state->rb_node); 542 - else 543 - next_node = NULL; 544 - 545 531 /* the state doesn't have the wanted bits, go ahead */ 546 - if (!(state->state & bits)) 532 + if (!(state->state & bits)) { 533 + state = next_state(state); 547 534 goto next; 535 + } 548 536 549 537 /* 550 538 * | ---- desired range ---- | ··· 599 593 goto out; 600 594 } 601 595 602 - clear_state_bit(tree, state, &bits, wake); 596 + state = clear_state_bit(tree, state, &bits, wake); 603 597 next: 604 598 if (last_end == (u64)-1) 605 599 goto out; 606 600 start = last_end + 1; 607 - if (start <= end && next_node) { 608 - state = rb_entry(next_node, struct extent_state, 609 - rb_node); 601 + if (start <= end && state && !need_resched()) 610 602 goto hit_next; 611 - } 612 603 goto search_again; 613 604 614 605 out: ··· 2304 2301 u64 start; 2305 2302 u64 end; 2306 2303 int whole_page; 2307 - int failed_mirror; 2304 + int mirror; 2308 2305 int ret; 2309 2306 2310 2307 if (err) ··· 2343 2340 } 2344 2341 spin_unlock(&tree->lock); 2345 2342 2343 + mirror = (int)(unsigned long)bio->bi_bdev; 2346 2344 if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) { 2347 2345 ret = tree->ops->readpage_end_io_hook(page, start, end, 2348 - state); 2346 + state, mirror); 2349 2347 if (ret) 2350 2348 uptodate = 0; 2351 2349 else 2352 2350 clean_io_failure(start, page); 2353 2351 } 2354 2352 2355 - if (!uptodate) 2356 - failed_mirror = (int)(unsigned long)bio->bi_bdev; 2357 - 2358 2353 if (!uptodate && tree->ops && tree->ops->readpage_io_failed_hook) { 2359 - ret = tree->ops->readpage_io_failed_hook(page, failed_mirror); 2354 + ret = tree->ops->readpage_io_failed_hook(page, mirror); 2360 2355 if (!ret && !err && 2361 2356 test_bit(BIO_UPTODATE, &bio->bi_flags)) 2362 2357 uptodate = 1; ··· 2369 2368 * can't handle the error it will return -EIO and we 2370 2369 * remain responsible for that page. 
2371 2370 */ 2372 - ret = bio_readpage_error(bio, page, start, end, 2373 - failed_mirror, NULL); 2371 + ret = bio_readpage_error(bio, page, start, end, mirror, NULL); 2374 2372 if (ret == 0) { 2375 2373 uptodate = 2376 2374 test_bit(BIO_UPTODATE, &bio->bi_flags); ··· 4462 4462 } 4463 4463 4464 4464 clear_bit(EXTENT_BUFFER_IOERR, &eb->bflags); 4465 - eb->failed_mirror = 0; 4465 + eb->read_mirror = 0; 4466 4466 atomic_set(&eb->io_pages, num_reads); 4467 4467 for (i = start_i; i < num_pages; i++) { 4468 4468 page = extent_buffer_page(eb, i);
+2 -2
fs/btrfs/extent_io.h
··· 79 79 u64 start, u64 end, 80 80 struct extent_state *state); 81 81 int (*readpage_end_io_hook)(struct page *page, u64 start, u64 end, 82 - struct extent_state *state); 82 + struct extent_state *state, int mirror); 83 83 int (*writepage_end_io_hook)(struct page *page, u64 start, u64 end, 84 84 struct extent_state *state, int uptodate); 85 85 void (*set_bit_hook)(struct inode *inode, struct extent_state *state, ··· 135 135 spinlock_t refs_lock; 136 136 atomic_t refs; 137 137 atomic_t io_pages; 138 - int failed_mirror; 138 + int read_mirror; 139 139 struct list_head leak_list; 140 140 struct rcu_head rcu_head; 141 141 pid_t lock_owner;
+7 -2
fs/btrfs/file.c
··· 567 567 int extent_type; 568 568 int recow; 569 569 int ret; 570 + int modify_tree = -1; 570 571 571 572 if (drop_cache) 572 573 btrfs_drop_extent_cache(inode, start, end - 1, 0); ··· 576 575 if (!path) 577 576 return -ENOMEM; 578 577 578 + if (start >= BTRFS_I(inode)->disk_i_size) 579 + modify_tree = 0; 580 + 579 581 while (1) { 580 582 recow = 0; 581 583 ret = btrfs_lookup_file_extent(trans, root, path, ino, 582 - search_start, -1); 584 + search_start, modify_tree); 583 585 if (ret < 0) 584 586 break; 585 587 if (ret > 0 && path->slots[0] > 0 && search_start == start) { ··· 638 634 } 639 635 640 636 search_start = max(key.offset, start); 641 - if (recow) { 637 + if (recow || !modify_tree) { 638 + modify_tree = -1; 642 639 btrfs_release_path(path); 643 640 continue; 644 641 }
+17 -35
fs/btrfs/inode.c
··· 1947 1947 * extent_io.c will try to find good copies for us. 1948 1948 */ 1949 1949 static int btrfs_readpage_end_io_hook(struct page *page, u64 start, u64 end, 1950 - struct extent_state *state) 1950 + struct extent_state *state, int mirror) 1951 1951 { 1952 1952 size_t offset = start - ((u64)page->index << PAGE_CACHE_SHIFT); 1953 1953 struct inode *inode = page->mapping->host; ··· 4069 4069 BTRFS_I(inode)->dummy_inode = 1; 4070 4070 4071 4071 inode->i_ino = BTRFS_EMPTY_SUBVOL_DIR_OBJECTID; 4072 - inode->i_op = &simple_dir_inode_operations; 4072 + inode->i_op = &btrfs_dir_ro_inode_operations; 4073 4073 inode->i_fop = &simple_dir_operations; 4074 4074 inode->i_mode = S_IFDIR | S_IRUGO | S_IWUSR | S_IXUGO; 4075 4075 inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME; ··· 4140 4140 static int btrfs_dentry_delete(const struct dentry *dentry) 4141 4141 { 4142 4142 struct btrfs_root *root; 4143 + struct inode *inode = dentry->d_inode; 4143 4144 4144 - if (!dentry->d_inode && !IS_ROOT(dentry)) 4145 - dentry = dentry->d_parent; 4145 + if (!inode && !IS_ROOT(dentry)) 4146 + inode = dentry->d_parent->d_inode; 4146 4147 4147 - if (dentry->d_inode) { 4148 - root = BTRFS_I(dentry->d_inode)->root; 4148 + if (inode) { 4149 + root = BTRFS_I(inode)->root; 4149 4150 if (btrfs_root_refs(&root->root_item) == 0) 4151 + return 1; 4152 + 4153 + if (btrfs_ino(inode) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID) 4150 4154 return 1; 4151 4155 } 4152 4156 return 0; ··· 4192 4188 struct btrfs_path *path; 4193 4189 struct list_head ins_list; 4194 4190 struct list_head del_list; 4195 - struct qstr q; 4196 4191 int ret; 4197 4192 struct extent_buffer *leaf; 4198 4193 int slot; ··· 4282 4279 4283 4280 while (di_cur < di_total) { 4284 4281 struct btrfs_key location; 4285 - struct dentry *tmp; 4286 4282 4287 4283 if (verify_dir_item(root, leaf, di)) 4288 4284 break; ··· 4302 4300 d_type = btrfs_filetype_table[btrfs_dir_type(leaf, di)]; 4303 4301 btrfs_dir_item_key_to_cpu(leaf, di, 
&location); 4304 4302 4305 - q.name = name_ptr; 4306 - q.len = name_len; 4307 - q.hash = full_name_hash(q.name, q.len); 4308 - tmp = d_lookup(filp->f_dentry, &q); 4309 - if (!tmp) { 4310 - struct btrfs_key *newkey; 4311 4303 4312 - newkey = kzalloc(sizeof(struct btrfs_key), 4313 - GFP_NOFS); 4314 - if (!newkey) 4315 - goto no_dentry; 4316 - tmp = d_alloc(filp->f_dentry, &q); 4317 - if (!tmp) { 4318 - kfree(newkey); 4319 - dput(tmp); 4320 - goto no_dentry; 4321 - } 4322 - memcpy(newkey, &location, 4323 - sizeof(struct btrfs_key)); 4324 - tmp->d_fsdata = newkey; 4325 - tmp->d_flags |= DCACHE_NEED_LOOKUP; 4326 - d_rehash(tmp); 4327 - dput(tmp); 4328 - } else { 4329 - dput(tmp); 4330 - } 4331 - no_dentry: 4332 4304 /* is this a reference to our own snapshot? If so 4333 - * skip it 4305 + * skip it. 4306 + * 4307 + * In contrast to old kernels, we insert the snapshot's 4308 + * dir item and dir index after it has been created, so 4309 + * we won't find a reference to our own snapshot. We 4310 + * still keep the following code for backward 4311 + * compatibility. 4334 4312 */ 4335 4313 if (location.type == BTRFS_ROOT_ITEM_KEY && 4336 4314 location.objectid == root->root_key.objectid) {
+4 -1
fs/btrfs/ioctl.c
··· 2262 2262 di_args->bytes_used = dev->bytes_used; 2263 2263 di_args->total_bytes = dev->total_bytes; 2264 2264 memcpy(di_args->uuid, dev->uuid, sizeof(di_args->uuid)); 2265 - strncpy(di_args->path, dev->name, sizeof(di_args->path)); 2265 + if (dev->name) 2266 + strncpy(di_args->path, dev->name, sizeof(di_args->path)); 2267 + else 2268 + di_args->path[0] = '\0'; 2266 2269 2267 2270 out: 2268 2271 if (ret == 0 && copy_to_user(arg, di_args, sizeof(*di_args)))
+30 -20
fs/btrfs/reada.c
··· 250 250 struct btrfs_bio *bbio) 251 251 { 252 252 int ret; 253 - int looped = 0; 254 253 struct reada_zone *zone; 255 254 struct btrfs_block_group_cache *cache = NULL; 256 255 u64 start; 257 256 u64 end; 258 257 int i; 259 258 260 - again: 261 259 zone = NULL; 262 260 spin_lock(&fs_info->reada_lock); 263 261 ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone, ··· 271 273 kref_put(&zone->refcnt, reada_zone_release); 272 274 spin_unlock(&fs_info->reada_lock); 273 275 } 274 - 275 - if (looped) 276 - return NULL; 277 276 278 277 cache = btrfs_lookup_block_group(fs_info, logical); 279 278 if (!cache) ··· 302 307 ret = radix_tree_insert(&dev->reada_zones, 303 308 (unsigned long)(zone->end >> PAGE_CACHE_SHIFT), 304 309 zone); 305 - spin_unlock(&fs_info->reada_lock); 306 310 307 - if (ret) { 311 + if (ret == -EEXIST) { 308 312 kfree(zone); 309 - looped = 1; 310 - goto again; 313 + ret = radix_tree_gang_lookup(&dev->reada_zones, (void **)&zone, 314 + logical >> PAGE_CACHE_SHIFT, 1); 315 + if (ret == 1) 316 + kref_get(&zone->refcnt); 311 317 } 318 + spin_unlock(&fs_info->reada_lock); 312 319 313 320 return zone; 314 321 } ··· 320 323 struct btrfs_key *top, int level) 321 324 { 322 325 int ret; 323 - int looped = 0; 324 326 struct reada_extent *re = NULL; 327 + struct reada_extent *re_exist = NULL; 325 328 struct btrfs_fs_info *fs_info = root->fs_info; 326 329 struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree; 327 330 struct btrfs_bio *bbio = NULL; 328 331 struct btrfs_device *dev; 332 + struct btrfs_device *prev_dev; 329 333 u32 blocksize; 330 334 u64 length; 331 335 int nzones = 0; 332 336 int i; 333 337 unsigned long index = logical >> PAGE_CACHE_SHIFT; 334 338 335 - again: 336 339 spin_lock(&fs_info->reada_lock); 337 340 re = radix_tree_lookup(&fs_info->reada_tree, index); 338 341 if (re) 339 342 kref_get(&re->refcnt); 340 343 spin_unlock(&fs_info->reada_lock); 341 344 342 - if (re || looped) 345 + if (re) 343 346 return re; 344 347 345 348 re = 
kzalloc(sizeof(*re), GFP_NOFS); ··· 395 398 /* insert extent in reada_tree + all per-device trees, all or nothing */ 396 399 spin_lock(&fs_info->reada_lock); 397 400 ret = radix_tree_insert(&fs_info->reada_tree, index, re); 398 - if (ret) { 401 + if (ret == -EEXIST) { 402 + re_exist = radix_tree_lookup(&fs_info->reada_tree, index); 403 + BUG_ON(!re_exist); 404 + kref_get(&re_exist->refcnt); 399 405 spin_unlock(&fs_info->reada_lock); 400 - if (ret != -ENOMEM) { 401 - /* someone inserted the extent in the meantime */ 402 - looped = 1; 403 - } 404 406 goto error; 405 407 } 408 + if (ret) { 409 + spin_unlock(&fs_info->reada_lock); 410 + goto error; 411 + } 412 + prev_dev = NULL; 406 413 for (i = 0; i < nzones; ++i) { 407 414 dev = bbio->stripes[i].dev; 415 + if (dev == prev_dev) { 416 + /* 417 + * in case of DUP, just add the first zone. As both 418 + * are on the same device, there's nothing to gain 419 + * from adding both. 420 + * Also, it wouldn't work, as the tree is per device 421 + * and adding would fail with EEXIST 422 + */ 423 + continue; 424 + } 425 + prev_dev = dev; 408 426 ret = radix_tree_insert(&dev->reada_extents, index, re); 409 427 if (ret) { 410 428 while (--i >= 0) { ··· 462 450 } 463 451 kfree(bbio); 464 452 kfree(re); 465 - if (looped) 466 - goto again; 467 - return NULL; 453 + return re_exist; 468 454 } 469 455 470 456 static void reada_kref_dummy(struct kref *kr)
+3 -1
fs/btrfs/relocation.c
··· 1279 1279 if (rb_node) 1280 1280 backref_tree_panic(rb_node, -EEXIST, node->bytenr); 1281 1281 } else { 1282 + spin_lock(&root->fs_info->trans_lock); 1282 1283 list_del_init(&root->root_list); 1284 + spin_unlock(&root->fs_info->trans_lock); 1283 1285 kfree(node); 1284 1286 } 1285 1287 return 0; ··· 3813 3811 3814 3812 ret = btrfs_block_rsv_check(rc->extent_root, rc->block_rsv, 5); 3815 3813 if (ret < 0) { 3816 - if (ret != -EAGAIN) { 3814 + if (ret != -ENOSPC) { 3817 3815 err = ret; 3818 3816 WARN_ON(1); 3819 3817 break;
-15
fs/btrfs/scrub.c
··· 1257 1257 if (memcmp(csum, on_disk_csum, sdev->csum_size)) 1258 1258 fail = 1; 1259 1259 1260 - if (fail) { 1261 - spin_lock(&sdev->stat_lock); 1262 - ++sdev->stat.csum_errors; 1263 - spin_unlock(&sdev->stat_lock); 1264 - } 1265 - 1266 1260 return fail; 1267 1261 } 1268 1262 ··· 1328 1334 btrfs_csum_final(crc, calculated_csum); 1329 1335 if (memcmp(calculated_csum, on_disk_csum, sdev->csum_size)) 1330 1336 ++crc_fail; 1331 - 1332 - if (crc_fail || fail) { 1333 - spin_lock(&sdev->stat_lock); 1334 - if (crc_fail) 1335 - ++sdev->stat.csum_errors; 1336 - if (fail) 1337 - ++sdev->stat.verify_errors; 1338 - spin_unlock(&sdev->stat_lock); 1339 - } 1340 1337 1341 1338 return fail || crc_fail; 1342 1339 }
+4 -3
fs/btrfs/super.c
··· 815 815 return 0; 816 816 } 817 817 818 - btrfs_start_delalloc_inodes(root, 0); 819 818 btrfs_wait_ordered_extents(root, 0, 0); 820 819 821 820 trans = btrfs_start_transaction(root, 0); ··· 1147 1148 if (ret) 1148 1149 goto restore; 1149 1150 } else { 1150 - if (fs_info->fs_devices->rw_devices == 0) 1151 + if (fs_info->fs_devices->rw_devices == 0) { 1151 1152 ret = -EACCES; 1152 1153 goto restore; 1154 + } 1153 1155 1154 - if (btrfs_super_log_root(fs_info->super_copy) != 0) 1156 + if (btrfs_super_log_root(fs_info->super_copy) != 0) { 1155 1157 ret = -EINVAL; 1156 1158 goto restore; 1159 + } 1157 1160 1158 1161 ret = btrfs_cleanup_fs_roots(fs_info); 1159 1162 if (ret)
+5 -1
fs/btrfs/transaction.c
··· 73 73 74 74 cur_trans = root->fs_info->running_transaction; 75 75 if (cur_trans) { 76 - if (cur_trans->aborted) 76 + if (cur_trans->aborted) { 77 + spin_unlock(&root->fs_info->trans_lock); 77 78 return cur_trans->aborted; 79 + } 78 80 atomic_inc(&cur_trans->use_count); 79 81 atomic_inc(&cur_trans->num_writers); 80 82 cur_trans->num_joined++; ··· 1402 1400 ret = commit_fs_roots(trans, root); 1403 1401 if (ret) { 1404 1402 mutex_unlock(&root->fs_info->tree_log_mutex); 1403 + mutex_unlock(&root->fs_info->reloc_mutex); 1405 1404 goto cleanup_transaction; 1406 1405 } 1407 1406 ··· 1414 1411 ret = commit_cowonly_roots(trans, root); 1415 1412 if (ret) { 1416 1413 mutex_unlock(&root->fs_info->tree_log_mutex); 1414 + mutex_unlock(&root->fs_info->reloc_mutex); 1417 1415 goto cleanup_transaction; 1418 1416 } 1419 1417
+9 -4
fs/btrfs/volumes.c
··· 3324 3324 stripe_size = devices_info[ndevs-1].max_avail; 3325 3325 num_stripes = ndevs * dev_stripes; 3326 3326 3327 - if (stripe_size * num_stripes > max_chunk_size * ncopies) { 3327 + if (stripe_size * ndevs > max_chunk_size * ncopies) { 3328 3328 stripe_size = max_chunk_size * ncopies; 3329 - do_div(stripe_size, num_stripes); 3329 + do_div(stripe_size, ndevs); 3330 3330 } 3331 3331 3332 3332 do_div(stripe_size, dev_stripes); 3333 + 3334 + /* align to BTRFS_STRIPE_LEN */ 3333 3335 do_div(stripe_size, BTRFS_STRIPE_LEN); 3334 3336 stripe_size *= BTRFS_STRIPE_LEN; 3335 3337 ··· 3807 3805 else if (mirror_num) 3808 3806 stripe_index += mirror_num - 1; 3809 3807 else { 3808 + int old_stripe_index = stripe_index; 3810 3809 stripe_index = find_live_mirror(map, stripe_index, 3811 3810 map->sub_stripes, stripe_index + 3812 3811 current->pid % map->sub_stripes); 3813 - mirror_num = stripe_index + 1; 3812 + mirror_num = stripe_index - old_stripe_index + 1; 3814 3813 } 3815 3814 } else { 3816 3815 /* ··· 4353 4350 4354 4351 ret = __btrfs_open_devices(fs_devices, FMODE_READ, 4355 4352 root->fs_info->bdev_holder); 4356 - if (ret) 4353 + if (ret) { 4354 + free_fs_devices(fs_devices); 4357 4355 goto out; 4356 + } 4358 4357 4359 4358 if (!fs_devices->seeding) { 4360 4359 __btrfs_close_devices(fs_devices);
-1
fs/buffer.c
··· 985 985 return page; 986 986 987 987 failed: 988 - BUG(); 989 988 unlock_page(page); 990 989 page_cache_release(page); 991 990 return NULL;
+8 -4
fs/cifs/cifsfs.c
··· 370 370 (int)(srcaddr->sa_family)); 371 371 } 372 372 373 - seq_printf(s, ",uid=%d", cifs_sb->mnt_uid); 373 + seq_printf(s, ",uid=%u", cifs_sb->mnt_uid); 374 374 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_UID) 375 375 seq_printf(s, ",forceuid"); 376 376 else 377 377 seq_printf(s, ",noforceuid"); 378 378 379 - seq_printf(s, ",gid=%d", cifs_sb->mnt_gid); 379 + seq_printf(s, ",gid=%u", cifs_sb->mnt_gid); 380 380 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID) 381 381 seq_printf(s, ",forcegid"); 382 382 else ··· 434 434 seq_printf(s, ",noperm"); 435 435 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) 436 436 seq_printf(s, ",strictcache"); 437 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_BACKUPUID) 438 + seq_printf(s, ",backupuid=%u", cifs_sb->mnt_backupuid); 439 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_BACKUPGID) 440 + seq_printf(s, ",backupgid=%u", cifs_sb->mnt_backupgid); 437 441 438 - seq_printf(s, ",rsize=%d", cifs_sb->rsize); 439 - seq_printf(s, ",wsize=%d", cifs_sb->wsize); 442 + seq_printf(s, ",rsize=%u", cifs_sb->rsize); 443 + seq_printf(s, ",wsize=%u", cifs_sb->wsize); 440 444 /* convert actimeo and display it in seconds */ 441 445 seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ); 442 446
+6 -6
fs/cifs/connect.c
··· 3228 3228 3229 3229 cifs_sb->mnt_uid = pvolume_info->linux_uid; 3230 3230 cifs_sb->mnt_gid = pvolume_info->linux_gid; 3231 - if (pvolume_info->backupuid_specified) 3232 - cifs_sb->mnt_backupuid = pvolume_info->backupuid; 3233 - if (pvolume_info->backupgid_specified) 3234 - cifs_sb->mnt_backupgid = pvolume_info->backupgid; 3235 3231 cifs_sb->mnt_file_mode = pvolume_info->file_mode; 3236 3232 cifs_sb->mnt_dir_mode = pvolume_info->dir_mode; 3237 3233 cFYI(1, "file mode: 0x%hx dir mode: 0x%hx", ··· 3258 3262 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_RWPIDFORWARD; 3259 3263 if (pvolume_info->cifs_acl) 3260 3264 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_ACL; 3261 - if (pvolume_info->backupuid_specified) 3265 + if (pvolume_info->backupuid_specified) { 3262 3266 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_BACKUPUID; 3263 - if (pvolume_info->backupgid_specified) 3267 + cifs_sb->mnt_backupuid = pvolume_info->backupuid; 3268 + } 3269 + if (pvolume_info->backupgid_specified) { 3264 3270 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_BACKUPGID; 3271 + cifs_sb->mnt_backupgid = pvolume_info->backupgid; 3272 + } 3265 3273 if (pvolume_info->override_uid) 3266 3274 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_OVERR_UID; 3267 3275 if (pvolume_info->override_gid)
+2 -1
fs/cifs/file.c
··· 2178 2178 unsigned long nr_pages, i; 2179 2179 size_t copied, len, cur_len; 2180 2180 ssize_t total_written = 0; 2181 - loff_t offset = *poffset; 2181 + loff_t offset; 2182 2182 struct iov_iter it; 2183 2183 struct cifsFileInfo *open_file; 2184 2184 struct cifs_tcon *tcon; ··· 2200 2200 cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 2201 2201 open_file = file->private_data; 2202 2202 tcon = tlink_tcon(open_file->tlink); 2203 + offset = *poffset; 2203 2204 2204 2205 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD) 2205 2206 pid = open_file->pid;
+12
fs/dlm/lock.c
··· 1737 1737 return 1; 1738 1738 1739 1739 /* 1740 + * Even if the convert is compat with all granted locks, 1741 + * QUECVT forces it behind other locks on the convert queue. 1742 + */ 1743 + 1744 + if (now && conv && (lkb->lkb_exflags & DLM_LKF_QUECVT)) { 1745 + if (list_empty(&r->res_convertqueue)) 1746 + return 1; 1747 + else 1748 + goto out; 1749 + } 1750 + 1751 + /* 1740 1752 * The NOORDER flag is set to avoid the standard vms rules on grant 1741 1753 * order. 1742 1754 */
+3 -1
fs/eventpoll.c
··· 1663 1663 if (op == EPOLL_CTL_ADD) { 1664 1664 if (is_file_epoll(tfile)) { 1665 1665 error = -ELOOP; 1666 - if (ep_loop_check(ep, tfile) != 0) 1666 + if (ep_loop_check(ep, tfile) != 0) { 1667 + clear_tfile_check_list(); 1667 1668 goto error_tgt_fput; 1669 + } 1668 1670 } else 1669 1671 list_add(&tfile->f_tfile_llink, &tfile_check_list); 1670 1672 }
+2
fs/ext4/super.c
··· 1597 1597 unsigned int *journal_ioprio, 1598 1598 int is_remount) 1599 1599 { 1600 + #ifdef CONFIG_QUOTA 1600 1601 struct ext4_sb_info *sbi = EXT4_SB(sb); 1602 + #endif 1601 1603 char *p; 1602 1604 substring_t args[MAX_OPT_ARGS]; 1603 1605 int token;
+7 -3
fs/gfs2/lock_dlm.c
··· 200 200 return -1; 201 201 } 202 202 203 - static u32 make_flags(const u32 lkid, const unsigned int gfs_flags, 203 + static u32 make_flags(struct gfs2_glock *gl, const unsigned int gfs_flags, 204 204 const int req) 205 205 { 206 206 u32 lkf = DLM_LKF_VALBLK; 207 + u32 lkid = gl->gl_lksb.sb_lkid; 207 208 208 209 if (gfs_flags & LM_FLAG_TRY) 209 210 lkf |= DLM_LKF_NOQUEUE; ··· 228 227 BUG(); 229 228 } 230 229 231 - if (lkid != 0) 230 + if (lkid != 0) { 232 231 lkf |= DLM_LKF_CONVERT; 232 + if (test_bit(GLF_BLOCKING, &gl->gl_flags)) 233 + lkf |= DLM_LKF_QUECVT; 234 + } 233 235 234 236 return lkf; 235 237 } ··· 254 250 char strname[GDLM_STRNAME_BYTES] = ""; 255 251 256 252 req = make_mode(req_state); 257 - lkf = make_flags(gl->gl_lksb.sb_lkid, flags, req); 253 + lkf = make_flags(gl, flags, req); 258 254 gfs2_glstats_inc(gl, GFS2_LKS_DCOUNT); 259 255 gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT); 260 256 if (gl->gl_lksb.sb_lkid) {
+1
fs/hugetlbfs/inode.c
··· 485 485 inode->i_fop = &simple_dir_operations; 486 486 /* directory inodes start off with i_nlink == 2 (for "." entry) */ 487 487 inc_nlink(inode); 488 + lockdep_annotate_inode_mutex_key(inode); 488 489 } 489 490 return inode; 490 491 }
+2 -2
fs/jbd2/commit.c
··· 723 723 if (commit_transaction->t_need_data_flush && 724 724 (journal->j_fs_dev != journal->j_dev) && 725 725 (journal->j_flags & JBD2_BARRIER)) 726 - blkdev_issue_flush(journal->j_fs_dev, GFP_KERNEL, NULL); 726 + blkdev_issue_flush(journal->j_fs_dev, GFP_NOFS, NULL); 727 727 728 728 /* Done it all: now write the commit record asynchronously. */ 729 729 if (JBD2_HAS_INCOMPAT_FEATURE(journal, ··· 859 859 if (JBD2_HAS_INCOMPAT_FEATURE(journal, 860 860 JBD2_FEATURE_INCOMPAT_ASYNC_COMMIT) && 861 861 journal->j_flags & JBD2_BARRIER) { 862 - blkdev_issue_flush(journal->j_dev, GFP_KERNEL, NULL); 862 + blkdev_issue_flush(journal->j_dev, GFP_NOFS, NULL); 863 863 } 864 864 865 865 if (err)
+2 -2
fs/nfs/dir.c
··· 1429 1429 } 1430 1430 1431 1431 open_flags = nd->intent.open.flags; 1432 - attr.ia_valid = 0; 1432 + attr.ia_valid = ATTR_OPEN; 1433 1433 1434 1434 ctx = create_nfs_open_context(dentry, open_flags); 1435 1435 res = ERR_CAST(ctx); ··· 1536 1536 if (IS_ERR(ctx)) 1537 1537 goto out; 1538 1538 1539 - attr.ia_valid = 0; 1539 + attr.ia_valid = ATTR_OPEN; 1540 1540 if (openflags & O_TRUNC) { 1541 1541 attr.ia_valid |= ATTR_SIZE; 1542 1542 attr.ia_size = 0;
+1
fs/nfs/nfs4_fs.h
··· 59 59 60 60 #define NFS_SEQID_CONFIRMED 1 61 61 struct nfs_seqid_counter { 62 + ktime_t create_time; 62 63 int owner_id; 63 64 int flags; 64 65 u32 counter;
+36 -8
fs/nfs/nfs4proc.c
··· 838 838 p->o_arg.open_flags = flags; 839 839 p->o_arg.fmode = fmode & (FMODE_READ|FMODE_WRITE); 840 840 p->o_arg.clientid = server->nfs_client->cl_clientid; 841 - p->o_arg.id = sp->so_seqid.owner_id; 841 + p->o_arg.id.create_time = ktime_to_ns(sp->so_seqid.create_time); 842 + p->o_arg.id.uniquifier = sp->so_seqid.owner_id; 842 843 p->o_arg.name = &dentry->d_name; 843 844 p->o_arg.server = server; 844 845 p->o_arg.bitmask = server->attr_bitmask; ··· 1467 1466 goto unlock_no_action; 1468 1467 rcu_read_unlock(); 1469 1468 } 1470 - /* Update sequence id. */ 1471 - data->o_arg.id = sp->so_seqid.owner_id; 1469 + /* Update client id. */ 1472 1470 data->o_arg.clientid = sp->so_server->nfs_client->cl_clientid; 1473 1471 if (data->o_arg.claim == NFS4_OPEN_CLAIM_PREVIOUS) { 1474 1472 task->tk_msg.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_OPEN_NOATTR]; ··· 1954 1954 }; 1955 1955 int err; 1956 1956 do { 1957 - err = nfs4_handle_exception(server, 1958 - _nfs4_do_setattr(inode, cred, fattr, sattr, state), 1959 - &exception); 1957 + err = _nfs4_do_setattr(inode, cred, fattr, sattr, state); 1958 + switch (err) { 1959 + case -NFS4ERR_OPENMODE: 1960 + if (state && !(state->state & FMODE_WRITE)) { 1961 + err = -EBADF; 1962 + if (sattr->ia_valid & ATTR_OPEN) 1963 + err = -EACCES; 1964 + goto out; 1965 + } 1966 + } 1967 + err = nfs4_handle_exception(server, err, &exception); 1960 1968 } while (exception.retry); 1969 + out: 1961 1970 return err; 1962 1971 } 1963 1972 ··· 4567 4558 static int nfs4_lock_reclaim(struct nfs4_state *state, struct file_lock *request) 4568 4559 { 4569 4560 struct nfs_server *server = NFS_SERVER(state->inode); 4570 - struct nfs4_exception exception = { }; 4561 + struct nfs4_exception exception = { 4562 + .inode = state->inode, 4563 + }; 4571 4564 int err; 4572 4565 4573 4566 do { ··· 4587 4576 static int nfs4_lock_expired(struct nfs4_state *state, struct file_lock *request) 4588 4577 { 4589 4578 struct nfs_server *server = NFS_SERVER(state->inode); 4590 - 
struct nfs4_exception exception = { }; 4579 + struct nfs4_exception exception = { 4580 + .inode = state->inode, 4581 + }; 4591 4582 int err; 4592 4583 4593 4584 err = nfs4_set_lock_state(state, request); ··· 4689 4676 { 4690 4677 struct nfs4_exception exception = { 4691 4678 .state = state, 4679 + .inode = state->inode, 4692 4680 }; 4693 4681 int err; 4694 4682 ··· 4735 4721 4736 4722 if (state == NULL) 4737 4723 return -ENOLCK; 4724 + /* 4725 + * Don't rely on the VFS having checked the file open mode, 4726 + * since it won't do this for flock() locks. 4727 + */ 4728 + switch (request->fl_type & (F_RDLCK|F_WRLCK|F_UNLCK)) { 4729 + case F_RDLCK: 4730 + if (!(filp->f_mode & FMODE_READ)) 4731 + return -EBADF; 4732 + break; 4733 + case F_WRLCK: 4734 + if (!(filp->f_mode & FMODE_WRITE)) 4735 + return -EBADF; 4736 + } 4737 + 4738 4738 do { 4739 4739 status = nfs4_proc_setlk(state, cmd, request); 4740 4740 if ((status != -EAGAIN) || IS_SETLK(cmd))
+19 -12
fs/nfs/nfs4state.c
··· 393 393 static void 394 394 nfs4_init_seqid_counter(struct nfs_seqid_counter *sc) 395 395 { 396 + sc->create_time = ktime_get(); 396 397 sc->flags = 0; 397 398 sc->counter = 0; 398 399 spin_lock_init(&sc->lock); ··· 435 434 static void 436 435 nfs4_drop_state_owner(struct nfs4_state_owner *sp) 437 436 { 438 - if (!RB_EMPTY_NODE(&sp->so_server_node)) { 437 + struct rb_node *rb_node = &sp->so_server_node; 438 + 439 + if (!RB_EMPTY_NODE(rb_node)) { 439 440 struct nfs_server *server = sp->so_server; 440 441 struct nfs_client *clp = server->nfs_client; 441 442 442 443 spin_lock(&clp->cl_lock); 443 - rb_erase(&sp->so_server_node, &server->state_owners); 444 - RB_CLEAR_NODE(&sp->so_server_node); 444 + if (!RB_EMPTY_NODE(rb_node)) { 445 + rb_erase(rb_node, &server->state_owners); 446 + RB_CLEAR_NODE(rb_node); 447 + } 445 448 spin_unlock(&clp->cl_lock); 446 449 } 447 450 } ··· 521 516 /** 522 517 * nfs4_put_state_owner - Release a nfs4_state_owner 523 518 * @sp: state owner data to release 519 + * 520 + * Note that we keep released state owners on an LRU 521 + * list. 522 + * This caches valid state owners so that they can be 523 + * reused, to avoid the OPEN_CONFIRM on minor version 0. 524 + * It also pins the uniquifier of dropped state owners for 525 + * a while, to ensure that those state owner names are 526 + * never reused. 524 527 */ 525 528 void nfs4_put_state_owner(struct nfs4_state_owner *sp) 526 529 { ··· 538 525 if (!atomic_dec_and_lock(&sp->so_count, &clp->cl_lock)) 539 526 return; 540 527 541 - if (!RB_EMPTY_NODE(&sp->so_server_node)) { 542 - sp->so_expires = jiffies; 543 - list_add_tail(&sp->so_lru, &server->state_owners_lru); 544 - spin_unlock(&clp->cl_lock); 545 - } else { 546 - nfs4_remove_state_owner_locked(sp); 547 - spin_unlock(&clp->cl_lock); 548 - nfs4_free_state_owner(sp); 549 - } 528 + sp->so_expires = jiffies; 529 + list_add_tail(&sp->so_lru, &server->state_owners_lru); 530 + spin_unlock(&clp->cl_lock); 550 531 } 551 532 552 533 /**
+5 -4
fs/nfs/nfs4xdr.c
··· 74 74 /* lock,open owner id: 75 75 * we currently use size 2 (u64) out of (NFS4_OPAQUE_LIMIT >> 2) 76 76 */ 77 - #define open_owner_id_maxsz (1 + 1 + 4) 77 + #define open_owner_id_maxsz (1 + 2 + 1 + 1 + 2) 78 78 #define lock_owner_id_maxsz (1 + 1 + 4) 79 79 #define decode_lockowner_maxsz (1 + XDR_QUADLEN(IDMAP_NAMESZ)) 80 80 #define compound_encode_hdr_maxsz (3 + (NFS4_MAXTAGLEN >> 2)) ··· 1340 1340 */ 1341 1341 encode_nfs4_seqid(xdr, arg->seqid); 1342 1342 encode_share_access(xdr, arg->fmode); 1343 - p = reserve_space(xdr, 32); 1343 + p = reserve_space(xdr, 36); 1344 1344 p = xdr_encode_hyper(p, arg->clientid); 1345 - *p++ = cpu_to_be32(20); 1345 + *p++ = cpu_to_be32(24); 1346 1346 p = xdr_encode_opaque_fixed(p, "open id:", 8); 1347 1347 *p++ = cpu_to_be32(arg->server->s_dev); 1348 - xdr_encode_hyper(p, arg->id); 1348 + *p++ = cpu_to_be32(arg->id.uniquifier); 1349 + xdr_encode_hyper(p, arg->id.create_time); 1349 1350 } 1350 1351 1351 1352 static inline void encode_createmode(struct xdr_stream *xdr, const struct nfs_openargs *arg)
+1 -1
fs/nfs/read.c
··· 322 322 while (!list_empty(res)) { 323 323 data = list_entry(res->next, struct nfs_read_data, list); 324 324 list_del(&data->list); 325 - nfs_readdata_free(data); 325 + nfs_readdata_release(data); 326 326 } 327 327 nfs_readpage_release(req); 328 328 return -ENOMEM;
+6 -2
fs/nfs/super.c
··· 2767 2767 char *root_devname; 2768 2768 size_t len; 2769 2769 2770 - len = strlen(hostname) + 3; 2770 + len = strlen(hostname) + 5; 2771 2771 root_devname = kmalloc(len, GFP_KERNEL); 2772 2772 if (root_devname == NULL) 2773 2773 return ERR_PTR(-ENOMEM); 2774 - snprintf(root_devname, len, "%s:/", hostname); 2774 + /* Does hostname need to be enclosed in brackets? */ 2775 + if (strchr(hostname, ':')) 2776 + snprintf(root_devname, len, "[%s]:/", hostname); 2777 + else 2778 + snprintf(root_devname, len, "%s:/", hostname); 2775 2779 root_mnt = vfs_kern_mount(fs_type, flags, root_devname, data); 2776 2780 kfree(root_devname); 2777 2781 return root_mnt;
+3 -2
fs/nfs/write.c
··· 682 682 req->wb_bytes = rqend - req->wb_offset; 683 683 out_unlock: 684 684 spin_unlock(&inode->i_lock); 685 - nfs_clear_request_commit(req); 685 + if (req) 686 + nfs_clear_request_commit(req); 686 687 return req; 687 688 out_flushme: 688 689 spin_unlock(&inode->i_lock); ··· 1019 1018 while (!list_empty(res)) { 1020 1019 data = list_entry(res->next, struct nfs_write_data, list); 1021 1020 list_del(&data->list); 1022 - nfs_writedata_free(data); 1021 + nfs_writedata_release(data); 1023 1022 } 1024 1023 nfs_redirty_request(req); 1025 1024 return -ENOMEM;
+29 -2
fs/pipe.c
··· 346 346 .get = generic_pipe_buf_get, 347 347 }; 348 348 349 + static const struct pipe_buf_operations packet_pipe_buf_ops = { 350 + .can_merge = 0, 351 + .map = generic_pipe_buf_map, 352 + .unmap = generic_pipe_buf_unmap, 353 + .confirm = generic_pipe_buf_confirm, 354 + .release = anon_pipe_buf_release, 355 + .steal = generic_pipe_buf_steal, 356 + .get = generic_pipe_buf_get, 357 + }; 358 + 349 359 static ssize_t 350 360 pipe_read(struct kiocb *iocb, const struct iovec *_iov, 351 361 unsigned long nr_segs, loff_t pos) ··· 417 407 ret += chars; 418 408 buf->offset += chars; 419 409 buf->len -= chars; 410 + 411 + /* Was it a packet buffer? Clean up and exit */ 412 + if (buf->flags & PIPE_BUF_FLAG_PACKET) { 413 + total_len = chars; 414 + buf->len = 0; 415 + } 416 + 420 417 if (!buf->len) { 421 418 buf->ops = NULL; 422 419 ops->release(pipe, buf); ··· 474 457 if (ret > 0) 475 458 file_accessed(filp); 476 459 return ret; 460 + } 461 + 462 + static inline int is_packetized(struct file *file) 463 + { 464 + return (file->f_flags & O_DIRECT) != 0; 477 465 } 478 466 479 467 static ssize_t ··· 615 593 buf->ops = &anon_pipe_buf_ops; 616 594 buf->offset = 0; 617 595 buf->len = chars; 596 + buf->flags = 0; 597 + if (is_packetized(filp)) { 598 + buf->ops = &packet_pipe_buf_ops; 599 + buf->flags = PIPE_BUF_FLAG_PACKET; 600 + } 618 601 pipe->nrbufs = ++bufs; 619 602 pipe->tmp_page = NULL; 620 603 ··· 1040 1013 goto err_dentry; 1041 1014 f->f_mapping = inode->i_mapping; 1042 1015 1043 - f->f_flags = O_WRONLY | (flags & O_NONBLOCK); 1016 + f->f_flags = O_WRONLY | (flags & (O_NONBLOCK | O_DIRECT)); 1044 1017 f->f_version = 0; 1045 1018 1046 1019 return f; ··· 1084 1057 int error; 1085 1058 int fdw, fdr; 1086 1059 1087 - if (flags & ~(O_CLOEXEC | O_NONBLOCK)) 1060 + if (flags & ~(O_CLOEXEC | O_NONBLOCK | O_DIRECT)) 1088 1061 return -EINVAL; 1089 1062 1090 1063 fw = create_write_pipe(flags);
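The fs/pipe.c hunk above wires `O_DIRECT` up as a "packetized pipe" mode: each `write()` becomes one discrete packet, and a `read()` consumes at most one packet, even with a larger buffer. A minimal userspace sketch of that behavior (assuming a kernel with this change, i.e. Linux 3.4 or later, and glibc's `pipe2()`):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Returns 0 when two writes come back as two separate reads of
 * 3 and 5 bytes; nonzero otherwise (or if packet pipes are missing). */
int packet_pipe_demo(void)
{
    int fds[2];
    char buf[64];
    ssize_t n1, n2;

    if (pipe2(fds, O_DIRECT) != 0)
        return -1;                      /* pre-3.4 kernel */

    if (write(fds[1], "abc", 3) != 3 ||
        write(fds[1], "defgh", 5) != 5)
        return 1;

    /* Each read() drains exactly one packet, even though buf
     * could hold both packets at once. */
    n1 = read(fds[0], buf, sizeof(buf));
    n2 = read(fds[0], buf, sizeof(buf));

    close(fds[0]);
    close(fds[1]);
    return (n1 == 3 && n2 == 5) ? 0 : 1;
}
```

Note how this matches the diff: `is_packetized()` checks `O_DIRECT`, each write buffer gets `PIPE_BUF_FLAG_PACKET`, and the read side truncates `total_len` at a packet boundary.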
-3
fs/proc/task_mmu.c
··· 597 597 if (!page) 598 598 continue; 599 599 600 - if (PageReserved(page)) 601 - continue; 602 - 603 600 /* Clear accessed and referenced bits. */ 604 601 ptep_test_and_clear_young(vma, addr, pte); 605 602 ClearPageReferenced(page);
+11 -3
include/asm-generic/siginfo.h
··· 35 35 #define __ARCH_SI_BAND_T long 36 36 #endif 37 37 38 + #ifndef __ARCH_SI_CLOCK_T 39 + #define __ARCH_SI_CLOCK_T __kernel_clock_t 40 + #endif 41 + 42 + #ifndef __ARCH_SI_ATTRIBUTES 43 + #define __ARCH_SI_ATTRIBUTES 44 + #endif 45 + 38 46 #ifndef HAVE_ARCH_SIGINFO_T 39 47 40 48 typedef struct siginfo { ··· 80 72 __kernel_pid_t _pid; /* which child */ 81 73 __ARCH_SI_UID_T _uid; /* sender's uid */ 82 74 int _status; /* exit code */ 83 - __kernel_clock_t _utime; 84 - __kernel_clock_t _stime; 75 + __ARCH_SI_CLOCK_T _utime; 76 + __ARCH_SI_CLOCK_T _stime; 85 77 } _sigchld; 86 78 87 79 /* SIGILL, SIGFPE, SIGSEGV, SIGBUS */ ··· 99 91 int _fd; 100 92 } _sigpoll; 101 93 } _sifields; 102 - } siginfo_t; 94 + } __ARCH_SI_ATTRIBUTES siginfo_t; 103 95 104 96 #endif 105 97
+4
include/linux/gpio-pxa.h
··· 13 13 14 14 extern int pxa_irq_to_gpio(int irq); 15 15 16 + struct pxa_gpio_platform_data { 17 + int (*gpio_set_wake)(unsigned int gpio, unsigned int on); 18 + }; 19 + 16 20 #endif /* __GPIO_PXA_H */
+17 -14
include/linux/hsi/hsi.h
··· 26 26 #include <linux/device.h> 27 27 #include <linux/mutex.h> 28 28 #include <linux/scatterlist.h> 29 - #include <linux/spinlock.h> 30 29 #include <linux/list.h> 31 30 #include <linux/module.h> 31 + #include <linux/notifier.h> 32 32 33 33 /* HSI message ttype */ 34 34 #define HSI_MSG_READ 0 ··· 121 121 * @device: Driver model representation of the device 122 122 * @tx_cfg: HSI TX configuration 123 123 * @rx_cfg: HSI RX configuration 124 - * @hsi_start_rx: Called after incoming wake line goes high 125 - * @hsi_stop_rx: Called after incoming wake line goes low 124 + * @e_handler: Callback for handling port events (RX Wake High/Low) 125 + * @pclaimed: Keeps track of whether the client claimed its associated HSI port 126 + * @nb: Notifier block for port events 126 127 */ 127 128 struct hsi_client { 128 129 struct device device; 129 130 struct hsi_config tx_cfg; 130 131 struct hsi_config rx_cfg; 131 - void (*hsi_start_rx)(struct hsi_client *cl); 132 - void (*hsi_stop_rx)(struct hsi_client *cl); 133 132 /* private: */ 133 + void (*ehandler)(struct hsi_client *, unsigned long); 134 134 unsigned int pclaimed:1; 135 - struct list_head link; 135 + struct notifier_block nb; 136 136 }; 137 137 138 138 #define to_hsi_client(dev) container_of(dev, struct hsi_client, device) ··· 146 146 { 147 147 return dev_get_drvdata(&cl->device); 148 148 } 149 + 150 + int hsi_register_port_event(struct hsi_client *cl, 151 + void (*handler)(struct hsi_client *, unsigned long)); 152 + int hsi_unregister_port_event(struct hsi_client *cl); 149 153 150 154 /** 151 155 * struct hsi_client_driver - Driver associated to an HSI client ··· 218 214 * @start_tx: Callback to inform that a client wants to TX data 219 215 * @stop_tx: Callback to inform that a client no longer wishes to TX data 220 216 * @release: Callback to inform that a client no longer uses the port 221 - * @clients: List of hsi_clients using the port. 222 - * @clock: Lock to serialize access to the clients list. 
217 + * @n_head: Notifier chain for signaling port events to the clients. 223 218 */ 224 219 struct hsi_port { 225 220 struct device device; ··· 234 231 int (*start_tx)(struct hsi_client *cl); 235 232 int (*stop_tx)(struct hsi_client *cl); 236 233 int (*release)(struct hsi_client *cl); 237 - struct list_head clients; 238 - spinlock_t clock; 234 + /* private */ 235 + struct atomic_notifier_head n_head; 239 236 }; 240 237 241 238 #define to_hsi_port(dev) container_of(dev, struct hsi_port, device) 242 239 #define hsi_get_port(cl) to_hsi_port((cl)->device.parent) 243 240 244 - void hsi_event(struct hsi_port *port, unsigned int event); 241 + int hsi_event(struct hsi_port *port, unsigned long event); 245 242 int hsi_claim_port(struct hsi_client *cl, unsigned int share); 246 243 void hsi_release_port(struct hsi_client *cl); 247 244 ··· 273 270 struct module *owner; 274 271 unsigned int id; 275 272 unsigned int num_ports; 276 - struct hsi_port *port; 273 + struct hsi_port **port; 277 274 }; 278 275 279 276 #define to_hsi_controller(dev) container_of(dev, struct hsi_controller, device) 280 277 281 278 struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags); 282 - void hsi_free_controller(struct hsi_controller *hsi); 279 + void hsi_put_controller(struct hsi_controller *hsi); 283 280 int hsi_register_controller(struct hsi_controller *hsi); 284 281 void hsi_unregister_controller(struct hsi_controller *hsi); 285 282 ··· 297 294 static inline struct hsi_port *hsi_find_port_num(struct hsi_controller *hsi, 298 295 unsigned int num) 299 296 { 300 - return (num < hsi->num_ports) ? &hsi->port[num] : NULL; 297 + return (num < hsi->num_ports) ? hsi->port[num] : NULL; 301 298 } 302 299 303 300 /*
+7
include/linux/irq.h
··· 49 49 * IRQ_TYPE_LEVEL_LOW - low level triggered 50 50 * IRQ_TYPE_LEVEL_MASK - Mask to filter out the level bits 51 51 * IRQ_TYPE_SENSE_MASK - Mask for all the above bits 52 + * IRQ_TYPE_DEFAULT - For use by some PICs to ask irq_set_type 53 + * to setup the HW to a sane default (used 54 + * by irqdomain map() callbacks to synchronize 55 + * the HW state and SW flags for a newly 56 + * allocated descriptor). 57 + * 52 58 * IRQ_TYPE_PROBE - Special flag for probing in progress 53 59 * 54 60 * Bits which can be modified via irq_set/clear/modify_status_flags() ··· 83 77 IRQ_TYPE_LEVEL_LOW = 0x00000008, 84 78 IRQ_TYPE_LEVEL_MASK = (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH), 85 79 IRQ_TYPE_SENSE_MASK = 0x0000000f, 80 + IRQ_TYPE_DEFAULT = IRQ_TYPE_SENSE_MASK, 86 81 87 82 IRQ_TYPE_PROBE = 0x00000010, 88 83
+6 -1
include/linux/nfs_xdr.h
··· 312 312 int rpc_status; 313 313 }; 314 314 315 + struct stateowner_id { 316 + __u64 create_time; 317 + __u32 uniquifier; 318 + }; 319 + 315 320 /* 316 321 * Arguments to the open call. 317 322 */ ··· 326 321 int open_flags; 327 322 fmode_t fmode; 328 323 __u64 clientid; 329 - __u64 id; 324 + struct stateowner_id id; 330 325 union { 331 326 struct { 332 327 struct iattr * attrs; /* UNCHECKED, GUARDED */
+1
include/linux/pipe_fs_i.h
··· 6 6 #define PIPE_BUF_FLAG_LRU 0x01 /* page is on the LRU */ 7 7 #define PIPE_BUF_FLAG_ATOMIC 0x02 /* was atomically mapped */ 8 8 #define PIPE_BUF_FLAG_GIFT 0x04 /* page is a gift */ 9 + #define PIPE_BUF_FLAG_PACKET 0x08 /* read() as a packet */ 9 10 10 11 /** 11 12 * struct pipe_buffer - a linux kernel pipe buffer
+4 -3
include/linux/skbuff.h
··· 238 238 /* 239 239 * The callback notifies userspace to release buffers when skb DMA is done in 240 240 * lower device, the skb last reference should be 0 when calling this. 241 - * The desc is used to track userspace buffer index. 241 + * The ctx field is used to track device context. 242 + * The desc field is used to track userspace buffer index. 242 243 */ 243 244 struct ubuf_info { 244 - void (*callback)(void *); 245 - void *arg; 245 + void (*callback)(struct ubuf_info *); 246 + void *ctx; 246 247 unsigned long desc; 247 248 }; 248 249
+1 -1
include/linux/spi/spi.h
··· 254 254 * driver is finished with this message, it must call 255 255 * spi_finalize_current_message() so the subsystem can issue the next 256 256 * transfer 257 - * @prepare_transfer_hardware: there are currently no more messages on the 257 + * @unprepare_transfer_hardware: there are currently no more messages on the 258 258 * queue so the subsystem notifies the driver that it may relax the 259 259 * hardware by issuing this call 260 260 *
+2
include/linux/usb/hcd.h
··· 126 126 unsigned wireless:1; /* Wireless USB HCD */ 127 127 unsigned authorized_default:1; 128 128 unsigned has_tt:1; /* Integrated TT in root hub */ 129 + unsigned broken_pci_sleep:1; /* Don't put the 130 + controller in PCI-D3 for system sleep */ 129 131 130 132 unsigned int irq; /* irq allocated */ 131 133 void __iomem *regs; /* device memory/io */
+3 -2
include/linux/vm_event_item.h
··· 26 26 PGFREE, PGACTIVATE, PGDEACTIVATE, 27 27 PGFAULT, PGMAJFAULT, 28 28 FOR_ALL_ZONES(PGREFILL), 29 - FOR_ALL_ZONES(PGSTEAL), 29 + FOR_ALL_ZONES(PGSTEAL_KSWAPD), 30 + FOR_ALL_ZONES(PGSTEAL_DIRECT), 30 31 FOR_ALL_ZONES(PGSCAN_KSWAPD), 31 32 FOR_ALL_ZONES(PGSCAN_DIRECT), 32 33 #ifdef CONFIG_NUMA 33 34 PGSCAN_ZONE_RECLAIM_FAILED, 34 35 #endif 35 - PGINODESTEAL, SLABS_SCANNED, KSWAPD_STEAL, KSWAPD_INODESTEAL, 36 + PGINODESTEAL, SLABS_SCANNED, KSWAPD_INODESTEAL, 36 37 KSWAPD_LOW_WMARK_HIT_QUICKLY, KSWAPD_HIGH_WMARK_HIT_QUICKLY, 37 38 KSWAPD_SKIP_CONGESTION_WAIT, 38 39 PAGEOUTRUN, ALLOCSTALL, PGROTATED,
+5 -1
include/net/dst.h
··· 36 36 struct net_device *dev; 37 37 struct dst_ops *ops; 38 38 unsigned long _metrics; 39 - unsigned long expires; 39 + union { 40 + unsigned long expires; 41 + /* points to the dst_entry this one was copied from */ 42 + struct dst_entry *from; 43 + }; 40 44 struct dst_entry *path; 41 45 struct neighbour __rcu *_neighbour; 42 46 #ifdef CONFIG_XFRM
+48
include/net/ip6_fib.h
··· 123 123 return ((struct rt6_info *)dst)->rt6i_idev; 124 124 } 125 125 126 + static inline void rt6_clean_expires(struct rt6_info *rt) 127 + { 128 + if (!(rt->rt6i_flags & RTF_EXPIRES) && rt->dst.from) 129 + dst_release(rt->dst.from); 130 + 131 + rt->rt6i_flags &= ~RTF_EXPIRES; 132 + rt->dst.from = NULL; 133 + } 134 + 135 + static inline void rt6_set_expires(struct rt6_info *rt, unsigned long expires) 136 + { 137 + if (!(rt->rt6i_flags & RTF_EXPIRES) && rt->dst.from) 138 + dst_release(rt->dst.from); 139 + 140 + rt->rt6i_flags |= RTF_EXPIRES; 141 + rt->dst.expires = expires; 142 + } 143 + 144 + static inline void rt6_update_expires(struct rt6_info *rt, int timeout) 145 + { 146 + if (!(rt->rt6i_flags & RTF_EXPIRES)) { 147 + if (rt->dst.from) 148 + dst_release(rt->dst.from); 149 + /* dst_set_expires relies on expires == 0 150 + * if it has not been set previously. 151 + */ 152 + rt->dst.expires = 0; 153 + } 154 + 155 + dst_set_expires(&rt->dst, timeout); 156 + rt->rt6i_flags |= RTF_EXPIRES; 157 + } 158 + 159 + static inline void rt6_set_from(struct rt6_info *rt, struct rt6_info *from) 160 + { 161 + struct dst_entry *new = (struct dst_entry *) from; 162 + 163 + if (!(rt->rt6i_flags & RTF_EXPIRES) && rt->dst.from) { 164 + if (new == rt->dst.from) 165 + return; 166 + dst_release(rt->dst.from); 167 + } 168 + 169 + rt->rt6i_flags &= ~RTF_EXPIRES; 170 + rt->dst.from = new; 171 + dst_hold(new); 172 + } 173 + 126 174 struct fib6_walker_t { 127 175 struct list_head lh; 128 176 struct fib6_node *root, *node;
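The rt6 helpers above keep the `RTF_EXPIRES` flag in sync with the new `expires`/`from` union in `struct dst_entry`: the flag says which union member is currently live, and each setter drops any reference held through `dst.from` before switching members. A toy model of that flag-discriminated union (all `toy_` names are invented for illustration; refcounting is omitted):

```c
#include <stddef.h>

#define TOY_RTF_EXPIRES 0x1

struct toy_dst;                     /* stand-in for struct dst_entry */

struct toy_rt {
    unsigned int flags;
    union {
        unsigned long expires;      /* valid iff TOY_RTF_EXPIRES is set */
        struct toy_dst *from;       /* valid otherwise */
    };
};

/* Mirror of rt6_set_expires(): flag in, expires member live. */
void toy_set_expires(struct toy_rt *rt, unsigned long expires)
{
    rt->flags |= TOY_RTF_EXPIRES;
    rt->expires = expires;
}

/* Mirror of rt6_set_from(): flag out, from member live. */
void toy_set_from(struct toy_rt *rt, struct toy_dst *from)
{
    rt->flags &= ~TOY_RTF_EXPIRES;
    rt->from = from;
}

int toy_has_expiry(const struct toy_rt *rt)
{
    return (rt->flags & TOY_RTF_EXPIRES) != 0;
}
```

The invariant the kernel helpers enforce is exactly this: never read `dst.expires` unless `RTF_EXPIRES` is set, and never read `dst.from` when it is.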
+3 -3
include/net/red.h
··· 245 245 * 246 246 * dummy packets as a burst after idle time, i.e. 247 247 * 248 - * p->qavg *= (1-W)^m 248 + * v->qavg *= (1-W)^m 249 249 * 250 250 * This is an apparently overcomplicated solution (f.e. we have to 251 251 * precompute a table to make this calculation in reasonable time) ··· 279 279 unsigned int backlog) 280 280 { 281 281 /* 282 - * NOTE: p->qavg is fixed point number with point at Wlog. 282 + * NOTE: v->qavg is fixed point number with point at Wlog. 283 283 * The formula below is equivalent to floating point 284 284 * version: 285 285 * ··· 390 390 if (red_is_idling(v)) 391 391 qavg = red_calc_qavg_from_idle_time(p, v); 392 392 393 - /* p->qavg is fixed point number with point at Wlog */ 393 + /* v->qavg is fixed point number with point at Wlog */ 394 394 qavg >>= p->Wlog; 395 395 396 396 if (qavg > p->target_max && p->max_P <= MAX_P_MAX)
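The comments above describe RED's averaging in fixed point: `v->qavg` keeps its binary point at `Wlog` bits, so the weight W = 2^-Wlog turns the EWMA update into shifts. A simplified, hypothetical model of one update step (not the kernel's implementation, which folds this into its idle-time handling):

```c
/* One EWMA step: qavg += W * (backlog - qavg), with W = 2^-wlog.
 * qavg is a fixed-point number with the binary point at wlog bits,
 * so backlog is scaled up into the same domain before subtracting. */
long red_avg_step(long qavg, unsigned int backlog, int wlog)
{
    return qavg + ((((long)backlog << wlog) - qavg) >> wlog);
}
```

With `wlog = 9` the average creeps toward `backlog << 9` by roughly 1/512 of the remaining gap per step, which is why the comment block above bothers with a precomputed table for the burst-after-idle case.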
+1
include/net/sock.h
··· 246 246 * @sk_user_data: RPC layer private data 247 247 * @sk_sndmsg_page: cached page for sendmsg 248 248 * @sk_sndmsg_off: cached offset for sendmsg 249 + * @sk_peek_off: current peek_offset value 249 250 * @sk_send_head: front of stuff to transmit 250 251 * @sk_security: used by security modules 251 252 * @sk_mark: generic packet mark
+13 -12
init/main.c
··· 225 225 226 226 early_param("loglevel", loglevel); 227 227 228 - /* 229 - * Unknown boot options get handed to init, unless they look like 230 - * unused parameters (modprobe will find them in /proc/cmdline). 231 - */ 232 - static int __init unknown_bootoption(char *param, char *val) 228 + /* Change NUL term back to "=", to make "param" the whole string. */ 229 + static int __init repair_env_string(char *param, char *val) 233 230 { 234 - /* Change NUL term back to "=", to make "param" the whole string. */ 235 231 if (val) { 236 232 /* param=val or param="val"? */ 237 233 if (val == param+strlen(param)+1) ··· 239 243 } else 240 244 BUG(); 241 245 } 246 + return 0; 247 + } 248 + 249 + /* 250 + * Unknown boot options get handed to init, unless they look like 251 + * unused parameters (modprobe will find them in /proc/cmdline). 252 + */ 253 + static int __init unknown_bootoption(char *param, char *val) 254 + { 255 + repair_env_string(param, val); 242 256 243 257 /* Handle obsolete-style parameters */ 244 258 if (obsolete_checksetup(param)) ··· 738 732 "late parameters", 739 733 }; 740 734 741 - static int __init ignore_unknown_bootoption(char *param, char *val) 742 - { 743 - return 0; 744 - } 745 - 746 735 static void __init do_initcall_level(int level) 747 736 { 748 737 extern const struct kernel_param __start___param[], __stop___param[]; ··· 748 747 static_command_line, __start___param, 749 748 __stop___param - __start___param, 750 749 level, level, 751 - ignore_unknown_bootoption); 750 + repair_env_string); 752 751 753 752 for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++) 754 753 do_one_initcall(*fn);
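`repair_env_string()` above undoes the in-place split that the kernel's option parser performs: `"name=val"` is cut at the `'='` with a NUL so `param` and `val` can be used separately, and putting the `'='` back makes `param` the whole original string again. A minimal model of the unquoted case (the quoted-value `memmove` branch is omitted here):

```c
#include <string.h>

/* param and val point into one NUL-split buffer "name\0value";
 * restore it to "name=value". No-op when there was no value. */
void toy_repair(char *param, char *val)
{
    if (val && val == param + strlen(param) + 1)
        val[-1] = '=';      /* the NUL sat where '=' used to be */
}
```

This is why the diff can reuse one helper both for `unknown_bootoption()` and in place of the old `ignore_unknown_bootoption()`: repairing the string is the only side effect either caller needs.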
+1 -1
kernel/events/core.c
··· 3183 3183 perf_event_for_each_child(event, func); 3184 3184 func(event); 3185 3185 list_for_each_entry(sibling, &event->sibling_list, group_entry) 3186 - perf_event_for_each_child(event, func); 3186 + perf_event_for_each_child(sibling, func); 3187 3187 mutex_unlock(&ctx->mutex); 3188 3188 } 3189 3189
+19 -19
kernel/irq/debug.h
··· 4 4 5 5 #include <linux/kallsyms.h> 6 6 7 - #define P(f) if (desc->status_use_accessors & f) printk("%14s set\n", #f) 8 - #define PS(f) if (desc->istate & f) printk("%14s set\n", #f) 7 + #define ___P(f) if (desc->status_use_accessors & f) printk("%14s set\n", #f) 8 + #define ___PS(f) if (desc->istate & f) printk("%14s set\n", #f) 9 9 /* FIXME */ 10 - #define PD(f) do { } while (0) 10 + #define ___PD(f) do { } while (0) 11 11 12 12 static inline void print_irq_desc(unsigned int irq, struct irq_desc *desc) 13 13 { ··· 23 23 print_symbol("%s\n", (unsigned long)desc->action->handler); 24 24 } 25 25 26 - P(IRQ_LEVEL); 27 - P(IRQ_PER_CPU); 28 - P(IRQ_NOPROBE); 29 - P(IRQ_NOREQUEST); 30 - P(IRQ_NOTHREAD); 31 - P(IRQ_NOAUTOEN); 26 + ___P(IRQ_LEVEL); 27 + ___P(IRQ_PER_CPU); 28 + ___P(IRQ_NOPROBE); 29 + ___P(IRQ_NOREQUEST); 30 + ___P(IRQ_NOTHREAD); 31 + ___P(IRQ_NOAUTOEN); 32 32 33 - PS(IRQS_AUTODETECT); 34 - PS(IRQS_REPLAY); 35 - PS(IRQS_WAITING); 36 - PS(IRQS_PENDING); 33 + ___PS(IRQS_AUTODETECT); 34 + ___PS(IRQS_REPLAY); 35 + ___PS(IRQS_WAITING); 36 + ___PS(IRQS_PENDING); 37 37 38 - PD(IRQS_INPROGRESS); 39 - PD(IRQS_DISABLED); 40 - PD(IRQS_MASKED); 38 + ___PD(IRQS_INPROGRESS); 39 + ___PD(IRQS_DISABLED); 40 + ___PD(IRQS_MASKED); 41 41 } 42 42 43 - #undef P 44 - #undef PS 45 - #undef PD 43 + #undef ___P 44 + #undef ___PS 45 + #undef ___PD
+22 -6
kernel/power/swap.c
··· 51 51 52 52 #define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1) 53 53 54 + /* 55 + * Number of free pages that are not high. 56 + */ 57 + static inline unsigned long low_free_pages(void) 58 + { 59 + return nr_free_pages() - nr_free_highpages(); 60 + } 61 + 62 + /* 63 + * Number of pages required to be kept free while writing the image. Always 64 + * half of all available low pages before the writing starts. 65 + */ 66 + static inline unsigned long reqd_free_pages(void) 67 + { 68 + return low_free_pages() / 2; 69 + } 70 + 54 71 struct swap_map_page { 55 72 sector_t entries[MAP_PAGE_ENTRIES]; 56 73 sector_t next_swap; ··· 89 72 sector_t cur_swap; 90 73 sector_t first_sector; 91 74 unsigned int k; 92 - unsigned long nr_free_pages, written; 75 + unsigned long reqd_free_pages; 93 76 u32 crc32; 94 77 }; 95 78 ··· 333 316 goto err_rel; 334 317 } 335 318 handle->k = 0; 336 - handle->nr_free_pages = nr_free_pages() >> 1; 337 - handle->written = 0; 319 + handle->reqd_free_pages = reqd_free_pages(); 338 320 handle->first_sector = handle->cur_swap; 339 321 return 0; 340 322 err_rel: ··· 368 352 handle->cur_swap = offset; 369 353 handle->k = 0; 370 354 } 371 - if (bio_chain && ++handle->written > handle->nr_free_pages) { 355 + if (bio_chain && low_free_pages() <= handle->reqd_free_pages) { 372 356 error = hib_wait_on_bio_chain(bio_chain); 373 357 if (error) 374 358 goto out; 375 - handle->written = 0; 359 + handle->reqd_free_pages = reqd_free_pages(); 376 360 } 377 361 out: 378 362 return error; ··· 634 618 * Adjust number of free pages after all allocations have been done. 635 619 * We don't want to run out of pages when writing. 636 620 */ 637 - handle->nr_free_pages = nr_free_pages() >> 1; 621 + handle->reqd_free_pages = reqd_free_pages(); 638 622 639 623 /* 640 624 * Start the CRC32 thread.
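The new helpers above change the hibernation writer's throttling: instead of counting pages written, it keeps at least half of the low (non-highmem) free pages in reserve, re-sampling the budget after each wait on the bio chain. The arithmetic, as a standalone sketch with the page counts passed in (the `toy_` names are invented; the kernel reads them from `nr_free_pages()` and `nr_free_highpages()`):

```c
/* Free pages usable for the image, i.e. not in high memory. */
unsigned long toy_low_free_pages(unsigned long free, unsigned long high_free)
{
    return free - high_free;
}

/* Budget to keep free while writing: half of the low free pages. */
unsigned long toy_reqd_free_pages(unsigned long free, unsigned long high_free)
{
    return toy_low_free_pages(free, high_free) / 2;
}
```

The hunk's key behavioral change is the trigger condition: waiting starts when `low_free_pages() <= handle->reqd_free_pages`, i.e. when actual free memory dips to the reserve, rather than after a fixed number of writes.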
-1
kernel/rcutree.c
··· 1820 1820 * a quiescent state betweentimes. 1821 1821 */ 1822 1822 local_irq_save(flags); 1823 - WARN_ON_ONCE(cpu_is_offline(smp_processor_id())); 1824 1823 rdp = this_cpu_ptr(rsp->rda); 1825 1824 1826 1825 /* Add the callback to our list. */
+16 -6
kernel/sched/core.c
··· 6405 6405 struct sd_data *sdd = &tl->data; 6406 6406 6407 6407 for_each_cpu(j, cpu_map) { 6408 - struct sched_domain *sd = *per_cpu_ptr(sdd->sd, j); 6409 - if (sd && (sd->flags & SD_OVERLAP)) 6410 - free_sched_groups(sd->groups, 0); 6411 - kfree(*per_cpu_ptr(sdd->sd, j)); 6412 - kfree(*per_cpu_ptr(sdd->sg, j)); 6413 - kfree(*per_cpu_ptr(sdd->sgp, j)); 6408 + struct sched_domain *sd; 6409 + 6410 + if (sdd->sd) { 6411 + sd = *per_cpu_ptr(sdd->sd, j); 6412 + if (sd && (sd->flags & SD_OVERLAP)) 6413 + free_sched_groups(sd->groups, 0); 6414 + kfree(*per_cpu_ptr(sdd->sd, j)); 6415 + } 6416 + 6417 + if (sdd->sg) 6418 + kfree(*per_cpu_ptr(sdd->sg, j)); 6419 + if (sdd->sgp) 6420 + kfree(*per_cpu_ptr(sdd->sgp, j)); 6414 6421 } 6415 6422 free_percpu(sdd->sd); 6423 + sdd->sd = NULL; 6416 6424 free_percpu(sdd->sg); 6425 + sdd->sg = NULL; 6417 6426 free_percpu(sdd->sgp); 6427 + sdd->sgp = NULL; 6418 6428 } 6419 6429 } 6420 6430
+10 -8
kernel/sched/fair.c
··· 784 784 update_load_add(&rq_of(cfs_rq)->load, se->load.weight); 785 785 #ifdef CONFIG_SMP 786 786 if (entity_is_task(se)) 787 - list_add_tail(&se->group_node, &rq_of(cfs_rq)->cfs_tasks); 787 + list_add(&se->group_node, &rq_of(cfs_rq)->cfs_tasks); 788 788 #endif 789 789 cfs_rq->nr_running++; 790 790 } ··· 3215 3215 3216 3216 static unsigned long task_h_load(struct task_struct *p); 3217 3217 3218 + static const unsigned int sched_nr_migrate_break = 32; 3219 + 3218 3220 /* 3219 3221 * move_tasks tries to move up to load_move weighted load from busiest to 3220 3222 * this_rq, as part of a balancing operation within domain "sd". ··· 3244 3242 3245 3243 /* take a breather every nr_migrate tasks */ 3246 3244 if (env->loop > env->loop_break) { 3247 - env->loop_break += sysctl_sched_nr_migrate; 3245 + env->loop_break += sched_nr_migrate_break; 3248 3246 env->flags |= LBF_NEED_BREAK; 3249 3247 break; 3250 3248 } ··· 3254 3252 3255 3253 load = task_h_load(p); 3256 3254 3257 - if (load < 16 && !env->sd->nr_balance_failed) 3255 + if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed) 3258 3256 goto next; 3259 3257 3260 3258 if ((load / 2) > env->load_move) ··· 4409 4407 .dst_cpu = this_cpu, 4410 4408 .dst_rq = this_rq, 4411 4409 .idle = idle, 4412 - .loop_break = sysctl_sched_nr_migrate, 4410 + .loop_break = sched_nr_migrate_break, 4413 4411 }; 4414 4412 4415 4413 cpumask_copy(cpus, cpu_active_mask); ··· 4447 4445 * correctly treated as an imbalance. 4448 4446 */ 4449 4447 env.flags |= LBF_ALL_PINNED; 4450 - env.load_move = imbalance; 4451 - env.src_cpu = busiest->cpu; 4452 - env.src_rq = busiest; 4453 - env.loop_max = busiest->nr_running; 4448 + env.load_move = imbalance; 4449 + env.src_cpu = busiest->cpu; 4450 + env.src_rq = busiest; 4451 + env.loop_max = min_t(unsigned long, sysctl_sched_nr_migrate, busiest->nr_running); 4454 4452 4455 4453 more_balance: 4456 4454 local_irq_save(flags);
+1
kernel/sched/features.h
··· 68 68 69 69 SCHED_FEAT(FORCE_SD_OVERLAP, false) 70 70 SCHED_FEAT(RT_RUNTIME_SHARE, true) 71 + SCHED_FEAT(LB_MIN, false)
+6 -7
kernel/time/tick-broadcast.c
··· 346 346 tick_get_broadcast_mask()); 347 347 break; 348 348 case TICKDEV_MODE_ONESHOT: 349 - broadcast = tick_resume_broadcast_oneshot(bc); 349 + if (!cpumask_empty(tick_get_broadcast_mask())) 350 + broadcast = tick_resume_broadcast_oneshot(bc); 350 351 break; 351 352 } 352 353 } ··· 373 372 static int tick_broadcast_set_event(ktime_t expires, int force) 374 373 { 375 374 struct clock_event_device *bc = tick_broadcast_device.evtdev; 375 + 376 + if (bc->mode != CLOCK_EVT_MODE_ONESHOT) 377 + clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 376 378 377 379 return clockevents_program_event(bc, expires, force); 378 380 } ··· 535 531 int was_periodic = bc->mode == CLOCK_EVT_MODE_PERIODIC; 536 532 537 533 bc->event_handler = tick_handle_oneshot_broadcast; 538 - clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 539 534 540 535 /* Take the do_timer update */ 541 536 tick_do_timer_cpu = cpu; ··· 552 549 to_cpumask(tmpmask)); 553 550 554 551 if (was_periodic && !cpumask_empty(to_cpumask(tmpmask))) { 552 + clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 555 553 tick_broadcast_init_next_event(to_cpumask(tmpmask), 556 554 tick_next_period); 557 555 tick_broadcast_set_event(tick_next_period, 1); ··· 581 577 raw_spin_lock_irqsave(&tick_broadcast_lock, flags); 582 578 583 579 tick_broadcast_device.mode = TICKDEV_MODE_ONESHOT; 584 - 585 - if (cpumask_empty(tick_get_broadcast_mask())) 586 - goto end; 587 - 588 580 bc = tick_broadcast_device.evtdev; 589 581 if (bc) 590 582 tick_broadcast_setup_oneshot(bc); 591 583 592 - end: 593 584 raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags); 594 585 } 595 586
+5 -3
kernel/trace/trace.c
··· 4629 4629 rb_simple_read(struct file *filp, char __user *ubuf, 4630 4630 size_t cnt, loff_t *ppos) 4631 4631 { 4632 - struct ring_buffer *buffer = filp->private_data; 4632 + struct trace_array *tr = filp->private_data; 4633 + struct ring_buffer *buffer = tr->buffer; 4633 4634 char buf[64]; 4634 4635 int r; 4635 4636 ··· 4648 4647 rb_simple_write(struct file *filp, const char __user *ubuf, 4649 4648 size_t cnt, loff_t *ppos) 4650 4649 { 4651 - struct ring_buffer *buffer = filp->private_data; 4650 + struct trace_array *tr = filp->private_data; 4651 + struct ring_buffer *buffer = tr->buffer; 4652 4652 unsigned long val; 4653 4653 int ret; 4654 4654 ··· 4736 4734 &trace_clock_fops); 4737 4735 4738 4736 trace_create_file("tracing_on", 0644, d_tracer, 4739 - global_trace.buffer, &rb_simple_fops); 4737 + &global_trace, &rb_simple_fops); 4740 4738 4741 4739 #ifdef CONFIG_DYNAMIC_FTRACE 4742 4740 trace_create_file("dyn_ftrace_total_info", 0444, d_tracer,
+2 -2
kernel/trace/trace.h
··· 836 836 filter) 837 837 #include "trace_entries.h" 838 838 839 - #ifdef CONFIG_FUNCTION_TRACER 839 + #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_FUNCTION_TRACER) 840 840 int perf_ftrace_event_register(struct ftrace_event_call *call, 841 841 enum trace_reg type, void *data); 842 842 #else 843 843 #define perf_ftrace_event_register NULL 844 - #endif /* CONFIG_FUNCTION_TRACER */ 844 + #endif 845 845 846 846 #endif /* _LINUX_KERNEL_TRACE_H */
+5
kernel/trace/trace_output.c
··· 652 652 { 653 653 u64 next_ts; 654 654 int ret; 655 + /* trace_find_next_entry will reset ent_size */ 656 + int ent_size = iter->ent_size; 655 657 struct trace_seq *s = &iter->seq; 656 658 struct trace_entry *entry = iter->ent, 657 659 *next_entry = trace_find_next_entry(iter, NULL, ··· 661 659 unsigned long verbose = (trace_flags & TRACE_ITER_VERBOSE); 662 660 unsigned long abs_usecs = ns2usecs(iter->ts - iter->tr->time_start); 663 661 unsigned long rel_usecs; 662 + 663 + /* Restore the original ent_size */ 664 + iter->ent_size = ent_size; 664 665 665 666 if (!next_entry) 666 667 next_ts = iter->ts;
+1 -1
mm/hugetlb.c
··· 532 532 struct vm_area_struct *vma, 533 533 unsigned long address, int avoid_reserve) 534 534 { 535 - struct page *page; 535 + struct page *page = NULL; 536 536 struct mempolicy *mpol; 537 537 nodemask_t *nodemask; 538 538 struct zonelist *zonelist;
+5 -12
mm/memcontrol.c
··· 2476 2476 static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg, 2477 2477 struct page *page, 2478 2478 unsigned int nr_pages, 2479 - struct page_cgroup *pc, 2480 2479 enum charge_type ctype, 2481 2480 bool lrucare) 2482 2481 { 2482 + struct page_cgroup *pc = lookup_page_cgroup(page); 2483 2483 struct zone *uninitialized_var(zone); 2484 2484 bool was_on_lru = false; 2485 2485 bool anon; ··· 2716 2716 { 2717 2717 struct mem_cgroup *memcg = NULL; 2718 2718 unsigned int nr_pages = 1; 2719 - struct page_cgroup *pc; 2720 2719 bool oom = true; 2721 2720 int ret; 2722 2721 ··· 2729 2730 oom = false; 2730 2731 } 2731 2732 2732 - pc = lookup_page_cgroup(page); 2733 2733 ret = __mem_cgroup_try_charge(mm, gfp_mask, nr_pages, &memcg, oom); 2734 2734 if (ret == -ENOMEM) 2735 2735 return ret; 2736 - __mem_cgroup_commit_charge(memcg, page, nr_pages, pc, ctype, false); 2736 + __mem_cgroup_commit_charge(memcg, page, nr_pages, ctype, false); 2737 2737 return 0; 2738 2738 } 2739 2739 ··· 2829 2831 __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg, 2830 2832 enum charge_type ctype) 2831 2833 { 2832 - struct page_cgroup *pc; 2833 - 2834 2834 if (mem_cgroup_disabled()) 2835 2835 return; 2836 2836 if (!memcg) 2837 2837 return; 2838 2838 cgroup_exclude_rmdir(&memcg->css); 2839 2839 2840 - pc = lookup_page_cgroup(page); 2841 - __mem_cgroup_commit_charge(memcg, page, 1, pc, ctype, true); 2840 + __mem_cgroup_commit_charge(memcg, page, 1, ctype, true); 2842 2841 /* 2843 2842 * Now swap is on-memory. This means this page may be 2844 2843 * counted both as mem and swap....double count. ··· 3293 3298 * page. In the case new page is migrated but not remapped, new page's 3294 3299 * mapcount will be finally 0 and we call uncharge in end_migration(). 
3295 3300 */ 3296 - pc = lookup_page_cgroup(newpage); 3297 3301 if (PageAnon(page)) 3298 3302 ctype = MEM_CGROUP_CHARGE_TYPE_MAPPED; 3299 3303 else if (page_is_file_cache(page)) 3300 3304 ctype = MEM_CGROUP_CHARGE_TYPE_CACHE; 3301 3305 else 3302 3306 ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM; 3303 - __mem_cgroup_commit_charge(memcg, newpage, 1, pc, ctype, false); 3307 + __mem_cgroup_commit_charge(memcg, newpage, 1, ctype, false); 3304 3308 return ret; 3305 3309 } 3306 3310 ··· 3386 3392 * the newpage may be on LRU(or pagevec for LRU) already. We lock 3387 3393 * LRU while we overwrite pc->mem_cgroup. 3388 3394 */ 3389 - pc = lookup_page_cgroup(newpage); 3390 - __mem_cgroup_commit_charge(memcg, newpage, 1, pc, type, true); 3395 + __mem_cgroup_commit_charge(memcg, newpage, 1, type, true); 3391 3396 } 3392 3397 3393 3398 #ifdef CONFIG_DEBUG_VM
+7 -4
mm/mempolicy.c
··· 1361 1361 1362 1362 mm = get_task_mm(task); 1363 1363 put_task_struct(task); 1364 - if (mm) 1365 - err = do_migrate_pages(mm, old, new, 1366 - capable(CAP_SYS_NICE) ? MPOL_MF_MOVE_ALL : MPOL_MF_MOVE); 1367 - else 1364 + 1365 + if (!mm) { 1368 1366 err = -EINVAL; 1367 + goto out; 1368 + } 1369 + 1370 + err = do_migrate_pages(mm, old, new, 1371 + capable(CAP_SYS_NICE) ? MPOL_MF_MOVE_ALL : MPOL_MF_MOVE); 1369 1372 1370 1373 mmput(mm); 1371 1374 out:
+8 -8
mm/migrate.c
··· 1388 1388 mm = get_task_mm(task); 1389 1389 put_task_struct(task); 1390 1390 1391 - if (mm) { 1392 - if (nodes) 1393 - err = do_pages_move(mm, task_nodes, nr_pages, pages, 1394 - nodes, status, flags); 1395 - else 1396 - err = do_pages_stat(mm, nr_pages, pages, status); 1397 - } else 1398 - err = -EINVAL; 1391 + if (!mm) 1392 + return -EINVAL; 1393 + 1394 + if (nodes) 1395 + err = do_pages_move(mm, task_nodes, nr_pages, pages, 1396 + nodes, status, flags); 1397 + else 1398 + err = do_pages_stat(mm, nr_pages, pages, status); 1399 1399 1400 1400 mmput(mm); 1401 1401 return err;
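The mempolicy.c and migrate.c hunks above apply the same restructuring: instead of nesting the whole success path inside `if (mm) { ... } else err = -EINVAL;`, the error case bails out first. A minimal userspace sketch of the pattern (function names are hypothetical stand-ins, not kernel APIs):

```c
#include <assert.h>

/* Hypothetical stand-in for get_task_mm(): nonzero id yields a handle. */
static int lookup_handle(int id)
{
    return id > 0 ? id : 0;
}

/* Before: the happy path was indented inside if (handle) { ... }.
 * After: fail fast, keep the real work at the top indentation level. */
static int do_work(int id)
{
    int handle = lookup_handle(id);

    if (!handle)
        return -22;      /* -EINVAL */

    return handle * 2;   /* main work, unindented */
}
```

The behavior is unchanged; the rewrite only flattens control flow so later patches (like the ones above that add more work between lookup and use) stay readable.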
+8 -2
mm/nobootmem.c
··· 298 298 if (WARN_ON_ONCE(slab_is_available())) 299 299 return kzalloc_node(size, GFP_NOWAIT, pgdat->node_id); 300 300 301 + again: 301 302 ptr = __alloc_memory_core_early(pgdat->node_id, size, align, 302 303 goal, -1ULL); 303 304 if (ptr) 304 305 return ptr; 305 306 306 - return __alloc_memory_core_early(MAX_NUMNODES, size, align, 307 - goal, -1ULL); 307 + ptr = __alloc_memory_core_early(MAX_NUMNODES, size, align, 308 + goal, -1ULL); 309 + if (!ptr && goal) { 310 + goal = 0; 311 + goto again; 312 + } 313 + return ptr; 308 314 } 309 315 310 316 void * __init __alloc_bootmem_node_high(pg_data_t *pgdat, unsigned long size,
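The nobootmem.c change above adds a fallback: if the node-local and any-node attempts both fail while a placement `goal` is set, retry once with the goal dropped to 0. A toy sketch of that retry shape, with a fake backing allocator (all names hypothetical):

```c
#include <assert.h>

/* Toy backing allocator: succeeds only when no placement goal is given. */
static long try_alloc_at(unsigned long goal)
{
    return goal ? 0 : 0x1000;
}

/* Mirrors the retry structure of the patched allocator: on failure with a
 * goal set, relax the goal to 0 and loop back once before giving up. */
static long alloc_with_goal_fallback(unsigned long goal)
{
    long ptr;

again:
    ptr = try_alloc_at(goal);
    if (!ptr && goal) {
        goal = 0;
        goto again;
    }
    return ptr;
}
```

Because `goal` is cleared before the `goto`, the loop can run at most twice, so there is no risk of spinning.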
+1 -1
mm/swap_state.c
··· 26 26 */ 27 27 static const struct address_space_operations swap_aops = { 28 28 .writepage = swap_writepage, 29 - .set_page_dirty = __set_page_dirty_nobuffers, 29 + .set_page_dirty = __set_page_dirty_no_writeback, 30 30 .migratepage = migrate_page, 31 31 }; 32 32
+8 -3
mm/vmscan.c
··· 1568 1568 reclaim_stat->recent_scanned[0] += nr_anon; 1569 1569 reclaim_stat->recent_scanned[1] += nr_file; 1570 1570 1571 - if (current_is_kswapd()) 1572 - __count_vm_events(KSWAPD_STEAL, nr_reclaimed); 1573 - __count_zone_vm_events(PGSTEAL, zone, nr_reclaimed); 1571 + if (global_reclaim(sc)) { 1572 + if (current_is_kswapd()) 1573 + __count_zone_vm_events(PGSTEAL_KSWAPD, zone, 1574 + nr_reclaimed); 1575 + else 1576 + __count_zone_vm_events(PGSTEAL_DIRECT, zone, 1577 + nr_reclaimed); 1578 + } 1574 1579 1575 1580 putback_inactive_pages(mz, &page_list); 1576 1581
+2 -2
mm/vmstat.c
··· 738 738 "pgmajfault", 739 739 740 740 TEXTS_FOR_ZONES("pgrefill") 741 - TEXTS_FOR_ZONES("pgsteal") 741 + TEXTS_FOR_ZONES("pgsteal_kswapd") 742 + TEXTS_FOR_ZONES("pgsteal_direct") 742 743 TEXTS_FOR_ZONES("pgscan_kswapd") 743 744 TEXTS_FOR_ZONES("pgscan_direct") 744 745 ··· 748 747 #endif 749 748 "pginodesteal", 750 749 "slabs_scanned", 751 - "kswapd_steal", 752 750 "kswapd_inodesteal", 753 751 "kswapd_low_wmark_hit_quickly", 754 752 "kswapd_high_wmark_hit_quickly",
+5 -4
net/ax25/af_ax25.c
··· 2011 2011 proc_net_remove(&init_net, "ax25_route"); 2012 2012 proc_net_remove(&init_net, "ax25"); 2013 2013 proc_net_remove(&init_net, "ax25_calls"); 2014 - ax25_rt_free(); 2015 - ax25_uid_free(); 2016 - ax25_dev_free(); 2017 2014 2018 - ax25_unregister_sysctl(); 2019 2015 unregister_netdevice_notifier(&ax25_dev_notifier); 2016 + ax25_unregister_sysctl(); 2020 2017 2021 2018 dev_remove_pack(&ax25_packet_type); 2022 2019 2023 2020 sock_unregister(PF_AX25); 2024 2021 proto_unregister(&ax25_proto); 2022 + 2023 + ax25_rt_free(); 2024 + ax25_uid_free(); 2025 + ax25_dev_free(); 2025 2026 } 2026 2027 module_exit(ax25_exit);
+6 -3
net/caif/chnl_net.c
··· 103 103 skb->protocol = htons(ETH_P_IPV6); 104 104 break; 105 105 default: 106 + kfree_skb(skb); 106 107 priv->netdev->stats.rx_errors++; 107 108 return -EINVAL; 108 109 } ··· 221 220 222 221 if (skb->len > priv->netdev->mtu) { 223 222 pr_warn("Size of skb exceeded MTU\n"); 223 + kfree_skb(skb); 224 224 dev->stats.tx_errors++; 225 - return -ENOSPC; 225 + return NETDEV_TX_OK; 226 226 } 227 227 228 228 if (!priv->flowenabled) { 229 229 pr_debug("dropping packets flow off\n"); 230 + kfree_skb(skb); 230 231 dev->stats.tx_dropped++; 231 - return NETDEV_TX_BUSY; 232 + return NETDEV_TX_OK; 232 233 } 233 234 234 235 if (priv->conn_req.protocol == CAIFPROTO_DATAGRAM_LOOP) ··· 245 242 result = priv->chnl.dn->transmit(priv->chnl.dn, pkt); 246 243 if (result) { 247 244 dev->stats.tx_dropped++; 248 - return result; 245 + return NETDEV_TX_OK; 249 246 } 250 247 251 248 /* Update statistics. */
+20
net/core/dev.c
··· 1409 1409 * register_netdevice_notifier(). The notifier is unlinked into the 1410 1410 * kernel structures and may then be reused. A negative errno code 1411 1411 * is returned on a failure. 1412 + * 1413 + * After unregistering unregister and down device events are synthesized 1414 + * for all devices on the device list to the removed notifier to remove 1415 + * the need for special case cleanup code. 1412 1416 */ 1413 1417 1414 1418 int unregister_netdevice_notifier(struct notifier_block *nb) 1415 1419 { 1420 + struct net_device *dev; 1421 + struct net *net; 1416 1422 int err; 1417 1423 1418 1424 rtnl_lock(); 1419 1425 err = raw_notifier_chain_unregister(&netdev_chain, nb); 1426 + if (err) 1427 + goto unlock; 1428 + 1429 + for_each_net(net) { 1430 + for_each_netdev(net, dev) { 1431 + if (dev->flags & IFF_UP) { 1432 + nb->notifier_call(nb, NETDEV_GOING_DOWN, dev); 1433 + nb->notifier_call(nb, NETDEV_DOWN, dev); 1434 + } 1435 + nb->notifier_call(nb, NETDEV_UNREGISTER, dev); 1436 + nb->notifier_call(nb, NETDEV_UNREGISTER_BATCH, dev); 1437 + } 1438 + } 1439 + unlock: 1420 1440 rtnl_unlock(); 1421 1441 return err; 1422 1442 }
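The net/core/dev.c hunk makes `unregister_netdevice_notifier()` replay DOWN and UNREGISTER events to the departing notifier for every existing device, so notifier users no longer need bespoke cleanup loops. A heavily simplified userspace sketch of that replay idea (event names and the device representation are illustrative only):

```c
#include <assert.h>

enum netdev_event { EV_GOING_DOWN, EV_DOWN, EV_UNREGISTER };

static int downs, unregs;

static int my_notifier(int event)
{
    if (event == EV_DOWN)
        downs++;
    else if (event == EV_UNREGISTER)
        unregs++;
    return 0;
}

/* On unregister, synthesize GOING_DOWN/DOWN for each running device and
 * UNREGISTER for all devices, letting the notifier reuse its normal
 * teardown paths instead of special-case cleanup code. */
static void unregister_notifier(int (*nb)(int), const int *dev_is_up, int ndev)
{
    for (int i = 0; i < ndev; i++) {
        if (dev_is_up[i]) {
            nb(EV_GOING_DOWN);
            nb(EV_DOWN);
        }
        nb(EV_UNREGISTER);
    }
}
```

The real code additionally holds the RTNL lock and walks every network namespace; this sketch keeps only the replay logic.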
+1
net/core/drop_monitor.c
··· 150 150 for (i = 0; i < msg->entries; i++) { 151 151 if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) { 152 152 msg->points[i].count++; 153 + atomic_inc(&data->dm_hit_count); 153 154 goto out; 154 155 } 155 156 }
+18 -15
net/core/net_namespace.c
··· 83 83 84 84 static int ops_init(const struct pernet_operations *ops, struct net *net) 85 85 { 86 - int err; 86 + int err = -ENOMEM; 87 + void *data = NULL; 88 + 87 89 if (ops->id && ops->size) { 88 - void *data = kzalloc(ops->size, GFP_KERNEL); 90 + data = kzalloc(ops->size, GFP_KERNEL); 89 91 if (!data) 90 - return -ENOMEM; 92 + goto out; 91 93 92 94 err = net_assign_generic(net, *ops->id, data); 93 - if (err) { 94 - kfree(data); 95 - return err; 96 - } 95 + if (err) 96 + goto cleanup; 97 97 } 98 + err = 0; 98 99 if (ops->init) 99 - return ops->init(net); 100 - return 0; 100 + err = ops->init(net); 101 + if (!err) 102 + return 0; 103 + 104 + cleanup: 105 + kfree(data); 106 + 107 + out: 108 + return err; 101 109 } 102 110 103 111 static void ops_free(const struct pernet_operations *ops, struct net *net) ··· 456 448 static int __register_pernet_operations(struct list_head *list, 457 449 struct pernet_operations *ops) 458 450 { 459 - int err = 0; 460 - err = ops_init(ops, &init_net); 461 - if (err) 462 - ops_free(ops, &init_net); 463 - return err; 464 - 451 + return ops_init(ops, &init_net); 465 452 } 466 453 467 454 static void __unregister_pernet_operations(struct pernet_operations *ops)
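The reworked `ops_init()` above consolidates its failure exits into a single goto-based unwind, the usual kernel idiom for functions that acquire resources in stages. A self-contained sketch of that shape (stand-in names, not the kernel functions):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for net_assign_generic(): fails on a NULL pointer. */
static int assign_generic(void *data)
{
    return data ? 0 : -1;
}

/* Every failure funnels through one unwind point instead of freeing
 * `data` separately at each early return. */
static int ops_init_sketch(size_t size, int init_err)
{
    int err = -12;            /* -ENOMEM */
    void *data = NULL;

    if (size) {
        data = calloc(1, size);
        if (!data)
            goto out;
        err = assign_generic(data);
        if (err)
            goto cleanup;
    }
    err = init_err;           /* stands in for ops->init(net) */
    if (!err)
        return 0;             /* success: data ownership handed off */

cleanup:
    free(data);
out:
    return err;
}
```

Initializing `data` to NULL up front is what makes the shared `cleanup:` label safe: `free(NULL)` is a no-op, so the path taken before allocation can still fall through it.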
+1
net/ipv4/tcp_input.c
··· 335 335 incr = __tcp_grow_window(sk, skb); 336 336 337 337 if (incr) { 338 + incr = max_t(int, incr, 2 * skb->len); 338 339 tp->rcv_ssthresh = min(tp->rcv_ssthresh + incr, 339 340 tp->window_clamp); 340 341 inet_csk(sk)->icsk_ack.quick |= 1;
+1
net/ipv4/tcp_output.c
··· 1096 1096 eat = min_t(int, len, skb_headlen(skb)); 1097 1097 if (eat) { 1098 1098 __skb_pull(skb, eat); 1099 + skb->avail_size -= eat; 1099 1100 len -= eat; 1100 1101 if (!len) 1101 1102 return;
+3 -6
net/ipv6/addrconf.c
··· 803 803 ip6_del_rt(rt); 804 804 rt = NULL; 805 805 } else if (!(rt->rt6i_flags & RTF_EXPIRES)) { 806 - rt->dst.expires = expires; 807 - rt->rt6i_flags |= RTF_EXPIRES; 806 + rt6_set_expires(rt, expires); 808 807 } 809 808 } 810 809 dst_release(&rt->dst); ··· 1886 1887 rt = NULL; 1887 1888 } else if (addrconf_finite_timeout(rt_expires)) { 1888 1889 /* not infinity */ 1889 - rt->dst.expires = jiffies + rt_expires; 1890 - rt->rt6i_flags |= RTF_EXPIRES; 1890 + rt6_set_expires(rt, jiffies + rt_expires); 1891 1891 } else { 1892 - rt->rt6i_flags &= ~RTF_EXPIRES; 1893 - rt->dst.expires = 0; 1892 + rt6_clean_expires(rt); 1894 1893 } 1895 1894 } else if (valid_lft) { 1896 1895 clock_t expires = 0;
+4 -5
net/ipv6/ip6_fib.c
··· 673 673 &rt->rt6i_gateway)) { 674 674 if (!(iter->rt6i_flags & RTF_EXPIRES)) 675 675 return -EEXIST; 676 - iter->dst.expires = rt->dst.expires; 677 - if (!(rt->rt6i_flags & RTF_EXPIRES)) { 678 - iter->rt6i_flags &= ~RTF_EXPIRES; 679 - iter->dst.expires = 0; 680 - } 676 + if (!(rt->rt6i_flags & RTF_EXPIRES)) 677 + rt6_clean_expires(iter); 678 + else 679 + rt6_set_expires(iter, rt->dst.expires); 681 680 return -EEXIST; 682 681 } 683 682 }
+1 -2
net/ipv6/ndisc.c
··· 1264 1264 } 1265 1265 1266 1266 if (rt) 1267 - rt->dst.expires = jiffies + (HZ * lifetime); 1268 - 1267 + rt6_set_expires(rt, jiffies + (HZ * lifetime)); 1269 1268 if (ra_msg->icmph.icmp6_hop_limit) { 1270 1269 in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1271 1270 if (rt)
+44 -27
net/ipv6/route.c
··· 62 62 #include <linux/sysctl.h> 63 63 #endif 64 64 65 - static struct rt6_info *ip6_rt_copy(const struct rt6_info *ort, 65 + static struct rt6_info *ip6_rt_copy(struct rt6_info *ort, 66 66 const struct in6_addr *dest); 67 67 static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie); 68 68 static unsigned int ip6_default_advmss(const struct dst_entry *dst); ··· 285 285 rt->rt6i_idev = NULL; 286 286 in6_dev_put(idev); 287 287 } 288 + 289 + if (!(rt->rt6i_flags & RTF_EXPIRES) && dst->from) 290 + dst_release(dst->from); 291 + 288 292 if (peer) { 289 293 rt->rt6i_peer = NULL; 290 294 inet_putpeer(peer); ··· 333 329 334 330 static __inline__ int rt6_check_expired(const struct rt6_info *rt) 335 331 { 336 - return (rt->rt6i_flags & RTF_EXPIRES) && 337 - time_after(jiffies, rt->dst.expires); 332 + struct rt6_info *ort = NULL; 333 + 334 + if (rt->rt6i_flags & RTF_EXPIRES) { 335 + if (time_after(jiffies, rt->dst.expires)) 336 + return 1; 337 + } else if (rt->dst.from) { 338 + ort = (struct rt6_info *) rt->dst.from; 339 + return (ort->rt6i_flags & RTF_EXPIRES) && 340 + time_after(jiffies, ort->dst.expires); 341 + } 342 + return 0; 338 343 } 339 344 340 345 static inline int rt6_need_strict(const struct in6_addr *daddr) ··· 633 620 (rt->rt6i_flags & ~RTF_PREF_MASK) | RTF_PREF(pref); 634 621 635 622 if (rt) { 636 - if (!addrconf_finite_timeout(lifetime)) { 637 - rt->rt6i_flags &= ~RTF_EXPIRES; 638 - } else { 639 - rt->dst.expires = jiffies + HZ * lifetime; 640 - rt->rt6i_flags |= RTF_EXPIRES; 641 - } 623 + if (!addrconf_finite_timeout(lifetime)) 624 + rt6_clean_expires(rt); 625 + else 626 + rt6_set_expires(rt, jiffies + HZ * lifetime); 627 + 642 628 dst_release(&rt->dst); 643 629 } 644 630 return 0; ··· 742 730 return __ip6_ins_rt(rt, &info); 743 731 } 744 732 745 - static struct rt6_info *rt6_alloc_cow(const struct rt6_info *ort, 733 + static struct rt6_info *rt6_alloc_cow(struct rt6_info *ort, 746 734 const struct in6_addr *daddr, 747 735 const struct 
in6_addr *saddr) 748 736 { ··· 966 954 rt->rt6i_idev = ort->rt6i_idev; 967 955 if (rt->rt6i_idev) 968 956 in6_dev_hold(rt->rt6i_idev); 969 - rt->dst.expires = 0; 970 957 971 958 rt->rt6i_gateway = ort->rt6i_gateway; 972 - rt->rt6i_flags = ort->rt6i_flags & ~RTF_EXPIRES; 959 + rt->rt6i_flags = ort->rt6i_flags; 960 + rt6_clean_expires(rt); 973 961 rt->rt6i_metric = 0; 974 962 975 963 memcpy(&rt->rt6i_dst, &ort->rt6i_dst, sizeof(struct rt6key)); ··· 1031 1019 1032 1020 rt = (struct rt6_info *) skb_dst(skb); 1033 1021 if (rt) { 1034 - if (rt->rt6i_flags & RTF_CACHE) { 1035 - dst_set_expires(&rt->dst, 0); 1036 - rt->rt6i_flags |= RTF_EXPIRES; 1037 - } else if (rt->rt6i_node && (rt->rt6i_flags & RTF_DEFAULT)) 1022 + if (rt->rt6i_flags & RTF_CACHE) 1023 + rt6_update_expires(rt, 0); 1024 + else if (rt->rt6i_node && (rt->rt6i_flags & RTF_DEFAULT)) 1038 1025 rt->rt6i_node->fn_sernum = -1; 1039 1026 } 1040 1027 } ··· 1300 1289 } 1301 1290 1302 1291 rt->dst.obsolete = -1; 1303 - rt->dst.expires = (cfg->fc_flags & RTF_EXPIRES) ? 1304 - jiffies + clock_t_to_jiffies(cfg->fc_expires) : 1305 - 0; 1292 + 1293 + if (cfg->fc_flags & RTF_EXPIRES) 1294 + rt6_set_expires(rt, jiffies + 1295 + clock_t_to_jiffies(cfg->fc_expires)); 1296 + else 1297 + rt6_clean_expires(rt); 1306 1298 1307 1299 if (cfg->fc_protocol == RTPROT_UNSPEC) 1308 1300 cfg->fc_protocol = RTPROT_BOOT; ··· 1750 1736 features |= RTAX_FEATURE_ALLFRAG; 1751 1737 dst_metric_set(&rt->dst, RTAX_FEATURES, features); 1752 1738 } 1753 - dst_set_expires(&rt->dst, net->ipv6.sysctl.ip6_rt_mtu_expires); 1754 - rt->rt6i_flags |= RTF_MODIFIED|RTF_EXPIRES; 1739 + rt6_update_expires(rt, net->ipv6.sysctl.ip6_rt_mtu_expires); 1740 + rt->rt6i_flags |= RTF_MODIFIED; 1755 1741 goto out; 1756 1742 } 1757 1743 ··· 1779 1765 * which is 10 mins. After 10 mins the decreased pmtu is expired 1780 1766 * and detecting PMTU increase will be automatically happened. 
1781 1767 */ 1782 - dst_set_expires(&nrt->dst, net->ipv6.sysctl.ip6_rt_mtu_expires); 1783 - nrt->rt6i_flags |= RTF_DYNAMIC|RTF_EXPIRES; 1784 - 1768 + rt6_update_expires(nrt, net->ipv6.sysctl.ip6_rt_mtu_expires); 1769 + nrt->rt6i_flags |= RTF_DYNAMIC; 1785 1770 ip6_ins_rt(nrt); 1786 1771 } 1787 1772 out: ··· 1812 1799 * Misc support functions 1813 1800 */ 1814 1801 1815 - static struct rt6_info *ip6_rt_copy(const struct rt6_info *ort, 1802 + static struct rt6_info *ip6_rt_copy(struct rt6_info *ort, 1816 1803 const struct in6_addr *dest) 1817 1804 { 1818 1805 struct net *net = dev_net(ort->dst.dev); ··· 1832 1819 if (rt->rt6i_idev) 1833 1820 in6_dev_hold(rt->rt6i_idev); 1834 1821 rt->dst.lastuse = jiffies; 1835 - rt->dst.expires = 0; 1836 1822 1837 1823 rt->rt6i_gateway = ort->rt6i_gateway; 1838 - rt->rt6i_flags = ort->rt6i_flags & ~RTF_EXPIRES; 1824 + rt->rt6i_flags = ort->rt6i_flags; 1825 + if ((ort->rt6i_flags & (RTF_DEFAULT | RTF_ADDRCONF)) == 1826 + (RTF_DEFAULT | RTF_ADDRCONF)) 1827 + rt6_set_from(rt, ort); 1828 + else 1829 + rt6_clean_expires(rt); 1839 1830 rt->rt6i_metric = 0; 1840 1831 1841 1832 #ifdef CONFIG_IPV6_SUBTREES
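The addrconf.c, ip6_fib.c, ndisc.c, and route.c hunks all replace open-coded manipulation of `rt->dst.expires` plus the `RTF_EXPIRES` flag with `rt6_set_expires()`/`rt6_clean_expires()` helpers. The point is that the two fields always change as a pair; a minimal model of that encapsulation (struct and helper names are illustrative, only the flag value is borrowed from the kernel):

```c
#include <assert.h>

#define RTF_EXPIRES 0x00400000u

struct rt_sketch {
    unsigned int flags;
    unsigned long expires;
};

/* The flag and the expiry timestamp are updated together, so callers can
 * no longer leave them inconsistent as the open-coded sites could. */
static void rt_set_expires(struct rt_sketch *rt, unsigned long when)
{
    rt->expires = when;
    rt->flags |= RTF_EXPIRES;
}

static void rt_clean_expires(struct rt_sketch *rt)
{
    rt->flags &= ~RTF_EXPIRES;
    rt->expires = 0;
}
```

Centralizing the pair also gives the route.c patch a single place to hook the new `dst.from`-based expiry inheritance without touching every caller again.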
+4
net/ipv6/tcp_ipv6.c
··· 1383 1383 tcp_mtup_init(newsk); 1384 1384 tcp_sync_mss(newsk, dst_mtu(dst)); 1385 1385 newtp->advmss = dst_metric_advmss(dst); 1386 + if (tcp_sk(sk)->rx_opt.user_mss && 1387 + tcp_sk(sk)->rx_opt.user_mss < newtp->advmss) 1388 + newtp->advmss = tcp_sk(sk)->rx_opt.user_mss; 1389 + 1386 1390 tcp_initialize_rcv_mss(newsk); 1387 1391 if (tcp_rsk(req)->snt_synack) 1388 1392 tcp_valid_rtt_meas(newsk,
+1 -1
net/key/af_key.c
··· 3480 3480 3481 3481 /* Addresses to be used by KM for negotiation, if ext is available */ 3482 3482 if (k != NULL && (set_sadb_kmaddress(skb, k) < 0)) 3483 - return -EINVAL; 3483 + goto err; 3484 3484 3485 3485 /* selector src */ 3486 3486 set_sadb_address(skb, sasize_sel, SADB_EXT_ADDRESS_SRC, sel);
+3 -2
net/l2tp/l2tp_ip.c
··· 232 232 { 233 233 write_lock_bh(&l2tp_ip_lock); 234 234 hlist_del_init(&sk->sk_bind_node); 235 - hlist_del_init(&sk->sk_node); 235 + sk_del_node_init(sk); 236 236 write_unlock_bh(&l2tp_ip_lock); 237 237 sk_common_release(sk); 238 238 } ··· 271 271 chk_addr_ret != RTN_MULTICAST && chk_addr_ret != RTN_BROADCAST) 272 272 goto out; 273 273 274 - inet->inet_rcv_saddr = inet->inet_saddr = addr->l2tp_addr.s_addr; 274 + if (addr->l2tp_addr.s_addr) 275 + inet->inet_rcv_saddr = inet->inet_saddr = addr->l2tp_addr.s_addr; 275 276 if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST) 276 277 inet->inet_saddr = 0; /* Use device */ 277 278 sk_dst_reset(sk);
+2 -2
net/mac80211/ibss.c
··· 457 457 * fall back to HT20 if we don't use or use 458 458 * the other extension channel 459 459 */ 460 - if ((channel_type == NL80211_CHAN_HT40MINUS || 461 - channel_type == NL80211_CHAN_HT40PLUS) && 460 + if (!(channel_type == NL80211_CHAN_HT40MINUS || 461 + channel_type == NL80211_CHAN_HT40PLUS) || 462 462 channel_type != sdata->u.ibss.channel_type) 463 463 sta_ht_cap_new.cap &= 464 464 ~IEEE80211_HT_CAP_SUP_WIDTH_20_40;
+6 -4
net/mac80211/rx.c
··· 103 103 ieee80211_add_rx_radiotap_header(struct ieee80211_local *local, 104 104 struct sk_buff *skb, 105 105 struct ieee80211_rate *rate, 106 - int rtap_len) 106 + int rtap_len, bool has_fcs) 107 107 { 108 108 struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb); 109 109 struct ieee80211_radiotap_header *rthdr; ··· 134 134 } 135 135 136 136 /* IEEE80211_RADIOTAP_FLAGS */ 137 - if (local->hw.flags & IEEE80211_HW_RX_INCLUDES_FCS) 137 + if (has_fcs && (local->hw.flags & IEEE80211_HW_RX_INCLUDES_FCS)) 138 138 *pos |= IEEE80211_RADIOTAP_F_FCS; 139 139 if (status->flag & (RX_FLAG_FAILED_FCS_CRC | RX_FLAG_FAILED_PLCP_CRC)) 140 140 *pos |= IEEE80211_RADIOTAP_F_BADFCS; ··· 294 294 } 295 295 296 296 /* prepend radiotap information */ 297 - ieee80211_add_rx_radiotap_header(local, skb, rate, needed_headroom); 297 + ieee80211_add_rx_radiotap_header(local, skb, rate, needed_headroom, 298 + true); 298 299 299 300 skb_reset_mac_header(skb); 300 301 skb->ip_summed = CHECKSUM_UNNECESSARY; ··· 2572 2571 goto out_free_skb; 2573 2572 2574 2573 /* prepend radiotap information */ 2575 - ieee80211_add_rx_radiotap_header(local, skb, rate, needed_headroom); 2574 + ieee80211_add_rx_radiotap_header(local, skb, rate, needed_headroom, 2575 + false); 2576 2576 2577 2577 skb_set_mac_header(skb, 0); 2578 2578 skb->ip_summed = CHECKSUM_UNNECESSARY;
+2 -19
net/phonet/pn_dev.c
··· 331 331 332 332 static void __net_exit phonet_exit_net(struct net *net) 333 333 { 334 - struct phonet_net *pnn = phonet_pernet(net); 335 - struct net_device *dev; 336 - unsigned i; 337 - 338 - rtnl_lock(); 339 - for_each_netdev(net, dev) 340 - phonet_device_destroy(dev); 341 - 342 - for (i = 0; i < 64; i++) { 343 - dev = pnn->routes.table[i]; 344 - if (dev) { 345 - rtm_phonet_notify(RTM_DELROUTE, dev, i); 346 - dev_put(dev); 347 - } 348 - } 349 - rtnl_unlock(); 350 - 351 334 proc_net_remove(net, "phonet"); 352 335 } 353 336 ··· 344 361 /* Initialize Phonet devices list */ 345 362 int __init phonet_device_init(void) 346 363 { 347 - int err = register_pernet_device(&phonet_net_ops); 364 + int err = register_pernet_subsys(&phonet_net_ops); 348 365 if (err) 349 366 return err; 350 367 ··· 360 377 { 361 378 rtnl_unregister_all(PF_PHONET); 362 379 unregister_netdevice_notifier(&phonet_device_notifier); 363 - unregister_pernet_device(&phonet_net_ops); 380 + unregister_pernet_subsys(&phonet_net_ops); 364 381 proc_net_remove(&init_net, "pnresource"); 365 382 } 366 383
+2 -5
net/sched/sch_gred.c
··· 565 565 opt.packets = q->packetsin; 566 566 opt.bytesin = q->bytesin; 567 567 568 - if (gred_wred_mode(table)) { 569 - q->vars.qidlestart = 570 - table->tab[table->def]->vars.qidlestart; 571 - q->vars.qavg = table->tab[table->def]->vars.qavg; 572 - } 568 + if (gred_wred_mode(table)) 569 + gred_load_wred_set(table, q); 573 570 574 571 opt.qave = red_calc_qavg(&q->parms, &q->vars, q->vars.qavg); 575 572
+9 -8
net/sunrpc/sunrpc_syms.c
··· 75 75 static int __init 76 76 init_sunrpc(void) 77 77 { 78 - int err = register_rpc_pipefs(); 78 + int err = rpc_init_mempool(); 79 79 if (err) 80 80 goto out; 81 - err = rpc_init_mempool(); 82 - if (err) 83 - goto out2; 84 81 err = rpcauth_init_module(); 85 82 if (err) 86 - goto out3; 83 + goto out2; 87 84 88 85 cache_initialize(); 89 86 90 87 err = register_pernet_subsys(&sunrpc_net_ops); 88 + if (err) 89 + goto out3; 90 + 91 + err = register_rpc_pipefs(); 91 92 if (err) 92 93 goto out4; 93 94 #ifdef RPC_DEBUG ··· 99 98 return 0; 100 99 101 100 out4: 102 - rpcauth_remove_module(); 101 + unregister_pernet_subsys(&sunrpc_net_ops); 103 102 out3: 104 - rpc_destroy_mempool(); 103 + rpcauth_remove_module(); 105 104 out2: 106 - unregister_rpc_pipefs(); 105 + rpc_destroy_mempool(); 107 106 out: 108 107 return err; 109 108 }
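The sunrpc hunk above reorders `init_sunrpc()` so that each error label unwinds exactly the steps that already succeeded, in reverse order of setup. That ladder can be sketched in standalone C (step functions are hypothetical):

```c
#include <assert.h>

static int a_up, b_up, c_up;

static int  init_a(void) { a_up = 1; return 0; }
static void exit_a(void) { a_up = 0; }
static int  init_b(void) { b_up = 1; return 0; }
static void exit_b(void) { b_up = 0; }
static int  init_c(int fail) { if (fail) return -1; c_up = 1; return 0; }

/* Each label undoes only what came before it, in reverse order; a failure
 * in step C must never try to undo step C itself. */
static int init_all(int fail_c)
{
    int err;

    err = init_a();
    if (err)
        goto out;
    err = init_b();
    if (err)
        goto out_a;
    err = init_c(fail_c);
    if (err)
        goto out_b;
    return 0;

out_b:
    exit_b();
out_a:
    exit_a();
out:
    return err;
}
```

The bug the patch fixes is exactly a skew in this ladder: the old labels unwound steps that had not been set up yet in the new registration order.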
+1 -1
net/wireless/util.c
··· 989 989 if (rdev->wiphy.software_iftypes & BIT(iftype)) 990 990 continue; 991 991 for (j = 0; j < c->n_limits; j++) { 992 - if (!(limits[j].types & iftype)) 992 + if (!(limits[j].types & BIT(iftype))) 993 993 continue; 994 994 if (limits[j].max < num[iftype]) 995 995 goto cont;
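The one-line net/wireless/util.c fix above is a classic bitmask bug: `iftype` is an index, so membership in the `types` mask must be tested against `BIT(iftype)`, not the raw index. A tiny illustration with `BIT()` defined locally (function names are made up for the demo):

```c
#include <assert.h>

#define BIT(n) (1U << (n))

/* `types` is a bitmask of supported interface types; `iftype` is an index. */
static int iftype_supported(unsigned int types, unsigned int iftype)
{
    return !!(types & BIT(iftype));   /* the fixed test */
}

static int iftype_supported_buggy(unsigned int types, unsigned int iftype)
{
    return !!(types & iftype);        /* the old test: index used as a mask */
}
```

With `types = BIT(1) | BIT(2)` (= 6), the buggy form reports type 3 as supported because `6 & 3 == 2` is nonzero, while the correct form checks bit 3 and returns false.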
+4
scripts/mod/file2alias.c
··· 1100 1100 if (!sym->st_shndx || get_secindex(info, sym) >= info->num_sections) 1101 1101 return; 1102 1102 1103 + /* We're looking for an object */ 1104 + if (ELF_ST_TYPE(sym->st_info) != STT_OBJECT) 1105 + return; 1106 + 1103 1107 /* All our symbols are of form <prefix>__mod_XXX_device_table. */ 1104 1108 name = strstr(symname, "__mod_"); 1105 1109 if (!name)
+1
sound/pci/hda/patch_realtek.c
··· 6109 6109 6110 6110 static const struct snd_pci_quirk alc269_fixup_tbl[] = { 6111 6111 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_MIC2_MUTE_LED), 6112 + SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_DMIC), 6112 6113 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 6113 6114 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), 6114 6115 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+2
sound/soc/codecs/cs42l73.c
··· 929 929 930 930 /* MCLKX -> MCLK */ 931 931 mclkx_coeff = cs42l73_get_mclkx_coeff(freq); 932 + if (mclkx_coeff < 0) 933 + return mclkx_coeff; 932 934 933 935 mclk = cs42l73_mclkx_coeffs[mclkx_coeff].mclkx / 934 936 cs42l73_mclkx_coeffs[mclkx_coeff].ratio;
+223 -55
sound/soc/codecs/wm8994.c
··· 1000 1000 } 1001 1001 } 1002 1002 1003 + static int aif1clk_ev(struct snd_soc_dapm_widget *w, 1004 + struct snd_kcontrol *kcontrol, int event) 1005 + { 1006 + struct snd_soc_codec *codec = w->codec; 1007 + struct wm8994 *control = codec->control_data; 1008 + int mask = WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC1R_ENA; 1009 + int dac; 1010 + int adc; 1011 + int val; 1012 + 1013 + switch (control->type) { 1014 + case WM8994: 1015 + case WM8958: 1016 + mask |= WM8994_AIF1DAC2L_ENA | WM8994_AIF1DAC2R_ENA; 1017 + break; 1018 + default: 1019 + break; 1020 + } 1021 + 1022 + switch (event) { 1023 + case SND_SOC_DAPM_PRE_PMU: 1024 + val = snd_soc_read(codec, WM8994_AIF1_CONTROL_1); 1025 + if ((val & WM8994_AIF1ADCL_SRC) && 1026 + (val & WM8994_AIF1ADCR_SRC)) 1027 + adc = WM8994_AIF1ADC1R_ENA | WM8994_AIF1ADC2R_ENA; 1028 + else if (!(val & WM8994_AIF1ADCL_SRC) && 1029 + !(val & WM8994_AIF1ADCR_SRC)) 1030 + adc = WM8994_AIF1ADC1L_ENA | WM8994_AIF1ADC2L_ENA; 1031 + else 1032 + adc = WM8994_AIF1ADC1R_ENA | WM8994_AIF1ADC2R_ENA | 1033 + WM8994_AIF1ADC1L_ENA | WM8994_AIF1ADC2L_ENA; 1034 + 1035 + val = snd_soc_read(codec, WM8994_AIF1_CONTROL_2); 1036 + if ((val & WM8994_AIF1DACL_SRC) && 1037 + (val & WM8994_AIF1DACR_SRC)) 1038 + dac = WM8994_AIF1DAC1R_ENA | WM8994_AIF1DAC2R_ENA; 1039 + else if (!(val & WM8994_AIF1DACL_SRC) && 1040 + !(val & WM8994_AIF1DACR_SRC)) 1041 + dac = WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC2L_ENA; 1042 + else 1043 + dac = WM8994_AIF1DAC1R_ENA | WM8994_AIF1DAC2R_ENA | 1044 + WM8994_AIF1DAC1L_ENA | WM8994_AIF1DAC2L_ENA; 1045 + 1046 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1047 + mask, adc); 1048 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1049 + mask, dac); 1050 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1051 + WM8994_AIF1DSPCLK_ENA | 1052 + WM8994_SYSDSPCLK_ENA, 1053 + WM8994_AIF1DSPCLK_ENA | 1054 + WM8994_SYSDSPCLK_ENA); 1055 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, mask, 1056 + WM8994_AIF1ADC1R_ENA | 1057 + 
WM8994_AIF1ADC1L_ENA | 1058 + WM8994_AIF1ADC2R_ENA | 1059 + WM8994_AIF1ADC2L_ENA); 1060 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, mask, 1061 + WM8994_AIF1DAC1R_ENA | 1062 + WM8994_AIF1DAC1L_ENA | 1063 + WM8994_AIF1DAC2R_ENA | 1064 + WM8994_AIF1DAC2L_ENA); 1065 + break; 1066 + 1067 + case SND_SOC_DAPM_PRE_PMD: 1068 + case SND_SOC_DAPM_POST_PMD: 1069 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1070 + mask, 0); 1071 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1072 + mask, 0); 1073 + 1074 + val = snd_soc_read(codec, WM8994_CLOCKING_1); 1075 + if (val & WM8994_AIF2DSPCLK_ENA) 1076 + val = WM8994_SYSDSPCLK_ENA; 1077 + else 1078 + val = 0; 1079 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1080 + WM8994_SYSDSPCLK_ENA | 1081 + WM8994_AIF1DSPCLK_ENA, val); 1082 + break; 1083 + } 1084 + 1085 + return 0; 1086 + } 1087 + 1088 + static int aif2clk_ev(struct snd_soc_dapm_widget *w, 1089 + struct snd_kcontrol *kcontrol, int event) 1090 + { 1091 + struct snd_soc_codec *codec = w->codec; 1092 + int dac; 1093 + int adc; 1094 + int val; 1095 + 1096 + switch (event) { 1097 + case SND_SOC_DAPM_PRE_PMU: 1098 + val = snd_soc_read(codec, WM8994_AIF2_CONTROL_1); 1099 + if ((val & WM8994_AIF2ADCL_SRC) && 1100 + (val & WM8994_AIF2ADCR_SRC)) 1101 + adc = WM8994_AIF2ADCR_ENA; 1102 + else if (!(val & WM8994_AIF2ADCL_SRC) && 1103 + !(val & WM8994_AIF2ADCR_SRC)) 1104 + adc = WM8994_AIF2ADCL_ENA; 1105 + else 1106 + adc = WM8994_AIF2ADCL_ENA | WM8994_AIF2ADCR_ENA; 1107 + 1108 + 1109 + val = snd_soc_read(codec, WM8994_AIF2_CONTROL_2); 1110 + if ((val & WM8994_AIF2DACL_SRC) && 1111 + (val & WM8994_AIF2DACR_SRC)) 1112 + dac = WM8994_AIF2DACR_ENA; 1113 + else if (!(val & WM8994_AIF2DACL_SRC) && 1114 + !(val & WM8994_AIF2DACR_SRC)) 1115 + dac = WM8994_AIF2DACL_ENA; 1116 + else 1117 + dac = WM8994_AIF2DACL_ENA | WM8994_AIF2DACR_ENA; 1118 + 1119 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1120 + WM8994_AIF2ADCL_ENA | 1121 + WM8994_AIF2ADCR_ENA, adc); 
1122 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1123 + WM8994_AIF2DACL_ENA | 1124 + WM8994_AIF2DACR_ENA, dac); 1125 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1126 + WM8994_AIF2DSPCLK_ENA | 1127 + WM8994_SYSDSPCLK_ENA, 1128 + WM8994_AIF2DSPCLK_ENA | 1129 + WM8994_SYSDSPCLK_ENA); 1130 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_4, 1131 + WM8994_AIF2ADCL_ENA | 1132 + WM8994_AIF2ADCR_ENA, 1133 + WM8994_AIF2ADCL_ENA | 1134 + WM8994_AIF2ADCR_ENA); 1135 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1136 + WM8994_AIF2DACL_ENA | 1137 + WM8994_AIF2DACR_ENA, 1138 + WM8994_AIF2DACL_ENA | 1139 + WM8994_AIF2DACR_ENA); 1140 + break; 1141 + 1142 + case SND_SOC_DAPM_PRE_PMD: 1143 + case SND_SOC_DAPM_POST_PMD: 1144 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1145 + WM8994_AIF2DACL_ENA | 1146 + WM8994_AIF2DACR_ENA, 0); 1147 + snd_soc_update_bits(codec, WM8994_POWER_MANAGEMENT_5, 1148 + WM8994_AIF2ADCL_ENA | 1149 + WM8994_AIF2ADCR_ENA, 0); 1150 + 1151 + val = snd_soc_read(codec, WM8994_CLOCKING_1); 1152 + if (val & WM8994_AIF1DSPCLK_ENA) 1153 + val = WM8994_SYSDSPCLK_ENA; 1154 + else 1155 + val = 0; 1156 + snd_soc_update_bits(codec, WM8994_CLOCKING_1, 1157 + WM8994_SYSDSPCLK_ENA | 1158 + WM8994_AIF2DSPCLK_ENA, val); 1159 + break; 1160 + } 1161 + 1162 + return 0; 1163 + } 1164 + 1165 + static int aif1clk_late_ev(struct snd_soc_dapm_widget *w, 1166 + struct snd_kcontrol *kcontrol, int event) 1167 + { 1168 + struct snd_soc_codec *codec = w->codec; 1169 + struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec); 1170 + 1171 + switch (event) { 1172 + case SND_SOC_DAPM_PRE_PMU: 1173 + wm8994->aif1clk_enable = 1; 1174 + break; 1175 + case SND_SOC_DAPM_POST_PMD: 1176 + wm8994->aif1clk_disable = 1; 1177 + break; 1178 + } 1179 + 1180 + return 0; 1181 + } 1182 + 1183 + static int aif2clk_late_ev(struct snd_soc_dapm_widget *w, 1184 + struct snd_kcontrol *kcontrol, int event) 1185 + { 1186 + struct snd_soc_codec *codec = w->codec; 1187 + 
struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec);
1188 +
1189 + switch (event) {
1190 + case SND_SOC_DAPM_PRE_PMU:
1191 + wm8994->aif2clk_enable = 1;
1192 + break;
1193 + case SND_SOC_DAPM_POST_PMD:
1194 + wm8994->aif2clk_disable = 1;
1195 + break;
1196 + }
1197 +
1198 + return 0;
1199 + }
1200 +
1003 1201 static int late_enable_ev(struct snd_soc_dapm_widget *w,
1004 1202 struct snd_kcontrol *kcontrol, int event)
1005 1203 {
··· 1207 1009 switch (event) {
1208 1010 case SND_SOC_DAPM_PRE_PMU:
1209 1011 if (wm8994->aif1clk_enable) {
1012 + aif1clk_ev(w, kcontrol, event);
1210 1013 snd_soc_update_bits(codec, WM8994_AIF1_CLOCKING_1,
1211 1014 WM8994_AIF1CLK_ENA_MASK,
1212 1015 WM8994_AIF1CLK_ENA);
1213 1016 wm8994->aif1clk_enable = 0;
1214 1017 }
1215 1018 if (wm8994->aif2clk_enable) {
1019 + aif2clk_ev(w, kcontrol, event);
1216 1020 snd_soc_update_bits(codec, WM8994_AIF2_CLOCKING_1,
1217 1021 WM8994_AIF2CLK_ENA_MASK,
1218 1022 WM8994_AIF2CLK_ENA);
··· 1240 1040 if (wm8994->aif1clk_disable) {
1241 1041 snd_soc_update_bits(codec, WM8994_AIF1_CLOCKING_1,
1242 1042 WM8994_AIF1CLK_ENA_MASK, 0);
1043 + aif1clk_ev(w, kcontrol, event);
1243 1044 wm8994->aif1clk_disable = 0;
1244 1045 }
1245 1046 if (wm8994->aif2clk_disable) {
1246 1047 snd_soc_update_bits(codec, WM8994_AIF2_CLOCKING_1,
1247 1048 WM8994_AIF2CLK_ENA_MASK, 0);
1049 + aif2clk_ev(w, kcontrol, event);
1248 1050 wm8994->aif2clk_disable = 0;
1249 1051 }
1250 - break;
1251 - }
1252 -
1253 - return 0;
1254 - }
1255 -
1256 - static int aif1clk_ev(struct snd_soc_dapm_widget *w,
1257 - struct snd_kcontrol *kcontrol, int event)
1258 - {
1259 - struct snd_soc_codec *codec = w->codec;
1260 - struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec);
1261 -
1262 - switch (event) {
1263 - case SND_SOC_DAPM_PRE_PMU:
1264 - wm8994->aif1clk_enable = 1;
1265 - break;
1266 - case SND_SOC_DAPM_POST_PMD:
1267 - wm8994->aif1clk_disable = 1;
1268 - break;
1269 - }
1270 -
1271 - return 0;
1272 - }
1273 -
1274 - static int aif2clk_ev(struct snd_soc_dapm_widget *w,
1275 - struct snd_kcontrol *kcontrol, int event)
1276 - {
1277 - struct snd_soc_codec *codec = w->codec;
1278 - struct wm8994_priv *wm8994 = snd_soc_codec_get_drvdata(codec);
1279 -
1280 - switch (event) {
1281 - case SND_SOC_DAPM_PRE_PMU:
1282 - wm8994->aif2clk_enable = 1;
1283 - break;
1284 - case SND_SOC_DAPM_POST_PMD:
1285 - wm8994->aif2clk_disable = 1;
1286 1052 break;
1287 1053 }
··· 1551 1385 SOC_DAPM_ENUM("AIF2DACR Mux", aif2dacr_src_enum);
1552 1386
1553 1387 static const struct snd_soc_dapm_widget wm8994_lateclk_revd_widgets[] = {
1554 - SND_SOC_DAPM_SUPPLY("AIF1CLK", SND_SOC_NOPM, 0, 0, aif1clk_ev,
1388 + SND_SOC_DAPM_SUPPLY("AIF1CLK", SND_SOC_NOPM, 0, 0, aif1clk_late_ev,
1555 1389 SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
1556 - SND_SOC_DAPM_SUPPLY("AIF2CLK", SND_SOC_NOPM, 0, 0, aif2clk_ev,
1390 + SND_SOC_DAPM_SUPPLY("AIF2CLK", SND_SOC_NOPM, 0, 0, aif2clk_late_ev,
1557 1391 SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
1558 1392
1559 1393 SND_SOC_DAPM_PGA_E("Late DAC1L Enable PGA", SND_SOC_NOPM, 0, 0, NULL, 0,
··· 1582 1416 };
1583 1417
1584 1418 static const struct snd_soc_dapm_widget wm8994_lateclk_widgets[] = {
1585 - SND_SOC_DAPM_SUPPLY("AIF1CLK", WM8994_AIF1_CLOCKING_1, 0, 0, NULL, 0),
1586 - SND_SOC_DAPM_SUPPLY("AIF2CLK", WM8994_AIF2_CLOCKING_1, 0, 0, NULL, 0),
1419 + SND_SOC_DAPM_SUPPLY("AIF1CLK", WM8994_AIF1_CLOCKING_1, 0, 0, aif1clk_ev,
1420 + SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD),
1421 + SND_SOC_DAPM_SUPPLY("AIF2CLK", WM8994_AIF2_CLOCKING_1, 0, 0, aif2clk_ev,
1422 + SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_PRE_PMD),
1587 1423 SND_SOC_DAPM_PGA("Direct Voice", SND_SOC_NOPM, 0, 0, NULL, 0),
1588 1424 SND_SOC_DAPM_MIXER("SPKL", WM8994_POWER_MANAGEMENT_3, 8, 0,
1589 1425 left_speaker_mixer, ARRAY_SIZE(left_speaker_mixer)),
··· 1638 1470 SND_SOC_DAPM_SUPPLY("CLK_SYS", SND_SOC_NOPM, 0, 0, clk_sys_event,
1639 1471 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
1640 1472
1641 - SND_SOC_DAPM_SUPPLY("DSP1CLK", WM8994_CLOCKING_1, 3, 0, NULL, 0),
1642 - SND_SOC_DAPM_SUPPLY("DSP2CLK", WM8994_CLOCKING_1, 2, 0, NULL, 0),
1643 - SND_SOC_DAPM_SUPPLY("DSPINTCLK", WM8994_CLOCKING_1, 1, 0, NULL, 0),
1473 + SND_SOC_DAPM_SUPPLY("DSP1CLK", SND_SOC_NOPM, 3, 0, NULL, 0),
1474 + SND_SOC_DAPM_SUPPLY("DSP2CLK", SND_SOC_NOPM, 2, 0, NULL, 0),
1475 + SND_SOC_DAPM_SUPPLY("DSPINTCLK", SND_SOC_NOPM, 1, 0, NULL, 0),
1644 1476
1645 1477 SND_SOC_DAPM_AIF_OUT("AIF1ADC1L", NULL,
1646 - 0, WM8994_POWER_MANAGEMENT_4, 9, 0),
1478 + 0, SND_SOC_NOPM, 9, 0),
1647 1479 SND_SOC_DAPM_AIF_OUT("AIF1ADC1R", NULL,
1648 - 0, WM8994_POWER_MANAGEMENT_4, 8, 0),
1480 + 0, SND_SOC_NOPM, 8, 0),
1649 1481 SND_SOC_DAPM_AIF_IN_E("AIF1DAC1L", NULL, 0,
1650 - WM8994_POWER_MANAGEMENT_5, 9, 0, wm8958_aif_ev,
1482 + SND_SOC_NOPM, 9, 0, wm8958_aif_ev,
1651 1483 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
1652 1484 SND_SOC_DAPM_AIF_IN_E("AIF1DAC1R", NULL, 0,
1653 - WM8994_POWER_MANAGEMENT_5, 8, 0, wm8958_aif_ev,
1485 + SND_SOC_NOPM, 8, 0, wm8958_aif_ev,
1654 1486 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
1655 1487
1656 1488 SND_SOC_DAPM_AIF_OUT("AIF1ADC2L", NULL,
1657 - 0, WM8994_POWER_MANAGEMENT_4, 11, 0),
1489 + 0, SND_SOC_NOPM, 11, 0),
1658 1490 SND_SOC_DAPM_AIF_OUT("AIF1ADC2R", NULL,
1659 - 0, WM8994_POWER_MANAGEMENT_4, 10, 0),
1491 + 0, SND_SOC_NOPM, 10, 0),
1660 1492 SND_SOC_DAPM_AIF_IN_E("AIF1DAC2L", NULL, 0,
1661 - WM8994_POWER_MANAGEMENT_5, 11, 0, wm8958_aif_ev,
1493 + SND_SOC_NOPM, 11, 0, wm8958_aif_ev,
1662 1494 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
1663 1495 SND_SOC_DAPM_AIF_IN_E("AIF1DAC2R", NULL, 0,
1664 - WM8994_POWER_MANAGEMENT_5, 10, 0, wm8958_aif_ev,
1496 + SND_SOC_NOPM, 10, 0, wm8958_aif_ev,
1665 1497 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
1666 1498
1667 1499 SND_SOC_DAPM_MIXER("AIF1ADC1L Mixer", SND_SOC_NOPM, 0, 0,
··· 1688 1520 dac1r_mix, ARRAY_SIZE(dac1r_mix)),
1689 1521
1690 1522 SND_SOC_DAPM_AIF_OUT("AIF2ADCL", NULL, 0,
1691 - WM8994_POWER_MANAGEMENT_4, 13, 0),
1523 + SND_SOC_NOPM, 13, 0),
1692 1524 SND_SOC_DAPM_AIF_OUT("AIF2ADCR", NULL, 0,
1693 - WM8994_POWER_MANAGEMENT_4, 12, 0),
1525 + SND_SOC_NOPM, 12, 0),
1694 1526 SND_SOC_DAPM_AIF_IN_E("AIF2DACL", NULL, 0,
1695 - WM8994_POWER_MANAGEMENT_5, 13, 0, wm8958_aif_ev,
1527 + SND_SOC_NOPM, 13, 0, wm8958_aif_ev,
1696 1528 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
1697 1529 SND_SOC_DAPM_AIF_IN_E("AIF2DACR", NULL, 0,
1698 - WM8994_POWER_MANAGEMENT_5, 12, 0, wm8958_aif_ev,
1530 + SND_SOC_NOPM, 12, 0, wm8958_aif_ev,
1699 1531 SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD),
1700 1532
1701 1533 SND_SOC_DAPM_AIF_IN("AIF1DACDAT", NULL, 0, SND_SOC_NOPM, 0, 0),
+3 -4
sound/soc/sh/fsi.c
··· 1001 1001 sg_dma_address(&sg) = buf;
1002 1002 sg_dma_len(&sg) = len;
1003 1003
1004 - desc = chan->device->device_prep_slave_sg(chan, &sg, 1, dir,
1005 - DMA_PREP_INTERRUPT |
1006 - DMA_CTRL_ACK);
1004 + desc = dmaengine_prep_slave_sg(chan, &sg, 1, dir,
1005 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
1007 1006 if (!desc) {
1008 - dev_err(dai->dev, "device_prep_slave_sg() fail\n");
1007 + dev_err(dai->dev, "dmaengine_prep_slave_sg() fail\n");
1009 1008 return;
1010 1009 }
1011 1010
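The fsi.c change above swaps an open-coded `chan->device->device_prep_slave_sg(...)` dereference for the `dmaengine_prep_slave_sg()` helper, which in the kernel is an inline wrapper in `linux/dmaengine.h`. The following is only a minimal userspace sketch of that call-forwarding shape, with invented `mock_*` names; it is not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

/* Mock of the dmaengine pattern: a device exposes an ops table, and a
 * small wrapper hides the chan->device->op dereference from callers. */
struct mock_desc { int len; };

struct mock_device {
	struct mock_desc *(*device_prep_slave_sg)(struct mock_device *dev,
						  int sg_len);
};

struct mock_chan {
	struct mock_device *device;
};

/* Wrapper in the style of dmaengine_prep_slave_sg(): callers no longer
 * open-code chan->device->device_prep_slave_sg(...). */
static struct mock_desc *mock_prep_slave_sg(struct mock_chan *chan, int sg_len)
{
	return chan->device->device_prep_slave_sg(chan->device, sg_len);
}

static struct mock_desc one_desc;

static struct mock_desc *prep_impl(struct mock_device *dev, int sg_len)
{
	one_desc.len = sg_len;
	return &one_desc;
}

struct mock_desc *demo(void)
{
	struct mock_device dev = { .device_prep_slave_sg = prep_impl };
	struct mock_chan chan = { .device = &dev };

	/* same call site shape as the fsi.c hunk: one sg entry */
	return mock_prep_slave_sg(&chan, 1);
}
```

Routing every caller through one wrapper keeps the ops-table layout a private detail of the subsystem, which is the point of the conversion in the hunk.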
+1
sound/soc/soc-core.c
··· 3113 3113 GFP_KERNEL);
3114 3114 if (card->rtd == NULL)
3115 3115 return -ENOMEM;
3116 + card->num_rtd = 0;
3116 3117 card->rtd_aux = &card->rtd[card->num_links];
3117 3118
3118 3119 for (i = 0; i < card->num_links; i++)
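The one-line soc-core fix above zeroes `card->num_rtd` right after the runtime array is (re)allocated, so a card that is unregistered and registered again does not keep counting from the previous run. A standalone sketch of that failure mode, with invented `mock_*` names rather than the real ASoC structs:

```c
#include <assert.h>

/* Simplified model of the soc-core fix: num_rtd is incremented as the
 * card's links are bound, so it must be reset each time instantiation
 * starts over, or a second registration inherits the old count. */
struct mock_card {
	int num_links;
	int num_rtd;
};

void mock_instantiate(struct mock_card *card)
{
	int i;

	card->num_rtd = 0;	/* mirrors the added "card->num_rtd = 0;" */
	for (i = 0; i < card->num_links; i++)
		card->num_rtd++;	/* one runtime per bound link */
}
```

Without the reset, calling `mock_instantiate()` twice would leave `num_rtd` at `2 * num_links`, indexing past the allocated runtime array.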
+2
sound/soc/soc-dapm.c
··· 67 67 [snd_soc_dapm_out_drv] = 10,
68 68 [snd_soc_dapm_hp] = 10,
69 69 [snd_soc_dapm_spk] = 10,
70 + [snd_soc_dapm_line] = 10,
70 71 [snd_soc_dapm_post] = 11,
71 72 };
72 73
··· 76 75 [snd_soc_dapm_adc] = 1,
77 76 [snd_soc_dapm_hp] = 2,
78 77 [snd_soc_dapm_spk] = 2,
78 + [snd_soc_dapm_line] = 2,
79 79 [snd_soc_dapm_out_drv] = 2,
80 80 [snd_soc_dapm_pga] = 4,
81 81 [snd_soc_dapm_mixer_named_ctl] = 5,
+2 -2
tools/perf/Makefile
··· 234 234
235 235 export PERL_PATH
236 236
237 - FLEX = $(CROSS_COMPILE)flex
238 - BISON= $(CROSS_COMPILE)bison
237 + FLEX = flex
238 + BISON= bison
239 239
240 240 $(OUTPUT)util/parse-events-flex.c: util/parse-events.l
241 241 $(QUIET_FLEX)$(FLEX) --header-file=$(OUTPUT)util/parse-events-flex.h -t util/parse-events.l > $(OUTPUT)util/parse-events-flex.c
+12 -5
tools/perf/builtin-report.c
··· 374 374 (kernel_map->dso->hit &&
375 375 (kernel_kmap->ref_reloc_sym == NULL ||
376 376 kernel_kmap->ref_reloc_sym->addr == 0))) {
377 - const struct dso *kdso = kernel_map->dso;
377 + const char *desc =
378 + "As no suitable kallsyms nor vmlinux was found, kernel samples\n"
379 + "can't be resolved.";
380 +
381 + if (kernel_map) {
382 + const struct dso *kdso = kernel_map->dso;
383 + if (!RB_EMPTY_ROOT(&kdso->symbols[MAP__FUNCTION])) {
384 + desc = "If some relocation was applied (e.g. "
385 + "kexec) symbols may be misresolved.";
386 + }
387 + }
378 388
379 389 ui__warning(
380 390 "Kernel address maps (/proc/{kallsyms,modules}) were restricted.\n\n"
381 391 "Check /proc/sys/kernel/kptr_restrict before running 'perf record'.\n\n%s\n\n"
382 392 "Samples in kernel modules can't be resolved as well.\n\n",
383 - RB_EMPTY_ROOT(&kdso->symbols[MAP__FUNCTION]) ?
384 - "As no suitable kallsyms nor vmlinux was found, kernel samples\n"
385 - "can't be resolved." :
386 - "If some relocation was applied (e.g. kexec) symbols may be misresolved.");
393 + desc);
387 394 }
388 395
389 396 if (dump_trace) {
+30
tools/perf/builtin-test.c
··· 851 851 return test__checkevent_symbolic_name(evlist);
852 852 }
853 853
854 + static int test__checkevent_exclude_host_modifier(struct perf_evlist *evlist)
855 + {
856 + struct perf_evsel *evsel = list_entry(evlist->entries.next,
857 + struct perf_evsel, node);
858 +
859 + TEST_ASSERT_VAL("wrong exclude guest", !evsel->attr.exclude_guest);
860 + TEST_ASSERT_VAL("wrong exclude host", evsel->attr.exclude_host);
861 +
862 + return test__checkevent_symbolic_name(evlist);
863 + }
864 +
865 + static int test__checkevent_exclude_guest_modifier(struct perf_evlist *evlist)
866 + {
867 + struct perf_evsel *evsel = list_entry(evlist->entries.next,
868 + struct perf_evsel, node);
869 +
870 + TEST_ASSERT_VAL("wrong exclude guest", evsel->attr.exclude_guest);
871 + TEST_ASSERT_VAL("wrong exclude host", !evsel->attr.exclude_host);
872 +
873 + return test__checkevent_symbolic_name(evlist);
874 + }
875 +
854 876 static int test__checkevent_symbolic_alias_modifier(struct perf_evlist *evlist)
855 877 {
856 878 struct perf_evsel *evsel = list_entry(evlist->entries.next,
··· 1112 1090 {
1113 1091 .name = "r1,syscalls:sys_enter_open:k,1:1:hp",
1114 1092 .check = test__checkevent_list,
1093 + },
1094 + {
1095 + .name = "instructions:G",
1096 + .check = test__checkevent_exclude_host_modifier,
1097 + },
1098 + {
1099 + .name = "instructions:H",
1100 + .check = test__checkevent_exclude_guest_modifier,
1115 1101 },
1116 1102 };
1117 1103
+1 -1
tools/perf/util/parse-events.l
··· 54 54 num_hex 0x[a-fA-F0-9]+
55 55 num_raw_hex [a-fA-F0-9]+
56 56 name [a-zA-Z_*?][a-zA-Z0-9_*?]*
57 - modifier_event [ukhp]{1,5}
57 + modifier_event [ukhpGH]{1,8}
58 58 modifier_bp [rwx]
59 59
60 60 %%
+6 -7
tools/perf/util/symbol.c
··· 977 977 * And always look at the original dso, not at debuginfo packages, that
978 978 * have the PLT data stripped out (shdr_rel_plt.sh_type == SHT_NOBITS).
979 979 */
980 - static int dso__synthesize_plt_symbols(struct dso *dso, struct map *map,
981 - symbol_filter_t filter)
980 + static int
981 + dso__synthesize_plt_symbols(struct dso *dso, char *name, struct map *map,
982 + symbol_filter_t filter)
982 983 {
983 984 uint32_t nr_rel_entries, idx;
984 985 GElf_Sym sym;
··· 994 993 char sympltname[1024];
995 994 Elf *elf;
996 995 int nr = 0, symidx, fd, err = 0;
997 - char name[PATH_MAX];
998 996
999 - snprintf(name, sizeof(name), "%s%s",
1000 - symbol_conf.symfs, dso->long_name);
1001 997 fd = open(name, O_RDONLY);
1002 998 if (fd < 0)
1003 999 goto out;
··· 1701 1703 continue;
1702 1704
1703 1705 if (ret > 0) {
1704 - int nr_plt = dso__synthesize_plt_symbols(dso, map,
1705 - filter);
1706 + int nr_plt;
1707 +
1708 + nr_plt = dso__synthesize_plt_symbols(dso, name, map, filter);
1706 1709 if (nr_plt > 0)
1707 1710 ret += nr_plt;
1708 1711 break;