Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/emulex/benet/be_main.c
net/core/sysctl_net_core.c
net/ipv4/inet_diag.c

The be_main.c conflict resolution was really tricky. The conflict
hunks generated by GIT were very unhelpful, to say the least. It
split functions in half and moved them around, when the actual
conflict existed solely inside one function, namely
be_map_pci_bars().

So instead, to resolve this, I checked out be_main.c from the top
of net-next, then I applied the be_main.c changes from 'net' since
the last time I merged. And this worked beautifully.
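The recipe above can be sketched as git commands in a throwaway repository. Everything below (branches 'main' and 'net', file.c, its contents) is invented for illustration; the real case involved net-next, the 'net' tree, and be_main.c:

```shell
# Sketch of the "check out ours, then re-apply the other side's changes
# since the last merge" recipe, in a scratch repo.
set -e
work=$(mktemp -d)
cd "$work"
git init -q -b main repo && cd repo
git config user.email demo@example.com
git config user.name demo
printf '%s\n' a b c d e f g h i j > file.c
git add file.c && git commit -qm base
git branch net                        # remember the last merge point with 'net'
sed -i.bak 's/^i$/I/' file.c && rm file.c.bak
git commit -qam 'net-next side change'
git checkout -q net
sed -i.bak 's/^a$/A/' file.c && rm file.c.bak
git commit -qam 'net side change'
git checkout -q main
# Step 1: take file.c exactly as it stands at the top of net-next (here: main).
git checkout main -- file.c
# Step 2: re-apply only the 'net'-side changes made since the last merge.
git diff "$(git merge-base main net)" net -- file.c | git apply
grep -c '^[AI]$' file.c               # both sides' changes are now present
```

The point of step 2 is that the diff is taken against the common ancestor, so it contains only what 'net' changed since the last merge, which then applies cleanly on top of the net-next version of the file.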

The inet_diag.c and sysctl_net_core.c conflicts were simple
overlapping changes, and were easy to resolve.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3804 -1880
+2
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
··· 22 22 - pclkN, clkN: Pairs of parent of input clock and input clock to the 23 23 devices in this power domain. Maximum of 4 pairs (N = 0 to 3) 24 24 are supported currently. 25 + - power-domains: phandle pointing to the parent power domain, for more details 26 + see Documentation/devicetree/bindings/power/power_domain.txt 25 27 26 28 Node of a device using power domains must have a power-domains property 27 29 defined with a phandle to respective power domain.
+4
Documentation/devicetree/bindings/arm/sti.txt
··· 13 13 Required root node property: 14 14 compatible = "st,stih407"; 15 15 16 + Boards with the ST STiH410 SoC shall have the following properties: 17 + Required root node property: 18 + compatible = "st,stih410"; 19 + 16 20 Boards with the ST STiH418 SoC shall have the following properties: 17 21 Required root node property: 18 22 compatible = "st,stih418";
+29
Documentation/devicetree/bindings/power/power_domain.txt
··· 19 19 providing multiple PM domains (e.g. power controllers), but can be any value 20 20 as specified by device tree binding documentation of particular provider. 21 21 22 + Optional properties: 23 + - power-domains : A phandle and PM domain specifier as defined by bindings of 24 + the power controller specified by phandle. 25 + Some power domains might be powered from another power domain (or have 26 + other hardware specific dependencies). For representing such dependency 27 + a standard PM domain consumer binding is used. When provided, all domains 28 + created by the given provider should be subdomains of the domain 29 + specified by this binding. More details about power domain specifier are 30 + available in the next section. 31 + 22 32 Example: 23 33 24 34 power: power-controller@12340000 { ··· 39 29 40 30 The node above defines a power controller that is a PM domain provider and 41 31 expects one cell as its phandle argument. 32 + 33 + Example 2: 34 + 35 + parent: power-controller@12340000 { 36 + compatible = "foo,power-controller"; 37 + reg = <0x12340000 0x1000>; 38 + #power-domain-cells = <1>; 39 + }; 40 + 41 + child: power-controller@12340000 { 42 + compatible = "foo,power-controller"; 43 + reg = <0x12341000 0x1000>; 44 + power-domains = <&parent 0>; 45 + #power-domain-cells = <1>; 46 + }; 47 + 48 + The nodes above define two power controllers: 'parent' and 'child'. 49 + Domains created by the 'child' power controller are subdomains of '0' power 50 + domain provided by the 'parent' power controller. 42 51 43 52 ==PM domain consumers== 44 53
+19
Documentation/devicetree/bindings/serial/axis,etraxfs-uart.txt
··· 1 + ETRAX FS UART 2 + 3 + Required properties: 4 + - compatible : "axis,etraxfs-uart" 5 + - reg: offset and length of the register set for the device. 6 + - interrupts: device interrupt 7 + 8 + Optional properties: 9 + - {dtr,dsr,ri,cd}-gpios: specify a GPIO for DTR/DSR/RI/CD 10 + line respectively. 11 + 12 + Example: 13 + 14 + serial@b00260000 { 15 + compatible = "axis,etraxfs-uart"; 16 + reg = <0xb0026000 0x1000>; 17 + interrupts = <68>; 18 + status = "disabled"; 19 + };
Documentation/devicetree/bindings/serial/of-serial.txt → Documentation/devicetree/bindings/serial/8250.txt
+3
Documentation/devicetree/bindings/submitting-patches.txt
··· 12 12 13 13 devicetree@vger.kernel.org 14 14 15 + and Cc: the DT maintainers. Use scripts/get_maintainer.pl to identify 16 + all of the DT maintainers. 17 + 15 18 3) The Documentation/ portion of the patch should come in the series before 16 19 the code implementing the binding. 17 20
+2
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 20 20 ams AMS AG 21 21 amstaos AMS-Taos Inc. 22 22 apm Applied Micro Circuits Corporation (APM) 23 + arasan Arasan Chip Systems 23 24 arm ARM Ltd. 24 25 armadeus ARMadeus Systems SARL 25 26 asahi-kasei Asahi Kasei Corp. ··· 28 27 auo AU Optronics Corporation 29 28 avago Avago Technologies 30 29 avic Shanghai AVIC Optoelectronics Co., Ltd. 30 + axis Axis Communications AB 31 31 bosch Bosch Sensortec GmbH 32 32 brcm Broadcom Corporation 33 33 buffalo Buffalo, Inc.
+5
Documentation/devicetree/bindings/watchdog/atmel-wdt.txt
··· 26 26 - atmel,disable : Should be present if you want to disable the watchdog. 27 27 - atmel,idle-halt : Should be present if you want to stop the watchdog when 28 28 entering idle state. 29 + CAUTION: This property should be used with care, it actually makes the 30 + watchdog not counting when the CPU is in idle state, therefore the 31 + watchdog reset time depends on mean CPU usage and will not reset at all 32 + if the CPU stop working while it is in idle state, which is probably 33 + not what you want. 29 34 - atmel,dbg-halt : Should be present if you want to stop the watchdog when 30 35 entering debug state. 31 36
+12 -2
MAINTAINERS
··· 1030 1030 F: arch/arm/boot/dts/imx* 1031 1031 F: arch/arm/configs/imx*_defconfig 1032 1032 1033 + ARM/FREESCALE VYBRID ARM ARCHITECTURE 1034 + M: Shawn Guo <shawn.guo@linaro.org> 1035 + M: Sascha Hauer <kernel@pengutronix.de> 1036 + R: Stefan Agner <stefan@agner.ch> 1037 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1038 + S: Maintained 1039 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git 1040 + F: arch/arm/mach-imx/*vf610* 1041 + F: arch/arm/boot/dts/vf* 1042 + 1033 1043 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT 1034 1044 M: Lennert Buytenhek <kernel@wantstofly.org> 1035 1045 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 1198 1188 M: Jason Cooper <jason@lakedaemon.net> 1199 1189 M: Andrew Lunn <andrew@lunn.ch> 1200 1190 M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> 1191 + M: Gregory Clement <gregory.clement@free-electrons.com> 1201 1192 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1202 1193 S: Maintained 1203 1194 F: arch/arm/mach-dove/ ··· 1741 1730 F: drivers/net/ethernet/atheros/ 1742 1731 1743 1732 ATM 1744 - M: Chas Williams <chas@cmf.nrl.navy.mil> 1733 + M: Chas Williams <3chas3@gmail.com> 1745 1734 L: linux-atm-general@lists.sourceforge.net (moderated for non-subscribers) 1746 1735 L: netdev@vger.kernel.org 1747 1736 W: http://linux-atm.sourceforge.net ··· 2118 2107 2119 2108 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE 2120 2109 M: Christian Daudt <bcm@fixthebug.org> 2121 - M: Matt Porter <mporter@linaro.org> 2122 2110 M: Florian Fainelli <f.fainelli@gmail.com> 2123 2111 L: bcm-kernel-feedback-list@broadcom.com 2124 2112 T: git git://github.com/broadcom/mach-bcm
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 0 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc3 4 + EXTRAVERSION = -rc4 5 5 NAME = Hurr durr I'ma sheep 6 6 7 7 # *DOCUMENTATION*
+1
arch/arm/Makefile
··· 150 150 machine-$(CONFIG_ARCH_CLPS711X) += clps711x 151 151 machine-$(CONFIG_ARCH_CNS3XXX) += cns3xxx 152 152 machine-$(CONFIG_ARCH_DAVINCI) += davinci 153 + machine-$(CONFIG_ARCH_DIGICOLOR) += digicolor 153 154 machine-$(CONFIG_ARCH_DOVE) += dove 154 155 machine-$(CONFIG_ARCH_EBSA110) += ebsa110 155 156 machine-$(CONFIG_ARCH_EFM32) += efm32
+8
arch/arm/boot/dts/am335x-bone-common.dtsi
··· 301 301 cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; 302 302 cd-inverted; 303 303 }; 304 + 305 + &aes { 306 + status = "okay"; 307 + }; 308 + 309 + &sham { 310 + status = "okay"; 311 + };
-8
arch/arm/boot/dts/am335x-bone.dts
··· 24 24 &mmc1 { 25 25 vmmc-supply = <&ldo3_reg>; 26 26 }; 27 - 28 - &sham { 29 - status = "okay"; 30 - }; 31 - 32 - &aes { 33 - status = "okay"; 34 - };
+4
arch/arm/boot/dts/am335x-lxm.dts
··· 328 328 dual_emac_res_vlan = <3>; 329 329 }; 330 330 331 + &phy_sel { 332 + rmii-clock-ext; 333 + }; 334 + 331 335 &mac { 332 336 pinctrl-names = "default", "sleep"; 333 337 pinctrl-0 = <&cpsw_default>;
+3 -3
arch/arm/boot/dts/am33xx-clocks.dtsi
··· 99 99 ehrpwm0_tbclk: ehrpwm0_tbclk@44e10664 { 100 100 #clock-cells = <0>; 101 101 compatible = "ti,gate-clock"; 102 - clocks = <&dpll_per_m2_ck>; 102 + clocks = <&l4ls_gclk>; 103 103 ti,bit-shift = <0>; 104 104 reg = <0x0664>; 105 105 }; ··· 107 107 ehrpwm1_tbclk: ehrpwm1_tbclk@44e10664 { 108 108 #clock-cells = <0>; 109 109 compatible = "ti,gate-clock"; 110 - clocks = <&dpll_per_m2_ck>; 110 + clocks = <&l4ls_gclk>; 111 111 ti,bit-shift = <1>; 112 112 reg = <0x0664>; 113 113 }; ··· 115 115 ehrpwm2_tbclk: ehrpwm2_tbclk@44e10664 { 116 116 #clock-cells = <0>; 117 117 compatible = "ti,gate-clock"; 118 - clocks = <&dpll_per_m2_ck>; 118 + clocks = <&l4ls_gclk>; 119 119 ti,bit-shift = <2>; 120 120 reg = <0x0664>; 121 121 };
+6 -6
arch/arm/boot/dts/am43xx-clocks.dtsi
··· 107 107 ehrpwm0_tbclk: ehrpwm0_tbclk { 108 108 #clock-cells = <0>; 109 109 compatible = "ti,gate-clock"; 110 - clocks = <&dpll_per_m2_ck>; 110 + clocks = <&l4ls_gclk>; 111 111 ti,bit-shift = <0>; 112 112 reg = <0x0664>; 113 113 }; ··· 115 115 ehrpwm1_tbclk: ehrpwm1_tbclk { 116 116 #clock-cells = <0>; 117 117 compatible = "ti,gate-clock"; 118 - clocks = <&dpll_per_m2_ck>; 118 + clocks = <&l4ls_gclk>; 119 119 ti,bit-shift = <1>; 120 120 reg = <0x0664>; 121 121 }; ··· 123 123 ehrpwm2_tbclk: ehrpwm2_tbclk { 124 124 #clock-cells = <0>; 125 125 compatible = "ti,gate-clock"; 126 - clocks = <&dpll_per_m2_ck>; 126 + clocks = <&l4ls_gclk>; 127 127 ti,bit-shift = <2>; 128 128 reg = <0x0664>; 129 129 }; ··· 131 131 ehrpwm3_tbclk: ehrpwm3_tbclk { 132 132 #clock-cells = <0>; 133 133 compatible = "ti,gate-clock"; 134 - clocks = <&dpll_per_m2_ck>; 134 + clocks = <&l4ls_gclk>; 135 135 ti,bit-shift = <4>; 136 136 reg = <0x0664>; 137 137 }; ··· 139 139 ehrpwm4_tbclk: ehrpwm4_tbclk { 140 140 #clock-cells = <0>; 141 141 compatible = "ti,gate-clock"; 142 - clocks = <&dpll_per_m2_ck>; 142 + clocks = <&l4ls_gclk>; 143 143 ti,bit-shift = <5>; 144 144 reg = <0x0664>; 145 145 }; ··· 147 147 ehrpwm5_tbclk: ehrpwm5_tbclk { 148 148 #clock-cells = <0>; 149 149 compatible = "ti,gate-clock"; 150 - clocks = <&dpll_per_m2_ck>; 150 + clocks = <&l4ls_gclk>; 151 151 ti,bit-shift = <6>; 152 152 reg = <0x0664>; 153 153 };
+3 -4
arch/arm/boot/dts/at91sam9260.dtsi
··· 494 494 495 495 pinctrl_usart3_rts: usart3_rts-0 { 496 496 atmel,pins = 497 - <AT91_PIOB 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC8 periph B */ 497 + <AT91_PIOC 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; 498 498 }; 499 499 500 500 pinctrl_usart3_cts: usart3_cts-0 { 501 501 atmel,pins = 502 - <AT91_PIOB 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC10 periph B */ 502 + <AT91_PIOC 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; 503 503 }; 504 504 }; 505 505 ··· 853 853 }; 854 854 855 855 usb1: gadget@fffa4000 { 856 - compatible = "atmel,at91rm9200-udc"; 856 + compatible = "atmel,at91sam9260-udc"; 857 857 reg = <0xfffa4000 0x4000>; 858 858 interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>; 859 859 clocks = <&udc_clk>, <&udpck>; ··· 976 976 atmel,watchdog-type = "hardware"; 977 977 atmel,reset-type = "all"; 978 978 atmel,dbg-halt; 979 - atmel,idle-halt; 980 979 status = "disabled"; 981 980 }; 982 981
+5 -4
arch/arm/boot/dts/at91sam9261.dtsi
··· 124 124 }; 125 125 126 126 usb1: gadget@fffa4000 { 127 - compatible = "atmel,at91rm9200-udc"; 127 + compatible = "atmel,at91sam9261-udc"; 128 128 reg = <0xfffa4000 0x4000>; 129 129 interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>; 130 - clocks = <&usb>, <&udc_clk>, <&udpck>; 131 - clock-names = "usb_clk", "udc_clk", "udpck"; 130 + clocks = <&udc_clk>, <&udpck>; 131 + clock-names = "pclk", "hclk"; 132 + atmel,matrix = <&matrix>; 132 133 status = "disabled"; 133 134 }; 134 135 ··· 263 262 }; 264 263 265 264 matrix: matrix@ffffee00 { 266 - compatible = "atmel,at91sam9260-bus-matrix"; 265 + compatible = "atmel,at91sam9260-bus-matrix", "syscon"; 267 266 reg = <0xffffee00 0x200>; 268 267 }; 269 268
+2 -3
arch/arm/boot/dts/at91sam9263.dtsi
··· 69 69 70 70 sram1: sram@00500000 { 71 71 compatible = "mmio-sram"; 72 - reg = <0x00300000 0x4000>; 72 + reg = <0x00500000 0x4000>; 73 73 }; 74 74 75 75 ahb { ··· 856 856 }; 857 857 858 858 usb1: gadget@fff78000 { 859 - compatible = "atmel,at91rm9200-udc"; 859 + compatible = "atmel,at91sam9263-udc"; 860 860 reg = <0xfff78000 0x4000>; 861 861 interrupts = <24 IRQ_TYPE_LEVEL_HIGH 2>; 862 862 clocks = <&udc_clk>, <&udpck>; ··· 905 905 atmel,watchdog-type = "hardware"; 906 906 atmel,reset-type = "all"; 907 907 atmel,dbg-halt; 908 - atmel,idle-halt; 909 908 status = "disabled"; 910 909 }; 911 910
+1 -2
arch/arm/boot/dts/at91sam9g45.dtsi
··· 1116 1116 atmel,watchdog-type = "hardware"; 1117 1117 atmel,reset-type = "all"; 1118 1118 atmel,dbg-halt; 1119 - atmel,idle-halt; 1120 1119 status = "disabled"; 1121 1120 }; 1122 1121 ··· 1300 1301 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1301 1302 reg = <0x00800000 0x100000>; 1302 1303 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1303 - clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1304 + clocks = <&utmi>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1304 1305 clock-names = "usb_clk", "ehci_clk", "hclk", "uhpck"; 1305 1306 status = "disabled"; 1306 1307 };
-1
arch/arm/boot/dts/at91sam9n12.dtsi
··· 894 894 atmel,watchdog-type = "hardware"; 895 895 atmel,reset-type = "all"; 896 896 atmel,dbg-halt; 897 - atmel,idle-halt; 898 897 status = "disabled"; 899 898 }; 900 899
+2 -3
arch/arm/boot/dts/at91sam9x5.dtsi
··· 1066 1066 reg = <0x00500000 0x80000 1067 1067 0xf803c000 0x400>; 1068 1068 interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; 1069 - clocks = <&usb>, <&udphs_clk>; 1069 + clocks = <&utmi>, <&udphs_clk>; 1070 1070 clock-names = "hclk", "pclk"; 1071 1071 status = "disabled"; 1072 1072 ··· 1130 1130 atmel,watchdog-type = "hardware"; 1131 1131 atmel,reset-type = "all"; 1132 1132 atmel,dbg-halt; 1133 - atmel,idle-halt; 1134 1133 status = "disabled"; 1135 1134 }; 1136 1135 ··· 1185 1186 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1186 1187 reg = <0x00700000 0x100000>; 1187 1188 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1188 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 1189 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 1189 1190 clock-names = "usb_clk", "ehci_clk", "uhpck"; 1190 1191 status = "disabled"; 1191 1192 };
+4 -6
arch/arm/boot/dts/dra7-evm.dts
··· 263 263 264 264 dcan1_pins_default: dcan1_pins_default { 265 265 pinctrl-single,pins = < 266 - 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */ 267 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 268 - 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */ 266 + 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */ 267 + 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */ 269 268 >; 270 269 }; 271 270 272 271 dcan1_pins_sleep: dcan1_pins_sleep { 273 272 pinctrl-single,pins = < 274 - 0x3d0 (MUX_MODE15) /* dcan1_tx.off */ 275 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 276 - 0x418 (MUX_MODE15) /* wakeup0.off */ 273 + 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */ 274 + 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */ 277 275 >; 278 276 }; 279 277 };
+4 -6
arch/arm/boot/dts/dra72-evm.dts
··· 119 119 120 120 dcan1_pins_default: dcan1_pins_default { 121 121 pinctrl-single,pins = < 122 - 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */ 123 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 124 - 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */ 122 + 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */ 123 + 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */ 125 124 >; 126 125 }; 127 126 128 127 dcan1_pins_sleep: dcan1_pins_sleep { 129 128 pinctrl-single,pins = < 130 - 0x3d0 (MUX_MODE15) /* dcan1_tx.off */ 131 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 132 - 0x418 (MUX_MODE15) /* wakeup0.off */ 129 + 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */ 130 + 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */ 133 131 >; 134 132 }; 135 133
+81 -9
arch/arm/boot/dts/dra7xx-clocks.dtsi
··· 243 243 ti,invert-autoidle-bit; 244 244 }; 245 245 246 + dpll_core_byp_mux: dpll_core_byp_mux { 247 + #clock-cells = <0>; 248 + compatible = "ti,mux-clock"; 249 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 250 + ti,bit-shift = <23>; 251 + reg = <0x012c>; 252 + }; 253 + 246 254 dpll_core_ck: dpll_core_ck { 247 255 #clock-cells = <0>; 248 256 compatible = "ti,omap4-dpll-core-clock"; 249 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 257 + clocks = <&sys_clkin1>, <&dpll_core_byp_mux>; 250 258 reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>; 251 259 }; 252 260 ··· 317 309 clock-div = <1>; 318 310 }; 319 311 312 + dpll_dsp_byp_mux: dpll_dsp_byp_mux { 313 + #clock-cells = <0>; 314 + compatible = "ti,mux-clock"; 315 + clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>; 316 + ti,bit-shift = <23>; 317 + reg = <0x0240>; 318 + }; 319 + 320 320 dpll_dsp_ck: dpll_dsp_ck { 321 321 #clock-cells = <0>; 322 322 compatible = "ti,omap4-dpll-clock"; 323 - clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>; 323 + clocks = <&sys_clkin1>, <&dpll_dsp_byp_mux>; 324 324 reg = <0x0234>, <0x0238>, <0x0240>, <0x023c>; 325 325 }; 326 326 ··· 351 335 clock-div = <1>; 352 336 }; 353 337 338 + dpll_iva_byp_mux: dpll_iva_byp_mux { 339 + #clock-cells = <0>; 340 + compatible = "ti,mux-clock"; 341 + clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>; 342 + ti,bit-shift = <23>; 343 + reg = <0x01ac>; 344 + }; 345 + 354 346 dpll_iva_ck: dpll_iva_ck { 355 347 #clock-cells = <0>; 356 348 compatible = "ti,omap4-dpll-clock"; 357 - clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>; 349 + clocks = <&sys_clkin1>, <&dpll_iva_byp_mux>; 358 350 reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>; 359 351 }; 360 352 ··· 385 361 clock-div = <1>; 386 362 }; 387 363 364 + dpll_gpu_byp_mux: dpll_gpu_byp_mux { 365 + #clock-cells = <0>; 366 + compatible = "ti,mux-clock"; 367 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 368 + ti,bit-shift = <23>; 369 + reg = <0x02e4>; 370 + }; 371 + 388 372 dpll_gpu_ck: dpll_gpu_ck { 389 373 #clock-cells = <0>; 390 374 compatible = "ti,omap4-dpll-clock"; 391 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 375 + clocks = <&sys_clkin1>, <&dpll_gpu_byp_mux>; 392 376 reg = <0x02d8>, <0x02dc>, <0x02e4>, <0x02e0>; 393 377 }; 394 378 ··· 430 398 clock-div = <1>; 431 399 }; 432 400 401 + dpll_ddr_byp_mux: dpll_ddr_byp_mux { 402 + #clock-cells = <0>; 403 + compatible = "ti,mux-clock"; 404 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 405 + ti,bit-shift = <23>; 406 + reg = <0x021c>; 407 + }; 408 + 433 409 dpll_ddr_ck: dpll_ddr_ck { 434 410 #clock-cells = <0>; 435 411 compatible = "ti,omap4-dpll-clock"; 436 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 412 + clocks = <&sys_clkin1>, <&dpll_ddr_byp_mux>; 437 413 reg = <0x0210>, <0x0214>, <0x021c>, <0x0218>; 438 414 }; 439 415 ··· 456 416 ti,invert-autoidle-bit; 457 417 }; 458 418 419 + dpll_gmac_byp_mux: dpll_gmac_byp_mux { 420 + #clock-cells = <0>; 421 + compatible = "ti,mux-clock"; 422 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 423 + ti,bit-shift = <23>; 424 + reg = <0x02b4>; 425 + }; 426 + 459 427 dpll_gmac_ck: dpll_gmac_ck { 460 428 #clock-cells = <0>; 461 429 compatible = "ti,omap4-dpll-clock"; 462 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 430 + clocks = <&sys_clkin1>, <&dpll_gmac_byp_mux>; 463 431 reg = <0x02a8>, <0x02ac>, <0x02b4>, <0x02b0>; 464 432 }; 465 433 ··· 530 482 clock-div = <1>; 531 483 }; 532 484 485 + dpll_eve_byp_mux: dpll_eve_byp_mux { 486 + #clock-cells = <0>; 487 + compatible = "ti,mux-clock"; 488 + clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>; 489 + ti,bit-shift = <23>; 490 + reg = <0x0290>; 491 + }; 492 + 533 493 dpll_eve_ck: dpll_eve_ck { 534 494 #clock-cells = <0>; 535 495 compatible = "ti,omap4-dpll-clock"; 536 - clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>; 496 + clocks = <&sys_clkin1>, <&dpll_eve_byp_mux>; 537 497 reg = <0x0284>, <0x0288>, <0x0290>, <0x028c>; 538 498 }; 539 499 ··· 1305 1249 clock-div = <1>; 1306 1250 }; 1307 1251 1252 + dpll_per_byp_mux: dpll_per_byp_mux { 1253 + #clock-cells = <0>; 1254 + compatible = "ti,mux-clock"; 1255 + clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>; 1256 + ti,bit-shift = <23>; 1257 + reg = <0x014c>; 1258 + }; 1259 + 1308 1260 dpll_per_ck: dpll_per_ck { 1309 1261 #clock-cells = <0>; 1310 1262 compatible = "ti,omap4-dpll-clock"; 1311 - clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>; 1263 + clocks = <&sys_clkin1>, <&dpll_per_byp_mux>; 1312 1264 reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>; 1313 1265 }; 1314 1266 ··· 1339 1275 clock-div = <1>; 1340 1276 }; 1341 1277 1278 + dpll_usb_byp_mux: dpll_usb_byp_mux { 1279 + #clock-cells = <0>; 1280 + compatible = "ti,mux-clock"; 1281 + clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>; 1282 + ti,bit-shift = <23>; 1283 + reg = <0x018c>; 1284 + }; 1285 + 1342 1286 dpll_usb_ck: dpll_usb_ck { 1343 1287 #clock-cells = <0>; 1344 1288 compatible = "ti,omap4-dpll-j-type-clock"; 1345 - clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>; 1289 + clocks = <&sys_clkin1>, <&dpll_usb_byp_mux>; 1346 1290 reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>; 1347 1291 }; 1348 1292
+2
arch/arm/boot/dts/exynos3250.dtsi
··· 18 18 */ 19 19 20 20 #include "skeleton.dtsi" 21 + #include "exynos4-cpu-thermal.dtsi" 21 22 #include <dt-bindings/clock/exynos3250.h> 22 23 23 24 / { ··· 194 193 interrupts = <0 216 0>; 195 194 clocks = <&cmu CLK_TMU_APBIF>; 196 195 clock-names = "tmu_apbif"; 196 + #include "exynos4412-tmu-sensor-conf.dtsi" 197 197 status = "disabled"; 198 198 }; 199 199
+52
arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos4 thermal zone 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal.h> 13 + 14 + / { 15 + thermal-zones { 16 + cpu_thermal: cpu-thermal { 17 + thermal-sensors = <&tmu 0>; 18 + polling-delay-passive = <0>; 19 + polling-delay = <0>; 20 + trips { 21 + cpu_alert0: cpu-alert-0 { 22 + temperature = <70000>; /* millicelsius */ 23 + hysteresis = <10000>; /* millicelsius */ 24 + type = "active"; 25 + }; 26 + cpu_alert1: cpu-alert-1 { 27 + temperature = <95000>; /* millicelsius */ 28 + hysteresis = <10000>; /* millicelsius */ 29 + type = "active"; 30 + }; 31 + cpu_alert2: cpu-alert-2 { 32 + temperature = <110000>; /* millicelsius */ 33 + hysteresis = <10000>; /* millicelsius */ 34 + type = "active"; 35 + }; 36 + cpu_crit0: cpu-crit-0 { 37 + temperature = <120000>; /* millicelsius */ 38 + hysteresis = <0>; /* millicelsius */ 39 + type = "critical"; 40 + }; 41 + }; 42 + cooling-maps { 43 + map0 { 44 + trip = <&cpu_alert0>; 45 + }; 46 + map1 { 47 + trip = <&cpu_alert1>; 48 + }; 49 + }; 50 + }; 51 + }; 52 + };
+45
arch/arm/boot/dts/exynos4.dtsi
··· 38 38 i2c5 = &i2c_5; 39 39 i2c6 = &i2c_6; 40 40 i2c7 = &i2c_7; 41 + i2c8 = &i2c_8; 41 42 csis0 = &csis_0; 42 43 csis1 = &csis_1; 43 44 fimc0 = &fimc_0; ··· 105 104 compatible = "samsung,exynos4210-pd"; 106 105 reg = <0x10023C20 0x20>; 107 106 #power-domain-cells = <0>; 107 + power-domains = <&pd_lcd0>; 108 108 }; 109 109 110 110 pd_cam: cam-power-domain@10023C00 { ··· 556 554 status = "disabled"; 557 555 }; 558 556 557 + i2c_8: i2c@138E0000 { 558 + #address-cells = <1>; 559 + #size-cells = <0>; 560 + compatible = "samsung,s3c2440-hdmiphy-i2c"; 561 + reg = <0x138E0000 0x100>; 562 + interrupts = <0 93 0>; 563 + clocks = <&clock CLK_I2C_HDMI>; 564 + clock-names = "i2c"; 565 + status = "disabled"; 566 + 567 + hdmi_i2c_phy: hdmiphy@38 { 568 + compatible = "exynos4210-hdmiphy"; 569 + reg = <0x38>; 570 + }; 571 + }; 572 + 559 573 spi_0: spi@13920000 { 560 574 compatible = "samsung,exynos4210-spi"; 561 575 reg = <0x13920000 0x100>; ··· 678 660 clock-names = "sclk_fimd", "fimd"; 679 661 power-domains = <&pd_lcd0>; 680 662 samsung,sysreg = <&sys_reg>; 663 + status = "disabled"; 664 + }; 665 + 666 + tmu: tmu@100C0000 { 667 + #include "exynos4412-tmu-sensor-conf.dtsi" 668 + }; 669 + 670 + hdmi: hdmi@12D00000 { 671 + compatible = "samsung,exynos4210-hdmi"; 672 + reg = <0x12D00000 0x70000>; 673 + interrupts = <0 92 0>; 674 + clock-names = "hdmi", "sclk_hdmi", "sclk_pixel", "sclk_hdmiphy", 675 + "mout_hdmi"; 676 + clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>, 677 + <&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>, 678 + <&clock CLK_MOUT_HDMI>; 679 + phy = <&hdmi_i2c_phy>; 680 + power-domains = <&pd_tv>; 681 + samsung,syscon-phandle = <&pmu_system_controller>; 682 + status = "disabled"; 683 + }; 684 + 685 + mixer: mixer@12C10000 { 686 + compatible = "samsung,exynos4210-mixer"; 687 + interrupts = <0 91 0>; 688 + reg = <0x12C10000 0x2100>, <0x12c00000 0x300>; 689 + power-domains = <&pd_tv>; 681 690 status = "disabled"; 682 691 }; 683 692
+19
arch/arm/boot/dts/exynos4210-trats.dts
··· 426 426 status = "okay"; 427 427 }; 428 428 429 + tmu@100C0000 { 430 + status = "okay"; 431 + }; 432 + 433 + thermal-zones { 434 + cpu_thermal: cpu-thermal { 435 + cooling-maps { 436 + map0 { 437 + /* Corresponds to 800MHz at freq_table */ 438 + cooling-device = <&cpu0 2 2>; 439 + }; 440 + map1 { 441 + /* Corresponds to 200MHz at freq_table */ 442 + cooling-device = <&cpu0 4 4>; 443 + }; 444 + }; 445 + }; 446 + }; 447 + 429 448 camera { 430 449 pinctrl-names = "default"; 431 450 pinctrl-0 = <>;
+57
arch/arm/boot/dts/exynos4210-universal_c210.dts
··· 505 505 assigned-clock-rates = <0>, <160000000>; 506 506 }; 507 507 }; 508 + 509 + hdmi_en: voltage-regulator-hdmi-5v { 510 + compatible = "regulator-fixed"; 511 + regulator-name = "HDMI_5V"; 512 + regulator-min-microvolt = <5000000>; 513 + regulator-max-microvolt = <5000000>; 514 + gpio = <&gpe0 1 0>; 515 + enable-active-high; 516 + }; 517 + 518 + hdmi_ddc: i2c-ddc { 519 + compatible = "i2c-gpio"; 520 + gpios = <&gpe4 2 0 &gpe4 3 0>; 521 + i2c-gpio,delay-us = <100>; 522 + #address-cells = <1>; 523 + #size-cells = <0>; 524 + 525 + pinctrl-0 = <&i2c_ddc_bus>; 526 + pinctrl-names = "default"; 527 + status = "okay"; 528 + }; 529 + 530 + mixer@12C10000 { 531 + status = "okay"; 532 + }; 533 + 534 + hdmi@12D00000 { 535 + hpd-gpio = <&gpx3 7 0>; 536 + pinctrl-names = "default"; 537 + pinctrl-0 = <&hdmi_hpd>; 538 + hdmi-en-supply = <&hdmi_en>; 539 + vdd-supply = <&ldo3_reg>; 540 + vdd_osc-supply = <&ldo4_reg>; 541 + vdd_pll-supply = <&ldo3_reg>; 542 + ddc = <&hdmi_ddc>; 543 + status = "okay"; 544 + }; 545 + 546 + i2c@138E0000 { 547 + status = "okay"; 548 + }; 549 + }; 550 + 551 + &pinctrl_1 { 552 + hdmi_hpd: hdmi-hpd { 553 + samsung,pins = "gpx3-7"; 554 + samsung,pin-pud = <0>; 555 + }; 556 + }; 557 + 558 + &pinctrl_0 { 559 + i2c_ddc_bus: i2c-ddc-bus { 560 + samsung,pins = "gpe4-2", "gpe4-3"; 561 + samsung,pin-function = <2>; 562 + samsung,pin-pud = <3>; 563 + samsung,pin-drv = <0>; 564 + }; 508 565 }; 509 566 510 567 &mdma1 {
+36 -2
arch/arm/boot/dts/exynos4210.dtsi
··· 21 21 22 22 #include "exynos4.dtsi" 23 23 #include "exynos4210-pinctrl.dtsi" 24 + #include "exynos4-cpu-thermal.dtsi" 24 25 25 26 / { 26 27 compatible = "samsung,exynos4210", "samsung,exynos4"; ··· 36 35 #address-cells = <1>; 37 36 #size-cells = <0>; 38 37 39 - cpu@900 { 38 + cpu0: cpu@900 { 40 39 device_type = "cpu"; 41 40 compatible = "arm,cortex-a9"; 42 41 reg = <0x900>; 42 + cooling-min-level = <4>; 43 + cooling-max-level = <2>; 44 + #cooling-cells = <2>; /* min followed by max */ 43 45 }; 44 46 45 47 cpu@901 { ··· 157 153 reg = <0x03860000 0x1000>; 158 154 }; 159 155 160 - tmu@100C0000 { 156 + tmu: tmu@100C0000 { 161 157 compatible = "samsung,exynos4210-tmu"; 162 158 interrupt-parent = <&combiner>; 163 159 reg = <0x100C0000 0x100>; 164 160 interrupts = <2 4>; 165 161 clocks = <&clock CLK_TMU_APBIF>; 166 162 clock-names = "tmu_apbif"; 163 + samsung,tmu_gain = <15>; 164 + samsung,tmu_reference_voltage = <7>; 167 165 status = "disabled"; 166 + }; 167 + 168 + thermal-zones { 169 + cpu_thermal: cpu-thermal { 170 + polling-delay-passive = <0>; 171 + polling-delay = <0>; 172 + thermal-sensors = <&tmu 0>; 173 + 174 + trips { 175 + cpu_alert0: cpu-alert-0 { 176 + temperature = <85000>; /* millicelsius */ 177 + }; 178 + cpu_alert1: cpu-alert-1 { 179 + temperature = <100000>; /* millicelsius */ 180 + }; 181 + cpu_alert2: cpu-alert-2 { 182 + temperature = <110000>; /* millicelsius */ 183 + }; 184 + }; 185 + }; 168 186 }; 169 187 170 188 g2d@12800000 { ··· 227 201 samsung,mainscaler-ext; 228 202 samsung,lcd-wb; 229 203 }; 204 + }; 205 + 206 + mixer: mixer@12C10000 { 207 + clock-names = "mixer", "hdmi", "sclk_hdmi", "vp", "mout_mixer", 208 + "sclk_mixer"; 209 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 210 + <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>, 211 + <&clock CLK_MOUT_MIXER>, <&clock CLK_SCLK_MIXER>; 230 212 }; 231 213 232 214 ppmu_lcd1: ppmu_lcd1@12240000 {
+4 -1
arch/arm/boot/dts/exynos4212.dtsi
··· 26 26 #address-cells = <1>; 27 27 #size-cells = <0>; 28 28 29 - cpu@A00 { 29 + cpu0: cpu@A00 { 30 30 device_type = "cpu"; 31 31 compatible = "arm,cortex-a9"; 32 32 reg = <0xA00>; 33 + cooling-min-level = <13>; 34 + cooling-max-level = <7>; 35 + #cooling-cells = <2>; /* min followed by max */ 33 36 }; 34 37 35 38 cpu@A01 {
+64
arch/arm/boot/dts/exynos4412-odroid-common.dtsi
··· 249 249 regulator-always-on; 250 250 }; 251 251 252 + ldo8_reg: ldo@8 { 253 + regulator-compatible = "LDO8"; 254 + regulator-name = "VDD10_HDMI_1.0V"; 255 + regulator-min-microvolt = <1000000>; 256 + regulator-max-microvolt = <1000000>; 257 + }; 258 + 259 + ldo10_reg: ldo@10 { 260 + regulator-compatible = "LDO10"; 261 + regulator-name = "VDDQ_MIPIHSI_1.8V"; 262 + regulator-min-microvolt = <1800000>; 263 + regulator-max-microvolt = <1800000>; 264 + }; 265 + 252 266 ldo11_reg: LDO11 { 253 267 regulator-name = "VDD18_ABB1_1.8V"; 254 268 regulator-min-microvolt = <1800000>; ··· 425 411 ehci: ehci@12580000 { 426 412 status = "okay"; 427 413 }; 414 + 415 + tmu@100C0000 { 416 + vtmu-supply = <&ldo10_reg>; 417 + status = "okay"; 418 + }; 419 + 420 + thermal-zones { 421 + cpu_thermal: cpu-thermal { 422 + cooling-maps { 423 + map0 { 424 + /* Corresponds to 800MHz at freq_table */ 425 + cooling-device = <&cpu0 7 7>; 426 + }; 427 + map1 { 428 + /* Corresponds to 200MHz at freq_table */ 429 + cooling-device = <&cpu0 13 13>; 430 + }; 431 + }; 432 + }; 433 + }; 434 + 435 + mixer: mixer@12C10000 { 436 + status = "okay"; 437 + }; 438 + 439 + hdmi@12D00000 { 440 + hpd-gpio = <&gpx3 7 0>; 441 + pinctrl-names = "default"; 442 + pinctrl-0 = <&hdmi_hpd>; 443 + vdd-supply = <&ldo8_reg>; 444 + vdd_osc-supply = <&ldo10_reg>; 445 + vdd_pll-supply = <&ldo8_reg>; 446 + ddc = <&hdmi_ddc>; 447 + status = "okay"; 448 + }; 449 + 450 + hdmi_ddc: i2c@13880000 { 451 + status = "okay"; 452 + pinctrl-names = "default"; 453 + pinctrl-0 = <&i2c2_bus>; 454 + }; 455 + 456 + i2c@138E0000 { 457 + status = "okay"; 458 + }; 428 459 }; 429 460 430 461 &pinctrl_1 { ··· 483 424 samsung,pin-function = <0>; 484 425 samsung,pin-pud = <0>; 485 426 samsung,pin-drv = <0>; 427 + }; 428 + 429 + hdmi_hpd: hdmi-hpd { 430 + samsung,pins = "gpx3-7"; 431 + samsung,pin-pud = <1>; 486 432 }; 487 433 };
+24
arch/arm/boot/dts/exynos4412-tmu-sensor-conf.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos4412 TMU sensor configuration 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal_exynos.h> 13 + 14 + #thermal-sensor-cells = <0>; 15 + samsung,tmu_gain = <8>; 16 + samsung,tmu_reference_voltage = <16>; 17 + samsung,tmu_noise_cancel_mode = <4>; 18 + samsung,tmu_efuse_value = <55>; 19 + samsung,tmu_min_efuse_value = <40>; 20 + samsung,tmu_max_efuse_value = <100>; 21 + samsung,tmu_first_point_trim = <25>; 22 + samsung,tmu_second_point_trim = <85>; 23 + samsung,tmu_default_temp_offset = <50>; 24 + samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+15
arch/arm/boot/dts/exynos4412-trats2.dts
··· 927 927 pulldown-ohm = <100000>; /* 100K */ 928 928 io-channels = <&adc 2>; /* Battery temperature */ 929 929 }; 930 + 931 + thermal-zones { 932 + cpu_thermal: cpu-thermal { 933 + cooling-maps { 934 + map0 { 935 + /* Corresponds to 800MHz at freq_table */ 936 + cooling-device = <&cpu0 7 7>; 937 + }; 938 + map1 { 939 + /* Corresponds to 200MHz at freq_table */ 940 + cooling-device = <&cpu0 13 13>; 941 + }; 942 + }; 943 + }; 944 + }; 930 945 }; 931 946 932 947 &pmu_system_controller {
+4 -1
arch/arm/boot/dts/exynos4412.dtsi
··· 26 26 #address-cells = <1>; 27 27 #size-cells = <0>; 28 28 29 - cpu@A00 { 29 + cpu0: cpu@A00 { 30 30 device_type = "cpu"; 31 31 compatible = "arm,cortex-a9"; 32 32 reg = <0xA00>; 33 + cooling-min-level = <13>; 34 + cooling-max-level = <7>; 35 + #cooling-cells = <2>; /* min followed by max */ 33 36 }; 34 37 35 38 cpu@A01 {
+12
arch/arm/boot/dts/exynos4x12.dtsi
··· 19 19 20 20 #include "exynos4.dtsi" 21 21 #include "exynos4x12-pinctrl.dtsi" 22 + #include "exynos4-cpu-thermal.dtsi" 22 23 23 24 / { 24 25 aliases { ··· 297 296 clocks = <&clock 383>; 298 297 clock-names = "tmu_apbif"; 299 298 status = "disabled"; 299 + }; 300 + 301 + hdmi: hdmi@12D00000 { 302 + compatible = "samsung,exynos4212-hdmi"; 303 + }; 304 + 305 + mixer: mixer@12C10000 { 306 + compatible = "samsung,exynos4212-mixer"; 307 + clock-names = "mixer", "hdmi", "sclk_hdmi", "vp"; 308 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 309 + <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>; 300 310 }; 301 311 };
+39 -5
arch/arm/boot/dts/exynos5250.dtsi
··· 20 20 #include <dt-bindings/clock/exynos5250.h> 21 21 #include "exynos5.dtsi" 22 22 #include "exynos5250-pinctrl.dtsi" 23 - 23 + #include "exynos4-cpu-thermal.dtsi" 24 24 #include <dt-bindings/clock/exynos-audss-clk.h> 25 25 26 26 / { ··· 58 58 #address-cells = <1>; 59 59 #size-cells = <0>; 60 60 61 - cpu@0 { 61 + cpu0: cpu@0 { 62 62 device_type = "cpu"; 63 63 compatible = "arm,cortex-a15"; 64 64 reg = <0>; 65 65 clock-frequency = <1700000000>; 66 + cooling-min-level = <15>; 67 + cooling-max-level = <9>; 68 + #cooling-cells = <2>; /* min followed by max */ 66 69 }; 67 70 cpu@1 { 68 71 device_type = "cpu"; ··· 102 99 pd_mfc: mfc-power-domain@10044040 { 103 100 compatible = "samsung,exynos4210-pd"; 104 101 reg = <0x10044040 0x20>; 102 + #power-domain-cells = <0>; 103 + }; 104 + 105 + pd_disp1: disp1-power-domain@100440A0 { 106 + compatible = "samsung,exynos4210-pd"; 107 + reg = <0x100440A0 0x20>; 105 108 #power-domain-cells = <0>; 106 109 }; 107 110 ··· 244 235 status = "disabled"; 245 236 }; 246 237 247 - tmu@10060000 { 238 + tmu: tmu@10060000 { 248 239 compatible = "samsung,exynos5250-tmu"; 249 240 reg = <0x10060000 0x100>; 250 241 interrupts = <0 65 0>; 251 242 clocks = <&clock CLK_TMU>; 252 243 clock-names = "tmu_apbif"; 244 + #include "exynos4412-tmu-sensor-conf.dtsi" 245 + }; 246 + 247 + thermal-zones { 248 + cpu_thermal: cpu-thermal { 249 + polling-delay-passive = <0>; 250 + polling-delay = <0>; 251 + thermal-sensors = <&tmu 0>; 252 + 253 + cooling-maps { 254 + map0 { 255 + /* Corresponds to 800MHz at freq_table */ 256 + cooling-device = <&cpu0 9 9>; 257 + }; 258 + map1 { 259 + /* Corresponds to 200MHz at freq_table */ 260 + cooling-device = <&cpu0 15 15>; 261 + }; 262 + }; 263 + }; 253 264 }; 254 265 255 266 serial@12C00000 { ··· 748 719 hdmi: hdmi { 749 720 compatible = "samsung,exynos4212-hdmi"; 750 721 reg = <0x14530000 0x70000>; 722 + power-domains = <&pd_disp1>; 751 723 interrupts = <0 95 0>; 752 724 clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>, 753 725 <&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>, ··· 761 731 mixer { 762 732 compatible = "samsung,exynos5250-mixer"; 763 733 reg = <0x14450000 0x10000>; 734 + power-domains = <&pd_disp1>; 764 735 interrupts = <0 94 0>; 765 - clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>; 766 - clock-names = "mixer", "sclk_hdmi"; 736 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 737 + <&clock CLK_SCLK_HDMI>; 738 + clock-names = "mixer", "hdmi", "sclk_hdmi"; 767 739 }; 768 740 769 741 dp_phy: video-phy@10040720 { ··· 775 743 }; 776 744 777 745 dp: dp-controller@145B0000 { 746 + power-domains = <&pd_disp1>; 778 747 clocks = <&clock CLK_DP>; 779 748 clock-names = "dp"; 780 749 phys = <&dp_phy>; ··· 783 750 }; 784 751 785 752 fimd: fimd@14400000 { 753 + power-domains = <&pd_disp1>; 786 754 clocks = <&clock CLK_SCLK_FIMD1>, <&clock CLK_FIMD1>; 787 755 clock-names = "sclk_fimd", "fimd"; 788 756 };
+35
arch/arm/boot/dts/exynos5420-trip-points.dtsi
··· 1 + /* 2 + * Device tree sources for default Exynos5420 thermal zone definition 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + polling-delay-passive = <0>; 13 + polling-delay = <0>; 14 + trips { 15 + cpu-alert-0 { 16 + temperature = <85000>; /* millicelsius */ 17 + hysteresis = <10000>; /* millicelsius */ 18 + type = "active"; 19 + }; 20 + cpu-alert-1 { 21 + temperature = <103000>; /* millicelsius */ 22 + hysteresis = <10000>; /* millicelsius */ 23 + type = "active"; 24 + }; 25 + cpu-alert-2 { 26 + temperature = <110000>; /* millicelsius */ 27 + hysteresis = <10000>; /* millicelsius */ 28 + type = "active"; 29 + }; 30 + cpu-crit-0 { 31 + temperature = <1200000>; /* millicelsius */ 32 + hysteresis = <0>; /* millicelsius */ 33 + type = "critical"; 34 + }; 35 + };
+31 -2
arch/arm/boot/dts/exynos5420.dtsi
··· 740 740 compatible = "samsung,exynos5420-mixer"; 741 741 reg = <0x14450000 0x10000>; 742 742 interrupts = <0 94 0>; 743 - clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>; 744 - clock-names = "mixer", "sclk_hdmi"; 743 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 744 + <&clock CLK_SCLK_HDMI>; 745 + clock-names = "mixer", "hdmi", "sclk_hdmi"; 745 746 power-domains = <&disp_pd>; 746 747 }; 747 748 ··· 783 782 interrupts = <0 65 0>; 784 783 clocks = <&clock CLK_TMU>; 785 784 clock-names = "tmu_apbif"; 785 + #include "exynos4412-tmu-sensor-conf.dtsi" 786 786 }; 787 787 788 788 tmu_cpu1: tmu@10064000 { ··· 792 790 interrupts = <0 183 0>; 793 791 clocks = <&clock CLK_TMU>; 794 792 clock-names = "tmu_apbif"; 793 + #include "exynos4412-tmu-sensor-conf.dtsi" 795 794 }; 796 795 797 796 tmu_cpu2: tmu@10068000 { ··· 801 798 interrupts = <0 184 0>; 802 799 clocks = <&clock CLK_TMU>, <&clock CLK_TMU>; 803 800 clock-names = "tmu_apbif", "tmu_triminfo_apbif"; 801 + #include "exynos4412-tmu-sensor-conf.dtsi" 804 802 }; 805 803 806 804 tmu_cpu3: tmu@1006c000 { ··· 810 806 interrupts = <0 185 0>; 811 807 clocks = <&clock CLK_TMU>, <&clock CLK_TMU_GPU>; 812 808 clock-names = "tmu_apbif", "tmu_triminfo_apbif"; 809 + #include "exynos4412-tmu-sensor-conf.dtsi" 813 810 }; 814 811 815 812 tmu_gpu: tmu@100a0000 { ··· 819 814 interrupts = <0 215 0>; 820 815 clocks = <&clock CLK_TMU_GPU>, <&clock CLK_TMU>; 821 816 clock-names = "tmu_apbif", "tmu_triminfo_apbif"; 817 + #include "exynos4412-tmu-sensor-conf.dtsi" 818 + }; 819 + 820 + thermal-zones { 821 + cpu0_thermal: cpu0-thermal { 822 + thermal-sensors = <&tmu_cpu0>; 823 + #include "exynos5420-trip-points.dtsi" 824 + }; 825 + cpu1_thermal: cpu1-thermal { 826 + thermal-sensors = <&tmu_cpu1>; 827 + #include "exynos5420-trip-points.dtsi" 828 + }; 829 + cpu2_thermal: cpu2-thermal { 830 + thermal-sensors = <&tmu_cpu2>; 831 + #include "exynos5420-trip-points.dtsi" 832 + }; 833 + cpu3_thermal: cpu3-thermal { 834 + thermal-sensors = <&tmu_cpu3>; 835 + #include "exynos5420-trip-points.dtsi" 836 + }; 837 + gpu_thermal: gpu-thermal { 838 + thermal-sensors = <&tmu_gpu>; 839 + #include "exynos5420-trip-points.dtsi" 840 + }; 822 841 }; 823 842 824 843 watchdog: watchdog@101D0000 {
+24
arch/arm/boot/dts/exynos5440-tmu-sensor-conf.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos5440 TMU sensor configuration 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal_exynos.h> 13 + 14 + #thermal-sensor-cells = <0>; 15 + samsung,tmu_gain = <5>; 16 + samsung,tmu_reference_voltage = <16>; 17 + samsung,tmu_noise_cancel_mode = <4>; 18 + samsung,tmu_efuse_value = <0x5d2d>; 19 + samsung,tmu_min_efuse_value = <16>; 20 + samsung,tmu_max_efuse_value = <76>; 21 + samsung,tmu_first_point_trim = <25>; 22 + samsung,tmu_second_point_trim = <70>; 23 + samsung,tmu_default_temp_offset = <25>; 24 + samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+25
arch/arm/boot/dts/exynos5440-trip-points.dtsi
··· 1 + /* 2 + * Device tree sources for default Exynos5440 thermal zone definition 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + polling-delay-passive = <0>; 13 + polling-delay = <0>; 14 + trips { 15 + cpu-alert-0 { 16 + temperature = <100000>; /* millicelsius */ 17 + hysteresis = <0>; /* millicelsius */ 18 + type = "active"; 19 + }; 20 + cpu-crit-0 { 21 + temperature = <1050000>; /* millicelsius */ 22 + hysteresis = <0>; /* millicelsius */ 23 + type = "critical"; 24 + }; 25 + };
+18
arch/arm/boot/dts/exynos5440.dtsi
··· 219 219 interrupts = <0 58 0>; 220 220 clocks = <&clock CLK_B_125>; 221 221 clock-names = "tmu_apbif"; 222 + #include "exynos5440-tmu-sensor-conf.dtsi" 222 223 }; 223 224 224 225 tmuctrl_1: tmuctrl@16011C { ··· 228 227 interrupts = <0 58 0>; 229 228 clocks = <&clock CLK_B_125>; 230 229 clock-names = "tmu_apbif"; 230 + #include "exynos5440-tmu-sensor-conf.dtsi" 231 231 }; 232 232 233 233 tmuctrl_2: tmuctrl@160120 { ··· 237 235 interrupts = <0 58 0>; 238 236 clocks = <&clock CLK_B_125>; 239 237 clock-names = "tmu_apbif"; 238 + #include "exynos5440-tmu-sensor-conf.dtsi" 239 + }; 240 + 241 + thermal-zones { 242 + cpu0_thermal: cpu0-thermal { 243 + thermal-sensors = <&tmuctrl_0>; 244 + #include "exynos5440-trip-points.dtsi" 245 + }; 246 + cpu1_thermal: cpu1-thermal { 247 + thermal-sensors = <&tmuctrl_1>; 248 + #include "exynos5440-trip-points.dtsi" 249 + }; 250 + cpu2_thermal: cpu2-thermal { 251 + thermal-sensors = <&tmuctrl_2>; 252 + #include "exynos5440-trip-points.dtsi" 253 + }; 240 254 }; 241 255 242 256 sata@210000 {
+2
arch/arm/boot/dts/imx6qdl-sabresd.dtsi
··· 35 35 regulator-max-microvolt = <5000000>; 36 36 gpio = <&gpio3 22 0>; 37 37 enable-active-high; 38 + vin-supply = <&swbst_reg>; 38 39 }; 39 40 40 41 reg_usb_h1_vbus: regulator@1 { ··· 46 45 regulator-max-microvolt = <5000000>; 47 46 gpio = <&gpio1 29 0>; 48 47 enable-active-high; 48 + vin-supply = <&swbst_reg>; 49 49 }; 50 50 51 51 reg_audio: regulator@2 {
+2
arch/arm/boot/dts/imx6sl-evk.dts
··· 52 52 regulator-max-microvolt = <5000000>; 53 53 gpio = <&gpio4 0 0>; 54 54 enable-active-high; 55 + vin-supply = <&swbst_reg>; 55 56 }; 56 57 57 58 reg_usb_otg2_vbus: regulator@1 { ··· 63 62 regulator-max-microvolt = <5000000>; 64 63 gpio = <&gpio4 2 0>; 65 64 enable-active-high; 65 + vin-supply = <&swbst_reg>; 66 66 }; 67 67 68 68 reg_aud3v: regulator@2 {
+1 -1
arch/arm/boot/dts/omap5-core-thermal.dtsi
··· 13 13 14 14 core_thermal: core_thermal { 15 15 polling-delay-passive = <250>; /* milliseconds */ 16 - polling-delay = <1000>; /* milliseconds */ 16 + polling-delay = <500>; /* milliseconds */ 17 17 18 18 /* sensor ID */ 19 19 thermal-sensors = <&bandgap 2>;
+1 -1
arch/arm/boot/dts/omap5-gpu-thermal.dtsi
··· 13 13 14 14 gpu_thermal: gpu_thermal { 15 15 polling-delay-passive = <250>; /* milliseconds */ 16 - polling-delay = <1000>; /* milliseconds */ 16 + polling-delay = <500>; /* milliseconds */ 17 17 18 18 /* sensor ID */ 19 19 thermal-sensors = <&bandgap 1>;
+4
arch/arm/boot/dts/omap5.dtsi
··· 1079 1079 }; 1080 1080 }; 1081 1081 1082 + &cpu_thermal { 1083 + polling-delay = <500>; /* milliseconds */ 1084 + }; 1085 + 1082 1086 /include/ "omap54xx-clocks.dtsi"
+37 -4
arch/arm/boot/dts/omap54xx-clocks.dtsi
··· 167 167 ti,index-starts-at-one; 168 168 }; 169 169 170 + dpll_core_byp_mux: dpll_core_byp_mux { 171 + #clock-cells = <0>; 172 + compatible = "ti,mux-clock"; 173 + clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>; 174 + ti,bit-shift = <23>; 175 + reg = <0x012c>; 176 + }; 177 + 170 178 dpll_core_ck: dpll_core_ck { 171 179 #clock-cells = <0>; 172 180 compatible = "ti,omap4-dpll-core-clock"; 173 - clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>; 181 + clocks = <&sys_clkin>, <&dpll_core_byp_mux>; 174 182 reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>; 175 183 }; 176 184 ··· 302 294 clock-div = <1>; 303 295 }; 304 296 297 + dpll_iva_byp_mux: dpll_iva_byp_mux { 298 + #clock-cells = <0>; 299 + compatible = "ti,mux-clock"; 300 + clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>; 301 + ti,bit-shift = <23>; 302 + reg = <0x01ac>; 303 + }; 304 + 305 305 dpll_iva_ck: dpll_iva_ck { 306 306 #clock-cells = <0>; 307 307 compatible = "ti,omap4-dpll-clock"; 308 - clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>; 308 + clocks = <&sys_clkin>, <&dpll_iva_byp_mux>; 309 309 reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>; 310 310 }; 311 311 ··· 615 599 }; 616 600 }; 617 601 &cm_core_clocks { 602 + 603 + dpll_per_byp_mux: dpll_per_byp_mux { 604 + #clock-cells = <0>; 605 + compatible = "ti,mux-clock"; 606 + clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>; 607 + ti,bit-shift = <23>; 608 + reg = <0x014c>; 609 + }; 610 + 618 611 dpll_per_ck: dpll_per_ck { 619 612 #clock-cells = <0>; 620 613 compatible = "ti,omap4-dpll-clock"; 621 - clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>; 614 + clocks = <&sys_clkin>, <&dpll_per_byp_mux>; 622 615 reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>; 623 616 }; 624 617 ··· 739 714 ti,index-starts-at-one; 740 715 }; 741 716 717 + dpll_usb_byp_mux: dpll_usb_byp_mux { 718 + #clock-cells = <0>; 719 + compatible = "ti,mux-clock"; 720 + clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>; 721 + ti,bit-shift = <23>; 722 + reg = <0x018c>; 723 + }; 724 + 742 725 dpll_usb_ck: dpll_usb_ck { 743 726 #clock-cells = <0>; 744 727 compatible = "ti,omap4-dpll-j-type-clock"; 745 - clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>; 728 + clocks = <&sys_clkin>, <&dpll_usb_byp_mux>; 746 729 reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>; 747 730 }; 748 731
+1 -2
arch/arm/boot/dts/sama5d3.dtsi
··· 1248 1248 atmel,watchdog-type = "hardware"; 1249 1249 atmel,reset-type = "all"; 1250 1250 atmel,dbg-halt; 1251 - atmel,idle-halt; 1252 1251 status = "disabled"; 1253 1252 }; 1254 1253 ··· 1415 1416 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1416 1417 reg = <0x00700000 0x100000>; 1417 1418 interrupts = <32 IRQ_TYPE_LEVEL_HIGH 2>; 1418 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 1419 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 1419 1420 clock-names = "usb_clk", "ehci_clk", "uhpck"; 1420 1421 status = "disabled"; 1421 1422 };
+5 -4
arch/arm/boot/dts/sama5d4.dtsi
··· 66 66 gpio4 = &pioE; 67 67 tcb0 = &tcb0; 68 68 tcb1 = &tcb1; 69 + i2c0 = &i2c0; 69 70 i2c2 = &i2c2; 70 71 }; 71 72 cpus { ··· 260 259 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 261 260 reg = <0x00600000 0x100000>; 262 261 interrupts = <46 IRQ_TYPE_LEVEL_HIGH 2>; 263 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 262 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 264 263 clock-names = "usb_clk", "ehci_clk", "uhpck"; 265 264 status = "disabled"; 266 265 }; ··· 462 461 463 462 lcdck: lcdck { 464 463 #clock-cells = <0>; 465 - reg = <4>; 466 - clocks = <&smd>; 464 + reg = <3>; 465 + clocks = <&mck>; 467 466 }; 468 467 469 468 smdck: smdck { ··· 771 770 reg = <50>; 772 771 }; 773 772 774 - lcd_clk: lcd_clk { 773 + lcdc_clk: lcdc_clk { 775 774 #clock-cells = <0>; 776 775 reg = <51>; 777 776 };
+6
arch/arm/boot/dts/socfpga.dtsi
··· 713 713 reg-shift = <2>; 714 714 reg-io-width = <4>; 715 715 clocks = <&l4_sp_clk>; 716 + dmas = <&pdma 28>, 717 + <&pdma 29>; 718 + dma-names = "tx", "rx"; 716 719 }; 717 720 718 721 uart1: serial1@ffc03000 { ··· 725 722 reg-shift = <2>; 726 723 reg-io-width = <4>; 727 724 clocks = <&l4_sp_clk>; 725 + dmas = <&pdma 30>, 726 + <&pdma 31>; 727 + dma-names = "tx", "rx"; 728 728 }; 729 729 730 730 rst: rstmgr@ffd05000 {
+1
arch/arm/configs/at91_dt_defconfig
··· 70 70 CONFIG_BLK_DEV_SD=y 71 71 # CONFIG_SCSI_LOWLEVEL is not set 72 72 CONFIG_NETDEVICES=y 73 + CONFIG_ARM_AT91_ETHER=y 73 74 CONFIG_MACB=y 74 75 # CONFIG_NET_VENDOR_BROADCOM is not set 75 76 CONFIG_DM9000=y
+1 -1
arch/arm/configs/multi_v7_defconfig
··· 99 99 CONFIG_PCI_RCAR_GEN2_PCIE=y 100 100 CONFIG_PCIEPORTBUS=y 101 101 CONFIG_SMP=y 102 - CONFIG_NR_CPUS=8 102 + CONFIG_NR_CPUS=16 103 103 CONFIG_HIGHPTE=y 104 104 CONFIG_CMA=y 105 105 CONFIG_ARM_APPENDED_DTB=y
+1
arch/arm/configs/omap2plus_defconfig
··· 377 377 CONFIG_PWM_TWL_LED=m 378 378 CONFIG_OMAP_USB2=m 379 379 CONFIG_TI_PIPE3=y 380 + CONFIG_TWL4030_USB=m 380 381 CONFIG_EXT2_FS=y 381 382 CONFIG_EXT3_FS=y 382 383 # CONFIG_EXT3_FS_XATTR is not set
-2
arch/arm/configs/sama5_defconfig
··· 3 3 CONFIG_SYSVIPC=y 4 4 CONFIG_IRQ_DOMAIN_DEBUG=y 5 5 CONFIG_LOG_BUF_SHIFT=14 6 - CONFIG_SYSFS_DEPRECATED=y 7 - CONFIG_SYSFS_DEPRECATED_V2=y 8 6 CONFIG_BLK_DEV_INITRD=y 9 7 CONFIG_EMBEDDED=y 10 8 CONFIG_SLAB=y
+1
arch/arm/configs/sunxi_defconfig
··· 4 4 CONFIG_PERF_EVENTS=y 5 5 CONFIG_ARCH_SUNXI=y 6 6 CONFIG_SMP=y 7 + CONFIG_NR_CPUS=8 7 8 CONFIG_AEABI=y 8 9 CONFIG_HIGHMEM=y 9 10 CONFIG_HIGHPTE=y
+1 -1
arch/arm/configs/vexpress_defconfig
··· 118 118 CONFIG_USB=y 119 119 CONFIG_USB_ANNOUNCE_NEW_DEVICES=y 120 120 CONFIG_USB_MON=y 121 - CONFIG_USB_ISP1760_HCD=y 122 121 CONFIG_USB_STORAGE=y 122 + CONFIG_USB_ISP1760=y 123 123 CONFIG_MMC=y 124 124 CONFIG_MMC_ARMMMCI=y 125 125 CONFIG_NEW_LEDS=y
+8 -4
arch/arm/crypto/aesbs-core.S_shipped
··· 58 58 # define VFP_ABI_FRAME 0 59 59 # define BSAES_ASM_EXTENDED_KEY 60 60 # define XTS_CHAIN_TWEAK 61 - # define __ARM_ARCH__ 7 61 + # define __ARM_ARCH__ __LINUX_ARM_ARCH__ 62 + # define __ARM_MAX_ARCH__ 7 62 63 #endif 63 64 64 65 #ifdef __thumb__ 65 66 # define adrl adr 66 67 #endif 67 68 68 - #if __ARM_ARCH__>=7 69 + #if __ARM_MAX_ARCH__>=7 70 + .arch armv7-a 71 + .fpu neon 72 + 69 73 .text 70 74 .syntax unified @ ARMv7-capable assembler is expected to handle this 71 75 #ifdef __thumb2__ ··· 77 73 #else 78 74 .code 32 79 75 #endif 80 - 81 - .fpu neon 82 76 83 77 .type _bsaes_decrypt8,%function 84 78 .align 4 ··· 2097 2095 vld1.8 {q8}, [r0] @ initial tweak 2098 2096 adr r2, .Lxts_magic 2099 2097 2098 + #ifndef XTS_CHAIN_TWEAK 2100 2099 tst r9, #0xf @ if not multiple of 16 2101 2100 it ne @ Thumb2 thing, sanity check in ARM 2102 2101 subne r9, #0x10 @ subtract another 16 bytes 2102 + #endif 2103 2103 subs r9, #0x80 2104 2104 2105 2105 blo .Lxts_dec_short
+8 -4
arch/arm/crypto/bsaes-armv7.pl
··· 701 701 # define VFP_ABI_FRAME 0 702 702 # define BSAES_ASM_EXTENDED_KEY 703 703 # define XTS_CHAIN_TWEAK 704 - # define __ARM_ARCH__ 7 704 + # define __ARM_ARCH__ __LINUX_ARM_ARCH__ 705 + # define __ARM_MAX_ARCH__ 7 705 706 #endif 706 707 707 708 #ifdef __thumb__ 708 709 # define adrl adr 709 710 #endif 710 711 711 - #if __ARM_ARCH__>=7 712 + #if __ARM_MAX_ARCH__>=7 713 + .arch armv7-a 714 + .fpu neon 715 + 712 716 .text 713 717 .syntax unified @ ARMv7-capable assembler is expected to handle this 714 718 #ifdef __thumb2__ ··· 720 716 #else 721 717 .code 32 722 718 #endif 723 - 724 - .fpu neon 725 719 726 720 .type _bsaes_decrypt8,%function 727 721 .align 4 ··· 2078 2076 vld1.8 {@XMM[8]}, [r0] @ initial tweak 2079 2077 adr $magic, .Lxts_magic 2080 2078 2079 + #ifndef XTS_CHAIN_TWEAK 2081 2080 tst $len, #0xf @ if not multiple of 16 2082 2081 it ne @ Thumb2 thing, sanity check in ARM 2083 2082 subne $len, #0x10 @ subtract another 16 bytes 2083 + #endif 2084 2084 subs $len, #0x80 2085 2085 2086 2086 blo .Lxts_dec_short
+7 -8
arch/arm/include/asm/kvm_mmu.h
··· 149 149 (__boundary - 1 < (end) - 1)? __boundary: (end); \ 150 150 }) 151 151 152 + #define kvm_pgd_index(addr) pgd_index(addr) 153 + 152 154 static inline bool kvm_page_empty(void *ptr) 153 155 { 154 156 struct page *ptr_page = virt_to_page(ptr); 155 157 return page_count(ptr_page) == 1; 156 158 } 157 - 158 159 159 160 #define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep) 160 161 #define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp) ··· 163 162 164 163 #define KVM_PREALLOC_LEVEL 0 165 164 166 - static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) 167 - { 168 - return 0; 169 - } 170 - 171 - static inline void kvm_free_hwpgd(struct kvm *kvm) { } 172 - 173 165 static inline void *kvm_get_hwpgd(struct kvm *kvm) 174 166 { 175 167 return kvm->arch.pgd; 168 + } 169 + 170 + static inline unsigned int kvm_get_hwpgd_size(void) 171 + { 172 + return PTRS_PER_S2_PGD * sizeof(pgd_t); 176 173 } 177 174 178 175 struct kvm;
+4 -1
arch/arm/include/debug/at91.S
··· 18 18 #define AT91_DBGU 0xfc00c000 /* SAMA5D4_BASE_USART3 */ 19 19 #endif 20 20 21 - /* Keep in sync with mach-at91/include/mach/hardware.h */ 21 + #ifdef CONFIG_MMU 22 22 #define AT91_IO_P2V(x) ((x) - 0x01000000) 23 + #else 24 + #define AT91_IO_P2V(x) (x) 25 + #endif 23 26 24 27 #define AT91_DBGU_SR (0x14) /* Status Register */ 25 28 #define AT91_DBGU_THR (0x1c) /* Transmitter Holding Register */
+53 -22
arch/arm/kvm/mmu.c
··· 290 290 phys_addr_t addr = start, end = start + size; 291 291 phys_addr_t next; 292 292 293 - pgd = pgdp + pgd_index(addr); 293 + pgd = pgdp + kvm_pgd_index(addr); 294 294 do { 295 295 next = kvm_pgd_addr_end(addr, end); 296 296 if (!pgd_none(*pgd)) ··· 355 355 phys_addr_t next; 356 356 pgd_t *pgd; 357 357 358 - pgd = kvm->arch.pgd + pgd_index(addr); 358 + pgd = kvm->arch.pgd + kvm_pgd_index(addr); 359 359 do { 360 360 next = kvm_pgd_addr_end(addr, end); 361 361 stage2_flush_puds(kvm, pgd, addr, next); ··· 632 632 __phys_to_pfn(phys_addr), PAGE_HYP_DEVICE); 633 633 } 634 634 635 + /* Free the HW pgd, one page at a time */ 636 + static void kvm_free_hwpgd(void *hwpgd) 637 + { 638 + free_pages_exact(hwpgd, kvm_get_hwpgd_size()); 639 + } 640 + 641 + /* Allocate the HW PGD, making sure that each page gets its own refcount */ 642 + static void *kvm_alloc_hwpgd(void) 643 + { 644 + unsigned int size = kvm_get_hwpgd_size(); 645 + 646 + return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO); 647 + } 648 + 635 649 /** 636 650 * kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation. 637 651 * @kvm: The KVM struct pointer for the VM. ··· 659 645 */ 660 646 int kvm_alloc_stage2_pgd(struct kvm *kvm) 661 647 { 662 - int ret; 663 648 pgd_t *pgd; 649 + void *hwpgd; 664 650 665 651 if (kvm->arch.pgd != NULL) { 666 652 kvm_err("kvm_arch already initialized?\n"); 667 653 return -EINVAL; 668 654 } 669 655 656 + hwpgd = kvm_alloc_hwpgd(); 657 + if (!hwpgd) 658 + return -ENOMEM; 659 + 660 + /* When the kernel uses more levels of page tables than the 661 + * guest, we allocate a fake PGD and pre-populate it to point 662 + * to the next-level page table, which will be the real 663 + * initial page table pointed to by the VTTBR. 664 + * 665 + * When KVM_PREALLOC_LEVEL==2, we allocate a single page for 666 + * the PMD and the kernel will use folded pud. 667 + * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD 668 + * pages. 669 + */ 670 670 if (KVM_PREALLOC_LEVEL > 0) { 671 + int i; 672 + 671 673 /* 672 674 * Allocate fake pgd for the page table manipulation macros to 673 675 * work. This is not used by the hardware and we have no ··· 691 661 */ 692 662 pgd = (pgd_t *)kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t), 693 663 GFP_KERNEL | __GFP_ZERO); 664 + 665 + if (!pgd) { 666 + kvm_free_hwpgd(hwpgd); 667 + return -ENOMEM; 668 + } 669 + 670 + /* Plug the HW PGD into the fake one. */ 671 + for (i = 0; i < PTRS_PER_S2_PGD; i++) { 672 + if (KVM_PREALLOC_LEVEL == 1) 673 + pgd_populate(NULL, pgd + i, 674 + (pud_t *)hwpgd + i * PTRS_PER_PUD); 675 + else if (KVM_PREALLOC_LEVEL == 2) 676 + pud_populate(NULL, pud_offset(pgd, 0) + i, 677 + (pmd_t *)hwpgd + i * PTRS_PER_PMD); 678 + } 694 679 } else { 695 680 /* 696 681 * Allocate actual first-level Stage-2 page table used by the 697 682 * hardware for Stage-2 page table walks. 698 683 */ 699 - pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, S2_PGD_ORDER); 684 + pgd = (pgd_t *)hwpgd; 700 685 } 701 - 702 - if (!pgd) 703 - return -ENOMEM; 704 - 705 - ret = kvm_prealloc_hwpgd(kvm, pgd); 706 - if (ret) 707 - goto out_err; 708 686 709 687 kvm_clean_pgd(pgd); 710 688 kvm->arch.pgd = pgd; 711 689 return 0; 712 - out_err: 713 - if (KVM_PREALLOC_LEVEL > 0) 714 - kfree(pgd); 715 - else 716 - free_pages((unsigned long)pgd, S2_PGD_ORDER); 717 - return ret; 718 690 } 719 691 720 692 /** ··· 817 785 return; 818 786 819 787 unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE); 820 - kvm_free_hwpgd(kvm); 788 + kvm_free_hwpgd(kvm_get_hwpgd(kvm)); 821 789 if (KVM_PREALLOC_LEVEL > 0) 822 790 kfree(kvm->arch.pgd); 823 - else 824 - free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER); 791 + 825 792 kvm->arch.pgd = NULL; 826 793 } 827 794 ··· 830 799 pgd_t *pgd; 831 800 pud_t *pud; 832 801 833 - pgd = kvm->arch.pgd + pgd_index(addr); 802 + pgd = kvm->arch.pgd + kvm_pgd_index(addr); 834 803 if (WARN_ON(pgd_none(*pgd))) { 835 804 if (!cache) 836 805 return NULL; ··· 1120 1089 pgd_t *pgd; 1121 1090 phys_addr_t next; 1122 1091 1123 - pgd = kvm->arch.pgd + pgd_index(addr); 1092 + pgd = kvm->arch.pgd + kvm_pgd_index(addr); 1124 1093 do { 1125 1094 /* 1126 1095 * Release kvm_mmu_lock periodically if the memory region is
+10 -12
arch/arm/mach-at91/pm.c
··· 270 270 phys_addr_t sram_pbase; 271 271 unsigned long sram_base; 272 272 struct device_node *node; 273 - struct platform_device *pdev; 273 + struct platform_device *pdev = NULL; 274 274 275 - node = of_find_compatible_node(NULL, NULL, "mmio-sram"); 276 - if (!node) { 277 - pr_warn("%s: failed to find sram node!\n", __func__); 278 - return; 275 + for_each_compatible_node(node, NULL, "mmio-sram") { 276 + pdev = of_find_device_by_node(node); 277 + if (pdev) { 278 + of_node_put(node); 279 + break; 280 + } 279 281 } 280 282 281 - pdev = of_find_device_by_node(node); 282 283 if (!pdev) { 283 284 pr_warn("%s: failed to find sram device!\n", __func__); 284 - goto put_node; 285 + return; 285 286 } 286 287 287 288 sram_pool = dev_get_gen_pool(&pdev->dev); 288 289 if (!sram_pool) { 289 290 pr_warn("%s: sram pool unavailable!\n", __func__); 290 - goto put_node; 291 + return; 291 292 } 292 293 293 294 sram_base = gen_pool_alloc(sram_pool, at91_slow_clock_sz); 294 295 if (!sram_base) { 295 296 pr_warn("%s: unable to alloc ocram!\n", __func__); 296 - goto put_node; 297 + return; 297 298 } 298 299 299 300 sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base); 300 301 slow_clock = __arm_ioremap_exec(sram_pbase, at91_slow_clock_sz, false); 301 - 302 - put_node: 303 - of_node_put(node); 304 302 } 305 303 #endif 306 304
+1 -1
arch/arm/mach-at91/pm.h
··· 44 44 " mcr p15, 0, %0, c7, c0, 4\n\t" 45 45 " str %5, [%1, %2]" 46 46 : 47 - : "r" (0), "r" (AT91_BASE_SYS), "r" (AT91RM9200_SDRAMC_LPR), 47 + : "r" (0), "r" (at91_ramc_base[0]), "r" (AT91RM9200_SDRAMC_LPR), 48 48 "r" (1), "r" (AT91RM9200_SDRAMC_SRR), 49 49 "r" (lpr)); 50 50 }
+46 -34
arch/arm/mach-at91/pm_slowclock.S
··· 25 25 */ 26 26 #undef SLOWDOWN_MASTER_CLOCK 27 27 28 - #define MCKRDY_TIMEOUT 1000 29 - #define MOSCRDY_TIMEOUT 1000 30 - #define PLLALOCK_TIMEOUT 1000 31 - #define PLLBLOCK_TIMEOUT 1000 32 - 33 28 pmc .req r0 34 29 sdramc .req r1 35 30 ramc1 .req r2 ··· 36 41 * Wait until master clock is ready (after switching master clock source) 37 42 */ 38 43 .macro wait_mckrdy 39 - mov tmp2, #MCKRDY_TIMEOUT 40 - 1: sub tmp2, tmp2, #1 41 - cmp tmp2, #0 42 - beq 2f 43 - ldr tmp1, [pmc, #AT91_PMC_SR] 44 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 44 45 tst tmp1, #AT91_PMC_MCKRDY 45 46 beq 1b 46 - 2: 47 47 .endm 48 48 49 49 /* 50 50 * Wait until master oscillator has stabilized. 51 51 */ 52 52 .macro wait_moscrdy 53 - mov tmp2, #MOSCRDY_TIMEOUT 54 - 1: sub tmp2, tmp2, #1 55 - cmp tmp2, #0 56 - beq 2f 57 - ldr tmp1, [pmc, #AT91_PMC_SR] 53 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 58 54 tst tmp1, #AT91_PMC_MOSCS 59 55 beq 1b 60 - 2: 61 56 .endm 62 57 63 58 /* 64 59 * Wait until PLLA has locked. 65 60 */ 66 61 .macro wait_pllalock 67 - mov tmp2, #PLLALOCK_TIMEOUT 68 - 1: sub tmp2, tmp2, #1 69 - cmp tmp2, #0 70 - beq 2f 71 - ldr tmp1, [pmc, #AT91_PMC_SR] 62 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 72 63 tst tmp1, #AT91_PMC_LOCKA 73 64 beq 1b 74 - 2: 75 65 .endm 76 66 77 67 /* 78 68 * Wait until PLLB has locked. 79 69 */ 80 70 .macro wait_pllblock 81 - mov tmp2, #PLLBLOCK_TIMEOUT 82 - 1: sub tmp2, tmp2, #1 83 - cmp tmp2, #0 84 - beq 2f 85 - ldr tmp1, [pmc, #AT91_PMC_SR] 71 + 1: ldr tmp1, [pmc, #AT91_PMC_SR] 86 72 tst tmp1, #AT91_PMC_LOCKB 87 73 beq 1b 88 - 2: 89 74 .endm 90 75 91 76 .text 77 + 78 + .arm 92 79 93 80 /* void at91_slow_clock(void __iomem *pmc, void __iomem *sdramc, 94 81 * void __iomem *ramc1, int memctrl) ··· 111 134 cmp memctrl, #AT91_MEMCTRL_DDRSDR 112 135 bne sdr_sr_enable 113 136 137 + /* LPDDR1 --> force DDR2 mode during self-refresh */ 138 + ldr tmp1, [sdramc, #AT91_DDRSDRC_MDR] 139 + str tmp1, .saved_sam9_mdr 140 + bic tmp1, tmp1, #~AT91_DDRSDRC_MD 141 + cmp tmp1, #AT91_DDRSDRC_MD_LOW_POWER_DDR 142 + ldreq tmp1, [sdramc, #AT91_DDRSDRC_MDR] 143 + biceq tmp1, tmp1, #AT91_DDRSDRC_MD 144 + orreq tmp1, tmp1, #AT91_DDRSDRC_MD_DDR2 145 + streq tmp1, [sdramc, #AT91_DDRSDRC_MDR] 146 + 114 147 /* prepare for DDRAM self-refresh mode */ 115 148 ldr tmp1, [sdramc, #AT91_DDRSDRC_LPR] 116 149 str tmp1, .saved_sam9_lpr ··· 129 142 130 143 /* figure out if we use the second ram controller */ 131 144 cmp ramc1, #0 132 - ldrne tmp2, [ramc1, #AT91_DDRSDRC_LPR] 133 - strne tmp2, .saved_sam9_lpr1 134 - bicne tmp2, #AT91_DDRSDRC_LPCB 135 - orrne tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH 145 + beq ddr_no_2nd_ctrl 146 + 147 + ldr tmp2, [ramc1, #AT91_DDRSDRC_MDR] 148 + str tmp2, .saved_sam9_mdr1 149 + bic tmp2, tmp2, #~AT91_DDRSDRC_MD 150 + cmp tmp2, #AT91_DDRSDRC_MD_LOW_POWER_DDR 151 + ldreq tmp2, [ramc1, #AT91_DDRSDRC_MDR] 152 + biceq tmp2, tmp2, #AT91_DDRSDRC_MD 153 + orreq tmp2, tmp2, #AT91_DDRSDRC_MD_DDR2 154 + streq tmp2, [ramc1, #AT91_DDRSDRC_MDR] 155 + 156 + ldr tmp2, [ramc1, #AT91_DDRSDRC_LPR] 157 + str tmp2, .saved_sam9_lpr1 158 + bic tmp2, #AT91_DDRSDRC_LPCB 159 + orr tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH 136 160 137 161 /* Enable DDRAM self-refresh mode */ 162 + str tmp2, [ramc1, #AT91_DDRSDRC_LPR] 163 + ddr_no_2nd_ctrl: 138 164 str tmp1, [sdramc, #AT91_DDRSDRC_LPR] 139 - strne tmp2, [ramc1, #AT91_DDRSDRC_LPR] 140 165 141 166 b sdr_sr_done 142 167 ··· 207 208 /* Turn off the main oscillator */ 208 209 ldr tmp1, [pmc, #AT91_CKGR_MOR] 209 210 bic tmp1, tmp1, #AT91_PMC_MOSCEN 211 + orr tmp1, tmp1, #AT91_PMC_KEY 210 212 str tmp1, [pmc, #AT91_CKGR_MOR] 211 213 212 214 /* Wait for interrupt */ ··· 216 216 /* Turn on the main oscillator */ 217 217 ldr tmp1, [pmc, #AT91_CKGR_MOR] 218 218 orr tmp1, tmp1, #AT91_PMC_MOSCEN 219 + orr tmp1, tmp1, #AT91_PMC_KEY 219 220 str tmp1, [pmc, #AT91_CKGR_MOR] 220 221 221 222 wait_moscrdy ··· 281 280 */ 282 281 cmp memctrl, #AT91_MEMCTRL_DDRSDR 283 282 bne sdr_en_restore 283 + /* Restore MDR in case of LPDDR1 */ 284 + ldr tmp1, .saved_sam9_mdr 285 + str tmp1, [sdramc, #AT91_DDRSDRC_MDR] 284 286 /* Restore LPR on AT91 with DDRAM */ 285 287 ldr tmp1, .saved_sam9_lpr 286 288 str tmp1, [sdramc, #AT91_DDRSDRC_LPR] 287 289 288 290 /* if we use the second ram controller */ 289 291 cmp ramc1, #0 292 + ldrne tmp2, .saved_sam9_mdr1 293 + strne tmp2, [ramc1, #AT91_DDRSDRC_MDR] 290 294 ldrne tmp2, .saved_sam9_lpr1 291 295 strne tmp2, [ramc1, #AT91_DDRSDRC_LPR] 292 296 ··· 323 317 .word 0 324 318 325 319 .saved_sam9_lpr1: 320 + .word 0 321 + 322 + .saved_sam9_mdr: 323 + .word 0 324 + 325 + .saved_sam9_mdr1: 326 326 .word 0 327 327 328 328 ENTRY(at91_slow_clock_sz)
+1 -2
arch/arm/mach-exynos/platsmp.c
··· 126 126 */ 127 127 void exynos_cpu_power_down(int cpu) 128 128 { 129 - if (cpu == 0 && (of_machine_is_compatible("samsung,exynos5420") || 130 - of_machine_is_compatible("samsung,exynos5800"))) { 129 + if (cpu == 0 && (soc_is_exynos5420() || soc_is_exynos5800())) { 131 130 /* 132 131 * Bypass power down for CPU0 during suspend. Check for 133 132 * the SYS_PWR_REG value to decide if we are suspending
+28
arch/arm/mach-exynos/pm_domains.c
··· 161 161 of_genpd_add_provider_simple(np, &pd->pd); 162 162 } 163 163 164 + /* Assign the child power domains to their parents */ 165 + for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") { 166 + struct generic_pm_domain *child_domain, *parent_domain; 167 + struct of_phandle_args args; 168 + 169 + args.np = np; 170 + args.args_count = 0; 171 + child_domain = of_genpd_get_from_provider(&args); 172 + if (!child_domain) 173 + continue; 174 + 175 + if (of_parse_phandle_with_args(np, "power-domains", 176 + "#power-domain-cells", 0, &args) != 0) 177 + continue; 178 + 179 + parent_domain = of_genpd_get_from_provider(&args); 180 + if (!parent_domain) 181 + continue; 182 + 183 + if (pm_genpd_add_subdomain(parent_domain, child_domain)) 184 + pr_warn("%s failed to add subdomain: %s\n", 185 + parent_domain->name, child_domain->name); 186 + else 187 + pr_info("%s has as child subdomain: %s.\n", 188 + parent_domain->name, child_domain->name); 189 + of_node_put(np); 190 + } 191 + 164 192 return 0; 165 193 } 166 194 arch_initcall(exynos4_pm_init_power_domain);
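The loop added above resolves each domain's optional parent through the generic power-domain consumer binding (`power-domains` plus `#power-domain-cells`; note `args.args_count = 0`, i.e. zero specifier cells). A minimal sketch of the DT shape it walks — labels and unit addresses here are hypothetical, not taken from a real Exynos dtsi:

```dts
/* Hypothetical fragment: a child samsung,exynos4210-pd domain pointing
 * at its parent via the standard power-domain consumer binding. */
parent_pd: power-domain@10023c00 {
	compatible = "samsung,exynos4210-pd";
	reg = <0x10023c00 0x20>;
	#power-domain-cells = <0>;
};

child_pd: power-domain@10023c20 {
	compatible = "samsung,exynos4210-pd";
	reg = <0x10023c20 0x20>;
	power-domains = <&parent_pd>;
	#power-domain-cells = <0>;
};
```

With this shape, the new loop looks up `child_pd` as a provider, follows its `power-domains` phandle to `parent_pd`, and calls `pm_genpd_add_subdomain()` on the pair.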
+2 -2
arch/arm/mach-exynos/suspend.c
··· 87 87 static u32 exynos_irqwake_intmask = 0xffffffff; 88 88 89 89 static const struct exynos_wkup_irq exynos3250_wkup_irq[] = { 90 - { 73, BIT(1) }, /* RTC alarm */ 91 - { 74, BIT(2) }, /* RTC tick */ 90 + { 105, BIT(1) }, /* RTC alarm */ 91 + { 106, BIT(2) }, /* RTC tick */ 92 92 { /* sentinel */ }, 93 93 }; 94 94
+3 -2
arch/arm/mach-imx/mach-imx6q.c
··· 211 211 * set bit IOMUXC_GPR1[21]. Or the PTP clock must be from pad 212 212 * (external OSC), and we need to clear the bit. 213 213 */ 214 - clksel = ptp_clk == enet_ref ? IMX6Q_GPR1_ENET_CLK_SEL_ANATOP : 215 - IMX6Q_GPR1_ENET_CLK_SEL_PAD; 214 + clksel = clk_is_match(ptp_clk, enet_ref) ? 215 + IMX6Q_GPR1_ENET_CLK_SEL_ANATOP : 216 + IMX6Q_GPR1_ENET_CLK_SEL_PAD; 216 217 gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr"); 217 218 if (!IS_ERR(gpr)) 218 219 regmap_update_bits(gpr, IOMUXC_GPR1,
+5 -5
arch/arm/mach-omap2/omap_hwmod.c
··· 1692 1692 if (ret == -EBUSY) 1693 1693 pr_warn("omap_hwmod: %s: failed to hardreset\n", oh->name); 1694 1694 1695 - if (!ret) { 1695 + if (oh->clkdm) { 1696 1696 /* 1697 1697 * Set the clockdomain to HW_AUTO, assuming that the 1698 1698 * previous state was HW_AUTO. 1699 1699 */ 1700 - if (oh->clkdm && hwsup) 1700 + if (hwsup) 1701 1701 clkdm_allow_idle(oh->clkdm); 1702 - } else { 1703 - if (oh->clkdm) 1704 - clkdm_hwmod_disable(oh->clkdm, oh); 1702 + 1703 + clkdm_hwmod_disable(oh->clkdm, oh); 1705 1704 } 1706 1705 1707 1706 return ret; ··· 2697 2698 INIT_LIST_HEAD(&oh->master_ports); 2698 2699 INIT_LIST_HEAD(&oh->slave_ports); 2699 2700 spin_lock_init(&oh->_lock); 2701 + lockdep_set_class(&oh->_lock, &oh->hwmod_key); 2700 2702 2701 2703 oh->_state = _HWMOD_STATE_REGISTERED; 2702 2704
+1
arch/arm/mach-omap2/omap_hwmod.h
··· 674 674 u32 _sysc_cache; 675 675 void __iomem *_mpu_rt_va; 676 676 spinlock_t _lock; 677 + struct lock_class_key hwmod_key; /* unique lock class */ 677 678 struct list_head node; 678 679 struct omap_hwmod_ocp_if *_mpu_port; 679 680 unsigned int (*xlate_irq)(unsigned int);
+24 -79
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
··· 1466 1466 * 1467 1467 */ 1468 1468 1469 - static struct omap_hwmod_class dra7xx_pcie_hwmod_class = { 1469 + static struct omap_hwmod_class dra7xx_pciess_hwmod_class = { 1470 1470 .name = "pcie", 1471 1471 }; 1472 1472 1473 1473 /* pcie1 */ 1474 - static struct omap_hwmod dra7xx_pcie1_hwmod = { 1474 + static struct omap_hwmod dra7xx_pciess1_hwmod = { 1475 1475 .name = "pcie1", 1476 - .class = &dra7xx_pcie_hwmod_class, 1476 + .class = &dra7xx_pciess_hwmod_class, 1477 1477 .clkdm_name = "pcie_clkdm", 1478 - .main_clk = "l4_root_clk_div", 1479 - .prcm = { 1480 - .omap4 = { 1481 - .clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET, 1482 - .modulemode = MODULEMODE_SWCTRL, 1483 - }, 1484 - }, 1485 - }; 1486 - 1487 - /* pcie2 */ 1488 - static struct omap_hwmod dra7xx_pcie2_hwmod = { 1489 - .name = "pcie2", 1490 - .class = &dra7xx_pcie_hwmod_class, 1491 - .clkdm_name = "pcie_clkdm", 1492 - .main_clk = "l4_root_clk_div", 1493 - .prcm = { 1494 - .omap4 = { 1495 - .clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET, 1496 - .modulemode = MODULEMODE_SWCTRL, 1497 - }, 1498 - }, 1499 - }; 1500 - 1501 - /* 1502 - * 'PCIE PHY' class 1503 - * 1504 - */ 1505 - 1506 - static struct omap_hwmod_class dra7xx_pcie_phy_hwmod_class = { 1507 - .name = "pcie-phy", 1508 - }; 1509 - 1510 - /* pcie1 phy */ 1511 - static struct omap_hwmod dra7xx_pcie1_phy_hwmod = { 1512 - .name = "pcie1-phy", 1513 - .class = &dra7xx_pcie_phy_hwmod_class, 1514 - .clkdm_name = "l3init_clkdm", 1515 1478 .main_clk = "l4_root_clk_div", 1516 1479 .prcm = { 1517 1480 .omap4 = { ··· 1485 1522 }, 1486 1523 }; 1487 1524 1488 - /* pcie2 phy */ 1489 - static struct omap_hwmod dra7xx_pcie2_phy_hwmod = { 1490 - .name = "pcie2-phy", 1491 - .class = &dra7xx_pcie_phy_hwmod_class, 1492 - .clkdm_name = "l3init_clkdm", 1525 + /* pcie2 */ 1526 + static struct omap_hwmod dra7xx_pciess2_hwmod = { 1527 + .name = "pcie2", 1528 + .class = &dra7xx_pciess_hwmod_class, 1529 + .clkdm_name = "pcie_clkdm", 1493 1530 .main_clk = "l4_root_clk_div", 
1494 1531 .prcm = {
1495 1532 .omap4 = {
··· 2840 2877 .user = OCP_USER_MPU | OCP_USER_SDMA,
2841 2878 };
2842 2879 
2843 - /* l3_main_1 -> pcie1 */
2844 - static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie1 = {
2880 + /* l3_main_1 -> pciess1 */
2881 + static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess1 = {
2845 2882 .master = &dra7xx_l3_main_1_hwmod,
2846 - .slave = &dra7xx_pcie1_hwmod,
2883 + .slave = &dra7xx_pciess1_hwmod,
2847 2884 .clk = "l3_iclk_div",
2848 2885 .user = OCP_USER_MPU | OCP_USER_SDMA,
2849 2886 };
2850 2887 
2851 - /* l4_cfg -> pcie1 */
2852 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1 = {
2888 + /* l4_cfg -> pciess1 */
2889 + static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess1 = {
2853 2890 .master = &dra7xx_l4_cfg_hwmod,
2854 - .slave = &dra7xx_pcie1_hwmod,
2891 + .slave = &dra7xx_pciess1_hwmod,
2855 2892 .clk = "l4_root_clk_div",
2856 2893 .user = OCP_USER_MPU | OCP_USER_SDMA,
2857 2894 };
2858 2895 
2859 - /* l3_main_1 -> pcie2 */
2860 - static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie2 = {
2896 + /* l3_main_1 -> pciess2 */
2897 + static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess2 = {
2861 2898 .master = &dra7xx_l3_main_1_hwmod,
2862 - .slave = &dra7xx_pcie2_hwmod,
2899 + .slave = &dra7xx_pciess2_hwmod,
2863 2900 .clk = "l3_iclk_div",
2864 2901 .user = OCP_USER_MPU | OCP_USER_SDMA,
2865 2902 };
2866 2903 
2867 - /* l4_cfg -> pcie2 */
2868 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2 = {
2904 + /* l4_cfg -> pciess2 */
2905 + static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess2 = {
2869 2906 .master = &dra7xx_l4_cfg_hwmod,
2870 - .slave = &dra7xx_pcie2_hwmod,
2871 - .clk = "l4_root_clk_div",
2872 - .user = OCP_USER_MPU | OCP_USER_SDMA,
2873 - };
2874 - 
2875 - /* l4_cfg -> pcie1 phy */
2876 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1_phy = {
2877 - .master = &dra7xx_l4_cfg_hwmod,
2878 - .slave = &dra7xx_pcie1_phy_hwmod,
2879 - .clk = "l4_root_clk_div",
2880 - .user = OCP_USER_MPU | OCP_USER_SDMA,
2881 - };
2882 - 
2883 - /* l4_cfg -> pcie2 phy */
2884 - static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2_phy = {
2885 - .master = &dra7xx_l4_cfg_hwmod,
2886 - .slave = &dra7xx_pcie2_phy_hwmod,
2907 + .slave = &dra7xx_pciess2_hwmod,
2887 2908 .clk = "l4_root_clk_div",
2888 2909 .user = OCP_USER_MPU | OCP_USER_SDMA,
2889 2910 };
··· 3274 3327 &dra7xx_l4_cfg__mpu,
3275 3328 &dra7xx_l4_cfg__ocp2scp1,
3276 3329 &dra7xx_l4_cfg__ocp2scp3,
3277 - &dra7xx_l3_main_1__pcie1,
3278 - &dra7xx_l4_cfg__pcie1,
3279 - &dra7xx_l3_main_1__pcie2,
3280 - &dra7xx_l4_cfg__pcie2,
3281 - &dra7xx_l4_cfg__pcie1_phy,
3282 - &dra7xx_l4_cfg__pcie2_phy,
3330 + &dra7xx_l3_main_1__pciess1,
3331 + &dra7xx_l4_cfg__pciess1,
3332 + &dra7xx_l3_main_1__pciess2,
3333 + &dra7xx_l4_cfg__pciess2,
3283 3334 &dra7xx_l3_main_1__qspi,
3284 3335 &dra7xx_l4_per3__rtcss,
3285 3336 &dra7xx_l4_cfg__sata,
+1
arch/arm/mach-omap2/pdata-quirks.c
··· 173 173 174 174 static void __init omap3_evm_legacy_init(void) 175 175 { 176 + hsmmc2_internal_input_clk(); 176 177 legacy_init_wl12xx(WL12XX_REFCLOCK_38, 0, 149); 177 178 } 178 179
+2 -2
arch/arm/mach-omap2/prm44xx.c
··· 252 252 { 253 253 saved_mask[0] = 254 254 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 255 - OMAP4_PRM_IRQSTATUS_MPU_OFFSET); 255 + OMAP4_PRM_IRQENABLE_MPU_OFFSET); 256 256 saved_mask[1] = 257 257 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 258 - OMAP4_PRM_IRQSTATUS_MPU_2_OFFSET); 258 + OMAP4_PRM_IRQENABLE_MPU_2_OFFSET); 259 259 260 260 omap4_prm_write_inst_reg(0, OMAP4430_PRM_OCP_SOCKET_INST, 261 261 OMAP4_PRM_IRQENABLE_MPU_OFFSET);
+1 -1
arch/arm/mach-socfpga/core.h
··· 45 45 46 46 extern unsigned long socfpga_cpu1start_addr; 47 47 48 - #define SOCFPGA_SCU_VIRT_BASE 0xfffec000 48 + #define SOCFPGA_SCU_VIRT_BASE 0xfee00000 49 49 50 50 #endif
+5
arch/arm/mach-socfpga/socfpga.c
··· 23 23 #include <asm/hardware/cache-l2x0.h> 24 24 #include <asm/mach/arch.h> 25 25 #include <asm/mach/map.h> 26 + #include <asm/cacheflush.h> 26 27 27 28 #include "core.h" 28 29 ··· 73 72 if (of_property_read_u32(np, "cpu1-start-addr", 74 73 (u32 *) &socfpga_cpu1start_addr)) 75 74 pr_err("SMP: Need cpu1-start-addr in device tree.\n"); 75 + 76 + /* Ensure that socfpga_cpu1start_addr is visible to other CPUs */ 77 + smp_wmb(); 78 + sync_cache_w(&socfpga_cpu1start_addr); 76 79 77 80 sys_manager_base_addr = of_iomap(np, 0); 78 81
+1
arch/arm/mach-sti/board-dt.c
··· 18 18 "st,stih415", 19 19 "st,stih416", 20 20 "st,stih407", 21 + "st,stih410", 21 22 "st,stih418", 22 23 NULL 23 24 };
+3 -2
arch/arm64/include/asm/kvm_arm.h
··· 129 129 * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are 130 130 * not known to exist and will break with this configuration. 131 131 * 132 + * VTCR_EL2.PS is extracted from ID_AA64MMFR0_EL1.PARange at boot time 133 + * (see hyp-init.S). 134 + * 132 135 * Note that when using 4K pages, we concatenate two first level page tables 133 136 * together. 134 137 * ··· 141 138 #ifdef CONFIG_ARM64_64K_PAGES 142 139 /* 143 140 * Stage2 translation configuration: 144 - * 40bits output (PS = 2) 145 141 * 40bits input (T0SZ = 24) 146 142 * 64kB pages (TG0 = 1) 147 143 * 2 level page tables (SL = 1) ··· 152 150 #else 153 151 /* 154 152 * Stage2 translation configuration: 155 - * 40bits output (PS = 2) 156 153 * 40bits input (T0SZ = 24) 157 154 * 4kB pages (TG0 = 0) 158 155 * 3 level page tables (SL = 1)
+6 -42
arch/arm64/include/asm/kvm_mmu.h
··· 158 158 #define PTRS_PER_S2_PGD (1 << PTRS_PER_S2_PGD_SHIFT) 159 159 #define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t)) 160 160 161 + #define kvm_pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1)) 162 + 161 163 /* 162 164 * If we are concatenating first level stage-2 page tables, we would have less 163 165 * than or equal to 16 pointers in the fake PGD, because that's what the ··· 172 170 #else 173 171 #define KVM_PREALLOC_LEVEL (0) 174 172 #endif 175 - 176 - /** 177 - * kvm_prealloc_hwpgd - allocate inital table for VTTBR 178 - * @kvm: The KVM struct pointer for the VM. 179 - * @pgd: The kernel pseudo pgd 180 - * 181 - * When the kernel uses more levels of page tables than the guest, we allocate 182 - * a fake PGD and pre-populate it to point to the next-level page table, which 183 - * will be the real initial page table pointed to by the VTTBR. 184 - * 185 - * When KVM_PREALLOC_LEVEL==2, we allocate a single page for the PMD and 186 - * the kernel will use folded pud. When KVM_PREALLOC_LEVEL==1, we 187 - * allocate 2 consecutive PUD pages. 
188 - */ 189 - static inline int kvm_prealloc_hwpgd(struct kvm *kvm, pgd_t *pgd) 190 - { 191 - unsigned int i; 192 - unsigned long hwpgd; 193 - 194 - if (KVM_PREALLOC_LEVEL == 0) 195 - return 0; 196 - 197 - hwpgd = __get_free_pages(GFP_KERNEL | __GFP_ZERO, PTRS_PER_S2_PGD_SHIFT); 198 - if (!hwpgd) 199 - return -ENOMEM; 200 - 201 - for (i = 0; i < PTRS_PER_S2_PGD; i++) { 202 - if (KVM_PREALLOC_LEVEL == 1) 203 - pgd_populate(NULL, pgd + i, 204 - (pud_t *)hwpgd + i * PTRS_PER_PUD); 205 - else if (KVM_PREALLOC_LEVEL == 2) 206 - pud_populate(NULL, pud_offset(pgd, 0) + i, 207 - (pmd_t *)hwpgd + i * PTRS_PER_PMD); 208 - } 209 - 210 - return 0; 211 - } 212 173 213 174 static inline void *kvm_get_hwpgd(struct kvm *kvm) 214 175 { ··· 189 224 return pmd_offset(pud, 0); 190 225 } 191 226 192 - static inline void kvm_free_hwpgd(struct kvm *kvm) 227 + static inline unsigned int kvm_get_hwpgd_size(void) 193 228 { 194 - if (KVM_PREALLOC_LEVEL > 0) { 195 - unsigned long hwpgd = (unsigned long)kvm_get_hwpgd(kvm); 196 - free_pages(hwpgd, PTRS_PER_S2_PGD_SHIFT); 197 - } 229 + if (KVM_PREALLOC_LEVEL > 0) 230 + return PTRS_PER_S2_PGD * PAGE_SIZE; 231 + return PTRS_PER_S2_PGD * sizeof(pgd_t); 198 232 } 199 233 200 234 static inline bool kvm_page_empty(void *ptr)
+3
arch/arm64/include/asm/tlb.h
··· 48 48 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, 49 49 unsigned long addr) 50 50 { 51 + __flush_tlb_pgtable(tlb->mm, addr); 51 52 pgtable_page_dtor(pte); 52 53 tlb_remove_entry(tlb, pte); 53 54 } ··· 57 56 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, 58 57 unsigned long addr) 59 58 { 59 + __flush_tlb_pgtable(tlb->mm, addr); 60 60 tlb_remove_entry(tlb, virt_to_page(pmdp)); 61 61 } 62 62 #endif ··· 66 64 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, 67 65 unsigned long addr) 68 66 { 67 + __flush_tlb_pgtable(tlb->mm, addr); 69 68 tlb_remove_entry(tlb, virt_to_page(pudp)); 70 69 } 71 70 #endif
+13
arch/arm64/include/asm/tlbflush.h
··· 144 144 } 145 145 146 146 /* 147 + * Used to invalidate the TLB (walk caches) corresponding to intermediate page 148 + * table levels (pgd/pud/pmd). 149 + */ 150 + static inline void __flush_tlb_pgtable(struct mm_struct *mm, 151 + unsigned long uaddr) 152 + { 153 + unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(mm) << 48); 154 + 155 + dsb(ishst); 156 + asm("tlbi vae1is, %0" : : "r" (addr)); 157 + dsb(ish); 158 + } 159 + /* 147 160 * On AArch64, the cache coherency is handled via the set_pte_at() function. 148 161 */ 149 162 static inline void update_mmu_cache(struct vm_area_struct *vma,
+9
arch/arm64/kernel/efi.c
··· 354 354 efi_set_pgd(current->active_mm); 355 355 preempt_enable(); 356 356 } 357 + 358 + /* 359 + * UpdateCapsule() depends on the system being shutdown via 360 + * ResetSystem(). 361 + */ 362 + bool efi_poweroff_required(void) 363 + { 364 + return efi_enabled(EFI_RUNTIME_SERVICES); 365 + }
+1 -1
arch/arm64/kernel/head.S
··· 585 585 * zeroing of .bss would clobber it. 586 586 */ 587 587 .pushsection .data..cacheline_aligned 588 - ENTRY(__boot_cpu_mode) 589 588 .align L1_CACHE_SHIFT 589 + ENTRY(__boot_cpu_mode) 590 590 .long BOOT_CPU_MODE_EL2 591 591 .long 0 592 592 .popsection
+8
arch/arm64/kernel/process.c
··· 21 21 #include <stdarg.h> 22 22 23 23 #include <linux/compat.h> 24 + #include <linux/efi.h> 24 25 #include <linux/export.h> 25 26 #include <linux/sched.h> 26 27 #include <linux/kernel.h> ··· 150 149 /* Disable interrupts first */ 151 150 local_irq_disable(); 152 151 smp_send_stop(); 152 + 153 + /* 154 + * UpdateCapsule() depends on the system being reset via 155 + * ResetSystem(). 156 + */ 157 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 158 + efi_reboot(reboot_mode, NULL); 153 159 154 160 /* Now call the architecture specific reboot code. */ 155 161 if (arm_pm_restart)
+5
arch/c6x/include/asm/pgtable.h
··· 67 67 */
68 68 #define pgtable_cache_init() do { } while (0)
69 69 
70 + /*
71 + * c6x is !MMU, so define the simplest implementation
72 + */
73 + #define pgprot_writecombine pgprot_noncached
74 + 
70 75 #include <asm-generic/pgtable.h>
71 76 
72 77 #endif /* _ASM_C6X_PGTABLE_H */
+4 -3
arch/microblaze/kernel/entry.S
··· 348 348 * The LP register should point to the location where the called function 349 349 * should return. [note that MAKE_SYS_CALL uses label 1] */ 350 350 /* See if the system call number is valid */ 351 + blti r12, 5f 351 352 addi r11, r12, -__NR_syscalls; 352 - bgei r11,5f; 353 + bgei r11, 5f; 353 354 /* Figure out which function to use for this system call. */ 354 355 /* Note Microblaze barrel shift is optional, so don't rely on it */ 355 356 add r12, r12, r12; /* convert num -> ptr */ ··· 376 375 377 376 /* The syscall number is invalid, return an error. */ 378 377 5: 379 - rtsd r15, 8; /* looks like a normal subroutine return */ 378 + braid ret_from_trap 380 379 addi r3, r0, -ENOSYS; 381 380 382 381 /* Entry point used to return from a syscall/trap */ ··· 412 411 bri 1b 413 412 414 413 /* Maybe handle a signal */ 415 - 5: 414 + 5: 416 415 andi r11, r19, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME; 417 416 beqi r11, 4f; /* Signals to handle, handle them */ 418 417
+47
arch/nios2/include/asm/ptrace.h
··· 15 15 16 16 #include <uapi/asm/ptrace.h> 17 17 18 + /* This struct defines the way the registers are stored on the 19 + stack during a system call. */ 20 + 18 21 #ifndef __ASSEMBLY__ 22 + struct pt_regs { 23 + unsigned long r8; /* r8-r15 Caller-saved GP registers */ 24 + unsigned long r9; 25 + unsigned long r10; 26 + unsigned long r11; 27 + unsigned long r12; 28 + unsigned long r13; 29 + unsigned long r14; 30 + unsigned long r15; 31 + unsigned long r1; /* Assembler temporary */ 32 + unsigned long r2; /* Retval LS 32bits */ 33 + unsigned long r3; /* Retval MS 32bits */ 34 + unsigned long r4; /* r4-r7 Register arguments */ 35 + unsigned long r5; 36 + unsigned long r6; 37 + unsigned long r7; 38 + unsigned long orig_r2; /* Copy of r2 ?? */ 39 + unsigned long ra; /* Return address */ 40 + unsigned long fp; /* Frame pointer */ 41 + unsigned long sp; /* Stack pointer */ 42 + unsigned long gp; /* Global pointer */ 43 + unsigned long estatus; 44 + unsigned long ea; /* Exception return address (pc) */ 45 + unsigned long orig_r7; 46 + }; 47 + 48 + /* 49 + * This is the extended stack used by signal handlers and the context 50 + * switcher: it's pushed after the normal "struct pt_regs". 51 + */ 52 + struct switch_stack { 53 + unsigned long r16; /* r16-r23 Callee-saved GP registers */ 54 + unsigned long r17; 55 + unsigned long r18; 56 + unsigned long r19; 57 + unsigned long r20; 58 + unsigned long r21; 59 + unsigned long r22; 60 + unsigned long r23; 61 + unsigned long fp; 62 + unsigned long gp; 63 + unsigned long ra; 64 + }; 65 + 19 66 #define user_mode(regs) (((regs)->estatus & ESTATUS_EU)) 20 67 21 68 #define instruction_pointer(regs) ((regs)->ra)
-32
arch/nios2/include/asm/ucontext.h
··· 1 - /* 2 - * Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch> 3 - * Copyright (C) 2004 Microtronix Datacom Ltd 4 - * 5 - * This file is subject to the terms and conditions of the GNU General Public 6 - * License. See the file "COPYING" in the main directory of this archive 7 - * for more details. 8 - */ 9 - 10 - #ifndef _ASM_NIOS2_UCONTEXT_H 11 - #define _ASM_NIOS2_UCONTEXT_H 12 - 13 - typedef int greg_t; 14 - #define NGREG 32 15 - typedef greg_t gregset_t[NGREG]; 16 - 17 - struct mcontext { 18 - int version; 19 - gregset_t gregs; 20 - }; 21 - 22 - #define MCONTEXT_VERSION 2 23 - 24 - struct ucontext { 25 - unsigned long uc_flags; 26 - struct ucontext *uc_link; 27 - stack_t uc_stack; 28 - struct mcontext uc_mcontext; 29 - sigset_t uc_sigmask; /* mask last for extensibility */ 30 - }; 31 - 32 - #endif
+2 -1
arch/nios2/include/uapi/asm/Kbuild
··· 1 1 include include/uapi/asm-generic/Kbuild.asm 2 2 3 3 header-y += elf.h 4 - header-y += ucontext.h 4 + 5 + generic-y += ucontext.h
+1 -3
arch/nios2/include/uapi/asm/elf.h
··· 50 50 51 51 typedef unsigned long elf_greg_t; 52 52 53 - #define ELF_NGREG \ 54 - ((sizeof(struct pt_regs) + sizeof(struct switch_stack)) / \ 55 - sizeof(elf_greg_t)) 53 + #define ELF_NGREG 49 56 54 typedef elf_greg_t elf_gregset_t[ELF_NGREG]; 57 55 58 56 typedef unsigned long elf_fpregset_t;
+3 -47
arch/nios2/include/uapi/asm/ptrace.h
··· 67 67 68 68 #define NUM_PTRACE_REG (PTR_TLBMISC + 1) 69 69 70 - /* this struct defines the way the registers are stored on the 71 - stack during a system call. 72 - 73 - There is a fake_regs in setup.c that has to match pt_regs.*/ 74 - 75 - struct pt_regs { 76 - unsigned long r8; /* r8-r15 Caller-saved GP registers */ 77 - unsigned long r9; 78 - unsigned long r10; 79 - unsigned long r11; 80 - unsigned long r12; 81 - unsigned long r13; 82 - unsigned long r14; 83 - unsigned long r15; 84 - unsigned long r1; /* Assembler temporary */ 85 - unsigned long r2; /* Retval LS 32bits */ 86 - unsigned long r3; /* Retval MS 32bits */ 87 - unsigned long r4; /* r4-r7 Register arguments */ 88 - unsigned long r5; 89 - unsigned long r6; 90 - unsigned long r7; 91 - unsigned long orig_r2; /* Copy of r2 ?? */ 92 - unsigned long ra; /* Return address */ 93 - unsigned long fp; /* Frame pointer */ 94 - unsigned long sp; /* Stack pointer */ 95 - unsigned long gp; /* Global pointer */ 96 - unsigned long estatus; 97 - unsigned long ea; /* Exception return address (pc) */ 98 - unsigned long orig_r7; 99 - }; 100 - 101 - /* 102 - * This is the extended stack used by signal handlers and the context 103 - * switcher: it's pushed after the normal "struct pt_regs". 104 - */ 105 - struct switch_stack { 106 - unsigned long r16; /* r16-r23 Callee-saved GP registers */ 107 - unsigned long r17; 108 - unsigned long r18; 109 - unsigned long r19; 110 - unsigned long r20; 111 - unsigned long r21; 112 - unsigned long r22; 113 - unsigned long r23; 114 - unsigned long fp; 115 - unsigned long gp; 116 - unsigned long ra; 70 + /* User structures for general purpose registers. */ 71 + struct user_pt_regs { 72 + __u32 regs[49]; 117 73 }; 118 74 119 75 #endif /* __ASSEMBLY__ */
+7 -5
arch/nios2/include/uapi/asm/sigcontext.h
··· 15 15 * details. 16 16 */ 17 17 18 - #ifndef _ASM_NIOS2_SIGCONTEXT_H 19 - #define _ASM_NIOS2_SIGCONTEXT_H 18 + #ifndef _UAPI__ASM_SIGCONTEXT_H 19 + #define _UAPI__ASM_SIGCONTEXT_H 20 20 21 - #include <asm/ptrace.h> 21 + #include <linux/types.h> 22 + 23 + #define MCONTEXT_VERSION 2 22 24 23 25 struct sigcontext { 24 - struct pt_regs regs; 25 - unsigned long sc_mask; /* old sigmask */ 26 + int version; 27 + unsigned long gregs[32]; 26 28 }; 27 29 28 30 #endif
+2 -2
arch/nios2/kernel/signal.c
··· 39 39 struct ucontext *uc, int *pr2) 40 40 { 41 41 int temp; 42 - greg_t *gregs = uc->uc_mcontext.gregs; 42 + unsigned long *gregs = uc->uc_mcontext.gregs; 43 43 int err; 44 44 45 45 /* Always make any pending restarted system calls return -EINTR */ ··· 127 127 static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs) 128 128 { 129 129 struct switch_stack *sw = (struct switch_stack *)regs - 1; 130 - greg_t *gregs = uc->uc_mcontext.gregs; 130 + unsigned long *gregs = uc->uc_mcontext.gregs; 131 131 int err = 0; 132 132 133 133 err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
-6
arch/nios2/mm/fault.c
··· 126 126 break; 127 127 } 128 128 129 - survive: 130 129 /* 131 130 * If for any reason at all we couldn't handle the fault, 132 131 * make sure we exit gracefully rather than endlessly redo ··· 219 220 */ 220 221 out_of_memory: 221 222 up_read(&mm->mmap_sem); 222 - if (is_global_init(tsk)) { 223 - yield(); 224 - down_read(&mm->mmap_sem); 225 - goto survive; 226 - } 227 223 if (!user_mode(regs)) 228 224 goto no_context; 229 225 pagefault_out_of_memory();
-1
arch/s390/kvm/kvm-s390.c
··· 165 165 case KVM_CAP_ONE_REG: 166 166 case KVM_CAP_ENABLE_CAP: 167 167 case KVM_CAP_S390_CSS_SUPPORT: 168 - case KVM_CAP_IRQFD: 169 168 case KVM_CAP_IOEVENTFD: 170 169 case KVM_CAP_DEVICE_CTRL: 171 170 case KVM_CAP_ENABLE_CAP_VM:
+3
arch/sparc/Kconfig
··· 86 86 default "arch/sparc/configs/sparc32_defconfig" if SPARC32 87 87 default "arch/sparc/configs/sparc64_defconfig" if SPARC64 88 88 89 + config ARCH_PROC_KCORE_TEXT 90 + def_bool y 91 + 89 92 config IOMMU_HELPER 90 93 bool 91 94 default y if SPARC64
+10 -10
arch/sparc/include/asm/io_64.h
··· 407 407 { 408 408 } 409 409 410 - #define ioread8(X) readb(X) 411 - #define ioread16(X) readw(X) 412 - #define ioread16be(X) __raw_readw(X) 413 - #define ioread32(X) readl(X) 414 - #define ioread32be(X) __raw_readl(X) 415 - #define iowrite8(val,X) writeb(val,X) 416 - #define iowrite16(val,X) writew(val,X) 417 - #define iowrite16be(val,X) __raw_writew(val,X) 418 - #define iowrite32(val,X) writel(val,X) 419 - #define iowrite32be(val,X) __raw_writel(val,X) 410 + #define ioread8 readb 411 + #define ioread16 readw 412 + #define ioread16be __raw_readw 413 + #define ioread32 readl 414 + #define ioread32be __raw_readl 415 + #define iowrite8 writeb 416 + #define iowrite16 writew 417 + #define iowrite16be __raw_writew 418 + #define iowrite32 writel 419 + #define iowrite32be __raw_writel 420 420 421 421 /* Create a virtual mapping cookie for an IO port range */ 422 422 void __iomem *ioport_map(unsigned long port, unsigned int nr);
-1
arch/sparc/include/asm/starfire.h
··· 12 12 extern int this_is_starfire; 13 13 14 14 void check_if_starfire(void); 15 - int starfire_hard_smp_processor_id(void); 16 15 void starfire_hookup(int); 17 16 unsigned int starfire_translate(unsigned long imap, unsigned int upaid); 18 17
-4
arch/sparc/kernel/entry.h
··· 98 98 void do_privop(struct pt_regs *regs); 99 99 void do_privact(struct pt_regs *regs); 100 100 void do_cee(struct pt_regs *regs); 101 - void do_cee_tl1(struct pt_regs *regs); 102 - void do_dae_tl1(struct pt_regs *regs); 103 - void do_iae_tl1(struct pt_regs *regs); 104 101 void do_div0_tl1(struct pt_regs *regs); 105 - void do_fpdis_tl1(struct pt_regs *regs); 106 102 void do_fpieee_tl1(struct pt_regs *regs); 107 103 void do_fpother_tl1(struct pt_regs *regs); 108 104 void do_ill_tl1(struct pt_regs *regs);
+24 -3
arch/sparc/kernel/smp_64.c
··· 1406 1406 scheduler_ipi(); 1407 1407 } 1408 1408 1409 - /* This is a nop because we capture all other cpus 1410 - * anyways when making the PROM active. 1411 - */ 1409 + static void stop_this_cpu(void *dummy) 1410 + { 1411 + prom_stopself(); 1412 + } 1413 + 1412 1414 void smp_send_stop(void) 1413 1415 { 1416 + int cpu; 1417 + 1418 + if (tlb_type == hypervisor) { 1419 + for_each_online_cpu(cpu) { 1420 + if (cpu == smp_processor_id()) 1421 + continue; 1422 + #ifdef CONFIG_SUN_LDOMS 1423 + if (ldom_domaining_enabled) { 1424 + unsigned long hv_err; 1425 + hv_err = sun4v_cpu_stop(cpu); 1426 + if (hv_err) 1427 + printk(KERN_ERR "sun4v_cpu_stop() " 1428 + "failed err=%lu\n", hv_err); 1429 + } else 1430 + #endif 1431 + prom_stopcpu_cpuid(cpu); 1432 + } 1433 + } else 1434 + smp_call_function(stop_this_cpu, NULL, 0); 1414 1435 } 1415 1436 1416 1437 /**
-5
arch/sparc/kernel/starfire.c
··· 28 28 this_is_starfire = 1; 29 29 } 30 30 31 - int starfire_hard_smp_processor_id(void) 32 - { 33 - return upa_readl(0x1fff40000d0UL); 34 - } 35 - 36 31 /* 37 32 * Each Starfire board has 32 registers which perform translation 38 33 * and delivery of traditional interrupt packets into the extended
+1 -1
arch/sparc/kernel/sys_sparc_64.c
··· 333 333 long err; 334 334 335 335 /* No need for backward compatibility. We can start fresh... */ 336 - if (call <= SEMCTL) { 336 + if (call <= SEMTIMEDOP) { 337 337 switch (call) { 338 338 case SEMOP: 339 339 err = sys_semtimedop(first, ptr,
+2 -28
arch/sparc/kernel/traps_64.c
··· 2427 2427 } 2428 2428 user_instruction_dump ((unsigned int __user *) regs->tpc); 2429 2429 } 2430 + if (panic_on_oops) 2431 + panic("Fatal exception"); 2430 2432 if (regs->tstate & TSTATE_PRIV) 2431 2433 do_exit(SIGKILL); 2432 2434 do_exit(SIGSEGV); ··· 2566 2564 die_if_kernel("TL0: Cache Error Exception", regs); 2567 2565 } 2568 2566 2569 - void do_cee_tl1(struct pt_regs *regs) 2570 - { 2571 - exception_enter(); 2572 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2573 - die_if_kernel("TL1: Cache Error Exception", regs); 2574 - } 2575 - 2576 - void do_dae_tl1(struct pt_regs *regs) 2577 - { 2578 - exception_enter(); 2579 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2580 - die_if_kernel("TL1: Data Access Exception", regs); 2581 - } 2582 - 2583 - void do_iae_tl1(struct pt_regs *regs) 2584 - { 2585 - exception_enter(); 2586 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2587 - die_if_kernel("TL1: Instruction Access Exception", regs); 2588 - } 2589 - 2590 2567 void do_div0_tl1(struct pt_regs *regs) 2591 2568 { 2592 2569 exception_enter(); 2593 2570 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2594 2571 die_if_kernel("TL1: DIV0 Exception", regs); 2595 - } 2596 - 2597 - void do_fpdis_tl1(struct pt_regs *regs) 2598 - { 2599 - exception_enter(); 2600 - dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2601 - die_if_kernel("TL1: FPU Disabled", regs); 2602 2572 } 2603 2573 2604 2574 void do_fpieee_tl1(struct pt_regs *regs)
+1 -1
arch/sparc/mm/init_64.c
··· 2820 2820 2821 2821 return 0; 2822 2822 } 2823 - device_initcall(report_memory); 2823 + arch_initcall(report_memory); 2824 2824 2825 2825 #ifdef CONFIG_SMP 2826 2826 #define do_flush_tlb_kernel_range smp_flush_tlb_kernel_range
+1 -33
arch/x86/boot/compressed/aslr.c
··· 14 14 static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@" 15 15 LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION; 16 16 17 - struct kaslr_setup_data { 18 - __u64 next; 19 - __u32 type; 20 - __u32 len; 21 - __u8 data[1]; 22 - } kaslr_setup_data; 23 - 24 17 #define I8254_PORT_CONTROL 0x43 25 18 #define I8254_PORT_COUNTER0 0x40 26 19 #define I8254_CMD_READBACK 0xC0 ··· 295 302 return slots_fetch_random(); 296 303 } 297 304 298 - static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled) 299 - { 300 - struct setup_data *data; 301 - 302 - kaslr_setup_data.type = SETUP_KASLR; 303 - kaslr_setup_data.len = 1; 304 - kaslr_setup_data.next = 0; 305 - kaslr_setup_data.data[0] = enabled; 306 - 307 - data = (struct setup_data *)(unsigned long)params->hdr.setup_data; 308 - 309 - while (data && data->next) 310 - data = (struct setup_data *)(unsigned long)data->next; 311 - 312 - if (data) 313 - data->next = (unsigned long)&kaslr_setup_data; 314 - else 315 - params->hdr.setup_data = (unsigned long)&kaslr_setup_data; 316 - 317 - } 318 - 319 - unsigned char *choose_kernel_location(struct boot_params *params, 320 - unsigned char *input, 305 + unsigned char *choose_kernel_location(unsigned char *input, 321 306 unsigned long input_size, 322 307 unsigned char *output, 323 308 unsigned long output_size) ··· 306 335 #ifdef CONFIG_HIBERNATION 307 336 if (!cmdline_find_option_bool("kaslr")) { 308 337 debug_putstr("KASLR disabled by default...\n"); 309 - add_kaslr_setup_data(params, 0); 310 338 goto out; 311 339 } 312 340 #else 313 341 if (cmdline_find_option_bool("nokaslr")) { 314 342 debug_putstr("KASLR disabled by cmdline...\n"); 315 - add_kaslr_setup_data(params, 0); 316 343 goto out; 317 344 } 318 345 #endif 319 - add_kaslr_setup_data(params, 1); 320 346 321 347 /* Record the various known unsafe memory ranges. */ 322 348 mem_avoid_init((unsigned long)input, input_size,
+1 -2
arch/x86/boot/compressed/misc.c
··· 401 401 * the entire decompressed kernel plus relocation table, or the 402 402 * entire decompressed kernel plus .bss and .brk sections. 403 403 */ 404 - output = choose_kernel_location(real_mode, input_data, input_len, 405 - output, 404 + output = choose_kernel_location(input_data, input_len, output, 406 405 output_len > run_size ? output_len 407 406 : run_size); 408 407
+2 -4
arch/x86/boot/compressed/misc.h
··· 57 57 58 58 #if CONFIG_RANDOMIZE_BASE 59 59 /* aslr.c */ 60 - unsigned char *choose_kernel_location(struct boot_params *params, 61 - unsigned char *input, 60 + unsigned char *choose_kernel_location(unsigned char *input, 62 61 unsigned long input_size, 63 62 unsigned char *output, 64 63 unsigned long output_size); ··· 65 66 bool has_cpuflag(int flag); 66 67 #else 67 68 static inline 68 - unsigned char *choose_kernel_location(struct boot_params *params, 69 - unsigned char *input, 69 + unsigned char *choose_kernel_location(unsigned char *input, 70 70 unsigned long input_size, 71 71 unsigned char *output, 72 72 unsigned long output_size)
+2 -2
arch/x86/crypto/aesni-intel_glue.c
··· 1155 1155 src = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC); 1156 1156 if (!src) 1157 1157 return -ENOMEM; 1158 - assoc = (src + req->cryptlen + auth_tag_len); 1158 + assoc = (src + req->cryptlen); 1159 1159 scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0); 1160 1160 scatterwalk_map_and_copy(assoc, req->assoc, 0, 1161 1161 req->assoclen, 0); ··· 1180 1180 scatterwalk_done(&src_sg_walk, 0, 0); 1181 1181 scatterwalk_done(&assoc_sg_walk, 0, 0); 1182 1182 } else { 1183 - scatterwalk_map_and_copy(dst, req->dst, 0, req->cryptlen, 1); 1183 + scatterwalk_map_and_copy(dst, req->dst, 0, tempCipherLen, 1); 1184 1184 kfree(src); 1185 1185 } 1186 1186 return retval;
+1 -1
arch/x86/include/asm/fpu-internal.h
··· 370 370 preempt_disable(); 371 371 tsk->thread.fpu_counter = 0; 372 372 __drop_fpu(tsk); 373 - clear_used_math(); 373 + clear_stopped_child_used_math(tsk); 374 374 preempt_enable(); 375 375 } 376 376
-2
arch/x86/include/asm/page_types.h
··· 51 51 extern unsigned long max_low_pfn_mapped; 52 52 extern unsigned long max_pfn_mapped; 53 53 54 - extern bool kaslr_enabled; 55 - 56 54 static inline phys_addr_t get_max_mapped(void) 57 55 { 58 56 return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
-1
arch/x86/include/uapi/asm/bootparam.h
··· 7 7 #define SETUP_DTB 2 8 8 #define SETUP_PCI 3 9 9 #define SETUP_EFI 4 10 - #define SETUP_KASLR 5 11 10 12 11 /* ram_size flags */ 13 12 #define RAMDISK_IMAGE_START_MASK 0x07FF
+25
arch/x86/kernel/acpi/boot.c
··· 1338 1338 } 1339 1339 1340 1340 /* 1341 + * ACPI offers an alternative platform interface model that removes 1342 + * ACPI hardware requirements for platforms that do not implement 1343 + * the PC Architecture. 1344 + * 1345 + * We initialize the Hardware-reduced ACPI model here: 1346 + */ 1347 + static void __init acpi_reduced_hw_init(void) 1348 + { 1349 + if (acpi_gbl_reduced_hardware) { 1350 + /* 1351 + * Override x86_init functions and bypass legacy pic 1352 + * in Hardware-reduced ACPI mode 1353 + */ 1354 + x86_init.timers.timer_init = x86_init_noop; 1355 + x86_init.irqs.pre_vector_init = x86_init_noop; 1356 + legacy_pic = &null_legacy_pic; 1357 + } 1358 + } 1359 + 1360 + /* 1341 1361 * If your system is blacklisted here, but you find that acpi=force 1342 1362 * works for you, please contact linux-acpi@vger.kernel.org 1343 1363 */ ··· 1555 1535 * Process the Multiple APIC Description Table (MADT), if present 1556 1536 */ 1557 1537 early_acpi_process_madt(); 1538 + 1539 + /* 1540 + * Hardware-reduced ACPI mode initialization: 1541 + */ 1542 + acpi_reduced_hw_init(); 1558 1543 1559 1544 return 0; 1560 1545 }
+16 -6
arch/x86/kernel/apic/apic_numachip.c
··· 37 37 static unsigned int get_apic_id(unsigned long x) 38 38 { 39 39 unsigned long value; 40 - unsigned int id; 40 + unsigned int id = (x >> 24) & 0xff; 41 41 42 - rdmsrl(MSR_FAM10H_NODE_ID, value); 43 - id = ((x >> 24) & 0xffU) | ((value << 2) & 0xff00U); 42 + if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) { 43 + rdmsrl(MSR_FAM10H_NODE_ID, value); 44 + id |= (value << 2) & 0xff00; 45 + } 44 46 45 47 return id; 46 48 } ··· 157 155 158 156 static void fixup_cpu_id(struct cpuinfo_x86 *c, int node) 159 157 { 160 - if (c->phys_proc_id != node) { 161 - c->phys_proc_id = node; 162 - per_cpu(cpu_llc_id, smp_processor_id()) = node; 158 + u64 val; 159 + u32 nodes = 1; 160 + 161 + this_cpu_write(cpu_llc_id, node); 162 + 163 + /* Account for nodes per socket in multi-core-module processors */ 164 + if (static_cpu_has_safe(X86_FEATURE_NODEID_MSR)) { 165 + rdmsrl(MSR_FAM10H_NODE_ID, val); 166 + nodes = ((val >> 3) & 7) + 1; 163 167 } 168 + 169 + c->phys_proc_id = node / nodes; 164 170 } 165 171 166 172 static int __init numachip_system_init(void)
+9 -1
arch/x86/kernel/module.c
··· 47 47 48 48 #ifdef CONFIG_RANDOMIZE_BASE 49 49 static unsigned long module_load_offset; 50 + static int randomize_modules = 1; 50 51 51 52 /* Mutex protects the module_load_offset. */ 52 53 static DEFINE_MUTEX(module_kaslr_mutex); 53 54 55 + static int __init parse_nokaslr(char *p) 56 + { 57 + randomize_modules = 0; 58 + return 0; 59 + } 60 + early_param("nokaslr", parse_nokaslr); 61 + 54 62 static unsigned long int get_module_load_offset(void) 55 63 { 56 - if (kaslr_enabled) { 64 + if (randomize_modules) { 57 65 mutex_lock(&module_kaslr_mutex); 58 66 /* 59 67 * Calculate the module_load_offset the first time this
+4 -18
arch/x86/kernel/setup.c
··· 122 122 unsigned long max_low_pfn_mapped; 123 123 unsigned long max_pfn_mapped; 124 124 125 - bool __read_mostly kaslr_enabled = false; 126 - 127 125 #ifdef CONFIG_DMI 128 126 RESERVE_BRK(dmi_alloc, 65536); 129 127 #endif ··· 425 427 } 426 428 #endif /* CONFIG_BLK_DEV_INITRD */ 427 429 428 - static void __init parse_kaslr_setup(u64 pa_data, u32 data_len) 429 - { 430 - kaslr_enabled = (bool)(pa_data + sizeof(struct setup_data)); 431 - } 432 - 433 430 static void __init parse_setup_data(void) 434 431 { 435 432 struct setup_data *data; ··· 449 456 break; 450 457 case SETUP_EFI: 451 458 parse_efi_setup(pa_data, data_len); 452 - break; 453 - case SETUP_KASLR: 454 - parse_kaslr_setup(pa_data, data_len); 455 459 break; 456 460 default: 457 461 break; ··· 832 842 static int 833 843 dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p) 834 844 { 835 - if (kaslr_enabled) 836 - pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n", 837 - (unsigned long)&_text - __START_KERNEL, 838 - __START_KERNEL, 839 - __START_KERNEL_map, 840 - MODULES_VADDR-1); 841 - else 842 - pr_emerg("Kernel Offset: disabled\n"); 845 + pr_emerg("Kernel Offset: 0x%lx from 0x%lx " 846 + "(relocation range: 0x%lx-0x%lx)\n", 847 + (unsigned long)&_text - __START_KERNEL, __START_KERNEL, 848 + __START_KERNEL_map, MODULES_VADDR-1); 843 849 844 850 return 0; 845 851 }
+2 -2
arch/x86/kernel/traps.c
··· 384 384 goto exit; 385 385 conditional_sti(regs); 386 386 387 - if (!user_mode(regs)) 387 + if (!user_mode_vm(regs)) 388 388 die("bounds", regs, error_code); 389 389 390 390 if (!cpu_feature_enabled(X86_FEATURE_MPX)) { ··· 637 637 * then it's very likely the result of an icebp/int01 trap. 638 638 * User wants a sigtrap for that. 639 639 */ 640 - if (!dr6 && user_mode(regs)) 640 + if (!dr6 && user_mode_vm(regs)) 641 641 user_icebp = 1; 642 642 643 643 /* Catch kmemcheck conditions first of all! */
+4 -3
arch/x86/kernel/xsave.c
··· 379 379 * thread's fpu state, reconstruct fxstate from the fsave 380 380 * header. Sanitize the copied state etc. 381 381 */ 382 - struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave; 382 + struct fpu *fpu = &tsk->thread.fpu; 383 383 struct user_i387_ia32_struct env; 384 384 int err = 0; 385 385 ··· 393 393 */ 394 394 drop_fpu(tsk); 395 395 396 - if (__copy_from_user(xsave, buf_fx, state_size) || 396 + if (__copy_from_user(&fpu->state->xsave, buf_fx, state_size) || 397 397 __copy_from_user(&env, buf, sizeof(env))) { 398 + fpu_finit(fpu); 398 399 err = -1; 399 400 } else { 400 401 sanitize_restored_xstate(tsk, &env, xstate_bv, fx_only); 401 - set_used_math(); 402 402 } 403 403 404 + set_used_math(); 404 405 if (use_eager_fpu()) { 405 406 preempt_disable(); 406 407 math_state_restore();
+1
arch/x86/kvm/i8259.c
··· 507 507 return -EOPNOTSUPP; 508 508 509 509 if (len != 1) { 510 + memset(val, 0, len); 510 511 pr_pic_unimpl("non byte read\n"); 511 512 return 0; 512 513 }
+7 -4
arch/x86/kvm/vmx.c
··· 2168 2168 { 2169 2169 unsigned long *msr_bitmap; 2170 2170 2171 - if (irqchip_in_kernel(vcpu->kvm) && apic_x2apic_mode(vcpu->arch.apic)) { 2171 + if (is_guest_mode(vcpu)) 2172 + msr_bitmap = vmx_msr_bitmap_nested; 2173 + else if (irqchip_in_kernel(vcpu->kvm) && 2174 + apic_x2apic_mode(vcpu->arch.apic)) { 2172 2175 if (is_long_mode(vcpu)) 2173 2176 msr_bitmap = vmx_msr_bitmap_longmode_x2apic; 2174 2177 else ··· 9221 9218 } 9222 9219 9223 9220 if (cpu_has_vmx_msr_bitmap() && 9224 - exec_control & CPU_BASED_USE_MSR_BITMAPS && 9225 - nested_vmx_merge_msr_bitmap(vcpu, vmcs12)) { 9226 - vmcs_write64(MSR_BITMAP, __pa(vmx_msr_bitmap_nested)); 9221 + exec_control & CPU_BASED_USE_MSR_BITMAPS) { 9222 + nested_vmx_merge_msr_bitmap(vcpu, vmcs12); 9223 + /* MSR_BITMAP will be set by following vmx_set_efer. */ 9227 9224 } else 9228 9225 exec_control &= ~CPU_BASED_USE_MSR_BITMAPS; 9229 9226
-1
arch/x86/kvm/x86.c
··· 2744 2744 case KVM_CAP_USER_NMI: 2745 2745 case KVM_CAP_REINJECT_CONTROL: 2746 2746 case KVM_CAP_IRQ_INJECT_STATUS: 2747 - case KVM_CAP_IRQFD: 2748 2747 case KVM_CAP_IOEVENTFD: 2749 2748 case KVM_CAP_IOEVENTFD_NO_LENGTH: 2750 2749 case KVM_CAP_PIT2:
+1
arch/x86/vdso/vdso32/sigreturn.S
··· 17 17 .text 18 18 .globl __kernel_sigreturn 19 19 .type __kernel_sigreturn,@function 20 + nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */ 20 21 ALIGN 21 22 __kernel_sigreturn: 22 23 .LSTART_sigreturn:
+1 -1
arch/x86/xen/p2m.c
··· 563 563 if (p2m_pfn == PFN_DOWN(__pa(p2m_missing))) 564 564 p2m_init(p2m); 565 565 else 566 - p2m_init_identity(p2m, pfn); 566 + p2m_init_identity(p2m, pfn & ~(P2M_PER_PAGE - 1)); 567 567 568 568 spin_lock_irqsave(&p2m_update_lock, flags); 569 569
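The p2m fix above passes a pfn rounded down to the first entry of its p2m page via `pfn & ~(P2M_PER_PAGE - 1)`. For a power-of-two page capacity this is the standard mask-off idiom; a minimal standalone sketch (hypothetical helper name, not the kernel's API):

```c
/* Round pfn down to the first entry of its p2m page. Valid only when
 * per_page is a power of two, so (per_page - 1) is an all-ones mask
 * covering exactly the low bits to be cleared. */
static unsigned long pfn_page_start(unsigned long pfn, unsigned long per_page)
{
        return pfn & ~(per_page - 1);
}
```

For example, with a per-page capacity of 512 entries, pfn 1000 maps to page start 512.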
+4 -1
drivers/acpi/acpi_lpss.c
··· 65 65 66 66 struct lpss_device_desc { 67 67 unsigned int flags; 68 + const char *clk_con_id; 68 69 unsigned int prv_offset; 69 70 size_t prv_size_override; 70 71 void (*setup)(struct lpss_private_data *pdata); ··· 141 140 142 141 static struct lpss_device_desc lpt_uart_dev_desc = { 143 142 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR, 143 + .clk_con_id = "baudclk", 144 144 .prv_offset = 0x800, 145 145 .setup = lpss_uart_setup, 146 146 }; ··· 158 156 159 157 static struct lpss_device_desc byt_uart_dev_desc = { 160 158 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX, 159 + .clk_con_id = "baudclk", 161 160 .prv_offset = 0x800, 162 161 .setup = lpss_uart_setup, 163 162 }; ··· 316 313 return PTR_ERR(clk); 317 314 318 315 pdata->clk = clk; 319 - clk_register_clkdev(clk, NULL, devname); 316 + clk_register_clkdev(clk, dev_desc->clk_con_id, devname); 320 317 return 0; 321 318 } 322 319
+1 -1
drivers/base/regmap/regcache-rbtree.c
··· 307 307 if (pos == 0) { 308 308 memmove(blk + offset * map->cache_word_size, 309 309 blk, rbnode->blklen * map->cache_word_size); 310 - bitmap_shift_right(present, present, offset, blklen); 310 + bitmap_shift_left(present, present, offset, blklen); 311 311 } 312 312 313 313 /* update the rbnode block, its size and the base register */
+4 -2
drivers/base/regmap/regcache.c
··· 608 608 for (i = start; i < end; i++) { 609 609 regtmp = block_base + (i * map->reg_stride); 610 610 611 - if (!regcache_reg_present(cache_present, i)) 611 + if (!regcache_reg_present(cache_present, i) || 612 + !regmap_writeable(map, regtmp)) 612 613 continue; 613 614 614 615 val = regcache_get_val(map, block, i); ··· 678 677 for (i = start; i < end; i++) { 679 678 regtmp = block_base + (i * map->reg_stride); 680 679 681 - if (!regcache_reg_present(cache_present, i)) { 680 + if (!regcache_reg_present(cache_present, i) || 681 + !regmap_writeable(map, regtmp)) { 682 682 ret = regcache_sync_block_raw_flush(map, &data, 683 683 base, regtmp); 684 684 if (ret != 0)
+2 -1
drivers/base/regmap/regmap-irq.c
··· 499 499 goto err_alloc; 500 500 } 501 501 502 - ret = request_threaded_irq(irq, NULL, regmap_irq_thread, irq_flags, 502 + ret = request_threaded_irq(irq, NULL, regmap_irq_thread, 503 + irq_flags | IRQF_ONESHOT, 503 504 chip->name, d); 504 505 if (ret != 0) { 505 506 dev_err(map->dev, "Failed to request IRQ %d for %s: %d\n",
+18 -1
drivers/char/virtio_console.c
··· 142 142 * notification 143 143 */ 144 144 struct work_struct control_work; 145 + struct work_struct config_work; 145 146 146 147 struct list_head ports; 147 148 ··· 1838 1837 1839 1838 portdev = vdev->priv; 1840 1839 1840 + if (!use_multiport(portdev)) 1841 + schedule_work(&portdev->config_work); 1842 + } 1843 + 1844 + static void config_work_handler(struct work_struct *work) 1845 + { 1846 + struct ports_device *portdev; 1847 + 1848 + portdev = container_of(work, struct ports_device, control_work); 1841 1849 if (!use_multiport(portdev)) { 1850 + struct virtio_device *vdev; 1842 1851 struct port *port; 1843 1852 u16 rows, cols; 1844 1853 1854 + vdev = portdev->vdev; 1845 1855 virtio_cread(vdev, struct virtio_console_config, cols, &cols); 1846 1856 virtio_cread(vdev, struct virtio_console_config, rows, &rows); 1847 1857 ··· 2052 2040 2053 2041 virtio_device_ready(portdev->vdev); 2054 2042 2043 + INIT_WORK(&portdev->config_work, &config_work_handler); 2044 + INIT_WORK(&portdev->control_work, &control_work_handler); 2045 + 2055 2046 if (multiport) { 2056 2047 unsigned int nr_added_bufs; 2057 2048 2058 2049 spin_lock_init(&portdev->c_ivq_lock); 2059 2050 spin_lock_init(&portdev->c_ovq_lock); 2060 - INIT_WORK(&portdev->control_work, &control_work_handler); 2061 2051 2062 2052 nr_added_bufs = fill_queue(portdev->c_ivq, 2063 2053 &portdev->c_ivq_lock); ··· 2127 2113 /* Finish up work that's lined up */ 2128 2114 if (use_multiport(portdev)) 2129 2115 cancel_work_sync(&portdev->control_work); 2116 + else 2117 + cancel_work_sync(&portdev->config_work); 2130 2118 2131 2119 list_for_each_entry_safe(port, port2, &portdev->ports, list) 2132 2120 unplug_port(port); ··· 2180 2164 2181 2165 virtqueue_disable_cb(portdev->c_ivq); 2182 2166 cancel_work_sync(&portdev->control_work); 2167 + cancel_work_sync(&portdev->config_work); 2183 2168 /* 2184 2169 * Once more: if control_work_handler() was running, it would 2185 2170 * enable the cb as the last step.
+14 -15
drivers/clk/clk-divider.c
··· 144 144 divider->flags); 145 145 } 146 146 147 - /* 148 - * The reverse of DIV_ROUND_UP: The maximum number which 149 - * divided by m is r 150 - */ 151 - #define MULT_ROUND_UP(r, m) ((r) * (m) + (m) - 1) 152 - 153 147 static bool _is_valid_table_div(const struct clk_div_table *table, 154 148 unsigned int div) 155 149 { ··· 219 225 unsigned long parent_rate, unsigned long rate, 220 226 unsigned long flags) 221 227 { 222 - int up, down, div; 228 + int up, down; 229 + unsigned long up_rate, down_rate; 223 230 224 - up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate); 231 + up = DIV_ROUND_UP(parent_rate, rate); 232 + down = parent_rate / rate; 225 233 226 234 if (flags & CLK_DIVIDER_POWER_OF_TWO) { 227 - up = __roundup_pow_of_two(div); 228 - down = __rounddown_pow_of_two(div); 235 + up = __roundup_pow_of_two(up); 236 + down = __rounddown_pow_of_two(down); 229 237 } else if (table) { 230 - up = _round_up_table(table, div); 231 - down = _round_down_table(table, div); 238 + up = _round_up_table(table, up); 239 + down = _round_down_table(table, down); 232 240 } 233 241 234 - return (up - div) <= (div - down) ? up : down; 242 + up_rate = DIV_ROUND_UP(parent_rate, up); 243 + down_rate = DIV_ROUND_UP(parent_rate, down); 244 + 245 + return (rate - up_rate) <= (down_rate - rate) ? up : down; 235 246 } 236 247 237 248 static int _div_round(const struct clk_div_table *table, ··· 312 313 return i; 313 314 } 314 315 parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), 315 - MULT_ROUND_UP(rate, i)); 316 + rate * i); 316 317 now = DIV_ROUND_UP(parent_rate, i); 317 318 if (_is_best_div(rate, now, best, flags)) { 318 319 bestdiv = i; ··· 352 353 bestdiv = readl(divider->reg) >> divider->shift; 353 354 bestdiv &= div_mask(divider->width); 354 355 bestdiv = _get_div(divider->table, bestdiv, divider->flags); 355 - return bestdiv; 356 + return DIV_ROUND_UP(*prate, bestdiv); 356 357 } 357 358 358 359 return divider_round_rate(hw, rate, prate, divider->table,
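The clk-divider change above rounds the parent rate both up and down to an integer divider and keeps whichever divider yields a rate closer to the request, comparing achieved rates rather than raw divider values. A minimal sketch of that selection logic (standalone helpers, not the kernel API; assumes rate <= parent_rate and rate > 0):

```c
/* Equivalent of the kernel's DIV_ROUND_UP macro. */
static unsigned long div_round_up(unsigned long n, unsigned long d)
{
        return (n + d - 1) / d;
}

/* Pick the integer divider whose resulting rate is closest to the
 * requested rate: try the divider rounded up and rounded down, then
 * compare the rate each one actually produces (rates rounded up, as
 * the divider's recalc path does). */
static int closest_div(unsigned long parent_rate, unsigned long rate)
{
        int up = div_round_up(parent_rate, rate);   /* divider >= exact */
        int down = parent_rate / rate;              /* divider <= exact */
        unsigned long up_rate = div_round_up(parent_rate, up);
        unsigned long down_rate = div_round_up(parent_rate, down);

        return (rate - up_rate) <= (down_rate - rate) ? up : down;
}
```

Comparing achieved rates is the point of the fix: with a 100 MHz parent and a 33 MHz request, divider 3 gives 34 MHz (off by 1) while divider 4 gives 25 MHz (off by 8), so 3 wins even though both are "one step" from the exact quotient.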
+26 -1
drivers/clk/clk.c
··· 1350 1350 1351 1351 return rate; 1352 1352 } 1353 - EXPORT_SYMBOL_GPL(clk_core_get_rate); 1354 1353 1355 1354 /** 1356 1355 * clk_get_rate - return the rate of clk ··· 2168 2169 2169 2170 return clk_core_get_phase(clk->core); 2170 2171 } 2172 + 2173 + /** 2174 + * clk_is_match - check if two clk's point to the same hardware clock 2175 + * @p: clk compared against q 2176 + * @q: clk compared against p 2177 + * 2178 + * Returns true if the two struct clk pointers both point to the same hardware 2179 + * clock node. Put differently, returns true if struct clk *p and struct clk *q 2180 + * share the same struct clk_core object. 2181 + * 2182 + * Returns false otherwise. Note that two NULL clks are treated as matching. 2183 + */ 2184 + bool clk_is_match(const struct clk *p, const struct clk *q) 2185 + { 2186 + /* trivial case: identical struct clk's or both NULL */ 2187 + if (p == q) 2188 + return true; 2189 + 2190 + /* true if clk->core pointers match. Avoid derefing garbage */ 2191 + if (!IS_ERR_OR_NULL(p) && !IS_ERR_OR_NULL(q)) 2192 + if (p->core == q->core) 2193 + return true; 2194 + 2195 + return false; 2196 + } 2197 + EXPORT_SYMBOL_GPL(clk_is_match); 2171 2198 2172 2199 /** 2173 2200 * __clk_init - initialize the data structures in a struct clk
+13
drivers/clk/qcom/gcc-msm8960.c
··· 48 48 }, 49 49 }; 50 50 51 + static struct clk_regmap pll4_vote = { 52 + .enable_reg = 0x34c0, 53 + .enable_mask = BIT(4), 54 + .hw.init = &(struct clk_init_data){ 55 + .name = "pll4_vote", 56 + .parent_names = (const char *[]){ "pll4" }, 57 + .num_parents = 1, 58 + .ops = &clk_pll_vote_ops, 59 + }, 60 + }; 61 + 51 62 static struct clk_pll pll8 = { 52 63 .l_reg = 0x3144, 53 64 .m_reg = 0x3148, ··· 3034 3023 3035 3024 static struct clk_regmap *gcc_msm8960_clks[] = { 3036 3025 [PLL3] = &pll3.clkr, 3026 + [PLL4_VOTE] = &pll4_vote, 3037 3027 [PLL8] = &pll8.clkr, 3038 3028 [PLL8_VOTE] = &pll8_vote, 3039 3029 [PLL14] = &pll14.clkr, ··· 3259 3247 3260 3248 static struct clk_regmap *gcc_apq8064_clks[] = { 3261 3249 [PLL3] = &pll3.clkr, 3250 + [PLL4_VOTE] = &pll4_vote, 3262 3251 [PLL8] = &pll8.clkr, 3263 3252 [PLL8_VOTE] = &pll8_vote, 3264 3253 [PLL14] = &pll14.clkr,
-1
drivers/clk/qcom/lcc-ipq806x.c
··· 462 462 .remove = lcc_ipq806x_remove, 463 463 .driver = { 464 464 .name = "lcc-ipq806x", 465 - .owner = THIS_MODULE, 466 465 .of_match_table = lcc_ipq806x_match_table, 467 466 }, 468 467 };
+3 -4
drivers/clk/qcom/lcc-msm8960.c
··· 417 417 .mnctr_en_bit = 8, 418 418 .mnctr_reset_bit = 7, 419 419 .mnctr_mode_shift = 5, 420 - .n_val_shift = 16, 421 - .m_val_shift = 16, 420 + .n_val_shift = 24, 421 + .m_val_shift = 8, 422 422 .width = 8, 423 423 }, 424 424 .p = { ··· 547 547 return PTR_ERR(regmap); 548 548 549 549 /* Use the correct frequency plan depending on speed of PLL4 */ 550 - val = regmap_read(regmap, 0x4, &val); 550 + regmap_read(regmap, 0x4, &val); 551 551 if (val == 0x12) { 552 552 slimbus_src.freq_tbl = clk_tbl_aif_osr_492; 553 553 mi2s_osr_src.freq_tbl = clk_tbl_aif_osr_492; ··· 574 574 .remove = lcc_msm8960_remove, 575 575 .driver = { 576 576 .name = "lcc-msm8960", 577 - .owner = THIS_MODULE, 578 577 .of_match_table = lcc_msm8960_match_table, 579 578 }, 580 579 };
+3 -3
drivers/clk/ti/fapll.c
··· 84 84 struct fapll_data *fd = to_fapll(hw); 85 85 u32 v = readl_relaxed(fd->base); 86 86 87 - v |= (1 << FAPLL_MAIN_PLLEN); 87 + v |= FAPLL_MAIN_PLLEN; 88 88 writel_relaxed(v, fd->base); 89 89 90 90 return 0; ··· 95 95 struct fapll_data *fd = to_fapll(hw); 96 96 u32 v = readl_relaxed(fd->base); 97 97 98 - v &= ~(1 << FAPLL_MAIN_PLLEN); 98 + v &= ~FAPLL_MAIN_PLLEN; 99 99 writel_relaxed(v, fd->base); 100 100 } 101 101 ··· 104 104 struct fapll_data *fd = to_fapll(hw); 105 105 u32 v = readl_relaxed(fd->base); 106 106 107 - return v & (1 << FAPLL_MAIN_PLLEN); 107 + return v & FAPLL_MAIN_PLLEN; 108 108 } 109 109 110 110 static unsigned long ti_fapll_recalc_rate(struct clk_hw *hw,
+2 -2
drivers/clocksource/time-efm32.c
··· 225 225 clock_event_ddata.base = base; 226 226 clock_event_ddata.periodic_top = DIV_ROUND_CLOSEST(rate, 1024 * HZ); 227 227 228 - setup_irq(irq, &efm32_clock_event_irq); 229 - 230 228 clockevents_config_and_register(&clock_event_ddata.evtdev, 231 229 DIV_ROUND_CLOSEST(rate, 1024), 232 230 0xf, 0xffff); 231 + 232 + setup_irq(irq, &efm32_clock_event_irq); 233 233 234 234 return 0; 235 235
+4 -4
drivers/clocksource/timer-sun5i.c
··· 178 178 179 179 ticks_per_jiffy = DIV_ROUND_UP(rate, HZ); 180 180 181 - ret = setup_irq(irq, &sun5i_timer_irq); 182 - if (ret) 183 - pr_warn("failed to setup irq %d\n", irq); 184 - 185 181 /* Enable timer0 interrupt */ 186 182 val = readl(timer_base + TIMER_IRQ_EN_REG); 187 183 writel(val | TIMER_IRQ_EN(0), timer_base + TIMER_IRQ_EN_REG); ··· 187 191 188 192 clockevents_config_and_register(&sun5i_clockevent, rate, 189 193 TIMER_SYNC_TICKS, 0xffffffff); 194 + 195 + ret = setup_irq(irq, &sun5i_timer_irq); 196 + if (ret) 197 + pr_warn("failed to setup irq %d\n", irq); 190 198 } 191 199 CLOCKSOURCE_OF_DECLARE(sun5i_a13, "allwinner,sun5i-a13-hstimer", 192 200 sun5i_timer_init);
+19 -16
drivers/gpu/drm/drm_crtc.c
··· 43 43 #include "drm_crtc_internal.h" 44 44 #include "drm_internal.h" 45 45 46 - static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev, 47 - struct drm_mode_fb_cmd2 *r, 48 - struct drm_file *file_priv); 46 + static struct drm_framebuffer * 47 + internal_framebuffer_create(struct drm_device *dev, 48 + struct drm_mode_fb_cmd2 *r, 49 + struct drm_file *file_priv); 49 50 50 51 /* Avoid boilerplate. I'm tired of typing. */ 51 52 #define DRM_ENUM_NAME_FN(fnname, list) \ ··· 2909 2908 */ 2910 2909 if (req->flags & DRM_MODE_CURSOR_BO) { 2911 2910 if (req->handle) { 2912 - fb = add_framebuffer_internal(dev, &fbreq, file_priv); 2911 + fb = internal_framebuffer_create(dev, &fbreq, file_priv); 2913 2912 if (IS_ERR(fb)) { 2914 2913 DRM_DEBUG_KMS("failed to wrap cursor buffer in drm framebuffer\n"); 2915 2914 return PTR_ERR(fb); 2916 2915 } 2917 - 2918 - drm_framebuffer_reference(fb); 2919 2916 } else { 2920 2917 fb = NULL; 2921 2918 } ··· 3266 3267 return 0; 3267 3268 } 3268 3269 3269 - static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev, 3270 - struct drm_mode_fb_cmd2 *r, 3271 - struct drm_file *file_priv) 3270 + static struct drm_framebuffer * 3271 + internal_framebuffer_create(struct drm_device *dev, 3272 + struct drm_mode_fb_cmd2 *r, 3273 + struct drm_file *file_priv) 3272 3274 { 3273 3275 struct drm_mode_config *config = &dev->mode_config; 3274 3276 struct drm_framebuffer *fb; ··· 3301 3301 return fb; 3302 3302 } 3303 3303 3304 - mutex_lock(&file_priv->fbs_lock); 3305 - r->fb_id = fb->base.id; 3306 - list_add(&fb->filp_head, &file_priv->fbs); 3307 - DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id); 3308 - mutex_unlock(&file_priv->fbs_lock); 3309 - 3310 3304 return fb; 3311 3305 } 3312 3306 ··· 3322 3328 int drm_mode_addfb2(struct drm_device *dev, 3323 3329 void *data, struct drm_file *file_priv) 3324 3330 { 3331 + struct drm_mode_fb_cmd2 *r = data; 3325 3332 struct drm_framebuffer *fb; 3326 3333 3327 3334 if 
··· 43 43 #include "drm_crtc_internal.h" 44 44 #include "drm_internal.h" 45 45 46 - static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev, 47 - struct drm_mode_fb_cmd2 *r, 48 - struct drm_file *file_priv); 46 + static struct drm_framebuffer * 47 + internal_framebuffer_create(struct drm_device *dev, 48 + struct drm_mode_fb_cmd2 *r, 49 + struct drm_file *file_priv); 49 50 50 51 /* Avoid boilerplate. I'm tired of typing. */ 51 52 #define DRM_ENUM_NAME_FN(fnname, list) \ ··· 2909 2908 */ 2910 2909 if (req->flags & DRM_MODE_CURSOR_BO) { 2911 2910 if (req->handle) { 2912 - fb = add_framebuffer_internal(dev, &fbreq, file_priv); 2911 + fb = internal_framebuffer_create(dev, &fbreq, file_priv); 2913 2912 if (IS_ERR(fb)) { 2914 2913 DRM_DEBUG_KMS("failed to wrap cursor buffer in drm framebuffer\n"); 2915 2914 return PTR_ERR(fb); 2916 2915 } 2917 - 2918 - drm_framebuffer_reference(fb); 2919 2916 } else { 2920 2917 fb = NULL; 2921 2918 } ··· 3266 3267 return 0; 3267 3268 } 3268 3269 3269 - static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev, 3270 - struct drm_mode_fb_cmd2 *r, 3271 - struct drm_file *file_priv) 3270 + static struct drm_framebuffer * 3271 + internal_framebuffer_create(struct drm_device *dev, 3272 + struct drm_mode_fb_cmd2 *r, 3273 + struct drm_file *file_priv) 3272 3274 { 3273 3275 struct drm_mode_config *config = &dev->mode_config; 3274 3276 struct drm_framebuffer *fb; ··· 3301 3301 return fb; 3302 3302 } 3303 3303 3304 - mutex_lock(&file_priv->fbs_lock); 3305 - r->fb_id = fb->base.id; 3306 - list_add(&fb->filp_head, &file_priv->fbs); 3307 - DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id); 3308 - mutex_unlock(&file_priv->fbs_lock); 3309 - 3310 3304 return fb; 3311 3305 } 3312 3306 ··· 3322 3328 int drm_mode_addfb2(struct drm_device *dev, 3323 3329 void *data, struct drm_file *file_priv) 3324 3330 { 3331 + struct drm_mode_fb_cmd2 *r = data; 3325 3332 struct drm_framebuffer *fb; 3326 3333 3327 3334 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 3328 3335 return -EINVAL; 3329 3336 3330 - fb = add_framebuffer_internal(dev, data, file_priv); 3337 + fb = internal_framebuffer_create(dev, r, file_priv); 3331 3338 if (IS_ERR(fb)) 3332 3339 return PTR_ERR(fb); 3340 + 3341 + /* Transfer ownership to the filp for reaping on close */ 3342 + 3343 + DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id); 3344 + mutex_lock(&file_priv->fbs_lock); 3345 + r->fb_id = fb->base.id; 3346 + list_add(&fb->filp_head, &file_priv->fbs); 3347 + mutex_unlock(&file_priv->fbs_lock); 3333 3348 3334 3349 return 0; 3335 3350 }
+8 -3
drivers/gpu/drm/drm_dp_mst_topology.c
··· 733 733 struct drm_dp_sideband_msg_tx *txmsg) 734 734 { 735 735 bool ret; 736 - mutex_lock(&mgr->qlock); 736 + 737 + /* 738 + * All updates to txmsg->state are protected by mgr->qlock, and the two 739 + * cases we check here are terminal states. For those the barriers 740 + * provided by the wake_up/wait_event pair are enough. 741 + */ 737 742 ret = (txmsg->state == DRM_DP_SIDEBAND_TX_RX || 738 743 txmsg->state == DRM_DP_SIDEBAND_TX_TIMEOUT); 739 - mutex_unlock(&mgr->qlock); 740 744 return ret; 741 745 } 742 746 ··· 1367 1363 return 0; 1368 1364 } 1369 1365 1370 - /* must be called holding qlock */ 1371 1366 static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr) 1372 1367 { 1373 1368 struct drm_dp_sideband_msg_tx *txmsg; 1374 1369 int ret; 1370 + 1371 + WARN_ON(!mutex_is_locked(&mgr->qlock)); 1375 1372 1376 1373 /* construct a chunk from the first msg in the tx_msg queue */ 1377 1374 if (list_empty(&mgr->tx_msg_downq)) {
+1 -1
drivers/gpu/drm/drm_mm.c
··· 403 403 unsigned rem; 404 404 405 405 rem = do_div(tmp, alignment); 406 - if (tmp) 406 + if (rem) 407 407 start += alignment - rem; 408 408 } 409 409
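The drm_mm fix above is a one-character bug with real consequences: `do_div()` returns the remainder and stores the quotient back into its first argument, so testing `tmp` (the quotient) instead of `rem` bumped `start` even when it was already aligned. The underlying round-up-to-alignment idiom, sketched standalone (hypothetical helper, using plain `%` instead of the kernel's `do_div`):

```c
/* Round start up to the next multiple of alignment. The drm_mm bug
 * tested the quotient here instead of the remainder, which also
 * adjusted addresses that were already aligned. */
static unsigned long long align_up(unsigned long long start,
                                   unsigned int alignment)
{
        unsigned long long rem = start % alignment;

        if (rem)                        /* only adjust when misaligned */
                start += alignment - rem;
        return start;
}
```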
+20 -5
drivers/gpu/drm/i915/i915_gem.c
··· 2936 2936 req = obj->last_read_req; 2937 2937 2938 2938 /* Do this after OLR check to make sure we make forward progress polling 2939 - * on this IOCTL with a timeout <=0 (like busy ioctl) 2939 + * on this IOCTL with a timeout == 0 (like busy ioctl) 2940 2940 */ 2941 - if (args->timeout_ns <= 0) { 2941 + if (args->timeout_ns == 0) { 2942 2942 ret = -ETIME; 2943 2943 goto out; 2944 2944 } ··· 2948 2948 i915_gem_request_reference(req); 2949 2949 mutex_unlock(&dev->struct_mutex); 2950 2950 2951 - ret = __i915_wait_request(req, reset_counter, true, &args->timeout_ns, 2951 + ret = __i915_wait_request(req, reset_counter, true, 2952 + args->timeout_ns > 0 ? &args->timeout_ns : NULL, 2952 2953 file->driver_priv); 2953 2954 mutex_lock(&dev->struct_mutex); 2954 2955 i915_gem_request_unreference(req); ··· 4793 4792 if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt()) 4794 4793 return -EIO; 4795 4794 4795 + /* Double layer security blanket, see i915_gem_init() */ 4796 + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); 4797 + 4796 4798 if (dev_priv->ellc_size) 4797 4799 I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf)); 4798 4800 ··· 4828 4824 for_each_ring(ring, dev_priv, i) { 4829 4825 ret = ring->init_hw(ring); 4830 4826 if (ret) 4831 - return ret; 4827 + goto out; 4832 4828 } 4833 4829 4834 4830 for (i = 0; i < NUM_L3_SLICES(dev); i++) ··· 4845 4841 DRM_ERROR("Context enable failed %d\n", ret); 4846 4842 i915_gem_cleanup_ringbuffer(dev); 4847 4843 4848 - return ret; 4844 + goto out; 4849 4845 } 4850 4846 4847 + out: 4848 + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 4851 4849 return ret; 4852 4850 } 4853 4851 ··· 4883 4877 dev_priv->gt.stop_ring = intel_logical_ring_stop; 4884 4878 } 4885 4879 4880 + /* This is just a security blanket to placate dragons. 4881 + * On some systems, we very sporadically observe that the first TLBs 4882 + * used by the CS may be stale, despite us poking the TLB reset. 
If 4883 + * we hold the forcewake during initialisation these problems 4884 + * just magically go away. 4885 + */ 4886 + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); 4887 + 4886 4888 ret = i915_gem_init_userptr(dev); 4887 4889 if (ret) 4888 4890 goto out_unlock; ··· 4917 4903 } 4918 4904 4919 4905 out_unlock: 4906 + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 4920 4907 mutex_unlock(&dev->struct_mutex); 4921 4908 4922 4909 return ret;
+1 -1
drivers/gpu/drm/i915/intel_display.c
··· 9716 9716 struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 9717 9717 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 9718 9718 9719 - WARN_ON(!in_irq()); 9719 + WARN_ON(!in_interrupt()); 9720 9720 9721 9721 if (crtc == NULL) 9722 9722 return;
+7 -1
drivers/gpu/drm/i915/intel_uncore.c
··· 1048 1048 1049 1049 /* We need to init first for ECOBUS access and then 1050 1050 * determine later if we want to reinit, in case of MT access is 1051 - * not working 1051 + * not working. In this stage we don't know which flavour this 1052 + * ivb is, so it is better to reset also the gen6 fw registers 1053 + * before the ecobus check. 1052 1054 */ 1055 + 1056 + __raw_i915_write32(dev_priv, FORCEWAKE, 0); 1057 + __raw_posting_read(dev_priv, ECOBUS); 1058 + 1053 1059 fw_domain_init(dev_priv, FW_DOMAIN_ID_RENDER, 1054 1060 FORCEWAKE_MT, FORCEWAKE_MT_ACK); 1055 1061
+44 -22
drivers/gpu/drm/radeon/radeon_fence.c
··· 1030 1030 return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags); 1031 1031 } 1032 1032 1033 + struct radeon_wait_cb { 1034 + struct fence_cb base; 1035 + struct task_struct *task; 1036 + }; 1037 + 1038 + static void 1039 + radeon_fence_wait_cb(struct fence *fence, struct fence_cb *cb) 1040 + { 1041 + struct radeon_wait_cb *wait = 1042 + container_of(cb, struct radeon_wait_cb, base); 1043 + 1044 + wake_up_process(wait->task); 1045 + } 1046 + 1033 1047 static signed long radeon_fence_default_wait(struct fence *f, bool intr, 1034 1048 signed long t) 1035 1049 { 1036 1050 struct radeon_fence *fence = to_radeon_fence(f); 1037 1051 struct radeon_device *rdev = fence->rdev; 1038 - bool signaled; 1052 + struct radeon_wait_cb cb; 1039 1053 1040 - fence_enable_sw_signaling(&fence->base); 1054 + cb.task = current; 1041 1055 1042 - /* 1043 - * This function has to return -EDEADLK, but cannot hold 1044 - * exclusive_lock during the wait because some callers 1045 - * may already hold it. This means checking needs_reset without 1046 - * lock, and not fiddling with any gpu internals. 1047 - * 1048 - * The callback installed with fence_enable_sw_signaling will 1049 - * run before our wait_event_*timeout call, so we will see 1050 - * both the signaled fence and the changes to needs_reset. 
1051 - */ 1056 + if (fence_add_callback(f, &cb.base, radeon_fence_wait_cb)) 1057 + return t; 1052 1058 1053 - if (intr) 1054 - t = wait_event_interruptible_timeout(rdev->fence_queue, 1055 - ((signaled = radeon_test_signaled(fence)) || 1056 - rdev->needs_reset), t); 1057 - else 1058 - t = wait_event_timeout(rdev->fence_queue, 1059 - ((signaled = radeon_test_signaled(fence)) || 1060 - rdev->needs_reset), t); 1059 + while (t > 0) { 1060 + if (intr) 1061 + set_current_state(TASK_INTERRUPTIBLE); 1062 + else 1063 + set_current_state(TASK_UNINTERRUPTIBLE); 1061 1064 1062 - if (t > 0 && !signaled) 1063 - return -EDEADLK; 1065 + /* 1066 + * radeon_test_signaled must be called after 1067 + * set_current_state to prevent a race with wake_up_process 1068 + */ 1069 + if (radeon_test_signaled(fence)) 1070 + break; 1071 + 1072 + if (rdev->needs_reset) { 1073 + t = -EDEADLK; 1074 + break; 1075 + } 1076 + 1077 + t = schedule_timeout(t); 1078 + 1079 + if (t > 0 && intr && signal_pending(current)) 1080 + t = -ERESTARTSYS; 1081 + } 1082 + 1083 + __set_current_state(TASK_RUNNING); 1084 + fence_remove_callback(f, &cb.base); 1085 + 1064 1086 return t; 1065 1087 } 1066 1088
+2 -4
drivers/gpu/drm/radeon/si.c
··· 7130 7130 WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~UPLL_BYPASS_EN_MASK); 7131 7131 7132 7132 if (!vclk || !dclk) { 7133 - /* keep the Bypass mode, put PLL to sleep */ 7134 - WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK); 7133 + /* keep the Bypass mode */ 7135 7134 return 0; 7136 7135 } 7137 7136 ··· 7146 7147 /* set VCO_MODE to 1 */ 7147 7148 WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_VCO_MODE_MASK, ~UPLL_VCO_MODE_MASK); 7148 7149 7149 - /* toggle UPLL_SLEEP to 1 then back to 0 */ 7150 - WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK); 7150 + /* disable sleep mode */ 7151 7151 WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_SLEEP_MASK); 7152 7152 7153 7153 /* deassert UPLL_RESET */
+41 -37
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 725 725 goto out_err1;
726 726 }
727 727
728 - ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM,
729 - (dev_priv->vram_size >> PAGE_SHIFT));
730 - if (unlikely(ret != 0)) {
731 - DRM_ERROR("Failed initializing memory manager for VRAM.\n");
732 - goto out_err2;
733 - }
734 -
735 - dev_priv->has_gmr = true;
736 - if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) ||
737 - refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR,
738 - VMW_PL_GMR) != 0) {
739 - DRM_INFO("No GMR memory available. "
740 - "Graphics memory resources are very limited.\n");
741 - dev_priv->has_gmr = false;
742 - }
743 -
744 - if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) {
745 - dev_priv->has_mob = true;
746 - if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB,
747 - VMW_PL_MOB) != 0) {
748 - DRM_INFO("No MOB memory available. "
749 - "3D will be disabled.\n");
750 - dev_priv->has_mob = false;
751 - }
752 - }
753 -
754 728 dev_priv->mmio_mtrr = arch_phys_wc_add(dev_priv->mmio_start,
755 729 dev_priv->mmio_size);
756 730
··· 787 813 goto out_no_fman;
788 814 }
789 815
816 +
817 + ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM,
818 + (dev_priv->vram_size >> PAGE_SHIFT));
819 + if (unlikely(ret != 0)) {
820 + DRM_ERROR("Failed initializing memory manager for VRAM.\n");
821 + goto out_no_vram;
822 + }
823 +
824 + dev_priv->has_gmr = true;
825 + if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) ||
826 + refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR,
827 + VMW_PL_GMR) != 0) {
828 + DRM_INFO("No GMR memory available. "
829 + "Graphics memory resources are very limited.\n");
830 + dev_priv->has_gmr = false;
831 + }
832 +
833 + if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) {
834 + dev_priv->has_mob = true;
835 + if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB,
836 + VMW_PL_MOB) != 0) {
837 + DRM_INFO("No MOB memory available. "
838 + "3D will be disabled.\n");
839 + dev_priv->has_mob = false;
840 + }
841 + }
842 +
790 843 vmw_kms_save_vga(dev_priv);
791 844
792 845 /* Start kms and overlay systems, needs fifo. */
··· 839 838 vmw_kms_close(dev_priv);
840 839 out_no_kms:
841 840 vmw_kms_restore_vga(dev_priv);
841 + if (dev_priv->has_mob)
842 + (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
843 + if (dev_priv->has_gmr)
844 + (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
845 + (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
846 + out_no_vram:
842 847 vmw_fence_manager_takedown(dev_priv->fman);
843 848 out_no_fman:
844 849 if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
··· 860 853 iounmap(dev_priv->mmio_virt);
861 854 out_err3:
862 855 arch_phys_wc_del(dev_priv->mmio_mtrr);
863 - if (dev_priv->has_mob)
864 - (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
865 - if (dev_priv->has_gmr)
866 - (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
867 - (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
868 - out_err2:
869 856 (void)ttm_bo_device_release(&dev_priv->bdev);
870 857 out_err1:
871 858 vmw_ttm_global_release(dev_priv);
··· 888 887 }
889 888 vmw_kms_close(dev_priv);
890 889 vmw_overlay_close(dev_priv);
890 +
891 + if (dev_priv->has_mob)
892 + (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
893 + if (dev_priv->has_gmr)
894 + (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
895 + (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
896 +
891 897 vmw_fence_manager_takedown(dev_priv->fman);
892 898 if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
893 899 drm_irq_uninstall(dev_priv->dev);
··· 906 898 ttm_object_device_release(&dev_priv->tdev);
907 899 iounmap(dev_priv->mmio_virt);
908 900 arch_phys_wc_del(dev_priv->mmio_mtrr);
909 - if (dev_priv->has_mob)
910 - (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
911 - if (dev_priv->has_gmr)
912 - (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
913 - (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
914 901 (void)ttm_bo_device_release(&dev_priv->bdev);
915 902 vmw_ttm_global_release(dev_priv);
916 903
··· 1238 1235 {
1239 1236 struct drm_device *dev = pci_get_drvdata(pdev);
1240 1237
1238 + pci_disable_device(pdev);
1241 1239 drm_put_dev(dev);
1242 1240 }
1243 1241
+9 -9
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 890 890 ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo); 891 891 if (unlikely(ret != 0)) { 892 892 DRM_ERROR("Could not find or use MOB buffer.\n"); 893 - return -EINVAL; 893 + ret = -EINVAL; 894 + goto out_no_reloc; 894 895 } 895 896 bo = &vmw_bo->base; 896 897 ··· 915 914 916 915 out_no_reloc: 917 916 vmw_dmabuf_unreference(&vmw_bo); 918 - vmw_bo_p = NULL; 917 + *vmw_bo_p = NULL; 919 918 return ret; 920 919 } 921 920 ··· 952 951 ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo); 953 952 if (unlikely(ret != 0)) { 954 953 DRM_ERROR("Could not find or use GMR region.\n"); 955 - return -EINVAL; 954 + ret = -EINVAL; 955 + goto out_no_reloc; 956 956 } 957 957 bo = &vmw_bo->base; 958 958 ··· 976 974 977 975 out_no_reloc: 978 976 vmw_dmabuf_unreference(&vmw_bo); 979 - vmw_bo_p = NULL; 977 + *vmw_bo_p = NULL; 980 978 return ret; 981 979 } 982 980 ··· 2782 2780 NULL, arg->command_size, arg->throttle_us, 2783 2781 (void __user *)(unsigned long)arg->fence_rep, 2784 2782 NULL); 2785 - 2783 + ttm_read_unlock(&dev_priv->reservation_sem); 2786 2784 if (unlikely(ret != 0)) 2787 - goto out_unlock; 2785 + return ret; 2788 2786 2789 2787 vmw_kms_cursor_post_execbuf(dev_priv); 2790 2788 2791 - out_unlock: 2792 - ttm_read_unlock(&dev_priv->reservation_sem); 2793 - return ret; 2789 + return 0; 2794 2790 }
+3 -11
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 2033 2033 int i; 2034 2034 struct drm_mode_config *mode_config = &dev->mode_config; 2035 2035 2036 - ret = ttm_read_lock(&dev_priv->reservation_sem, true); 2037 - if (unlikely(ret != 0)) 2038 - return ret; 2039 - 2040 2036 if (!arg->num_outputs) { 2041 2037 struct drm_vmw_rect def_rect = {0, 0, 800, 600}; 2042 2038 vmw_du_update_layout(dev_priv, 1, &def_rect); 2043 - goto out_unlock; 2039 + return 0; 2044 2040 } 2045 2041 2046 2042 rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect); 2047 2043 rects = kcalloc(arg->num_outputs, sizeof(struct drm_vmw_rect), 2048 2044 GFP_KERNEL); 2049 - if (unlikely(!rects)) { 2050 - ret = -ENOMEM; 2051 - goto out_unlock; 2052 - } 2045 + if (unlikely(!rects)) 2046 + return -ENOMEM; 2053 2047 2054 2048 user_rects = (void __user *)(unsigned long)arg->rects; 2055 2049 ret = copy_from_user(rects, user_rects, rects_size); ··· 2068 2074 2069 2075 out_free: 2070 2076 kfree(rects); 2071 - out_unlock: 2072 - ttm_read_unlock(&dev_priv->reservation_sem); 2073 2077 return ret; 2074 2078 }
+1
drivers/hid/hid-core.c
··· 1959 1959 { HID_USB_DEVICE(USB_VENDOR_ID_THRUSTMASTER, 0xb65a) }, 1960 1960 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) }, 1961 1961 { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) }, 1962 + { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_PRO) }, 1962 1963 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED, USB_DEVICE_ID_TOPSEED_CYBERLINK) }, 1963 1964 { HID_USB_DEVICE(USB_VENDOR_ID_TOPSEED2, USB_DEVICE_ID_TOPSEED2_RF_COMBO) }, 1964 1965 { HID_USB_DEVICE(USB_VENDOR_ID_TWINHAN, USB_DEVICE_ID_TWINHAN_IR_REMOTE) },
+2
drivers/hid/hid-ids.h
··· 586 586 #define USB_VENDOR_ID_LOGITECH 0x046d 587 587 #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e 588 588 #define USB_DEVICE_ID_LOGITECH_T651 0xb00c 589 + #define USB_DEVICE_ID_LOGITECH_C077 0xc007 589 590 #define USB_DEVICE_ID_LOGITECH_RECEIVER 0xc101 590 591 #define USB_DEVICE_ID_LOGITECH_HARMONY_FIRST 0xc110 591 592 #define USB_DEVICE_ID_LOGITECH_HARMONY_LAST 0xc14f ··· 899 898 #define USB_VENDOR_ID_TIVO 0x150a 900 899 #define USB_DEVICE_ID_TIVO_SLIDE_BT 0x1200 901 900 #define USB_DEVICE_ID_TIVO_SLIDE 0x1201 901 + #define USB_DEVICE_ID_TIVO_SLIDE_PRO 0x1203 902 902 903 903 #define USB_VENDOR_ID_TOPSEED 0x0766 904 904 #define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204
+1
drivers/hid/hid-tivo.c
··· 64 64 /* TiVo Slide Bluetooth remote, pairs with a Broadcom dongle */ 65 65 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_BT) }, 66 66 { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE) }, 67 + { HID_USB_DEVICE(USB_VENDOR_ID_TIVO, USB_DEVICE_ID_TIVO_SLIDE_PRO) }, 67 68 { } 68 69 }; 69 70 MODULE_DEVICE_TABLE(hid, tivo_devices);
+1
drivers/hid/usbhid/hid-quirks.c
··· 78 78 { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, 79 79 { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS }, 80 80 { USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET }, 81 + { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL }, 81 82 { USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET }, 82 83 { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS }, 83 84 { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3_JP, HID_QUIRK_NO_INIT_REPORTS },
+51 -33
drivers/hid/wacom_wac.c
··· 551 551 (features->type == CINTIQ && !(data[1] & 0x40))) 552 552 return 1; 553 553 554 - if (features->quirks & WACOM_QUIRK_MULTI_INPUT) 554 + if (wacom->shared) { 555 555 wacom->shared->stylus_in_proximity = true; 556 + 557 + if (wacom->shared->touch_down) 558 + return 1; 559 + } 556 560 557 561 /* in Range while exiting */ 558 562 if (((data[1] & 0xfe) == 0x20) && wacom->reporting_data) { ··· 1047 1043 struct input_dev *input = wacom->input; 1048 1044 unsigned char *data = wacom->data; 1049 1045 int i; 1050 - int current_num_contacts = 0; 1046 + int current_num_contacts = data[61]; 1051 1047 int contacts_to_send = 0; 1052 1048 int num_contacts_left = 4; /* maximum contacts per packet */ 1053 1049 int byte_per_packet = WACOM_BYTES_PER_24HDT_PACKET; 1054 1050 int y_offset = 2; 1051 + static int contact_with_no_pen_down_count = 0; 1055 1052 1056 1053 if (wacom->features.type == WACOM_27QHDT) { 1057 1054 current_num_contacts = data[63]; 1058 1055 num_contacts_left = 10; 1059 1056 byte_per_packet = WACOM_BYTES_PER_QHDTHID_PACKET; 1060 1057 y_offset = 0; 1061 - } else { 1062 - current_num_contacts = data[61]; 1063 1058 } 1064 1059 1065 1060 /* 1066 1061 * First packet resets the counter since only the first 1067 1062 * packet in series will have non-zero current_num_contacts. 
1068 1063 */ 1069 - if (current_num_contacts) 1064 + if (current_num_contacts) { 1070 1065 wacom->num_contacts_left = current_num_contacts; 1066 + contact_with_no_pen_down_count = 0; 1067 + } 1071 1068 1072 1069 contacts_to_send = min(num_contacts_left, wacom->num_contacts_left); 1073 1070 ··· 1101 1096 input_report_abs(input, ABS_MT_WIDTH_MINOR, min(w, h)); 1102 1097 input_report_abs(input, ABS_MT_ORIENTATION, w > h); 1103 1098 } 1099 + contact_with_no_pen_down_count++; 1104 1100 } 1105 1101 } 1106 1102 input_mt_report_pointer_emulation(input, true); 1107 1103 1108 1104 wacom->num_contacts_left -= contacts_to_send; 1109 - if (wacom->num_contacts_left <= 0) 1105 + if (wacom->num_contacts_left <= 0) { 1110 1106 wacom->num_contacts_left = 0; 1111 - 1112 - wacom->shared->touch_down = (wacom->num_contacts_left > 0); 1107 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0); 1108 + } 1113 1109 return 1; 1114 1110 } 1115 1111 ··· 1122 1116 int current_num_contacts = data[2]; 1123 1117 int contacts_to_send = 0; 1124 1118 int x_offset = 0; 1119 + static int contact_with_no_pen_down_count = 0; 1125 1120 1126 1121 /* MTTPC does not support Height and Width */ 1127 1122 if (wacom->features.type == MTTPC || wacom->features.type == MTTPC_B) ··· 1132 1125 * First packet resets the counter since only the first 1133 1126 * packet in series will have non-zero current_num_contacts. 
1134 1127 */
1135 - if (current_num_contacts)
1128 + if (current_num_contacts) {
1136 1129 wacom->num_contacts_left = current_num_contacts;
1130 + contact_with_no_pen_down_count = 0;
1131 + }
1137 1132
1138 1133 /* There are at most 5 contacts per packet */
1139 1134 contacts_to_send = min(5, wacom->num_contacts_left);
··· 1156 1147 int y = get_unaligned_le16(&data[offset + x_offset + 9]);
1157 1148 input_report_abs(input, ABS_MT_POSITION_X, x);
1158 1149 input_report_abs(input, ABS_MT_POSITION_Y, y);
1150 + contact_with_no_pen_down_count++;
1159 1151 }
1160 1152 }
1161 1153 input_mt_report_pointer_emulation(input, true);
1162 1154
1163 1155 wacom->num_contacts_left -= contacts_to_send;
1164 - if (wacom->num_contacts_left < 0)
1156 + if (wacom->num_contacts_left <= 0) {
1165 1157 wacom->num_contacts_left = 0;
1166 -
1167 - wacom->shared->touch_down = (wacom->num_contacts_left > 0);
1158 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
1159 + }
1168 1160 return 1;
1169 1161 }
1170 1162
··· 1203 1193 {
1204 1194 unsigned char *data = wacom->data;
1205 1195 struct input_dev *input = wacom->input;
1206 - bool prox;
1196 + bool prox = !wacom->shared->stylus_in_proximity;
1207 1197 int x = 0, y = 0;
1208 1198
1209 1199 if (wacom->features.touch_max > 1 || len > WACOM_PKGLEN_TPC2FG)
1210 1200 return 0;
1211 1201
1212 - if (!wacom->shared->stylus_in_proximity) {
1213 - if (len == WACOM_PKGLEN_TPC1FG) {
1214 - prox = data[0] & 0x01;
1215 - x = get_unaligned_le16(&data[1]);
1216 - y = get_unaligned_le16(&data[3]);
1217 - } else if (len == WACOM_PKGLEN_TPC1FG_B) {
1218 - prox = data[2] & 0x01;
1219 - x = get_unaligned_le16(&data[3]);
1220 - y = get_unaligned_le16(&data[5]);
1221 - } else {
1222 - prox = data[1] & 0x01;
1223 - x = le16_to_cpup((__le16 *)&data[2]);
1224 - y = le16_to_cpup((__le16 *)&data[4]);
1225 - }
1226 - } else
1227 - /* force touch out when pen is in prox */
1228 - prox = 0;
1202 + if (len == WACOM_PKGLEN_TPC1FG) {
1203 + prox = prox && (data[0] & 0x01);
1204 + x = get_unaligned_le16(&data[1]);
1205 + y = get_unaligned_le16(&data[3]);
1206 + } else if (len == WACOM_PKGLEN_TPC1FG_B) {
1207 + prox = prox && (data[2] & 0x01);
1208 + x = get_unaligned_le16(&data[3]);
1209 + y = get_unaligned_le16(&data[5]);
1210 + } else {
1211 + prox = prox && (data[1] & 0x01);
1212 + x = le16_to_cpup((__le16 *)&data[2]);
1213 + y = le16_to_cpup((__le16 *)&data[4]);
1214 + }
1229 1215
1230 1216 if (prox) {
1231 1217 input_report_abs(input, ABS_X, x);
··· 1619 1613 struct input_dev *pad_input = wacom->pad_input;
1620 1614 unsigned char *data = wacom->data;
1621 1615 int i;
1616 + int contact_with_no_pen_down_count = 0;
1622 1617
1623 1618 if (data[0] != 0x02)
1624 1619 return 0;
··· 1647 1640 }
1648 1641 input_report_abs(input, ABS_MT_POSITION_X, x);
1649 1642 input_report_abs(input, ABS_MT_POSITION_Y, y);
1643 + contact_with_no_pen_down_count++;
1650 1644 }
1651 1645 }
1652 1646
··· 1657 1649 input_report_key(pad_input, BTN_FORWARD, (data[1] & 0x04) != 0);
1658 1650 input_report_key(pad_input, BTN_BACK, (data[1] & 0x02) != 0);
1659 1651 input_report_key(pad_input, BTN_RIGHT, (data[1] & 0x01) != 0);
1652 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
1660 1653
1661 1654 return 1;
1662 1655 }
1663 1656
1664 - static void wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data)
1657 + static int wacom_bpt3_touch_msg(struct wacom_wac *wacom, unsigned char *data, int last_touch_count)
1665 1658 {
1666 1659 struct wacom_features *features = &wacom->features;
1667 1660 struct input_dev *input = wacom->input;
··· 1670 1661 int slot = input_mt_get_slot_by_key(input, data[0]);
1671 1662
1672 1663 if (slot < 0)
1673 - return;
1664 + return 0;
1674 1665
1675 1666 touch = touch && !wacom->shared->stylus_in_proximity;
1676 1667
··· 1702 1693 input_report_abs(input, ABS_MT_POSITION_Y, y);
1703 1694 input_report_abs(input, ABS_MT_TOUCH_MAJOR, width);
1704 1695 input_report_abs(input, ABS_MT_TOUCH_MINOR, height);
1696 + last_touch_count++;
1705 1697 }
1698 + return last_touch_count;
1706 1699 }
1707 1700
1708 1701 static void wacom_bpt3_button_msg(struct wacom_wac *wacom, unsigned char *data)
··· 1729 1718 unsigned char *data = wacom->data;
1730 1719 int count = data[1] & 0x07;
1731 1720 int i;
1721 + int contact_with_no_pen_down_count = 0;
1732 1722
1733 1723 if (data[0] != 0x02)
1734 1724 return 0;
··· 1740 1728 int msg_id = data[offset];
1741 1729
1742 1730 if (msg_id >= 2 && msg_id <= 17)
1743 - wacom_bpt3_touch_msg(wacom, data + offset);
1731 + contact_with_no_pen_down_count =
1732 + wacom_bpt3_touch_msg(wacom, data + offset,
1733 + contact_with_no_pen_down_count);
1744 1734 else if (msg_id == 128)
1745 1735 wacom_bpt3_button_msg(wacom, data + offset);
1746 1736
1747 1737 }
1748 1738 input_mt_report_pointer_emulation(input, true);
1739 + wacom->shared->touch_down = (contact_with_no_pen_down_count > 0);
1749 1740
1750 1741 return 1;
1751 1742 }
··· 1773 1758 }
1774 1759 return 0;
1775 1760 }
1761 +
1762 + if (wacom->shared->touch_down)
1763 + return 0;
1776 1764
1777 1765 prox = (data[1] & 0x20) == 0x20;
1778 1766
-3
drivers/i2c/i2c-core.c
··· 679 679 status = driver->remove(client); 680 680 } 681 681 682 - if (dev->of_node) 683 - irq_dispose_mapping(client->irq); 684 - 685 682 dev_pm_domain_detach(&client->dev, true); 686 683 return status; 687 684 }
+2 -2
drivers/ide/ide-tape.c
··· 1793 1793 tape->best_dsc_rw_freq = clamp_t(unsigned long, t, IDETAPE_DSC_RW_MIN, 1794 1794 IDETAPE_DSC_RW_MAX); 1795 1795 printk(KERN_INFO "ide-tape: %s <-> %s: %dKBps, %d*%dkB buffer, " 1796 - "%lums tDSC%s\n", 1796 + "%ums tDSC%s\n", 1797 1797 drive->name, tape->name, *(u16 *)&tape->caps[14], 1798 1798 (*(u16 *)&tape->caps[16] * 512) / tape->buffer_size, 1799 1799 tape->buffer_size / 1024, 1800 - tape->best_dsc_rw_freq * 1000 / HZ, 1800 + jiffies_to_msecs(tape->best_dsc_rw_freq), 1801 1801 (drive->dev_flags & IDE_DFLAG_USING_DMA) ? ", DMA" : ""); 1802 1802 1803 1803 ide_proc_register_driver(drive, tape->driver);
+16 -4
drivers/infiniband/hw/mlx4/mad.c
··· 64 64 #define GUID_TBL_BLK_NUM_ENTRIES 8 65 65 #define GUID_TBL_BLK_SIZE (GUID_TBL_ENTRY_SIZE * GUID_TBL_BLK_NUM_ENTRIES) 66 66 67 + /* Counters should be saturate once they reach their maximum value */ 68 + #define ASSIGN_32BIT_COUNTER(counter, value) do {\ 69 + if ((value) > U32_MAX) \ 70 + counter = cpu_to_be32(U32_MAX); \ 71 + else \ 72 + counter = cpu_to_be32(value); \ 73 + } while (0) 74 + 67 75 struct mlx4_mad_rcv_buf { 68 76 struct ib_grh grh; 69 77 u8 payload[256]; ··· 814 806 static void edit_counter(struct mlx4_counter *cnt, 815 807 struct ib_pma_portcounters *pma_cnt) 816 808 { 817 - pma_cnt->port_xmit_data = cpu_to_be32((be64_to_cpu(cnt->tx_bytes)>>2)); 818 - pma_cnt->port_rcv_data = cpu_to_be32((be64_to_cpu(cnt->rx_bytes)>>2)); 819 - pma_cnt->port_xmit_packets = cpu_to_be32(be64_to_cpu(cnt->tx_frames)); 820 - pma_cnt->port_rcv_packets = cpu_to_be32(be64_to_cpu(cnt->rx_frames)); 809 + ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data, 810 + (be64_to_cpu(cnt->tx_bytes) >> 2)); 811 + ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data, 812 + (be64_to_cpu(cnt->rx_bytes) >> 2)); 813 + ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets, 814 + be64_to_cpu(cnt->tx_frames)); 815 + ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets, 816 + be64_to_cpu(cnt->rx_frames)); 821 817 } 822 818 823 819 static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
+5 -1
drivers/infiniband/hw/mlx4/main.c
··· 2697 2697 spin_lock_bh(&ibdev->iboe.lock); 2698 2698 for (i = 0; i < MLX4_MAX_PORTS; ++i) { 2699 2699 struct net_device *curr_netdev = ibdev->iboe.netdevs[i]; 2700 + enum ib_port_state curr_port_state; 2700 2701 2701 - enum ib_port_state curr_port_state = 2702 + if (!curr_netdev) 2703 + continue; 2704 + 2705 + curr_port_state = 2702 2706 (netif_running(curr_netdev) && 2703 2707 netif_carrier_ok(curr_netdev)) ? 2704 2708 IB_PORT_ACTIVE : IB_PORT_DOWN;
+157 -55
drivers/input/mouse/synaptics.c
··· 67 67 #define X_MAX_POSITIVE 8176
68 68 #define Y_MAX_POSITIVE 8176
69 69
70 - /* maximum ABS_MT_POSITION displacement (in mm) */
71 - #define DMAX 10
72 -
73 70 /*****************************************************************************
74 71 * Stuff we need even when we do not want native Synaptics support
75 72 ****************************************************************************/
··· 120 123
121 124 static bool cr48_profile_sensor;
122 125
126 + #define ANY_BOARD_ID 0
123 127 struct min_max_quirk {
124 128 const char * const *pnp_ids;
129 + struct {
130 + unsigned long int min, max;
131 + } board_id;
125 132 int x_min, x_max, y_min, y_max;
126 133 };
127 134
128 135 static const struct min_max_quirk min_max_pnpid_table[] = {
129 136 {
130 137 (const char * const []){"LEN0033", NULL},
138 + {ANY_BOARD_ID, ANY_BOARD_ID},
131 139 1024, 5052, 2258, 4832
132 140 },
133 141 {
134 - (const char * const []){"LEN0035", "LEN0042", NULL},
142 + (const char * const []){"LEN0042", NULL},
143 + {ANY_BOARD_ID, ANY_BOARD_ID},
135 144 1232, 5710, 1156, 4696
136 145 },
137 146 {
138 147 (const char * const []){"LEN0034", "LEN0036", "LEN0037",
139 148 "LEN0039", "LEN2002", "LEN2004",
140 149 NULL},
150 + {ANY_BOARD_ID, 2961},
141 151 1024, 5112, 2024, 4832
142 152 },
143 153 {
144 154 (const char * const []){"LEN2001", NULL},
155 + {ANY_BOARD_ID, ANY_BOARD_ID},
145 156 1024, 5022, 2508, 4832
146 157 },
147 158 {
148 159 (const char * const []){"LEN2006", NULL},
160 + {ANY_BOARD_ID, ANY_BOARD_ID},
149 161 1264, 5675, 1171, 4688
150 162 },
151 163 { }
··· 181 175 "LEN0041",
182 176 "LEN0042", /* Yoga */
183 177 "LEN0045",
184 - "LEN0046",
185 178 "LEN0047",
186 - "LEN0048",
187 179 "LEN0049",
188 180 "LEN2000",
189 181 "LEN2001", /* Edge E431 */
··· 239 235 return 0;
240 236 }
241 237
238 + static int synaptics_more_extended_queries(struct psmouse *psmouse)
239 + {
240 + struct synaptics_data *priv = psmouse->private;
241 + unsigned char buf[3];
242 +
243 + if (synaptics_send_cmd(psmouse, SYN_QUE_MEXT_CAPAB_10, buf))
244 + return -1;
245 +
246 + priv->ext_cap_10 = (buf[0]<<16) | (buf[1]<<8) | buf[2];
247 +
248 + return 0;
249 + }
250 +
242 251 /*
243 - * Read the board id from the touchpad
252 + * Read the board id and the "More Extended Queries" from the touchpad
244 253 * The board id is encoded in the "QUERY MODES" response
245 254 */
246 - static int synaptics_board_id(struct psmouse *psmouse)
255 + static int synaptics_query_modes(struct psmouse *psmouse)
247 256 {
248 257 struct synaptics_data *priv = psmouse->private;
249 258 unsigned char bid[3];
250 259
260 + /* firmwares prior 7.5 have no board_id encoded */
261 + if (SYN_ID_FULL(priv->identity) < 0x705)
262 + return 0;
263 +
251 264 if (synaptics_send_cmd(psmouse, SYN_QUE_MODES, bid))
252 265 return -1;
253 266 priv->board_id = ((bid[0] & 0xfc) << 6) | bid[1];
267 +
268 + if (SYN_MEXT_CAP_BIT(bid[0]))
269 + return synaptics_more_extended_queries(psmouse);
270 +
254 271 return 0;
255 272 }
256 273
··· 371 346 {
372 347 struct synaptics_data *priv = psmouse->private;
373 348 unsigned char resp[3];
374 - int i;
375 349
376 350 if (SYN_ID_MAJOR(priv->identity) < 4)
377 351 return 0;
··· 382 358 }
383 359 }
384 360
385 - for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) {
386 - if (psmouse_matches_pnp_id(psmouse,
387 - min_max_pnpid_table[i].pnp_ids)) {
388 - priv->x_min = min_max_pnpid_table[i].x_min;
389 - priv->x_max = min_max_pnpid_table[i].x_max;
390 - priv->y_min = min_max_pnpid_table[i].y_min;
391 - priv->y_max = min_max_pnpid_table[i].y_max;
392 - return 0;
393 - }
394 - }
395 -
396 361 if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 5 &&
397 362 SYN_CAP_MAX_DIMENSIONS(priv->ext_cap_0c)) {
398 363 if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MAX_COORDS, resp)) {
··· 390 377 } else {
391 378 priv->x_max = (resp[0] << 5) | ((resp[1] & 0x0f) << 1);
392 379 priv->y_max = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3);
380 + psmouse_info(psmouse,
381 + "queried max coordinates: x [..%d], y [..%d]\n",
382 + priv->x_max, priv->y_max);
393 383 }
394 384 }
395 385
396 - if (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 &&
397 - SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c)) {
386 + if (SYN_CAP_MIN_DIMENSIONS(priv->ext_cap_0c) &&
387 + (SYN_EXT_CAP_REQUESTS(priv->capabilities) >= 7 ||
388 + /*
389 + * Firmware v8.1 does not report proper number of extended
390 + * capabilities, but has been proven to report correct min
391 + * coordinates.
392 + */
393 + SYN_ID_FULL(priv->identity) == 0x801)) {
398 394 if (synaptics_send_cmd(psmouse, SYN_QUE_EXT_MIN_COORDS, resp)) {
399 395 psmouse_warn(psmouse,
400 396 "device claims to have min coordinates query, but I'm not able to read it.\n");
401 397 } else {
402 398 priv->x_min = (resp[0] << 5) | ((resp[1] & 0x0f) << 1);
403 399 priv->y_min = (resp[2] << 5) | ((resp[1] & 0xf0) >> 3);
400 + psmouse_info(psmouse,
401 + "queried min coordinates: x [%d..], y [%d..]\n",
402 + priv->x_min, priv->y_min);
404 403 }
405 404 }
406 405
407 406 return 0;
407 + }
408 +
409 + /*
410 + * Apply quirk(s) if the hardware matches
411 + */
412 +
413 + static void synaptics_apply_quirks(struct psmouse *psmouse)
414 + {
415 + struct synaptics_data *priv = psmouse->private;
416 + int i;
417 +
418 + for (i = 0; min_max_pnpid_table[i].pnp_ids; i++) {
419 + if (!psmouse_matches_pnp_id(psmouse,
420 + min_max_pnpid_table[i].pnp_ids))
421 + continue;
422 +
423 + if (min_max_pnpid_table[i].board_id.min != ANY_BOARD_ID &&
424 + priv->board_id < min_max_pnpid_table[i].board_id.min)
425 + continue;
426 +
427 + if (min_max_pnpid_table[i].board_id.max != ANY_BOARD_ID &&
428 + priv->board_id > min_max_pnpid_table[i].board_id.max)
429 + continue;
430 +
431 + priv->x_min = min_max_pnpid_table[i].x_min;
432 + priv->x_max = min_max_pnpid_table[i].x_max;
433 + priv->y_min = min_max_pnpid_table[i].y_min;
434 + priv->y_max = min_max_pnpid_table[i].y_max;
435 + psmouse_info(psmouse,
436 + "quirked min/max coordinates: x [%d..%d], y [%d..%d]\n",
437 + priv->x_min, priv->x_max,
438 + priv->y_min, priv->y_max);
439 + break;
440 + }
408 441 }
409 442
410 443 static int synaptics_query_hardware(struct psmouse *psmouse)
··· 461 402 return -1;
462 403 if (synaptics_firmware_id(psmouse))
463 404 return -1;
464 - if (synaptics_board_id(psmouse))
405 + if (synaptics_query_modes(psmouse))
465 406 return -1;
466 407 if (synaptics_capability(psmouse))
467 408 return -1;
468 409 if (synaptics_resolution(psmouse))
469 410 return -1;
411 +
412 + synaptics_apply_quirks(psmouse);
470 413
471 414 return 0;
472 415 }
··· 577 516 return (buf[0] & 0xFC) == 0x84 && (buf[3] & 0xCC) == 0xC4;
578 517 }
579 518
580 - static void synaptics_pass_pt_packet(struct serio *ptport, unsigned char *packet)
519 + static void synaptics_pass_pt_packet(struct psmouse *psmouse,
520 + struct serio *ptport,
521 + unsigned char *packet)
581 522 {
523 + struct synaptics_data *priv = psmouse->private;
582 524 struct psmouse *child = serio_get_drvdata(ptport);
583 525
584 526 if (child && child->state == PSMOUSE_ACTIVATED) {
585 - serio_interrupt(ptport, packet[1], 0);
527 + serio_interrupt(ptport, packet[1] | priv->pt_buttons, 0);
586 528 serio_interrupt(ptport, packet[4], 0);
587 529 serio_interrupt(ptport, packet[5], 0);
588 530 if (child->pktsize == 4)
589 531 serio_interrupt(ptport, packet[2], 0);
590 - } else
532 + } else {
591 533 serio_interrupt(ptport, packet[1], 0);
534 + }
592 535 }
593 536
594 537 static void synaptics_pt_activate(struct psmouse *psmouse)
··· 668 603 default:
669 604 break;
670 605 }
606 + }
607 +
608 + static void synaptics_parse_ext_buttons(const unsigned char buf[],
609 + struct synaptics_data *priv,
610 + struct synaptics_hw_state *hw)
611 + {
612 + unsigned int ext_bits =
613 + (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1;
614 + unsigned int ext_mask = GENMASK(ext_bits - 1, 0);
615 +
616 + hw->ext_buttons = buf[4] & ext_mask;
617 + hw->ext_buttons |= (buf[5] & ext_mask) << ext_bits;
671 618 }
672 619
673 620 static bool is_forcepad;
··· 768 691 hw->down = ((buf[0] ^ buf[3]) & 0x02) ? 1 : 0;
769 692 }
770 693
771 - if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) &&
694 + if (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) > 0 &&
772 695 ((buf[0] ^ buf[3]) & 0x02)) {
773 - switch (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) & ~0x01) {
774 - default:
775 - /*
776 - * if nExtBtn is greater than 8 it should be
777 - * considered invalid and treated as 0
778 - */
779 - break;
780 - case 8:
781 - hw->ext_buttons |= ((buf[5] & 0x08)) ? 0x80 : 0;
782 - hw->ext_buttons |= ((buf[4] & 0x08)) ? 0x40 : 0;
783 - case 6:
784 - hw->ext_buttons |= ((buf[5] & 0x04)) ? 0x20 : 0;
785 - hw->ext_buttons |= ((buf[4] & 0x04)) ? 0x10 : 0;
786 - case 4:
787 - hw->ext_buttons |= ((buf[5] & 0x02)) ? 0x08 : 0;
788 - hw->ext_buttons |= ((buf[4] & 0x02)) ? 0x04 : 0;
789 - case 2:
790 - hw->ext_buttons |= ((buf[5] & 0x01)) ? 0x02 : 0;
791 - hw->ext_buttons |= ((buf[4] & 0x01)) ? 0x01 : 0;
792 - }
696 + synaptics_parse_ext_buttons(buf, priv, hw);
793 697 }
794 698 } else {
795 699 hw->x = (((buf[1] & 0x1f) << 8) | buf[2]);
··· 832 774 }
833 775 }
834 776
777 + static void synaptics_report_ext_buttons(struct psmouse *psmouse,
778 + const struct synaptics_hw_state *hw)
779 + {
780 + struct input_dev *dev = psmouse->dev;
781 + struct synaptics_data *priv = psmouse->private;
782 + int ext_bits = (SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap) + 1) >> 1;
783 + char buf[6] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
784 + int i;
785 +
786 + if (!SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap))
787 + return;
788 +
789 + /* Bug in FW 8.1, buttons are reported only when ExtBit is 1 */
790 + if (SYN_ID_FULL(priv->identity) == 0x801 &&
791 + !((psmouse->packet[0] ^ psmouse->packet[3]) & 0x02))
792 + return;
793 +
794 + if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10)) {
795 + for (i = 0; i < ext_bits; i++) {
796 + input_report_key(dev, BTN_0 + 2 * i,
797 + hw->ext_buttons & (1 << i));
798 + input_report_key(dev, BTN_1 + 2 * i,
799 + hw->ext_buttons & (1 << (i + ext_bits)));
800 + }
801 + return;
802 + }
803 +
804 + /*
805 + * This generation of touchpads has the trackstick buttons
806 + * physically wired to the touchpad. Re-route them through
807 + * the pass-through interface.
808 + */
809 + if (!priv->pt_port)
810 + return;
811 +
812 + /* The trackstick expects at most 3 buttons */
813 + priv->pt_buttons = SYN_CAP_EXT_BUTTON_STICK_L(hw->ext_buttons) |
814 + SYN_CAP_EXT_BUTTON_STICK_R(hw->ext_buttons) << 1 |
815 + SYN_CAP_EXT_BUTTON_STICK_M(hw->ext_buttons) << 2;
816 +
817 + synaptics_pass_pt_packet(psmouse, priv->pt_port, buf);
818 + }
819 +
835 820 static void synaptics_report_buttons(struct psmouse *psmouse,
836 821 const struct synaptics_hw_state *hw)
837 822 {
838 823 struct input_dev *dev = psmouse->dev;
839 824 struct synaptics_data *priv = psmouse->private;
840 - int i;
841 825
842 826 input_report_key(dev, BTN_LEFT, hw->left);
843 827 input_report_key(dev, BTN_RIGHT, hw->right);
··· 892 792 input_report_key(dev, BTN_BACK, hw->down);
893 793 }
894 794
895 - for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
896 - input_report_key(dev, BTN_0 + i, hw->ext_buttons & (1 << i));
795 + synaptics_report_ext_buttons(psmouse, hw);
897 796 }
898 797
899 798 static void synaptics_report_mt_data(struct psmouse *psmouse,
··· 912 813 pos[i].y = synaptics_invert_y(hw[i]->y);
913 814 }
914 815
915 - input_mt_assign_slots(dev, slot, pos, nsemi, DMAX * priv->x_res);
816 + input_mt_assign_slots(dev, slot, pos, nsemi, 0);
916 817
917 818 for (i = 0; i < nsemi; i++) {
918 819 input_mt_slot(dev, slot[i]);
··· 1113 1014 if (SYN_CAP_PASS_THROUGH(priv->capabilities) &&
1114 1015 synaptics_is_pt_packet(psmouse->packet)) {
1115 1016 if (priv->pt_port)
1116 - synaptics_pass_pt_packet(priv->pt_port, psmouse->packet);
1017 + synaptics_pass_pt_packet(psmouse, priv->pt_port,
1018 + psmouse->packet);
1117 1019 } else
1118 1020 synaptics_process_packet(psmouse);
1119 1021
··· 1216 1116 __set_bit(BTN_BACK, dev->keybit);
1217 1117 }
1218 1118
1219 - for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
1220 - __set_bit(BTN_0 + i, dev->keybit);
1119 + if (!SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10))
1120 + for (i = 0; i < SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap); i++)
1121 + __set_bit(BTN_0 + i, dev->keybit);
1221 1122
1222 1123 __clear_bit(EV_REL, dev->evbit);
1223 1124 __clear_bit(REL_X, dev->relbit);
··· 1226 1125
1227 1126 if (SYN_CAP_CLICKPAD(priv->ext_cap_0c)) {
1228 1127 __set_bit(INPUT_PROP_BUTTONPAD, dev->propbit);
1229 - if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids))
1128 + if (psmouse_matches_pnp_id(psmouse, topbuttonpad_pnp_ids) &&
1129 + !SYN_CAP_EXT_BUTTONS_STICK(priv->ext_cap_10))
1230 1130 __set_bit(INPUT_PROP_TOPBUTTONPAD, dev->propbit);
1231 1131 /* Clickpads report only left button */
1232 1132 __clear_bit(BTN_RIGHT, dev->keybit);
+28
drivers/input/mouse/synaptics.h
··· 22 22 #define SYN_QUE_EXT_CAPAB_0C 0x0c 23 23 #define SYN_QUE_EXT_MAX_COORDS 0x0d 24 24 #define SYN_QUE_EXT_MIN_COORDS 0x0f 25 + #define SYN_QUE_MEXT_CAPAB_10 0x10 25 26 26 27 /* synatics modes */ 27 28 #define SYN_BIT_ABSOLUTE_MODE (1 << 7) ··· 54 53 #define SYN_EXT_CAP_REQUESTS(c) (((c) & 0x700000) >> 20) 55 54 #define SYN_CAP_MULTI_BUTTON_NO(ec) (((ec) & 0x00f000) >> 12) 56 55 #define SYN_CAP_PRODUCT_ID(ec) (((ec) & 0xff0000) >> 16) 56 + #define SYN_MEXT_CAP_BIT(m) ((m) & (1 << 1)) 57 57 58 58 /* 59 59 * The following describes response for the 0x0c query. ··· 90 88 #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & 0x080000) 91 89 #define SYN_CAP_REDUCED_FILTERING(ex0c) ((ex0c) & 0x000400) 92 90 #define SYN_CAP_IMAGE_SENSOR(ex0c) ((ex0c) & 0x000800) 91 + 92 + /* 93 + * The following descibes response for the 0x10 query. 94 + * 95 + * byte mask name meaning 96 + * ---- ---- ------- ------------ 97 + * 1 0x01 ext buttons are stick buttons exported in the extended 98 + * capability are actually meant to be used 99 + * by the tracktick (pass-through). 100 + * 1 0x02 SecurePad the touchpad is a SecurePad, so it 101 + * contains a built-in fingerprint reader. 102 + * 1 0xe0 more ext count how many more extented queries are 103 + * available after this one. 104 + * 2 0xff SecurePad width the width of the SecurePad fingerprint 105 + * reader. 106 + * 3 0xff SecurePad height the height of the SecurePad fingerprint 107 + * reader. 
108 + */ 109 + #define SYN_CAP_EXT_BUTTONS_STICK(ex10) ((ex10) & 0x010000) 110 + #define SYN_CAP_SECUREPAD(ex10) ((ex10) & 0x020000) 111 + 112 + #define SYN_CAP_EXT_BUTTON_STICK_L(eb) (!!((eb) & 0x01)) 113 + #define SYN_CAP_EXT_BUTTON_STICK_M(eb) (!!((eb) & 0x02)) 114 + #define SYN_CAP_EXT_BUTTON_STICK_R(eb) (!!((eb) & 0x04)) 93 115 94 116 /* synaptics modes query bits */ 95 117 #define SYN_MODE_ABSOLUTE(m) ((m) & (1 << 7)) ··· 169 143 unsigned long int capabilities; /* Capabilities */ 170 144 unsigned long int ext_cap; /* Extended Capabilities */ 171 145 unsigned long int ext_cap_0c; /* Ext Caps from 0x0c query */ 146 + unsigned long int ext_cap_10; /* Ext Caps from 0x10 query */ 172 147 unsigned long int identity; /* Identification */ 173 148 unsigned int x_res, y_res; /* X/Y resolution in units/mm */ 174 149 unsigned int x_max, y_max; /* Max coordinates (from FW) */ ··· 183 156 bool disable_gesture; /* disable gestures */ 184 157 185 158 struct serio *pt_port; /* Pass-through serio port */ 159 + unsigned char pt_buttons; /* Pass-through buttons */ 186 160 187 161 /* 188 162 * Last received Advanced Gesture Mode (AGM) packet. An AGM packet
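The new 0x10-query macros above pull single bits out of a response word. As a sanity check, here is a userspace sketch of how those masks decode, assuming the driver's usual packing of a three-byte query response (byte 1 into bits 23:16, byte 2 into 15:8, byte 3 into 7:0); the sample values are hypothetical and only exercise the masks:

```c
#include <assert.h>

/* Mirrors of the 0x10-query macros from synaptics.h */
#define SYN_CAP_EXT_BUTTONS_STICK(ex10) ((ex10) & 0x010000)
#define SYN_CAP_SECUREPAD(ex10)         ((ex10) & 0x020000)

#define SYN_CAP_EXT_BUTTON_STICK_L(eb)  (!!((eb) & 0x01))
#define SYN_CAP_EXT_BUTTON_STICK_M(eb)  (!!((eb) & 0x02))
#define SYN_CAP_EXT_BUTTON_STICK_R(eb)  (!!((eb) & 0x04))

/* Pack three response bytes the way the driver stores ext_cap_10:
 * byte 1 in bits 23:16, byte 2 in bits 15:8, byte 3 in bits 7:0
 * (an assumption based on the driver's other query words). */
static unsigned long pack_query(unsigned char b1, unsigned char b2,
				unsigned char b3)
{
	return ((unsigned long)b1 << 16) | ((unsigned long)b2 << 8) | b3;
}
```

With that packing, the 0x010000/0x020000 masks line up with bits 0x01/0x02 of byte 1, exactly as the comment table describes.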
+2
drivers/iommu/Kconfig
··· 23 23 config IOMMU_IO_PGTABLE_LPAE 24 24 bool "ARMv7/v8 Long Descriptor Format" 25 25 select IOMMU_IO_PGTABLE 26 + depends on ARM || ARM64 || COMPILE_TEST 26 27 help 27 28 Enable support for the ARM long descriptor pagetable format. 28 29 This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page ··· 64 63 bool "MSM IOMMU Support" 65 64 depends on ARM 66 65 depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST 66 + depends on BROKEN 67 67 select IOMMU_API 68 68 help 69 69 Support for the IOMMUs found on certain Qualcomm SOCs.
+7
drivers/iommu/exynos-iommu.c
··· 1186 1186 1187 1187 static int __init exynos_iommu_init(void) 1188 1188 { 1189 + struct device_node *np; 1189 1190 int ret; 1191 + 1192 + np = of_find_matching_node(NULL, sysmmu_of_match); 1193 + if (!np) 1194 + return 0; 1195 + 1196 + of_node_put(np); 1190 1197 1191 1198 lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table", 1192 1199 LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL);
+3 -2
drivers/iommu/io-pgtable-arm.c
··· 56 56 ((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1)) \ 57 57 * (d)->bits_per_level) + (d)->pg_shift) 58 58 59 - #define ARM_LPAE_PAGES_PER_PGD(d) ((d)->pgd_size >> (d)->pg_shift) 59 + #define ARM_LPAE_PAGES_PER_PGD(d) \ 60 + DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift) 60 61 61 62 /* 62 63 * Calculate the index at level l used to map virtual address a using the ··· 67 66 ((l) == ARM_LPAE_START_LVL(d) ? ilog2(ARM_LPAE_PAGES_PER_PGD(d)) : 0) 68 67 69 68 #define ARM_LPAE_LVL_IDX(a,l,d) \ 70 - (((a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \ 69 + (((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) & \ 71 70 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1)) 72 71 73 72 /* Calculate the block/page mapping size at level l for pagetable in d. */
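The io-pgtable-arm change matters when the concatenated PGD at the start level is smaller than one granule: a plain right shift truncates to zero pages, while DIV_ROUND_UP yields at least one. A minimal sketch (the 64-byte pgd size is an illustrative value, not taken from the driver):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace analogue of the kernel's DIV_ROUND_UP */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Old form: truncating shift */
static unsigned long pages_per_pgd_old(size_t pgd_size, unsigned int pg_shift)
{
	return pgd_size >> pg_shift;
}

/* New form: round up to a whole page */
static unsigned long pages_per_pgd_new(size_t pgd_size, unsigned int pg_shift)
{
	return DIV_ROUND_UP(pgd_size, 1UL << pg_shift);
}
```

For a pgd smaller than one 4K page the old macro reported zero pages, which poisoned the `ilog2()` in ARM_LPAE_PGD_IDX; the rounded-up form always reports at least one.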
+7
drivers/iommu/omap-iommu.c
··· 1376 1376 struct kmem_cache *p; 1377 1377 const unsigned long flags = SLAB_HWCACHE_ALIGN; 1378 1378 size_t align = 1 << 10; /* L2 pagetable alignment */ 1379 + struct device_node *np; 1380 + 1381 + np = of_find_matching_node(NULL, omap_iommu_of_match); 1382 + if (!np) 1383 + return 0; 1384 + 1385 + of_node_put(np); 1379 1386 1380 1387 p = kmem_cache_create("iopte_cache", IOPTE_TABLE_SIZE, align, flags, 1381 1388 iopte_cachep_ctor);
+7
drivers/iommu/rockchip-iommu.c
··· 1015 1015 1016 1016 static int __init rk_iommu_init(void) 1017 1017 { 1018 + struct device_node *np; 1018 1019 int ret; 1020 + 1021 + np = of_find_matching_node(NULL, rk_iommu_dt_ids); 1022 + if (!np) 1023 + return 0; 1024 + 1025 + of_node_put(np); 1019 1026 1020 1027 ret = bus_set_iommu(&platform_bus_type, &rk_iommu_ops); 1021 1028 if (ret)
+20 -1
drivers/irqchip/irq-armada-370-xp.c
··· 69 69 static void __iomem *main_int_base; 70 70 static struct irq_domain *armada_370_xp_mpic_domain; 71 71 static u32 doorbell_mask_reg; 72 + static int parent_irq; 72 73 #ifdef CONFIG_PCI_MSI 73 74 static struct irq_domain *armada_370_xp_msi_domain; 74 75 static DECLARE_BITMAP(msi_used, PCI_MSI_DOORBELL_NR); ··· 357 356 { 358 357 if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) 359 358 armada_xp_mpic_smp_cpu_init(); 359 + 360 360 return NOTIFY_OK; 361 361 } 362 362 363 363 static struct notifier_block armada_370_xp_mpic_cpu_notifier = { 364 364 .notifier_call = armada_xp_mpic_secondary_init, 365 + .priority = 100, 366 + }; 367 + 368 + static int mpic_cascaded_secondary_init(struct notifier_block *nfb, 369 + unsigned long action, void *hcpu) 370 + { 371 + if (action == CPU_STARTING || action == CPU_STARTING_FROZEN) 372 + enable_percpu_irq(parent_irq, IRQ_TYPE_NONE); 373 + 374 + return NOTIFY_OK; 375 + } 376 + 377 + static struct notifier_block mpic_cascaded_cpu_notifier = { 378 + .notifier_call = mpic_cascaded_secondary_init, 365 379 .priority = 100, 366 380 }; 367 381 ··· 555 539 struct device_node *parent) 556 540 { 557 541 struct resource main_int_res, per_cpu_int_res; 558 - int parent_irq, nr_irqs, i; 542 + int nr_irqs, i; 559 543 u32 control; 560 544 561 545 BUG_ON(of_address_to_resource(node, 0, &main_int_res)); ··· 603 587 register_cpu_notifier(&armada_370_xp_mpic_cpu_notifier); 604 588 #endif 605 589 } else { 590 + #ifdef CONFIG_SMP 591 + register_cpu_notifier(&mpic_cascaded_cpu_notifier); 592 + #endif 606 593 irq_set_chained_handler(parent_irq, 607 594 armada_370_xp_mpic_handle_cascade_irq); 608 595 }
+128 -29
drivers/irqchip/irq-gic-v3-its.c
··· 416 416 { 417 417 struct its_cmd_block *cmd, *sync_cmd, *next_cmd; 418 418 struct its_collection *sync_col; 419 + unsigned long flags; 419 420 420 - raw_spin_lock(&its->lock); 421 + raw_spin_lock_irqsave(&its->lock, flags); 421 422 422 423 cmd = its_allocate_entry(its); 423 424 if (!cmd) { /* We're soooooo screewed... */ 424 425 pr_err_ratelimited("ITS can't allocate, dropping command\n"); 425 - raw_spin_unlock(&its->lock); 426 + raw_spin_unlock_irqrestore(&its->lock, flags); 426 427 return; 427 428 } 428 429 sync_col = builder(cmd, desc); ··· 443 442 444 443 post: 445 444 next_cmd = its_post_commands(its); 446 - raw_spin_unlock(&its->lock); 445 + raw_spin_unlock_irqrestore(&its->lock, flags); 447 446 448 447 its_wait_for_range_completion(its, cmd, next_cmd); 449 448 } ··· 800 799 { 801 800 int err; 802 801 int i; 803 - int psz = PAGE_SIZE; 802 + int psz = SZ_64K; 804 803 u64 shr = GITS_BASER_InnerShareable; 805 804 806 805 for (i = 0; i < GITS_BASER_NR_REGS; i++) { 807 806 u64 val = readq_relaxed(its->base + GITS_BASER + i * 8); 808 807 u64 type = GITS_BASER_TYPE(val); 809 808 u64 entry_size = GITS_BASER_ENTRY_SIZE(val); 809 + int order = get_order(psz); 810 + int alloc_size; 810 811 u64 tmp; 811 812 void *base; 812 813 813 814 if (type == GITS_BASER_TYPE_NONE) 814 815 continue; 815 816 816 - /* We're lazy and only allocate a single page for now */ 817 - base = (void *)get_zeroed_page(GFP_KERNEL); 817 + /* 818 + * Allocate as many entries as required to fit the 819 + * range of device IDs that the ITS can grok... The ID 820 + * space being incredibly sparse, this results in a 821 + * massive waste of memory. 822 + * 823 + * For other tables, only allocate a single page. 
824 + */ 825 + if (type == GITS_BASER_TYPE_DEVICE) { 826 + u64 typer = readq_relaxed(its->base + GITS_TYPER); 827 + u32 ids = GITS_TYPER_DEVBITS(typer); 828 + 829 + order = get_order((1UL << ids) * entry_size); 830 + if (order >= MAX_ORDER) { 831 + order = MAX_ORDER - 1; 832 + pr_warn("%s: Device Table too large, reduce its page order to %u\n", 833 + its->msi_chip.of_node->full_name, order); 834 + } 835 + } 836 + 837 + alloc_size = (1 << order) * PAGE_SIZE; 838 + base = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order); 818 839 if (!base) { 819 840 err = -ENOMEM; 820 841 goto out_free; ··· 864 841 break; 865 842 } 866 843 867 - val |= (PAGE_SIZE / psz) - 1; 844 + val |= (alloc_size / psz) - 1; 868 845 869 846 writeq_relaxed(val, its->base + GITS_BASER + i * 8); 870 847 tmp = readq_relaxed(its->base + GITS_BASER + i * 8); ··· 905 882 } 906 883 907 884 pr_info("ITS: allocated %d %s @%lx (psz %dK, shr %d)\n", 908 - (int)(PAGE_SIZE / entry_size), 885 + (int)(alloc_size / entry_size), 909 886 its_base_type_string[type], 910 887 (unsigned long)virt_to_phys(base), 911 888 psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT); ··· 1043 1020 static struct its_device *its_find_device(struct its_node *its, u32 dev_id) 1044 1021 { 1045 1022 struct its_device *its_dev = NULL, *tmp; 1023 + unsigned long flags; 1046 1024 1047 - raw_spin_lock(&its->lock); 1025 + raw_spin_lock_irqsave(&its->lock, flags); 1048 1026 1049 1027 list_for_each_entry(tmp, &its->its_device_list, entry) { 1050 1028 if (tmp->device_id == dev_id) { ··· 1054 1030 } 1055 1031 } 1056 1032 1057 - raw_spin_unlock(&its->lock); 1033 + raw_spin_unlock_irqrestore(&its->lock, flags); 1058 1034 1059 1035 return its_dev; 1060 1036 } ··· 1064 1040 { 1065 1041 struct its_device *dev; 1066 1042 unsigned long *lpi_map; 1043 + unsigned long flags; 1067 1044 void *itt; 1068 1045 int lpi_base; 1069 1046 int nr_lpis; ··· 1081 1056 nr_ites = max(2UL, roundup_pow_of_two(nvecs)); 1082 1057 sz = nr_ites * its->ite_size; 
1083 1058 sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1; 1084 - itt = kmalloc(sz, GFP_KERNEL); 1059 + itt = kzalloc(sz, GFP_KERNEL); 1085 1060 lpi_map = its_lpi_alloc_chunks(nvecs, &lpi_base, &nr_lpis); 1086 1061 1087 1062 if (!dev || !itt || !lpi_map) { ··· 1100 1075 dev->device_id = dev_id; 1101 1076 INIT_LIST_HEAD(&dev->entry); 1102 1077 1103 - raw_spin_lock(&its->lock); 1078 + raw_spin_lock_irqsave(&its->lock, flags); 1104 1079 list_add(&dev->entry, &its->its_device_list); 1105 - raw_spin_unlock(&its->lock); 1080 + raw_spin_unlock_irqrestore(&its->lock, flags); 1106 1081 1107 1082 /* Bind the device to the first possible CPU */ 1108 1083 cpu = cpumask_first(cpu_online_mask); ··· 1116 1091 1117 1092 static void its_free_device(struct its_device *its_dev) 1118 1093 { 1119 - raw_spin_lock(&its_dev->its->lock); 1094 + unsigned long flags; 1095 + 1096 + raw_spin_lock_irqsave(&its_dev->its->lock, flags); 1120 1097 list_del(&its_dev->entry); 1121 - raw_spin_unlock(&its_dev->its->lock); 1098 + raw_spin_unlock_irqrestore(&its_dev->its->lock, flags); 1122 1099 kfree(its_dev->itt); 1123 1100 kfree(its_dev); 1124 1101 } ··· 1139 1112 return 0; 1140 1113 } 1141 1114 1115 + struct its_pci_alias { 1116 + struct pci_dev *pdev; 1117 + u32 dev_id; 1118 + u32 count; 1119 + }; 1120 + 1121 + static int its_pci_msi_vec_count(struct pci_dev *pdev) 1122 + { 1123 + int msi, msix; 1124 + 1125 + msi = max(pci_msi_vec_count(pdev), 0); 1126 + msix = max(pci_msix_vec_count(pdev), 0); 1127 + 1128 + return max(msi, msix); 1129 + } 1130 + 1131 + static int its_get_pci_alias(struct pci_dev *pdev, u16 alias, void *data) 1132 + { 1133 + struct its_pci_alias *dev_alias = data; 1134 + 1135 + dev_alias->dev_id = alias; 1136 + if (pdev != dev_alias->pdev) 1137 + dev_alias->count += its_pci_msi_vec_count(dev_alias->pdev); 1138 + 1139 + return 0; 1140 + } 1141 + 1142 1142 static int its_msi_prepare(struct irq_domain *domain, struct device *dev, 1143 1143 int nvec, msi_alloc_info_t *info) 1144 1144 { 
1145 1145 struct pci_dev *pdev; 1146 1146 struct its_node *its; 1147 - u32 dev_id; 1148 1147 struct its_device *its_dev; 1148 + struct its_pci_alias dev_alias; 1149 1149 1150 1150 if (!dev_is_pci(dev)) 1151 1151 return -EINVAL; 1152 1152 1153 1153 pdev = to_pci_dev(dev); 1154 - dev_id = PCI_DEVID(pdev->bus->number, pdev->devfn); 1154 + dev_alias.pdev = pdev; 1155 + dev_alias.count = nvec; 1156 + 1157 + pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias); 1155 1158 its = domain->parent->host_data; 1156 1159 1157 - its_dev = its_find_device(its, dev_id); 1158 - if (WARN_ON(its_dev)) 1159 - return -EINVAL; 1160 + its_dev = its_find_device(its, dev_alias.dev_id); 1161 + if (its_dev) { 1162 + /* 1163 + * We already have seen this ID, probably through 1164 + * another alias (PCI bridge of some sort). No need to 1165 + * create the device. 1166 + */ 1167 + dev_dbg(dev, "Reusing ITT for devID %x\n", dev_alias.dev_id); 1168 + goto out; 1169 + } 1160 1170 1161 - its_dev = its_create_device(its, dev_id, nvec); 1171 + its_dev = its_create_device(its, dev_alias.dev_id, dev_alias.count); 1162 1172 if (!its_dev) 1163 1173 return -ENOMEM; 1164 1174 1165 - dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", nvec, ilog2(nvec)); 1166 - 1175 + dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", 1176 + dev_alias.count, ilog2(dev_alias.count)); 1177 + out: 1167 1178 info->scratchpad[0].ptr = its_dev; 1168 1179 info->scratchpad[1].ptr = dev; 1169 1180 return 0; ··· 1320 1255 .deactivate = its_irq_domain_deactivate, 1321 1256 }; 1322 1257 1258 + static int its_force_quiescent(void __iomem *base) 1259 + { 1260 + u32 count = 1000000; /* 1s */ 1261 + u32 val; 1262 + 1263 + val = readl_relaxed(base + GITS_CTLR); 1264 + if (val & GITS_CTLR_QUIESCENT) 1265 + return 0; 1266 + 1267 + /* Disable the generation of all interrupts to this ITS */ 1268 + val &= ~GITS_CTLR_ENABLE; 1269 + writel_relaxed(val, base + GITS_CTLR); 1270 + 1271 + /* Poll GITS_CTLR and wait until ITS becomes quiescent */ 1272 
+ while (1) { 1273 + val = readl_relaxed(base + GITS_CTLR); 1274 + if (val & GITS_CTLR_QUIESCENT) 1275 + return 0; 1276 + 1277 + count--; 1278 + if (!count) 1279 + return -EBUSY; 1280 + 1281 + cpu_relax(); 1282 + udelay(1); 1283 + } 1284 + } 1285 + 1323 1286 static int its_probe(struct device_node *node, struct irq_domain *parent) 1324 1287 { 1325 1288 struct resource res; ··· 1373 1280 if (val != 0x30 && val != 0x40) { 1374 1281 pr_warn("%s: no ITS detected, giving up\n", node->full_name); 1375 1282 err = -ENODEV; 1283 + goto out_unmap; 1284 + } 1285 + 1286 + err = its_force_quiescent(its_base); 1287 + if (err) { 1288 + pr_warn("%s: failed to quiesce, giving up\n", 1289 + node->full_name); 1376 1290 goto out_unmap; 1377 1291 } 1378 1292 ··· 1423 1323 writeq_relaxed(baser, its->base + GITS_CBASER); 1424 1324 tmp = readq_relaxed(its->base + GITS_CBASER); 1425 1325 writeq_relaxed(0, its->base + GITS_CWRITER); 1426 - writel_relaxed(1, its->base + GITS_CTLR); 1326 + writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR); 1427 1327 1428 1328 if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) { 1429 1329 pr_info("ITS: using cache flushing for cmd queue\n"); ··· 1482 1382 1483 1383 int its_cpu_init(void) 1484 1384 { 1485 - if (!gic_rdists_supports_plpis()) { 1486 - pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); 1487 - return -ENXIO; 1488 - } 1489 - 1490 1385 if (!list_empty(&its_nodes)) { 1386 + if (!gic_rdists_supports_plpis()) { 1387 + pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); 1388 + return -ENXIO; 1389 + } 1491 1390 its_cpu_init_lpis(); 1492 1391 its_cpu_init_collection(); 1493 1392 }
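The `its_force_quiescent()` helper added above is a bounded-poll pattern: spin on a status bit, but cap the retries so a wedged ITS cannot hang boot. A userspace sketch of the same shape, with a fake register standing in for `readl_relaxed()` (the bit-31 position for GITS_CTLR.Quiescent follows the GICv3 architecture; the disable step and delays are omitted here):

```c
#include <assert.h>

#define GITS_CTLR_QUIESCENT (1U << 31)

/* Fake register: becomes quiescent after 'reads_left' polls,
 * modelling how long the hardware takes to drain. */
static int reads_left;

static unsigned int fake_readl(void)
{
	if (reads_left > 0) {
		reads_left--;
		return 0;
	}
	return GITS_CTLR_QUIESCENT;
}

/* Same shape as its_force_quiescent(): poll until quiescent or
 * the retry budget runs out (-EBUSY in the kernel). */
static int force_quiescent(unsigned int count)
{
	while (1) {
		if (fake_readl() & GITS_CTLR_QUIESCENT)
			return 0;
		if (!--count)
			return -1;
	}
}
```

The kernel version additionally clears GITS_CTLR_ENABLE first and inserts `cpu_relax()`/`udelay(1)` between polls, giving the 1000000-iteration budget its "1s" meaning.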
+1 -1
drivers/irqchip/irq-gic-v3.c
··· 466 466 tlist |= 1 << (mpidr & 0xf); 467 467 468 468 cpu = cpumask_next(cpu, mask); 469 - if (cpu == nr_cpu_ids) 469 + if (cpu >= nr_cpu_ids) 470 470 goto out; 471 471 472 472 mpidr = cpu_logical_map(cpu);
+12 -8
drivers/irqchip/irq-gic.c
··· 154 154 static void gic_mask_irq(struct irq_data *d) 155 155 { 156 156 u32 mask = 1 << (gic_irq(d) % 32); 157 + unsigned long flags; 157 158 158 - raw_spin_lock(&irq_controller_lock); 159 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 159 160 writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_CLEAR + (gic_irq(d) / 32) * 4); 160 161 if (gic_arch_extn.irq_mask) 161 162 gic_arch_extn.irq_mask(d); 162 - raw_spin_unlock(&irq_controller_lock); 163 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 163 164 } 164 165 165 166 static void gic_unmask_irq(struct irq_data *d) 166 167 { 167 168 u32 mask = 1 << (gic_irq(d) % 32); 169 + unsigned long flags; 168 170 169 - raw_spin_lock(&irq_controller_lock); 171 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 170 172 if (gic_arch_extn.irq_unmask) 171 173 gic_arch_extn.irq_unmask(d); 172 174 writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_SET + (gic_irq(d) / 32) * 4); 173 - raw_spin_unlock(&irq_controller_lock); 175 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 174 176 } 175 177 176 178 static void gic_eoi_irq(struct irq_data *d) ··· 190 188 { 191 189 void __iomem *base = gic_dist_base(d); 192 190 unsigned int gicirq = gic_irq(d); 191 + unsigned long flags; 193 192 int ret; 194 193 195 194 /* Interrupt configuration for SGIs can't be changed */ ··· 202 199 type != IRQ_TYPE_EDGE_RISING) 203 200 return -EINVAL; 204 201 205 - raw_spin_lock(&irq_controller_lock); 202 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 206 203 207 204 if (gic_arch_extn.irq_set_type) 208 205 gic_arch_extn.irq_set_type(d, type); 209 206 210 207 ret = gic_configure_irq(gicirq, type, base, NULL); 211 208 212 - raw_spin_unlock(&irq_controller_lock); 209 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 213 210 214 211 return ret; 215 212 } ··· 230 227 void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3); 231 228 unsigned int cpu, shift = (gic_irq(d) % 4) * 8; 232 229 u32 val, 
mask, bit; 230 + unsigned long flags; 233 231 234 232 if (!force) 235 233 cpu = cpumask_any_and(mask_val, cpu_online_mask); ··· 240 236 if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids) 241 237 return -EINVAL; 242 238 243 - raw_spin_lock(&irq_controller_lock); 239 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 244 240 mask = 0xff << shift; 245 241 bit = gic_cpu_map[cpu] << shift; 246 242 val = readl_relaxed(reg) & ~mask; 247 243 writel_relaxed(val | bit, reg); 248 - raw_spin_unlock(&irq_controller_lock); 244 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 249 245 250 246 return IRQ_SET_MASK_OK; 251 247 }
+1 -1
drivers/isdn/icn/icn.c
··· 1609 1609 if (ints[0] > 1) 1610 1610 membase = (unsigned long)ints[2]; 1611 1611 if (str && *str) { 1612 - strcpy(sid, str); 1612 + strlcpy(sid, str, sizeof(sid)); 1613 1613 icn_id = sid; 1614 1614 if ((p = strchr(sid, ','))) { 1615 1615 *p++ = 0;
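The icn.c change swaps an unbounded `strcpy()` of a module-parameter string for `strlcpy()`, which truncates to the destination size and always NUL-terminates. Since not every userspace libc ships `strlcpy()`, here is a minimal local version showing the contract (the buffer size and sample ID string are illustrative, not taken from the driver):

```c
#include <assert.h>
#include <string.h>

/* Minimal strlcpy(): always NUL-terminates when size > 0, and returns
 * strlen(src) so the caller can detect truncation (result >= size). */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size) {
		size_t n = (len >= size) ? size - 1 : len;

		memcpy(dst, src, n);
		dst[n] = '\0';
	}
	return len;
}
```

With the old `strcpy()`, an over-long `icn=...` parameter string would have overflowed the fixed `sid[]` buffer; the bounded copy clips it instead.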
+1 -1
drivers/mmc/core/pwrseq_simple.c
··· 124 124 PTR_ERR(pwrseq->reset_gpios[i]) != -ENOSYS) { 125 125 ret = PTR_ERR(pwrseq->reset_gpios[i]); 126 126 127 - while (--i) 127 + while (i--) 128 128 gpiod_put(pwrseq->reset_gpios[i]); 129 129 130 130 goto clk_put;
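The pwrseq_simple fix is the classic `while (--i)` vs `while (i--)` unwind bug: the pre-decrement form never releases index 0, and if the very first `gpiod_get` fails (i == 0) it underflows and walks off the array. A standalone sketch of the two unwind shapes, returning a bitmask of the indices that get released (the bitmask is purely illustrative):

```c
#include <assert.h>

/* Buggy rollback: `while (--i)` stops before index 0, leaking it */
static unsigned int rollback_buggy(int i)
{
	unsigned int released = 0;

	while (--i)
		released |= 1u << i;
	return released;
}

/* Fixed rollback: `while (i--)` releases indices i-1 down to 0 */
static unsigned int rollback_fixed(int i)
{
	unsigned int released = 0;

	while (i--)
		released |= 1u << i;
	return released;
}
```

The fixed form also degrades gracefully when the failure happens at i == 0: the condition is false immediately and nothing is "released".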
+1
drivers/mtd/nand/Kconfig
··· 526 526 527 527 config MTD_NAND_HISI504 528 528 tristate "Support for NAND controller on Hisilicon SoC Hip04" 529 + depends on HAS_DMA 529 530 help 530 531 Enables support for NAND controller on Hisilicon SoC Hip04. 531 532
+44 -6
drivers/mtd/nand/pxa3xx_nand.c
··· 480 480 nand_writel(info, NDCR, ndcr | int_mask); 481 481 } 482 482 483 + static void drain_fifo(struct pxa3xx_nand_info *info, void *data, int len) 484 + { 485 + if (info->ecc_bch) { 486 + int timeout; 487 + 488 + /* 489 + * According to the datasheet, when reading from NDDB 490 + * with BCH enabled, after each 32 bytes reads, we 491 + * have to make sure that the NDSR.RDDREQ bit is set. 492 + * 493 + * Drain the FIFO 8 32 bits reads at a time, and skip 494 + * the polling on the last read. 495 + */ 496 + while (len > 8) { 497 + __raw_readsl(info->mmio_base + NDDB, data, 8); 498 + 499 + for (timeout = 0; 500 + !(nand_readl(info, NDSR) & NDSR_RDDREQ); 501 + timeout++) { 502 + if (timeout >= 5) { 503 + dev_err(&info->pdev->dev, 504 + "Timeout on RDDREQ while draining the FIFO\n"); 505 + return; 506 + } 507 + 508 + mdelay(1); 509 + } 510 + 511 + data += 32; 512 + len -= 8; 513 + } 514 + } 515 + 516 + __raw_readsl(info->mmio_base + NDDB, data, len); 517 + } 518 + 483 519 static void handle_data_pio(struct pxa3xx_nand_info *info) 484 520 { 485 521 unsigned int do_bytes = min(info->data_size, info->chunk_size); ··· 532 496 DIV_ROUND_UP(info->oob_size, 4)); 533 497 break; 534 498 case STATE_PIO_READING: 535 - __raw_readsl(info->mmio_base + NDDB, 536 - info->data_buff + info->data_buff_pos, 537 - DIV_ROUND_UP(do_bytes, 4)); 499 + drain_fifo(info, 500 + info->data_buff + info->data_buff_pos, 501 + DIV_ROUND_UP(do_bytes, 4)); 538 502 539 503 if (info->oob_size > 0) 540 - __raw_readsl(info->mmio_base + NDDB, 541 - info->oob_buff + info->oob_buff_pos, 542 - DIV_ROUND_UP(info->oob_size, 4)); 504 + drain_fifo(info, 505 + info->oob_buff + info->oob_buff_pos, 506 + DIV_ROUND_UP(info->oob_size, 4)); 543 507 break; 544 508 default: 545 509 dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__, ··· 1608 1572 int ret, irq, cs; 1609 1573 1610 1574 pdata = dev_get_platdata(&pdev->dev); 1575 + if (pdata->num_cs <= 0) 1576 + return -ENODEV; 1611 1577 info = 
devm_kzalloc(&pdev->dev, sizeof(*info) + (sizeof(*mtd) + 1612 1578 sizeof(*host)) * pdata->num_cs, GFP_KERNEL); 1613 1579 if (!info)
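In the new `drain_fifo()` above, `len` is counted in 32-bit words: each inner pass reads 8 words (hence `data += 32` bytes, `len -= 8` words) and then polls NDSR.RDDREQ, with the poll skipped for the final partial read. A sketch of just that bookkeeping, counting words read and polls performed:

```c
#include <assert.h>

/* Model of drain_fifo()'s loop: 'len' is in 32-bit words; each inner
 * read moves 8 words (32 bytes) and is followed by one RDDREQ poll,
 * except the final read of the remaining <= 8 words. */
static void drain(int len, int *words_read, int *polls)
{
	*words_read = 0;
	*polls = 0;

	while (len > 8) {
		*words_read += 8;	/* __raw_readsl(..., 8) */
		(*polls)++;		/* wait for NDSR_RDDREQ */
		len -= 8;		/* data advances 32 bytes */
	}
	*words_read += len;		/* final __raw_readsl(..., len) */
}
```

Every word requested is drained exactly once, and a transfer of 8 words or fewer never polls, matching the "skip the polling on the last read" comment.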
+1 -1
drivers/net/can/Kconfig
··· 131 131 132 132 config CAN_XILINXCAN 133 133 tristate "Xilinx CAN" 134 - depends on ARCH_ZYNQ || MICROBLAZE || COMPILE_TEST 134 + depends on ARCH_ZYNQ || ARM64 || MICROBLAZE || COMPILE_TEST 135 135 depends on COMMON_CLK && HAS_IOMEM 136 136 ---help--- 137 137 Xilinx CAN driver. This driver supports both soft AXI CAN IP and
+52 -33
drivers/net/can/usb/kvaser_usb.c
··· 14 14 * Copyright (C) 2015 Valeo S.A. 15 15 */ 16 16 17 + #include <linux/spinlock.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/completion.h> 19 20 #include <linux/module.h> ··· 468 467 struct kvaser_usb_net_priv { 469 468 struct can_priv can; 470 469 471 - atomic_t active_tx_urbs; 472 - struct usb_anchor tx_submitted; 470 + spinlock_t tx_contexts_lock; 471 + int active_tx_contexts; 473 472 struct kvaser_usb_tx_urb_context tx_contexts[MAX_TX_URBS]; 474 473 474 + struct usb_anchor tx_submitted; 475 475 struct completion start_comp, stop_comp; 476 476 477 477 struct kvaser_usb *dev; ··· 696 694 struct kvaser_usb_net_priv *priv; 697 695 struct sk_buff *skb; 698 696 struct can_frame *cf; 697 + unsigned long flags; 699 698 u8 channel, tid; 700 699 701 700 channel = msg->u.tx_acknowledge_header.channel; ··· 740 737 741 738 stats->tx_packets++; 742 739 stats->tx_bytes += context->dlc; 740 + 741 + spin_lock_irqsave(&priv->tx_contexts_lock, flags); 742 + 743 743 can_get_echo_skb(priv->netdev, context->echo_index); 744 - 745 744 context->echo_index = MAX_TX_URBS; 746 - atomic_dec(&priv->active_tx_urbs); 747 - 745 + --priv->active_tx_contexts; 748 746 netif_wake_queue(priv->netdev); 747 + 748 + spin_unlock_irqrestore(&priv->tx_contexts_lock, flags); 749 749 } 750 750 751 751 static void kvaser_usb_simple_msg_callback(struct urb *urb) ··· 807 801 usb_free_urb(urb); 808 802 809 803 return 0; 810 - } 811 - 812 - static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv) 813 - { 814 - int i; 815 - 816 - usb_kill_anchored_urbs(&priv->tx_submitted); 817 - atomic_set(&priv->active_tx_urbs, 0); 818 - 819 - for (i = 0; i < MAX_TX_URBS; i++) 820 - priv->tx_contexts[i].echo_index = MAX_TX_URBS; 821 804 } 822 805 823 806 static void kvaser_usb_rx_error_update_can_state(struct kvaser_usb_net_priv *priv, ··· 1510 1515 return err; 1511 1516 } 1512 1517 1518 + static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv) 1519 + { 1520 + int i; 
1521 + 1522 + priv->active_tx_contexts = 0; 1523 + for (i = 0; i < MAX_TX_URBS; i++) 1524 + priv->tx_contexts[i].echo_index = MAX_TX_URBS; 1525 + } 1526 + 1527 + /* This method might sleep. Do not call it in the atomic context 1528 + * of URB completions. 1529 + */ 1530 + static void kvaser_usb_unlink_tx_urbs(struct kvaser_usb_net_priv *priv) 1531 + { 1532 + usb_kill_anchored_urbs(&priv->tx_submitted); 1533 + kvaser_usb_reset_tx_urb_contexts(priv); 1534 + } 1535 + 1513 1536 static void kvaser_usb_unlink_all_urbs(struct kvaser_usb *dev) 1514 1537 { 1515 1538 int i; ··· 1647 1634 struct kvaser_msg *msg; 1648 1635 int i, err, ret = NETDEV_TX_OK; 1649 1636 u8 *msg_tx_can_flags = NULL; /* GCC */ 1637 + unsigned long flags; 1650 1638 1651 1639 if (can_dropped_invalid_skb(netdev, skb)) 1652 1640 return NETDEV_TX_OK; ··· 1701 1687 if (cf->can_id & CAN_RTR_FLAG) 1702 1688 *msg_tx_can_flags |= MSG_FLAG_REMOTE_FRAME; 1703 1689 1690 + spin_lock_irqsave(&priv->tx_contexts_lock, flags); 1704 1691 for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) { 1705 1692 if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) { 1706 1693 context = &priv->tx_contexts[i]; 1694 + 1695 + context->echo_index = i; 1696 + can_put_echo_skb(skb, netdev, context->echo_index); 1697 + ++priv->active_tx_contexts; 1698 + if (priv->active_tx_contexts >= MAX_TX_URBS) 1699 + netif_stop_queue(netdev); 1700 + 1707 1701 break; 1708 1702 } 1709 1703 } 1704 + spin_unlock_irqrestore(&priv->tx_contexts_lock, flags); 1710 1705 1711 1706 /* This should never happen; it implies a flow control bug */ 1712 1707 if (!context) { ··· 1727 1704 } 1728 1705 1729 1706 context->priv = priv; 1730 - context->echo_index = i; 1731 1707 context->dlc = cf->can_dlc; 1732 1708 1733 1709 msg->u.tx_can.tid = context->echo_index; ··· 1738 1716 kvaser_usb_write_bulk_callback, context); 1739 1717 usb_anchor_urb(urb, &priv->tx_submitted); 1740 1718 1741 - can_put_echo_skb(skb, netdev, context->echo_index); 1742 - 1743 - 
atomic_inc(&priv->active_tx_urbs); 1744 - 1745 - if (atomic_read(&priv->active_tx_urbs) >= MAX_TX_URBS) 1746 - netif_stop_queue(netdev); 1747 - 1748 1719 err = usb_submit_urb(urb, GFP_ATOMIC); 1749 1720 if (unlikely(err)) { 1750 - can_free_echo_skb(netdev, context->echo_index); 1721 + spin_lock_irqsave(&priv->tx_contexts_lock, flags); 1751 1722 1752 - atomic_dec(&priv->active_tx_urbs); 1723 + can_free_echo_skb(netdev, context->echo_index); 1724 + context->echo_index = MAX_TX_URBS; 1725 + --priv->active_tx_contexts; 1726 + netif_wake_queue(netdev); 1727 + 1728 + spin_unlock_irqrestore(&priv->tx_contexts_lock, flags); 1729 + 1753 1730 usb_unanchor_urb(urb); 1754 1731 1755 1732 stats->tx_dropped++; ··· 1875 1854 struct kvaser_usb *dev = usb_get_intfdata(intf); 1876 1855 struct net_device *netdev; 1877 1856 struct kvaser_usb_net_priv *priv; 1878 - int i, err; 1857 + int err; 1879 1858 1880 1859 err = kvaser_usb_send_simple_msg(dev, CMD_RESET_CHIP, channel); 1881 1860 if (err) ··· 1889 1868 1890 1869 priv = netdev_priv(netdev); 1891 1870 1871 + init_usb_anchor(&priv->tx_submitted); 1892 1872 init_completion(&priv->start_comp); 1893 1873 init_completion(&priv->stop_comp); 1894 - 1895 - init_usb_anchor(&priv->tx_submitted); 1896 - atomic_set(&priv->active_tx_urbs, 0); 1897 - 1898 - for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) 1899 - priv->tx_contexts[i].echo_index = MAX_TX_URBS; 1900 1874 1901 1875 priv->dev = dev; 1902 1876 priv->netdev = netdev; 1903 1877 priv->channel = channel; 1878 + 1879 + spin_lock_init(&priv->tx_contexts_lock); 1880 + kvaser_usb_reset_tx_urb_contexts(priv); 1904 1881 1905 1882 priv->can.state = CAN_STATE_STOPPED; 1906 1883 priv->can.clock.freq = CAN_USB_CLOCK;
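The kvaser_usb rework replaces a lone atomic counter with a spinlock-protected pair (slot claim plus active count), because claiming a context and updating the count must be one atomic step. A single-threaded sketch of the slot logic, with `MAX_TX_URBS` shrunk for brevity and the lock elided since there is no concurrency here:

```c
#include <assert.h>

#define MAX_TX_URBS 4	/* shrunk for the sketch */

/* echo_index == MAX_TX_URBS marks a free slot, as in the driver */
static int echo_index[MAX_TX_URBS];
static int active_tx_contexts;
static int queue_stopped;

static void reset_contexts(void)
{
	int i;

	for (i = 0; i < MAX_TX_URBS; i++)
		echo_index[i] = MAX_TX_URBS;
	active_tx_contexts = 0;
	queue_stopped = 0;
}

/* Claim a free context; in the driver this runs under tx_contexts_lock
 * so the slot claim and active count stay consistent. Returns the slot
 * index, or -1 when every context is in flight. */
static int claim_context(void)
{
	int i;

	for (i = 0; i < MAX_TX_URBS; i++) {
		if (echo_index[i] == MAX_TX_URBS) {
			echo_index[i] = i;
			if (++active_tx_contexts >= MAX_TX_URBS)
				queue_stopped = 1;
			return i;
		}
	}
	return -1;
}
```

Stopping the queue inside the same critical section that claims the last slot is what closes the race the old `atomic_t` version had between `atomic_inc` and `netif_stop_queue`.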
+29 -2
drivers/net/ethernet/amd/pcnet32.c
··· 1543 1543 { 1544 1544 struct pcnet32_private *lp; 1545 1545 int i, media; 1546 - int fdx, mii, fset, dxsuflo; 1546 + int fdx, mii, fset, dxsuflo, sram; 1547 1547 int chip_version; 1548 1548 char *chipname; 1549 1549 struct net_device *dev; ··· 1580 1580 } 1581 1581 1582 1582 /* initialize variables */ 1583 - fdx = mii = fset = dxsuflo = 0; 1583 + fdx = mii = fset = dxsuflo = sram = 0; 1584 1584 chip_version = (chip_version >> 12) & 0xffff; 1585 1585 1586 1586 switch (chip_version) { ··· 1613 1613 chipname = "PCnet/FAST III 79C973"; /* PCI */ 1614 1614 fdx = 1; 1615 1615 mii = 1; 1616 + sram = 1; 1616 1617 break; 1617 1618 case 0x2626: 1618 1619 chipname = "PCnet/Home 79C978"; /* PCI */ ··· 1637 1636 chipname = "PCnet/FAST III 79C975"; /* PCI */ 1638 1637 fdx = 1; 1639 1638 mii = 1; 1639 + sram = 1; 1640 1640 break; 1641 1641 case 0x2628: 1642 1642 chipname = "PCnet/PRO 79C976"; ··· 1664 1662 a->write_csr(ioaddr, 80, 1665 1663 (a->read_csr(ioaddr, 80) & 0x0C00) | 0x0c00); 1666 1664 dxsuflo = 1; 1665 + } 1666 + 1667 + /* 1668 + * The Am79C973/Am79C975 controllers come with 12K of SRAM 1669 + * which we can use for the Tx/Rx buffers but most importantly, 1670 + * the use of SRAM allow us to use the BCR18:NOUFLO bit to avoid 1671 + * Tx fifo underflows. 1672 + */ 1673 + if (sram) { 1674 + /* 1675 + * The SRAM is being configured in two steps. First we 1676 + * set the SRAM size in the BCR25:SRAM_SIZE bits. According 1677 + * to the datasheet, each bit corresponds to a 512-byte 1678 + * page so we can have at most 24 pages. The SRAM_SIZE 1679 + * holds the value of the upper 8 bits of the 16-bit SRAM size. 1680 + * The low 8-bits start at 0x00 and end at 0xff. So the 1681 + * address range is from 0x0000 up to 0x17ff. Therefore, 1682 + * the SRAM_SIZE is set to 0x17. The next step is to set 1683 + * the BCR26:SRAM_BND midway through so the Tx and Rx 1684 + * buffers can share the SRAM equally. 
1685 + */ 1686 + a->write_bcr(ioaddr, 25, 0x17); 1687 + a->write_bcr(ioaddr, 26, 0xc); 1688 + /* And finally enable the NOUFLO bit */ 1689 + a->write_bcr(ioaddr, 18, a->read_bcr(ioaddr, 18) | (1 << 11)); 1667 1690 } 1668 1691 1669 1692 dev = alloc_etherdev(sizeof(*lp));
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12769 12769 NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | 12770 12770 NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO | 12771 12771 NETIF_F_RXHASH | NETIF_F_HW_VLAN_CTAG_TX; 12772 - if (!CHIP_IS_E1x(bp)) { 12772 + if (!chip_is_e1x) { 12773 12773 dev->hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL | 12774 12774 NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT; 12775 12775 dev->hw_enc_features =
+1 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 1120 1120 } 1121 1121 1122 1122 /* Installed successfully, update the cached header too. */ 1123 - memcpy(card_fw, fs_fw, sizeof(*card_fw)); 1123 + *card_fw = *fs_fw; 1124 1124 card_fw_usable = 1; 1125 1125 *reset = 0; /* already reset as part of load_fw */ 1126 1126 }
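The cxgb4 change swaps `memcpy(card_fw, fs_fw, sizeof(*card_fw))` for a plain struct assignment. For same-typed structs the two are equivalent, but assignment lets the compiler type-check both operands, where a memcpy with a mismatched `sizeof()` would silently over- or under-copy. A sketch with a hypothetical header type (not the real `struct fw_hdr` layout):

```c
#include <assert.h>
#include <string.h>

struct fw_hdr {
	unsigned int ver;
	unsigned char chip;
	char id[8];
};

static void copy_by_memcpy(struct fw_hdr *dst, const struct fw_hdr *src)
{
	memcpy(dst, src, sizeof(*dst));
}

static void copy_by_assign(struct fw_hdr *dst, const struct fw_hdr *src)
{
	/* Type-checked: passing a different struct type here fails
	 * to compile instead of corrupting memory at runtime. */
	*dst = *src;
}
```

Both forms copy every member; only the assignment form rejects a wrong-typed argument at compile time.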
+1 -1
drivers/net/ethernet/dec/tulip/tulip_core.c
··· 589 589 (unsigned int)tp->rx_ring[i].buffer1, 590 590 (unsigned int)tp->rx_ring[i].buffer2, 591 591 buf[0], buf[1], buf[2]); 592 - for (j = 0; buf[j] != 0xee && j < 1600; j++) 592 + for (j = 0; ((j < 1600) && buf[j] != 0xee); j++) 593 593 if (j < 100) 594 594 pr_cont(" %02x", buf[j]); 595 595 pr_cont(" j=%d\n", j);
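The tulip fix reorders the `&&` operands so the bounds check runs before `buf[j]` is read: `&&` short-circuits, so once `j` reaches the limit the element access is never evaluated. With the old order, a buffer containing no 0xee sentinel was read past its end. A sketch of the corrected scan:

```c
#include <assert.h>

/* Scan for a 0xee sentinel; the bounds check comes first, so buf[j]
 * is only read for valid j thanks to && short-circuit evaluation. */
static int scan(const unsigned char *buf, int len)
{
	int j;

	for (j = 0; (j < len) && buf[j] != 0xee; j++)
		;
	return j;	/* index of sentinel, or len if absent */
}
```

When the sentinel is missing, the loop now stops cleanly at the bound instead of overrunning the buffer.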
+2
drivers/net/ethernet/emulex/benet/be.h
··· 362 362 u16 vlan_tag; 363 363 u32 tx_rate; 364 364 u32 plink_tracking; 365 + u32 privileges; 365 366 }; 366 367 367 368 enum vf_state { ··· 469 468 470 469 u8 __iomem *csr; /* CSR BAR used only for BE2/3 */ 471 470 u8 __iomem *db; /* Door Bell */ 471 + u8 __iomem *pcicfg; /* On SH,BEx only. Shadow of PCI config space */ 472 472 473 473 struct mutex mbox_lock; /* For serializing mbox cmds to BE card */ 474 474 struct be_dma_mem mbox_mem;
+7 -10
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 1849 1849 { 1850 1850 int num_eqs, i = 0; 1851 1851 1852 - if (lancer_chip(adapter) && num > 8) { 1853 - while (num) { 1854 - num_eqs = min(num, 8); 1855 - __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs); 1856 - i += num_eqs; 1857 - num -= num_eqs; 1858 - } 1859 - } else { 1860 - __be_cmd_modify_eqd(adapter, set_eqd, num); 1852 + while (num) { 1853 + num_eqs = min(num, 8); 1854 + __be_cmd_modify_eqd(adapter, &set_eqd[i], num_eqs); 1855 + i += num_eqs; 1856 + num -= num_eqs; 1861 1857 } 1862 1858 1863 1859 return 0; ··· 1861 1865 1862 1866 /* Uses sycnhronous mcc */ 1863 1867 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array, 1864 - u32 num) 1868 + u32 num, u32 domain) 1865 1869 { 1866 1870 struct be_mcc_wrb *wrb; 1867 1871 struct be_cmd_req_vlan_config *req; ··· 1879 1883 be_wrb_cmd_hdr_prepare(&req->hdr, CMD_SUBSYSTEM_COMMON, 1880 1884 OPCODE_COMMON_NTWK_VLAN_CONFIG, sizeof(*req), 1881 1885 wrb, NULL); 1886 + req->hdr.domain = domain; 1882 1887 1883 1888 req->interface_id = if_id; 1884 1889 req->untagged = BE_IF_FLAGS_UNTAGGED & be_if_cap_flags(adapter) ? 1 : 0;
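The be_cmds.c change above makes the batching in `be_cmd_modify_eqd()` unconditional: every chip now submits EQ descriptors in chunks of at most 8 per firmware command, instead of only Lancer doing so. A sketch of just the batching arithmetic (the chunk/largest counters are for illustration only):

```c
#include <assert.h>

/* Batching pattern from be_cmd_modify_eqd(): submit 'num' descriptors
 * in chunks of at most 8. Returns the total submitted; reports the
 * number of chunks and the largest chunk via out-parameters. */
static int batch(int num, int *chunks, int *largest)
{
	int i = 0, n = 0;

	*largest = 0;
	while (num) {
		int num_eqs = num < 8 ? num : 8;	/* min(num, 8) */

		if (num_eqs > *largest)
			*largest = num_eqs;
		i += num_eqs;		/* __be_cmd_modify_eqd(..., &set_eqd[i], num_eqs) */
		num -= num_eqs;
		n++;
	}
	*chunks = n;
	return i;
}
```

Every descriptor is submitted exactly once, no chunk exceeds 8, and `num == 0` issues no command at all, matching the simplified loop in the patch.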
+1 -1
drivers/net/ethernet/emulex/benet/be_cmds.h
··· 2266 2266 int be_cmd_get_fw_ver(struct be_adapter *adapter); 2267 2267 int be_cmd_modify_eqd(struct be_adapter *adapter, struct be_set_eqd *, int num); 2268 2268 int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array, 2269 - u32 num); 2269 + u32 num, u32 domain); 2270 2270 int be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 status); 2271 2271 int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc); 2272 2272 int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc);
+98 -33
drivers/net/ethernet/emulex/benet/be_main.c
··· 1261 1261 for_each_set_bit(i, adapter->vids, VLAN_N_VID) 1262 1262 vids[num++] = cpu_to_le16(i); 1263 1263 1264 - status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num); 1264 + status = be_cmd_vlan_config(adapter, adapter->if_handle, vids, num, 0); 1265 1265 if (status) { 1266 1266 dev_err(dev, "Setting HW VLAN filtering failed\n"); 1267 1267 /* Set to VLAN promisc mode as setting VLAN filter failed */ ··· 1470 1470 return 0; 1471 1471 } 1472 1472 1473 + static int be_set_vf_tvt(struct be_adapter *adapter, int vf, u16 vlan) 1474 + { 1475 + struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf]; 1476 + u16 vids[BE_NUM_VLANS_SUPPORTED]; 1477 + int vf_if_id = vf_cfg->if_handle; 1478 + int status; 1479 + 1480 + /* Enable Transparent VLAN Tagging */ 1481 + status = be_cmd_set_hsw_config(adapter, vlan, vf + 1, vf_if_id, 0); 1482 + if (status) 1483 + return status; 1484 + 1485 + /* Clear pre-programmed VLAN filters on VF if any, if TVT is enabled */ 1486 + vids[0] = 0; 1487 + status = be_cmd_vlan_config(adapter, vf_if_id, vids, 1, vf + 1); 1488 + if (!status) 1489 + dev_info(&adapter->pdev->dev, 1490 + "Cleared guest VLANs on VF%d", vf); 1491 + 1492 + /* After TVT is enabled, disallow VFs to program VLAN filters */ 1493 + if (vf_cfg->privileges & BE_PRIV_FILTMGMT) { 1494 + status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges & 1495 + ~BE_PRIV_FILTMGMT, vf + 1); 1496 + if (!status) 1497 + vf_cfg->privileges &= ~BE_PRIV_FILTMGMT; 1498 + } 1499 + return 0; 1500 + } 1501 + 1502 + static int be_clear_vf_tvt(struct be_adapter *adapter, int vf) 1503 + { 1504 + struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf]; 1505 + struct device *dev = &adapter->pdev->dev; 1506 + int status; 1507 + 1508 + /* Reset Transparent VLAN Tagging. */ 1509 + status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID, vf + 1, 1510 + vf_cfg->if_handle, 0); 1511 + if (status) 1512 + return status; 1513 + 1514 + /* Allow VFs to program VLAN filtering */ 1515 + if (!(vf_cfg->privileges & BE_PRIV_FILTMGMT)) { 1516 + status = be_cmd_set_fn_privileges(adapter, vf_cfg->privileges | 1517 + BE_PRIV_FILTMGMT, vf + 1); 1518 + if (!status) { 1519 + vf_cfg->privileges |= BE_PRIV_FILTMGMT; 1520 + dev_info(dev, "VF%d: FILTMGMT priv enabled", vf); 1521 + } 1522 + } 1523 + 1524 + dev_info(dev, 1525 + "Disable/re-enable i/f in VM to clear Transparent VLAN tag"); 1526 + return 0; 1527 + } 1528 + 1473 1529 static int be_set_vf_vlan(struct net_device *netdev, int vf, u16 vlan, u8 qos) 1474 1530 { 1475 1531 struct be_adapter *adapter = netdev_priv(netdev); 1476 1532 struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf]; 1477 - int status = 0; 1533 + int status; 1478 1534 1479 1535 if (!sriov_enabled(adapter)) 1480 1536 return -EPERM; ··· 1540 1484 1541 1485 if (vlan || qos) { 1542 1486 vlan |= qos << VLAN_PRIO_SHIFT; 1543 - if (vf_cfg->vlan_tag != vlan) 1544 - status = be_cmd_set_hsw_config(adapter, vlan, vf + 1, 1545 - vf_cfg->if_handle, 0); 1487 + status = be_set_vf_tvt(adapter, vf, vlan); 1546 1488 } else { 1547 - /* Reset Transparent Vlan Tagging. */ 1548 - status = be_cmd_set_hsw_config(adapter, BE_RESET_VLAN_TAG_ID, 1549 - vf + 1, vf_cfg->if_handle, 0); 1489 + status = be_clear_vf_tvt(adapter, vf); 1550 1490 } 1551 1491 1552 1492 if (status) { 1553 1493 dev_err(&adapter->pdev->dev, 1554 - "VLAN %d config on VF %d failed : %#x\n", vlan, 1555 - vf, status); 1494 + "VLAN %d config on VF %d failed : %#x\n", vlan, vf, 1495 + status); 1556 1496 return be_cmd_status(status); 1557 1497 } 1558 1498 1559 1499 vf_cfg->vlan_tag = vlan; 1560 - 1561 1500 return 0; 1562 1501 } 1563 1502 ··· 2919 2868 } 2920 2869 } 2921 2870 } else { 2922 - pci_read_config_dword(adapter->pdev, 2923 - PCICFG_UE_STATUS_LOW, &ue_lo); 2924 - pci_read_config_dword(adapter->pdev, 2925 - PCICFG_UE_STATUS_HIGH, &ue_hi); 2926 - pci_read_config_dword(adapter->pdev, 2927 - PCICFG_UE_STATUS_LOW_MASK, &ue_lo_mask); 2928 - pci_read_config_dword(adapter->pdev, 2929 - PCICFG_UE_STATUS_HI_MASK, &ue_hi_mask); 2871 + ue_lo = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_LOW); 2872 + ue_hi = ioread32(adapter->pcicfg + PCICFG_UE_STATUS_HIGH); 2873 + ue_lo_mask = ioread32(adapter->pcicfg + 2874 + PCICFG_UE_STATUS_LOW_MASK); 2875 + ue_hi_mask = ioread32(adapter->pcicfg + 2876 + PCICFG_UE_STATUS_HI_MASK); 2930 2877 2931 2878 ue_lo = (ue_lo & ~ue_lo_mask); 2932 2879 ue_hi = (ue_hi & ~ue_hi_mask); ··· 3529 3480 u32 cap_flags, u32 vf) 3530 3481 { 3531 3482 u32 en_flags; 3532 - int status; 3533 3483 3534 3484 en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST | 3535 3485 BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS | ··· 3536 3488 3537 3489 en_flags &= cap_flags; 3538 3490 3539 - status = be_cmd_if_create(adapter, cap_flags, en_flags, 3540 - if_handle, vf); 3541 - 3542 - return status; 3491 + return be_cmd_if_create(adapter, cap_flags, en_flags, if_handle, vf); 3543 3492 } 3544 3493 3545 3494 static int be_vfs_if_create(struct be_adapter *adapter) ··· 3555 3510 status = be_cmd_get_profile_config(adapter, &res, 3556 3511 RESOURCE_LIMITS, 3557 3512 vf + 1); 3558 - if (!status) 3513 + if (!status) { 3559 3514 cap_flags = res.if_cap_flags; 3515 + /* Prevent VFs from enabling VLAN promiscuous 3516 + * mode 3517 + */ 3518 + cap_flags &= ~BE_IF_FLAGS_VLAN_PROMISCUOUS; 3519 + } 3560 3520 } 3561 3521 3562 3522 status = be_if_create(adapter, &vf_cfg->if_handle, ··· 3595 3545 struct device *dev = &adapter->pdev->dev; 3596 3546 struct be_vf_cfg *vf_cfg; 3597 3547 int status, old_vfs, vf; 3598 - u32 privileges; 3599 3548 3600 3549 old_vfs = pci_num_vf(adapter->pdev); 3601 3550 ··· 3624 3575 3625 3576 for_all_vfs(adapter, vf_cfg, vf) { 3626 3577 /* Allow VFs to programs MAC/VLAN filters */ 3627 - status = be_cmd_get_fn_privileges(adapter, &privileges, vf + 1); 3628 - if (!status && !(privileges & BE_PRIV_FILTMGMT)) { 3578 + status = be_cmd_get_fn_privileges(adapter, &vf_cfg->privileges, 3579 + vf + 1); 3580 + if (!status && !(vf_cfg->privileges & BE_PRIV_FILTMGMT)) { 3629 3581 status = be_cmd_set_fn_privileges(adapter, 3630 - privileges | 3582 + vf_cfg->privileges | 3631 3583 BE_PRIV_FILTMGMT, 3632 3584 vf + 1); 3633 - if (!status) 3585 + if (!status) { 3586 + vf_cfg->privileges |= BE_PRIV_FILTMGMT; 3634 3587 dev_info(dev, "VF%d has FILTMGMT privilege\n", 3635 3588 vf); 3589 + } 3636 3590 } 3637 3591 3638 3592 /* Allow full available bandwidth */ ··· 5206 5154 5207 5155 static int be_map_pci_bars(struct be_adapter *adapter) 5208 5156 { 5157 + struct pci_dev *pdev = adapter->pdev; 5209 5158 u8 __iomem *addr; 5210 5159 u32 sli_intf; 5211 5160 ··· 5216 5163 adapter->virtfn = (sli_intf & SLI_INTF_FT_MASK) ? 1 : 0; 5217 5164 5218 5165 if (BEx_chip(adapter) && be_physfn(adapter)) { 5219 - adapter->csr = pci_iomap(adapter->pdev, 2, 0); 5166 + adapter->csr = pci_iomap(pdev, 2, 0); 5220 5167 if (!adapter->csr) 5221 5168 return -ENOMEM; 5222 5169 } 5223 5170 5224 - addr = pci_iomap(adapter->pdev, db_bar(adapter), 0); 5171 + addr = pci_iomap(pdev, db_bar(adapter), 0); 5225 5172 if (!addr) 5226 5173 goto pci_map_err; 5227 5174 adapter->db = addr; 5175 + 5176 + if (skyhawk_chip(adapter) || BEx_chip(adapter)) { 5177 + if (be_physfn(adapter)) { 5178 + /* PCICFG is the 2nd BAR in BE2 */ 5179 + addr = pci_iomap(pdev, BE2_chip(adapter) ? 1 : 0, 0); 5180 + if (!addr) 5181 + goto pci_map_err; 5182 + adapter->pcicfg = addr; 5183 + } else { 5184 + adapter->pcicfg = adapter->db + SRIOV_VF_PCICFG_OFFSET; 5185 + } 5186 + } 5228 5187 5229 5188 be_roce_map_pci_bars(adapter); 5230 5189 return 0; 5231 5190 5232 5191 pci_map_err: 5233 - dev_err(&adapter->pdev->dev, "Error in mapping PCI BARs\n"); 5192 + dev_err(&pdev->dev, "Error in mapping PCI BARs\n"); 5234 5193 be_unmap_pci_bars(adapter); 5235 5194 return -ENOMEM; 5236 5195 }
+12 -25
drivers/net/ethernet/freescale/fec_main.c
··· 1189 1189 fec_enet_tx_queue(struct net_device *ndev, u16 queue_id) 1190 1190 { 1191 1191 struct fec_enet_private *fep; 1192 - struct bufdesc *bdp, *bdp_t; 1192 + struct bufdesc *bdp; 1193 1193 unsigned short status; 1194 1194 struct sk_buff *skb; 1195 1195 struct fec_enet_priv_tx_q *txq; 1196 1196 struct netdev_queue *nq; 1197 1197 int index = 0; 1198 - int i, bdnum; 1199 1198 int entries_free; 1200 1199 1201 1200 fep = netdev_priv(ndev); ··· 1215 1216 if (bdp == txq->cur_tx) 1216 1217 break; 1217 1218 1218 - bdp_t = bdp; 1219 - bdnum = 1; 1220 - index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep); 1221 - skb = txq->tx_skbuff[index]; 1222 - while (!skb) { 1223 - bdp_t = fec_enet_get_nextdesc(bdp_t, fep, queue_id); 1224 - index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep); 1225 - skb = txq->tx_skbuff[index]; 1226 - bdnum++; 1227 - } 1228 - if (skb_shinfo(skb)->nr_frags && 1229 - (status = bdp_t->cbd_sc) & BD_ENET_TX_READY) 1230 - break; 1219 + index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep); 1231 1220 1232 - for (i = 0; i < bdnum; i++) { 1233 - if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr)) 1234 - dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, 1235 - bdp->cbd_datlen, DMA_TO_DEVICE); 1236 - bdp->cbd_bufaddr = 0; 1237 - if (i < bdnum - 1) 1238 - bdp = fec_enet_get_nextdesc(bdp, fep, queue_id); 1239 - } 1221 + skb = txq->tx_skbuff[index]; 1240 1222 txq->tx_skbuff[index] = NULL; 1223 + if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr)) 1224 + dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, 1225 + bdp->cbd_datlen, DMA_TO_DEVICE); 1226 + bdp->cbd_bufaddr = 0; 1227 + if (!skb) { 1228 + bdp = fec_enet_get_nextdesc(bdp, fep, queue_id); 1229 + continue; 1230 + } 1241 1231 1242 1232 /* Check for errors. */ 1243 1233 if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC | ··· 1467 1479 1468 1480 vlan_packet_rcvd = true; 1469 1481 1470 - skb_copy_to_linear_data_offset(skb, VLAN_HLEN, 1471 - data, (2 * ETH_ALEN)); 1482 + memmove(skb->data + VLAN_HLEN, data, ETH_ALEN * 2); 1472 1483 skb_pull(skb, VLAN_HLEN); 1473 1484
+2 -2
drivers/net/ethernet/ibm/ibmveth.c
··· 1136 1136 ibmveth_replenish_task(adapter); 1137 1137 1138 1138 if (frames_processed < budget) { 1139 + napi_complete(napi); 1140 + 1139 1141 /* We think we are done - reenable interrupts, 1140 1142 * then check once more to make sure we are done. 1141 1143 */ ··· 1145 1143 VIO_IRQ_ENABLE); 1146 1144 1147 1145 BUG_ON(lpar_rc != H_SUCCESS); 1148 - 1149 - napi_complete(napi); 1150 1146 1151 1147 if (ibmveth_rxq_pending_buffer(adapter) && 1152 1148 napi_reschedule(napi)) {
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1698 1698 /* Schedule multicast task to populate multicast list */ 1699 1699 queue_work(mdev->workqueue, &priv->rx_mode_task); 1700 1700 1701 - mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 1702 - 1703 1701 #ifdef CONFIG_MLX4_EN_VXLAN 1704 1702 if (priv->mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 1705 1703 vxlan_get_rx_port(dev); ··· 2879 2881 if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS) 2880 2882 queue_delayed_work(mdev->workqueue, &priv->service_task, 2881 2883 SERVICE_TASK_DELAY); 2884 + 2885 + mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 2882 2886 2883 2887 return 0; 2884 2888
+1 -1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 453 453 unsigned long rx_chksum_none; 454 454 unsigned long rx_chksum_complete; 455 455 unsigned long tx_chksum_offload; 456 - #define NUM_PORT_STATS 9 456 + #define NUM_PORT_STATS 10 457 457 }; 458 458 459 459 struct mlx4_en_perf_stats {
+8 -12
drivers/net/ethernet/smsc/smc91x.c
··· 2248 2248 const struct of_device_id *match = NULL; 2249 2249 struct smc_local *lp; 2250 2250 struct net_device *ndev; 2251 - struct resource *res; 2251 + struct resource *res, *ires; 2252 2252 unsigned int __iomem *addr; 2253 2253 unsigned long irq_flags = SMC_IRQ_FLAGS; 2254 - unsigned long irq_resflags; 2255 2254 int ret; 2256 2255 2257 2256 ndev = alloc_etherdev(sizeof(struct smc_local)); ··· 2342 2343 goto out_free_netdev; 2343 2344 } 2344 2345 2345 - ndev->irq = platform_get_irq(pdev, 0); 2346 - if (ndev->irq <= 0) { 2346 + ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 2347 + if (!ires) { 2347 2348 ret = -ENODEV; 2348 2349 goto out_release_io; 2349 2350 } 2350 - /* 2351 - * If this platform does not specify any special irqflags, or if 2352 - * the resource supplies a trigger, override the irqflags with 2353 - * the trigger flags from the resource. 2354 - */ 2355 - irq_resflags = irqd_get_trigger_type(irq_get_irq_data(ndev->irq)); 2356 - if (irq_flags == -1 || irq_resflags & IRQF_TRIGGER_MASK) 2357 - irq_flags = irq_resflags & IRQF_TRIGGER_MASK; 2351 + 2352 + ndev->irq = ires->start; 2353 + 2354 + if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK) 2355 + irq_flags = ires->flags & IRQF_TRIGGER_MASK; 2358 2356 2359 2357 ret = smc_request_attrib(pdev, ndev); 2360 2358 if (ret)
+1 -1
drivers/net/ethernet/wiznet/w5100.c
··· 498 498 } 499 499 500 500 if (rx_count < budget) { 501 + napi_complete(napi); 501 502 w5100_write(priv, W5100_IMR, IR_S0); 502 503 mmiowb(); 503 - napi_complete(napi); 504 504 } 505 505 506 506 return rx_count;
+1 -1
drivers/net/ethernet/wiznet/w5300.c
··· 418 418 } 419 419 420 420 if (rx_count < budget) { 421 + napi_complete(napi); 421 422 w5300_write(priv, W5300_IMR, IR_S0); 422 423 mmiowb(); 423 - napi_complete(napi); 424 424 } 425 425 426 426 return rx_count;
+10 -1
drivers/net/usb/cx82310_eth.c
··· 300 300 .tx_fixup = cx82310_tx_fixup, 301 301 }; 302 302 303 + #define USB_DEVICE_CLASS(vend, prod, cl, sc, pr) \ 304 + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | \ 305 + USB_DEVICE_ID_MATCH_DEV_INFO, \ 306 + .idVendor = (vend), \ 307 + .idProduct = (prod), \ 308 + .bDeviceClass = (cl), \ 309 + .bDeviceSubClass = (sc), \ 310 + .bDeviceProtocol = (pr) 311 + 303 312 static const struct usb_device_id products[] = { 304 313 { 305 - USB_DEVICE_AND_INTERFACE_INFO(0x0572, 0xcb01, 0xff, 0, 0), 314 + USB_DEVICE_CLASS(0x0572, 0xcb01, 0xff, 0, 0), 306 315 .driver_info = (unsigned long) &cx82310_info 307 316 }, 308 317 { },
+4 -5
drivers/net/virtio_net.c
··· 1448 1448 { 1449 1449 int i; 1450 1450 1451 - for (i = 0; i < vi->max_queue_pairs; i++) 1451 + for (i = 0; i < vi->max_queue_pairs; i++) { 1452 + napi_hash_del(&vi->rq[i].napi); 1452 1453 netif_napi_del(&vi->rq[i].napi); 1454 + } 1453 1455 1454 1456 kfree(vi->rq); 1455 1457 kfree(vi->sq); ··· 1950 1948 cancel_delayed_work_sync(&vi->refill); 1951 1949 1952 1950 if (netif_running(vi->dev)) { 1953 - for (i = 0; i < vi->max_queue_pairs; i++) { 1951 + for (i = 0; i < vi->max_queue_pairs; i++) 1954 1952 napi_disable(&vi->rq[i].napi); 1955 - napi_hash_del(&vi->rq[i].napi); 1956 - netif_napi_del(&vi->rq[i].napi); 1957 - } 1958 1953 } 1959 1954 1960 1955 remove_vq_common(vi);
+2 -2
drivers/net/vxlan.c
··· 1203 1203 goto drop; 1204 1204 1205 1205 flags &= ~VXLAN_HF_RCO; 1206 - vni &= VXLAN_VID_MASK; 1206 + vni &= VXLAN_VNI_MASK; 1207 1207 } 1208 1208 1209 1209 /* For backwards compatibility, only allow reserved fields to be ··· 1224 1224 flags &= ~VXLAN_GBP_USED_BITS; 1225 1225 } 1226 1226 1227 - if (flags || (vni & ~VXLAN_VID_MASK)) { 1227 + if (flags || vni & ~VXLAN_VNI_MASK) { 1228 1228 /* If there are any unprocessed flags remaining treat 1229 1229 * this as a malformed packet. This behavior diverges from 1230 1230 * VXLAN RFC (RFC7348) which stipulates that bits in reserved
+1
drivers/net/wireless/b43/main.c
··· 5370 5370 case 0x432a: /* BCM4321 */ 5371 5371 case 0x432d: /* BCM4322 */ 5372 5372 case 0x4352: /* BCM43222 */ 5373 + case 0x435a: /* BCM43228 */ 5373 5374 case 0x4333: /* BCM4331 */ 5374 5375 case 0x43a2: /* BCM4360 */ 5375 5376 case 0x43b3: /* BCM4352 */
+12 -3
drivers/net/wireless/brcm80211/brcmfmac/vendor.c
··· 39 39 void *dcmd_buf = NULL, *wr_pointer; 40 40 u16 msglen, maxmsglen = PAGE_SIZE - 0x100; 41 41 42 - brcmf_dbg(TRACE, "cmd %x set %d len %d\n", cmdhdr->cmd, cmdhdr->set, 43 - cmdhdr->len); 42 + if (len < sizeof(*cmdhdr)) { 43 + brcmf_err("vendor command too short: %d\n", len); 44 + return -EINVAL; 45 + } 44 46 45 47 vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev); 46 48 ifp = vif->ifp; 47 49 48 - len -= sizeof(struct brcmf_vndr_dcmd_hdr); 50 + brcmf_dbg(TRACE, "ifidx=%d, cmd=%d\n", ifp->ifidx, cmdhdr->cmd); 51 + 52 + if (cmdhdr->offset > len) { 53 + brcmf_err("bad buffer offset %d > %d\n", cmdhdr->offset, len); 54 + return -EINVAL; 55 + } 56 + 57 + len -= cmdhdr->offset; 49 58 ret_len = cmdhdr->len; 50 59 if (ret_len > 0 || len > 0) { 51 60 if (len > BRCMF_DCMD_MAXLEN) {
+4 -2
drivers/net/wireless/iwlwifi/iwl-1000.c
··· 95 95 .nvm_calib_ver = EEPROM_1000_TX_POWER_VERSION, \ 96 96 .base_params = &iwl1000_base_params, \ 97 97 .eeprom_params = &iwl1000_eeprom_params, \ 98 - .led_mode = IWL_LED_BLINK 98 + .led_mode = IWL_LED_BLINK, \ 99 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 99 100 100 101 const struct iwl_cfg iwl1000_bgn_cfg = { 101 102 .name = "Intel(R) Centrino(R) Wireless-N 1000 BGN", ··· 122 121 .base_params = &iwl1000_base_params, \ 123 122 .eeprom_params = &iwl1000_eeprom_params, \ 124 123 .led_mode = IWL_LED_RF_STATE, \ 125 - .rx_with_siso_diversity = true 124 + .rx_with_siso_diversity = true, \ 125 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 126 126 127 127 const struct iwl_cfg iwl100_bgn_cfg = { 128 128 .name = "Intel(R) Centrino(R) Wireless-N 100 BGN",
+9 -4
drivers/net/wireless/iwlwifi/iwl-2000.c
··· 123 123 .nvm_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ 124 124 .base_params = &iwl2000_base_params, \ 125 125 .eeprom_params = &iwl20x0_eeprom_params, \ 126 - .led_mode = IWL_LED_RF_STATE 126 + .led_mode = IWL_LED_RF_STATE, \ 127 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 128 + 127 129 128 130 const struct iwl_cfg iwl2000_2bgn_cfg = { 129 131 .name = "Intel(R) Centrino(R) Wireless-N 2200 BGN", ··· 151 149 .nvm_calib_ver = EEPROM_2000_TX_POWER_VERSION, \ 152 150 .base_params = &iwl2030_base_params, \ 153 151 .eeprom_params = &iwl20x0_eeprom_params, \ 154 - .led_mode = IWL_LED_RF_STATE 152 + .led_mode = IWL_LED_RF_STATE, \ 153 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 155 154 156 155 const struct iwl_cfg iwl2030_2bgn_cfg = { 157 156 .name = "Intel(R) Centrino(R) Wireless-N 2230 BGN", ··· 173 170 .base_params = &iwl2000_base_params, \ 174 171 .eeprom_params = &iwl20x0_eeprom_params, \ 175 172 .led_mode = IWL_LED_RF_STATE, \ 176 - .rx_with_siso_diversity = true 173 + .rx_with_siso_diversity = true, \ 174 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 177 175 178 176 const struct iwl_cfg iwl105_bgn_cfg = { 179 177 .name = "Intel(R) Centrino(R) Wireless-N 105 BGN", ··· 201 197 .base_params = &iwl2030_base_params, \ 202 198 .eeprom_params = &iwl20x0_eeprom_params, \ 203 199 .led_mode = IWL_LED_RF_STATE, \ 204 - .rx_with_siso_diversity = true 200 + .rx_with_siso_diversity = true, \ 201 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 205 202 206 203 const struct iwl_cfg iwl135_bgn_cfg = { 207 204 .name = "Intel(R) Centrino(R) Wireless-N 135 BGN",
+4 -2
drivers/net/wireless/iwlwifi/iwl-5000.c
··· 93 93 .nvm_calib_ver = EEPROM_5000_TX_POWER_VERSION, \ 94 94 .base_params = &iwl5000_base_params, \ 95 95 .eeprom_params = &iwl5000_eeprom_params, \ 96 - .led_mode = IWL_LED_BLINK 96 + .led_mode = IWL_LED_BLINK, \ 97 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 97 98 98 99 const struct iwl_cfg iwl5300_agn_cfg = { 99 100 .name = "Intel(R) Ultimate N WiFi Link 5300 AGN", ··· 159 158 .base_params = &iwl5000_base_params, \ 160 159 .eeprom_params = &iwl5000_eeprom_params, \ 161 160 .led_mode = IWL_LED_BLINK, \ 162 - .internal_wimax_coex = true 161 + .internal_wimax_coex = true, \ 162 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 163 163 164 164 const struct iwl_cfg iwl5150_agn_cfg = { 165 165 .name = "Intel(R) WiMAX/WiFi Link 5150 AGN",
+12 -6
drivers/net/wireless/iwlwifi/iwl-6000.c
··· 145 145 .nvm_calib_ver = EEPROM_6005_TX_POWER_VERSION, \ 146 146 .base_params = &iwl6000_g2_base_params, \ 147 147 .eeprom_params = &iwl6000_eeprom_params, \ 148 - .led_mode = IWL_LED_RF_STATE 148 + .led_mode = IWL_LED_RF_STATE, \ 149 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 149 150 150 151 const struct iwl_cfg iwl6005_2agn_cfg = { 151 152 .name = "Intel(R) Centrino(R) Advanced-N 6205 AGN", ··· 200 199 .nvm_calib_ver = EEPROM_6030_TX_POWER_VERSION, \ 201 200 .base_params = &iwl6000_g2_base_params, \ 202 201 .eeprom_params = &iwl6000_eeprom_params, \ 203 - .led_mode = IWL_LED_RF_STATE 202 + .led_mode = IWL_LED_RF_STATE, \ 203 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 204 204 205 205 const struct iwl_cfg iwl6030_2agn_cfg = { 206 206 .name = "Intel(R) Centrino(R) Advanced-N 6230 AGN", ··· 237 235 .nvm_calib_ver = EEPROM_6030_TX_POWER_VERSION, \ 238 236 .base_params = &iwl6000_g2_base_params, \ 239 237 .eeprom_params = &iwl6000_eeprom_params, \ 240 - .led_mode = IWL_LED_RF_STATE 238 + .led_mode = IWL_LED_RF_STATE, \ 239 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 241 240 242 241 const struct iwl_cfg iwl6035_2agn_cfg = { 243 242 .name = "Intel(R) Centrino(R) Advanced-N 6235 AGN", ··· 293 290 .nvm_calib_ver = EEPROM_6000_TX_POWER_VERSION, \ 294 291 .base_params = &iwl6000_base_params, \ 295 292 .eeprom_params = &iwl6000_eeprom_params, \ 296 - .led_mode = IWL_LED_BLINK 293 + .led_mode = IWL_LED_BLINK, \ 294 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 297 295 298 296 const struct iwl_cfg iwl6000i_2agn_cfg = { 299 297 .name = "Intel(R) Centrino(R) Advanced-N 6200 AGN", ··· 326 322 .base_params = &iwl6050_base_params, \ 327 323 .eeprom_params = &iwl6000_eeprom_params, \ 328 324 .led_mode = IWL_LED_BLINK, \ 329 - .internal_wimax_coex = true 325 + .internal_wimax_coex = true, \ 326 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 330 327 331 328 const struct iwl_cfg iwl6050_2agn_cfg = { 332 329 .name = "Intel(R) Centrino(R) Advanced-N + WiMAX 6250 AGN", ··· 352 347 .base_params = &iwl6050_base_params, \ 353 348 .eeprom_params = &iwl6000_eeprom_params, \ 354 349 .led_mode = IWL_LED_BLINK, \ 355 - .internal_wimax_coex = true 350 + .internal_wimax_coex = true, \ 351 + .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K 356 352 357 353 const struct iwl_cfg iwl6150_bgn_cfg = { 358 354 .name = "Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN",
+2 -1
drivers/net/wireless/iwlwifi/mvm/coex.c
··· 793 793 if (!vif->bss_conf.assoc) 794 794 smps_mode = IEEE80211_SMPS_AUTOMATIC; 795 795 796 - if (IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status, 796 + if (mvmvif->phy_ctxt && 797 + IWL_COEX_IS_RRC_ON(mvm->last_bt_notif.ttc_rrc_status, 797 798 mvmvif->phy_ctxt->id)) 798 799 smps_mode = IEEE80211_SMPS_AUTOMATIC; 799 800
+2 -1
drivers/net/wireless/iwlwifi/mvm/coex_legacy.c
··· 832 832 if (!vif->bss_conf.assoc) 833 833 smps_mode = IEEE80211_SMPS_AUTOMATIC; 834 834 835 - if (data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id)) 835 + if (mvmvif->phy_ctxt && 836 + data->notif->rrc_enabled & BIT(mvmvif->phy_ctxt->id)) 836 837 smps_mode = IEEE80211_SMPS_AUTOMATIC; 837 838 838 839 IWL_DEBUG_COEX(data->mvm,
+35 -3
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 402 402 hw->wiphy->bands[IEEE80211_BAND_5GHZ] = 403 403 &mvm->nvm_data->bands[IEEE80211_BAND_5GHZ]; 404 404 405 - if (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_BEAMFORMER) 405 + if ((mvm->fw->ucode_capa.capa[0] & 406 + IWL_UCODE_TLV_CAPA_BEAMFORMER) && 407 + (mvm->fw->ucode_capa.api[0] & 408 + IWL_UCODE_TLV_API_LQ_SS_PARAMS)) 406 409 hw->wiphy->bands[IEEE80211_BAND_5GHZ]->vht_cap.cap |= 407 410 IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE; 408 411 } ··· 2274 2271 2275 2272 mutex_lock(&mvm->mutex); 2276 2273 2277 - iwl_mvm_cancel_scan(mvm); 2274 + /* Due to a race condition, it's possible that mac80211 asks 2275 + * us to stop a hw_scan when it's already stopped. This can 2276 + * happen, for instance, if we stopped the scan ourselves, 2277 + * called ieee80211_scan_completed() and the userspace called 2278 + * cancel scan scan before ieee80211_scan_work() could run. 2279 + * To handle that, simply return if the scan is not running. 2280 + */ 2281 + /* FIXME: for now, we ignore this race for UMAC scans, since 2282 + * they don't set the scan_status. 2283 + */ 2284 + if ((mvm->scan_status == IWL_MVM_SCAN_OS) || 2285 + (mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN)) 2286 + iwl_mvm_cancel_scan(mvm); 2278 2287 2279 2288 mutex_unlock(&mvm->mutex); 2280 2289 } ··· 2624 2609 int ret; 2625 2610 2626 2611 mutex_lock(&mvm->mutex); 2612 + 2613 + /* Due to a race condition, it's possible that mac80211 asks 2614 + * us to stop a sched_scan when it's already stopped. This 2615 + * can happen, for instance, if we stopped the scan ourselves, 2616 + * called ieee80211_sched_scan_stopped() and the userspace called 2617 + * stop sched scan scan before ieee80211_sched_scan_stopped_work() 2618 + * could run. To handle this, simply return if the scan is 2619 + * not running. 2620 + */ 2621 + /* FIXME: for now, we ignore this race for UMAC scans, since 2622 + * they don't set the scan_status. 2623 + */ 2624 + if (mvm->scan_status != IWL_MVM_SCAN_SCHED && 2625 + !(mvm->fw->ucode_capa.capa[0] & IWL_UCODE_TLV_CAPA_UMAC_SCAN)) { 2626 + mutex_unlock(&mvm->mutex); 2627 + return 0; 2628 + } 2629 + 2627 2630 ret = iwl_mvm_scan_offload_stop(mvm, false); 2628 2631 mutex_unlock(&mvm->mutex); 2629 2632 iwl_mvm_wait_for_async_handlers(mvm); 2630 2633 2631 2634 return ret; 2632 - 2633 2635 } 2634 2636 2635 2637 static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
+6 -7
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 587 587 if (mvm->scan_status == IWL_MVM_SCAN_NONE) 588 588 return 0; 589 589 590 - if (iwl_mvm_is_radio_killed(mvm)) 590 + if (iwl_mvm_is_radio_killed(mvm)) { 591 + ret = 0; 591 592 goto out; 593 + } 592 594 593 595 iwl_init_notification_wait(&mvm->notif_wait, &wait_scan_done, 594 596 scan_done_notif, ··· 602 600 IWL_DEBUG_SCAN(mvm, "Send stop %sscan failed %d\n", 603 601 sched ? "offloaded " : "", ret); 604 602 iwl_remove_notification(&mvm->notif_wait, &wait_scan_done); 605 - return ret; 603 + goto out; 606 604 } 607 605 608 606 IWL_DEBUG_SCAN(mvm, "Successfully sent stop %sscan\n", 609 607 sched ? "offloaded " : ""); 610 608 611 609 ret = iwl_wait_notification(&mvm->notif_wait, &wait_scan_done, 1 * HZ); 612 - if (ret) 613 - return ret; 614 - 610 + out: 615 611 /* 616 612 * Clear the scan status so the next scan requests will succeed. This 617 613 * also ensures the Rx handler doesn't do anything, as the scan was ··· 619 619 if (mvm->scan_status == IWL_MVM_SCAN_OS) 620 620 iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 621 621 622 - out: 623 622 mvm->scan_status = IWL_MVM_SCAN_NONE; 624 623 625 624 if (notify) { ··· 628 629 ieee80211_scan_completed(mvm->hw, true); 629 630 } 630 631 631 - return 0; 632 + return ret; 632 633 } 633 634 634 635 static void iwl_mvm_unified_scan_fill_tx_cmd(struct iwl_mvm *mvm,
+3 -6
drivers/net/wireless/iwlwifi/mvm/time-event.c
··· 750 750 * request 751 751 */ 752 752 list_for_each_entry(te_data, &mvm->time_event_list, list) { 753 - if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE && 754 - te_data->running) { 753 + if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) { 755 754 mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 756 755 is_p2p = true; 757 756 goto remove_te; ··· 765 766 * request 766 767 */ 767 768 list_for_each_entry(te_data, &mvm->aux_roc_te_list, list) { 768 - if (te_data->running) { 769 - mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 770 - goto remove_te; 771 - } 769 + mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif); 770 + goto remove_te; 772 771 } 773 772 774 773 remove_te:
+5 -2
drivers/net/wireless/rtlwifi/base.c
··· 1386 1386 } 1387 1387 1388 1388 return true; 1389 - } else if (0x86DD == ether_type) { 1390 - return true; 1389 + } else if (ETH_P_IPV6 == ether_type) { 1390 + /* TODO: Handle any IPv6 cases that need special handling. 1391 + * For now, always return false 1392 + */ 1393 + goto end; 1391 1394 } 1392 1395 1393 1396 end:
+12 -11
drivers/net/xen-netback/netback.c
··· 96 96 static void make_tx_response(struct xenvif_queue *queue, 97 97 struct xen_netif_tx_request *txp, 98 98 s8 st); 99 + static void push_tx_responses(struct xenvif_queue *queue); 99 100 100 101 static inline int tx_work_todo(struct xenvif_queue *queue); 101 102 ··· 656 655 unsigned long flags; 657 656 658 657 do { 659 - int notify; 660 - 661 658 spin_lock_irqsave(&queue->response_lock, flags); 662 659 make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR); 663 - RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify); 660 + push_tx_responses(queue); 664 661 spin_unlock_irqrestore(&queue->response_lock, flags); 665 - if (notify) 666 - notify_remote_via_irq(queue->tx_irq); 667 - 668 662 if (cons == end) 669 663 break; 670 664 txp = RING_GET_REQUEST(&queue->tx, cons++); ··· 1651 1655 { 1652 1656 struct pending_tx_info *pending_tx_info; 1653 1657 pending_ring_idx_t index; 1654 - int notify; 1655 1658 unsigned long flags; 1656 1659 1657 1660 pending_tx_info = &queue->pending_tx_info[pending_idx]; ··· 1666 1671 index = pending_index(queue->pending_prod++); 1667 1672 queue->pending_ring[index] = pending_idx; 1668 1673 1669 - RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify); 1674 + push_tx_responses(queue); 1670 1675 1671 1676 spin_unlock_irqrestore(&queue->response_lock, flags); 1672 - 1673 - if (notify) 1674 - notify_remote_via_irq(queue->tx_irq); 1675 1677 } 1676 1678 1677 1679 ··· 1687 1695 RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL; 1688 1696 1689 1697 queue->tx.rsp_prod_pvt = ++i; 1698 + } 1699 + 1700 + static void push_tx_responses(struct xenvif_queue *queue) 1701 + { 1702 + int notify; 1703 + 1704 + RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify); 1705 + if (notify) 1706 + notify_remote_via_irq(queue->tx_irq); 1690 1707 } 1691 1708 1692 1709 static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
+1 -2
drivers/of/Kconfig
··· 84 84 bool 85 85 86 86 config OF_OVERLAY 87 - bool 88 - depends on OF 87 + bool "Device Tree overlays" 89 88 select OF_DYNAMIC 90 89 select OF_RESOLVE 91 90
+18 -9
drivers/of/base.c
··· 714 714 const char *path) 715 715 { 716 716 struct device_node *child; 717 - int len = strchrnul(path, '/') - path; 718 - int term; 717 + int len; 718 + const char *end; 719 719 720 + end = strchr(path, ':'); 721 + if (!end) 722 + end = strchrnul(path, '/'); 723 + 724 + len = end - path; 720 725 if (!len) 721 726 return NULL; 722 - 723 - term = strchrnul(path, ':') - path; 724 - if (term < len) 725 - len = term; 726 727 727 728 __for_each_child_of_node(parent, child) { 728 729 const char *name = strrchr(child->full_name, '/'); ··· 769 768 770 769 /* The path could begin with an alias */ 771 770 if (*path != '/') { 772 - char *p = strchrnul(path, '/'); 773 - int len = separator ? separator - path : p - path; 771 + int len; 772 + const char *p = separator; 773 + 774 + if (!p) 775 + p = strchrnul(path, '/'); 776 + len = p - path; 774 777 775 778 /* of_aliases must not be NULL */ 776 779 if (!of_aliases) ··· 799 794 path++; /* Increment past '/' delimiter */ 800 795 np = __of_find_node_by_path(np, path); 801 796 path = strchrnul(path, '/'); 797 + if (separator && separator < path) 798 + break; 802 799 } 803 800 raw_spin_unlock_irqrestore(&devtree_lock, flags); 804 801 return np; ··· 1893 1886 name = of_get_property(of_chosen, "linux,stdout-path", NULL); 1894 1887 if (IS_ENABLED(CONFIG_PPC) && !name) 1895 1888 name = of_get_property(of_aliases, "stdout", NULL); 1896 - if (name) 1889 + if (name) { 1897 1890 of_stdout = of_find_node_opts_by_path(name, &of_stdout_options); 1891 + add_preferred_console("stdout-path", 0, NULL); 1892 + } 1898 1893 } 1899 1894 1900 1895 if (!of_aliases)
+2 -1
drivers/of/overlay.c
··· 19 19 #include <linux/string.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/err.h> 22 + #include <linux/idr.h> 22 23 23 24 #include "of_private.h" 24 25 ··· 86 85 struct device_node *target, struct device_node *child) 87 86 { 88 87 const char *cname; 89 - struct device_node *tchild, *grandchild; 88 + struct device_node *tchild; 90 89 int ret = 0; 91 90 92 91 cname = kbasename(child->full_name);
+19 -9
drivers/of/unittest.c
··· 92 92 "option path test failed\n"); 93 93 of_node_put(np); 94 94 95 + np = of_find_node_opts_by_path("/testcase-data:test/option", &options); 96 + selftest(np && !strcmp("test/option", options), 97 + "option path test, subcase #1 failed\n"); 98 + of_node_put(np); 99 + 95 100 np = of_find_node_opts_by_path("/testcase-data:testoption", NULL); 96 101 selftest(np, "NULL option path test failed\n"); 97 102 of_node_put(np); ··· 105 100 &options); 106 101 selftest(np && !strcmp("testaliasoption", options), 107 102 "option alias path test failed\n"); 103 + of_node_put(np); 104 + 105 + np = of_find_node_opts_by_path("testcase-alias:test/alias/option", 106 + &options); 107 + selftest(np && !strcmp("test/alias/option", options), 108 + "option alias path test, subcase #1 failed\n"); 108 109 of_node_put(np); 109 110 110 111 np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL); ··· 389 378 rc = of_property_match_string(np, "phandle-list-names", "first"); 390 379 selftest(rc == 0, "first expected:0 got:%i\n", rc); 391 380 rc = of_property_match_string(np, "phandle-list-names", "second"); 392 - selftest(rc == 1, "second expected:0 got:%i\n", rc); 381 + selftest(rc == 1, "second expected:1 got:%i\n", rc); 393 382 rc = of_property_match_string(np, "phandle-list-names", "third"); 394 - selftest(rc == 2, "third expected:0 got:%i\n", rc); 383 + selftest(rc == 2, "third expected:2 got:%i\n", rc); 395 384 rc = of_property_match_string(np, "phandle-list-names", "fourth"); 396 385 selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc); 397 386 rc = of_property_match_string(np, "missing-property", "blah"); ··· 489 478 struct device_node *n1, *n2, *n21, *nremove, *parent, *np; 490 479 struct of_changeset chgset; 491 480 492 - of_changeset_init(&chgset); 493 481 n1 = __of_node_dup(NULL, "/testcase-data/changeset/n1"); 494 482 selftest(n1, "testcase setup failure\n"); 495 483 n2 = __of_node_dup(NULL, "/testcase-data/changeset/n2"); ··· 989 979 return pdev != NULL; 990 
980 } 991 981 992 - #if IS_ENABLED(CONFIG_I2C) 982 + #if IS_BUILTIN(CONFIG_I2C) 993 983 994 984 /* get the i2c client device instantiated at the path */ 995 985 static struct i2c_client *of_path_to_i2c_client(const char *path) ··· 1455 1445 return; 1456 1446 } 1457 1447 1458 - #if IS_ENABLED(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY) 1448 + #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY) 1459 1449 1460 1450 struct selftest_i2c_bus_data { 1461 1451 struct platform_device *pdev; ··· 1594 1584 .id_table = selftest_i2c_dev_id, 1595 1585 }; 1596 1586 1597 - #if IS_ENABLED(CONFIG_I2C_MUX) 1587 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1598 1588 1599 1589 struct selftest_i2c_mux_data { 1600 1590 int nchans; ··· 1705 1695 "could not register selftest i2c bus driver\n")) 1706 1696 return ret; 1707 1697 1708 - #if IS_ENABLED(CONFIG_I2C_MUX) 1698 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1709 1699 ret = i2c_add_driver(&selftest_i2c_mux_driver); 1710 1700 if (selftest(ret == 0, 1711 1701 "could not register selftest i2c mux driver\n")) ··· 1717 1707 1718 1708 static void of_selftest_overlay_i2c_cleanup(void) 1719 1709 { 1720 - #if IS_ENABLED(CONFIG_I2C_MUX) 1710 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1721 1711 i2c_del_driver(&selftest_i2c_mux_driver); 1722 1712 #endif 1723 1713 platform_driver_unregister(&selftest_i2c_bus_driver); ··· 1824 1814 of_selftest_overlay_10(); 1825 1815 of_selftest_overlay_11(); 1826 1816 1827 - #if IS_ENABLED(CONFIG_I2C) 1817 + #if IS_BUILTIN(CONFIG_I2C) 1828 1818 if (selftest(of_selftest_overlay_i2c_init() == 0, "i2c init failed\n")) 1829 1819 goto out; 1830 1820
+2 -2
drivers/pci/host/pci-xgene.c
··· 127 127 return false; 128 128 } 129 129 130 - static int xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 130 + static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 131 131 int offset) 132 132 { 133 133 struct xgene_pcie_port *port = bus->sysdata; ··· 137 137 return NULL; 138 138 139 139 xgene_pcie_set_rtdid_reg(bus, devfn); 140 - return xgene_pcie_get_cfg_base(bus); 140 + return xgene_pcie_get_cfg_base(bus) + offset; 141 141 } 142 142 143 143 static struct pci_ops xgene_pcie_ops = {
+3 -2
drivers/pci/pci-sysfs.c
··· 521 521 struct pci_dev *pdev = to_pci_dev(dev); 522 522 char *driver_override, *old = pdev->driver_override, *cp; 523 523 524 - if (count > PATH_MAX) 524 + /* We need to keep extra room for a newline */ 525 + if (count >= (PAGE_SIZE - 1)) 525 526 return -EINVAL; 526 527 527 528 driver_override = kstrndup(buf, count, GFP_KERNEL); ··· 550 549 { 551 550 struct pci_dev *pdev = to_pci_dev(dev); 552 551 553 - return sprintf(buf, "%s\n", pdev->driver_override); 552 + return snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override); 554 553 } 555 554 static DEVICE_ATTR_RW(driver_override); 556 555
+188 -66
drivers/pinctrl/intel/pinctrl-baytrail.c
··· 66 66 #define BYT_DIR_MASK (BIT(1) | BIT(2)) 67 67 #define BYT_TRIG_MASK (BIT(26) | BIT(25) | BIT(24)) 68 68 69 + #define BYT_CONF0_RESTORE_MASK (BYT_DIRECT_IRQ_EN | BYT_TRIG_MASK | \ 70 + BYT_PIN_MUX) 71 + #define BYT_VAL_RESTORE_MASK (BYT_DIR_MASK | BYT_LEVEL) 72 + 69 73 #define BYT_NGPIO_SCORE 102 70 74 #define BYT_NGPIO_NCORE 28 71 75 #define BYT_NGPIO_SUS 44 ··· 138 134 }, 139 135 }; 140 136 137 + struct byt_gpio_pin_context { 138 + u32 conf0; 139 + u32 val; 140 + }; 141 + 141 142 struct byt_gpio { 142 143 struct gpio_chip chip; 143 144 struct platform_device *pdev; 144 145 spinlock_t lock; 145 146 void __iomem *reg_base; 146 147 struct pinctrl_gpio_range *range; 148 + struct byt_gpio_pin_context *saved_context; 147 149 }; 148 150 149 151 #define to_byt_gpio(c) container_of(c, struct byt_gpio, chip) ··· 168 158 return vg->reg_base + reg_offset + reg; 169 159 } 170 160 171 - static bool is_special_pin(struct byt_gpio *vg, unsigned offset) 161 + static void byt_gpio_clear_triggering(struct byt_gpio *vg, unsigned offset) 162 + { 163 + void __iomem *reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG); 164 + unsigned long flags; 165 + u32 value; 166 + 167 + spin_lock_irqsave(&vg->lock, flags); 168 + value = readl(reg); 169 + value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL); 170 + writel(value, reg); 171 + spin_unlock_irqrestore(&vg->lock, flags); 172 + } 173 + 174 + static u32 byt_get_gpio_mux(struct byt_gpio *vg, unsigned offset) 172 175 { 173 176 /* SCORE pin 92-93 */ 174 177 if (!strcmp(vg->range->name, BYT_SCORE_ACPI_UID) && 175 178 offset >= 92 && offset <= 93) 176 - return true; 179 + return 1; 177 180 178 181 /* SUS pin 11-21 */ 179 182 if (!strcmp(vg->range->name, BYT_SUS_ACPI_UID) && 180 183 offset >= 11 && offset <= 21) 181 - return true; 184 + return 1; 182 185 183 - return false; 186 + return 0; 184 187 } 185 188 186 189 static int byt_gpio_request(struct gpio_chip *chip, unsigned offset) 187 190 { 188 191 struct byt_gpio *vg = 
to_byt_gpio(chip); 189 192 void __iomem *reg = byt_gpio_reg(chip, offset, BYT_CONF0_REG); 190 - u32 value; 191 - bool special; 193 + u32 value, gpio_mux; 192 194 193 195 /* 194 196 * In most cases, func pin mux 000 means GPIO function. 195 197 * But, some pins may have func pin mux 001 represents 196 - * GPIO function. Only allow user to export pin with 197 - * func pin mux preset as GPIO function by BIOS/FW. 198 + * GPIO function. 199 + * 200 + * Because there are devices out there where some pins were not 201 + * configured correctly we allow changing the mux value from 202 + * request (but print out warning about that). 198 203 */ 199 204 value = readl(reg) & BYT_PIN_MUX; 200 - special = is_special_pin(vg, offset); 201 - if ((special && value != 1) || (!special && value)) { 202 - dev_err(&vg->pdev->dev, 203 - "pin %u cannot be used as GPIO.\n", offset); 204 - return -EINVAL; 205 + gpio_mux = byt_get_gpio_mux(vg, offset); 206 + if (WARN_ON(gpio_mux != value)) { 207 + unsigned long flags; 208 + 209 + spin_lock_irqsave(&vg->lock, flags); 210 + value = readl(reg) & ~BYT_PIN_MUX; 211 + value |= gpio_mux; 212 + writel(value, reg); 213 + spin_unlock_irqrestore(&vg->lock, flags); 214 + 215 + dev_warn(&vg->pdev->dev, 216 + "pin %u forcibly re-configured as GPIO\n", offset); 205 217 } 206 218 207 219 pm_runtime_get(&vg->pdev->dev); ··· 234 202 static void byt_gpio_free(struct gpio_chip *chip, unsigned offset) 235 203 { 236 204 struct byt_gpio *vg = to_byt_gpio(chip); 237 - void __iomem *reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG); 238 - u32 value; 239 205 240 - /* clear interrupt triggering */ 241 - value = readl(reg); 242 - value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL); 243 - writel(value, reg); 244 - 206 + byt_gpio_clear_triggering(vg, offset); 245 207 pm_runtime_put(&vg->pdev->dev); 246 208 } 247 209 ··· 262 236 value &= ~(BYT_DIRECT_IRQ_EN | BYT_TRIG_POS | BYT_TRIG_NEG | 263 237 BYT_TRIG_LVL); 264 238 265 - switch (type) { 266 - case 
IRQ_TYPE_LEVEL_HIGH: 267 - value |= BYT_TRIG_LVL; 268 - case IRQ_TYPE_EDGE_RISING: 269 - value |= BYT_TRIG_POS; 270 - break; 271 - case IRQ_TYPE_LEVEL_LOW: 272 - value |= BYT_TRIG_LVL; 273 - case IRQ_TYPE_EDGE_FALLING: 274 - value |= BYT_TRIG_NEG; 275 - break; 276 - case IRQ_TYPE_EDGE_BOTH: 277 - value |= (BYT_TRIG_NEG | BYT_TRIG_POS); 278 - break; 279 - } 280 239 writel(value, reg); 240 + 241 + if (type & IRQ_TYPE_EDGE_BOTH) 242 + __irq_set_handler_locked(d->irq, handle_edge_irq); 243 + else if (type & IRQ_TYPE_LEVEL_MASK) 244 + __irq_set_handler_locked(d->irq, handle_level_irq); 281 245 282 246 spin_unlock_irqrestore(&vg->lock, flags); 283 247 ··· 426 410 struct irq_data *data = irq_desc_get_irq_data(desc); 427 411 struct byt_gpio *vg = to_byt_gpio(irq_desc_get_handler_data(desc)); 428 412 struct irq_chip *chip = irq_data_get_irq_chip(data); 429 - u32 base, pin, mask; 413 + u32 base, pin; 430 414 void __iomem *reg; 431 - u32 pending; 415 + unsigned long pending; 432 416 unsigned virq; 433 - int looplimit = 0; 434 417 435 418 /* check from GPIO controller which pin triggered the interrupt */ 436 419 for (base = 0; base < vg->chip.ngpio; base += 32) { 437 - 438 420 reg = byt_gpio_reg(&vg->chip, base, BYT_INT_STAT_REG); 439 - 440 - while ((pending = readl(reg))) { 441 - pin = __ffs(pending); 442 - mask = BIT(pin); 443 - /* Clear before handling so we can't lose an edge */ 444 - writel(mask, reg); 445 - 421 + pending = readl(reg); 422 + for_each_set_bit(pin, &pending, 32) { 446 423 virq = irq_find_mapping(vg->chip.irqdomain, base + pin); 447 424 generic_handle_irq(virq); 448 - 449 - /* In case bios or user sets triggering incorretly a pin 450 - * might remain in "interrupt triggered" state. 
451 - */ 452 - if (looplimit++ > 32) { 453 - dev_err(&vg->pdev->dev, 454 - "Gpio %d interrupt flood, disabling\n", 455 - base + pin); 456 - 457 - reg = byt_gpio_reg(&vg->chip, base + pin, 458 - BYT_CONF0_REG); 459 - mask = readl(reg); 460 - mask &= ~(BYT_TRIG_NEG | BYT_TRIG_POS | 461 - BYT_TRIG_LVL); 462 - writel(mask, reg); 463 - mask = readl(reg); /* flush */ 464 - break; 465 - } 466 425 } 467 426 } 468 427 chip->irq_eoi(data); 469 428 } 470 429 430 + static void byt_irq_ack(struct irq_data *d) 431 + { 432 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 433 + struct byt_gpio *vg = to_byt_gpio(gc); 434 + unsigned offset = irqd_to_hwirq(d); 435 + void __iomem *reg; 436 + 437 + reg = byt_gpio_reg(&vg->chip, offset, BYT_INT_STAT_REG); 438 + writel(BIT(offset % 32), reg); 439 + } 440 + 471 441 static void byt_irq_unmask(struct irq_data *d) 472 442 { 443 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 444 + struct byt_gpio *vg = to_byt_gpio(gc); 445 + unsigned offset = irqd_to_hwirq(d); 446 + unsigned long flags; 447 + void __iomem *reg; 448 + u32 value; 449 + 450 + spin_lock_irqsave(&vg->lock, flags); 451 + 452 + reg = byt_gpio_reg(&vg->chip, offset, BYT_CONF0_REG); 453 + value = readl(reg); 454 + 455 + switch (irqd_get_trigger_type(d)) { 456 + case IRQ_TYPE_LEVEL_HIGH: 457 + value |= BYT_TRIG_LVL; 458 + case IRQ_TYPE_EDGE_RISING: 459 + value |= BYT_TRIG_POS; 460 + break; 461 + case IRQ_TYPE_LEVEL_LOW: 462 + value |= BYT_TRIG_LVL; 463 + case IRQ_TYPE_EDGE_FALLING: 464 + value |= BYT_TRIG_NEG; 465 + break; 466 + case IRQ_TYPE_EDGE_BOTH: 467 + value |= (BYT_TRIG_NEG | BYT_TRIG_POS); 468 + break; 469 + } 470 + 471 + writel(value, reg); 472 + 473 + spin_unlock_irqrestore(&vg->lock, flags); 473 474 } 474 475 475 476 static void byt_irq_mask(struct irq_data *d) 476 477 { 478 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 479 + struct byt_gpio *vg = to_byt_gpio(gc); 480 + 481 + byt_gpio_clear_triggering(vg, irqd_to_hwirq(d)); 477 482 } 478 483 479 
484 static struct irq_chip byt_irqchip = { 480 485 .name = "BYT-GPIO", 486 + .irq_ack = byt_irq_ack, 481 487 .irq_mask = byt_irq_mask, 482 488 .irq_unmask = byt_irq_unmask, 483 489 .irq_set_type = byt_irq_type, ··· 510 472 { 511 473 void __iomem *reg; 512 474 u32 base, value; 475 + int i; 476 + 477 + /* 478 + * Clear interrupt triggers for all pins that are GPIOs and 479 + * do not use direct IRQ mode. This will prevent spurious 480 + * interrupts from misconfigured pins. 481 + */ 482 + for (i = 0; i < vg->chip.ngpio; i++) { 483 + value = readl(byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG)); 484 + if ((value & BYT_PIN_MUX) == byt_get_gpio_mux(vg, i) && 485 + !(value & BYT_DIRECT_IRQ_EN)) { 486 + byt_gpio_clear_triggering(vg, i); 487 + dev_dbg(&vg->pdev->dev, "disabling GPIO %d\n", i); 488 + } 489 + } 513 490 514 491 /* clear interrupt status trigger registers */ 515 492 for (base = 0; base < vg->chip.ngpio; base += 32) { ··· 594 541 gc->can_sleep = false; 595 542 gc->dev = dev; 596 543 544 + #ifdef CONFIG_PM_SLEEP 545 + vg->saved_context = devm_kcalloc(&pdev->dev, gc->ngpio, 546 + sizeof(*vg->saved_context), GFP_KERNEL); 547 + #endif 548 + 597 549 ret = gpiochip_add(gc); 598 550 if (ret) { 599 551 dev_err(&pdev->dev, "failed adding byt-gpio chip\n"); ··· 627 569 return 0; 628 570 } 629 571 572 + #ifdef CONFIG_PM_SLEEP 573 + static int byt_gpio_suspend(struct device *dev) 574 + { 575 + struct platform_device *pdev = to_platform_device(dev); 576 + struct byt_gpio *vg = platform_get_drvdata(pdev); 577 + int i; 578 + 579 + for (i = 0; i < vg->chip.ngpio; i++) { 580 + void __iomem *reg; 581 + u32 value; 582 + 583 + reg = byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG); 584 + value = readl(reg) & BYT_CONF0_RESTORE_MASK; 585 + vg->saved_context[i].conf0 = value; 586 + 587 + reg = byt_gpio_reg(&vg->chip, i, BYT_VAL_REG); 588 + value = readl(reg) & BYT_VAL_RESTORE_MASK; 589 + vg->saved_context[i].val = value; 590 + } 591 + 592 + return 0; 593 + } 594 + 595 + static int 
byt_gpio_resume(struct device *dev) 596 + { 597 + struct platform_device *pdev = to_platform_device(dev); 598 + struct byt_gpio *vg = platform_get_drvdata(pdev); 599 + int i; 600 + 601 + for (i = 0; i < vg->chip.ngpio; i++) { 602 + void __iomem *reg; 603 + u32 value; 604 + 605 + reg = byt_gpio_reg(&vg->chip, i, BYT_CONF0_REG); 606 + value = readl(reg); 607 + if ((value & BYT_CONF0_RESTORE_MASK) != 608 + vg->saved_context[i].conf0) { 609 + value &= ~BYT_CONF0_RESTORE_MASK; 610 + value |= vg->saved_context[i].conf0; 611 + writel(value, reg); 612 + dev_info(dev, "restored pin %d conf0 %#08x", i, value); 613 + } 614 + 615 + reg = byt_gpio_reg(&vg->chip, i, BYT_VAL_REG); 616 + value = readl(reg); 617 + if ((value & BYT_VAL_RESTORE_MASK) != 618 + vg->saved_context[i].val) { 619 + u32 v; 620 + 621 + v = value & ~BYT_VAL_RESTORE_MASK; 622 + v |= vg->saved_context[i].val; 623 + if (v != value) { 624 + writel(v, reg); 625 + dev_dbg(dev, "restored pin %d val %#08x\n", 626 + i, v); 627 + } 628 + } 629 + } 630 + 631 + return 0; 632 + } 633 + #endif 634 + 630 635 static int byt_gpio_runtime_suspend(struct device *dev) 631 636 { 632 637 return 0; ··· 701 580 } 702 581 703 582 static const struct dev_pm_ops byt_gpio_pm_ops = { 704 - .runtime_suspend = byt_gpio_runtime_suspend, 705 - .runtime_resume = byt_gpio_runtime_resume, 583 + SET_LATE_SYSTEM_SLEEP_PM_OPS(byt_gpio_suspend, byt_gpio_resume) 584 + SET_RUNTIME_PM_OPS(byt_gpio_runtime_suspend, byt_gpio_runtime_resume, 585 + NULL) 706 586 }; 707 587 708 588 static const struct acpi_device_id byt_gpio_acpi_match[] = {
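The new `byt_gpio_suspend`/`byt_gpio_resume` pair in pinctrl-baytrail.c saves only the bits covered by `BYT_CONF0_RESTORE_MASK` and `BYT_VAL_RESTORE_MASK`, and on resume writes those bits back without disturbing the rest of each register. The masked read-modify-write it performs can be sketched as follows (hypothetical helper, not kernel code):

```python
def restore_masked(current, saved, mask):
    # Write back only the saved, masked bits; everything outside the
    # mask keeps whatever value the register currently holds. A write
    # happens only when the masked bits actually changed.
    if (current & mask) != saved:
        current = (current & ~mask) | saved
    return current
```

With `mask = 0b0011`, a current value of `0b1111` and saved bits `0b0001` yield `0b1101`: the upper bits are preserved, the masked bits are restored.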
+1
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1226 1226 static int chv_gpio_direction_output(struct gpio_chip *chip, unsigned offset, 1227 1227 int value) 1228 1228 { 1229 + chv_gpio_set(chip, offset, value); 1229 1230 return pinctrl_gpio_direction_output(chip->base + offset); 1230 1231 } 1231 1232
+7 -10
drivers/pinctrl/pinctrl-at91.c
··· 1477 1477 /* the interrupt is already cleared before by reading ISR */ 1478 1478 } 1479 1479 1480 - static unsigned int gpio_irq_startup(struct irq_data *d) 1480 + static int gpio_irq_request_res(struct irq_data *d) 1481 1481 { 1482 1482 struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d); 1483 1483 unsigned pin = d->hwirq; 1484 1484 int ret; 1485 1485 1486 1486 ret = gpiochip_lock_as_irq(&at91_gpio->chip, pin); 1487 - if (ret) { 1487 + if (ret) 1488 1488 dev_err(at91_gpio->chip.dev, "unable to lock pind %lu IRQ\n", 1489 1489 d->hwirq); 1490 - return ret; 1491 - } 1492 - gpio_irq_unmask(d); 1493 - return 0; 1490 + 1491 + return ret; 1494 1492 } 1495 1493 1496 - static void gpio_irq_shutdown(struct irq_data *d) 1494 + static void gpio_irq_release_res(struct irq_data *d) 1497 1495 { 1498 1496 struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d); 1499 1497 unsigned pin = d->hwirq; 1500 1498 1501 - gpio_irq_mask(d); 1502 1499 gpiochip_unlock_as_irq(&at91_gpio->chip, pin); 1503 1500 } 1504 1501 ··· 1574 1577 static struct irq_chip gpio_irqchip = { 1575 1578 .name = "GPIO", 1576 1579 .irq_ack = gpio_irq_ack, 1577 - .irq_startup = gpio_irq_startup, 1578 - .irq_shutdown = gpio_irq_shutdown, 1580 + .irq_request_resources = gpio_irq_request_res, 1581 + .irq_release_resources = gpio_irq_release_res, 1579 1582 .irq_disable = gpio_irq_mask, 1580 1583 .irq_mask = gpio_irq_mask, 1581 1584 .irq_unmask = gpio_irq_unmask,
+1
drivers/pinctrl/sunxi/pinctrl-sun4i-a10.c
··· 1011 1011 .pins = sun4i_a10_pins, 1012 1012 .npins = ARRAY_SIZE(sun4i_a10_pins), 1013 1013 .irq_banks = 1, 1014 + .irq_read_needs_mux = true, 1014 1015 }; 1015 1016 1016 1017 static int sun4i_a10_pinctrl_probe(struct platform_device *pdev)
+12 -2
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 29 29 #include <linux/slab.h> 30 30 31 31 #include "../core.h" 32 + #include "../../gpio/gpiolib.h" 32 33 #include "pinctrl-sunxi.h" 33 34 34 35 static struct irq_chip sunxi_pinctrl_edge_irq_chip; ··· 465 464 static int sunxi_pinctrl_gpio_get(struct gpio_chip *chip, unsigned offset) 466 465 { 467 466 struct sunxi_pinctrl *pctl = dev_get_drvdata(chip->dev); 468 - 469 467 u32 reg = sunxi_data_reg(offset); 470 468 u8 index = sunxi_data_offset(offset); 471 - u32 val = (readl(pctl->membase + reg) >> index) & DATA_PINS_MASK; 469 + u32 set_mux = pctl->desc->irq_read_needs_mux && 470 + test_bit(FLAG_USED_AS_IRQ, &chip->desc[offset].flags); 471 + u32 val; 472 + 473 + if (set_mux) 474 + sunxi_pmx_set(pctl->pctl_dev, offset, SUN4I_FUNC_INPUT); 475 + 476 + val = (readl(pctl->membase + reg) >> index) & DATA_PINS_MASK; 477 + 478 + if (set_mux) 479 + sunxi_pmx_set(pctl->pctl_dev, offset, SUN4I_FUNC_IRQ); 472 480 473 481 return val; 474 482 }
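The `sunxi_pinctrl_gpio_get` change temporarily re-muxes an IRQ-configured pin to the input function so its level can be read, then muxes it back. Assuming mux writes and data-register reads are independent accesses, the dance looks like this sketch; the helper names are mine, only the `SUN4I_FUNC_INPUT`/`SUN4I_FUNC_IRQ` values come from the patch:

```python
SUN4I_FUNC_INPUT = 0   # from pinctrl-sunxi.h in this patch
SUN4I_FUNC_IRQ = 6     # from pinctrl-sunxi.h in this patch

def gpio_get(read_level, set_mux, offset, irq_read_needs_mux, used_as_irq):
    # On SoCs where the data register does not reflect the pin level
    # while the pin is muxed as an IRQ, flip to the input function
    # around the read, then restore the IRQ function.
    remux = irq_read_needs_mux and used_as_irq
    if remux:
        set_mux(offset, SUN4I_FUNC_INPUT)
    val = read_level(offset)
    if remux:
        set_mux(offset, SUN4I_FUNC_IRQ)
    return val
```

When `irq_read_needs_mux` is false (every SoC except those flagged in the descriptor), the read goes straight through with no mux traffic.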
+4
drivers/pinctrl/sunxi/pinctrl-sunxi.h
··· 77 77 #define IRQ_LEVEL_LOW 0x03 78 78 #define IRQ_EDGE_BOTH 0x04 79 79 80 + #define SUN4I_FUNC_INPUT 0 81 + #define SUN4I_FUNC_IRQ 6 82 + 80 83 struct sunxi_desc_function { 81 84 const char *name; 82 85 u8 muxval; ··· 97 94 int npins; 98 95 unsigned pin_base; 99 96 unsigned irq_banks; 97 + bool irq_read_needs_mux; 100 98 }; 101 99 102 100 struct sunxi_pinctrl_function {
+17 -17
drivers/regulator/core.c
··· 1839 1839 } 1840 1840 1841 1841 if (rdev->ena_pin) { 1842 - ret = regulator_ena_gpio_ctrl(rdev, true); 1843 - if (ret < 0) 1844 - return ret; 1845 - rdev->ena_gpio_state = 1; 1842 + if (!rdev->ena_gpio_state) { 1843 + ret = regulator_ena_gpio_ctrl(rdev, true); 1844 + if (ret < 0) 1845 + return ret; 1846 + rdev->ena_gpio_state = 1; 1847 + } 1846 1848 } else if (rdev->desc->ops->enable) { 1847 1849 ret = rdev->desc->ops->enable(rdev); 1848 1850 if (ret < 0) ··· 1941 1939 trace_regulator_disable(rdev_get_name(rdev)); 1942 1940 1943 1941 if (rdev->ena_pin) { 1944 - ret = regulator_ena_gpio_ctrl(rdev, false); 1945 - if (ret < 0) 1946 - return ret; 1947 - rdev->ena_gpio_state = 0; 1942 + if (rdev->ena_gpio_state) { 1943 + ret = regulator_ena_gpio_ctrl(rdev, false); 1944 + if (ret < 0) 1945 + return ret; 1946 + rdev->ena_gpio_state = 0; 1947 + } 1948 1948 1949 1949 } else if (rdev->desc->ops->disable) { 1950 1950 ret = rdev->desc->ops->disable(rdev); ··· 3630 3626 config->ena_gpio, ret); 3631 3627 goto wash; 3632 3628 } 3633 - 3634 - if (config->ena_gpio_flags & GPIOF_OUT_INIT_HIGH) 3635 - rdev->ena_gpio_state = 1; 3636 - 3637 - if (config->ena_gpio_invert) 3638 - rdev->ena_gpio_state = !rdev->ena_gpio_state; 3639 3629 } 3640 3630 3641 3631 /* set regulator constraints */ ··· 3798 3800 list_for_each_entry(rdev, &regulator_list, list) { 3799 3801 mutex_lock(&rdev->mutex); 3800 3802 if (rdev->use_count > 0 || rdev->constraints->always_on) { 3801 - error = _regulator_do_enable(rdev); 3802 - if (error) 3803 - ret = error; 3803 + if (!_regulator_is_enabled(rdev)) { 3804 + error = _regulator_do_enable(rdev); 3805 + if (error) 3806 + ret = error; 3807 + } 3804 3808 } else { 3805 3809 if (!have_full_constraints()) 3806 3810 goto unlock;
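The regulator/core.c hunks guard the GPIO enable path with `ena_gpio_state`, so repeated enable/disable calls (and the resume path, via the new `_regulator_is_enabled` check) do not toggle the GPIO when it is already in the requested state. A toy model of that guard, with names of my choosing:

```python
class GpioEnableGuard:
    """Models the ena_gpio_state check added to _regulator_do_enable /
    _regulator_do_disable: hardware is touched only on real changes."""

    def __init__(self):
        self.ena_gpio_state = 0
        self.gpio_writes = 0  # stand-in for regulator_ena_gpio_ctrl() calls

    def enable(self):
        if not self.ena_gpio_state:   # skip redundant GPIO writes
            self.gpio_writes += 1
            self.ena_gpio_state = 1

    def disable(self):
        if self.ena_gpio_state:
            self.gpio_writes += 1
            self.ena_gpio_state = 0
```

Two consecutive `enable()` calls produce a single hardware write, which is the behavior the patch introduces.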
+1
drivers/regulator/tps65910-regulator.c
··· 17 17 #include <linux/module.h> 18 18 #include <linux/init.h> 19 19 #include <linux/err.h> 20 + #include <linux/of.h> 20 21 #include <linux/platform_device.h> 21 22 #include <linux/regulator/driver.h> 22 23 #include <linux/regulator/machine.h>
+16 -1
drivers/rpmsg/virtio_rpmsg_bus.c
··· 951 951 void *bufs_va; 952 952 int err = 0, i; 953 953 size_t total_buf_space; 954 + bool notify; 954 955 955 956 vrp = kzalloc(sizeof(*vrp), GFP_KERNEL); 956 957 if (!vrp) ··· 1031 1030 } 1032 1031 } 1033 1032 1033 + /* 1034 + * Prepare to kick but don't notify yet - we can't do this before 1035 + * device is ready. 1036 + */ 1037 + notify = virtqueue_kick_prepare(vrp->rvq); 1038 + 1039 + /* From this point on, we can notify and get callbacks. */ 1040 + virtio_device_ready(vdev); 1041 + 1034 1042 /* tell the remote processor it can start sending messages */ 1035 - virtqueue_kick(vrp->rvq); 1043 + /* 1044 + * this might be concurrent with callbacks, but we are only 1045 + * doing notify, not a full kick here, so that's ok. 1046 + */ 1047 + if (notify) 1048 + virtqueue_notify(vrp->rvq); 1036 1049 1037 1050 dev_info(&vdev->dev, "rpmsg host is online\n"); 1038 1051
+1
drivers/rtc/rtc-s3c.c
··· 849 849 850 850 static struct s3c_rtc_data const s3c6410_rtc_data = { 851 851 .max_user_freq = 32768, 852 + .needs_src_clk = true, 852 853 .irq_handler = s3c6410_rtc_irq, 853 854 .set_freq = s3c6410_rtc_setfreq, 854 855 .enable_tick = s3c6410_rtc_enable_tick,
+4 -2
drivers/scsi/libsas/sas_discover.c
··· 500 500 struct sas_discovery_event *ev = to_sas_discovery_event(work); 501 501 struct asd_sas_port *port = ev->port; 502 502 struct sas_ha_struct *ha = port->ha; 503 + struct domain_device *ddev = port->port_dev; 503 504 504 505 /* prevent revalidation from finding sata links in recovery */ 505 506 mutex_lock(&ha->disco_mutex); ··· 515 514 SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, 516 515 task_pid_nr(current)); 517 516 518 - if (port->port_dev) 519 - res = sas_ex_revalidate_domain(port->port_dev); 517 + if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || 518 + ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE)) 519 + res = sas_ex_revalidate_domain(ddev); 520 520 521 521 SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", 522 522 port->id, task_pid_nr(current), res);
+90 -136
drivers/usb/gadget/function/f_fs.c
··· 144 144 bool read; 145 145 146 146 struct kiocb *kiocb; 147 - const struct iovec *iovec; 148 - unsigned long nr_segs; 149 - char __user *buf; 150 - size_t len; 147 + struct iov_iter data; 148 + const void *to_free; 149 + char *buf; 151 150 152 151 struct mm_struct *mm; 153 152 struct work_struct work; ··· 648 649 io_data->req->actual; 649 650 650 651 if (io_data->read && ret > 0) { 651 - int i; 652 - size_t pos = 0; 653 - 654 - /* 655 - * Since req->length may be bigger than io_data->len (after 656 - * being rounded up to maxpacketsize), we may end up with more 657 - * data then user space has space for. 658 - */ 659 - ret = min_t(int, ret, io_data->len); 660 - 661 652 use_mm(io_data->mm); 662 - for (i = 0; i < io_data->nr_segs; i++) { 663 - size_t len = min_t(size_t, ret - pos, 664 - io_data->iovec[i].iov_len); 665 - if (!len) 666 - break; 667 - if (unlikely(copy_to_user(io_data->iovec[i].iov_base, 668 - &io_data->buf[pos], len))) { 669 - ret = -EFAULT; 670 - break; 671 - } 672 - pos += len; 673 - } 653 + ret = copy_to_iter(io_data->buf, ret, &io_data->data); 654 + if (iov_iter_count(&io_data->data)) 655 + ret = -EFAULT; 674 656 unuse_mm(io_data->mm); 675 657 } 676 658 ··· 664 684 665 685 io_data->kiocb->private = NULL; 666 686 if (io_data->read) 667 - kfree(io_data->iovec); 687 + kfree(io_data->to_free); 668 688 kfree(io_data->buf); 669 689 kfree(io_data); 670 690 } ··· 723 743 * before the waiting completes, so do not assign to 'gadget' earlier 724 744 */ 725 745 struct usb_gadget *gadget = epfile->ffs->gadget; 746 + size_t copied; 726 747 727 748 spin_lock_irq(&epfile->ffs->eps_lock); 728 749 /* In the meantime, endpoint got disabled or changed. */ ··· 731 750 spin_unlock_irq(&epfile->ffs->eps_lock); 732 751 return -ESHUTDOWN; 733 752 } 753 + data_len = iov_iter_count(&io_data->data); 734 754 /* 735 755 * Controller may require buffer size to be aligned to 736 756 * maxpacketsize of an out endpoint. 737 757 */ 738 - data_len = io_data->read ? 
739 - usb_ep_align_maybe(gadget, ep->ep, io_data->len) : 740 - io_data->len; 758 + if (io_data->read) 759 + data_len = usb_ep_align_maybe(gadget, ep->ep, data_len); 741 760 spin_unlock_irq(&epfile->ffs->eps_lock); 742 761 743 762 data = kmalloc(data_len, GFP_KERNEL); 744 763 if (unlikely(!data)) 745 764 return -ENOMEM; 746 - if (io_data->aio && !io_data->read) { 747 - int i; 748 - size_t pos = 0; 749 - for (i = 0; i < io_data->nr_segs; i++) { 750 - if (unlikely(copy_from_user(&data[pos], 751 - io_data->iovec[i].iov_base, 752 - io_data->iovec[i].iov_len))) { 753 - ret = -EFAULT; 754 - goto error; 755 - } 756 - pos += io_data->iovec[i].iov_len; 757 - } 758 - } else { 759 - if (!io_data->read && 760 - unlikely(__copy_from_user(data, io_data->buf, 761 - io_data->len))) { 765 + if (!io_data->read) { 766 + copied = copy_from_iter(data, data_len, &io_data->data); 767 + if (copied != data_len) { 762 768 ret = -EFAULT; 763 769 goto error; 764 770 } ··· 844 876 */ 845 877 ret = ep->status; 846 878 if (io_data->read && ret > 0) { 847 - ret = min_t(size_t, ret, io_data->len); 848 - 849 - if (unlikely(copy_to_user(io_data->buf, 850 - data, ret))) 879 + ret = copy_to_iter(data, ret, &io_data->data); 880 + if (unlikely(iov_iter_count(&io_data->data))) 851 881 ret = -EFAULT; 852 882 } 853 883 } ··· 862 896 error: 863 897 kfree(data); 864 898 return ret; 865 - } 866 - 867 - static ssize_t 868 - ffs_epfile_write(struct file *file, const char __user *buf, size_t len, 869 - loff_t *ptr) 870 - { 871 - struct ffs_io_data io_data; 872 - 873 - ENTER(); 874 - 875 - io_data.aio = false; 876 - io_data.read = false; 877 - io_data.buf = (char * __user)buf; 878 - io_data.len = len; 879 - 880 - return ffs_epfile_io(file, &io_data); 881 - } 882 - 883 - static ssize_t 884 - ffs_epfile_read(struct file *file, char __user *buf, size_t len, loff_t *ptr) 885 - { 886 - struct ffs_io_data io_data; 887 - 888 - ENTER(); 889 - 890 - io_data.aio = false; 891 - io_data.read = true; 892 - io_data.buf = buf; 
893 - io_data.len = len; 894 - 895 - return ffs_epfile_io(file, &io_data); 896 899 } 897 900 898 901 static int ··· 900 965 return value; 901 966 } 902 967 903 - static ssize_t ffs_epfile_aio_write(struct kiocb *kiocb, 904 - const struct iovec *iovec, 905 - unsigned long nr_segs, loff_t loff) 968 + static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from) 906 969 { 907 - struct ffs_io_data *io_data; 970 + struct ffs_io_data io_data, *p = &io_data; 971 + ssize_t res; 908 972 909 973 ENTER(); 910 974 911 - io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 912 - if (unlikely(!io_data)) 913 - return -ENOMEM; 914 - 915 - io_data->aio = true; 916 - io_data->read = false; 917 - io_data->kiocb = kiocb; 918 - io_data->iovec = iovec; 919 - io_data->nr_segs = nr_segs; 920 - io_data->len = kiocb->ki_nbytes; 921 - io_data->mm = current->mm; 922 - 923 - kiocb->private = io_data; 924 - 925 - kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 926 - 927 - return ffs_epfile_io(kiocb->ki_filp, io_data); 928 - } 929 - 930 - static ssize_t ffs_epfile_aio_read(struct kiocb *kiocb, 931 - const struct iovec *iovec, 932 - unsigned long nr_segs, loff_t loff) 933 - { 934 - struct ffs_io_data *io_data; 935 - struct iovec *iovec_copy; 936 - 937 - ENTER(); 938 - 939 - iovec_copy = kmalloc_array(nr_segs, sizeof(*iovec_copy), GFP_KERNEL); 940 - if (unlikely(!iovec_copy)) 941 - return -ENOMEM; 942 - 943 - memcpy(iovec_copy, iovec, sizeof(struct iovec)*nr_segs); 944 - 945 - io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 946 - if (unlikely(!io_data)) { 947 - kfree(iovec_copy); 948 - return -ENOMEM; 975 + if (!is_sync_kiocb(kiocb)) { 976 + p = kmalloc(sizeof(io_data), GFP_KERNEL); 977 + if (unlikely(!p)) 978 + return -ENOMEM; 979 + p->aio = true; 980 + } else { 981 + p->aio = false; 949 982 } 950 983 951 - io_data->aio = true; 952 - io_data->read = true; 953 - io_data->kiocb = kiocb; 954 - io_data->iovec = iovec_copy; 955 - io_data->nr_segs = nr_segs; 956 - io_data->len = 
kiocb->ki_nbytes; 957 - io_data->mm = current->mm; 984 + p->read = false; 985 + p->kiocb = kiocb; 986 + p->data = *from; 987 + p->mm = current->mm; 958 988 959 - kiocb->private = io_data; 989 + kiocb->private = p; 960 990 961 991 kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 962 992 963 - return ffs_epfile_io(kiocb->ki_filp, io_data); 993 + res = ffs_epfile_io(kiocb->ki_filp, p); 994 + if (res == -EIOCBQUEUED) 995 + return res; 996 + if (p->aio) 997 + kfree(p); 998 + else 999 + *from = p->data; 1000 + return res; 1001 + } 1002 + 1003 + static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to) 1004 + { 1005 + struct ffs_io_data io_data, *p = &io_data; 1006 + ssize_t res; 1007 + 1008 + ENTER(); 1009 + 1010 + if (!is_sync_kiocb(kiocb)) { 1011 + p = kmalloc(sizeof(io_data), GFP_KERNEL); 1012 + if (unlikely(!p)) 1013 + return -ENOMEM; 1014 + p->aio = true; 1015 + } else { 1016 + p->aio = false; 1017 + } 1018 + 1019 + p->read = true; 1020 + p->kiocb = kiocb; 1021 + if (p->aio) { 1022 + p->to_free = dup_iter(&p->data, to, GFP_KERNEL); 1023 + if (!p->to_free) { 1024 + kfree(p); 1025 + return -ENOMEM; 1026 + } 1027 + } else { 1028 + p->data = *to; 1029 + p->to_free = NULL; 1030 + } 1031 + p->mm = current->mm; 1032 + 1033 + kiocb->private = p; 1034 + 1035 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 1036 + 1037 + res = ffs_epfile_io(kiocb->ki_filp, p); 1038 + if (res == -EIOCBQUEUED) 1039 + return res; 1040 + 1041 + if (p->aio) { 1042 + kfree(p->to_free); 1043 + kfree(p); 1044 + } else { 1045 + *to = p->data; 1046 + } 1047 + return res; 964 1048 } 965 1049 966 1050 static int ··· 1059 1105 .llseek = no_llseek, 1060 1106 1061 1107 .open = ffs_epfile_open, 1062 - .write = ffs_epfile_write, 1063 - .read = ffs_epfile_read, 1064 - .aio_write = ffs_epfile_aio_write, 1065 - .aio_read = ffs_epfile_aio_read, 1108 + .write = new_sync_write, 1109 + .read = new_sync_read, 1110 + .write_iter = ffs_epfile_write_iter, 1111 + .read_iter = ffs_epfile_read_iter, 1066 1112 
.release = ffs_epfile_release, 1067 1113 .unlocked_ioctl = ffs_epfile_ioctl, 1068 1114 };
+194 -274
drivers/usb/gadget/legacy/inode.c
··· 74 74 MODULE_AUTHOR ("David Brownell"); 75 75 MODULE_LICENSE ("GPL"); 76 76 77 + static int ep_open(struct inode *, struct file *); 78 + 77 79 78 80 /*----------------------------------------------------------------------*/ 79 81 ··· 285 283 * still need dev->lock to use epdata->ep. 286 284 */ 287 285 static int 288 - get_ready_ep (unsigned f_flags, struct ep_data *epdata) 286 + get_ready_ep (unsigned f_flags, struct ep_data *epdata, bool is_write) 289 287 { 290 288 int val; 291 289 292 290 if (f_flags & O_NONBLOCK) { 293 291 if (!mutex_trylock(&epdata->lock)) 294 292 goto nonblock; 295 - if (epdata->state != STATE_EP_ENABLED) { 293 + if (epdata->state != STATE_EP_ENABLED && 294 + (!is_write || epdata->state != STATE_EP_READY)) { 296 295 mutex_unlock(&epdata->lock); 297 296 nonblock: 298 297 val = -EAGAIN; ··· 308 305 309 306 switch (epdata->state) { 310 307 case STATE_EP_ENABLED: 308 + return 0; 309 + case STATE_EP_READY: /* not configured yet */ 310 + if (is_write) 311 + return 0; 312 + // FALLTHRU 313 + case STATE_EP_UNBOUND: /* clean disconnect */ 311 314 break; 312 315 // case STATE_EP_DISABLED: /* "can't happen" */ 313 - // case STATE_EP_READY: /* "can't happen" */ 314 316 default: /* error! 
*/ 315 317 pr_debug ("%s: ep %p not available, state %d\n", 316 318 shortname, epdata, epdata->state); 317 - // FALLTHROUGH 318 - case STATE_EP_UNBOUND: /* clean disconnect */ 319 - val = -ENODEV; 320 - mutex_unlock(&epdata->lock); 321 319 } 322 - return val; 320 + mutex_unlock(&epdata->lock); 321 + return -ENODEV; 323 322 } 324 323 325 324 static ssize_t ··· 368 363 return value; 369 364 } 370 365 371 - 372 - /* handle a synchronous OUT bulk/intr/iso transfer */ 373 - static ssize_t 374 - ep_read (struct file *fd, char __user *buf, size_t len, loff_t *ptr) 375 - { 376 - struct ep_data *data = fd->private_data; 377 - void *kbuf; 378 - ssize_t value; 379 - 380 - if ((value = get_ready_ep (fd->f_flags, data)) < 0) 381 - return value; 382 - 383 - /* halt any endpoint by doing a "wrong direction" i/o call */ 384 - if (usb_endpoint_dir_in(&data->desc)) { 385 - if (usb_endpoint_xfer_isoc(&data->desc)) { 386 - mutex_unlock(&data->lock); 387 - return -EINVAL; 388 - } 389 - DBG (data->dev, "%s halt\n", data->name); 390 - spin_lock_irq (&data->dev->lock); 391 - if (likely (data->ep != NULL)) 392 - usb_ep_set_halt (data->ep); 393 - spin_unlock_irq (&data->dev->lock); 394 - mutex_unlock(&data->lock); 395 - return -EBADMSG; 396 - } 397 - 398 - /* FIXME readahead for O_NONBLOCK and poll(); careful with ZLPs */ 399 - 400 - value = -ENOMEM; 401 - kbuf = kmalloc (len, GFP_KERNEL); 402 - if (unlikely (!kbuf)) 403 - goto free1; 404 - 405 - value = ep_io (data, kbuf, len); 406 - VDEBUG (data->dev, "%s read %zu OUT, status %d\n", 407 - data->name, len, (int) value); 408 - if (value >= 0 && copy_to_user (buf, kbuf, value)) 409 - value = -EFAULT; 410 - 411 - free1: 412 - mutex_unlock(&data->lock); 413 - kfree (kbuf); 414 - return value; 415 - } 416 - 417 - /* handle a synchronous IN bulk/intr/iso transfer */ 418 - static ssize_t 419 - ep_write (struct file *fd, const char __user *buf, size_t len, loff_t *ptr) 420 - { 421 - struct ep_data *data = fd->private_data; 422 - void *kbuf; 423 - 
ssize_t value; 424 - 425 - if ((value = get_ready_ep (fd->f_flags, data)) < 0) 426 - return value; 427 - 428 - /* halt any endpoint by doing a "wrong direction" i/o call */ 429 - if (!usb_endpoint_dir_in(&data->desc)) { 430 - if (usb_endpoint_xfer_isoc(&data->desc)) { 431 - mutex_unlock(&data->lock); 432 - return -EINVAL; 433 - } 434 - DBG (data->dev, "%s halt\n", data->name); 435 - spin_lock_irq (&data->dev->lock); 436 - if (likely (data->ep != NULL)) 437 - usb_ep_set_halt (data->ep); 438 - spin_unlock_irq (&data->dev->lock); 439 - mutex_unlock(&data->lock); 440 - return -EBADMSG; 441 - } 442 - 443 - /* FIXME writebehind for O_NONBLOCK and poll(), qlen = 1 */ 444 - 445 - value = -ENOMEM; 446 - kbuf = memdup_user(buf, len); 447 - if (IS_ERR(kbuf)) { 448 - value = PTR_ERR(kbuf); 449 - kbuf = NULL; 450 - goto free1; 451 - } 452 - 453 - value = ep_io (data, kbuf, len); 454 - VDEBUG (data->dev, "%s write %zu IN, status %d\n", 455 - data->name, len, (int) value); 456 - free1: 457 - mutex_unlock(&data->lock); 458 - kfree (kbuf); 459 - return value; 460 - } 461 - 462 366 static int 463 367 ep_release (struct inode *inode, struct file *fd) 464 368 { ··· 395 481 struct ep_data *data = fd->private_data; 396 482 int status; 397 483 398 - if ((status = get_ready_ep (fd->f_flags, data)) < 0) 484 + if ((status = get_ready_ep (fd->f_flags, data, false)) < 0) 399 485 return status; 400 486 401 487 spin_lock_irq (&data->dev->lock); ··· 431 517 struct mm_struct *mm; 432 518 struct work_struct work; 433 519 void *buf; 434 - const struct iovec *iv; 435 - unsigned long nr_segs; 520 + struct iov_iter to; 521 + const void *to_free; 436 522 unsigned actual; 437 523 }; 438 524 ··· 455 541 return value; 456 542 } 457 543 458 - static ssize_t ep_copy_to_user(struct kiocb_priv *priv) 459 - { 460 - ssize_t len, total; 461 - void *to_copy; 462 - int i; 463 - 464 - /* copy stuff into user buffers */ 465 - total = priv->actual; 466 - len = 0; 467 - to_copy = priv->buf; 468 - for (i=0; i < 
priv->nr_segs; i++) { 469 - ssize_t this = min((ssize_t)(priv->iv[i].iov_len), total); 470 - 471 - if (copy_to_user(priv->iv[i].iov_base, to_copy, this)) { 472 - if (len == 0) 473 - len = -EFAULT; 474 - break; 475 - } 476 - 477 - total -= this; 478 - len += this; 479 - to_copy += this; 480 - if (total == 0) 481 - break; 482 - } 483 - 484 - return len; 485 - } 486 - 487 544 static void ep_user_copy_worker(struct work_struct *work) 488 545 { 489 546 struct kiocb_priv *priv = container_of(work, struct kiocb_priv, work); ··· 463 578 size_t ret; 464 579 465 580 use_mm(mm); 466 - ret = ep_copy_to_user(priv); 581 + ret = copy_to_iter(priv->buf, priv->actual, &priv->to); 467 582 unuse_mm(mm); 583 + if (!ret) 584 + ret = -EFAULT; 468 585 469 586 /* completing the iocb can drop the ctx and mm, don't touch mm after */ 470 587 aio_complete(iocb, ret, ret); 471 588 472 589 kfree(priv->buf); 590 + kfree(priv->to_free); 473 591 kfree(priv); 474 592 } 475 593 ··· 491 603 * don't need to copy anything to userspace, so we can 492 604 * complete the aio request immediately. 
493 605 */ 494 - if (priv->iv == NULL || unlikely(req->actual == 0)) { 606 + if (priv->to_free == NULL || unlikely(req->actual == 0)) { 495 607 kfree(req->buf); 608 + kfree(priv->to_free); 496 609 kfree(priv); 497 610 iocb->private = NULL; 498 611 /* aio_complete() reports bytes-transferred _and_ faults */ ··· 507 618 508 619 priv->buf = req->buf; 509 620 priv->actual = req->actual; 621 + INIT_WORK(&priv->work, ep_user_copy_worker); 510 622 schedule_work(&priv->work); 511 623 } 512 624 spin_unlock(&epdata->dev->lock); ··· 516 626 put_ep(epdata); 517 627 } 518 628 519 - static ssize_t 520 - ep_aio_rwtail( 521 - struct kiocb *iocb, 522 - char *buf, 523 - size_t len, 524 - struct ep_data *epdata, 525 - const struct iovec *iv, 526 - unsigned long nr_segs 527 - ) 629 + static ssize_t ep_aio(struct kiocb *iocb, 630 + struct kiocb_priv *priv, 631 + struct ep_data *epdata, 632 + char *buf, 633 + size_t len) 528 634 { 529 - struct kiocb_priv *priv; 530 - struct usb_request *req; 531 - ssize_t value; 635 + struct usb_request *req; 636 + ssize_t value; 532 637 533 - priv = kmalloc(sizeof *priv, GFP_KERNEL); 534 - if (!priv) { 535 - value = -ENOMEM; 536 - fail: 537 - kfree(buf); 538 - return value; 539 - } 540 638 iocb->private = priv; 541 639 priv->iocb = iocb; 542 - priv->iv = iv; 543 - priv->nr_segs = nr_segs; 544 - INIT_WORK(&priv->work, ep_user_copy_worker); 545 - 546 - value = get_ready_ep(iocb->ki_filp->f_flags, epdata); 547 - if (unlikely(value < 0)) { 548 - kfree(priv); 549 - goto fail; 550 - } 551 640 552 641 kiocb_set_cancel_fn(iocb, ep_aio_cancel); 553 642 get_ep(epdata); ··· 538 669 * allocate or submit those if the host disconnected. 
539 670 */ 540 671 spin_lock_irq(&epdata->dev->lock); 541 - if (likely(epdata->ep)) { 542 - req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC); 543 - if (likely(req)) { 544 - priv->req = req; 545 - req->buf = buf; 546 - req->length = len; 547 - req->complete = ep_aio_complete; 548 - req->context = iocb; 549 - value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC); 550 - if (unlikely(0 != value)) 551 - usb_ep_free_request(epdata->ep, req); 552 - } else 553 - value = -EAGAIN; 554 - } else 555 - value = -ENODEV; 672 + value = -ENODEV; 673 + if (unlikely(epdata->ep)) 674 + goto fail; 675 + 676 + req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC); 677 + value = -ENOMEM; 678 + if (unlikely(!req)) 679 + goto fail; 680 + 681 + priv->req = req; 682 + req->buf = buf; 683 + req->length = len; 684 + req->complete = ep_aio_complete; 685 + req->context = iocb; 686 + value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC); 687 + if (unlikely(0 != value)) { 688 + usb_ep_free_request(epdata->ep, req); 689 + goto fail; 690 + } 556 691 spin_unlock_irq(&epdata->dev->lock); 692 + return -EIOCBQUEUED; 557 693 558 - mutex_unlock(&epdata->lock); 559 - 560 - if (unlikely(value)) { 561 - kfree(priv); 562 - put_ep(epdata); 563 - } else 564 - value = -EIOCBQUEUED; 694 + fail: 695 + spin_unlock_irq(&epdata->dev->lock); 696 + kfree(priv->to_free); 697 + kfree(priv); 698 + put_ep(epdata); 565 699 return value; 566 700 } 567 701 568 702 static ssize_t 569 - ep_aio_read(struct kiocb *iocb, const struct iovec *iov, 570 - unsigned long nr_segs, loff_t o) 703 + ep_read_iter(struct kiocb *iocb, struct iov_iter *to) 571 704 { 572 - struct ep_data *epdata = iocb->ki_filp->private_data; 573 - char *buf; 705 + struct file *file = iocb->ki_filp; 706 + struct ep_data *epdata = file->private_data; 707 + size_t len = iov_iter_count(to); 708 + ssize_t value; 709 + char *buf; 574 710 575 - if (unlikely(usb_endpoint_dir_in(&epdata->desc))) 576 - return -EINVAL; 711 + if ((value = get_ready_ep(file->f_flags, epdata, false)) 
< 0) 712 + return value; 577 713 578 - buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL); 579 - if (unlikely(!buf)) 714 + /* halt any endpoint by doing a "wrong direction" i/o call */ 715 + if (usb_endpoint_dir_in(&epdata->desc)) { 716 + if (usb_endpoint_xfer_isoc(&epdata->desc) || 717 + !is_sync_kiocb(iocb)) { 718 + mutex_unlock(&epdata->lock); 719 + return -EINVAL; 720 + } 721 + DBG (epdata->dev, "%s halt\n", epdata->name); 722 + spin_lock_irq(&epdata->dev->lock); 723 + if (likely(epdata->ep != NULL)) 724 + usb_ep_set_halt(epdata->ep); 725 + spin_unlock_irq(&epdata->dev->lock); 726 + mutex_unlock(&epdata->lock); 727 + return -EBADMSG; 728 + } 729 + 730 + buf = kmalloc(len, GFP_KERNEL); 731 + if (unlikely(!buf)) { 732 + mutex_unlock(&epdata->lock); 580 733 return -ENOMEM; 581 - 582 - return ep_aio_rwtail(iocb, buf, iocb->ki_nbytes, epdata, iov, nr_segs); 734 + } 735 + if (is_sync_kiocb(iocb)) { 736 + value = ep_io(epdata, buf, len); 737 + if (value >= 0 && copy_to_iter(buf, value, to)) 738 + value = -EFAULT; 739 + } else { 740 + struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL); 741 + value = -ENOMEM; 742 + if (!priv) 743 + goto fail; 744 + priv->to_free = dup_iter(&priv->to, to, GFP_KERNEL); 745 + if (!priv->to_free) { 746 + kfree(priv); 747 + goto fail; 748 + } 749 + value = ep_aio(iocb, priv, epdata, buf, len); 750 + if (value == -EIOCBQUEUED) 751 + buf = NULL; 752 + } 753 + fail: 754 + kfree(buf); 755 + mutex_unlock(&epdata->lock); 756 + return value; 583 757 } 584 758 759 + static ssize_t ep_config(struct ep_data *, const char *, size_t); 760 + 585 761 static ssize_t 586 - ep_aio_write(struct kiocb *iocb, const struct iovec *iov, 587 - unsigned long nr_segs, loff_t o) 762 + ep_write_iter(struct kiocb *iocb, struct iov_iter *from) 588 763 { 589 - struct ep_data *epdata = iocb->ki_filp->private_data; 590 - char *buf; 591 - size_t len = 0; 592 - int i = 0; 764 + struct file *file = iocb->ki_filp; 765 + struct ep_data *epdata = file->private_data; 766 + size_t 
len = iov_iter_count(from); 767 + bool configured; 768 + ssize_t value; 769 + char *buf; 593 770 594 - if (unlikely(!usb_endpoint_dir_in(&epdata->desc))) 595 - return -EINVAL; 771 + if ((value = get_ready_ep(file->f_flags, epdata, true)) < 0) 772 + return value; 596 773 597 - buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL); 598 - if (unlikely(!buf)) 599 - return -ENOMEM; 774 + configured = epdata->state == STATE_EP_ENABLED; 600 775 601 - for (i=0; i < nr_segs; i++) { 602 - if (unlikely(copy_from_user(&buf[len], iov[i].iov_base, 603 - iov[i].iov_len) != 0)) { 604 - kfree(buf); 605 - return -EFAULT; 776 + /* halt any endpoint by doing a "wrong direction" i/o call */ 777 + if (configured && !usb_endpoint_dir_in(&epdata->desc)) { 778 + if (usb_endpoint_xfer_isoc(&epdata->desc) || 779 + !is_sync_kiocb(iocb)) { 780 + mutex_unlock(&epdata->lock); 781 + return -EINVAL; 606 782 } 607 - len += iov[i].iov_len; 783 + DBG (epdata->dev, "%s halt\n", epdata->name); 784 + spin_lock_irq(&epdata->dev->lock); 785 + if (likely(epdata->ep != NULL)) 786 + usb_ep_set_halt(epdata->ep); 787 + spin_unlock_irq(&epdata->dev->lock); 788 + mutex_unlock(&epdata->lock); 789 + return -EBADMSG; 608 790 } 609 - return ep_aio_rwtail(iocb, buf, len, epdata, NULL, 0); 791 + 792 + buf = kmalloc(len, GFP_KERNEL); 793 + if (unlikely(!buf)) { 794 + mutex_unlock(&epdata->lock); 795 + return -ENOMEM; 796 + } 797 + 798 + if (unlikely(copy_from_iter(buf, len, from) != len)) { 799 + value = -EFAULT; 800 + goto out; 801 + } 802 + 803 + if (unlikely(!configured)) { 804 + value = ep_config(epdata, buf, len); 805 + } else if (is_sync_kiocb(iocb)) { 806 + value = ep_io(epdata, buf, len); 807 + } else { 808 + struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL); 809 + value = -ENOMEM; 810 + if (priv) { 811 + value = ep_aio(iocb, priv, epdata, buf, len); 812 + if (value == -EIOCBQUEUED) 813 + buf = NULL; 814 + } 815 + } 816 + out: 817 + kfree(buf); 818 + mutex_unlock(&epdata->lock); 819 + return value; 610 820 } 611 
821 612 822 /*----------------------------------------------------------------------*/ ··· 693 745 /* used after endpoint configuration */ 694 746 static const struct file_operations ep_io_operations = { 695 747 .owner = THIS_MODULE, 696 - .llseek = no_llseek, 697 748 698 - .read = ep_read, 699 - .write = ep_write, 700 - .unlocked_ioctl = ep_ioctl, 749 + .open = ep_open, 701 750 .release = ep_release, 702 - 703 - .aio_read = ep_aio_read, 704 - .aio_write = ep_aio_write, 751 + .llseek = no_llseek, 752 + .read = new_sync_read, 753 + .write = new_sync_write, 754 + .unlocked_ioctl = ep_ioctl, 755 + .read_iter = ep_read_iter, 756 + .write_iter = ep_write_iter, 705 757 }; 706 758 707 759 /* ENDPOINT INITIALIZATION ··· 718 770 * speed descriptor, then optional high speed descriptor. 719 771 */ 720 772 static ssize_t 721 - ep_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr) 773 + ep_config (struct ep_data *data, const char *buf, size_t len) 722 774 { 723 - struct ep_data *data = fd->private_data; 724 775 struct usb_ep *ep; 725 776 u32 tag; 726 777 int value, length = len; 727 - 728 - value = mutex_lock_interruptible(&data->lock); 729 - if (value < 0) 730 - return value; 731 778 732 779 if (data->state != STATE_EP_READY) { 733 780 value = -EL2HLT; ··· 734 791 goto fail0; 735 792 736 793 /* we might need to change message format someday */ 737 - if (copy_from_user (&tag, buf, 4)) { 738 - goto fail1; 739 - } 794 + memcpy(&tag, buf, 4); 740 795 if (tag != 1) { 741 796 DBG(data->dev, "config %s, bad tag %d\n", data->name, tag); 742 797 goto fail0; ··· 747 806 */ 748 807 749 808 /* full/low speed descriptor, then high speed */ 750 - if (copy_from_user (&data->desc, buf, USB_DT_ENDPOINT_SIZE)) { 751 - goto fail1; 752 - } 809 + memcpy(&data->desc, buf, USB_DT_ENDPOINT_SIZE); 753 810 if (data->desc.bLength != USB_DT_ENDPOINT_SIZE 754 811 || data->desc.bDescriptorType != USB_DT_ENDPOINT) 755 812 goto fail0; 756 813 if (len != USB_DT_ENDPOINT_SIZE) { 757 814 
if (len != 2 * USB_DT_ENDPOINT_SIZE) 758 815 goto fail0; 759 - if (copy_from_user (&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE, 760 - USB_DT_ENDPOINT_SIZE)) { 761 - goto fail1; 762 - } 816 + memcpy(&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE, 817 + USB_DT_ENDPOINT_SIZE); 763 818 if (data->hs_desc.bLength != USB_DT_ENDPOINT_SIZE 764 819 || data->hs_desc.bDescriptorType 765 820 != USB_DT_ENDPOINT) { ··· 777 840 case USB_SPEED_LOW: 778 841 case USB_SPEED_FULL: 779 842 ep->desc = &data->desc; 780 - value = usb_ep_enable(ep); 781 - if (value == 0) 782 - data->state = STATE_EP_ENABLED; 783 843 break; 784 844 case USB_SPEED_HIGH: 785 845 /* fails if caller didn't provide that descriptor... */ 786 846 ep->desc = &data->hs_desc; 787 - value = usb_ep_enable(ep); 788 - if (value == 0) 789 - data->state = STATE_EP_ENABLED; 790 847 break; 791 848 default: 792 849 DBG(data->dev, "unconnected, %s init abandoned\n", 793 850 data->name); 794 851 value = -EINVAL; 852 + goto gone; 795 853 } 854 + value = usb_ep_enable(ep); 796 855 if (value == 0) { 797 - fd->f_op = &ep_io_operations; 856 + data->state = STATE_EP_ENABLED; 798 857 value = length; 799 858 } 800 859 gone: ··· 800 867 data->desc.bDescriptorType = 0; 801 868 data->hs_desc.bDescriptorType = 0; 802 869 } 803 - mutex_unlock(&data->lock); 804 870 return value; 805 871 fail0: 806 872 value = -EINVAL; 807 - goto fail; 808 - fail1: 809 - value = -EFAULT; 810 873 goto fail; 811 874 } 812 875 ··· 830 901 mutex_unlock(&data->lock); 831 902 return value; 832 903 } 833 - 834 - /* used before endpoint configuration */ 835 - static const struct file_operations ep_config_operations = { 836 - .llseek = no_llseek, 837 - 838 - .open = ep_open, 839 - .write = ep_config, 840 - .release = ep_release, 841 - }; 842 904 843 905 /*----------------------------------------------------------------------*/ 844 906 ··· 909 989 enum ep0_state state; 910 990 911 991 spin_lock_irq (&dev->lock); 992 + if (dev->state <= STATE_DEV_OPENED) { 993 + retval = 
-EINVAL; 994 + goto done; 995 + } 912 996 913 997 /* report fd mode change before acting on it */ 914 998 if (dev->setup_abort) { ··· 1111 1187 struct dev_data *dev = fd->private_data; 1112 1188 ssize_t retval = -ESRCH; 1113 1189 1114 - spin_lock_irq (&dev->lock); 1115 - 1116 1190 /* report fd mode change before acting on it */ 1117 1191 if (dev->setup_abort) { 1118 1192 dev->setup_abort = 0; ··· 1156 1234 } else 1157 1235 DBG (dev, "fail %s, state %d\n", __func__, dev->state); 1158 1236 1159 - spin_unlock_irq (&dev->lock); 1160 1237 return retval; 1161 1238 } 1162 1239 ··· 1202 1281 struct dev_data *dev = fd->private_data; 1203 1282 int mask = 0; 1204 1283 1284 + if (dev->state <= STATE_DEV_OPENED) 1285 + return DEFAULT_POLLMASK; 1286 + 1205 1287 poll_wait(fd, &dev->wait, wait); 1206 1288 1207 1289 spin_lock_irq (&dev->lock); ··· 1239 1315 1240 1316 return ret; 1241 1317 } 1242 - 1243 - /* used after device configuration */ 1244 - static const struct file_operations ep0_io_operations = { 1245 - .owner = THIS_MODULE, 1246 - .llseek = no_llseek, 1247 - 1248 - .read = ep0_read, 1249 - .write = ep0_write, 1250 - .fasync = ep0_fasync, 1251 - .poll = ep0_poll, 1252 - .unlocked_ioctl = dev_ioctl, 1253 - .release = dev_release, 1254 - }; 1255 1318 1256 1319 /*----------------------------------------------------------------------*/ 1257 1320 ··· 1561 1650 goto enomem1; 1562 1651 1563 1652 data->dentry = gadgetfs_create_file (dev->sb, data->name, 1564 - data, &ep_config_operations); 1653 + data, &ep_io_operations); 1565 1654 if (!data->dentry) 1566 1655 goto enomem2; 1567 1656 list_add_tail (&data->epfiles, &dev->epfiles); ··· 1763 1852 u32 tag; 1764 1853 char *kbuf; 1765 1854 1855 + spin_lock_irq(&dev->lock); 1856 + if (dev->state > STATE_DEV_OPENED) { 1857 + value = ep0_write(fd, buf, len, ptr); 1858 + spin_unlock_irq(&dev->lock); 1859 + return value; 1860 + } 1861 + spin_unlock_irq(&dev->lock); 1862 + 1766 1863 if (len < (USB_DT_CONFIG_SIZE + USB_DT_DEVICE_SIZE + 4)) 
1767 1864 return -EINVAL; 1768 1865 ··· 1844 1925 * on, they can work ... except in cleanup paths that 1845 1926 * kick in after the ep0 descriptor is closed. 1846 1927 */ 1847 - fd->f_op = &ep0_io_operations; 1848 1928 value = len; 1849 1929 } 1850 1930 return value; ··· 1874 1956 return value; 1875 1957 } 1876 1958 1877 - static const struct file_operations dev_init_operations = { 1959 + static const struct file_operations ep0_operations = { 1878 1960 .llseek = no_llseek, 1879 1961 1880 1962 .open = dev_open, 1963 + .read = ep0_read, 1881 1964 .write = dev_config, 1882 1965 .fasync = ep0_fasync, 1966 + .poll = ep0_poll, 1883 1967 .unlocked_ioctl = dev_ioctl, 1884 1968 .release = dev_release, 1885 1969 }; ··· 1997 2077 goto Enomem; 1998 2078 1999 2079 dev->sb = sb; 2000 - dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &dev_init_operations); 2080 + dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &ep0_operations); 2001 2081 if (!dev->dentry) { 2002 2082 put_dev(dev); 2003 2083 goto Enomem;
+2
drivers/vfio/pci/vfio_pci_intrs.c
··· 868 868 func = vfio_pci_set_err_trigger; 869 869 break; 870 870 } 871 + break; 871 872 case VFIO_PCI_REQ_IRQ_INDEX: 872 873 switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) { 873 874 case VFIO_IRQ_SET_ACTION_TRIGGER: 874 875 func = vfio_pci_set_req_trigger; 875 876 break; 876 877 } 878 + break; 877 879 } 878 880 879 881 if (!func)
+16 -5
drivers/virtio/virtio_balloon.c
··· 29 29 #include <linux/module.h> 30 30 #include <linux/balloon_compaction.h> 31 31 #include <linux/oom.h> 32 + #include <linux/wait.h> 32 33 33 34 /* 34 35 * Balloon device works in 4K page units. So each page is pointed to by ··· 335 334 static int balloon(void *_vballoon) 336 335 { 337 336 struct virtio_balloon *vb = _vballoon; 337 + DEFINE_WAIT_FUNC(wait, woken_wake_function); 338 338 339 339 set_freezable(); 340 340 while (!kthread_should_stop()) { 341 341 s64 diff; 342 342 343 343 try_to_freeze(); 344 - wait_event_interruptible(vb->config_change, 345 - (diff = towards_target(vb)) != 0 346 - || vb->need_stats_update 347 - || kthread_should_stop() 348 - || freezing(current)); 344 + 345 + add_wait_queue(&vb->config_change, &wait); 346 + for (;;) { 347 + if ((diff = towards_target(vb)) != 0 || 348 + vb->need_stats_update || 349 + kthread_should_stop() || 350 + freezing(current)) 351 + break; 352 + wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); 353 + } 354 + remove_wait_queue(&vb->config_change, &wait); 355 + 349 356 if (vb->need_stats_update) 350 357 stats_handle_request(vb); 351 358 if (diff > 0) ··· 507 498 err = register_oom_notifier(&vb->nb); 508 499 if (err < 0) 509 500 goto out_oom_notify; 501 + 502 + virtio_device_ready(vdev); 510 503 511 504 vb->thread = kthread_run(balloon, vb, "vballoon"); 512 505 if (IS_ERR(vb->thread)) {
+82 -8
drivers/virtio/virtio_mmio.c
··· 156 156 void *buf, unsigned len) 157 157 { 158 158 struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); 159 - u8 *ptr = buf; 160 - int i; 159 + void __iomem *base = vm_dev->base + VIRTIO_MMIO_CONFIG; 160 + u8 b; 161 + __le16 w; 162 + __le32 l; 161 163 162 - for (i = 0; i < len; i++) 163 - ptr[i] = readb(vm_dev->base + VIRTIO_MMIO_CONFIG + offset + i); 164 + if (vm_dev->version == 1) { 165 + u8 *ptr = buf; 166 + int i; 167 + 168 + for (i = 0; i < len; i++) 169 + ptr[i] = readb(base + offset + i); 170 + return; 171 + } 172 + 173 + switch (len) { 174 + case 1: 175 + b = readb(base + offset); 176 + memcpy(buf, &b, sizeof b); 177 + break; 178 + case 2: 179 + w = cpu_to_le16(readw(base + offset)); 180 + memcpy(buf, &w, sizeof w); 181 + break; 182 + case 4: 183 + l = cpu_to_le32(readl(base + offset)); 184 + memcpy(buf, &l, sizeof l); 185 + break; 186 + case 8: 187 + l = cpu_to_le32(readl(base + offset)); 188 + memcpy(buf, &l, sizeof l); 189 + l = cpu_to_le32(ioread32(base + offset + sizeof l)); 190 + memcpy(buf + sizeof l, &l, sizeof l); 191 + break; 192 + default: 193 + BUG(); 194 + } 164 195 } 165 196 166 197 static void vm_set(struct virtio_device *vdev, unsigned offset, 167 198 const void *buf, unsigned len) 168 199 { 169 200 struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); 170 - const u8 *ptr = buf; 171 - int i; 201 + void __iomem *base = vm_dev->base + VIRTIO_MMIO_CONFIG; 202 + u8 b; 203 + __le16 w; 204 + __le32 l; 172 205 173 - for (i = 0; i < len; i++) 174 - writeb(ptr[i], vm_dev->base + VIRTIO_MMIO_CONFIG + offset + i); 206 + if (vm_dev->version == 1) { 207 + const u8 *ptr = buf; 208 + int i; 209 + 210 + for (i = 0; i < len; i++) 211 + writeb(ptr[i], base + offset + i); 212 + 213 + return; 214 + } 215 + 216 + switch (len) { 217 + case 1: 218 + memcpy(&b, buf, sizeof b); 219 + writeb(b, base + offset); 220 + break; 221 + case 2: 222 + memcpy(&w, buf, sizeof w); 223 + writew(le16_to_cpu(w), base + offset); 224 + break; 225 + case 4: 
226 + memcpy(&l, buf, sizeof l); 227 + writel(le32_to_cpu(l), base + offset); 228 + break; 229 + case 8: 230 + memcpy(&l, buf, sizeof l); 231 + writel(le32_to_cpu(l), base + offset); 232 + memcpy(&l, buf + sizeof l, sizeof l); 233 + writel(le32_to_cpu(l), base + offset + sizeof l); 234 + break; 235 + default: 236 + BUG(); 237 + } 238 + } 239 + 240 + static u32 vm_generation(struct virtio_device *vdev) 241 + { 242 + struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev); 243 + 244 + if (vm_dev->version == 1) 245 + return 0; 246 + else 247 + return readl(vm_dev->base + VIRTIO_MMIO_CONFIG_GENERATION); 175 248 } 176 249 177 250 static u8 vm_get_status(struct virtio_device *vdev) ··· 513 440 static const struct virtio_config_ops virtio_mmio_config_ops = { 514 441 .get = vm_get, 515 442 .set = vm_set, 443 + .generation = vm_generation, 516 444 .get_status = vm_get_status, 517 445 .set_status = vm_set_status, 518 446 .reset = vm_reset,
+12 -6
drivers/xen/events/events_base.c
··· 526 526 pirq_query_unmask(irq); 527 527 528 528 rc = set_evtchn_to_irq(evtchn, irq); 529 - if (rc != 0) { 530 - pr_err("irq%d: Failed to set port to irq mapping (%d)\n", 531 - irq, rc); 532 - xen_evtchn_close(evtchn); 533 - return 0; 534 - } 529 + if (rc) 530 + goto err; 531 + 535 532 bind_evtchn_to_cpu(evtchn, 0); 536 533 info->evtchn = evtchn; 534 + 535 + rc = xen_evtchn_port_setup(info); 536 + if (rc) 537 + goto err; 537 538 538 539 out: 539 540 unmask_evtchn(evtchn); 540 541 eoi_pirq(irq_get_irq_data(irq)); 541 542 543 + return 0; 544 + 545 + err: 546 + pr_err("irq%d: Failed to set port to irq mapping (%d)\n", irq, rc); 547 + xen_evtchn_close(evtchn); 542 548 return 0; 543 549 } 544 550
+1 -1
drivers/xen/xen-pciback/conf_space.c
··· 16 16 #include "conf_space.h" 17 17 #include "conf_space_quirks.h" 18 18 19 - static bool permissive; 19 + bool permissive; 20 20 module_param(permissive, bool, 0644); 21 21 22 22 /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
+2
drivers/xen/xen-pciback/conf_space.h
··· 64 64 void *data; 65 65 }; 66 66 67 + extern bool permissive; 68 + 67 69 #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset) 68 70 69 71 /* Add fields to a device - the add_fields macro expects to get a pointer to
+47 -12
drivers/xen/xen-pciback/conf_space_header.c
··· 11 11 #include "pciback.h" 12 12 #include "conf_space.h" 13 13 14 + struct pci_cmd_info { 15 + u16 val; 16 + }; 17 + 14 18 struct pci_bar_info { 15 19 u32 val; 16 20 u32 len_val; ··· 24 20 #define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO)) 25 21 #define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER) 26 22 23 + /* Bits guests are allowed to control in permissive mode. */ 24 + #define PCI_COMMAND_GUEST (PCI_COMMAND_MASTER|PCI_COMMAND_SPECIAL| \ 25 + PCI_COMMAND_INVALIDATE|PCI_COMMAND_VGA_PALETTE| \ 26 + PCI_COMMAND_WAIT|PCI_COMMAND_FAST_BACK) 27 + 28 + static void *command_init(struct pci_dev *dev, int offset) 29 + { 30 + struct pci_cmd_info *cmd = kmalloc(sizeof(*cmd), GFP_KERNEL); 31 + int err; 32 + 33 + if (!cmd) 34 + return ERR_PTR(-ENOMEM); 35 + 36 + err = pci_read_config_word(dev, PCI_COMMAND, &cmd->val); 37 + if (err) { 38 + kfree(cmd); 39 + return ERR_PTR(err); 40 + } 41 + 42 + return cmd; 43 + } 44 + 27 45 static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data) 28 46 { 29 - int i; 30 - int ret; 47 + int ret = pci_read_config_word(dev, offset, value); 48 + const struct pci_cmd_info *cmd = data; 31 49 32 - ret = xen_pcibk_read_config_word(dev, offset, value, data); 33 - if (!pci_is_enabled(dev)) 34 - return ret; 35 - 36 - for (i = 0; i < PCI_ROM_RESOURCE; i++) { 37 - if (dev->resource[i].flags & IORESOURCE_IO) 38 - *value |= PCI_COMMAND_IO; 39 - if (dev->resource[i].flags & IORESOURCE_MEM) 40 - *value |= PCI_COMMAND_MEMORY; 41 - } 50 + *value &= PCI_COMMAND_GUEST; 51 + *value |= cmd->val & ~PCI_COMMAND_GUEST; 42 52 43 53 return ret; 44 54 } ··· 61 43 { 62 44 struct xen_pcibk_dev_data *dev_data; 63 45 int err; 46 + u16 val; 47 + struct pci_cmd_info *cmd = data; 64 48 65 49 dev_data = pci_get_drvdata(dev); 66 50 if (!pci_is_enabled(dev) && is_enable_cmd(value)) { ··· 102 82 value &= ~PCI_COMMAND_INVALIDATE; 103 83 } 104 84 } 85 + 86 + cmd->val = value; 87 + 88 + if (!permissive && (!dev_data || 
!dev_data->permissive)) 89 + return 0; 90 + 91 + /* Only allow the guest to control certain bits. */ 92 + err = pci_read_config_word(dev, offset, &val); 93 + if (err || val == value) 94 + return err; 95 + 96 + value &= PCI_COMMAND_GUEST; 97 + value |= val & ~PCI_COMMAND_GUEST; 105 98 106 99 return pci_write_config_word(dev, offset, value); 107 100 } ··· 315 282 { 316 283 .offset = PCI_COMMAND, 317 284 .size = 2, 285 + .init = command_init, 286 + .release = bar_release, 318 287 .u.w.read = command_read, 319 288 .u.w.write = command_write, 320 289 },
+17 -2
fs/fuse/dev.c
··· 890 890 891 891 newpage = buf->page; 892 892 893 - if (WARN_ON(!PageUptodate(newpage))) 894 - return -EIO; 893 + if (!PageUptodate(newpage)) 894 + SetPageUptodate(newpage); 895 895 896 896 ClearPageMappedToDisk(newpage); 897 897 ··· 1353 1353 return err; 1354 1354 } 1355 1355 1356 + static int fuse_dev_open(struct inode *inode, struct file *file) 1357 + { 1358 + /* 1359 + * The fuse device's file's private_data is used to hold 1360 + * the fuse_conn(ection) when it is mounted, and is used to 1361 + * keep track of whether the file has been mounted already. 1362 + */ 1363 + file->private_data = NULL; 1364 + return 0; 1365 + } 1366 + 1356 1367 static ssize_t fuse_dev_read(struct kiocb *iocb, const struct iovec *iov, 1357 1368 unsigned long nr_segs, loff_t pos) 1358 1369 { ··· 1808 1797 static int fuse_notify(struct fuse_conn *fc, enum fuse_notify_code code, 1809 1798 unsigned int size, struct fuse_copy_state *cs) 1810 1799 { 1800 + /* Don't try to move pages (yet) */ 1801 + cs->move_pages = 0; 1802 + 1811 1803 switch (code) { 1812 1804 case FUSE_NOTIFY_POLL: 1813 1805 return fuse_notify_poll(fc, size, cs); ··· 2231 2217 2232 2218 const struct file_operations fuse_dev_operations = { 2233 2219 .owner = THIS_MODULE, 2220 + .open = fuse_dev_open, 2234 2221 .llseek = no_llseek, 2235 2222 .read = do_sync_read, 2236 2223 .aio_read = fuse_dev_read,
+1 -1
fs/locks.c
··· 1728 1728 break; 1729 1729 } 1730 1730 } 1731 - trace_generic_delete_lease(inode, fl); 1731 + trace_generic_delete_lease(inode, victim); 1732 1732 if (victim) 1733 1733 error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose); 1734 1734 spin_unlock(&ctx->flc_lock);
+4 -3
fs/nilfs2/segment.c
··· 1907 1907 struct the_nilfs *nilfs) 1908 1908 { 1909 1909 struct nilfs_inode_info *ii, *n; 1910 + int during_mount = !(sci->sc_super->s_flags & MS_ACTIVE); 1910 1911 int defer_iput = false; 1911 1912 1912 1913 spin_lock(&nilfs->ns_inode_lock); ··· 1920 1919 brelse(ii->i_bh); 1921 1920 ii->i_bh = NULL; 1922 1921 list_del_init(&ii->i_dirty); 1923 - if (!ii->vfs_inode.i_nlink) { 1922 + if (!ii->vfs_inode.i_nlink || during_mount) { 1924 1923 /* 1925 - * Defer calling iput() to avoid a deadlock 1926 - * over I_SYNC flag for inodes with i_nlink == 0 1924 + * Defer calling iput() to avoid deadlocks if 1925 + * i_nlink == 0 or mount is not yet finished. 1927 1926 */ 1928 1927 list_add_tail(&ii->i_dirty, &sci->sc_iput_queue); 1929 1928 defer_iput = true;
+2 -1
fs/notify/fanotify/fanotify.c
··· 143 143 !(marks_mask & FS_ISDIR & ~marks_ignored_mask)) 144 144 return false; 145 145 146 - if (event_mask & marks_mask & ~marks_ignored_mask) 146 + if (event_mask & FAN_ALL_OUTGOING_EVENTS & marks_mask & 147 + ~marks_ignored_mask) 147 148 return true; 148 149 149 150 return false;
+1 -1
fs/ocfs2/ocfs2.h
··· 502 502 503 503 static inline int ocfs2_supports_append_dio(struct ocfs2_super *osb) 504 504 { 505 - if (osb->s_feature_ro_compat & OCFS2_FEATURE_RO_COMPAT_APPEND_DIO) 505 + if (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_APPEND_DIO) 506 506 return 1; 507 507 return 0; 508 508 }
+8 -7
fs/ocfs2/ocfs2_fs.h
··· 102 102 | OCFS2_FEATURE_INCOMPAT_INDEXED_DIRS \ 103 103 | OCFS2_FEATURE_INCOMPAT_REFCOUNT_TREE \ 104 104 | OCFS2_FEATURE_INCOMPAT_DISCONTIG_BG \ 105 - | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO) 105 + | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO \ 106 + | OCFS2_FEATURE_INCOMPAT_APPEND_DIO) 106 107 #define OCFS2_FEATURE_RO_COMPAT_SUPP (OCFS2_FEATURE_RO_COMPAT_UNWRITTEN \ 107 108 | OCFS2_FEATURE_RO_COMPAT_USRQUOTA \ 108 - | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA \ 109 - | OCFS2_FEATURE_RO_COMPAT_APPEND_DIO) 109 + | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA) 110 110 111 111 /* 112 112 * Heartbeat-only devices are missing journals and other files. The ··· 179 179 #define OCFS2_FEATURE_INCOMPAT_CLUSTERINFO 0x4000 180 180 181 181 /* 182 + * Append Direct IO support 183 + */ 184 + #define OCFS2_FEATURE_INCOMPAT_APPEND_DIO 0x8000 185 + 186 + /* 182 187 * backup superblock flag is used to indicate that this volume 183 188 * has backup superblocks. 184 189 */ ··· 205 200 #define OCFS2_FEATURE_RO_COMPAT_USRQUOTA 0x0002 206 201 #define OCFS2_FEATURE_RO_COMPAT_GRPQUOTA 0x0004 207 202 208 - /* 209 - * Append Direct IO support 210 - */ 211 - #define OCFS2_FEATURE_RO_COMPAT_APPEND_DIO 0x0008 212 203 213 204 /* The byte offset of the first backup block will be 1G. 214 205 * The following will be 4G, 16G, 64G, 256G and 1T.
+27 -6
fs/overlayfs/super.c
··· 529 529 { 530 530 struct ovl_fs *ufs = sb->s_fs_info; 531 531 532 - if (!(*flags & MS_RDONLY) && 533 - (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY))) 532 + if (!(*flags & MS_RDONLY) && !ufs->upper_mnt) 534 533 return -EROFS; 535 534 536 535 return 0; ··· 614 615 break; 615 616 616 617 default: 618 + pr_err("overlayfs: unrecognized mount option \"%s\" or missing value\n", p); 617 619 return -EINVAL; 618 620 } 619 621 } 622 + 623 + /* Workdir is useless in non-upper mount */ 624 + if (!config->upperdir && config->workdir) { 625 + pr_info("overlayfs: option \"workdir=%s\" is useless in a non-upper mount, ignore\n", 626 + config->workdir); 627 + kfree(config->workdir); 628 + config->workdir = NULL; 629 + } 630 + 620 631 return 0; 621 632 } 622 633 ··· 846 837 847 838 sb->s_stack_depth = 0; 848 839 if (ufs->config.upperdir) { 849 - /* FIXME: workdir is not needed for a R/O mount */ 850 840 if (!ufs->config.workdir) { 851 841 pr_err("overlayfs: missing 'workdir'\n"); 852 842 goto out_free_config; ··· 854 846 err = ovl_mount_dir(ufs->config.upperdir, &upperpath); 855 847 if (err) 856 848 goto out_free_config; 849 + 850 + /* Upper fs should not be r/o */ 851 + if (upperpath.mnt->mnt_sb->s_flags & MS_RDONLY) { 852 + pr_err("overlayfs: upper fs is r/o, try multi-lower layers mount\n"); 853 + err = -EINVAL; 854 + goto out_put_upperpath; 855 + } 857 856 858 857 err = ovl_mount_dir(ufs->config.workdir, &workpath); 859 858 if (err) ··· 884 869 885 870 err = -EINVAL; 886 871 stacklen = ovl_split_lowerdirs(lowertmp); 887 - if (stacklen > OVL_MAX_STACK) 872 + if (stacklen > OVL_MAX_STACK) { 873 + pr_err("overlayfs: too many lower directries, limit is %d\n", 874 + OVL_MAX_STACK); 888 875 goto out_free_lowertmp; 876 + } else if (!ufs->config.upperdir && stacklen == 1) { 877 + pr_err("overlayfs: at least 2 lowerdir are needed while upperdir nonexistent\n"); 878 + goto out_free_lowertmp; 879 + } 889 880 890 881 stack = kcalloc(stacklen, sizeof(struct path), 
GFP_KERNEL); 891 882 if (!stack) ··· 953 932 ufs->numlower++; 954 933 } 955 934 956 - /* If the upper fs is r/o or nonexistent, we mark overlayfs r/o too */ 957 - if (!ufs->upper_mnt || (ufs->upper_mnt->mnt_sb->s_flags & MS_RDONLY)) 935 + /* If the upper fs is nonexistent, we mark overlayfs r/o too */ 936 + if (!ufs->upper_mnt) 958 937 sb->s_flags |= MS_RDONLY; 959 938 960 939 sb->s_d_op = &ovl_dentry_operations;
+3
fs/proc/task_mmu.c
··· 1325 1325 1326 1326 static int pagemap_open(struct inode *inode, struct file *file) 1327 1327 { 1328 + /* do not disclose physical addresses: attack vector */ 1329 + if (!capable(CAP_SYS_ADMIN)) 1330 + return -EPERM; 1328 1331 pr_warn_once("Bits 55-60 of /proc/PID/pagemap entries are about " 1329 1332 "to stop being page-shift some time soon. See the " 1330 1333 "linux/Documentation/vm/pagemap.txt for details.\n");
+2 -1
include/dt-bindings/pinctrl/am33xx.h
··· 13 13 14 14 #define PULL_DISABLE (1 << 3) 15 15 #define INPUT_EN (1 << 5) 16 - #define SLEWCTRL_FAST (1 << 6) 16 + #define SLEWCTRL_SLOW (1 << 6) 17 + #define SLEWCTRL_FAST 0 17 18 18 19 /* update macro depending on INPUT_EN and PULL_ENA */ 19 20 #undef PIN_OUTPUT
+2 -1
include/dt-bindings/pinctrl/am43xx.h
··· 18 18 #define PULL_DISABLE (1 << 16) 19 19 #define PULL_UP (1 << 17) 20 20 #define INPUT_EN (1 << 18) 21 - #define SLEWCTRL_FAST (1 << 19) 21 + #define SLEWCTRL_SLOW (1 << 19) 22 + #define SLEWCTRL_FAST 0 22 23 #define DS0_PULL_UP_DOWN_EN (1 << 27) 23 24 24 25 #define PIN_OUTPUT (PULL_DISABLE)
+1
include/kvm/arm_vgic.h
··· 114 114 void (*sync_lr_elrsr)(struct kvm_vcpu *, int, struct vgic_lr); 115 115 u64 (*get_elrsr)(const struct kvm_vcpu *vcpu); 116 116 u64 (*get_eisr)(const struct kvm_vcpu *vcpu); 117 + void (*clear_eisr)(struct kvm_vcpu *vcpu); 117 118 u32 (*get_interrupt_status)(const struct kvm_vcpu *vcpu); 118 119 void (*enable_underflow)(struct kvm_vcpu *vcpu); 119 120 void (*disable_underflow)(struct kvm_vcpu *vcpu);
+18
include/linux/clk.h
··· 125 125 */ 126 126 int clk_get_phase(struct clk *clk); 127 127 128 + /** 129 + * clk_is_match - check if two clk's point to the same hardware clock 130 + * @p: clk compared against q 131 + * @q: clk compared against p 132 + * 133 + * Returns true if the two struct clk pointers both point to the same hardware 134 + * clock node. Put differently, returns true if struct clk *p and struct clk *q 135 + * share the same struct clk_core object. 136 + * 137 + * Returns false otherwise. Note that two NULL clks are treated as matching. 138 + */ 139 + bool clk_is_match(const struct clk *p, const struct clk *q); 140 + 128 141 #else 129 142 130 143 static inline long clk_get_accuracy(struct clk *clk) ··· 153 140 static inline long clk_get_phase(struct clk *clk) 154 141 { 155 142 return -ENOTSUPP; 143 + } 144 + 145 + static inline bool clk_is_match(const struct clk *p, const struct clk *q) 146 + { 147 + return p == q; 156 148 } 157 149 158 150 #endif
+5
include/linux/irqchip/arm-gic-v3.h
··· 166 166 167 167 #define GITS_TRANSLATER 0x10040 168 168 169 + #define GITS_CTLR_ENABLE (1U << 0) 170 + #define GITS_CTLR_QUIESCENT (1U << 31) 171 + 172 + #define GITS_TYPER_DEVBITS_SHIFT 13 173 + #define GITS_TYPER_DEVBITS(r) ((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1) 169 174 #define GITS_TYPER_PTA (1UL << 19) 170 175 171 176 #define GITS_CBASER_VALID (1UL << 63)
+3 -6
include/linux/kasan.h
··· 5 5 6 6 struct kmem_cache; 7 7 struct page; 8 + struct vm_struct; 8 9 9 10 #ifdef CONFIG_KASAN 10 11 ··· 50 49 void kasan_slab_alloc(struct kmem_cache *s, void *object); 51 50 void kasan_slab_free(struct kmem_cache *s, void *object); 52 51 53 - #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 54 - 55 52 int kasan_module_alloc(void *addr, size_t size); 56 - void kasan_module_free(void *addr); 53 + void kasan_free_shadow(const struct vm_struct *vm); 57 54 58 55 #else /* CONFIG_KASAN */ 59 - 60 - #define MODULE_ALIGN 1 61 56 62 57 static inline void kasan_unpoison_shadow(const void *address, size_t size) {} 63 58 ··· 79 82 static inline void kasan_slab_free(struct kmem_cache *s, void *object) {} 80 83 81 84 static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } 82 - static inline void kasan_module_free(void *addr) {} 85 + static inline void kasan_free_shadow(const struct vm_struct *vm) {} 83 86 84 87 #endif /* CONFIG_KASAN */ 85 88
+4
include/linux/module.h
··· 344 344 unsigned long *ftrace_callsites; 345 345 #endif 346 346 347 + #ifdef CONFIG_LIVEPATCH 348 + bool klp_alive; 349 + #endif 350 + 347 351 #ifdef CONFIG_MODULE_UNLOAD 348 352 /* What modules depend on me? */ 349 353 struct list_head source_list;
+8
include/linux/moduleloader.h
··· 84 84 85 85 /* Any cleanup before freeing mod->module_init */ 86 86 void module_arch_freeing_init(struct module *mod); 87 + 88 + #ifdef CONFIG_KASAN 89 + #include <linux/kasan.h> 90 + #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 91 + #else 92 + #define MODULE_ALIGN PAGE_SIZE 93 + #endif 94 + 87 95 #endif
+4 -1
include/linux/netdevice.h
··· 965 965 * Used to add FDB entries to dump requests. Implementers should add 966 966 * entries to skb and update idx with the number of entries. 967 967 * 968 - * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh) 968 + * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, 969 + * u16 flags) 969 970 * int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq, 970 971 * struct net_device *dev, u32 filter_mask) 972 + * int (*ndo_bridge_dellink)(struct net_device *dev, struct nlmsghdr *nlh, 973 + * u16 flags); 971 974 * 972 975 * int (*ndo_change_carrier)(struct net_device *dev, bool new_carrier); 973 976 * Called to change device carrier. Soft-devices (like dummy, team, etc)
+1 -1
include/linux/of_platform.h
··· 84 84 static inline void of_platform_depopulate(struct device *parent) { } 85 85 #endif 86 86 87 - #ifdef CONFIG_OF_DYNAMIC 87 + #if defined(CONFIG_OF_DYNAMIC) && defined(CONFIG_OF_ADDRESS) 88 88 extern void of_platform_register_reconfig_notifier(void); 89 89 #else 90 90 static inline void of_platform_register_reconfig_notifier(void) { }
+3 -3
include/linux/pinctrl/consumer.h
··· 82 82 83 83 static inline struct pinctrl * __must_check pinctrl_get(struct device *dev) 84 84 { 85 - return ERR_PTR(-ENOSYS); 85 + return NULL; 86 86 } 87 87 88 88 static inline void pinctrl_put(struct pinctrl *p) ··· 93 93 struct pinctrl *p, 94 94 const char *name) 95 95 { 96 - return ERR_PTR(-ENOSYS); 96 + return NULL; 97 97 } 98 98 99 99 static inline int pinctrl_select_state(struct pinctrl *p, ··· 104 104 105 105 static inline struct pinctrl * __must_check devm_pinctrl_get(struct device *dev) 106 106 { 107 - return ERR_PTR(-ENOSYS); 107 + return NULL; 108 108 } 109 109 110 110 static inline void devm_pinctrl_put(struct pinctrl *p)
+7
include/linux/skbuff.h
··· 945 945 to->l4_hash = from->l4_hash; 946 946 }; 947 947 948 + static inline void skb_sender_cpu_clear(struct sk_buff *skb) 949 + { 950 + #ifdef CONFIG_XPS 951 + skb->sender_cpu = 0; 952 + #endif 953 + } 954 + 948 955 #ifdef NET_SKBUFF_DATA_USES_OFFSET 949 956 static inline unsigned char *skb_end_pointer(const struct sk_buff *skb) 950 957 {
+2
include/linux/uio.h
··· 98 98 size_t maxsize, size_t *start); 99 99 int iov_iter_npages(const struct iov_iter *i, int maxpages); 100 100 101 + const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags); 102 + 101 103 static inline size_t iov_iter_count(struct iov_iter *i) 102 104 { 103 105 return i->count;
+1
include/linux/vmalloc.h
··· 17 17 #define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */ 18 18 #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ 19 19 #define VM_NO_GUARD 0x00000040 /* don't add guard page */ 20 + #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ 20 21 /* bits [20..32] reserved for arch specific ioremap internals */ 21 22 22 23 /*
+1
include/net/dst.h
··· 481 481 enum { 482 482 XFRM_LOOKUP_ICMP = 1 << 0, 483 483 XFRM_LOOKUP_QUEUE = 1 << 1, 484 + XFRM_LOOKUP_KEEP_DST_REF = 1 << 2, 484 485 }; 485 486 486 487 struct flowi;
+1
include/net/vxlan.h
··· 91 91 92 92 #define VXLAN_N_VID (1u << 24) 93 93 #define VXLAN_VID_MASK (VXLAN_N_VID - 1) 94 + #define VXLAN_VNI_MASK (VXLAN_VID_MASK << 8) 94 95 #define VXLAN_HLEN (sizeof(struct udphdr) + sizeof(struct vxlanhdr)) 95 96 96 97 struct vxlan_metadata {
+1 -1
include/soc/at91/at91sam9_ddrsdr.h
··· 92 92 #define AT91_DDRSDRC_UPD_MR (3 << 20) /* Update load mode register and extended mode register */ 93 93 94 94 #define AT91_DDRSDRC_MDR 0x20 /* Memory Device Register */ 95 - #define AT91_DDRSDRC_MD (3 << 0) /* Memory Device Type */ 95 + #define AT91_DDRSDRC_MD (7 << 0) /* Memory Device Type */ 96 96 #define AT91_DDRSDRC_MD_SDR 0 97 97 #define AT91_DDRSDRC_MD_LOW_POWER_SDR 1 98 98 #define AT91_DDRSDRC_MD_LOW_POWER_DDR 3
+6 -2
include/uapi/linux/virtio_blk.h
··· 60 60 __u32 size_max; 61 61 /* The maximum number of segments (if VIRTIO_BLK_F_SEG_MAX) */ 62 62 __u32 seg_max; 63 - /* geometry the device (if VIRTIO_BLK_F_GEOMETRY) */ 63 + /* geometry of the device (if VIRTIO_BLK_F_GEOMETRY) */ 64 64 struct virtio_blk_geometry { 65 65 __u16 cylinders; 66 66 __u8 heads; ··· 119 119 #define VIRTIO_BLK_T_BARRIER 0x80000000 120 120 #endif /* !VIRTIO_BLK_NO_LEGACY */ 121 121 122 - /* This is the first element of the read scatter-gather list. */ 122 + /* 123 + * This comes first in the read scatter-gather list. 124 + * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated, 125 + * this is the first element of the read scatter-gather list. 126 + */ 123 127 struct virtio_blk_outhdr { 124 128 /* VIRTIO_BLK_T* */ 125 129 __virtio32 type;
+10 -2
include/uapi/linux/virtio_scsi.h
··· 29 29 30 30 #include <linux/virtio_types.h> 31 31 32 - #define VIRTIO_SCSI_CDB_SIZE 32 33 - #define VIRTIO_SCSI_SENSE_SIZE 96 32 + /* Default values of the CDB and sense data size configuration fields */ 33 + #define VIRTIO_SCSI_CDB_DEFAULT_SIZE 32 34 + #define VIRTIO_SCSI_SENSE_DEFAULT_SIZE 96 35 + 36 + #ifndef VIRTIO_SCSI_CDB_SIZE 37 + #define VIRTIO_SCSI_CDB_SIZE VIRTIO_SCSI_CDB_DEFAULT_SIZE 38 + #endif 39 + #ifndef VIRTIO_SCSI_SENSE_SIZE 40 + #define VIRTIO_SCSI_SENSE_SIZE VIRTIO_SCSI_SENSE_DEFAULT_SIZE 41 + #endif 34 42 35 43 /* SCSI command request, followed by data-out */ 36 44 struct virtio_scsi_cmd_req {
+2 -2
include/xen/xenbus.h
··· 114 114 const char *mod_name); 115 115 116 116 #define xenbus_register_frontend(drv) \ 117 - __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME); 117 + __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME) 118 118 #define xenbus_register_backend(drv) \ 119 - __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME); 119 + __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME) 120 120 121 121 void xenbus_unregister_driver(struct xenbus_driver *drv); 122 122
+1 -1
kernel/events/core.c
··· 3591 3591 ctx = perf_event_ctx_lock_nested(event, SINGLE_DEPTH_NESTING); 3592 3592 WARN_ON_ONCE(ctx->parent_ctx); 3593 3593 perf_remove_from_context(event, true); 3594 - mutex_unlock(&ctx->mutex); 3594 + perf_event_ctx_unlock(event, ctx); 3595 3595 3596 3596 _free_event(event); 3597 3597 }
+26 -4
kernel/livepatch/core.c
··· 89 89 /* sets obj->mod if object is not vmlinux and module is found */ 90 90 static void klp_find_object_module(struct klp_object *obj) 91 91 { 92 + struct module *mod; 93 + 92 94 if (!klp_is_module(obj)) 93 95 return; 94 96 95 97 mutex_lock(&module_mutex); 96 98 /* 97 - * We don't need to take a reference on the module here because we have 98 - * the klp_mutex, which is also taken by the module notifier. This 99 - * prevents any module from unloading until we release the klp_mutex. 99 + * We do not want to block removal of patched modules and therefore 100 + * we do not take a reference here. The patches are removed by 101 + * a going module handler instead. 100 102 */ 101 - obj->mod = find_module(obj->name); 103 + mod = find_module(obj->name); 104 + /* 105 + * Do not mess work of the module coming and going notifiers. 106 + * Note that the patch might still be needed before the going handler 107 + * is called. Module functions can be called even in the GOING state 108 + * until mod->exit() finishes. This is especially important for 109 + * patches that modify semantic of the functions. 110 + */ 111 + if (mod && mod->klp_alive) 112 + obj->mod = mod; 113 + 102 114 mutex_unlock(&module_mutex); 103 115 } 104 116 ··· 779 767 return -EINVAL; 780 768 781 769 obj->state = KLP_DISABLED; 770 + obj->mod = NULL; 782 771 783 772 klp_find_object_module(obj); 784 773 ··· 973 960 return 0; 974 961 975 962 mutex_lock(&klp_mutex); 963 + 964 + /* 965 + * Each module has to know that the notifier has been called. 966 + * We never know what module will get patched by a new patch. 967 + */ 968 + if (action == MODULE_STATE_COMING) 969 + mod->klp_alive = true; 970 + else /* MODULE_STATE_GOING */ 971 + mod->klp_alive = false; 976 972 977 973 list_for_each_entry(patch, &klp_patches, list) { 978 974 for (obj = patch->objs; obj->funcs; obj++) {
-2
kernel/module.c
··· 56 56 #include <linux/async.h> 57 57 #include <linux/percpu.h> 58 58 #include <linux/kmemleak.h> 59 - #include <linux/kasan.h> 60 59 #include <linux/jump_label.h> 61 60 #include <linux/pfn.h> 62 61 #include <linux/bsearch.h> ··· 1813 1814 void __weak module_memfree(void *module_region) 1814 1815 { 1815 1816 vfree(module_region); 1816 - kasan_module_free(module_region); 1817 1817 } 1818 1818 1819 1819 void __weak module_arch_cleanup(struct module *mod)
+1 -1
lib/Makefile
··· 24 24 25 25 obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \ 26 26 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \ 27 - gcd.o lcm.o list_sort.o uuid.o flex_array.o clz_ctz.o \ 27 + gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \ 28 28 bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \ 29 29 percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o 30 30 obj-y += string_helpers.o
+1 -1
mm/Makefile
··· 21 21 mm_init.o mmu_context.o percpu.o slab_common.o \ 22 22 compaction.o vmacache.o \ 23 23 interval_tree.o list_lru.o workingset.o \ 24 - iov_iter.o debug.o $(mmu-y) 24 + debug.o $(mmu-y) 25 25 26 26 obj-y += init-mm.o 27 27
+7 -5
mm/cma.c
··· 64 64 return (1UL << (align_order - cma->order_per_bit)) - 1; 65 65 } 66 66 67 + /* 68 + * Find a PFN aligned to the specified order and return an offset represented in 69 + * order_per_bits. 70 + */ 67 71 static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order) 68 72 { 69 - unsigned int alignment; 70 - 71 73 if (align_order <= cma->order_per_bit) 72 74 return 0; 73 - alignment = 1UL << (align_order - cma->order_per_bit); 74 - return ALIGN(cma->base_pfn, alignment) - 75 - (cma->base_pfn >> cma->order_per_bit); 75 + 76 + return (ALIGN(cma->base_pfn, (1UL << align_order)) 77 + - cma->base_pfn) >> cma->order_per_bit; 76 78 } 77 79 78 80 static unsigned long cma_bitmap_maxno(struct cma *cma)
+8 -3
mm/huge_memory.c
··· 1295 1295 * Avoid grouping on DSO/COW pages in specific and RO pages 1296 1296 * in general, RO pages shouldn't hurt as much anyway since 1297 1297 * they can be in shared cache state. 1298 + * 1299 + * FIXME! This checks "pmd_dirty()" as an approximation of 1300 + * "is this a read-only page", since checking "pmd_write()" 1301 + * is even more broken. We haven't actually turned this into 1302 + * a writable page, so pmd_write() will always be false. 1298 1303 */ 1299 - if (!pmd_write(pmd)) 1304 + if (!pmd_dirty(pmd)) 1300 1305 flags |= TNF_NO_GROUP; 1301 1306 1302 1307 /* ··· 1487 1482 1488 1483 if (__pmd_trans_huge_lock(pmd, vma, &ptl) == 1) { 1489 1484 pmd_t entry; 1485 + ret = 1; 1490 1486 1491 1487 /* 1492 1488 * Avoid trapping faults against the zero page. The read-only ··· 1496 1490 */ 1497 1491 if (prot_numa && is_huge_zero_pmd(*pmd)) { 1498 1492 spin_unlock(ptl); 1499 - return 0; 1493 + return ret; 1500 1494 } 1501 1495 1502 1496 if (!prot_numa || !pmd_protnone(*pmd)) { 1503 - ret = 1; 1504 1497 entry = pmdp_get_and_clear_notify(mm, addr, pmd); 1505 1498 entry = pmd_modify(entry, newprot); 1506 1499 ret = HPAGE_PMD_NR;
+3 -1
mm/hugetlb.c
··· 917 917 __SetPageHead(page); 918 918 __ClearPageReserved(page); 919 919 for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) { 920 - __SetPageTail(p); 921 920 /* 922 921 * For gigantic hugepages allocated through bootmem at 923 922 * boot, it's safer to be consistent with the not-gigantic ··· 932 933 __ClearPageReserved(p); 933 934 set_page_count(p, 0); 934 935 p->first_page = page; 936 + /* Make sure p->first_page is always valid for PageTail() */ 937 + smp_wmb(); 938 + __SetPageTail(p); 935 939 } 936 940 } 937 941
+15
mm/iov_iter.c → lib/iov_iter.c
··· 751 751 return npages; 752 752 } 753 753 EXPORT_SYMBOL(iov_iter_npages); 754 + 755 + const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags) 756 + { 757 + *new = *old; 758 + if (new->type & ITER_BVEC) 759 + return new->bvec = kmemdup(new->bvec, 760 + new->nr_segs * sizeof(struct bio_vec), 761 + flags); 762 + else 763 + /* iovec and kvec have identical layout */ 764 + return new->iov = kmemdup(new->iov, 765 + new->nr_segs * sizeof(struct iovec), 766 + flags); 767 + } 768 + EXPORT_SYMBOL(dup_iter);
+11 -3
mm/kasan/kasan.c
··· 29 29 #include <linux/stacktrace.h> 30 30 #include <linux/string.h> 31 31 #include <linux/types.h> 32 + #include <linux/vmalloc.h> 32 33 #include <linux/kasan.h> 33 34 34 35 #include "kasan.h" ··· 415 414 GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO, 416 415 PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, 417 416 __builtin_return_address(0)); 418 - return ret ? 0 : -ENOMEM; 417 + 418 + if (ret) { 419 + find_vm_area(addr)->flags |= VM_KASAN; 420 + return 0; 421 + } 422 + 423 + return -ENOMEM; 419 424 } 420 425 421 - void kasan_module_free(void *addr) 426 + void kasan_free_shadow(const struct vm_struct *vm) 422 427 { 423 - vfree(kasan_mem_to_shadow(addr)); 428 + if (vm->flags & VM_KASAN) 429 + vfree(kasan_mem_to_shadow(vm->addr)); 424 430 } 425 431 426 432 static void register_global(struct kasan_global *global)
+3 -1
mm/memcontrol.c
··· 5232 5232 * on for the root memcg is enough. 5233 5233 */ 5234 5234 if (cgroup_on_dfl(root_css->cgroup)) 5235 - mem_cgroup_from_css(root_css)->use_hierarchy = true; 5235 + root_mem_cgroup->use_hierarchy = true; 5236 + else 5237 + root_mem_cgroup->use_hierarchy = false; 5236 5238 } 5237 5239 5238 5240 static u64 memory_current_read(struct cgroup_subsys_state *css,
+6 -1
mm/memory.c
··· 3072 3072 * Avoid grouping on DSO/COW pages in specific and RO pages 3073 3073 * in general, RO pages shouldn't hurt as much anyway since 3074 3074 * they can be in shared cache state. 3075 + * 3076 + * FIXME! This checks "pmd_dirty()" as an approximation of 3077 + * "is this a read-only page", since checking "pmd_write()" 3078 + * is even more broken. We haven't actually turned this into 3079 + * a writable page, so pmd_write() will always be false. 3075 3080 */ 3076 - if (!pte_write(pte)) 3081 + if (!pte_dirty(pte)) 3077 3082 flags |= TNF_NO_GROUP; 3078 3083 3079 3084 /*
+2 -2
mm/mlock.c
··· 26 26 27 27 int can_do_mlock(void) 28 28 { 29 - if (capable(CAP_IPC_LOCK)) 30 - return 1; 31 29 if (rlimit(RLIMIT_MEMLOCK) != 0) 30 + return 1; 31 + if (capable(CAP_IPC_LOCK)) 32 32 return 1; 33 33 return 0; 34 34 }
+1
mm/nommu.c
··· 62 62 EXPORT_SYMBOL(high_memory); 63 63 struct page *mem_map; 64 64 unsigned long max_mapnr; 65 + EXPORT_SYMBOL(max_mapnr); 65 66 unsigned long highest_memmap_pfn; 66 67 struct percpu_counter vm_committed_as; 67 68 int sysctl_overcommit_memory = OVERCOMMIT_GUESS; /* heuristic overcommit */
+2 -1
mm/page_alloc.c
··· 2373 2373 goto out; 2374 2374 } 2375 2375 /* Exhausted what can be done so it's blamo time */ 2376 - if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false)) 2376 + if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false) 2377 + || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) 2377 2378 *did_some_progress = 1; 2378 2379 out: 2379 2380 oom_zonelist_unlock(ac->zonelist, gfp_mask);
+1
mm/vmalloc.c
··· 1418 1418 spin_unlock(&vmap_area_lock); 1419 1419 1420 1420 vmap_debug_free_range(va->va_start, va->va_end); 1421 + kasan_free_shadow(vm); 1421 1422 free_unmap_vmap_area(va); 1422 1423 vm->size -= PAGE_SIZE; 1423 1424
+20 -4
net/9p/trans_virtio.c
··· 658 658 static void p9_virtio_remove(struct virtio_device *vdev) 659 659 { 660 660 struct virtio_chan *chan = vdev->priv; 661 - 662 - if (chan->inuse) 663 - p9_virtio_close(chan->client); 664 - vdev->config->del_vqs(vdev); 661 + unsigned long warning_time; 665 662 666 663 mutex_lock(&virtio_9p_lock); 664 + 665 + /* Remove self from list so we don't get new users. */ 667 666 list_del(&chan->chan_list); 667 + warning_time = jiffies; 668 + 669 + /* Wait for existing users to close. */ 670 + while (chan->inuse) { 671 + mutex_unlock(&virtio_9p_lock); 672 + msleep(250); 673 + if (time_after(jiffies, warning_time + 10 * HZ)) { 674 + dev_emerg(&vdev->dev, 675 + "p9_virtio_remove: waiting for device in use.\n"); 676 + warning_time = jiffies; 677 + } 678 + mutex_lock(&virtio_9p_lock); 679 + } 680 + 668 681 mutex_unlock(&virtio_9p_lock); 682 + 683 + vdev->config->del_vqs(vdev); 684 + 669 685 sysfs_remove_file(&(vdev->dev.kobj), &dev_attr_mount_tag.attr); 670 686 kobject_uevent(&(vdev->dev.kobj), KOBJ_CHANGE); 671 687 kfree(chan->tag);
+2
net/bridge/br_if.c
··· 563 563 */ 564 564 del_nbp(p); 565 565 566 + dev_set_mtu(br->dev, br_min_mtu(br)); 567 + 566 568 spin_lock_bh(&br->lock); 567 569 changed_addr = br_stp_recalculate_bridge_id(br); 568 570 spin_unlock_bh(&br->lock);
+1 -1
net/caif/caif_socket.c
··· 281 281 int copylen; 282 282 283 283 ret = -EOPNOTSUPP; 284 - if (m->msg_flags&MSG_OOB) 284 + if (flags & MSG_OOB) 285 285 goto read_error; 286 286 287 287 skb = skb_recv_datagram(sk, flags, 0 , &ret);
+7
net/compat.c
··· 49 49 __get_user(kmsg->msg_controllen, &umsg->msg_controllen) || 50 50 __get_user(kmsg->msg_flags, &umsg->msg_flags)) 51 51 return -EFAULT; 52 + 53 + if (!uaddr) 54 + kmsg->msg_namelen = 0; 55 + 56 + if (kmsg->msg_namelen < 0) 57 + return -EINVAL; 58 + 52 59 if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) 53 60 kmsg->msg_namelen = sizeof(struct sockaddr_storage); 54 61 kmsg->msg_control = compat_ptr(tmp3);
+13 -13
net/core/rtnetlink.c
··· 2187 2187 } 2188 2188 } 2189 2189 err = rtnl_configure_link(dev, ifm); 2190 - if (err < 0) { 2191 - if (ops->newlink) { 2192 - LIST_HEAD(list_kill); 2193 - 2194 - ops->dellink(dev, &list_kill); 2195 - unregister_netdevice_many(&list_kill); 2196 - } else { 2197 - unregister_netdevice(dev); 2198 - } 2199 - goto out; 2200 - } 2201 - 2190 + if (err < 0) 2191 + goto out_unregister; 2202 2192 if (link_net) { 2203 2193 err = dev_change_net_namespace(dev, dest_net, ifname); 2204 2194 if (err < 0) 2205 - unregister_netdevice(dev); 2195 + goto out_unregister; 2206 2196 } 2207 2197 out: 2208 2198 if (link_net) 2209 2199 put_net(link_net); 2210 2200 put_net(dest_net); 2211 2201 return err; 2202 + out_unregister: 2203 + if (ops->newlink) { 2204 + LIST_HEAD(list_kill); 2205 + 2206 + ops->dellink(dev, &list_kill); 2207 + unregister_netdevice_many(&list_kill); 2208 + } else { 2209 + unregister_netdevice(dev); 2210 + } 2211 + goto out; 2212 2212 } 2213 2213 } 2214 2214
+7 -3
net/core/skbuff.c
··· 3689 3689 struct sock *sk, int tstype) 3690 3690 { 3691 3691 struct sk_buff *skb; 3692 - bool tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY; 3692 + bool tsonly; 3693 3693 3694 - if (!sk || !skb_may_tx_timestamp(sk, tsonly)) 3694 + if (!sk) 3695 + return; 3696 + 3697 + tsonly = sk->sk_tsflags & SOF_TIMESTAMPING_OPT_TSONLY; 3698 + if (!skb_may_tx_timestamp(sk, tsonly)) 3695 3699 return; 3696 3700 3697 3701 if (tsonly) ··· 4133 4129 skb->ignore_df = 0; 4134 4130 skb_dst_drop(skb); 4135 4131 skb->mark = 0; 4136 - skb->sender_cpu = 0; 4132 + skb_sender_cpu_clear(skb); 4137 4133 skb_init_secmark(skb); 4138 4134 secpath_reset(skb); 4139 4135 nf_reset(skb);
+4
net/core/sock.c
··· 1655 1655 } 1656 1656 EXPORT_SYMBOL(sock_rfree); 1657 1657 1658 + /* 1659 + * Buffer destructor for skbs that are not used directly in read or write 1660 + * path, e.g. for error handler skbs. Automatically called from kfree_skb. 1661 + */ 1658 1662 void sock_efree(struct sk_buff *skb) 1659 1663 { 1660 1664 sock_put(skb->sk);
+6 -4
net/core/sysctl_net_core.c
··· 24 24 25 25 static int zero = 0; 26 26 static int one = 1; 27 + static int min_sndbuf = SOCK_MIN_SNDBUF; 28 + static int min_rcvbuf = SOCK_MIN_RCVBUF; 27 29 28 30 static int net_msg_warn; /* Unused, but still a sysctl */ 29 31 ··· 238 236 .maxlen = sizeof(int), 239 237 .mode = 0644, 240 238 .proc_handler = proc_dointvec_minmax, 241 - .extra1 = &one, 239 + .extra1 = &min_sndbuf, 242 240 }, 243 241 { 244 242 .procname = "rmem_max", ··· 246 244 .maxlen = sizeof(int), 247 245 .mode = 0644, 248 246 .proc_handler = proc_dointvec_minmax, 249 - .extra1 = &one, 247 + .extra1 = &min_rcvbuf, 250 248 }, 251 249 { 252 250 .procname = "wmem_default", ··· 254 252 .maxlen = sizeof(int), 255 253 .mode = 0644, 256 254 .proc_handler = proc_dointvec_minmax, 257 - .extra1 = &one, 255 + .extra1 = &min_sndbuf, 258 256 }, 259 257 { 260 258 .procname = "rmem_default", ··· 262 260 .maxlen = sizeof(int), 263 261 .mode = 0644, 264 262 .proc_handler = proc_dointvec_minmax, 265 - .extra1 = &one, 263 + .extra1 = &min_rcvbuf, 266 264 }, 267 265 { 268 266 .procname = "dev_weight",
+1
net/ipv4/inet_connection_sock.c
··· 269 269 release_sock(sk); 270 270 if (reqsk_queue_empty(&icsk->icsk_accept_queue)) 271 271 timeo = schedule_timeout(timeo); 272 + sched_annotate_sleep(); 272 273 lock_sock(sk); 273 274 err = 0; 274 275 if (!reqsk_queue_empty(&icsk->icsk_accept_queue))
+15 -3
net/ipv4/inet_diag.c
··· 90 90 } 91 91 } 92 92 93 + static size_t inet_sk_attr_size(void) 94 + { 95 + return nla_total_size(sizeof(struct tcp_info)) 96 + + nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 97 + + nla_total_size(1) /* INET_DIAG_TOS */ 98 + + nla_total_size(1) /* INET_DIAG_TCLASS */ 99 + + nla_total_size(sizeof(struct inet_diag_meminfo)) 100 + + nla_total_size(sizeof(struct inet_diag_msg)) 101 + + nla_total_size(SK_MEMINFO_VARS * sizeof(u32)) 102 + + nla_total_size(TCP_CA_NAME_MAX) 103 + + nla_total_size(sizeof(struct tcpvegas_info)) 104 + + 64; 105 + } 106 + 93 107 int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, 94 108 struct sk_buff *skb, const struct inet_diag_req_v2 *req, 95 109 struct user_namespace *user_ns, ··· 363 349 if (err) 364 350 goto out; 365 351 366 - rep = nlmsg_new(sizeof(struct inet_diag_msg) + 367 - sizeof(struct inet_diag_meminfo) + 368 - sizeof(struct tcp_info) + 64, GFP_KERNEL); 352 + rep = nlmsg_new(inet_sk_attr_size(), GFP_KERNEL); 369 353 if (!rep) { 370 354 err = -ENOMEM; 371 355 goto out;
+1
net/ipv4/ip_forward.c
··· 67 67 if (unlikely(opt->optlen)) 68 68 ip_forward_options(skb); 69 69 70 + skb_sender_cpu_clear(skb); 70 71 return dst_output(skb); 71 72 } 72 73
+6
net/ipv4/tcp_cong.c
··· 378 378 */ 379 379 void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked) 380 380 { 381 + /* If credits accumulated at a higher w, apply them gently now. */ 382 + if (tp->snd_cwnd_cnt >= w) { 383 + tp->snd_cwnd_cnt = 0; 384 + tp->snd_cwnd++; 385 + } 386 + 381 387 tp->snd_cwnd_cnt += acked; 382 388 if (tp->snd_cwnd_cnt >= w) { 383 389 u32 delta = tp->snd_cwnd_cnt / w;
+4 -2
net/ipv4/tcp_cubic.c
··· 306 306 } 307 307 } 308 308 309 - if (ca->cnt == 0) /* cannot be zero */ 310 - ca->cnt = 1; 309 + /* The maximum rate of cwnd increase CUBIC allows is 1 packet per 310 + * 2 packets ACKed, meaning cwnd grows at 1.5x per RTT. 311 + */ 312 + ca->cnt = max(ca->cnt, 2U); 311 313 } 312 314 313 315 static void bictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked)
+1 -5
net/ipv4/tcp_output.c
··· 2820 2820 } else { 2821 2821 /* Socket is locked, keep trying until memory is available. */ 2822 2822 for (;;) { 2823 - skb = alloc_skb_fclone(MAX_TCP_HEADER, 2824 - sk->sk_allocation); 2823 + skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); 2825 2824 if (skb) 2826 2825 break; 2827 2826 yield(); 2828 2827 } 2829 - 2830 - /* Reserve space for headers and prepare control bits. */ 2831 - skb_reserve(skb, MAX_TCP_HEADER); 2832 2828 /* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */ 2833 2829 tcp_init_nondata_skb(skb, tp->write_seq, 2834 2830 TCPHDR_ACK | TCPHDR_FIN);
+1 -1
net/ipv4/xfrm4_output.c
··· 63 63 return err; 64 64 65 65 IPCB(skb)->flags |= IPSKB_XFRM_TUNNEL_SIZE; 66 + skb->protocol = htons(ETH_P_IP); 66 67 67 68 return x->outer_mode->output2(x, skb); 68 69 } ··· 72 71 int xfrm4_output_finish(struct sk_buff *skb) 73 72 { 74 73 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 75 - skb->protocol = htons(ETH_P_IP); 76 74 77 75 #ifdef CONFIG_NETFILTER 78 76 IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
+1
net/ipv6/fib6_rules.c
··· 104 104 goto again; 105 105 flp6->saddr = saddr; 106 106 } 107 + err = rt->dst.error; 107 108 goto out; 108 109 } 109 110 again:
+1
net/ipv6/ip6_output.c
··· 318 318 319 319 static inline int ip6_forward_finish(struct sk_buff *skb) 320 320 { 321 + skb_sender_cpu_clear(skb); 321 322 return dst_output(skb); 322 323 } 323 324
+17 -16
net/ipv6/ip6_tunnel.c
··· 308 308 * Create tunnel matching given parameters. 309 309 * 310 310 * Return: 311 - * created tunnel or NULL 311 + * created tunnel or error pointer 312 312 **/ 313 313 314 314 static struct ip6_tnl *ip6_tnl_create(struct net *net, struct __ip6_tnl_parm *p) ··· 316 316 struct net_device *dev; 317 317 struct ip6_tnl *t; 318 318 char name[IFNAMSIZ]; 319 - int err; 319 + int err = -ENOMEM; 320 320 321 321 if (p->name[0]) 322 322 strlcpy(name, p->name, IFNAMSIZ); ··· 342 342 failed_free: 343 343 ip6_dev_free(dev); 344 344 failed: 345 - return NULL; 345 + return ERR_PTR(err); 346 346 } 347 347 348 348 /** ··· 356 356 * tunnel device is created and registered for use. 357 357 * 358 358 * Return: 359 - * matching tunnel or NULL 359 + * matching tunnel or error pointer 360 360 **/ 361 361 362 362 static struct ip6_tnl *ip6_tnl_locate(struct net *net, ··· 374 374 if (ipv6_addr_equal(local, &t->parms.laddr) && 375 375 ipv6_addr_equal(remote, &t->parms.raddr)) { 376 376 if (create) 377 - return NULL; 377 + return ERR_PTR(-EEXIST); 378 378 379 379 return t; 380 380 } 381 381 } 382 382 if (!create) 383 - return NULL; 383 + return ERR_PTR(-ENODEV); 384 384 return ip6_tnl_create(net, p); 385 385 } 386 386 ··· 1414 1414 } 1415 1415 ip6_tnl_parm_from_user(&p1, &p); 1416 1416 t = ip6_tnl_locate(net, &p1, 0); 1417 - if (t == NULL) 1417 + if (IS_ERR(t)) 1418 1418 t = netdev_priv(dev); 1419 1419 } else { 1420 1420 memset(&p, 0, sizeof(p)); ··· 1439 1439 ip6_tnl_parm_from_user(&p1, &p); 1440 1440 t = ip6_tnl_locate(net, &p1, cmd == SIOCADDTUNNEL); 1441 1441 if (cmd == SIOCCHGTUNNEL) { 1442 - if (t != NULL) { 1442 + if (!IS_ERR(t)) { 1443 1443 if (t->dev != dev) { 1444 1444 err = -EEXIST; 1445 1445 break; ··· 1451 1451 else 1452 1452 err = ip6_tnl_update(t, &p1); 1453 1453 } 1454 - if (t) { 1454 + if (!IS_ERR(t)) { 1455 1455 err = 0; 1456 1456 ip6_tnl_parm_to_user(&p, &t->parms); 1457 1457 if (copy_to_user(ifr->ifr_ifru.ifru_data, &p, sizeof(p))) 1458 1458 err = -EFAULT; 1459 1459 
1460 - } else 1461 - err = (cmd == SIOCADDTUNNEL ? -ENOBUFS : -ENOENT); 1460 + } else { 1461 + err = PTR_ERR(t); 1462 + } 1462 1463 break; 1463 1464 case SIOCDELTUNNEL: 1464 1465 err = -EPERM; ··· 1473 1472 err = -ENOENT; 1474 1473 ip6_tnl_parm_from_user(&p1, &p); 1475 1474 t = ip6_tnl_locate(net, &p1, 0); 1476 - if (t == NULL) 1475 + if (IS_ERR(t)) 1477 1476 break; 1478 1477 err = -EPERM; 1479 1478 if (t->dev == ip6n->fb_tnl_dev) ··· 1667 1666 struct nlattr *tb[], struct nlattr *data[]) 1668 1667 { 1669 1668 struct net *net = dev_net(dev); 1670 - struct ip6_tnl *nt; 1669 + struct ip6_tnl *nt, *t; 1671 1670 1672 1671 nt = netdev_priv(dev); 1673 1672 ip6_tnl_netlink_parms(data, &nt->parms); 1674 1673 1675 - if (ip6_tnl_locate(net, &nt->parms, 0)) 1674 + t = ip6_tnl_locate(net, &nt->parms, 0); 1675 + if (!IS_ERR(t)) 1676 1676 return -EEXIST; 1677 1677 1678 1678 return ip6_tnl_create2(dev); ··· 1693 1691 ip6_tnl_netlink_parms(data, &p); 1694 1692 1695 1693 t = ip6_tnl_locate(net, &p, 0); 1696 - 1697 - if (t) { 1694 + if (!IS_ERR(t)) { 1698 1695 if (t->dev != dev) 1699 1696 return -EEXIST; 1700 1697 } else
+3 -5
net/ipv6/udp_offload.c
··· 112 112 fptr = (struct frag_hdr *)(skb_network_header(skb) + unfrag_ip6hlen); 113 113 fptr->nexthdr = nexthdr; 114 114 fptr->reserved = 0; 115 - if (skb_shinfo(skb)->ip6_frag_id) 116 - fptr->identification = skb_shinfo(skb)->ip6_frag_id; 117 - else 118 - ipv6_select_ident(fptr, 119 - (struct rt6_info *)skb_dst(skb)); 115 + if (!skb_shinfo(skb)->ip6_frag_id) 116 + ipv6_proxy_select_ident(skb); 117 + fptr->identification = skb_shinfo(skb)->ip6_frag_id; 120 118 121 119 /* Fragment the skb. ipv6 header and the remaining fields of the 122 120 * fragment header are updated in ipv6_gso_segment()
+1 -1
net/ipv6/xfrm6_output.c
··· 114 114 return err; 115 115 116 116 skb->ignore_df = 1; 117 + skb->protocol = htons(ETH_P_IPV6); 117 118 118 119 return x->outer_mode->output2(x, skb); 119 120 } ··· 123 122 int xfrm6_output_finish(struct sk_buff *skb) 124 123 { 125 124 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 126 - skb->protocol = htons(ETH_P_IPV6); 127 125 128 126 #ifdef CONFIG_NETFILTER 129 127 IP6CB(skb)->flags |= IP6SKB_XFRM_TRANSFORMED;
+1
net/ipv6/xfrm6_policy.c
··· 200 200 201 201 #if IS_ENABLED(CONFIG_IPV6_MIP6) 202 202 case IPPROTO_MH: 203 + offset += ipv6_optlen(exthdr); 203 204 if (!onlyproto && pskb_may_pull(skb, nh + offset + 3 - skb->data)) { 204 205 struct ip6_mh *mh; 205 206
+18 -6
net/mac80211/ieee80211_i.h
··· 58 58 #define IEEE80211_UNSET_POWER_LEVEL INT_MIN 59 59 60 60 /* 61 - * Some APs experience problems when working with U-APSD. Decrease the 62 - * probability of that happening by using legacy mode for all ACs but VO. 63 - * The AP that caused us trouble was a Cisco 4410N. It ignores our 64 - * setting, and always treats non-VO ACs as legacy. 61 + * Some APs experience problems when working with U-APSD. Decreasing the 62 + * probability of that happening by using legacy mode for all ACs but VO isn't 63 + * enough. 64 + * 65 + * Cisco 4410N originally forced us to enable VO by default only because it 66 + * treated non-VO ACs as legacy. 67 + * 68 + * However some APs (notably Netgear R7000) silently reclassify packets to 69 + * different ACs. Since u-APSD ACs require trigger frames for frame retrieval 70 + * clients would never see some frames (e.g. ARP responses) or would fetch them 71 + * accidentally after a long time. 72 + * 73 + * It makes little sense to enable u-APSD queues by default because it needs 74 + * userspace applications to be aware of it to actually take advantage of the 75 + * possible additional powersavings. Implicitly depending on driver autotrigger 76 + * frame support doesn't make much sense. 65 77 */ 66 - #define IEEE80211_DEFAULT_UAPSD_QUEUES \ 67 - IEEE80211_WMM_IE_STA_QOSINFO_AC_VO 78 + #define IEEE80211_DEFAULT_UAPSD_QUEUES 0 68 79 69 80 #define IEEE80211_DEFAULT_MAX_SP_LEN \ 70 81 IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL ··· 464 453 unsigned int flags; 465 454 466 455 bool csa_waiting_bcn; 456 + bool csa_ignored_same_chan; 467 457 468 458 bool beacon_crc_valid; 469 459 u32 beacon_crc;
+15 -1
net/mac80211/mlme.c
··· 1150 1150 return; 1151 1151 } 1152 1152 1153 + if (cfg80211_chandef_identical(&csa_ie.chandef, 1154 + &sdata->vif.bss_conf.chandef)) { 1155 + if (ifmgd->csa_ignored_same_chan) 1156 + return; 1157 + sdata_info(sdata, 1158 + "AP %pM tries to chanswitch to same channel, ignore\n", 1159 + ifmgd->associated->bssid); 1160 + ifmgd->csa_ignored_same_chan = true; 1161 + return; 1162 + } 1163 + 1153 1164 mutex_lock(&local->mtx); 1154 1165 mutex_lock(&local->chanctx_mtx); 1155 1166 conf = rcu_dereference_protected(sdata->vif.chanctx_conf, ··· 1221 1210 sdata->vif.csa_active = true; 1222 1211 sdata->csa_chandef = csa_ie.chandef; 1223 1212 sdata->csa_block_tx = csa_ie.mode; 1213 + ifmgd->csa_ignored_same_chan = false; 1224 1214 1225 1215 if (sdata->csa_block_tx) 1226 1216 ieee80211_stop_vif_queues(local, sdata, ··· 2102 2090 2103 2091 sdata->vif.csa_active = false; 2104 2092 ifmgd->csa_waiting_bcn = false; 2093 + ifmgd->csa_ignored_same_chan = false; 2105 2094 if (sdata->csa_block_tx) { 2106 2095 ieee80211_wake_vif_queues(local, sdata, 2107 2096 IEEE80211_QUEUE_STOP_REASON_CSA); ··· 3217 3204 (1ULL << WLAN_EID_CHANNEL_SWITCH) | 3218 3205 (1ULL << WLAN_EID_PWR_CONSTRAINT) | 3219 3206 (1ULL << WLAN_EID_HT_CAPABILITY) | 3220 - (1ULL << WLAN_EID_HT_OPERATION); 3207 + (1ULL << WLAN_EID_HT_OPERATION) | 3208 + (1ULL << WLAN_EID_EXT_CHANSWITCH_ANN); 3221 3209 3222 3210 static void ieee80211_rx_mgmt_beacon(struct ieee80211_sub_if_data *sdata, 3223 3211 struct ieee80211_mgmt *mgmt, size_t len,
+3
net/mac80211/rx.c
··· 2214 2214 hdr = (struct ieee80211_hdr *) skb->data; 2215 2215 mesh_hdr = (struct ieee80211s_hdr *) (skb->data + hdrlen); 2216 2216 2217 + if (ieee80211_drop_unencrypted(rx, hdr->frame_control)) 2218 + return RX_DROP_MONITOR; 2219 + 2217 2220 /* frame is in RMC, don't forward */ 2218 2221 if (ieee80211_is_data(hdr->frame_control) && 2219 2222 is_multicast_ether_addr(hdr->addr1) &&
+1 -1
net/mac80211/util.c
··· 3178 3178 wdev_iter = &sdata_iter->wdev; 3179 3179 3180 3180 if (sdata_iter == sdata || 3181 - rcu_access_pointer(sdata_iter->vif.chanctx_conf) == NULL || 3181 + !ieee80211_sdata_running(sdata_iter) || 3182 3182 local->hw.wiphy->software_iftypes & BIT(wdev_iter->iftype)) 3183 3183 continue; 3184 3184
+22 -18
net/rds/iw_rdma.c
··· 88 88 int *unpinned); 89 89 static void rds_iw_destroy_fastreg(struct rds_iw_mr_pool *pool, struct rds_iw_mr *ibmr); 90 90 91 - static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwdev, struct rdma_cm_id **cm_id) 91 + static int rds_iw_get_device(struct sockaddr_in *src, struct sockaddr_in *dst, 92 + struct rds_iw_device **rds_iwdev, 93 + struct rdma_cm_id **cm_id) 92 94 { 93 95 struct rds_iw_device *iwdev; 94 96 struct rds_iw_cm_id *i_cm_id; ··· 114 112 src_addr->sin_port, 115 113 dst_addr->sin_addr.s_addr, 116 114 dst_addr->sin_port, 117 - rs->rs_bound_addr, 118 - rs->rs_bound_port, 119 - rs->rs_conn_addr, 120 - rs->rs_conn_port); 115 + src->sin_addr.s_addr, 116 + src->sin_port, 117 + dst->sin_addr.s_addr, 118 + dst->sin_port); 121 119 #ifdef WORKING_TUPLE_DETECTION 122 - if (src_addr->sin_addr.s_addr == rs->rs_bound_addr && 123 - src_addr->sin_port == rs->rs_bound_port && 124 - dst_addr->sin_addr.s_addr == rs->rs_conn_addr && 125 - dst_addr->sin_port == rs->rs_conn_port) { 120 + if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr && 121 + src_addr->sin_port == src->sin_port && 122 + dst_addr->sin_addr.s_addr == dst->sin_addr.s_addr && 123 + dst_addr->sin_port == dst->sin_port) { 126 124 #else 127 125 /* FIXME - needs to compare the local and remote 128 126 * ipaddr/port tuple, but the ipaddr is the only ··· 130 128 * zero'ed. It doesn't appear to be properly populated 131 129 * during connection setup... 
132 130 */ 133 - if (src_addr->sin_addr.s_addr == rs->rs_bound_addr) { 131 + if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr) { 134 132 #endif 135 133 spin_unlock_irq(&iwdev->spinlock); 136 134 *rds_iwdev = iwdev; ··· 182 180 { 183 181 struct sockaddr_in *src_addr, *dst_addr; 184 182 struct rds_iw_device *rds_iwdev_old; 185 - struct rds_sock rs; 186 183 struct rdma_cm_id *pcm_id; 187 184 int rc; 188 185 189 186 src_addr = (struct sockaddr_in *)&cm_id->route.addr.src_addr; 190 187 dst_addr = (struct sockaddr_in *)&cm_id->route.addr.dst_addr; 191 188 192 - rs.rs_bound_addr = src_addr->sin_addr.s_addr; 193 - rs.rs_bound_port = src_addr->sin_port; 194 - rs.rs_conn_addr = dst_addr->sin_addr.s_addr; 195 - rs.rs_conn_port = dst_addr->sin_port; 196 - 197 - rc = rds_iw_get_device(&rs, &rds_iwdev_old, &pcm_id); 189 + rc = rds_iw_get_device(src_addr, dst_addr, &rds_iwdev_old, &pcm_id); 198 190 if (rc) 199 191 rds_iw_remove_cm_id(rds_iwdev, cm_id); 200 192 ··· 594 598 struct rds_iw_device *rds_iwdev; 595 599 struct rds_iw_mr *ibmr = NULL; 596 600 struct rdma_cm_id *cm_id; 601 + struct sockaddr_in src = { 602 + .sin_addr.s_addr = rs->rs_bound_addr, 603 + .sin_port = rs->rs_bound_port, 604 + }; 605 + struct sockaddr_in dst = { 606 + .sin_addr.s_addr = rs->rs_conn_addr, 607 + .sin_port = rs->rs_conn_port, 608 + }; 597 609 int ret; 598 610 599 - ret = rds_iw_get_device(rs, &rds_iwdev, &cm_id); 611 + ret = rds_iw_get_device(&src, &dst, &rds_iwdev, &cm_id); 600 612 if (ret || !cm_id) { 601 613 ret = -ENODEV; 602 614 goto out;
+1 -1
net/rxrpc/ar-recvmsg.c
··· 87 87 if (!skb) { 88 88 /* nothing remains on the queue */ 89 89 if (copied && 90 - (msg->msg_flags & MSG_PEEK || timeo == 0)) 90 + (flags & MSG_PEEK || timeo == 0)) 91 91 goto out; 92 92 93 93 /* wait for a message to turn up */
+28 -8
net/sched/act_bpf.c
··· 25 25 struct tcf_result *res) 26 26 { 27 27 struct tcf_bpf *b = a->priv; 28 - int action; 29 - int filter_res; 28 + int action, filter_res; 30 29 31 30 spin_lock(&b->tcf_lock); 31 + 32 32 b->tcf_tm.lastuse = jiffies; 33 33 bstats_update(&b->tcf_bstats, skb); 34 - action = b->tcf_action; 35 34 36 35 filter_res = BPF_PROG_RUN(b->filter, skb); 37 - if (filter_res == 0) { 38 - /* Return code 0 from the BPF program 39 - * is being interpreted as a drop here. 40 - */ 41 - action = TC_ACT_SHOT; 36 + 37 + /* A BPF program may overwrite the default action opcode. 38 + * Similarly as in cls_bpf, if filter_res == -1 we use the 39 + * default action specified from tc. 40 + * 41 + * In case a different well-known TC_ACT opcode has been 42 + * returned, it will overwrite the default one. 43 + * 44 + * For everything else that is unknown, TC_ACT_UNSPEC is 45 + * returned. 46 + */ 47 + switch (filter_res) { 48 + case TC_ACT_PIPE: 49 + case TC_ACT_RECLASSIFY: 50 + case TC_ACT_OK: 51 + action = filter_res; 52 + break; 53 + case TC_ACT_SHOT: 54 + action = filter_res; 42 55 b->tcf_qstats.drops++; 56 + break; 57 + case TC_ACT_UNSPEC: 58 + action = b->tcf_action; 59 + break; 60 + default: 61 + action = TC_ACT_UNSPEC; 62 + break; 43 63 } 44 64 45 65 spin_unlock(&b->tcf_lock);
+4 -1
net/sched/cls_u32.c
··· 78 78 struct tc_u_common *tp_c; 79 79 int refcnt; 80 80 unsigned int divisor; 81 - struct tc_u_knode __rcu *ht[1]; 82 81 struct rcu_head rcu; 82 + /* The 'ht' field MUST be the last field in structure to allow for 83 + * more entries allocated at end of structure. 84 + */ 85 + struct tc_u_knode __rcu *ht[1]; 83 86 }; 84 87 85 88 struct tc_u_common {
+4
net/socket.c
··· 1650 1650 1651 1651 if (len > INT_MAX) 1652 1652 len = INT_MAX; 1653 + if (unlikely(!access_ok(VERIFY_READ, buff, len))) 1654 + return -EFAULT; 1653 1655 sock = sockfd_lookup_light(fd, &err, &fput_needed); 1654 1656 if (!sock) 1655 1657 goto out; ··· 1710 1708 1711 1709 if (size > INT_MAX) 1712 1710 size = INT_MAX; 1711 + if (unlikely(!access_ok(VERIFY_WRITE, ubuf, size))) 1712 + return -EFAULT; 1713 1713 sock = sockfd_lookup_light(fd, &err, &fput_needed); 1714 1714 if (!sock) 1715 1715 goto out;
+10
net/wireless/nl80211.c
··· 4400 4400 if (parse_station_flags(info, dev->ieee80211_ptr->iftype, &params)) 4401 4401 return -EINVAL; 4402 4402 4403 + /* HT/VHT requires QoS, but if we don't have that just ignore HT/VHT 4404 + * as userspace might just pass through the capabilities from the IEs 4405 + * directly, rather than enforcing this restriction and returning an 4406 + * error in this case. 4407 + */ 4408 + if (!(params.sta_flags_set & BIT(NL80211_STA_FLAG_WME))) { 4409 + params.ht_capa = NULL; 4410 + params.vht_capa = NULL; 4411 + } 4412 + 4403 4413 /* When you run into this, adjust the code below for the new flag */ 4404 4414 BUILD_BUG_ON(NL80211_STA_FLAG_MAX != 7); 4405 4415
+6 -6
net/xfrm/xfrm_policy.c
··· 2269 2269 * have the xfrm_state's. We need to wait for KM to 2270 2270 * negotiate new SA's or bail out with error.*/ 2271 2271 if (net->xfrm.sysctl_larval_drop) { 2272 - dst_release(dst); 2273 - xfrm_pols_put(pols, drop_pols); 2274 2272 XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTNOSTATES); 2275 - 2276 - return ERR_PTR(-EREMOTE); 2273 + err = -EREMOTE; 2274 + goto error; 2277 2275 } 2278 2276 2279 2277 err = -EAGAIN; ··· 2322 2324 error: 2323 2325 dst_release(dst); 2324 2326 dropdst: 2325 - dst_release(dst_orig); 2327 + if (!(flags & XFRM_LOOKUP_KEEP_DST_REF)) 2328 + dst_release(dst_orig); 2326 2329 xfrm_pols_put(pols, drop_pols); 2327 2330 return ERR_PTR(err); 2328 2331 } ··· 2337 2338 struct sock *sk, int flags) 2338 2339 { 2339 2340 struct dst_entry *dst = xfrm_lookup(net, dst_orig, fl, sk, 2340 - flags | XFRM_LOOKUP_QUEUE); 2341 + flags | XFRM_LOOKUP_QUEUE | 2342 + XFRM_LOOKUP_KEEP_DST_REF); 2341 2343 2342 2344 if (IS_ERR(dst) && PTR_ERR(dst) == -EREMOTE) 2343 2345 return make_blackhole(net, dst_orig->ops->family, dst_orig);
+4
sound/core/control.c
··· 1170 1170 1171 1171 if (info->count < 1) 1172 1172 return -EINVAL; 1173 + if (!*info->id.name) 1174 + return -EINVAL; 1175 + if (strnlen(info->id.name, sizeof(info->id.name)) >= sizeof(info->id.name)) 1176 + return -EINVAL; 1173 1177 access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE : 1174 1178 (info->access & (SNDRV_CTL_ELEM_ACCESS_READWRITE| 1175 1179 SNDRV_CTL_ELEM_ACCESS_INACTIVE|
+9 -9
sound/firewire/dice/dice-interface.h
··· 299 299 #define RX_ISOCHRONOUS 0x008 300 300 301 301 /* 302 + * Index of first quadlet to be interpreted; read/write. If > 0, that many 303 + * quadlets at the beginning of each data block will be ignored, and all the 304 + * audio and MIDI quadlets will follow. 305 + */ 306 + #define RX_SEQ_START 0x00c 307 + 308 + /* 302 309 * The number of audio channels; read-only. There will be one quadlet per 303 310 * channel. 304 311 */ 305 - #define RX_NUMBER_AUDIO 0x00c 312 + #define RX_NUMBER_AUDIO 0x010 306 313 307 314 /* 308 315 * The number of MIDI ports, 0-8; read-only. If > 0, there will be one 309 316 * additional quadlet in each data block, following the audio quadlets. 310 317 */ 311 - #define RX_NUMBER_MIDI 0x010 312 - 313 - /* 314 - * Index of first quadlet to be interpreted; read/write. If > 0, that many 315 - * quadlets at the beginning of each data block will be ignored, and all the 316 - * audio and MIDI quadlets will follow. 317 - */ 318 - #define RX_SEQ_START 0x014 318 + #define RX_NUMBER_MIDI 0x014 319 319 320 320 /* 321 321 * Names of all audio channels; read-only. Quadlets are byte-swapped. Names
+2 -2
sound/firewire/dice/dice-proc.c
··· 99 99 } tx; 100 100 struct { 101 101 u32 iso; 102 + u32 seq_start; 102 103 u32 number_audio; 103 104 u32 number_midi; 104 - u32 seq_start; 105 105 char names[RX_NAMES_SIZE]; 106 106 u32 ac3_caps; 107 107 u32 ac3_enable; ··· 204 204 break; 205 205 snd_iprintf(buffer, "rx %u:\n", stream); 206 206 snd_iprintf(buffer, " iso channel: %d\n", (int)buf.rx.iso); 207 + snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start); 207 208 snd_iprintf(buffer, " audio channels: %u\n", 208 209 buf.rx.number_audio); 209 210 snd_iprintf(buffer, " midi ports: %u\n", buf.rx.number_midi); 210 - snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start); 211 211 if (quadlets >= 68) { 212 212 dice_proc_fixup_string(buf.rx.names, RX_NAMES_SIZE); 213 213 snd_iprintf(buffer, " names: %s\n", buf.rx.names);
+1 -2
sound/firewire/iso-resources.c
··· 26 26 int fw_iso_resources_init(struct fw_iso_resources *r, struct fw_unit *unit) 27 27 { 28 28 r->channels_mask = ~0uLL; 29 - r->unit = fw_unit_get(unit); 29 + r->unit = unit; 30 30 mutex_init(&r->mutex); 31 31 r->allocated = false; 32 32 ··· 42 42 { 43 43 WARN_ON(r->allocated); 44 44 mutex_destroy(&r->mutex); 45 - fw_unit_put(r->unit); 46 45 } 47 46 EXPORT_SYMBOL(fw_iso_resources_destroy); 48 47
+1 -1
sound/pci/hda/hda_controller.c
··· 1164 1164 } 1165 1165 } 1166 1166 1167 - if (!bus->no_response_fallback) 1167 + if (bus->no_response_fallback) 1168 1168 return -1; 1169 1169 1170 1170 if (!chip->polling_mode && chip->poll_count < 2) {
+39 -8
sound/pci/hda/hda_generic.c
··· 687 687 return val; 688 688 } 689 689 690 + /* is this a stereo widget or a stereo-to-mono mix? */ 691 + static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, int dir) 692 + { 693 + unsigned int wcaps = get_wcaps(codec, nid); 694 + hda_nid_t conn; 695 + 696 + if (wcaps & AC_WCAP_STEREO) 697 + return true; 698 + if (dir != HDA_INPUT || get_wcaps_type(wcaps) != AC_WID_AUD_MIX) 699 + return false; 700 + if (snd_hda_get_num_conns(codec, nid) != 1) 701 + return false; 702 + if (snd_hda_get_connections(codec, nid, &conn, 1) < 0) 703 + return false; 704 + return !!(get_wcaps(codec, conn) & AC_WCAP_STEREO); 705 + } 706 + 690 707 /* initialize the amp value (only at the first time) */ 691 708 static void init_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx) 692 709 { 693 710 unsigned int caps = query_amp_caps(codec, nid, dir); 694 711 int val = get_amp_val_to_activate(codec, nid, dir, caps, false); 695 - snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); 712 + 713 + if (is_stereo_amps(codec, nid, dir)) 714 + snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); 715 + else 716 + snd_hda_codec_amp_init(codec, nid, 0, dir, idx, 0xff, val); 717 + } 718 + 719 + /* update the amp, doing in stereo or mono depending on NID */ 720 + static int update_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx, 721 + unsigned int mask, unsigned int val) 722 + { 723 + if (is_stereo_amps(codec, nid, dir)) 724 + return snd_hda_codec_amp_stereo(codec, nid, dir, idx, 725 + mask, val); 726 + else 727 + return snd_hda_codec_amp_update(codec, nid, 0, dir, idx, 728 + mask, val); 696 729 } 697 730 698 731 /* calculate amp value mask we can modify; ··· 765 732 return; 766 733 767 734 val &= mask; 768 - snd_hda_codec_amp_stereo(codec, nid, dir, idx, mask, val); 735 + update_amp(codec, nid, dir, idx, mask, val); 769 736 } 770 737 771 738 static void activate_amp_out(struct hda_codec *codec, struct nid_path *path, ··· 4457 4424 has_amp = nid_has_mute(codec, mix, HDA_INPUT); 4458 4425 for (i = 0; i < nums; i++) { 4459 4426 if (has_amp) 4460 - snd_hda_codec_amp_stereo(codec, mix, 4461 - HDA_INPUT, i, 4462 - 0xff, HDA_AMP_MUTE); 4427 + update_amp(codec, mix, HDA_INPUT, i, 4428 + 0xff, HDA_AMP_MUTE); 4463 4429 else if (nid_has_volume(codec, conn[i], HDA_OUTPUT)) 4464 - snd_hda_codec_amp_stereo(codec, conn[i], 4465 - HDA_OUTPUT, 0, 4466 - 0xff, HDA_AMP_MUTE); 4430 + update_amp(codec, conn[i], HDA_OUTPUT, 0, 4431 + 0xff, HDA_AMP_MUTE); 4467 4432 } 4468 4433 } 4469 4434
+30 -8
sound/pci/hda/hda_proc.c
··· 134 134 (caps & AC_AMPCAP_MUTE) >> AC_AMPCAP_MUTE_SHIFT); 135 135 } 136 136 137 + /* is this a stereo widget or a stereo-to-mono mix? */ 138 + static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, 139 + int dir, unsigned int wcaps, int indices) 140 + { 141 + hda_nid_t conn; 142 + 143 + if (wcaps & AC_WCAP_STEREO) 144 + return true; 145 + /* check for a stereo-to-mono mix; it must be: 146 + * only a single connection, only for input, and only a mixer widget 147 + */ 148 + if (indices != 1 || dir != HDA_INPUT || 149 + get_wcaps_type(wcaps) != AC_WID_AUD_MIX) 150 + return false; 151 + 152 + if (snd_hda_get_raw_connections(codec, nid, &conn, 1) < 0) 153 + return false; 154 + /* the connection source is a stereo? */ 155 + wcaps = snd_hda_param_read(codec, conn, AC_PAR_AUDIO_WIDGET_CAP); 156 + return !!(wcaps & AC_WCAP_STEREO); 157 + } 158 + 137 159 static void print_amp_vals(struct snd_info_buffer *buffer, 138 160 struct hda_codec *codec, hda_nid_t nid, 139 - int dir, int stereo, int indices) 161 + int dir, unsigned int wcaps, int indices) 140 162 { 141 163 unsigned int val; 164 + bool stereo; 142 165 int i; 166 + 167 + stereo = is_stereo_amps(codec, nid, dir, wcaps, indices); 143 168 144 169 dir = dir == HDA_OUTPUT ? AC_AMP_GET_OUTPUT : AC_AMP_GET_INPUT; 145 170 for (i = 0; i < indices; i++) { ··· 782 757 (codec->single_adc_amp && 783 758 wid_type == AC_WID_AUD_IN)) 784 759 print_amp_vals(buffer, codec, nid, HDA_INPUT, 785 - wid_caps & AC_WCAP_STEREO, 786 - 1); 760 + wid_caps, 1); 787 761 else 788 762 print_amp_vals(buffer, codec, nid, HDA_INPUT, 789 - wid_caps & AC_WCAP_STEREO, 790 - conn_len); 763 + wid_caps, conn_len); 791 764 } 792 765 if (wid_caps & AC_WCAP_OUT_AMP) { 793 766 snd_iprintf(buffer, " Amp-Out caps: "); ··· 794 771 if (wid_type == AC_WID_PIN && 795 772 codec->pin_amp_workaround) 796 773 print_amp_vals(buffer, codec, nid, HDA_OUTPUT, 797 - wid_caps & AC_WCAP_STEREO, 798 - conn_len); 774 + wid_caps, conn_len); 799 775 else 800 776 print_amp_vals(buffer, codec, nid, HDA_OUTPUT, 801 - wid_caps & AC_WCAP_STEREO, 1); 777 + wid_caps, 1); 802 778 } 803 779 804 780 switch (wid_type) {
+2
sound/pci/hda/patch_cirrus.c
··· 393 393 SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81), 394 394 SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122), 395 395 SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101), 396 + SND_PCI_QUIRK(0x106b, 0x5600, "MacBookAir 5,2", CS420X_MBP81), 396 397 SND_PCI_QUIRK(0x106b, 0x5b00, "MacBookAir 4,2", CS420X_MBA42), 397 398 SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), 398 399 {} /* terminator */ ··· 585 584 return -ENOMEM; 586 585 587 586 spec->gen.automute_hook = cs_automute; 587 + codec->single_adc_amp = 1; 588 588 589 589 snd_hda_pick_fixup(codec, cs420x_models, cs420x_fixup_tbl, 590 590 cs420x_fixups);
+11
sound/pci/hda/patch_conexant.c
··· 223 223 CXT_PINCFG_LENOVO_TP410, 224 224 CXT_PINCFG_LEMOTE_A1004, 225 225 CXT_PINCFG_LEMOTE_A1205, 226 + CXT_PINCFG_COMPAQ_CQ60, 226 227 CXT_FIXUP_STEREO_DMIC, 227 228 CXT_FIXUP_INC_MIC_BOOST, 228 229 CXT_FIXUP_HEADPHONE_MIC_PIN, ··· 661 660 .type = HDA_FIXUP_PINS, 662 661 .v.pins = cxt_pincfg_lemote, 663 662 }, 663 + [CXT_PINCFG_COMPAQ_CQ60] = { 664 + .type = HDA_FIXUP_PINS, 665 + .v.pins = (const struct hda_pintbl[]) { 666 + /* 0x17 was falsely set up as a mic, it should be 0x1d */ 667 + { 0x17, 0x400001f0 }, 668 + { 0x1d, 0x97a70120 }, 669 + { } 670 + } 671 + }, 664 672 [CXT_FIXUP_STEREO_DMIC] = { 665 673 .type = HDA_FIXUP_FUNC, 666 674 .v.func = cxt_fixup_stereo_dmic, ··· 779 769 }; 780 770 781 771 static const struct snd_pci_quirk cxt5051_fixups[] = { 772 + SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60), 782 773 SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200), 783 774 {} 784 775 };
+2 -2
sound/soc/codecs/adav80x.c
··· 317 317 { 318 318 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 319 319 struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec); 320 - unsigned int deemph = ucontrol->value.enumerated.item[0]; 320 + unsigned int deemph = ucontrol->value.integer.value[0]; 321 321 322 322 if (deemph > 1) 323 323 return -EINVAL; ··· 333 333 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 334 334 struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec); 335 335 336 - ucontrol->value.enumerated.item[0] = adav80x->deemph; 336 + ucontrol->value.integer.value[0] = adav80x->deemph; 337 337 return 0; 338 338 }; 339 339
+2 -2
sound/soc/codecs/ak4641.c
··· 76 76 { 77 77 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 78 78 struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec); 79 - int deemph = ucontrol->value.enumerated.item[0]; 79 + int deemph = ucontrol->value.integer.value[0]; 80 80 81 81 if (deemph > 1) 82 82 return -EINVAL; ··· 92 92 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 93 93 struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec); 94 94 95 - ucontrol->value.enumerated.item[0] = ak4641->deemph; 95 + ucontrol->value.integer.value[0] = ak4641->deemph; 96 96 return 0; 97 97 }; 98 98
+22 -22
sound/soc/codecs/ak4671.c
··· 343 343 }; 344 344 345 345 static const struct snd_soc_dapm_route ak4671_intercon[] = { 346 - {"DAC Left", "NULL", "PMPLL"}, 347 - {"DAC Right", "NULL", "PMPLL"}, 348 - {"ADC Left", "NULL", "PMPLL"}, 349 - {"ADC Right", "NULL", "PMPLL"}, 346 + {"DAC Left", NULL, "PMPLL"}, 347 + {"DAC Right", NULL, "PMPLL"}, 348 + {"ADC Left", NULL, "PMPLL"}, 349 + {"ADC Right", NULL, "PMPLL"}, 350 350 351 351 /* Outputs */ 352 - {"LOUT1", "NULL", "LOUT1 Mixer"}, 353 - {"ROUT1", "NULL", "ROUT1 Mixer"}, 354 - {"LOUT2", "NULL", "LOUT2 Mix Amp"}, 355 - {"ROUT2", "NULL", "ROUT2 Mix Amp"}, 356 - {"LOUT3", "NULL", "LOUT3 Mixer"}, 357 - {"ROUT3", "NULL", "ROUT3 Mixer"}, 352 + {"LOUT1", NULL, "LOUT1 Mixer"}, 353 + {"ROUT1", NULL, "ROUT1 Mixer"}, 354 + {"LOUT2", NULL, "LOUT2 Mix Amp"}, 355 + {"ROUT2", NULL, "ROUT2 Mix Amp"}, 356 + {"LOUT3", NULL, "LOUT3 Mixer"}, 357 + {"ROUT3", NULL, "ROUT3 Mixer"}, 358 358 359 359 {"LOUT1 Mixer", "DACL", "DAC Left"}, 360 360 {"ROUT1 Mixer", "DACR", "DAC Right"}, 361 361 {"LOUT2 Mixer", "DACHL", "DAC Left"}, 362 362 {"ROUT2 Mixer", "DACHR", "DAC Right"}, 363 - {"LOUT2 Mix Amp", "NULL", "LOUT2 Mixer"}, 364 - {"ROUT2 Mix Amp", "NULL", "ROUT2 Mixer"}, 363 + {"LOUT2 Mix Amp", NULL, "LOUT2 Mixer"}, 364 + {"ROUT2 Mix Amp", NULL, "ROUT2 Mixer"}, 365 365 {"LOUT3 Mixer", "DACSL", "DAC Left"}, 366 366 {"ROUT3 Mixer", "DACSR", "DAC Right"}, 367 367 ··· 381 381 {"LIN2", NULL, "Mic Bias"}, 382 382 {"RIN2", NULL, "Mic Bias"}, 383 383 384 - {"ADC Left", "NULL", "LIN MUX"}, 385 - {"ADC Right", "NULL", "RIN MUX"}, 384 + {"ADC Left", NULL, "LIN MUX"}, 385 + {"ADC Right", NULL, "RIN MUX"}, 386 386 387 387 /* Analog Loops */ 388 - {"LIN1 Mixing Circuit", "NULL", "LIN1"}, 389 - {"RIN1 Mixing Circuit", "NULL", "RIN1"}, 390 - {"LIN2 Mixing Circuit", "NULL", "LIN2"}, 391 - {"RIN2 Mixing Circuit", "NULL", "RIN2"}, 392 - {"LIN3 Mixing Circuit", "NULL", "LIN3"}, 393 - {"RIN3 Mixing Circuit", "NULL", "RIN3"}, 394 - {"LIN4 Mixing Circuit", "NULL", "LIN4"}, 395 - {"RIN4 Mixing Circuit", "NULL", "RIN4"}, 388 + {"LIN1 Mixing Circuit", NULL, "LIN1"}, 389 + {"RIN1 Mixing Circuit", NULL, "RIN1"}, 390 + {"LIN2 Mixing Circuit", NULL, "LIN2"}, 391 + {"RIN2 Mixing Circuit", NULL, "RIN2"}, 392 + {"LIN3 Mixing Circuit", NULL, "LIN3"}, 393 + {"RIN3 Mixing Circuit", NULL, "RIN3"}, 394 + {"LIN4 Mixing Circuit", NULL, "LIN4"}, 395 + {"RIN4 Mixing Circuit", NULL, "RIN4"}, 396 396 397 397 {"LOUT1 Mixer", "LINL1", "LIN1 Mixing Circuit"}, 398 398 {"ROUT1 Mixer", "RINR1", "RIN1 Mixing Circuit"}
+2 -2
sound/soc/codecs/cs4271.c
··· 286 286 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 287 287 struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec); 288 288 289 - ucontrol->value.enumerated.item[0] = cs4271->deemph; 289 + ucontrol->value.integer.value[0] = cs4271->deemph; 290 290 return 0; 291 291 } 292 292 ··· 296 296 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 297 297 struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec); 298 298 299 - cs4271->deemph = ucontrol->value.enumerated.item[0]; 299 + cs4271->deemph = ucontrol->value.integer.value[0]; 300 300 return cs4271_set_deemph(codec); 301 301 } 302 302
+4 -4
sound/soc/codecs/da732x.c
··· 876 876 877 877 static const struct snd_soc_dapm_route da732x_dapm_routes[] = { 878 878 /* Inputs */ 879 - {"AUX1L PGA", "NULL", "AUX1L"}, 880 - {"AUX1R PGA", "NULL", "AUX1R"}, 879 + {"AUX1L PGA", NULL, "AUX1L"}, 880 + {"AUX1R PGA", NULL, "AUX1R"}, 881 881 {"MIC1 PGA", NULL, "MIC1"}, 882 - {"MIC2 PGA", "NULL", "MIC2"}, 883 - {"MIC3 PGA", "NULL", "MIC3"}, 882 + {"MIC2 PGA", NULL, "MIC2"}, 883 + {"MIC3 PGA", NULL, "MIC3"}, 884 884 885 885 /* Capture Path */ 886 886 {"ADC1 Left MUX", "MIC1", "MIC1 PGA"},
+2 -2
sound/soc/codecs/es8328.c
··· 120 120 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 121 121 struct es8328_priv *es8328 = snd_soc_codec_get_drvdata(codec); 122 122 123 - ucontrol->value.enumerated.item[0] = es8328->deemph; 123 + ucontrol->value.integer.value[0] = es8328->deemph; 124 124 return 0; 125 125 } 126 126 ··· 129 129 { 130 130 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 131 131 struct es8328_priv *es8328 = snd_soc_codec_get_drvdata(codec); 132 - int deemph = ucontrol->value.enumerated.item[0]; 132 + int deemph = ucontrol->value.integer.value[0]; 133 133 int ret; 134 134 135 135 if (deemph > 1)
+2 -2
sound/soc/codecs/pcm1681.c
··· 118 118 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 119 119 struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec); 120 120 121 - ucontrol->value.enumerated.item[0] = priv->deemph; 121 + ucontrol->value.integer.value[0] = priv->deemph; 122 122 123 123 return 0; 124 124 } ··· 129 129 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 130 130 struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec); 131 131 132 - priv->deemph = ucontrol->value.enumerated.item[0]; 132 + priv->deemph = ucontrol->value.integer.value[0]; 133 133 134 134 return pcm1681_set_deemph(codec); 135 135 }
+1 -1
sound/soc/codecs/rt286.c
··· 1198 1198 .ident = "Dell Dino", 1199 1199 .matches = { 1200 1200 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 1201 - DMI_MATCH(DMI_BOARD_NAME, "0144P8") 1201 + DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9343") 1202 1202 } 1203 1203 }, 1204 1204 { }
+1 -7
sound/soc/codecs/sgtl5000.c
··· 1151 1151 /* Enable VDDC charge pump */ 1152 1152 ana_pwr |= SGTL5000_VDDC_CHRGPMP_POWERUP; 1153 1153 } else if (vddio >= 3100 && vdda >= 3100) { 1154 - /* 1155 - * if vddio and vddd > 3.1v, 1156 - * charge pump should be clean before set ana_pwr 1157 - */ 1158 - snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER, 1159 - SGTL5000_VDDC_CHRGPMP_POWERUP, 0); 1160 - 1154 + ana_pwr &= ~SGTL5000_VDDC_CHRGPMP_POWERUP; 1161 1155 /* VDDC use VDDIO rail */ 1162 1156 lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD; 1163 1157 lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO <<
+2 -2
sound/soc/codecs/sn95031.c
··· 538 538 /* speaker map */ 539 539 { "IHFOUTL", NULL, "Speaker Rail"}, 540 540 { "IHFOUTR", NULL, "Speaker Rail"}, 541 - { "IHFOUTL", "NULL", "Speaker Left Playback"}, 542 - { "IHFOUTR", "NULL", "Speaker Right Playback"}, 541 + { "IHFOUTL", NULL, "Speaker Left Playback"}, 542 + { "IHFOUTR", NULL, "Speaker Right Playback"}, 543 543 { "Speaker Left Playback", NULL, "Speaker Left Filter"}, 544 544 { "Speaker Right Playback", NULL, "Speaker Right Filter"}, 545 545 { "Speaker Left Filter", NULL, "IHFDAC Left"},
+2 -2
sound/soc/codecs/tas5086.c
··· 281 281 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 282 282 struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); 283 283 284 - ucontrol->value.enumerated.item[0] = priv->deemph; 284 + ucontrol->value.integer.value[0] = priv->deemph; 285 285 286 286 return 0; 287 287 } ··· 292 292 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 293 293 struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); 294 294 295 - priv->deemph = ucontrol->value.enumerated.item[0]; 295 + priv->deemph = ucontrol->value.integer.value[0]; 296 296 297 297 return tas5086_set_deemph(codec); 298 298 }
+4 -4
sound/soc/codecs/wm2000.c
··· 610 610 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 611 611 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 612 612 613 - ucontrol->value.enumerated.item[0] = wm2000->anc_active; 613 + ucontrol->value.integer.value[0] = wm2000->anc_active; 614 614 615 615 return 0; 616 616 } ··· 620 620 { 621 621 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 622 622 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 623 - int anc_active = ucontrol->value.enumerated.item[0]; 623 + int anc_active = ucontrol->value.integer.value[0]; 624 624 int ret; 625 625 626 626 if (anc_active > 1) ··· 643 643 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 644 644 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 645 645 646 - ucontrol->value.enumerated.item[0] = wm2000->spk_ena; 646 + ucontrol->value.integer.value[0] = wm2000->spk_ena; 647 647 648 648 return 0; 649 649 } ··· 653 653 { 654 654 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 655 655 struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); 656 - int val = ucontrol->value.enumerated.item[0]; 656 + int val = ucontrol->value.integer.value[0]; 657 657 int ret; 658 658 659 659 if (val > 1)
+2 -2
sound/soc/codecs/wm8731.c
··· 125 125 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 126 126 struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec); 127 127 128 - ucontrol->value.enumerated.item[0] = wm8731->deemph; 128 + ucontrol->value.integer.value[0] = wm8731->deemph; 129 129 130 130 return 0; 131 131 } ··· 135 135 { 136 136 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 137 137 struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec); 138 - int deemph = ucontrol->value.enumerated.item[0]; 138 + int deemph = ucontrol->value.integer.value[0]; 139 139 int ret = 0; 140 140 141 141 if (deemph > 1)
+2 -2
sound/soc/codecs/wm8903.c
··· 442 442 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 443 443 struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec); 444 444 445 - ucontrol->value.enumerated.item[0] = wm8903->deemph; 445 + ucontrol->value.integer.value[0] = wm8903->deemph; 446 446 447 447 return 0; 448 448 } ··· 452 452 { 453 453 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 454 454 struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec); 455 - int deemph = ucontrol->value.enumerated.item[0]; 455 + int deemph = ucontrol->value.integer.value[0]; 456 456 int ret = 0; 457 457 458 458 if (deemph > 1)
+2 -2
sound/soc/codecs/wm8904.c
··· 525 525 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 526 526 struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec); 527 527 528 - ucontrol->value.enumerated.item[0] = wm8904->deemph; 528 + ucontrol->value.integer.value[0] = wm8904->deemph; 529 529 return 0; 530 530 } 531 531 ··· 534 534 { 535 535 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 536 536 struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec); 537 - int deemph = ucontrol->value.enumerated.item[0]; 537 + int deemph = ucontrol->value.integer.value[0]; 538 538 539 539 if (deemph > 1) 540 540 return -EINVAL;
+2 -2
sound/soc/codecs/wm8955.c
··· 393 393 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 394 394 struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec); 395 395 396 - ucontrol->value.enumerated.item[0] = wm8955->deemph; 396 + ucontrol->value.integer.value[0] = wm8955->deemph; 397 397 return 0; 398 398 } 399 399 ··· 402 402 { 403 403 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 404 404 struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec); 405 - int deemph = ucontrol->value.enumerated.item[0]; 405 + int deemph = ucontrol->value.integer.value[0]; 406 406 407 407 if (deemph > 1) 408 408 return -EINVAL;
+2 -2
sound/soc/codecs/wm8960.c
··· 184 184 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 185 185 struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec); 186 186 187 - ucontrol->value.enumerated.item[0] = wm8960->deemph; 187 + ucontrol->value.integer.value[0] = wm8960->deemph; 188 188 return 0; 189 189 } 190 190 ··· 193 193 { 194 194 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol); 195 195 struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec); 196 - int deemph = ucontrol->value.enumerated.item[0]; 196 + int deemph = ucontrol->value.integer.value[0]; 197 197 198 198 if (deemph > 1) 199 199 return -EINVAL;
+3 -3
sound/soc/codecs/wm9712.c
··· 180 180 struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol); 181 181 struct snd_soc_codec *codec = snd_soc_dapm_to_codec(dapm); 182 182 struct wm9712_priv *wm9712 = snd_soc_codec_get_drvdata(codec); 183 - unsigned int val = ucontrol->value.enumerated.item[0]; 183 + unsigned int val = ucontrol->value.integer.value[0]; 184 184 struct soc_mixer_control *mc = 185 185 (struct soc_mixer_control *)kcontrol->private_value; 186 186 unsigned int mixer, mask, shift, old; ··· 193 193 194 194 mutex_lock(&wm9712->lock); 195 195 old = wm9712->hp_mixer[mixer]; 196 - if (ucontrol->value.enumerated.item[0]) 196 + if (ucontrol->value.integer.value[0]) 197 197 wm9712->hp_mixer[mixer] |= mask; 198 198 else 199 199 wm9712->hp_mixer[mixer] &= ~mask; ··· 231 231 mixer = mc->shift >> 8; 232 232 shift = mc->shift & 0xff; 233 233 234 - ucontrol->value.enumerated.item[0] = 234 + ucontrol->value.integer.value[0] = 235 235 (wm9712->hp_mixer[mixer] >> shift) & 1; 236 236 237 237 return 0;
+3 -3
sound/soc/codecs/wm9713.c
··· 255 255 struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol); 256 256 struct snd_soc_codec *codec = snd_soc_dapm_to_codec(dapm); 257 257 struct wm9713_priv *wm9713 = snd_soc_codec_get_drvdata(codec); 258 - unsigned int val = ucontrol->value.enumerated.item[0]; 258 + unsigned int val = ucontrol->value.integer.value[0]; 259 259 struct soc_mixer_control *mc = 260 260 (struct soc_mixer_control *)kcontrol->private_value; 261 261 unsigned int mixer, mask, shift, old; ··· 268 268 269 269 mutex_lock(&wm9713->lock); 270 270 old = wm9713->hp_mixer[mixer]; 271 - if (ucontrol->value.enumerated.item[0]) 271 + if (ucontrol->value.integer.value[0]) 272 272 wm9713->hp_mixer[mixer] |= mask; 273 273 else 274 274 wm9713->hp_mixer[mixer] &= ~mask; ··· 306 306 mixer = mc->shift >> 8; 307 307 shift = mc->shift & 0xff; 308 308 309 - ucontrol->value.enumerated.item[0] = 309 + ucontrol->value.integer.value[0] = 310 310 (wm9713->hp_mixer[mixer] >> shift) & 1; 311 311 312 312 return 0;
+2 -2
sound/soc/fsl/fsl_spdif.c
··· 1049 1049 enum spdif_txrate index, bool round) 1050 1050 { 1051 1051 const u32 rate[] = { 32000, 44100, 48000, 96000, 192000 }; 1052 - bool is_sysclk = clk == spdif_priv->sysclk; 1052 + bool is_sysclk = clk_is_match(clk, spdif_priv->sysclk); 1053 1053 u64 rate_ideal, rate_actual, sub; 1054 1054 u32 sysclk_dfmin, sysclk_dfmax; 1055 1055 u32 txclk_df, sysclk_df, arate; ··· 1143 1143 spdif_priv->txclk_src[index], rate[index]); 1144 1144 dev_dbg(&pdev->dev, "use txclk df %d for %dHz sample rate\n", 1145 1145 spdif_priv->txclk_df[index], rate[index]); 1146 - if (spdif_priv->txclk[index] == spdif_priv->sysclk) 1146 + if (clk_is_match(spdif_priv->txclk[index], spdif_priv->sysclk)) 1147 1147 dev_dbg(&pdev->dev, "use sysclk df %d for %dHz sample rate\n", 1148 1148 spdif_priv->sysclk_df[index], rate[index]); 1149 1149 dev_dbg(&pdev->dev, "the best rate for %dHz sample rate is %dHz\n",
+2 -2
sound/soc/fsl/fsl_ssi.c
··· 603 603 factor = (div2 + 1) * (7 * psr + 1) * 2; 604 604 605 605 for (i = 0; i < 255; i++) { 606 - tmprate = freq * factor * (i + 2); 606 + tmprate = freq * factor * (i + 1); 607 607 608 608 if (baudclk_is_used) 609 609 clkrate = clk_get_rate(ssi_private->baudclk); ··· 1227 1227 ssi_private->dma_params_tx.addr = ssi_private->ssi_phys + CCSR_SSI_STX0; 1228 1228 ssi_private->dma_params_rx.addr = ssi_private->ssi_phys + CCSR_SSI_SRX0; 1229 1229 1230 - ret = !of_property_read_u32_array(np, "dmas", dmas, 4); 1230 + ret = of_property_read_u32_array(np, "dmas", dmas, 4); 1231 1231 if (ssi_private->use_dma && !ret && dmas[2] == IMX_DMATYPE_SSI_DUAL) { 1232 1232 ssi_private->use_dual_fifo = true; 1233 1233 /* When using dual fifo mode, we need to keep watermark
-3
sound/soc/intel/sst-haswell-dsp.c
··· 207 207 module = (void *)module + sizeof(*module) + module->mod_size; 208 208 } 209 209 210 - /* allocate scratch mem regions */ 211 - sst_block_alloc_scratch(dsp); 212 - 213 210 return 0; 214 211 } 215 212
+24 -8
sound/soc/intel/sst-haswell-ipc.c
··· 1732 1732 int sst_hsw_dsp_load(struct sst_hsw *hsw) 1733 1733 { 1734 1734 struct sst_dsp *dsp = hsw->dsp; 1735 + struct sst_fw *sst_fw, *t; 1735 1736 int ret; 1736 1737 1737 1738 dev_dbg(hsw->dev, "loading audio DSP...."); ··· 1749 1748 return ret; 1750 1749 } 1751 1750 1752 - ret = sst_fw_reload(hsw->sst_fw); 1753 - if (ret < 0) { 1754 - dev_err(hsw->dev, "error: SST FW reload failed\n"); 1755 - sst_dsp_dma_put_channel(dsp); 1756 - return -ENOMEM; 1751 + list_for_each_entry_safe_reverse(sst_fw, t, &dsp->fw_list, list) { 1752 + ret = sst_fw_reload(sst_fw); 1753 + if (ret < 0) { 1754 + dev_err(hsw->dev, "error: SST FW reload failed\n"); 1755 + sst_dsp_dma_put_channel(dsp); 1756 + return -ENOMEM; 1757 + } 1757 1758 } 1759 + ret = sst_block_alloc_scratch(hsw->dsp); 1760 + if (ret < 0) 1761 + return -EINVAL; 1758 1762 1759 1763 sst_dsp_dma_put_channel(dsp); 1760 1764 return 0; ··· 1815 1809 1816 1810 int sst_hsw_dsp_runtime_sleep(struct sst_hsw *hsw) 1817 1811 { 1818 - sst_fw_unload(hsw->sst_fw); 1819 - sst_block_free_scratch(hsw->dsp); 1812 + struct sst_fw *sst_fw, *t; 1813 + struct sst_dsp *dsp = hsw->dsp; 1814 + 1815 + list_for_each_entry_safe(sst_fw, t, &dsp->fw_list, list) { 1816 + sst_fw_unload(sst_fw); 1817 + } 1818 + sst_block_free_scratch(dsp); 1820 1819 1821 1820 hsw->boot_complete = false; 1822 1821 1823 - sst_dsp_sleep(hsw->dsp); 1822 + sst_dsp_sleep(dsp); 1824 1823 1825 1824 return 0; 1826 1825 } ··· 1953 1942 dev_err(dev, "error: failed to load firmware\n"); 1954 1943 goto fw_err; 1955 1944 } 1945 + 1946 + /* allocate scratch mem regions */ 1947 + ret = sst_block_alloc_scratch(hsw->dsp); 1948 + if (ret < 0) 1949 + goto boot_err; 1956 1950 1957 1951 /* wait for DSP boot completion */ 1958 1952 sst_dsp_boot(hsw->dsp);
+1 -1
sound/soc/kirkwood/kirkwood-i2s.c
··· 579 579 if (PTR_ERR(priv->extclk) == -EPROBE_DEFER) 580 580 return -EPROBE_DEFER; 581 581 } else { 582 - if (priv->extclk == priv->clk) { 582 + if (clk_is_match(priv->extclk, priv->clk)) { 583 583 devm_clk_put(&pdev->dev, priv->extclk); 584 584 priv->extclk = ERR_PTR(-EINVAL); 585 585 } else {
+30 -11
sound/soc/soc-core.c
··· 347 347 if (!buf) 348 348 return -ENOMEM; 349 349 350 + mutex_lock(&client_mutex); 351 + 350 352 list_for_each_entry(codec, &codec_list, list) { 351 353 len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n", 352 354 codec->component.name); ··· 359 357 break; 360 358 } 361 359 } 360 + 361 + mutex_unlock(&client_mutex); 362 362 363 363 if (ret >= 0) 364 364 ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret); ··· 386 382 if (!buf) 387 383 return -ENOMEM; 388 384 385 + mutex_lock(&client_mutex); 386 + 389 387 list_for_each_entry(component, &component_list, list) { 390 388 list_for_each_entry(dai, &component->dai_list, list) { 391 389 len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n", ··· 400 394 } 401 395 } 402 396 } 397 + 398 + mutex_unlock(&client_mutex); 403 399 404 400 ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret); 405 401 ··· 426 418 if (!buf) 427 419 return -ENOMEM; 428 420 421 + mutex_lock(&client_mutex); 422 + 429 423 list_for_each_entry(platform, &platform_list, list) { 430 424 len = snprintf(buf + ret, PAGE_SIZE - ret, "%s\n", 431 425 platform->component.name); ··· 438 428 break; 439 429 } 440 430 } 431 + 432 + mutex_unlock(&client_mutex); 441 433 442 434 ret = simple_read_from_buffer(user_buf, count, ppos, buf, ret); 443 435 ··· 848 836 { 849 837 struct snd_soc_component *component; 850 838 839 + lockdep_assert_held(&client_mutex); 840 + 851 841 list_for_each_entry(component, &component_list, list) { 852 842 if (of_node) { 853 843 if (component->dev->of_node == of_node) ··· 867 853 { 868 854 struct snd_soc_component *component; 869 855 struct snd_soc_dai *dai; 856 + 857 + lockdep_assert_held(&client_mutex); 870 858 871 859 /* Find CPU DAI from registered DAIs*/ 872 860 list_for_each_entry(component, &component_list, list) { ··· 1524 1508 struct snd_soc_codec *codec; 1525 1509 int ret, i, order; 1526 1510 1511 + mutex_lock(&client_mutex); 1527 1512 mutex_lock_nested(&card->mutex, SND_SOC_CARD_CLASS_INIT); 1528 1513 1529 1514 /* bind DAIs */ ··· 1679 1662 card->instantiated = 1; 1680 1663 snd_soc_dapm_sync(&card->dapm); 1681 1664 mutex_unlock(&card->mutex); 1665 + mutex_unlock(&client_mutex); 1682 1666 1683 1667 return 0; 1684 1668 ··· 1698 1680 1699 1681 base_error: 1700 1682 mutex_unlock(&card->mutex); 1683 + mutex_unlock(&client_mutex); 1701 1684 1702 1685 return ret; 1703 1686 } ··· 2732 2713 list_del(&component->list); 2733 2714 } 2734 2715 2735 - static void snd_soc_component_del(struct snd_soc_component *component) 2736 - { 2737 - mutex_lock(&client_mutex); 2738 - snd_soc_component_del_unlocked(component); 2739 - mutex_unlock(&client_mutex); 2740 - } 2741 - 2742 2716 int snd_soc_register_component(struct device *dev, 2743 2717 const struct snd_soc_component_driver *cmpnt_drv, 2744 2718 struct snd_soc_dai_driver *dai_drv, ··· 2779 2767 { 2780 2768 struct snd_soc_component *cmpnt; 2781 2769 2770 + mutex_lock(&client_mutex); 2782 2771 list_for_each_entry(cmpnt, &component_list, list) { 2783 2772 if (dev == cmpnt->dev && cmpnt->registered_as_component) 2784 2773 goto found; 2785 2774 } 2775 + mutex_unlock(&client_mutex); 2786 2776 return; 2787 2777 2788 2778 found: 2789 - snd_soc_component_del(cmpnt); 2779 + snd_soc_component_del_unlocked(cmpnt); 2780 + mutex_unlock(&client_mutex); 2790 2781 snd_soc_component_cleanup(cmpnt); 2791 2782 kfree(cmpnt); 2792 2783 } ··· 2897 2882 { 2898 2883 struct snd_soc_platform *platform; 2899 2884 2885 + mutex_lock(&client_mutex); 2900 2886 list_for_each_entry(platform, &platform_list, list) { 2901 - if (dev == platform->dev) 2887 + if (dev == platform->dev) { 2888 + mutex_unlock(&client_mutex); 2902 2889 return platform; 2890 + } 2903 2891 } 2892 + mutex_unlock(&client_mutex); 2904 2893 2905 2894 return NULL; 2906 2895 } ··· 3109 3090 { 3110 3091 struct snd_soc_codec *codec; 3111 3092 3093 + mutex_lock(&client_mutex); 3112 3094 list_for_each_entry(codec, &codec_list, list) { 3113 3095 if (dev == codec->dev) 3114 3096 goto found; 3115 3097 } 3098 + mutex_unlock(&client_mutex); 3116 3099 return; 3117 3100 3118 3101 found: 3119 - 3120 - mutex_lock(&client_mutex); 3121 3102 list_del(&codec->list); 3122 3103 snd_soc_component_del_unlocked(&codec->component); 3123 3104 mutex_unlock(&client_mutex);
+30
sound/usb/quirks-table.h
··· 1773 1773 } 1774 1774 } 1775 1775 }, 1776 + { 1777 + USB_DEVICE(0x0582, 0x0159), 1778 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 1779 + /* .vendor_name = "Roland", */ 1780 + /* .product_name = "UA-22", */ 1781 + .ifnum = QUIRK_ANY_INTERFACE, 1782 + .type = QUIRK_COMPOSITE, 1783 + .data = (const struct snd_usb_audio_quirk[]) { 1784 + { 1785 + .ifnum = 0, 1786 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1787 + }, 1788 + { 1789 + .ifnum = 1, 1790 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1791 + }, 1792 + { 1793 + .ifnum = 2, 1794 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 1795 + .data = & (const struct snd_usb_midi_endpoint_info) { 1796 + .out_cables = 0x0001, 1797 + .in_cables = 0x0001 1798 + } 1799 + }, 1800 + { 1801 + .ifnum = -1 1802 + } 1803 + } 1804 + } 1805 + }, 1776 1806 /* this catches most recent vendor-specific Roland devices */ 1777 1807 { 1778 1808 .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
+2
tools/perf/util/annotate.c
··· 30 30 31 31 static void ins__delete(struct ins_operands *ops) 32 32 { 33 + if (ops == NULL) 34 + return; 33 35 zfree(&ops->source.raw); 34 36 zfree(&ops->source.name); 35 37 zfree(&ops->target.raw);
+1 -1
tools/power/cpupower/Makefile
··· 209 209 210 210 $(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_MAJ) 211 211 $(ECHO) " CC " $@ 212 - $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -Wl,-rpath=./ -lrt -lpci -L$(OUTPUT) -o $@ 212 + $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -lrt -lpci -L$(OUTPUT) -o $@ 213 213 $(QUIET) $(STRIPCMD) $@ 214 214 215 215 $(OUTPUT)po/$(PACKAGE).pot: $(UTIL_SRC)
+9 -1
tools/testing/selftests/exec/execveat.c
··· 30 30 #ifdef __NR_execveat 31 31 return syscall(__NR_execveat, fd, path, argv, envp, flags); 32 32 #else 33 - errno = -ENOSYS; 33 + errno = ENOSYS; 34 34 return -1; 35 35 #endif 36 36 } ··· 233 233 int fd_script_ephemeral = open_or_die("script.ephemeral", O_RDONLY); 234 234 int fd_cloexec = open_or_die("execveat", O_RDONLY|O_CLOEXEC); 235 235 int fd_script_cloexec = open_or_die("script", O_RDONLY|O_CLOEXEC); 236 + 237 + /* Check if we have execveat at all, and bail early if not */ 238 + errno = 0; 239 + execveat_(-1, NULL, NULL, NULL, 0); 240 + if (errno == ENOSYS) { 241 + printf("[FAIL] ENOSYS calling execveat - no kernel support?\n"); 242 + return 1; 243 + } 236 244 237 245 /* Change file position to confirm it doesn't affect anything */ 238 246 lseek(fd, 10, SEEK_SET);
+8
virt/kvm/arm/vgic-v2.c
··· 72 72 { 73 73 if (!(lr_desc.state & LR_STATE_MASK)) 74 74 vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr |= (1ULL << lr); 75 + else 76 + vcpu->arch.vgic_cpu.vgic_v2.vgic_elrsr &= ~(1ULL << lr); 75 77 } 76 78 77 79 static u64 vgic_v2_get_elrsr(const struct kvm_vcpu *vcpu) ··· 84 82 static u64 vgic_v2_get_eisr(const struct kvm_vcpu *vcpu) 85 83 { 86 84 return vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr; 85 + } 86 + 87 + static void vgic_v2_clear_eisr(struct kvm_vcpu *vcpu) 88 + { 89 + vcpu->arch.vgic_cpu.vgic_v2.vgic_eisr = 0; 87 90 } 88 91 89 92 static u32 vgic_v2_get_interrupt_status(const struct kvm_vcpu *vcpu) ··· 155 148 .sync_lr_elrsr = vgic_v2_sync_lr_elrsr, 156 149 .get_elrsr = vgic_v2_get_elrsr, 157 150 .get_eisr = vgic_v2_get_eisr, 151 + .clear_eisr = vgic_v2_clear_eisr, 158 152 .get_interrupt_status = vgic_v2_get_interrupt_status, 159 153 .enable_underflow = vgic_v2_enable_underflow, 160 154 .disable_underflow = vgic_v2_disable_underflow,
+8
virt/kvm/arm/vgic-v3.c
··· 104 104 { 105 105 if (!(lr_desc.state & LR_STATE_MASK)) 106 106 vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr |= (1U << lr); 107 + else 108 + vcpu->arch.vgic_cpu.vgic_v3.vgic_elrsr &= ~(1U << lr); 107 109 } 108 110 109 111 static u64 vgic_v3_get_elrsr(const struct kvm_vcpu *vcpu) ··· 116 114 static u64 vgic_v3_get_eisr(const struct kvm_vcpu *vcpu) 117 115 { 118 116 return vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr; 117 + } 118 + 119 + static void vgic_v3_clear_eisr(struct kvm_vcpu *vcpu) 120 + { 121 + vcpu->arch.vgic_cpu.vgic_v3.vgic_eisr = 0; 119 122 } 120 123 121 124 static u32 vgic_v3_get_interrupt_status(const struct kvm_vcpu *vcpu) ··· 199 192 .sync_lr_elrsr = vgic_v3_sync_lr_elrsr, 200 193 .get_elrsr = vgic_v3_get_elrsr, 201 194 .get_eisr = vgic_v3_get_eisr, 195 + .clear_eisr = vgic_v3_clear_eisr, 202 196 .get_interrupt_status = vgic_v3_get_interrupt_status, 203 197 .enable_underflow = vgic_v3_enable_underflow, 204 198 .disable_underflow = vgic_v3_disable_underflow,
+20 -2
virt/kvm/arm/vgic.c
··· 883 883 return vgic_ops->get_eisr(vcpu); 884 884 } 885 885 886 + static inline void vgic_clear_eisr(struct kvm_vcpu *vcpu) 887 + { 888 + vgic_ops->clear_eisr(vcpu); 889 + } 890 + 886 891 static inline u32 vgic_get_interrupt_status(struct kvm_vcpu *vcpu) 887 892 { 888 893 return vgic_ops->get_interrupt_status(vcpu); ··· 927 922 vgic_set_lr(vcpu, lr_nr, vlr); 928 923 clear_bit(lr_nr, vgic_cpu->lr_used); 929 924 vgic_cpu->vgic_irq_lr_map[irq] = LR_EMPTY; 925 + vgic_sync_lr_elrsr(vcpu, lr_nr, vlr); 930 926 } 931 927 932 928 /* ··· 984 978 BUG_ON(!test_bit(lr, vgic_cpu->lr_used)); 985 979 vlr.state |= LR_STATE_PENDING; 986 980 vgic_set_lr(vcpu, lr, vlr); 981 + vgic_sync_lr_elrsr(vcpu, lr, vlr); 987 982 return true; 988 983 } 989 984 } ··· 1006 999 vlr.state |= LR_EOI_INT; 1007 1000 1008 1001 vgic_set_lr(vcpu, lr, vlr); 1002 + vgic_sync_lr_elrsr(vcpu, lr, vlr); 1009 1003 1010 1004 return true; 1011 1005 } ··· 1143 1135 1144 1136 if (status & INT_STATUS_UNDERFLOW) 1145 1137 vgic_disable_underflow(vcpu); 1138 + 1139 + /* 1140 + * In the next iterations of the vcpu loop, if we sync the vgic state 1141 + * after flushing it, but before entering the guest (this happens for 1142 + * pending signals and vmid rollovers), then make sure we don't pick 1143 + * up any old maintenance interrupts here. 1144 + */ 1145 + vgic_clear_eisr(vcpu); 1146 1146 1147 1147 return level_pending; 1148 1148 } ··· 1599 1583 * emulation. So check this here again. KVM_CREATE_DEVICE does 1600 1584 * the proper checks already. 1601 1585 */ 1602 - if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2) 1603 - return -ENODEV; 1586 + if (type == KVM_DEV_TYPE_ARM_VGIC_V2 && !vgic->can_emulate_gicv2) { 1587 + ret = -ENODEV; 1588 + goto out; 1589 + } 1604 1590 1605 1591 /* 1606 1592 * Any time a vcpu is run, vcpu_load is called which tries to grab the
+1
virt/kvm/kvm_main.c
··· 2492 2492 case KVM_CAP_SIGNAL_MSI: 2493 2493 #endif 2494 2494 #ifdef CONFIG_HAVE_KVM_IRQFD 2495 + case KVM_CAP_IRQFD: 2495 2496 case KVM_CAP_IRQFD_RESAMPLE: 2496 2497 #endif 2497 2498 case KVM_CAP_CHECK_EXTENSION_VM: