Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-intel-next-2015-03-13-merge' of git://anongit.freedesktop.org/drm-intel into drm-next

drm-intel-next-2015-03-13-rebased:
- EU count report param for gen9+ (Jeff McGee)
- piles of pll/wm/... fixes for chv, finally out of preliminary hw support
(Ville, Vijay)
- gen9 rps support from Akash
- more work to move towards atomic from Matt, Ander and others
- runtime pm support for skl (Damien)
- edp1.4 intermediate link clock support (Sonika)
- use frontbuffer tracking for fbc (Paulo)
- remove ilk rc6 (John Harrison)
- a bunch of smaller things and fixes all over

Includes backmerge because git rerere couldn't keep up any more.

* tag 'drm-intel-next-2015-03-13-merge' of git://anongit.freedesktop.org/drm-intel: (366 commits)
drm/i915: Make sure the primary plane is enabled before reading out the fb state
drm/i915: Update DRIVER_DATE to 20150313
drm/i915: Fix vmap_batch page iterator overrun
drm/i915: Export total subslice and EU counts
drm/i915: redefine WARN_ON_ONCE to include the condition
drm/i915/skl: Implement WaDisableHBR2
drm/i915: Remove the preliminary_hw_support shackles from CHV
drm/i915: Read CHV_PLL_DW8 from the correct offset
drm/i915: Rewrite IVB FDI bifurcation conflict checks
drm/i915: Rewrite some of the FDI lane checks
drm/i915/skl: Enable the RPS interrupts programming
drm/i915/skl: Enabling processing of Turbo interrupts
drm/i915/skl: Updated the i915_frequency_info debugfs function
drm/i915: Simplify the way BC bifurcation state consistency is kept
drm/i915/skl: Updated the act_freq_mhz_show sysfs function
drm/i915/skl: Updated the gen9_enable_rps function
drm/i915/skl: Updated the gen6_rps_limits function
drm/i915/skl: Restructured the gen6_set_rps_thresholds function
drm/i915/skl: Updated the gen6_set_rps function
drm/i915/skl: Updated the gen6_init_rps_frequencies function
...

+4320 -2471
+2
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
··· 22 22 - pclkN, clkN: Pairs of parent of input clock and input clock to the 23 23 devices in this power domain. Maximum of 4 pairs (N = 0 to 3) 24 24 are supported currently. 25 + - power-domains: phandle pointing to the parent power domain, for more details 26 + see Documentation/devicetree/bindings/power/power_domain.txt 25 27 26 28 Node of a device using power domains must have a power-domains property 27 29 defined with a phandle to respective power domain.
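The new optional `power-domains` parent link can be pictured with a minimal sketch, loosely modeled on the exynos4.dtsi hunk further down in this merge; the labels and register addresses below are illustrative placeholders, not values taken from the binding text:

```dts
/* Sketch only: node labels and addresses are placeholders. */
pd_lcd0: lcd0-power-domain@10023c80 {
	compatible = "samsung,exynos4210-pd";
	reg = <0x10023C80 0x20>;
	#power-domain-cells = <0>;
};

pd_tv: tv-power-domain@10023c20 {
	compatible = "samsung,exynos4210-pd";
	reg = <0x10023C20 0x20>;
	#power-domain-cells = <0>;
	/* New optional property: this domain is powered from pd_lcd0 */
	power-domains = <&pd_lcd0>;
};
```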
+4
Documentation/devicetree/bindings/arm/sti.txt
··· 13 13 Required root node property: 14 14 compatible = "st,stih407"; 15 15 16 + Boards with the ST STiH410 SoC shall have the following properties: 17 + Required root node property: 18 + compatible = "st,stih410"; 19 + 16 20 Boards with the ST STiH418 SoC shall have the following properties: 17 21 Required root node property: 18 22 compatible = "st,stih418";
+4 -1
Documentation/devicetree/bindings/net/apm-xgene-enet.txt
··· 4 4 APM X-Gene SoC. 5 5 6 6 Required properties for all the ethernet interfaces: 7 - - compatible: Should be "apm,xgene-enet" 7 + - compatible: Should state binding information from the following list, 8 + - "apm,xgene-enet": RGMII based 1G interface 9 + - "apm,xgene1-sgenet": SGMII based 1G interface 10 + - "apm,xgene1-xgenet": XFI based 10G interface 8 11 - reg: Address and length of the register set for the device. It contains the 9 12 information of registers in the same order as described by reg-names 10 13 - reg-names: Should contain the register set names
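As a hedged illustration of the expanded compatible list, an SGMII-based 1G port might be described as in the sketch below; every address, interrupt, and MAC value here is an assumed placeholder, not something specified by this patch:

```dts
/* Illustrative node only: reg, interrupts and MAC are made-up values. */
sgenet0: ethernet@1f210000 {
	compatible = "apm,xgene1-sgenet";	/* SGMII-based 1G interface */
	reg = <0x1f210000 0x10000>;
	reg-names = "enet_csr";
	interrupts = <0x0 0xa0 0x4>;
	local-mac-address = [00 11 22 33 44 55];
	phy-connection-type = "sgmii";
};
```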
+29
Documentation/devicetree/bindings/power/power_domain.txt
··· 19 19 providing multiple PM domains (e.g. power controllers), but can be any value 20 20 as specified by device tree binding documentation of particular provider. 21 21 22 + Optional properties: 23 + - power-domains : A phandle and PM domain specifier as defined by bindings of 24 + the power controller specified by phandle. 25 + Some power domains might be powered from another power domain (or have 26 + other hardware specific dependencies). For representing such dependency 27 + a standard PM domain consumer binding is used. When provided, all domains 28 + created by the given provider should be subdomains of the domain 29 + specified by this binding. More details about power domain specifier are 30 + available in the next section. 31 + 22 32 Example: 23 33 24 34 power: power-controller@12340000 { ··· 39 29 40 30 The node above defines a power controller that is a PM domain provider and 41 31 expects one cell as its phandle argument. 32 + 33 + Example 2: 34 + 35 + parent: power-controller@12340000 { 36 + compatible = "foo,power-controller"; 37 + reg = <0x12340000 0x1000>; 38 + #power-domain-cells = <1>; 39 + }; 40 + 41 + child: power-controller@12340000 { 42 + compatible = "foo,power-controller"; 43 + reg = <0x12341000 0x1000>; 44 + power-domains = <&parent 0>; 45 + #power-domain-cells = <1>; 46 + }; 47 + 48 + The nodes above define two power controllers: 'parent' and 'child'. 49 + Domains created by the 'child' power controller are subdomains of '0' power 50 + domain provided by the 'parent' power controller. 42 51 43 52 ==PM domain consumers== 44 53
+19
Documentation/devicetree/bindings/serial/axis,etraxfs-uart.txt
··· 1 + ETRAX FS UART 2 + 3 + Required properties: 4 + - compatible : "axis,etraxfs-uart" 5 + - reg: offset and length of the register set for the device. 6 + - interrupts: device interrupt 7 + 8 + Optional properties: 9 + - {dtr,dsr,ri,cd}-gpios: specify a GPIO for DTR/DSR/RI/CD 10 + line respectively. 11 + 12 + Example: 13 + 14 + serial@b0026000 { 15 + compatible = "axis,etraxfs-uart"; 16 + reg = <0xb0026000 0x1000>; 17 + interrupts = <68>; 18 + status = "disabled"; 19 + };
Documentation/devicetree/bindings/serial/of-serial.txt Documentation/devicetree/bindings/serial/8250.txt
+3
Documentation/devicetree/bindings/submitting-patches.txt
··· 12 12 13 13 devicetree@vger.kernel.org 14 14 15 + and Cc: the DT maintainers. Use scripts/get_maintainer.pl to identify 16 + all of the DT maintainers. 17 + 15 18 3) The Documentation/ portion of the patch should come in the series before 16 19 the code implementing the binding. 17 20
+2
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 20 20 ams AMS AG 21 21 amstaos AMS-Taos Inc. 22 22 apm Applied Micro Circuits Corporation (APM) 23 + arasan Arasan Chip Systems 23 24 arm ARM Ltd. 24 25 armadeus ARMadeus Systems SARL 25 26 asahi-kasei Asahi Kasei Corp. ··· 28 27 auo AU Optronics Corporation 29 28 avago Avago Technologies 30 29 avic Shanghai AVIC Optoelectronics Co., Ltd. 30 + axis Axis Communications AB 31 31 bosch Bosch Sensortec GmbH 32 32 brcm Broadcom Corporation 33 33 buffalo Buffalo, Inc.
+5
Documentation/devicetree/bindings/watchdog/atmel-wdt.txt
··· 26 26 - atmel,disable : Should be present if you want to disable the watchdog. 27 27 - atmel,idle-halt : Should be present if you want to stop the watchdog when 28 28 entering idle state. 29 + CAUTION: This property should be used with care, it actually makes the 30 + watchdog not counting when the CPU is in idle state, therefore the 31 + watchdog reset time depends on mean CPU usage and will not reset at all 32 + if the CPU stop working while it is in idle state, which is probably 33 + not what you want. 29 34 - atmel,dbg-halt : Should be present if you want to stop the watchdog when 30 35 entering debug state. 31 36
+14 -3
MAINTAINERS
··· 1030 1030 F: arch/arm/boot/dts/imx* 1031 1031 F: arch/arm/configs/imx*_defconfig 1032 1032 1033 + ARM/FREESCALE VYBRID ARM ARCHITECTURE 1034 + M: Shawn Guo <shawn.guo@linaro.org> 1035 + M: Sascha Hauer <kernel@pengutronix.de> 1036 + R: Stefan Agner <stefan@agner.ch> 1037 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1038 + S: Maintained 1039 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git 1040 + F: arch/arm/mach-imx/*vf610* 1041 + F: arch/arm/boot/dts/vf* 1042 + 1033 1043 ARM/GLOMATION GESBC9312SX MACHINE SUPPORT 1034 1044 M: Lennert Buytenhek <kernel@wantstofly.org> 1035 1045 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 1198 1188 M: Jason Cooper <jason@lakedaemon.net> 1199 1189 M: Andrew Lunn <andrew@lunn.ch> 1200 1190 M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> 1191 + M: Gregory Clement <gregory.clement@free-electrons.com> 1201 1192 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1202 1193 S: Maintained 1203 1194 F: arch/arm/mach-dove/ ··· 2118 2107 2119 2108 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE 2120 2109 M: Christian Daudt <bcm@fixthebug.org> 2121 - M: Matt Porter <mporter@linaro.org> 2122 2110 M: Florian Fainelli <f.fainelli@gmail.com> 2123 2111 L: bcm-kernel-feedback-list@broadcom.com 2124 2112 T: git git://github.com/broadcom/mach-bcm ··· 2379 2369 2380 2370 CAN NETWORK LAYER 2381 2371 M: Oliver Hartkopp <socketcan@hartkopp.net> 2372 + M: Marc Kleine-Budde <mkl@pengutronix.de> 2382 2373 L: linux-can@vger.kernel.org 2383 - W: http://gitorious.org/linux-can 2374 + W: https://github.com/linux-can 2384 2375 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git 2385 2376 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git 2386 2377 S: Maintained ··· 2397 2386 M: Wolfgang Grandegger <wg@grandegger.com> 2398 2387 M: Marc Kleine-Budde <mkl@pengutronix.de> 2399 2388 L: linux-can@vger.kernel.org 2400 - W: http://gitorious.org/linux-can 2389 + W: https://github.com/linux-can 2401 2390 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git 2402 2391 T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git 2403 2392 S: Maintained
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 0 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc3 4 + EXTRAVERSION = -rc4 5 5 NAME = Hurr durr I'ma sheep 6 6 7 7 # *DOCUMENTATION*
+1
arch/arm/Makefile
··· 150 150 machine-$(CONFIG_ARCH_CLPS711X) += clps711x 151 151 machine-$(CONFIG_ARCH_CNS3XXX) += cns3xxx 152 152 machine-$(CONFIG_ARCH_DAVINCI) += davinci 153 + machine-$(CONFIG_ARCH_DIGICOLOR) += digicolor 153 154 machine-$(CONFIG_ARCH_DOVE) += dove 154 155 machine-$(CONFIG_ARCH_EBSA110) += ebsa110 155 156 machine-$(CONFIG_ARCH_EFM32) += efm32
+8
arch/arm/boot/dts/am335x-bone-common.dtsi
··· 301 301 cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>; 302 302 cd-inverted; 303 303 }; 304 + 305 + &aes { 306 + status = "okay"; 307 + }; 308 + 309 + &sham { 310 + status = "okay"; 311 + };
-8
arch/arm/boot/dts/am335x-bone.dts
··· 24 24 &mmc1 { 25 25 vmmc-supply = <&ldo3_reg>; 26 26 }; 27 - 28 - &sham { 29 - status = "okay"; 30 - }; 31 - 32 - &aes { 33 - status = "okay"; 34 - };
+4
arch/arm/boot/dts/am335x-lxm.dts
··· 328 328 dual_emac_res_vlan = <3>; 329 329 }; 330 330 331 + &phy_sel { 332 + rmii-clock-ext; 333 + }; 334 + 331 335 &mac { 332 336 pinctrl-names = "default", "sleep"; 333 337 pinctrl-0 = <&cpsw_default>;
+3 -3
arch/arm/boot/dts/am33xx-clocks.dtsi
··· 99 99 ehrpwm0_tbclk: ehrpwm0_tbclk@44e10664 { 100 100 #clock-cells = <0>; 101 101 compatible = "ti,gate-clock"; 102 - clocks = <&dpll_per_m2_ck>; 102 + clocks = <&l4ls_gclk>; 103 103 ti,bit-shift = <0>; 104 104 reg = <0x0664>; 105 105 }; ··· 107 107 ehrpwm1_tbclk: ehrpwm1_tbclk@44e10664 { 108 108 #clock-cells = <0>; 109 109 compatible = "ti,gate-clock"; 110 - clocks = <&dpll_per_m2_ck>; 110 + clocks = <&l4ls_gclk>; 111 111 ti,bit-shift = <1>; 112 112 reg = <0x0664>; 113 113 }; ··· 115 115 ehrpwm2_tbclk: ehrpwm2_tbclk@44e10664 { 116 116 #clock-cells = <0>; 117 117 compatible = "ti,gate-clock"; 118 - clocks = <&dpll_per_m2_ck>; 118 + clocks = <&l4ls_gclk>; 119 119 ti,bit-shift = <2>; 120 120 reg = <0x0664>; 121 121 };
+6 -6
arch/arm/boot/dts/am43xx-clocks.dtsi
··· 107 107 ehrpwm0_tbclk: ehrpwm0_tbclk { 108 108 #clock-cells = <0>; 109 109 compatible = "ti,gate-clock"; 110 - clocks = <&dpll_per_m2_ck>; 110 + clocks = <&l4ls_gclk>; 111 111 ti,bit-shift = <0>; 112 112 reg = <0x0664>; 113 113 }; ··· 115 115 ehrpwm1_tbclk: ehrpwm1_tbclk { 116 116 #clock-cells = <0>; 117 117 compatible = "ti,gate-clock"; 118 - clocks = <&dpll_per_m2_ck>; 118 + clocks = <&l4ls_gclk>; 119 119 ti,bit-shift = <1>; 120 120 reg = <0x0664>; 121 121 }; ··· 123 123 ehrpwm2_tbclk: ehrpwm2_tbclk { 124 124 #clock-cells = <0>; 125 125 compatible = "ti,gate-clock"; 126 - clocks = <&dpll_per_m2_ck>; 126 + clocks = <&l4ls_gclk>; 127 127 ti,bit-shift = <2>; 128 128 reg = <0x0664>; 129 129 }; ··· 131 131 ehrpwm3_tbclk: ehrpwm3_tbclk { 132 132 #clock-cells = <0>; 133 133 compatible = "ti,gate-clock"; 134 - clocks = <&dpll_per_m2_ck>; 134 + clocks = <&l4ls_gclk>; 135 135 ti,bit-shift = <4>; 136 136 reg = <0x0664>; 137 137 }; ··· 139 139 ehrpwm4_tbclk: ehrpwm4_tbclk { 140 140 #clock-cells = <0>; 141 141 compatible = "ti,gate-clock"; 142 - clocks = <&dpll_per_m2_ck>; 142 + clocks = <&l4ls_gclk>; 143 143 ti,bit-shift = <5>; 144 144 reg = <0x0664>; 145 145 }; ··· 147 147 ehrpwm5_tbclk: ehrpwm5_tbclk { 148 148 #clock-cells = <0>; 149 149 compatible = "ti,gate-clock"; 150 - clocks = <&dpll_per_m2_ck>; 150 + clocks = <&l4ls_gclk>; 151 151 ti,bit-shift = <6>; 152 152 reg = <0x0664>; 153 153 };
+3 -4
arch/arm/boot/dts/at91sam9260.dtsi
··· 494 494 495 495 pinctrl_usart3_rts: usart3_rts-0 { 496 496 atmel,pins = 497 - <AT91_PIOB 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC8 periph B */ 497 + <AT91_PIOC 8 AT91_PERIPH_B AT91_PINCTRL_NONE>; 498 498 }; 499 499 500 500 pinctrl_usart3_cts: usart3_cts-0 { 501 501 atmel,pins = 502 - <AT91_PIOB 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PC10 periph B */ 502 + <AT91_PIOC 10 AT91_PERIPH_B AT91_PINCTRL_NONE>; 503 503 }; 504 504 }; 505 505 ··· 853 853 }; 854 854 855 855 usb1: gadget@fffa4000 { 856 - compatible = "atmel,at91rm9200-udc"; 856 + compatible = "atmel,at91sam9260-udc"; 857 857 reg = <0xfffa4000 0x4000>; 858 858 interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>; 859 859 clocks = <&udc_clk>, <&udpck>; ··· 976 976 atmel,watchdog-type = "hardware"; 977 977 atmel,reset-type = "all"; 978 978 atmel,dbg-halt; 979 - atmel,idle-halt; 980 979 status = "disabled"; 981 980 }; 982 981
+5 -4
arch/arm/boot/dts/at91sam9261.dtsi
··· 124 124 }; 125 125 126 126 usb1: gadget@fffa4000 { 127 - compatible = "atmel,at91rm9200-udc"; 127 + compatible = "atmel,at91sam9261-udc"; 128 128 reg = <0xfffa4000 0x4000>; 129 129 interrupts = <10 IRQ_TYPE_LEVEL_HIGH 2>; 130 - clocks = <&usb>, <&udc_clk>, <&udpck>; 131 - clock-names = "usb_clk", "udc_clk", "udpck"; 130 + clocks = <&udc_clk>, <&udpck>; 131 + clock-names = "pclk", "hclk"; 132 + atmel,matrix = <&matrix>; 132 133 status = "disabled"; 133 134 }; 134 135 ··· 263 262 }; 264 263 265 264 matrix: matrix@ffffee00 { 266 - compatible = "atmel,at91sam9260-bus-matrix"; 265 + compatible = "atmel,at91sam9260-bus-matrix", "syscon"; 267 266 reg = <0xffffee00 0x200>; 268 267 }; 269 268
+2 -3
arch/arm/boot/dts/at91sam9263.dtsi
··· 69 69 70 70 sram1: sram@00500000 { 71 71 compatible = "mmio-sram"; 72 - reg = <0x00300000 0x4000>; 72 + reg = <0x00500000 0x4000>; 73 73 }; 74 74 75 75 ahb { ··· 856 856 }; 857 857 858 858 usb1: gadget@fff78000 { 859 - compatible = "atmel,at91rm9200-udc"; 859 + compatible = "atmel,at91sam9263-udc"; 860 860 reg = <0xfff78000 0x4000>; 861 861 interrupts = <24 IRQ_TYPE_LEVEL_HIGH 2>; 862 862 clocks = <&udc_clk>, <&udpck>; ··· 905 905 atmel,watchdog-type = "hardware"; 906 906 atmel,reset-type = "all"; 907 907 atmel,dbg-halt; 908 - atmel,idle-halt; 909 908 status = "disabled"; 910 909 }; 911 910
+1 -2
arch/arm/boot/dts/at91sam9g45.dtsi
··· 1116 1116 atmel,watchdog-type = "hardware"; 1117 1117 atmel,reset-type = "all"; 1118 1118 atmel,dbg-halt; 1119 - atmel,idle-halt; 1120 1119 status = "disabled"; 1121 1120 }; 1122 1121 ··· 1300 1301 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1301 1302 reg = <0x00800000 0x100000>; 1302 1303 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1303 - clocks = <&usb>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1304 + clocks = <&utmi>, <&uhphs_clk>, <&uhphs_clk>, <&uhpck>; 1304 1305 clock-names = "usb_clk", "ehci_clk", "hclk", "uhpck"; 1305 1306 status = "disabled"; 1306 1307 };
-1
arch/arm/boot/dts/at91sam9n12.dtsi
··· 894 894 atmel,watchdog-type = "hardware"; 895 895 atmel,reset-type = "all"; 896 896 atmel,dbg-halt; 897 - atmel,idle-halt; 898 897 status = "disabled"; 899 898 }; 900 899
+2 -3
arch/arm/boot/dts/at91sam9x5.dtsi
··· 1066 1066 reg = <0x00500000 0x80000 1067 1067 0xf803c000 0x400>; 1068 1068 interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>; 1069 - clocks = <&usb>, <&udphs_clk>; 1069 + clocks = <&utmi>, <&udphs_clk>; 1070 1070 clock-names = "hclk", "pclk"; 1071 1071 status = "disabled"; 1072 1072 ··· 1130 1130 atmel,watchdog-type = "hardware"; 1131 1131 atmel,reset-type = "all"; 1132 1132 atmel,dbg-halt; 1133 - atmel,idle-halt; 1134 1133 status = "disabled"; 1135 1134 }; 1136 1135 ··· 1185 1186 compatible = "atmel,at91sam9g45-ehci", "usb-ehci"; 1186 1187 reg = <0x00700000 0x100000>; 1187 1188 interrupts = <22 IRQ_TYPE_LEVEL_HIGH 2>; 1188 - clocks = <&usb>, <&uhphs_clk>, <&uhpck>; 1189 + clocks = <&utmi>, <&uhphs_clk>, <&uhpck>; 1189 1190 clock-names = "usb_clk", "ehci_clk", "uhpck"; 1190 1191 status = "disabled"; 1191 1192 };
+4 -6
arch/arm/boot/dts/dra7-evm.dts
··· 263 263 264 264 dcan1_pins_default: dcan1_pins_default { 265 265 pinctrl-single,pins = < 266 - 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */ 267 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 268 - 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */ 266 + 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */ 267 + 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */ 269 268 >; 270 269 }; 271 270 272 271 dcan1_pins_sleep: dcan1_pins_sleep { 273 272 pinctrl-single,pins = < 274 - 0x3d0 (MUX_MODE15) /* dcan1_tx.off */ 275 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 276 - 0x418 (MUX_MODE15) /* wakeup0.off */ 273 + 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */ 274 + 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */ 277 275 >; 278 276 }; 279 277 };
+4 -6
arch/arm/boot/dts/dra72-evm.dts
··· 119 119 120 120 dcan1_pins_default: dcan1_pins_default { 121 121 pinctrl-single,pins = < 122 - 0x3d0 (PIN_OUTPUT | MUX_MODE0) /* dcan1_tx */ 123 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 124 - 0x418 (PULL_DIS | MUX_MODE1) /* wakeup0.dcan1_rx */ 122 + 0x3d0 (PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */ 123 + 0x418 (PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */ 125 124 >; 126 125 }; 127 126 128 127 dcan1_pins_sleep: dcan1_pins_sleep { 129 128 pinctrl-single,pins = < 130 - 0x3d0 (MUX_MODE15) /* dcan1_tx.off */ 131 - 0x3d4 (MUX_MODE15) /* dcan1_rx.off */ 132 - 0x418 (MUX_MODE15) /* wakeup0.off */ 129 + 0x3d0 (MUX_MODE15 | PULL_UP) /* dcan1_tx.off */ 130 + 0x418 (MUX_MODE15 | PULL_UP) /* wakeup0.off */ 133 131 >; 134 132 }; 135 133
+81 -9
arch/arm/boot/dts/dra7xx-clocks.dtsi
··· 243 243 ti,invert-autoidle-bit; 244 244 }; 245 245 246 + dpll_core_byp_mux: dpll_core_byp_mux { 247 + #clock-cells = <0>; 248 + compatible = "ti,mux-clock"; 249 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 250 + ti,bit-shift = <23>; 251 + reg = <0x012c>; 252 + }; 253 + 246 254 dpll_core_ck: dpll_core_ck { 247 255 #clock-cells = <0>; 248 256 compatible = "ti,omap4-dpll-core-clock"; 249 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 257 + clocks = <&sys_clkin1>, <&dpll_core_byp_mux>; 250 258 reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>; 251 259 }; 252 260 ··· 317 309 clock-div = <1>; 318 310 }; 319 311 312 + dpll_dsp_byp_mux: dpll_dsp_byp_mux { 313 + #clock-cells = <0>; 314 + compatible = "ti,mux-clock"; 315 + clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>; 316 + ti,bit-shift = <23>; 317 + reg = <0x0240>; 318 + }; 319 + 320 320 dpll_dsp_ck: dpll_dsp_ck { 321 321 #clock-cells = <0>; 322 322 compatible = "ti,omap4-dpll-clock"; 323 - clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>; 323 + clocks = <&sys_clkin1>, <&dpll_dsp_byp_mux>; 324 324 reg = <0x0234>, <0x0238>, <0x0240>, <0x023c>; 325 325 }; 326 326 ··· 351 335 clock-div = <1>; 352 336 }; 353 337 338 + dpll_iva_byp_mux: dpll_iva_byp_mux { 339 + #clock-cells = <0>; 340 + compatible = "ti,mux-clock"; 341 + clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>; 342 + ti,bit-shift = <23>; 343 + reg = <0x01ac>; 344 + }; 345 + 354 346 dpll_iva_ck: dpll_iva_ck { 355 347 #clock-cells = <0>; 356 348 compatible = "ti,omap4-dpll-clock"; 357 - clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>; 349 + clocks = <&sys_clkin1>, <&dpll_iva_byp_mux>; 358 350 reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>; 359 351 }; 360 352 ··· 385 361 clock-div = <1>; 386 362 }; 387 363 364 + dpll_gpu_byp_mux: dpll_gpu_byp_mux { 365 + #clock-cells = <0>; 366 + compatible = "ti,mux-clock"; 367 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 368 + ti,bit-shift = <23>; 369 + reg = <0x02e4>; 370 + }; 371 + 388 372 dpll_gpu_ck: dpll_gpu_ck { 389 373 #clock-cells = <0>; 390 374 compatible = "ti,omap4-dpll-clock"; 391 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 375 + clocks = <&sys_clkin1>, <&dpll_gpu_byp_mux>; 392 376 reg = <0x02d8>, <0x02dc>, <0x02e4>, <0x02e0>; 393 377 }; 394 378 ··· 430 398 clock-div = <1>; 431 399 }; 432 400 401 + dpll_ddr_byp_mux: dpll_ddr_byp_mux { 402 + #clock-cells = <0>; 403 + compatible = "ti,mux-clock"; 404 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 405 + ti,bit-shift = <23>; 406 + reg = <0x021c>; 407 + }; 408 + 433 409 dpll_ddr_ck: dpll_ddr_ck { 434 410 #clock-cells = <0>; 435 411 compatible = "ti,omap4-dpll-clock"; 436 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 412 + clocks = <&sys_clkin1>, <&dpll_ddr_byp_mux>; 437 413 reg = <0x0210>, <0x0214>, <0x021c>, <0x0218>; 438 414 }; 439 415 ··· 456 416 ti,invert-autoidle-bit; 457 417 }; 458 418 419 + dpll_gmac_byp_mux: dpll_gmac_byp_mux { 420 + #clock-cells = <0>; 421 + compatible = "ti,mux-clock"; 422 + clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 423 + ti,bit-shift = <23>; 424 + reg = <0x02b4>; 425 + }; 426 + 459 427 dpll_gmac_ck: dpll_gmac_ck { 460 428 #clock-cells = <0>; 461 429 compatible = "ti,omap4-dpll-clock"; 462 - clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>; 430 + clocks = <&sys_clkin1>, <&dpll_gmac_byp_mux>; 463 431 reg = <0x02a8>, <0x02ac>, <0x02b4>, <0x02b0>; 464 432 }; 465 433 ··· 530 482 clock-div = <1>; 531 483 }; 532 484 485 + dpll_eve_byp_mux: dpll_eve_byp_mux { 486 + #clock-cells = <0>; 487 + compatible = "ti,mux-clock"; 488 + clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>; 489 + ti,bit-shift = <23>; 490 + reg = <0x0290>; 491 + }; 492 + 533 493 dpll_eve_ck: dpll_eve_ck { 534 494 #clock-cells = <0>; 535 495 compatible = "ti,omap4-dpll-clock"; 536 - clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>; 496 + clocks = <&sys_clkin1>, <&dpll_eve_byp_mux>; 537 497 reg = <0x0284>, <0x0288>, <0x0290>, <0x028c>; 538 498 }; 539 499 ··· 1305 1249 clock-div = <1>; 1306 1250 }; 1307 1251 1252 + dpll_per_byp_mux: dpll_per_byp_mux { 1253 + #clock-cells = <0>; 1254 + compatible = "ti,mux-clock"; 1255 + clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>; 1256 + ti,bit-shift = <23>; 1257 + reg = <0x014c>; 1258 + }; 1259 + 1308 1260 dpll_per_ck: dpll_per_ck { 1309 1261 #clock-cells = <0>; 1310 1262 compatible = "ti,omap4-dpll-clock"; 1311 - clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>; 1263 + clocks = <&sys_clkin1>, <&dpll_per_byp_mux>; 1312 1264 reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>; 1313 1265 }; 1314 1266 ··· 1339 1275 clock-div = <1>; 1340 1276 }; 1341 1277 1278 + dpll_usb_byp_mux: dpll_usb_byp_mux { 1279 + #clock-cells = <0>; 1280 + compatible = "ti,mux-clock"; 1281 + clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>; 1282 + ti,bit-shift = <23>; 1283 + reg = <0x018c>; 1284 + }; 1285 + 1342 1286 dpll_usb_ck: dpll_usb_ck { 1343 1287 #clock-cells = <0>; 1344 1288 compatible = "ti,omap4-dpll-j-type-clock"; 1345 - clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>; 1289 + clocks = <&sys_clkin1>, <&dpll_usb_byp_mux>; 1346 1290 reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>; 1347 1291 }; 1348 1292
+2
arch/arm/boot/dts/exynos3250.dtsi
··· 18 18 */ 19 19 20 20 #include "skeleton.dtsi" 21 + #include "exynos4-cpu-thermal.dtsi" 21 22 #include <dt-bindings/clock/exynos3250.h> 22 23 23 24 / { ··· 194 193 interrupts = <0 216 0>; 195 194 clocks = <&cmu CLK_TMU_APBIF>; 196 195 clock-names = "tmu_apbif"; 196 + #include "exynos4412-tmu-sensor-conf.dtsi" 197 197 status = "disabled"; 198 198 }; 199 199
+52
arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
··· 1 + /* 2 + * Device tree sources for Exynos4 thermal zone 3 + * 4 + * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + */ 11 + 12 + #include <dt-bindings/thermal/thermal.h> 13 + 14 + / { 15 + thermal-zones { 16 + cpu_thermal: cpu-thermal { 17 + thermal-sensors = <&tmu 0>; 18 + polling-delay-passive = <0>; 19 + polling-delay = <0>; 20 + trips { 21 + cpu_alert0: cpu-alert-0 { 22 + temperature = <70000>; /* millicelsius */ 23 + hysteresis = <10000>; /* millicelsius */ 24 + type = "active"; 25 + }; 26 + cpu_alert1: cpu-alert-1 { 27 + temperature = <95000>; /* millicelsius */ 28 + hysteresis = <10000>; /* millicelsius */ 29 + type = "active"; 30 + }; 31 + cpu_alert2: cpu-alert-2 { 32 + temperature = <110000>; /* millicelsius */ 33 + hysteresis = <10000>; /* millicelsius */ 34 + type = "active"; 35 + }; 36 + cpu_crit0: cpu-crit-0 { 37 + temperature = <120000>; /* millicelsius */ 38 + hysteresis = <0>; /* millicelsius */ 39 + type = "critical"; 40 + }; 41 + }; 42 + cooling-maps { 43 + map0 { 44 + trip = <&cpu_alert0>; 45 + }; 46 + map1 { 47 + trip = <&cpu_alert1>; 48 + }; 49 + }; 50 + }; 51 + }; 52 + };
+45
arch/arm/boot/dts/exynos4.dtsi
··· 38 38 i2c5 = &i2c_5; 39 39 i2c6 = &i2c_6; 40 40 i2c7 = &i2c_7; 41 + i2c8 = &i2c_8; 41 42 csis0 = &csis_0; 42 43 csis1 = &csis_1; 43 44 fimc0 = &fimc_0; ··· 105 104 compatible = "samsung,exynos4210-pd"; 106 105 reg = <0x10023C20 0x20>; 107 106 #power-domain-cells = <0>; 107 + power-domains = <&pd_lcd0>; 108 108 }; 109 109 110 110 pd_cam: cam-power-domain@10023C00 { ··· 556 554 status = "disabled"; 557 555 }; 558 556 557 + i2c_8: i2c@138E0000 { 558 + #address-cells = <1>; 559 + #size-cells = <0>; 560 + compatible = "samsung,s3c2440-hdmiphy-i2c"; 561 + reg = <0x138E0000 0x100>; 562 + interrupts = <0 93 0>; 563 + clocks = <&clock CLK_I2C_HDMI>; 564 + clock-names = "i2c"; 565 + status = "disabled"; 566 + 567 + hdmi_i2c_phy: hdmiphy@38 { 568 + compatible = "exynos4210-hdmiphy"; 569 + reg = <0x38>; 570 + }; 571 + }; 572 + 559 573 spi_0: spi@13920000 { 560 574 compatible = "samsung,exynos4210-spi"; 561 575 reg = <0x13920000 0x100>; ··· 678 660 clock-names = "sclk_fimd", "fimd"; 679 661 power-domains = <&pd_lcd0>; 680 662 samsung,sysreg = <&sys_reg>; 663 + status = "disabled"; 664 + }; 665 + 666 + tmu: tmu@100C0000 { 667 + #include "exynos4412-tmu-sensor-conf.dtsi" 668 + }; 669 + 670 + hdmi: hdmi@12D00000 { 671 + compatible = "samsung,exynos4210-hdmi"; 672 + reg = <0x12D00000 0x70000>; 673 + interrupts = <0 92 0>; 674 + clock-names = "hdmi", "sclk_hdmi", "sclk_pixel", "sclk_hdmiphy", 675 + "mout_hdmi"; 676 + clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>, 677 + <&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>, 678 + <&clock CLK_MOUT_HDMI>; 679 + phy = <&hdmi_i2c_phy>; 680 + power-domains = <&pd_tv>; 681 + samsung,syscon-phandle = <&pmu_system_controller>; 682 + status = "disabled"; 683 + }; 684 + 685 + mixer: mixer@12C10000 { 686 + compatible = "samsung,exynos4210-mixer"; 687 + interrupts = <0 91 0>; 688 + reg = <0x12C10000 0x2100>, <0x12c00000 0x300>; 689 + power-domains = <&pd_tv>; 681 690 status = "disabled"; 682 691 }; 683 692
+19
arch/arm/boot/dts/exynos4210-trats.dts
··· 426 426 status = "okay"; 427 427 }; 428 428 429 + tmu@100C0000 { 430 + status = "okay"; 431 + }; 432 + 433 + thermal-zones { 434 + cpu_thermal: cpu-thermal { 435 + cooling-maps { 436 + map0 { 437 + /* Corresponds to 800MHz at freq_table */ 438 + cooling-device = <&cpu0 2 2>; 439 + }; 440 + map1 { 441 + /* Corresponds to 200MHz at freq_table */ 442 + cooling-device = <&cpu0 4 4>; 443 + }; 444 + }; 445 + }; 446 + }; 447 + 429 448 camera { 430 449 pinctrl-names = "default"; 431 450 pinctrl-0 = <>;
+57
arch/arm/boot/dts/exynos4210-universal_c210.dts
··· 505 505 assigned-clock-rates = <0>, <160000000>; 506 506 }; 507 507 }; 508 + 509 + hdmi_en: voltage-regulator-hdmi-5v { 510 + compatible = "regulator-fixed"; 511 + regulator-name = "HDMI_5V"; 512 + regulator-min-microvolt = <5000000>; 513 + regulator-max-microvolt = <5000000>; 514 + gpio = <&gpe0 1 0>; 515 + enable-active-high; 516 + }; 517 + 518 + hdmi_ddc: i2c-ddc { 519 + compatible = "i2c-gpio"; 520 + gpios = <&gpe4 2 0 &gpe4 3 0>; 521 + i2c-gpio,delay-us = <100>; 522 + #address-cells = <1>; 523 + #size-cells = <0>; 524 + 525 + pinctrl-0 = <&i2c_ddc_bus>; 526 + pinctrl-names = "default"; 527 + status = "okay"; 528 + }; 529 + 530 + mixer@12C10000 { 531 + status = "okay"; 532 + }; 533 + 534 + hdmi@12D00000 { 535 + hpd-gpio = <&gpx3 7 0>; 536 + pinctrl-names = "default"; 537 + pinctrl-0 = <&hdmi_hpd>; 538 + hdmi-en-supply = <&hdmi_en>; 539 + vdd-supply = <&ldo3_reg>; 540 + vdd_osc-supply = <&ldo4_reg>; 541 + vdd_pll-supply = <&ldo3_reg>; 542 + ddc = <&hdmi_ddc>; 543 + status = "okay"; 544 + }; 545 + 546 + i2c@138E0000 { 547 + status = "okay"; 548 + }; 549 + }; 550 + 551 + &pinctrl_1 { 552 + hdmi_hpd: hdmi-hpd { 553 + samsung,pins = "gpx3-7"; 554 + samsung,pin-pud = <0>; 555 + }; 556 + }; 557 + 558 + &pinctrl_0 { 559 + i2c_ddc_bus: i2c-ddc-bus { 560 + samsung,pins = "gpe4-2", "gpe4-3"; 561 + samsung,pin-function = <2>; 562 + samsung,pin-pud = <3>; 563 + samsung,pin-drv = <0>; 564 + }; 508 565 }; 509 566 510 567 &mdma1 {
+36 -2
arch/arm/boot/dts/exynos4210.dtsi
··· 21 21 22 22 #include "exynos4.dtsi" 23 23 #include "exynos4210-pinctrl.dtsi" 24 + #include "exynos4-cpu-thermal.dtsi" 24 25 25 26 / { 26 27 compatible = "samsung,exynos4210", "samsung,exynos4"; ··· 36 35 #address-cells = <1>; 37 36 #size-cells = <0>; 38 37 39 - cpu@900 { 38 + cpu0: cpu@900 { 40 39 device_type = "cpu"; 41 40 compatible = "arm,cortex-a9"; 42 41 reg = <0x900>; 42 + cooling-min-level = <4>; 43 + cooling-max-level = <2>; 44 + #cooling-cells = <2>; /* min followed by max */ 43 45 }; 44 46 45 47 cpu@901 { ··· 157 153 reg = <0x03860000 0x1000>; 158 154 }; 159 155 160 - tmu@100C0000 { 156 + tmu: tmu@100C0000 { 161 157 compatible = "samsung,exynos4210-tmu"; 162 158 interrupt-parent = <&combiner>; 163 159 reg = <0x100C0000 0x100>; 164 160 interrupts = <2 4>; 165 161 clocks = <&clock CLK_TMU_APBIF>; 166 162 clock-names = "tmu_apbif"; 163 + samsung,tmu_gain = <15>; 164 + samsung,tmu_reference_voltage = <7>; 167 165 status = "disabled"; 166 + }; 167 + 168 + thermal-zones { 169 + cpu_thermal: cpu-thermal { 170 + polling-delay-passive = <0>; 171 + polling-delay = <0>; 172 + thermal-sensors = <&tmu 0>; 173 + 174 + trips { 175 + cpu_alert0: cpu-alert-0 { 176 + temperature = <85000>; /* millicelsius */ 177 + }; 178 + cpu_alert1: cpu-alert-1 { 179 + temperature = <100000>; /* millicelsius */ 180 + }; 181 + cpu_alert2: cpu-alert-2 { 182 + temperature = <110000>; /* millicelsius */ 183 + }; 184 + }; 185 + }; 168 186 }; 169 187 170 188 g2d@12800000 { ··· 227 201 samsung,mainscaler-ext; 228 202 samsung,lcd-wb; 229 203 }; 204 + }; 205 + 206 + mixer: mixer@12C10000 { 207 + clock-names = "mixer", "hdmi", "sclk_hdmi", "vp", "mout_mixer", 208 + "sclk_mixer"; 209 + clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>, 210 + <&clock CLK_SCLK_HDMI>, <&clock CLK_VP>, 211 + <&clock CLK_MOUT_MIXER>, <&clock CLK_SCLK_MIXER>; 230 212 }; 231 213 232 214 ppmu_lcd1: ppmu_lcd1@12240000 {
+4 -1
arch/arm/boot/dts/exynos4212.dtsi
··· 26 26 #address-cells = <1>; 27 27 #size-cells = <0>; 28 28 29 - cpu@A00 { 29 + cpu0: cpu@A00 { 30 30 device_type = "cpu"; 31 31 compatible = "arm,cortex-a9"; 32 32 reg = <0xA00>; 33 + cooling-min-level = <13>; 34 + cooling-max-level = <7>; 35 + #cooling-cells = <2>; /* min followed by max */ 33 36 }; 34 37 35 38 cpu@A01 {
+64
arch/arm/boot/dts/exynos4412-odroid-common.dtsi
···
 		regulator-always-on;
 	};

+	ldo8_reg: ldo@8 {
+		regulator-compatible = "LDO8";
+		regulator-name = "VDD10_HDMI_1.0V";
+		regulator-min-microvolt = <1000000>;
+		regulator-max-microvolt = <1000000>;
+	};
+
+	ldo10_reg: ldo@10 {
+		regulator-compatible = "LDO10";
+		regulator-name = "VDDQ_MIPIHSI_1.8V";
+		regulator-min-microvolt = <1800000>;
+		regulator-max-microvolt = <1800000>;
+	};
+
 	ldo11_reg: LDO11 {
 		regulator-name = "VDD18_ABB1_1.8V";
 		regulator-min-microvolt = <1800000>;
···
 	ehci: ehci@12580000 {
 		status = "okay";
 	};
+
+	tmu@100C0000 {
+		vtmu-supply = <&ldo10_reg>;
+		status = "okay";
+	};
+
+	thermal-zones {
+		cpu_thermal: cpu-thermal {
+			cooling-maps {
+				map0 {
+					/* Corresponds to 800MHz at freq_table */
+					cooling-device = <&cpu0 7 7>;
+				};
+				map1 {
+					/* Corresponds to 200MHz at freq_table */
+					cooling-device = <&cpu0 13 13>;
+				};
+			};
+		};
+	};
+
+	mixer: mixer@12C10000 {
+		status = "okay";
+	};
+
+	hdmi@12D00000 {
+		hpd-gpio = <&gpx3 7 0>;
+		pinctrl-names = "default";
+		pinctrl-0 = <&hdmi_hpd>;
+		vdd-supply = <&ldo8_reg>;
+		vdd_osc-supply = <&ldo10_reg>;
+		vdd_pll-supply = <&ldo8_reg>;
+		ddc = <&hdmi_ddc>;
+		status = "okay";
+	};
+
+	hdmi_ddc: i2c@13880000 {
+		status = "okay";
+		pinctrl-names = "default";
+		pinctrl-0 = <&i2c2_bus>;
+	};
+
+	i2c@138E0000 {
+		status = "okay";
+	};
 };

 &pinctrl_1 {
···
 		samsung,pin-function = <0>;
 		samsung,pin-pud = <0>;
 		samsung,pin-drv = <0>;
+	};
+
+	hdmi_hpd: hdmi-hpd {
+		samsung,pins = "gpx3-7";
+		samsung,pin-pud = <1>;
 	};
 };
+24
arch/arm/boot/dts/exynos4412-tmu-sensor-conf.dtsi
···
+/*
+ * Device tree sources for Exynos4412 TMU sensor configuration
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <8>;
+samsung,tmu_reference_voltage = <16>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <55>;
+samsung,tmu_min_efuse_value = <40>;
+samsung,tmu_max_efuse_value = <100>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <85>;
+samsung,tmu_default_temp_offset = <50>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+15
arch/arm/boot/dts/exynos4412-trats2.dts
···
 		pulldown-ohm = <100000>; /* 100K */
 		io-channels = <&adc 2>; /* Battery temperature */
 	};
+
+	thermal-zones {
+		cpu_thermal: cpu-thermal {
+			cooling-maps {
+				map0 {
+					/* Corresponds to 800MHz at freq_table */
+					cooling-device = <&cpu0 7 7>;
+				};
+				map1 {
+					/* Corresponds to 200MHz at freq_table */
+					cooling-device = <&cpu0 13 13>;
+				};
+			};
+		};
+	};
 };

 &pmu_system_controller {
+4 -1
arch/arm/boot/dts/exynos4412.dtsi
···
 	#address-cells = <1>;
 	#size-cells = <0>;

-	cpu@A00 {
+	cpu0: cpu@A00 {
 		device_type = "cpu";
 		compatible = "arm,cortex-a9";
 		reg = <0xA00>;
+		cooling-min-level = <13>;
+		cooling-max-level = <7>;
+		#cooling-cells = <2>; /* min followed by max */
 	};

 	cpu@A01 {
+12
arch/arm/boot/dts/exynos4x12.dtsi
···
 #include "exynos4.dtsi"
 #include "exynos4x12-pinctrl.dtsi"
+#include "exynos4-cpu-thermal.dtsi"

 / {
 	aliases {
···
 		clocks = <&clock 383>;
 		clock-names = "tmu_apbif";
 		status = "disabled";
+	};
+
+	hdmi: hdmi@12D00000 {
+		compatible = "samsung,exynos4212-hdmi";
+	};
+
+	mixer: mixer@12C10000 {
+		compatible = "samsung,exynos4212-mixer";
+		clock-names = "mixer", "hdmi", "sclk_hdmi", "vp";
+		clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+			<&clock CLK_SCLK_HDMI>, <&clock CLK_VP>;
 	};
 };
+39 -5
arch/arm/boot/dts/exynos5250.dtsi
···
 #include <dt-bindings/clock/exynos5250.h>
 #include "exynos5.dtsi"
 #include "exynos5250-pinctrl.dtsi"
-
+#include "exynos4-cpu-thermal.dtsi"
 #include <dt-bindings/clock/exynos-audss-clk.h>

 / {
···
 	#address-cells = <1>;
 	#size-cells = <0>;

-	cpu@0 {
+	cpu0: cpu@0 {
 		device_type = "cpu";
 		compatible = "arm,cortex-a15";
 		reg = <0>;
 		clock-frequency = <1700000000>;
+		cooling-min-level = <15>;
+		cooling-max-level = <9>;
+		#cooling-cells = <2>; /* min followed by max */
 	};
 	cpu@1 {
 		device_type = "cpu";
···
 	pd_mfc: mfc-power-domain@10044040 {
 		compatible = "samsung,exynos4210-pd";
 		reg = <0x10044040 0x20>;
+		#power-domain-cells = <0>;
+	};
+
+	pd_disp1: disp1-power-domain@100440A0 {
+		compatible = "samsung,exynos4210-pd";
+		reg = <0x100440A0 0x20>;
 		#power-domain-cells = <0>;
 	};
···
 		status = "disabled";
 	};

-	tmu@10060000 {
+	tmu: tmu@10060000 {
 		compatible = "samsung,exynos5250-tmu";
 		reg = <0x10060000 0x100>;
 		interrupts = <0 65 0>;
 		clocks = <&clock CLK_TMU>;
 		clock-names = "tmu_apbif";
+		#include "exynos4412-tmu-sensor-conf.dtsi"
+	};
+
+	thermal-zones {
+		cpu_thermal: cpu-thermal {
+			polling-delay-passive = <0>;
+			polling-delay = <0>;
+			thermal-sensors = <&tmu 0>;
+
+			cooling-maps {
+				map0 {
+					/* Corresponds to 800MHz at freq_table */
+					cooling-device = <&cpu0 9 9>;
+				};
+				map1 {
+					/* Corresponds to 200MHz at freq_table */
+					cooling-device = <&cpu0 15 15>;
+				};
+			};
+		};
 	};

 	serial@12C00000 {
···
 	hdmi: hdmi {
 		compatible = "samsung,exynos4212-hdmi";
 		reg = <0x14530000 0x70000>;
+		power-domains = <&pd_disp1>;
 		interrupts = <0 95 0>;
 		clocks = <&clock CLK_HDMI>, <&clock CLK_SCLK_HDMI>,
 			<&clock CLK_SCLK_PIXEL>, <&clock CLK_SCLK_HDMIPHY>,
···
 	mixer {
 		compatible = "samsung,exynos5250-mixer";
 		reg = <0x14450000 0x10000>;
+		power-domains = <&pd_disp1>;
 		interrupts = <0 94 0>;
-		clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>;
-		clock-names = "mixer", "sclk_hdmi";
+		clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+			<&clock CLK_SCLK_HDMI>;
+		clock-names = "mixer", "hdmi", "sclk_hdmi";
 	};

 	dp_phy: video-phy@10040720 {
···
 	};

 	dp: dp-controller@145B0000 {
+		power-domains = <&pd_disp1>;
 		clocks = <&clock CLK_DP>;
 		clock-names = "dp";
 		phys = <&dp_phy>;
···
 	};

 	fimd: fimd@14400000 {
+		power-domains = <&pd_disp1>;
 		clocks = <&clock CLK_SCLK_FIMD1>, <&clock CLK_FIMD1>;
 		clock-names = "sclk_fimd", "fimd";
 	};
+35
arch/arm/boot/dts/exynos5420-trip-points.dtsi
···
+/*
+ * Device tree sources for default Exynos5420 thermal zone definition
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+polling-delay-passive = <0>;
+polling-delay = <0>;
+trips {
+	cpu-alert-0 {
+		temperature = <85000>; /* millicelsius */
+		hysteresis = <10000>; /* millicelsius */
+		type = "active";
+	};
+	cpu-alert-1 {
+		temperature = <103000>; /* millicelsius */
+		hysteresis = <10000>; /* millicelsius */
+		type = "active";
+	};
+	cpu-alert-2 {
+		temperature = <110000>; /* millicelsius */
+		hysteresis = <10000>; /* millicelsius */
+		type = "active";
+	};
+	cpu-crit-0 {
+		temperature = <1200000>; /* millicelsius */
+		hysteresis = <0>; /* millicelsius */
+		type = "critical";
+	};
+};
+31 -2
arch/arm/boot/dts/exynos5420.dtsi
···
 		compatible = "samsung,exynos5420-mixer";
 		reg = <0x14450000 0x10000>;
 		interrupts = <0 94 0>;
-		clocks = <&clock CLK_MIXER>, <&clock CLK_SCLK_HDMI>;
-		clock-names = "mixer", "sclk_hdmi";
+		clocks = <&clock CLK_MIXER>, <&clock CLK_HDMI>,
+			<&clock CLK_SCLK_HDMI>;
+		clock-names = "mixer", "hdmi", "sclk_hdmi";
 		power-domains = <&disp_pd>;
 	};
···
 		interrupts = <0 65 0>;
 		clocks = <&clock CLK_TMU>;
 		clock-names = "tmu_apbif";
+		#include "exynos4412-tmu-sensor-conf.dtsi"
 	};

 	tmu_cpu1: tmu@10064000 {
···
 		interrupts = <0 183 0>;
 		clocks = <&clock CLK_TMU>;
 		clock-names = "tmu_apbif";
+		#include "exynos4412-tmu-sensor-conf.dtsi"
 	};

 	tmu_cpu2: tmu@10068000 {
···
 		interrupts = <0 184 0>;
 		clocks = <&clock CLK_TMU>, <&clock CLK_TMU>;
 		clock-names = "tmu_apbif", "tmu_triminfo_apbif";
+		#include "exynos4412-tmu-sensor-conf.dtsi"
 	};

 	tmu_cpu3: tmu@1006c000 {
···
 		interrupts = <0 185 0>;
 		clocks = <&clock CLK_TMU>, <&clock CLK_TMU_GPU>;
 		clock-names = "tmu_apbif", "tmu_triminfo_apbif";
+		#include "exynos4412-tmu-sensor-conf.dtsi"
 	};

 	tmu_gpu: tmu@100a0000 {
···
 		interrupts = <0 215 0>;
 		clocks = <&clock CLK_TMU_GPU>, <&clock CLK_TMU>;
 		clock-names = "tmu_apbif", "tmu_triminfo_apbif";
+		#include "exynos4412-tmu-sensor-conf.dtsi"
+	};
+
+	thermal-zones {
+		cpu0_thermal: cpu0-thermal {
+			thermal-sensors = <&tmu_cpu0>;
+			#include "exynos5420-trip-points.dtsi"
+		};
+		cpu1_thermal: cpu1-thermal {
+			thermal-sensors = <&tmu_cpu1>;
+			#include "exynos5420-trip-points.dtsi"
+		};
+		cpu2_thermal: cpu2-thermal {
+			thermal-sensors = <&tmu_cpu2>;
+			#include "exynos5420-trip-points.dtsi"
+		};
+		cpu3_thermal: cpu3-thermal {
+			thermal-sensors = <&tmu_cpu3>;
+			#include "exynos5420-trip-points.dtsi"
+		};
+		gpu_thermal: gpu-thermal {
+			thermal-sensors = <&tmu_gpu>;
+			#include "exynos5420-trip-points.dtsi"
+		};
 	};

 	watchdog: watchdog@101D0000 {
+24
arch/arm/boot/dts/exynos5440-tmu-sensor-conf.dtsi
···
+/*
+ * Device tree sources for Exynos5440 TMU sensor configuration
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <5>;
+samsung,tmu_reference_voltage = <16>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <0x5d2d>;
+samsung,tmu_min_efuse_value = <16>;
+samsung,tmu_max_efuse_value = <76>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <70>;
+samsung,tmu_default_temp_offset = <25>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
+25
arch/arm/boot/dts/exynos5440-trip-points.dtsi
···
+/*
+ * Device tree sources for default Exynos5440 thermal zone definition
+ *
+ * Copyright (c) 2014 Lukasz Majewski <l.majewski@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+polling-delay-passive = <0>;
+polling-delay = <0>;
+trips {
+	cpu-alert-0 {
+		temperature = <100000>; /* millicelsius */
+		hysteresis = <0>; /* millicelsius */
+		type = "active";
+	};
+	cpu-crit-0 {
+		temperature = <1050000>; /* millicelsius */
+		hysteresis = <0>; /* millicelsius */
+		type = "critical";
+	};
+};
+18
arch/arm/boot/dts/exynos5440.dtsi
···
 		interrupts = <0 58 0>;
 		clocks = <&clock CLK_B_125>;
 		clock-names = "tmu_apbif";
+		#include "exynos5440-tmu-sensor-conf.dtsi"
 	};

 	tmuctrl_1: tmuctrl@16011C {
···
 		interrupts = <0 58 0>;
 		clocks = <&clock CLK_B_125>;
 		clock-names = "tmu_apbif";
+		#include "exynos5440-tmu-sensor-conf.dtsi"
 	};

 	tmuctrl_2: tmuctrl@160120 {
···
 		interrupts = <0 58 0>;
 		clocks = <&clock CLK_B_125>;
 		clock-names = "tmu_apbif";
+		#include "exynos5440-tmu-sensor-conf.dtsi"
+	};
+
+	thermal-zones {
+		cpu0_thermal: cpu0-thermal {
+			thermal-sensors = <&tmuctrl_0>;
+			#include "exynos5440-trip-points.dtsi"
+		};
+		cpu1_thermal: cpu1-thermal {
+			thermal-sensors = <&tmuctrl_1>;
+			#include "exynos5440-trip-points.dtsi"
+		};
+		cpu2_thermal: cpu2-thermal {
+			thermal-sensors = <&tmuctrl_2>;
+			#include "exynos5440-trip-points.dtsi"
+		};
 	};

 	sata@210000 {
+2
arch/arm/boot/dts/imx6qdl-sabresd.dtsi
···
 		regulator-max-microvolt = <5000000>;
 		gpio = <&gpio3 22 0>;
 		enable-active-high;
+		vin-supply = <&swbst_reg>;
 	};

 	reg_usb_h1_vbus: regulator@1 {
···
 		regulator-max-microvolt = <5000000>;
 		gpio = <&gpio1 29 0>;
 		enable-active-high;
+		vin-supply = <&swbst_reg>;
 	};

 	reg_audio: regulator@2 {
+2
arch/arm/boot/dts/imx6sl-evk.dts
···
 		regulator-max-microvolt = <5000000>;
 		gpio = <&gpio4 0 0>;
 		enable-active-high;
+		vin-supply = <&swbst_reg>;
 	};

 	reg_usb_otg2_vbus: regulator@1 {
···
 		regulator-max-microvolt = <5000000>;
 		gpio = <&gpio4 2 0>;
 		enable-active-high;
+		vin-supply = <&swbst_reg>;
 	};

 	reg_aud3v: regulator@2 {
+1 -1
arch/arm/boot/dts/omap5-core-thermal.dtsi
···
 core_thermal: core_thermal {
 	polling-delay-passive = <250>; /* milliseconds */
-	polling-delay = <1000>; /* milliseconds */
+	polling-delay = <500>; /* milliseconds */

 	/* sensor ID */
 	thermal-sensors = <&bandgap 2>;
+1 -1
arch/arm/boot/dts/omap5-gpu-thermal.dtsi
···
 gpu_thermal: gpu_thermal {
 	polling-delay-passive = <250>; /* milliseconds */
-	polling-delay = <1000>; /* milliseconds */
+	polling-delay = <500>; /* milliseconds */

 	/* sensor ID */
 	thermal-sensors = <&bandgap 1>;
+4
arch/arm/boot/dts/omap5.dtsi
···
 	};
 };

+&cpu_thermal {
+	polling-delay = <500>; /* milliseconds */
+};
+
 /include/ "omap54xx-clocks.dtsi"
+37 -4
arch/arm/boot/dts/omap54xx-clocks.dtsi
···
 		ti,index-starts-at-one;
 	};

+	dpll_core_byp_mux: dpll_core_byp_mux {
+		#clock-cells = <0>;
+		compatible = "ti,mux-clock";
+		clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>;
+		ti,bit-shift = <23>;
+		reg = <0x012c>;
+	};
+
 	dpll_core_ck: dpll_core_ck {
 		#clock-cells = <0>;
 		compatible = "ti,omap4-dpll-core-clock";
-		clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>;
+		clocks = <&sys_clkin>, <&dpll_core_byp_mux>;
 		reg = <0x0120>, <0x0124>, <0x012c>, <0x0128>;
 	};
···
 		clock-div = <1>;
 	};

+	dpll_iva_byp_mux: dpll_iva_byp_mux {
+		#clock-cells = <0>;
+		compatible = "ti,mux-clock";
+		clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>;
+		ti,bit-shift = <23>;
+		reg = <0x01ac>;
+	};
+
 	dpll_iva_ck: dpll_iva_ck {
 		#clock-cells = <0>;
 		compatible = "ti,omap4-dpll-clock";
-		clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>;
+		clocks = <&sys_clkin>, <&dpll_iva_byp_mux>;
 		reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>;
 	};
···
 	};
 };
 &cm_core_clocks {
+
+	dpll_per_byp_mux: dpll_per_byp_mux {
+		#clock-cells = <0>;
+		compatible = "ti,mux-clock";
+		clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>;
+		ti,bit-shift = <23>;
+		reg = <0x014c>;
+	};
+
 	dpll_per_ck: dpll_per_ck {
 		#clock-cells = <0>;
 		compatible = "ti,omap4-dpll-clock";
-		clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>;
+		clocks = <&sys_clkin>, <&dpll_per_byp_mux>;
 		reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>;
 	};
···
 		ti,index-starts-at-one;
 	};

+	dpll_usb_byp_mux: dpll_usb_byp_mux {
+		#clock-cells = <0>;
+		compatible = "ti,mux-clock";
+		clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>;
+		ti,bit-shift = <23>;
+		reg = <0x018c>;
+	};
+
 	dpll_usb_ck: dpll_usb_ck {
 		#clock-cells = <0>;
 		compatible = "ti,omap4-dpll-j-type-clock";
-		clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>;
+		clocks = <&sys_clkin>, <&dpll_usb_byp_mux>;
 		reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>;
 	};
+1 -2
arch/arm/boot/dts/sama5d3.dtsi
···
 			atmel,watchdog-type = "hardware";
 			atmel,reset-type = "all";
 			atmel,dbg-halt;
-			atmel,idle-halt;
 			status = "disabled";
 		};
···
 			compatible = "atmel,at91sam9g45-ehci", "usb-ehci";
 			reg = <0x00700000 0x100000>;
 			interrupts = <32 IRQ_TYPE_LEVEL_HIGH 2>;
-			clocks = <&usb>, <&uhphs_clk>, <&uhpck>;
+			clocks = <&utmi>, <&uhphs_clk>, <&uhpck>;
 			clock-names = "usb_clk", "ehci_clk", "uhpck";
 			status = "disabled";
 		};
+5 -4
arch/arm/boot/dts/sama5d4.dtsi
···
 		gpio4 = &pioE;
 		tcb0 = &tcb0;
 		tcb1 = &tcb1;
+		i2c0 = &i2c0;
 		i2c2 = &i2c2;
 	};
 	cpus {
···
 			compatible = "atmel,at91sam9g45-ehci", "usb-ehci";
 			reg = <0x00600000 0x100000>;
 			interrupts = <46 IRQ_TYPE_LEVEL_HIGH 2>;
-			clocks = <&usb>, <&uhphs_clk>, <&uhpck>;
+			clocks = <&utmi>, <&uhphs_clk>, <&uhpck>;
 			clock-names = "usb_clk", "ehci_clk", "uhpck";
 			status = "disabled";
 		};
···
 			lcdck: lcdck {
 				#clock-cells = <0>;
-				reg = <4>;
-				clocks = <&smd>;
+				reg = <3>;
+				clocks = <&mck>;
 			};

 			smdck: smdck {
···
 				reg = <50>;
 			};

-			lcd_clk: lcd_clk {
+			lcdc_clk: lcdc_clk {
 				#clock-cells = <0>;
 				reg = <51>;
 			};
+6
arch/arm/boot/dts/socfpga.dtsi
···
 			reg-shift = <2>;
 			reg-io-width = <4>;
 			clocks = <&l4_sp_clk>;
+			dmas = <&pdma 28>,
+			       <&pdma 29>;
+			dma-names = "tx", "rx";
 		};

 		uart1: serial1@ffc03000 {
···
 			reg-shift = <2>;
 			reg-io-width = <4>;
 			clocks = <&l4_sp_clk>;
+			dmas = <&pdma 30>,
+			       <&pdma 31>;
+			dma-names = "tx", "rx";
 		};

 		rst: rstmgr@ffd05000 {
+1
arch/arm/configs/at91_dt_defconfig
···
 CONFIG_BLK_DEV_SD=y
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
+CONFIG_ARM_AT91_ETHER=y
 CONFIG_MACB=y
 # CONFIG_NET_VENDOR_BROADCOM is not set
 CONFIG_DM9000=y
+1 -1
arch/arm/configs/multi_v7_defconfig
···
 CONFIG_PCI_RCAR_GEN2_PCIE=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_SMP=y
-CONFIG_NR_CPUS=8
+CONFIG_NR_CPUS=16
 CONFIG_HIGHPTE=y
 CONFIG_CMA=y
 CONFIG_ARM_APPENDED_DTB=y
+1
arch/arm/configs/omap2plus_defconfig
···
 CONFIG_PWM_TWL_LED=m
 CONFIG_OMAP_USB2=m
 CONFIG_TI_PIPE3=y
+CONFIG_TWL4030_USB=m
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_FS_XATTR is not set
-2
arch/arm/configs/sama5_defconfig
···
 CONFIG_SYSVIPC=y
 CONFIG_IRQ_DOMAIN_DEBUG=y
 CONFIG_LOG_BUF_SHIFT=14
-CONFIG_SYSFS_DEPRECATED=y
-CONFIG_SYSFS_DEPRECATED_V2=y
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_EMBEDDED=y
 CONFIG_SLAB=y
+1
arch/arm/configs/sunxi_defconfig
···
 CONFIG_PERF_EVENTS=y
 CONFIG_ARCH_SUNXI=y
 CONFIG_SMP=y
+CONFIG_NR_CPUS=8
 CONFIG_AEABI=y
 CONFIG_HIGHMEM=y
 CONFIG_HIGHPTE=y
+1 -1
arch/arm/configs/vexpress_defconfig
···
 CONFIG_USB=y
 CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
 CONFIG_USB_MON=y
-CONFIG_USB_ISP1760_HCD=y
 CONFIG_USB_STORAGE=y
+CONFIG_USB_ISP1760=y
 CONFIG_MMC=y
 CONFIG_MMC_ARMMMCI=y
 CONFIG_NEW_LEDS=y
+4 -1
arch/arm/include/debug/at91.S
···
 #define AT91_DBGU 0xfc00c000 /* SAMA5D4_BASE_USART3 */
 #endif

-/* Keep in sync with mach-at91/include/mach/hardware.h */
+#ifdef CONFIG_MMU
 #define AT91_IO_P2V(x) ((x) - 0x01000000)
+#else
+#define AT91_IO_P2V(x) (x)
+#endif

 #define AT91_DBGU_SR (0x14) /* Status Register */
 #define AT91_DBGU_THR (0x1c) /* Transmitter Holding Register */
+10 -12
arch/arm/mach-at91/pm.c
···
 	phys_addr_t sram_pbase;
 	unsigned long sram_base;
 	struct device_node *node;
-	struct platform_device *pdev;
+	struct platform_device *pdev = NULL;

-	node = of_find_compatible_node(NULL, NULL, "mmio-sram");
-	if (!node) {
-		pr_warn("%s: failed to find sram node!\n", __func__);
-		return;
+	for_each_compatible_node(node, NULL, "mmio-sram") {
+		pdev = of_find_device_by_node(node);
+		if (pdev) {
+			of_node_put(node);
+			break;
+		}
 	}

-	pdev = of_find_device_by_node(node);
 	if (!pdev) {
 		pr_warn("%s: failed to find sram device!\n", __func__);
-		goto put_node;
+		return;
 	}

 	sram_pool = dev_get_gen_pool(&pdev->dev);
 	if (!sram_pool) {
 		pr_warn("%s: sram pool unavailable!\n", __func__);
-		goto put_node;
+		return;
 	}

 	sram_base = gen_pool_alloc(sram_pool, at91_slow_clock_sz);
 	if (!sram_base) {
 		pr_warn("%s: unable to alloc ocram!\n", __func__);
-		goto put_node;
+		return;
 	}

 	sram_pbase = gen_pool_virt_to_phys(sram_pool, sram_base);
 	slow_clock = __arm_ioremap_exec(sram_pbase, at91_slow_clock_sz, false);
-
-put_node:
-	of_node_put(node);
 }
 #endif
+1 -1
arch/arm/mach-at91/pm.h
···
 		"	mcr	p15, 0, %0, c7, c0, 4\n\t"
 		"	str	%5, [%1, %2]"
 		:
-		: "r" (0), "r" (AT91_BASE_SYS), "r" (AT91RM9200_SDRAMC_LPR),
+		: "r" (0), "r" (at91_ramc_base[0]), "r" (AT91RM9200_SDRAMC_LPR),
 		  "r" (1), "r" (AT91RM9200_SDRAMC_SRR),
 		  "r" (lpr));
 }
+46 -34
arch/arm/mach-at91/pm_slowclock.S
···
 */
 #undef SLOWDOWN_MASTER_CLOCK

-#define MCKRDY_TIMEOUT		1000
-#define MOSCRDY_TIMEOUT	1000
-#define PLLALOCK_TIMEOUT	1000
-#define PLLBLOCK_TIMEOUT	1000
-
 pmc	.req	r0
 sdramc	.req	r1
 ramc1	.req	r2
···
 * Wait until master clock is ready (after switching master clock source)
 */
	.macro wait_mckrdy
-	mov	tmp2, #MCKRDY_TIMEOUT
-1:	sub	tmp2, tmp2, #1
-	cmp	tmp2, #0
-	beq	2f
-	ldr	tmp1, [pmc, #AT91_PMC_SR]
+1:	ldr	tmp1, [pmc, #AT91_PMC_SR]
 	tst	tmp1, #AT91_PMC_MCKRDY
 	beq	1b
-2:
	.endm

 /*
 * Wait until master oscillator has stabilized.
 */
	.macro wait_moscrdy
-	mov	tmp2, #MOSCRDY_TIMEOUT
-1:	sub	tmp2, tmp2, #1
-	cmp	tmp2, #0
-	beq	2f
-	ldr	tmp1, [pmc, #AT91_PMC_SR]
+1:	ldr	tmp1, [pmc, #AT91_PMC_SR]
 	tst	tmp1, #AT91_PMC_MOSCS
 	beq	1b
-2:
	.endm

 /*
 * Wait until PLLA has locked.
 */
	.macro wait_pllalock
-	mov	tmp2, #PLLALOCK_TIMEOUT
-1:	sub	tmp2, tmp2, #1
-	cmp	tmp2, #0
-	beq	2f
-	ldr	tmp1, [pmc, #AT91_PMC_SR]
+1:	ldr	tmp1, [pmc, #AT91_PMC_SR]
 	tst	tmp1, #AT91_PMC_LOCKA
 	beq	1b
-2:
	.endm

 /*
 * Wait until PLLB has locked.
 */
	.macro wait_pllblock
-	mov	tmp2, #PLLBLOCK_TIMEOUT
-1:	sub	tmp2, tmp2, #1
-	cmp	tmp2, #0
-	beq	2f
-	ldr	tmp1, [pmc, #AT91_PMC_SR]
+1:	ldr	tmp1, [pmc, #AT91_PMC_SR]
 	tst	tmp1, #AT91_PMC_LOCKB
 	beq	1b
-2:
	.endm

	.text
+
+	.arm

 /* void at91_slow_clock(void __iomem *pmc, void __iomem *sdramc,
 *			void __iomem *ramc1, int memctrl)
···
	cmp	memctrl, #AT91_MEMCTRL_DDRSDR
	bne	sdr_sr_enable

+	/* LPDDR1 --> force DDR2 mode during self-refresh */
+	ldr	tmp1, [sdramc, #AT91_DDRSDRC_MDR]
+	str	tmp1, .saved_sam9_mdr
+	bic	tmp1, tmp1, #~AT91_DDRSDRC_MD
+	cmp	tmp1, #AT91_DDRSDRC_MD_LOW_POWER_DDR
+	ldreq	tmp1, [sdramc, #AT91_DDRSDRC_MDR]
+	biceq	tmp1, tmp1, #AT91_DDRSDRC_MD
+	orreq	tmp1, tmp1, #AT91_DDRSDRC_MD_DDR2
+	streq	tmp1, [sdramc, #AT91_DDRSDRC_MDR]
+
	/* prepare for DDRAM self-refresh mode */
	ldr	tmp1, [sdramc, #AT91_DDRSDRC_LPR]
	str	tmp1, .saved_sam9_lpr
···
	/* figure out if we use the second ram controller */
	cmp	ramc1, #0
-	ldrne	tmp2, [ramc1, #AT91_DDRSDRC_LPR]
-	strne	tmp2, .saved_sam9_lpr1
-	bicne	tmp2, #AT91_DDRSDRC_LPCB
-	orrne	tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH
+	beq	ddr_no_2nd_ctrl
+
+	ldr	tmp2, [ramc1, #AT91_DDRSDRC_MDR]
+	str	tmp2, .saved_sam9_mdr1
+	bic	tmp2, tmp2, #~AT91_DDRSDRC_MD
+	cmp	tmp2, #AT91_DDRSDRC_MD_LOW_POWER_DDR
+	ldreq	tmp2, [ramc1, #AT91_DDRSDRC_MDR]
+	biceq	tmp2, tmp2, #AT91_DDRSDRC_MD
+	orreq	tmp2, tmp2, #AT91_DDRSDRC_MD_DDR2
+	streq	tmp2, [ramc1, #AT91_DDRSDRC_MDR]
+
+	ldr	tmp2, [ramc1, #AT91_DDRSDRC_LPR]
+	str	tmp2, .saved_sam9_lpr1
+	bic	tmp2, #AT91_DDRSDRC_LPCB
+	orr	tmp2, #AT91_DDRSDRC_LPCB_SELF_REFRESH

	/* Enable DDRAM self-refresh mode */
+	str	tmp2, [ramc1, #AT91_DDRSDRC_LPR]
+ddr_no_2nd_ctrl:
	str	tmp1, [sdramc, #AT91_DDRSDRC_LPR]
-	strne	tmp2, [ramc1, #AT91_DDRSDRC_LPR]

	b	sdr_sr_done
···
	/* Turn off the main oscillator */
	ldr	tmp1, [pmc, #AT91_CKGR_MOR]
	bic	tmp1, tmp1, #AT91_PMC_MOSCEN
+	orr	tmp1, tmp1, #AT91_PMC_KEY
	str	tmp1, [pmc, #AT91_CKGR_MOR]

	/* Wait for interrupt */
···
	/* Turn on the main oscillator */
	ldr	tmp1, [pmc, #AT91_CKGR_MOR]
	orr	tmp1, tmp1, #AT91_PMC_MOSCEN
+	orr	tmp1, tmp1, #AT91_PMC_KEY
	str	tmp1, [pmc, #AT91_CKGR_MOR]

	wait_moscrdy
···
	*/
	cmp	memctrl, #AT91_MEMCTRL_DDRSDR
	bne	sdr_en_restore
+	/* Restore MDR in case of LPDDR1 */
+	ldr	tmp1, .saved_sam9_mdr
+	str	tmp1, [sdramc, #AT91_DDRSDRC_MDR]
	/* Restore LPR on AT91 with DDRAM */
	ldr	tmp1, .saved_sam9_lpr
	str	tmp1, [sdramc, #AT91_DDRSDRC_LPR]

	/* if we use the second ram controller */
	cmp	ramc1, #0
+	ldrne	tmp2, .saved_sam9_mdr1
+	strne	tmp2, [ramc1, #AT91_DDRSDRC_MDR]
	ldrne	tmp2, .saved_sam9_lpr1
	strne	tmp2, [ramc1, #AT91_DDRSDRC_LPR]
···
	.word 0

 .saved_sam9_lpr1:
+	.word 0
+
+.saved_sam9_mdr:
+	.word 0
+
+.saved_sam9_mdr1:
	.word 0

 ENTRY(at91_slow_clock_sz)
+1 -2
arch/arm/mach-exynos/platsmp.c
···
 */
 void exynos_cpu_power_down(int cpu)
 {
-	if (cpu == 0 && (of_machine_is_compatible("samsung,exynos5420") ||
-		of_machine_is_compatible("samsung,exynos5800"))) {
+	if (cpu == 0 && (soc_is_exynos5420() || soc_is_exynos5800())) {
 		/*
 		 * Bypass power down for CPU0 during suspend. Check for
 		 * the SYS_PWR_REG value to decide if we are suspending
+28
arch/arm/mach-exynos/pm_domains.c
···
 		of_genpd_add_provider_simple(np, &pd->pd);
 	}

+	/* Assign the child power domains to their parents */
+	for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {
+		struct generic_pm_domain *child_domain, *parent_domain;
+		struct of_phandle_args args;
+
+		args.np = np;
+		args.args_count = 0;
+		child_domain = of_genpd_get_from_provider(&args);
+		if (!child_domain)
+			continue;
+
+		if (of_parse_phandle_with_args(np, "power-domains",
+					       "#power-domain-cells", 0, &args) != 0)
+			continue;
+
+		parent_domain = of_genpd_get_from_provider(&args);
+		if (!parent_domain)
+			continue;
+
+		if (pm_genpd_add_subdomain(parent_domain, child_domain))
+			pr_warn("%s failed to add subdomain: %s\n",
+				parent_domain->name, child_domain->name);
+		else
+			pr_info("%s has as child subdomain: %s.\n",
+				parent_domain->name, child_domain->name);
+		of_node_put(np);
+	}
+
 	return 0;
 }
 arch_initcall(exynos4_pm_init_power_domain);
+2 -2
arch/arm/mach-exynos/suspend.c
···
 static u32 exynos_irqwake_intmask = 0xffffffff;

 static const struct exynos_wkup_irq exynos3250_wkup_irq[] = {
-	{ 73, BIT(1) }, /* RTC alarm */
-	{ 74, BIT(2) }, /* RTC tick */
+	{ 105, BIT(1) }, /* RTC alarm */
+	{ 106, BIT(2) }, /* RTC tick */
 	{ /* sentinel */ },
 };
+3 -2
arch/arm/mach-imx/mach-imx6q.c
···
 	 * set bit IOMUXC_GPR1[21]. Or the PTP clock must be from pad
 	 * (external OSC), and we need to clear the bit.
 	 */
-	clksel = ptp_clk == enet_ref ? IMX6Q_GPR1_ENET_CLK_SEL_ANATOP :
-				       IMX6Q_GPR1_ENET_CLK_SEL_PAD;
+	clksel = clk_is_match(ptp_clk, enet_ref) ?
+		 IMX6Q_GPR1_ENET_CLK_SEL_ANATOP :
+		 IMX6Q_GPR1_ENET_CLK_SEL_PAD;
 	gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
 	if (!IS_ERR(gpr))
 		regmap_update_bits(gpr, IOMUXC_GPR1,
+5 -5
arch/arm/mach-omap2/omap_hwmod.c
···
 	if (ret == -EBUSY)
 		pr_warn("omap_hwmod: %s: failed to hardreset\n", oh->name);

-	if (!ret) {
+	if (oh->clkdm) {
 		/*
 		 * Set the clockdomain to HW_AUTO, assuming that the
 		 * previous state was HW_AUTO.
 		 */
-		if (oh->clkdm && hwsup)
+		if (hwsup)
 			clkdm_allow_idle(oh->clkdm);
-	} else {
-		if (oh->clkdm)
-			clkdm_hwmod_disable(oh->clkdm, oh);
+
+		clkdm_hwmod_disable(oh->clkdm, oh);
 	}

 	return ret;
···
 	INIT_LIST_HEAD(&oh->master_ports);
 	INIT_LIST_HEAD(&oh->slave_ports);
 	spin_lock_init(&oh->_lock);
+	lockdep_set_class(&oh->_lock, &oh->hwmod_key);

 	oh->_state = _HWMOD_STATE_REGISTERED;
+1
arch/arm/mach-omap2/omap_hwmod.h
···
 	u32 _sysc_cache;
 	void __iomem *_mpu_rt_va;
 	spinlock_t _lock;
+	struct lock_class_key hwmod_key; /* unique lock class */
 	struct list_head node;
 	struct omap_hwmod_ocp_if *_mpu_port;
 	unsigned int (*xlate_irq)(unsigned int);
+24 -79
arch/arm/mach-omap2/omap_hwmod_7xx_data.c
···
 *
 */

-static struct omap_hwmod_class dra7xx_pcie_hwmod_class = {
+static struct omap_hwmod_class dra7xx_pciess_hwmod_class = {
 	.name	= "pcie",
 };

 /* pcie1 */
-static struct omap_hwmod dra7xx_pcie1_hwmod = {
+static struct omap_hwmod dra7xx_pciess1_hwmod = {
 	.name		= "pcie1",
-	.class		= &dra7xx_pcie_hwmod_class,
+	.class		= &dra7xx_pciess_hwmod_class,
 	.clkdm_name	= "pcie_clkdm",
-	.main_clk	= "l4_root_clk_div",
-	.prcm = {
-		.omap4 = {
-			.clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET,
-			.modulemode   = MODULEMODE_SWCTRL,
-		},
-	},
-};
-
-/* pcie2 */
-static struct omap_hwmod dra7xx_pcie2_hwmod = {
-	.name		= "pcie2",
-	.class		= &dra7xx_pcie_hwmod_class,
-	.clkdm_name	= "pcie_clkdm",
-	.main_clk	= "l4_root_clk_div",
-	.prcm = {
-		.omap4 = {
-			.clkctrl_offs = DRA7XX_CM_PCIE_CLKSTCTRL_OFFSET,
-			.modulemode   = MODULEMODE_SWCTRL,
-		},
-	},
-};
-
-/*
- * 'PCIE PHY' class
- *
- */
-
-static struct omap_hwmod_class dra7xx_pcie_phy_hwmod_class = {
-	.name	= "pcie-phy",
-};
-
-/* pcie1 phy */
-static struct omap_hwmod dra7xx_pcie1_phy_hwmod = {
-	.name		= "pcie1-phy",
-	.class		= &dra7xx_pcie_phy_hwmod_class,
-	.clkdm_name	= "l3init_clkdm",
 	.main_clk	= "l4_root_clk_div",
 	.prcm = {
 		.omap4 = {
···
 	},
 };

-/* pcie2 phy */
-static struct omap_hwmod dra7xx_pcie2_phy_hwmod = {
-	.name		= "pcie2-phy",
-	.class		= &dra7xx_pcie_phy_hwmod_class,
-	.clkdm_name	= "l3init_clkdm",
+/* pcie2 */
+static struct omap_hwmod dra7xx_pciess2_hwmod = {
+	.name		= "pcie2",
+	.class		= &dra7xx_pciess_hwmod_class,
+	.clkdm_name	= "pcie_clkdm",
 	.main_clk	= "l4_root_clk_div",
 	.prcm = {
 		.omap4 = {
···
 	.user		= OCP_USER_MPU | OCP_USER_SDMA,
 };

-/* l3_main_1 -> pcie1 */
-static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie1 = {
+/* l3_main_1 -> pciess1 */
+static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess1 = {
 	.master		= &dra7xx_l3_main_1_hwmod,
-	.slave		= &dra7xx_pcie1_hwmod,
+	.slave		= &dra7xx_pciess1_hwmod,
 	.clk		= "l3_iclk_div",
 	.user		= OCP_USER_MPU | OCP_USER_SDMA,
 };

-/* l4_cfg -> pcie1 */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1 = {
+/* l4_cfg -> pciess1 */
+static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess1 = {
 	.master		= &dra7xx_l4_cfg_hwmod,
-	.slave		= &dra7xx_pcie1_hwmod,
+	.slave		= &dra7xx_pciess1_hwmod,
 	.clk		= "l4_root_clk_div",
 	.user		= OCP_USER_MPU | OCP_USER_SDMA,
 };

-/* l3_main_1 -> pcie2 */
-static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pcie2 = {
+/* l3_main_1 -> pciess2 */
+static struct omap_hwmod_ocp_if dra7xx_l3_main_1__pciess2 = {
 	.master		= &dra7xx_l3_main_1_hwmod,
-	.slave		= &dra7xx_pcie2_hwmod,
+	.slave		= &dra7xx_pciess2_hwmod,
 	.clk		= "l3_iclk_div",
 	.user		= OCP_USER_MPU | OCP_USER_SDMA,
 };

-/* l4_cfg -> pcie2 */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2 = {
+/* l4_cfg -> pciess2 */
+static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pciess2 = {
 	.master		= &dra7xx_l4_cfg_hwmod,
-	.slave		= &dra7xx_pcie2_hwmod,
-	.clk		= "l4_root_clk_div",
-	.user		= OCP_USER_MPU | OCP_USER_SDMA,
-};
-
-/* l4_cfg -> pcie1 phy */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie1_phy = {
-	.master		= &dra7xx_l4_cfg_hwmod,
-	.slave		= &dra7xx_pcie1_phy_hwmod,
-	.clk		= "l4_root_clk_div",
-	.user		= OCP_USER_MPU | OCP_USER_SDMA,
-};
-
-/* l4_cfg -> pcie2 phy */
-static struct omap_hwmod_ocp_if dra7xx_l4_cfg__pcie2_phy = {
-	.master		= &dra7xx_l4_cfg_hwmod,
-	.slave		= &dra7xx_pcie2_phy_hwmod,
+	.slave		= &dra7xx_pciess2_hwmod,
 	.clk		= "l4_root_clk_div",
 	.user		= OCP_USER_MPU | OCP_USER_SDMA,
 };
···
 	&dra7xx_l4_cfg__mpu,
 	&dra7xx_l4_cfg__ocp2scp1,
 	&dra7xx_l4_cfg__ocp2scp3,
-	&dra7xx_l3_main_1__pcie1,
-	&dra7xx_l4_cfg__pcie1,
-	&dra7xx_l3_main_1__pcie2,
-	&dra7xx_l4_cfg__pcie2,
-	&dra7xx_l4_cfg__pcie1_phy,
-	&dra7xx_l4_cfg__pcie2_phy,
+	&dra7xx_l3_main_1__pciess1,
+	&dra7xx_l4_cfg__pciess1,
+	&dra7xx_l3_main_1__pciess2,
+	&dra7xx_l4_cfg__pciess2,
 	&dra7xx_l3_main_1__qspi,
 	&dra7xx_l4_per3__rtcss,
 	&dra7xx_l4_cfg__sata,
+1
arch/arm/mach-omap2/pdata-quirks.c
··· 173 173 174 174 static void __init omap3_evm_legacy_init(void) 175 175 { 176 + hsmmc2_internal_input_clk(); 176 177 legacy_init_wl12xx(WL12XX_REFCLOCK_38, 0, 149); 177 178 } 178 179
+2 -2
arch/arm/mach-omap2/prm44xx.c
··· 252 252 { 253 253 saved_mask[0] = 254 254 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 255 - OMAP4_PRM_IRQSTATUS_MPU_OFFSET); 255 + OMAP4_PRM_IRQENABLE_MPU_OFFSET); 256 256 saved_mask[1] = 257 257 omap4_prm_read_inst_reg(OMAP4430_PRM_OCP_SOCKET_INST, 258 - OMAP4_PRM_IRQSTATUS_MPU_2_OFFSET); 258 + OMAP4_PRM_IRQENABLE_MPU_2_OFFSET); 259 259 260 260 omap4_prm_write_inst_reg(0, OMAP4430_PRM_OCP_SOCKET_INST, 261 261 OMAP4_PRM_IRQENABLE_MPU_OFFSET);
+1
arch/arm/mach-pxa/idp.c
··· 36 36 #include <linux/platform_data/video-pxafb.h> 37 37 #include <mach/bitfield.h> 38 38 #include <linux/platform_data/mmc-pxamci.h> 39 + #include <linux/smc91x.h> 39 40 40 41 #include "generic.h" 41 42 #include "devices.h"
+1 -1
arch/arm/mach-pxa/lpd270.c
··· 195 195 }; 196 196 197 197 struct smc91x_platdata smc91x_platdata = { 198 - .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT; 198 + .flags = SMC91X_USE_16BIT | SMC91X_NOWAIT, 199 199 }; 200 200 201 201 static struct platform_device smc91x_device = {
+2 -2
arch/arm/mach-sa1100/neponset.c
··· 268 268 .id = 0, 269 269 .res = smc91x_resources, 270 270 .num_res = ARRAY_SIZE(smc91x_resources), 271 - .data = &smc91c_platdata, 272 - .size_data = sizeof(smc91c_platdata), 271 + .data = &smc91x_platdata, 272 + .size_data = sizeof(smc91x_platdata), 273 273 }; 274 274 int ret, irq; 275 275
+1 -1
arch/arm/mach-sa1100/pleb.c
··· 54 54 .num_resources = ARRAY_SIZE(smc91x_resources), 55 55 .resource = smc91x_resources, 56 56 .dev = { 57 - .platform_data = &smc91c_platdata, 57 + .platform_data = &smc91x_platdata, 58 58 }, 59 59 }; 60 60
+1 -1
arch/arm/mach-socfpga/core.h
··· 45 45 46 46 extern unsigned long socfpga_cpu1start_addr; 47 47 48 - #define SOCFPGA_SCU_VIRT_BASE 0xfffec000 48 + #define SOCFPGA_SCU_VIRT_BASE 0xfee00000 49 49 50 50 #endif
+5
arch/arm/mach-socfpga/socfpga.c
··· 23 23 #include <asm/hardware/cache-l2x0.h> 24 24 #include <asm/mach/arch.h> 25 25 #include <asm/mach/map.h> 26 + #include <asm/cacheflush.h> 26 27 27 28 #include "core.h" 28 29 ··· 73 72 if (of_property_read_u32(np, "cpu1-start-addr", 74 73 (u32 *) &socfpga_cpu1start_addr)) 75 74 pr_err("SMP: Need cpu1-start-addr in device tree.\n"); 75 + 76 + /* Ensure that socfpga_cpu1start_addr is visible to other CPUs */ 77 + smp_wmb(); 78 + sync_cache_w(&socfpga_cpu1start_addr); 76 79 77 80 sys_manager_base_addr = of_iomap(np, 0); 78 81
+1
arch/arm/mach-sti/board-dt.c
··· 18 18 "st,stih415", 19 19 "st,stih416", 20 20 "st,stih407", 21 + "st,stih410", 21 22 "st,stih418", 22 23 NULL 23 24 };
+2 -2
arch/arm64/boot/dts/apm/apm-storm.dtsi
··· 622 622 }; 623 623 624 624 sgenet0: ethernet@1f210000 { 625 - compatible = "apm,xgene-enet"; 625 + compatible = "apm,xgene1-sgenet"; 626 626 status = "disabled"; 627 627 reg = <0x0 0x1f210000 0x0 0xd100>, 628 628 <0x0 0x1f200000 0x0 0Xc300>, ··· 636 636 }; 637 637 638 638 xgenet: ethernet@1f610000 { 639 - compatible = "apm,xgene-enet"; 639 + compatible = "apm,xgene1-xgenet"; 640 640 status = "disabled"; 641 641 reg = <0x0 0x1f610000 0x0 0xd100>, 642 642 <0x0 0x1f600000 0x0 0Xc300>,
+3
arch/arm64/include/asm/tlb.h
··· 48 48 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, 49 49 unsigned long addr) 50 50 { 51 + __flush_tlb_pgtable(tlb->mm, addr); 51 52 pgtable_page_dtor(pte); 52 53 tlb_remove_entry(tlb, pte); 53 54 } ··· 57 56 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, 58 57 unsigned long addr) 59 58 { 59 + __flush_tlb_pgtable(tlb->mm, addr); 60 60 tlb_remove_entry(tlb, virt_to_page(pmdp)); 61 61 } 62 62 #endif ··· 66 64 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, 67 65 unsigned long addr) 68 66 { 67 + __flush_tlb_pgtable(tlb->mm, addr); 69 68 tlb_remove_entry(tlb, virt_to_page(pudp)); 70 69 } 71 70 #endif
+13
arch/arm64/include/asm/tlbflush.h
··· 144 144 } 145 145 146 146 /* 147 + * Used to invalidate the TLB (walk caches) corresponding to intermediate page 148 + * table levels (pgd/pud/pmd). 149 + */ 150 + static inline void __flush_tlb_pgtable(struct mm_struct *mm, 151 + unsigned long uaddr) 152 + { 153 + unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(mm) << 48); 154 + 155 + dsb(ishst); 156 + asm("tlbi vae1is, %0" : : "r" (addr)); 157 + dsb(ish); 158 + } 159 + /* 147 160 * On AArch64, the cache coherency is handled via the set_pte_at() function. 148 161 */ 149 162 static inline void update_mmu_cache(struct vm_area_struct *vma,
+9
arch/arm64/kernel/efi.c
··· 354 354 efi_set_pgd(current->active_mm); 355 355 preempt_enable(); 356 356 } 357 + 358 + /* 359 + * UpdateCapsule() depends on the system being shutdown via 360 + * ResetSystem(). 361 + */ 362 + bool efi_poweroff_required(void) 363 + { 364 + return efi_enabled(EFI_RUNTIME_SERVICES); 365 + }
+1 -1
arch/arm64/kernel/head.S
··· 585 585 * zeroing of .bss would clobber it. 586 586 */ 587 587 .pushsection .data..cacheline_aligned 588 - ENTRY(__boot_cpu_mode) 589 588 .align L1_CACHE_SHIFT 589 + ENTRY(__boot_cpu_mode) 590 590 .long BOOT_CPU_MODE_EL2 591 591 .long 0 592 592 .popsection
+8
arch/arm64/kernel/process.c
··· 21 21 #include <stdarg.h> 22 22 23 23 #include <linux/compat.h> 24 + #include <linux/efi.h> 24 25 #include <linux/export.h> 25 26 #include <linux/sched.h> 26 27 #include <linux/kernel.h> ··· 150 149 /* Disable interrupts first */ 151 150 local_irq_disable(); 152 151 smp_send_stop(); 152 + 153 + /* 154 + * UpdateCapsule() depends on the system being reset via 155 + * ResetSystem(). 156 + */ 157 + if (efi_enabled(EFI_RUNTIME_SERVICES)) 158 + efi_reboot(reboot_mode, NULL); 153 159 154 160 /* Now call the architecture specific reboot code. */ 155 161 if (arm_pm_restart)
+5
arch/c6x/include/asm/pgtable.h
··· 67 67 */ 68 68 #define pgtable_cache_init() do { } while (0) 69 69 70 + /* 71 + * c6x is !MMU, so define the simpliest implementation 72 + */ 73 + #define pgprot_writecombine pgprot_noncached 74 + 70 75 #include <asm-generic/pgtable.h> 71 76 72 77 #endif /* _ASM_C6X_PGTABLE_H */
+4 -3
arch/microblaze/kernel/entry.S
··· 348 348 * The LP register should point to the location where the called function 349 349 * should return. [note that MAKE_SYS_CALL uses label 1] */ 350 350 /* See if the system call number is valid */ 351 + blti r12, 5f 351 352 addi r11, r12, -__NR_syscalls; 352 - bgei r11,5f; 353 + bgei r11, 5f; 353 354 /* Figure out which function to use for this system call. */ 354 355 /* Note Microblaze barrel shift is optional, so don't rely on it */ 355 356 add r12, r12, r12; /* convert num -> ptr */ ··· 376 375 377 376 /* The syscall number is invalid, return an error. */ 378 377 5: 379 - rtsd r15, 8; /* looks like a normal subroutine return */ 378 + braid ret_from_trap 380 379 addi r3, r0, -ENOSYS; 381 380 382 381 /* Entry point used to return from a syscall/trap */ ··· 412 411 bri 1b 413 412 414 413 /* Maybe handle a signal */ 415 - 5: 414 + 5: 416 415 andi r11, r19, _TIF_SIGPENDING | _TIF_NOTIFY_RESUME; 417 416 beqi r11, 4f; /* Signals to handle, handle them */ 418 417
+47
arch/nios2/include/asm/ptrace.h
··· 15 15 16 16 #include <uapi/asm/ptrace.h> 17 17 18 + /* This struct defines the way the registers are stored on the 19 + stack during a system call. */ 20 + 18 21 #ifndef __ASSEMBLY__ 22 + struct pt_regs { 23 + unsigned long r8; /* r8-r15 Caller-saved GP registers */ 24 + unsigned long r9; 25 + unsigned long r10; 26 + unsigned long r11; 27 + unsigned long r12; 28 + unsigned long r13; 29 + unsigned long r14; 30 + unsigned long r15; 31 + unsigned long r1; /* Assembler temporary */ 32 + unsigned long r2; /* Retval LS 32bits */ 33 + unsigned long r3; /* Retval MS 32bits */ 34 + unsigned long r4; /* r4-r7 Register arguments */ 35 + unsigned long r5; 36 + unsigned long r6; 37 + unsigned long r7; 38 + unsigned long orig_r2; /* Copy of r2 ?? */ 39 + unsigned long ra; /* Return address */ 40 + unsigned long fp; /* Frame pointer */ 41 + unsigned long sp; /* Stack pointer */ 42 + unsigned long gp; /* Global pointer */ 43 + unsigned long estatus; 44 + unsigned long ea; /* Exception return address (pc) */ 45 + unsigned long orig_r7; 46 + }; 47 + 48 + /* 49 + * This is the extended stack used by signal handlers and the context 50 + * switcher: it's pushed after the normal "struct pt_regs". 51 + */ 52 + struct switch_stack { 53 + unsigned long r16; /* r16-r23 Callee-saved GP registers */ 54 + unsigned long r17; 55 + unsigned long r18; 56 + unsigned long r19; 57 + unsigned long r20; 58 + unsigned long r21; 59 + unsigned long r22; 60 + unsigned long r23; 61 + unsigned long fp; 62 + unsigned long gp; 63 + unsigned long ra; 64 + }; 65 + 19 66 #define user_mode(regs) (((regs)->estatus & ESTATUS_EU)) 20 67 21 68 #define instruction_pointer(regs) ((regs)->ra)
-32
arch/nios2/include/asm/ucontext.h
··· 1 - /* 2 - * Copyright (C) 2010 Tobias Klauser <tklauser@distanz.ch> 3 - * Copyright (C) 2004 Microtronix Datacom Ltd 4 - * 5 - * This file is subject to the terms and conditions of the GNU General Public 6 - * License. See the file "COPYING" in the main directory of this archive 7 - * for more details. 8 - */ 9 - 10 - #ifndef _ASM_NIOS2_UCONTEXT_H 11 - #define _ASM_NIOS2_UCONTEXT_H 12 - 13 - typedef int greg_t; 14 - #define NGREG 32 15 - typedef greg_t gregset_t[NGREG]; 16 - 17 - struct mcontext { 18 - int version; 19 - gregset_t gregs; 20 - }; 21 - 22 - #define MCONTEXT_VERSION 2 23 - 24 - struct ucontext { 25 - unsigned long uc_flags; 26 - struct ucontext *uc_link; 27 - stack_t uc_stack; 28 - struct mcontext uc_mcontext; 29 - sigset_t uc_sigmask; /* mask last for extensibility */ 30 - }; 31 - 32 - #endif
+2
arch/nios2/include/uapi/asm/Kbuild
··· 2 2 3 3 header-y += elf.h 4 4 header-y += ucontext.h 5 + 6 + generic-y += ucontext.h
+1 -3
arch/nios2/include/uapi/asm/elf.h
··· 50 50 51 51 typedef unsigned long elf_greg_t; 52 52 53 - #define ELF_NGREG \ 54 - ((sizeof(struct pt_regs) + sizeof(struct switch_stack)) / \ 55 - sizeof(elf_greg_t)) 53 + #define ELF_NGREG 49 56 54 typedef elf_greg_t elf_gregset_t[ELF_NGREG]; 57 55 58 56 typedef unsigned long elf_fpregset_t;
+3 -47
arch/nios2/include/uapi/asm/ptrace.h
··· 67 67 68 68 #define NUM_PTRACE_REG (PTR_TLBMISC + 1) 69 69 70 - /* this struct defines the way the registers are stored on the 71 - stack during a system call. 72 - 73 - There is a fake_regs in setup.c that has to match pt_regs.*/ 74 - 75 - struct pt_regs { 76 - unsigned long r8; /* r8-r15 Caller-saved GP registers */ 77 - unsigned long r9; 78 - unsigned long r10; 79 - unsigned long r11; 80 - unsigned long r12; 81 - unsigned long r13; 82 - unsigned long r14; 83 - unsigned long r15; 84 - unsigned long r1; /* Assembler temporary */ 85 - unsigned long r2; /* Retval LS 32bits */ 86 - unsigned long r3; /* Retval MS 32bits */ 87 - unsigned long r4; /* r4-r7 Register arguments */ 88 - unsigned long r5; 89 - unsigned long r6; 90 - unsigned long r7; 91 - unsigned long orig_r2; /* Copy of r2 ?? */ 92 - unsigned long ra; /* Return address */ 93 - unsigned long fp; /* Frame pointer */ 94 - unsigned long sp; /* Stack pointer */ 95 - unsigned long gp; /* Global pointer */ 96 - unsigned long estatus; 97 - unsigned long ea; /* Exception return address (pc) */ 98 - unsigned long orig_r7; 99 - }; 100 - 101 - /* 102 - * This is the extended stack used by signal handlers and the context 103 - * switcher: it's pushed after the normal "struct pt_regs". 104 - */ 105 - struct switch_stack { 106 - unsigned long r16; /* r16-r23 Callee-saved GP registers */ 107 - unsigned long r17; 108 - unsigned long r18; 109 - unsigned long r19; 110 - unsigned long r20; 111 - unsigned long r21; 112 - unsigned long r22; 113 - unsigned long r23; 114 - unsigned long fp; 115 - unsigned long gp; 116 - unsigned long ra; 70 + /* User structures for general purpose registers. */ 71 + struct user_pt_regs { 72 + __u32 regs[49]; 117 73 }; 118 74 119 75 #endif /* __ASSEMBLY__ */
+7 -5
arch/nios2/include/uapi/asm/sigcontext.h
··· 15 15 * details. 16 16 */ 17 17 18 - #ifndef _ASM_NIOS2_SIGCONTEXT_H 19 - #define _ASM_NIOS2_SIGCONTEXT_H 18 + #ifndef _UAPI__ASM_SIGCONTEXT_H 19 + #define _UAPI__ASM_SIGCONTEXT_H 20 20 21 - #include <asm/ptrace.h> 21 + #include <linux/types.h> 22 + 23 + #define MCONTEXT_VERSION 2 22 24 23 25 struct sigcontext { 24 - struct pt_regs regs; 25 - unsigned long sc_mask; /* old sigmask */ 26 + int version; 27 + unsigned long gregs[32]; 26 28 }; 27 29 28 30 #endif
+2 -2
arch/nios2/kernel/signal.c
··· 39 39 struct ucontext *uc, int *pr2) 40 40 { 41 41 int temp; 42 - greg_t *gregs = uc->uc_mcontext.gregs; 42 + unsigned long *gregs = uc->uc_mcontext.gregs; 43 43 int err; 44 44 45 45 /* Always make any pending restarted system calls return -EINTR */ ··· 127 127 static inline int rt_setup_ucontext(struct ucontext *uc, struct pt_regs *regs) 128 128 { 129 129 struct switch_stack *sw = (struct switch_stack *)regs - 1; 130 - greg_t *gregs = uc->uc_mcontext.gregs; 130 + unsigned long *gregs = uc->uc_mcontext.gregs; 131 131 int err = 0; 132 132 133 133 err |= __put_user(MCONTEXT_VERSION, &uc->uc_mcontext.version);
+6 -6
arch/s390/include/asm/kvm_host.h
··· 515 515 #define S390_ARCH_FAC_MASK_SIZE_U64 \ 516 516 (S390_ARCH_FAC_MASK_SIZE_BYTE / sizeof(u64)) 517 517 518 - struct s390_model_fac { 519 - /* facilities used in SIE context */ 520 - __u64 sie[S390_ARCH_FAC_LIST_SIZE_U64]; 521 - /* subset enabled by kvm */ 522 - __u64 kvm[S390_ARCH_FAC_LIST_SIZE_U64]; 518 + struct kvm_s390_fac { 519 + /* facility list requested by guest */ 520 + __u64 list[S390_ARCH_FAC_LIST_SIZE_U64]; 521 + /* facility mask supported by kvm & hosting machine */ 522 + __u64 mask[S390_ARCH_FAC_LIST_SIZE_U64]; 523 523 }; 524 524 525 525 struct kvm_s390_cpu_model { 526 - struct s390_model_fac *fac; 526 + struct kvm_s390_fac *fac; 527 527 struct cpuid cpu_id; 528 528 unsigned short ibc; 529 529 };
+1 -1
arch/s390/include/asm/mmu_context.h
··· 62 62 { 63 63 int cpu = smp_processor_id(); 64 64 65 + S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 65 66 if (prev == next) 66 67 return; 67 68 if (MACHINE_HAS_TLB_LC) ··· 74 73 atomic_dec(&prev->context.attach_count); 75 74 if (MACHINE_HAS_TLB_LC) 76 75 cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask); 77 - S390_lowcore.user_asce = next->context.asce_bits | __pa(next->pgd); 78 76 } 79 77 80 78 #define finish_arch_post_lock_switch finish_arch_post_lock_switch
+1 -10
arch/s390/include/asm/page.h
··· 37 37 #endif 38 38 } 39 39 40 - static inline void clear_page(void *page) 41 - { 42 - register unsigned long reg1 asm ("1") = 0; 43 - register void *reg2 asm ("2") = page; 44 - register unsigned long reg3 asm ("3") = 4096; 45 - asm volatile( 46 - " mvcl 2,0" 47 - : "+d" (reg2), "+d" (reg3) : "d" (reg1) 48 - : "memory", "cc"); 49 - } 40 + #define clear_page(page) memset((page), 0, PAGE_SIZE) 50 41 51 42 /* 52 43 * copy_page uses the mvcl instruction with 0xb0 padding byte in order to
+8 -4
arch/s390/kernel/jump_label.c
··· 36 36 insn->offset = (entry->target - entry->code) >> 1; 37 37 } 38 38 39 - static void jump_label_bug(struct jump_entry *entry, struct insn *insn) 39 + static void jump_label_bug(struct jump_entry *entry, struct insn *expected, 40 + struct insn *new) 40 41 { 41 42 unsigned char *ipc = (unsigned char *)entry->code; 42 - unsigned char *ipe = (unsigned char *)insn; 43 + unsigned char *ipe = (unsigned char *)expected; 44 + unsigned char *ipn = (unsigned char *)new; 43 45 44 46 pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc); 45 47 pr_emerg("Found: %02x %02x %02x %02x %02x %02x\n", 46 48 ipc[0], ipc[1], ipc[2], ipc[3], ipc[4], ipc[5]); 47 49 pr_emerg("Expected: %02x %02x %02x %02x %02x %02x\n", 48 50 ipe[0], ipe[1], ipe[2], ipe[3], ipe[4], ipe[5]); 51 + pr_emerg("New: %02x %02x %02x %02x %02x %02x\n", 52 + ipn[0], ipn[1], ipn[2], ipn[3], ipn[4], ipn[5]); 49 53 panic("Corrupted kernel text"); 50 54 } 51 55 ··· 73 69 } 74 70 if (init) { 75 71 if (memcmp((void *)entry->code, &orignop, sizeof(orignop))) 76 - jump_label_bug(entry, &old); 72 + jump_label_bug(entry, &orignop, &new); 77 73 } else { 78 74 if (memcmp((void *)entry->code, &old, sizeof(old))) 79 - jump_label_bug(entry, &old); 75 + jump_label_bug(entry, &old, &new); 80 76 } 81 77 probe_kernel_write((void *)entry->code, &new, sizeof(new)); 82 78 }
+1
arch/s390/kernel/module.c
··· 436 436 const Elf_Shdr *sechdrs, 437 437 struct module *me) 438 438 { 439 + jump_label_apply_nops(me); 439 440 vfree(me->arch.syminfo); 440 441 me->arch.syminfo = NULL; 441 442 return 0;
+1 -1
arch/s390/kernel/processor.c
··· 18 18 19 19 static DEFINE_PER_CPU(struct cpuid, cpu_id); 20 20 21 - void cpu_relax(void) 21 + void notrace cpu_relax(void) 22 22 { 23 23 if (!smp_cpu_mtid && MACHINE_HAS_DIAG44) 24 24 asm volatile("diag 0,0,0x44");
+31 -37
arch/s390/kvm/kvm-s390.c
··· 522 522 memcpy(&kvm->arch.model.cpu_id, &proc->cpuid, 523 523 sizeof(struct cpuid)); 524 524 kvm->arch.model.ibc = proc->ibc; 525 - memcpy(kvm->arch.model.fac->kvm, proc->fac_list, 525 + memcpy(kvm->arch.model.fac->list, proc->fac_list, 526 526 S390_ARCH_FAC_LIST_SIZE_BYTE); 527 527 } else 528 528 ret = -EFAULT; ··· 556 556 } 557 557 memcpy(&proc->cpuid, &kvm->arch.model.cpu_id, sizeof(struct cpuid)); 558 558 proc->ibc = kvm->arch.model.ibc; 559 - memcpy(&proc->fac_list, kvm->arch.model.fac->kvm, S390_ARCH_FAC_LIST_SIZE_BYTE); 559 + memcpy(&proc->fac_list, kvm->arch.model.fac->list, S390_ARCH_FAC_LIST_SIZE_BYTE); 560 560 if (copy_to_user((void __user *)attr->addr, proc, sizeof(*proc))) 561 561 ret = -EFAULT; 562 562 kfree(proc); ··· 576 576 } 577 577 get_cpu_id((struct cpuid *) &mach->cpuid); 578 578 mach->ibc = sclp_get_ibc(); 579 - memcpy(&mach->fac_mask, kvm_s390_fac_list_mask, 580 - kvm_s390_fac_list_mask_size() * sizeof(u64)); 579 + memcpy(&mach->fac_mask, kvm->arch.model.fac->mask, 580 + S390_ARCH_FAC_LIST_SIZE_BYTE); 581 581 memcpy((unsigned long *)&mach->fac_list, S390_lowcore.stfle_fac_list, 582 - S390_ARCH_FAC_LIST_SIZE_U64); 582 + S390_ARCH_FAC_LIST_SIZE_BYTE); 583 583 if (copy_to_user((void __user *)attr->addr, mach, sizeof(*mach))) 584 584 ret = -EFAULT; 585 585 kfree(mach); ··· 778 778 static int kvm_s390_query_ap_config(u8 *config) 779 779 { 780 780 u32 fcn_code = 0x04000000UL; 781 - u32 cc; 781 + u32 cc = 0; 782 782 783 + memset(config, 0, 128); 783 784 asm volatile( 784 785 "lgr 0,%1\n" 785 786 "lgr 2,%2\n" 786 787 ".long 0xb2af0000\n" /* PQAP(QCI) */ 787 - "ipm %0\n" 788 + "0: ipm %0\n" 788 789 "srl %0,28\n" 789 - : "=r" (cc) 790 + "1:\n" 791 + EX_TABLE(0b, 1b) 792 + : "+r" (cc) 790 793 : "r" (fcn_code), "r" (config) 791 794 : "cc", "0", "2", "memory" 792 795 ); ··· 842 839 843 840 kvm_s390_set_crycb_format(kvm); 844 841 845 - /* Disable AES/DEA protected key functions by default */ 846 - kvm->arch.crypto.aes_kw = 0; 847 - 
kvm->arch.crypto.dea_kw = 0; 842 + /* Enable AES/DEA protected key functions by default */ 843 + kvm->arch.crypto.aes_kw = 1; 844 + kvm->arch.crypto.dea_kw = 1; 845 + get_random_bytes(kvm->arch.crypto.crycb->aes_wrapping_key_mask, 846 + sizeof(kvm->arch.crypto.crycb->aes_wrapping_key_mask)); 847 + get_random_bytes(kvm->arch.crypto.crycb->dea_wrapping_key_mask, 848 + sizeof(kvm->arch.crypto.crycb->dea_wrapping_key_mask)); 848 849 849 850 return 0; 850 851 } ··· 893 886 /* 894 887 * The architectural maximum amount of facilities is 16 kbit. To store 895 888 * this amount, 2 kbyte of memory is required. Thus we need a full 896 - * page to hold the active copy (arch.model.fac->sie) and the current 897 - * facilities set (arch.model.fac->kvm). Its address size has to be 889 + * page to hold the guest facility list (arch.model.fac->list) and the 890 + * facility mask (arch.model.fac->mask). Its address size has to be 898 891 * 31 bits and word aligned. 899 892 */ 900 893 kvm->arch.model.fac = 901 - (struct s390_model_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 894 + (struct kvm_s390_fac *) get_zeroed_page(GFP_KERNEL | GFP_DMA); 902 895 if (!kvm->arch.model.fac) 903 896 goto out_nofac; 904 897 905 - memcpy(kvm->arch.model.fac->kvm, S390_lowcore.stfle_fac_list, 906 - S390_ARCH_FAC_LIST_SIZE_U64); 907 - 908 - /* 909 - * If this KVM host runs *not* in a LPAR, relax the facility bits 910 - * of the kvm facility mask by all missing facilities. This will allow 911 - * to determine the right CPU model by means of the remaining facilities. 912 - * Live guest migration must prohibit the migration of KVMs running in 913 - * a LPAR to non LPAR hosts. 914 - */ 915 - if (!MACHINE_IS_LPAR) 916 - for (i = 0; i < kvm_s390_fac_list_mask_size(); i++) 917 - kvm_s390_fac_list_mask[i] &= kvm->arch.model.fac->kvm[i]; 918 - 919 - /* 920 - * Apply the kvm facility mask to limit the kvm supported/tolerated 921 - * facility list. 922 - */ 898 + /* Populate the facility mask initially. 
*/ 899 + memcpy(kvm->arch.model.fac->mask, S390_lowcore.stfle_fac_list, 900 + S390_ARCH_FAC_LIST_SIZE_BYTE); 923 901 for (i = 0; i < S390_ARCH_FAC_LIST_SIZE_U64; i++) { 924 902 if (i < kvm_s390_fac_list_mask_size()) 925 - kvm->arch.model.fac->kvm[i] &= kvm_s390_fac_list_mask[i]; 903 + kvm->arch.model.fac->mask[i] &= kvm_s390_fac_list_mask[i]; 926 904 else 927 - kvm->arch.model.fac->kvm[i] = 0UL; 905 + kvm->arch.model.fac->mask[i] = 0UL; 928 906 } 907 + 908 + /* Populate the facility list initially. */ 909 + memcpy(kvm->arch.model.fac->list, kvm->arch.model.fac->mask, 910 + S390_ARCH_FAC_LIST_SIZE_BYTE); 929 911 930 912 kvm_s390_get_cpu_id(&kvm->arch.model.cpu_id); 931 913 kvm->arch.model.ibc = sclp_get_ibc() & 0x0fff; ··· 1161 1165 1162 1166 mutex_lock(&vcpu->kvm->lock); 1163 1167 vcpu->arch.cpu_id = vcpu->kvm->arch.model.cpu_id; 1164 - memcpy(vcpu->kvm->arch.model.fac->sie, vcpu->kvm->arch.model.fac->kvm, 1165 - S390_ARCH_FAC_LIST_SIZE_BYTE); 1166 1168 vcpu->arch.sie_block->ibc = vcpu->kvm->arch.model.ibc; 1167 1169 mutex_unlock(&vcpu->kvm->lock); 1168 1170 ··· 1206 1212 vcpu->arch.sie_block->scaol = (__u32)(__u64)kvm->arch.sca; 1207 1213 set_bit(63 - id, (unsigned long *) &kvm->arch.sca->mcn); 1208 1214 } 1209 - vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->sie; 1215 + vcpu->arch.sie_block->fac = (int) (long) kvm->arch.model.fac->list; 1210 1216 1211 1217 spin_lock_init(&vcpu->arch.local_int.lock); 1212 1218 vcpu->arch.local_int.float_int = &kvm->arch.float_int;
+2 -1
arch/s390/kvm/kvm-s390.h
··· 128 128 /* test availability of facility in a kvm intance */ 129 129 static inline int test_kvm_facility(struct kvm *kvm, unsigned long nr) 130 130 { 131 - return __test_facility(nr, kvm->arch.model.fac->kvm); 131 + return __test_facility(nr, kvm->arch.model.fac->mask) && 132 + __test_facility(nr, kvm->arch.model.fac->list); 132 133 } 133 134 134 135 /* are cpu states controlled by user space */
+1 -1
arch/s390/kvm/priv.c
··· 348 348 * We need to shift the lower 32 facility bits (bit 0-31) from a u64 349 349 * into a u32 memory representation. They will remain bits 0-31. 350 350 */ 351 - fac = *vcpu->kvm->arch.model.fac->sie >> 32; 351 + fac = *vcpu->kvm->arch.model.fac->list >> 32; 352 352 rc = write_guest_lc(vcpu, offsetof(struct _lowcore, stfl_fac_list), 353 353 &fac, sizeof(fac)); 354 354 if (rc)
+16 -12
arch/s390/pci/pci.c
··· 287 287 addr = ZPCI_IOMAP_ADDR_BASE | ((u64) idx << 48); 288 288 return (void __iomem *) addr + offset; 289 289 } 290 - EXPORT_SYMBOL_GPL(pci_iomap_range); 290 + EXPORT_SYMBOL(pci_iomap_range); 291 291 292 292 void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen) 293 293 { ··· 309 309 } 310 310 spin_unlock(&zpci_iomap_lock); 311 311 } 312 - EXPORT_SYMBOL_GPL(pci_iounmap); 312 + EXPORT_SYMBOL(pci_iounmap); 313 313 314 314 static int pci_read(struct pci_bus *bus, unsigned int devfn, int where, 315 315 int size, u32 *val) ··· 483 483 airq_iv_free_bit(zpci_aisb_iv, zdev->aisb); 484 484 } 485 485 486 - static void zpci_map_resources(struct zpci_dev *zdev) 486 + static void zpci_map_resources(struct pci_dev *pdev) 487 487 { 488 - struct pci_dev *pdev = zdev->pdev; 489 488 resource_size_t len; 490 489 int i; 491 490 ··· 498 499 } 499 500 } 500 501 501 - static void zpci_unmap_resources(struct zpci_dev *zdev) 502 + static void zpci_unmap_resources(struct pci_dev *pdev) 502 503 { 503 - struct pci_dev *pdev = zdev->pdev; 504 504 resource_size_t len; 505 505 int i; 506 506 ··· 649 651 650 652 zdev->pdev = pdev; 651 653 pdev->dev.groups = zpci_attr_groups; 652 - zpci_map_resources(zdev); 654 + zpci_map_resources(pdev); 653 655 654 656 for (i = 0; i < PCI_BAR_COUNT; i++) { 655 657 res = &pdev->resource[i]; ··· 661 663 return 0; 662 664 } 663 665 666 + void pcibios_release_device(struct pci_dev *pdev) 667 + { 668 + zpci_unmap_resources(pdev); 669 + } 670 + 664 671 int pcibios_enable_device(struct pci_dev *pdev, int mask) 665 672 { 666 673 struct zpci_dev *zdev = get_zdev(pdev); ··· 673 670 zdev->pdev = pdev; 674 671 zpci_debug_init_device(zdev); 675 672 zpci_fmb_enable_device(zdev); 676 - zpci_map_resources(zdev); 677 673 678 674 return pci_enable_resources(pdev, mask); 679 675 } ··· 681 679 { 682 680 struct zpci_dev *zdev = get_zdev(pdev); 683 681 684 - zpci_unmap_resources(zdev); 685 682 zpci_fmb_disable_device(zdev); 686 683 
zpci_debug_exit_device(zdev); 687 684 zdev->pdev = NULL; ··· 689 688 #ifdef CONFIG_HIBERNATE_CALLBACKS 690 689 static int zpci_restore(struct device *dev) 691 690 { 692 - struct zpci_dev *zdev = get_zdev(to_pci_dev(dev)); 691 + struct pci_dev *pdev = to_pci_dev(dev); 692 + struct zpci_dev *zdev = get_zdev(pdev); 693 693 int ret = 0; 694 694 695 695 if (zdev->state != ZPCI_FN_STATE_ONLINE) ··· 700 698 if (ret) 701 699 goto out; 702 700 703 - zpci_map_resources(zdev); 701 + zpci_map_resources(pdev); 704 702 zpci_register_ioat(zdev, 0, zdev->start_dma + PAGE_OFFSET, 705 703 zdev->start_dma + zdev->iommu_size - 1, 706 704 (u64) zdev->dma_table); ··· 711 709 712 710 static int zpci_freeze(struct device *dev) 713 711 { 714 - struct zpci_dev *zdev = get_zdev(to_pci_dev(dev)); 712 + struct pci_dev *pdev = to_pci_dev(dev); 713 + struct zpci_dev *zdev = get_zdev(pdev); 715 714 716 715 if (zdev->state != ZPCI_FN_STATE_ONLINE) 717 716 return 0; 718 717 719 718 zpci_unregister_ioat(zdev, 0); 719 + zpci_unmap_resources(pdev); 720 720 return clp_disable_fh(zdev); 721 721 } 722 722
+8 -9
arch/s390/pci/pci_mmio.c
··· 64 64 if (copy_from_user(buf, user_buffer, length)) 65 65 goto out; 66 66 67 - memcpy_toio(io_addr, buf, length); 68 - ret = 0; 67 + ret = zpci_memcpy_toio(io_addr, buf, length); 69 68 out: 70 69 if (buf != local_buf) 71 70 kfree(buf); ··· 97 98 goto out; 98 99 io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK)); 99 100 100 - ret = -EFAULT; 101 - if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) 101 + if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE) { 102 + ret = -EFAULT; 102 103 goto out; 103 - 104 - memcpy_fromio(buf, io_addr, length); 105 - 104 + } 105 + ret = zpci_memcpy_fromio(buf, io_addr, length); 106 + if (ret) 107 + goto out; 106 108 if (copy_to_user(user_buffer, buf, length)) 107 - goto out; 109 + ret = -EFAULT; 108 110 109 - ret = 0; 110 111 out: 111 112 if (buf != local_buf) 112 113 kfree(buf);
+1 -1
arch/x86/xen/p2m.c
··· 563 563 if (p2m_pfn == PFN_DOWN(__pa(p2m_missing))) 564 564 p2m_init(p2m); 565 565 else 566 - p2m_init_identity(p2m, pfn); 566 + p2m_init_identity(p2m, pfn & ~(P2M_PER_PAGE - 1)); 567 567 568 568 spin_lock_irqsave(&p2m_update_lock, flags); 569 569
+4 -1
drivers/acpi/acpi_lpss.c
··· 65 65 66 66 struct lpss_device_desc { 67 67 unsigned int flags; 68 + const char *clk_con_id; 68 69 unsigned int prv_offset; 69 70 size_t prv_size_override; 70 71 void (*setup)(struct lpss_private_data *pdata); ··· 141 140 142 141 static struct lpss_device_desc lpt_uart_dev_desc = { 143 142 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_LTR, 143 + .clk_con_id = "baudclk", 144 144 .prv_offset = 0x800, 145 145 .setup = lpss_uart_setup, 146 146 }; ··· 158 156 159 157 static struct lpss_device_desc byt_uart_dev_desc = { 160 158 .flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX, 159 + .clk_con_id = "baudclk", 161 160 .prv_offset = 0x800, 162 161 .setup = lpss_uart_setup, 163 162 }; ··· 316 313 return PTR_ERR(clk); 317 314 318 315 pdata->clk = clk; 319 - clk_register_clkdev(clk, NULL, devname); 316 + clk_register_clkdev(clk, dev_desc->clk_con_id, devname); 320 317 return 0; 321 318 } 322 319
+2
drivers/ata/sata_fsl.c
··· 869 869 */ 870 870 ata_msleep(ap, 1); 871 871 872 + sata_set_spd(link); 873 + 872 874 /* 873 875 * Now, bring the host controller online again, this can take time 874 876 * as PHY reset and communication establishment, 1st D2H FIS and
+19 -25
drivers/char/tpm/tpm-chip.c
··· 140 140 { 141 141 int rc; 142 142 143 - rc = device_add(&chip->dev); 144 - if (rc) { 145 - dev_err(&chip->dev, 146 - "unable to device_register() %s, major %d, minor %d, err=%d\n", 147 - chip->devname, MAJOR(chip->dev.devt), 148 - MINOR(chip->dev.devt), rc); 149 - 150 - return rc; 151 - } 152 - 153 143 rc = cdev_add(&chip->cdev, chip->dev.devt, 1); 154 144 if (rc) { 155 145 dev_err(&chip->dev, ··· 148 158 MINOR(chip->dev.devt), rc); 149 159 150 160 device_unregister(&chip->dev); 161 + return rc; 162 + } 163 + 164 + rc = device_add(&chip->dev); 165 + if (rc) { 166 + dev_err(&chip->dev, 167 + "unable to device_register() %s, major %d, minor %d, err=%d\n", 168 + chip->devname, MAJOR(chip->dev.devt), 169 + MINOR(chip->dev.devt), rc); 170 + 151 171 return rc; 152 172 } 153 173 ··· 174 174 * tpm_chip_register() - create a character device for the TPM chip 175 175 * @chip: TPM chip to use. 176 176 * 177 - * Creates a character device for the TPM chip and adds sysfs interfaces for 178 - * the device, PPI and TCPA. As the last step this function adds the 179 - * chip to the list of TPM chips available for use. 177 + * Creates a character device for the TPM chip and adds sysfs attributes for 178 + * the device. As the last step this function adds the chip to the list of TPM 179 + * chips available for in-kernel use. 180 180 * 181 - * NOTE: This function should be only called after the chip initialization 182 - * is complete. 183 - * 184 - * Called from tpm_<specific>.c probe function only for devices 185 - * the driver has determined it should claim. Prior to calling 186 - * this function the specific probe function has called pci_enable_device 187 - * upon errant exit from this function specific probe function should call 188 - * pci_disable_device 181 + * This function should be only called after the chip initialization is 182 + * complete. 
189 183 */ 190 184 int tpm_chip_register(struct tpm_chip *chip) 191 185 { 192 186 int rc; 193 - 194 - rc = tpm_dev_add_device(chip); 195 - if (rc) 196 - return rc; 197 187 198 188 /* Populate sysfs for TPM1 devices. */ 199 189 if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) { ··· 197 207 198 208 chip->bios_dir = tpm_bios_log_setup(chip->devname); 199 209 } 210 + 211 + rc = tpm_dev_add_device(chip); 212 + if (rc) 213 + return rc; 200 214 201 215 /* Make the chip available. */ 202 216 spin_lock(&driver_lock);
+5 -5
drivers/char/tpm/tpm_ibmvtpm.c
··· 124 124 { 125 125 struct ibmvtpm_dev *ibmvtpm; 126 126 struct ibmvtpm_crq crq; 127 - u64 *word = (u64 *) &crq; 127 + __be64 *word = (__be64 *)&crq; 128 128 int rc; 129 129 130 130 ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip); ··· 145 145 memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count); 146 146 crq.valid = (u8)IBMVTPM_VALID_CMD; 147 147 crq.msg = (u8)VTPM_TPM_COMMAND; 148 - crq.len = (u16)count; 149 - crq.data = ibmvtpm->rtce_dma_handle; 148 + crq.len = cpu_to_be16(count); 149 + crq.data = cpu_to_be32(ibmvtpm->rtce_dma_handle); 150 150 151 - rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(word[0]), 152 - cpu_to_be64(word[1])); 151 + rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]), 152 + be64_to_cpu(word[1])); 153 153 if (rc != H_SUCCESS) { 154 154 dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc); 155 155 rc = 0;
+3 -3
drivers/char/tpm/tpm_ibmvtpm.h
··· 22 22 struct ibmvtpm_crq { 23 23 u8 valid; 24 24 u8 msg; 25 - u16 len; 26 - u32 data; 27 - u64 reserved; 25 + __be16 len; 26 + __be32 data; 27 + __be64 reserved; 28 28 } __attribute__((packed, aligned(8))); 29 29 30 30 struct ibmvtpm_crq_queue {
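The tpm_ibmvtpm change above annotates the wire-format CRQ fields as `__be16`/`__be32` and converts with `cpu_to_be16()`/`cpu_to_be32()` at the point of use, so the hypervisor always sees big-endian values regardless of host byte order. A minimal portable sketch of the same idea (the `my_` helpers are hypothetical stand-ins for the kernel's byteswap macros; they build the big-endian byte sequence explicitly so the result is independent of host endianness):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for cpu_to_be16(): emit the bytes most-significant first,
 * so the in-memory layout is big-endian on any host. */
static uint16_t my_cpu_to_be16(uint16_t v)
{
    uint16_t be;
    uint8_t b[2] = { (uint8_t)(v >> 8), (uint8_t)v };

    memcpy(&be, b, 2);
    return be;
}

/* Stand-in for cpu_to_be32(), same construction for 32-bit fields. */
static uint32_t my_cpu_to_be32(uint32_t v)
{
    uint32_t be;
    uint8_t b[4] = { (uint8_t)(v >> 24), (uint8_t)(v >> 16),
                     (uint8_t)(v >> 8),  (uint8_t)v };

    memcpy(&be, b, 4);
    return be;
}
```

Annotating the struct fields as `__be*` additionally lets sparse flag any place that stores a CPU-order value into a wire-order field without a conversion.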
+14 -15
drivers/clk/clk-divider.c
··· 144 144 divider->flags); 145 145 } 146 146 147 - /* 148 - * The reverse of DIV_ROUND_UP: The maximum number which 149 - * divided by m is r 150 - */ 151 - #define MULT_ROUND_UP(r, m) ((r) * (m) + (m) - 1) 152 - 153 147 static bool _is_valid_table_div(const struct clk_div_table *table, 154 148 unsigned int div) 155 149 { ··· 219 225 unsigned long parent_rate, unsigned long rate, 220 226 unsigned long flags) 221 227 { 222 - int up, down, div; 228 + int up, down; 229 + unsigned long up_rate, down_rate; 223 230 224 - up = down = div = DIV_ROUND_CLOSEST(parent_rate, rate); 231 + up = DIV_ROUND_UP(parent_rate, rate); 232 + down = parent_rate / rate; 225 233 226 234 if (flags & CLK_DIVIDER_POWER_OF_TWO) { 227 - up = __roundup_pow_of_two(div); 228 - down = __rounddown_pow_of_two(div); 235 + up = __roundup_pow_of_two(up); 236 + down = __rounddown_pow_of_two(down); 229 237 } else if (table) { 230 - up = _round_up_table(table, div); 231 - down = _round_down_table(table, div); 238 + up = _round_up_table(table, up); 239 + down = _round_down_table(table, down); 232 240 } 233 241 234 - return (up - div) <= (div - down) ? up : down; 242 + up_rate = DIV_ROUND_UP(parent_rate, up); 243 + down_rate = DIV_ROUND_UP(parent_rate, down); 244 + 245 + return (rate - up_rate) <= (down_rate - rate) ? up : down; 235 246 } 236 247 237 248 static int _div_round(const struct clk_div_table *table, ··· 312 313 return i; 313 314 } 314 315 parent_rate = __clk_round_rate(__clk_get_parent(hw->clk), 315 - MULT_ROUND_UP(rate, i)); 316 + rate * i); 316 317 now = DIV_ROUND_UP(parent_rate, i); 317 318 if (_is_best_div(rate, now, best, flags)) { 318 319 bestdiv = i; ··· 352 353 bestdiv = readl(divider->reg) >> divider->shift; 353 354 bestdiv &= div_mask(divider->width); 354 355 bestdiv = _get_div(divider->table, bestdiv, divider->flags); 355 - return bestdiv; 356 + return DIV_ROUND_UP(*prate, bestdiv); 356 357 } 357 358 358 359 return divider_round_rate(hw, rate, prate, divider->table,
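The clk-divider rework above picks between the next-higher and next-lower divider by comparing the *rates* each one actually produces, instead of comparing divider values, since rate error is not symmetric in the divider. A simplified sketch of that selection logic (standalone helper names are hypothetical; table and power-of-two handling from the real code are omitted):

```c
#include <assert.h>

/* Integer ceiling division, mirroring the kernel's DIV_ROUND_UP(). */
static unsigned long div_round_up_ul(unsigned long a, unsigned long b)
{
    return (a + b - 1) / b;
}

/* Choose the divider whose resulting rate lands closest to the
 * requested rate. `up` gives a rate at or below the target,
 * `down` a rate at or above it. Assumes 0 < rate <= parent_rate. */
static int closest_div(unsigned long parent_rate, unsigned long rate)
{
    int up = (int)div_round_up_ul(parent_rate, rate);
    int down = (int)(parent_rate / rate);
    unsigned long up_rate = div_round_up_ul(parent_rate, (unsigned long)up);
    unsigned long down_rate = div_round_up_ul(parent_rate, (unsigned long)down);

    /* Compare achieved rates, not divider values. */
    return (rate - up_rate) <= (down_rate - rate) ? up : down;
}
```

For example, with a 100 MHz parent and a 33 MHz target, divider 3 yields 34 MHz (off by 1) while divider 4 yields 25 MHz (off by 8); comparing divider values alone would not see that asymmetry.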
+26 -1
drivers/clk/clk.c
··· 1350 1350 1351 1351 return rate; 1352 1352 } 1353 - EXPORT_SYMBOL_GPL(clk_core_get_rate); 1354 1353 1355 1354 /** 1356 1355 * clk_get_rate - return the rate of clk ··· 2168 2169 2169 2170 return clk_core_get_phase(clk->core); 2170 2171 } 2172 + 2173 + /** 2174 + * clk_is_match - check if two clk's point to the same hardware clock 2175 + * @p: clk compared against q 2176 + * @q: clk compared against p 2177 + * 2178 + * Returns true if the two struct clk pointers both point to the same hardware 2179 + * clock node. Put differently, returns true if struct clk *p and struct clk *q 2180 + * share the same struct clk_core object. 2181 + * 2182 + * Returns false otherwise. Note that two NULL clks are treated as matching. 2183 + */ 2184 + bool clk_is_match(const struct clk *p, const struct clk *q) 2185 + { 2186 + /* trivial case: identical struct clk's or both NULL */ 2187 + if (p == q) 2188 + return true; 2189 + 2190 + /* true if clk->core pointers match. Avoid derefing garbage */ 2191 + if (!IS_ERR_OR_NULL(p) && !IS_ERR_OR_NULL(q)) 2192 + if (p->core == q->core) 2193 + return true; 2194 + 2195 + return false; 2196 + } 2197 + EXPORT_SYMBOL_GPL(clk_is_match); 2171 2198 2172 2199 /** 2173 2200 * __clk_init - initialize the data structures in a struct clk
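The new `clk_is_match()` above exists because the per-user clock API hands each `clk_get()` caller a distinct `struct clk`, so pointer equality between handles no longer identifies the hardware clock; only the shared `struct clk_core` does. A toy model of that relationship (types and helper are simplified stand-ins, and the real function also guards against `ERR_PTR` values, not just NULL):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model: many struct clk handles may reference one
 * struct clk_core, which represents the hardware clock node. */
struct clk_core { int id; };
struct clk { struct clk_core *core; };

static int my_clk_is_match(const struct clk *p, const struct clk *q)
{
    /* Trivial case: identical handles, or both NULL. */
    if (p == q)
        return 1;

    /* Otherwise, match only if both handles share a core. */
    if (p && q && p->core == q->core)
        return 1;

    return 0;
}
```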
+13
drivers/clk/qcom/gcc-msm8960.c
··· 48 48 }, 49 49 }; 50 50 51 + static struct clk_regmap pll4_vote = { 52 + .enable_reg = 0x34c0, 53 + .enable_mask = BIT(4), 54 + .hw.init = &(struct clk_init_data){ 55 + .name = "pll4_vote", 56 + .parent_names = (const char *[]){ "pll4" }, 57 + .num_parents = 1, 58 + .ops = &clk_pll_vote_ops, 59 + }, 60 + }; 61 + 51 62 static struct clk_pll pll8 = { 52 63 .l_reg = 0x3144, 53 64 .m_reg = 0x3148, ··· 3034 3023 3035 3024 static struct clk_regmap *gcc_msm8960_clks[] = { 3036 3025 [PLL3] = &pll3.clkr, 3026 + [PLL4_VOTE] = &pll4_vote, 3037 3027 [PLL8] = &pll8.clkr, 3038 3028 [PLL8_VOTE] = &pll8_vote, 3039 3029 [PLL14] = &pll14.clkr, ··· 3259 3247 3260 3248 static struct clk_regmap *gcc_apq8064_clks[] = { 3261 3249 [PLL3] = &pll3.clkr, 3250 + [PLL4_VOTE] = &pll4_vote, 3262 3251 [PLL8] = &pll8.clkr, 3263 3252 [PLL8_VOTE] = &pll8_vote, 3264 3253 [PLL14] = &pll14.clkr,
-1
drivers/clk/qcom/lcc-ipq806x.c
··· 462 462 .remove = lcc_ipq806x_remove, 463 463 .driver = { 464 464 .name = "lcc-ipq806x", 465 - .owner = THIS_MODULE, 466 465 .of_match_table = lcc_ipq806x_match_table, 467 466 }, 468 467 };
+3 -4
drivers/clk/qcom/lcc-msm8960.c
··· 417 417 .mnctr_en_bit = 8, 418 418 .mnctr_reset_bit = 7, 419 419 .mnctr_mode_shift = 5, 420 - .n_val_shift = 16, 421 - .m_val_shift = 16, 420 + .n_val_shift = 24, 421 + .m_val_shift = 8, 422 422 .width = 8, 423 423 }, 424 424 .p = { ··· 547 547 return PTR_ERR(regmap); 548 548 549 549 /* Use the correct frequency plan depending on speed of PLL4 */ 550 - val = regmap_read(regmap, 0x4, &val); 550 + regmap_read(regmap, 0x4, &val); 551 551 if (val == 0x12) { 552 552 slimbus_src.freq_tbl = clk_tbl_aif_osr_492; 553 553 mi2s_osr_src.freq_tbl = clk_tbl_aif_osr_492; ··· 574 574 .remove = lcc_msm8960_remove, 575 575 .driver = { 576 576 .name = "lcc-msm8960", 577 - .owner = THIS_MODULE, 578 577 .of_match_table = lcc_msm8960_match_table, 579 578 }, 580 579 };
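One of the lcc-msm8960 fixes above is the classic out-parameter bug: `regmap_read()` returns a status code and writes the register value through its pointer argument, so `val = regmap_read(regmap, 0x4, &val)` immediately clobbered the value just read. A sketch of the corrected pattern (the `fake_regmap_read()` reader is a hypothetical stand-in, not the real regmap API):

```c
#include <assert.h>

/* Stand-in with regmap_read()'s shape: status as the return value,
 * register contents through the out-pointer. */
static int fake_regmap_read(unsigned int reg, unsigned int *val)
{
    *val = (reg == 0x4) ? 0x12 : 0;  /* pretend register contents */
    return 0;                        /* 0 == success */
}

/* Correct use: keep the out-parameter value; optionally check the
 * status, but never assign it over the value you read. */
static unsigned int read_speed_bin(void)
{
    unsigned int val = 0;

    if (fake_regmap_read(0x4, &val))
        return 0;                    /* read failed, fall back */
    return val;
}
```

With the old assignment, `val` would always compare as 0 (the success code) and the PLL4-speed-dependent frequency plan would never be selected.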
+3 -3
drivers/clk/ti/fapll.c
··· 84 84 struct fapll_data *fd = to_fapll(hw); 85 85 u32 v = readl_relaxed(fd->base); 86 86 87 - v |= (1 << FAPLL_MAIN_PLLEN); 87 + v |= FAPLL_MAIN_PLLEN; 88 88 writel_relaxed(v, fd->base); 89 89 90 90 return 0; ··· 95 95 struct fapll_data *fd = to_fapll(hw); 96 96 u32 v = readl_relaxed(fd->base); 97 97 98 - v &= ~(1 << FAPLL_MAIN_PLLEN); 98 + v &= ~FAPLL_MAIN_PLLEN; 99 99 writel_relaxed(v, fd->base); 100 100 } 101 101 ··· 104 104 struct fapll_data *fd = to_fapll(hw); 105 105 u32 v = readl_relaxed(fd->base); 106 106 107 - return v & (1 << FAPLL_MAIN_PLLEN); 107 + return v & FAPLL_MAIN_PLLEN; 108 108 } 109 109 110 110 static unsigned long ti_fapll_recalc_rate(struct clk_hw *hw,
+19 -16
drivers/gpu/drm/drm_crtc.c
··· 43 43 #include "drm_crtc_internal.h" 44 44 #include "drm_internal.h" 45 45 46 - static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev, 47 - struct drm_mode_fb_cmd2 *r, 48 - struct drm_file *file_priv); 46 + static struct drm_framebuffer * 47 + internal_framebuffer_create(struct drm_device *dev, 48 + struct drm_mode_fb_cmd2 *r, 49 + struct drm_file *file_priv); 49 50 50 51 /* Avoid boilerplate. I'm tired of typing. */ 51 52 #define DRM_ENUM_NAME_FN(fnname, list) \ ··· 2944 2943 */ 2945 2944 if (req->flags & DRM_MODE_CURSOR_BO) { 2946 2945 if (req->handle) { 2947 - fb = add_framebuffer_internal(dev, &fbreq, file_priv); 2946 + fb = internal_framebuffer_create(dev, &fbreq, file_priv); 2948 2947 if (IS_ERR(fb)) { 2949 2948 DRM_DEBUG_KMS("failed to wrap cursor buffer in drm framebuffer\n"); 2950 2949 return PTR_ERR(fb); 2951 2950 } 2952 - 2953 - drm_framebuffer_reference(fb); 2954 2951 } else { 2955 2952 fb = NULL; 2956 2953 } ··· 3307 3308 return 0; 3308 3309 } 3309 3310 3310 - static struct drm_framebuffer *add_framebuffer_internal(struct drm_device *dev, 3311 - struct drm_mode_fb_cmd2 *r, 3312 - struct drm_file *file_priv) 3311 + static struct drm_framebuffer * 3312 + internal_framebuffer_create(struct drm_device *dev, 3313 + struct drm_mode_fb_cmd2 *r, 3314 + struct drm_file *file_priv) 3313 3315 { 3314 3316 struct drm_mode_config *config = &dev->mode_config; 3315 3317 struct drm_framebuffer *fb; ··· 3348 3348 return fb; 3349 3349 } 3350 3350 3351 - mutex_lock(&file_priv->fbs_lock); 3352 - r->fb_id = fb->base.id; 3353 - list_add(&fb->filp_head, &file_priv->fbs); 3354 - DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id); 3355 - mutex_unlock(&file_priv->fbs_lock); 3356 - 3357 3351 return fb; 3358 3352 } 3359 3353 ··· 3369 3375 int drm_mode_addfb2(struct drm_device *dev, 3370 3376 void *data, struct drm_file *file_priv) 3371 3377 { 3378 + struct drm_mode_fb_cmd2 *r = data; 3372 3379 struct drm_framebuffer *fb; 3373 3380 3374 3381 if 
(!drm_core_check_feature(dev, DRIVER_MODESET)) 3375 3382 return -EINVAL; 3376 3383 3377 - fb = add_framebuffer_internal(dev, data, file_priv); 3384 + fb = internal_framebuffer_create(dev, r, file_priv); 3378 3385 if (IS_ERR(fb)) 3379 3386 return PTR_ERR(fb); 3387 + 3388 + /* Transfer ownership to the filp for reaping on close */ 3389 + 3390 + DRM_DEBUG_KMS("[FB:%d]\n", fb->base.id); 3391 + mutex_lock(&file_priv->fbs_lock); 3392 + r->fb_id = fb->base.id; 3393 + list_add(&fb->filp_head, &file_priv->fbs); 3394 + mutex_unlock(&file_priv->fbs_lock); 3380 3395 3381 3396 return 0; 3382 3397 }
+8 -3
drivers/gpu/drm/drm_dp_mst_topology.c
··· 733 733 struct drm_dp_sideband_msg_tx *txmsg) 734 734 { 735 735 bool ret; 736 - mutex_lock(&mgr->qlock); 736 + 737 + /* 738 + * All updates to txmsg->state are protected by mgr->qlock, and the two 739 + * cases we check here are terminal states. For those the barriers 740 + * provided by the wake_up/wait_event pair are enough. 741 + */ 737 742 ret = (txmsg->state == DRM_DP_SIDEBAND_TX_RX || 738 743 txmsg->state == DRM_DP_SIDEBAND_TX_TIMEOUT); 739 - mutex_unlock(&mgr->qlock); 740 744 return ret; 741 745 } 742 746 ··· 1367 1363 return 0; 1368 1364 } 1369 1365 1370 - /* must be called holding qlock */ 1371 1366 static void process_single_down_tx_qlock(struct drm_dp_mst_topology_mgr *mgr) 1372 1367 { 1373 1368 struct drm_dp_sideband_msg_tx *txmsg; 1374 1369 int ret; 1370 + 1371 + WARN_ON(!mutex_is_locked(&mgr->qlock)); 1375 1372 1376 1373 /* construct a chunk from the first msg in the tx_msg queue */ 1377 1374 if (list_empty(&mgr->tx_msg_downq)) {
+1 -1
drivers/gpu/drm/drm_mm.c
··· 403 403 unsigned rem; 404 404 405 405 rem = do_div(tmp, alignment); 406 - if (tmp) 406 + if (rem) 407 407 start += alignment - rem; 408 408 } 409 409
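The drm_mm.c one-liner above fixes a quotient/remainder mix-up: after `do_div()`, `tmp` holds the quotient and `rem` the remainder, so testing `tmp` bumped *every* start address with a nonzero quotient, including ones that were already aligned. The intended round-up, sketched with plain modulo (the kernel uses `do_div()` because 64-bit division needs a helper on 32-bit hosts):

```c
#include <assert.h>
#include <stdint.h>

/* Round `start` up to the next multiple of `alignment`, adjusting
 * only when the division leaves a remainder. */
static uint64_t align_up(uint64_t start, uint64_t alignment)
{
    uint64_t rem = start % alignment;

    if (rem)
        start += alignment - rem;
    return start;
}
```

With the old check, an already-aligned `start` of 4096 with alignment 4096 (quotient 1, remainder 0) would still have been pushed forward, wasting address space in the allocator.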
+4 -1
drivers/gpu/drm/i915/i915_cmd_parser.c
··· 836 836 } 837 837 838 838 i = 0; 839 - for_each_sg_page(obj->pages->sgl, &sg_iter, npages, first_page) 839 + for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, first_page) { 840 840 pages[i++] = sg_page_iter_page(&sg_iter); 841 + if (i == npages) 842 + break; 843 + } 841 844 842 845 addr = vmap(pages, i, 0, PAGE_KERNEL); 843 846 if (addr == NULL) {
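The vmap_batch fix above stops the sg page iterator overrunning the destination array: the walk now covers the full backing list (`nents` entries) but breaks out as soon as the requested `npages` window is filled. A loose sketch of that bounded-collection shape (plain ints stand in for sg list entries and page pointers):

```c
#include <assert.h>
#include <stddef.h>

/* Copy entries from a backing list of `nents` items into `out`,
 * stopping once `npages` have been collected rather than trusting
 * the iteration bound to match the destination size. */
static size_t collect_pages(const int *sg_pages, size_t nents,
                            int *out, size_t npages)
{
    size_t i = 0, n;

    for (n = 0; n < nents && i < npages; n++) {
        out[i++] = sg_pages[n];
        if (i == npages)
            break;    /* destination full: stop early */
    }
    return i;
}
```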
+175 -38
drivers/gpu/drm/i915/i915_debugfs.c
··· 1090 1090 seq_printf(m, "Current P-state: %d\n", 1091 1091 (rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT); 1092 1092 } else if (IS_GEN6(dev) || (IS_GEN7(dev) && !IS_VALLEYVIEW(dev)) || 1093 - IS_BROADWELL(dev)) { 1093 + IS_BROADWELL(dev) || IS_GEN9(dev)) { 1094 1094 u32 gt_perf_status = I915_READ(GEN6_GT_PERF_STATUS); 1095 1095 u32 rp_state_limits = I915_READ(GEN6_RP_STATE_LIMITS); 1096 1096 u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP); ··· 1109 1109 intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); 1110 1110 1111 1111 reqf = I915_READ(GEN6_RPNSWREQ); 1112 - reqf &= ~GEN6_TURBO_DISABLE; 1113 - if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 1114 - reqf >>= 24; 1115 - else 1116 - reqf >>= 25; 1112 + if (IS_GEN9(dev)) 1113 + reqf >>= 23; 1114 + else { 1115 + reqf &= ~GEN6_TURBO_DISABLE; 1116 + if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 1117 + reqf >>= 24; 1118 + else 1119 + reqf >>= 25; 1120 + } 1117 1121 reqf = intel_gpu_freq(dev_priv, reqf); 1118 1122 1119 1123 rpmodectl = I915_READ(GEN6_RP_CONTROL); ··· 1131 1127 rpdownei = I915_READ(GEN6_RP_CUR_DOWN_EI); 1132 1128 rpcurdown = I915_READ(GEN6_RP_CUR_DOWN); 1133 1129 rpprevdown = I915_READ(GEN6_RP_PREV_DOWN); 1134 - if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 1130 + if (IS_GEN9(dev)) 1131 + cagf = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT; 1132 + else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 1135 1133 cagf = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT; 1136 1134 else 1137 1135 cagf = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT; ··· 1159 1153 pm_ier, pm_imr, pm_isr, pm_iir, pm_mask); 1160 1154 seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status); 1161 1155 seq_printf(m, "Render p-state ratio: %d\n", 1162 - (gt_perf_status & 0xff00) >> 8); 1156 + (gt_perf_status & (IS_GEN9(dev) ? 
0x1ff00 : 0xff00)) >> 8); 1163 1157 seq_printf(m, "Render p-state VID: %d\n", 1164 1158 gt_perf_status & 0xff); 1165 1159 seq_printf(m, "Render p-state limit: %d\n", ··· 1184 1178 GEN6_CURBSYTAVG_MASK); 1185 1179 1186 1180 max_freq = (rp_state_cap & 0xff0000) >> 16; 1181 + max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1); 1187 1182 seq_printf(m, "Lowest (RPN) frequency: %dMHz\n", 1188 1183 intel_gpu_freq(dev_priv, max_freq)); 1189 1184 1190 1185 max_freq = (rp_state_cap & 0xff00) >> 8; 1186 + max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1); 1191 1187 seq_printf(m, "Nominal (RP1) frequency: %dMHz\n", 1192 1188 intel_gpu_freq(dev_priv, max_freq)); 1193 1189 1194 1190 max_freq = rp_state_cap & 0xff; 1191 + max_freq *= (IS_SKYLAKE(dev) ? GEN9_FREQ_SCALER : 1); 1195 1192 seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n", 1196 1193 intel_gpu_freq(dev_priv, max_freq)); 1197 1194 ··· 1840 1831 if (ret) 1841 1832 return ret; 1842 1833 1843 - if (dev_priv->ips.pwrctx) { 1844 - seq_puts(m, "power context "); 1845 - describe_obj(m, dev_priv->ips.pwrctx); 1846 - seq_putc(m, '\n'); 1847 - } 1848 - 1849 - if (dev_priv->ips.renderctx) { 1850 - seq_puts(m, "render context "); 1851 - describe_obj(m, dev_priv->ips.renderctx); 1852 - seq_putc(m, '\n'); 1853 - } 1854 - 1855 1834 list_for_each_entry(ctx, &dev_priv->context_list, link) { 1856 1835 if (!i915.enable_execlists && 1857 1836 ctx->legacy_hw_ctx.rcs_state == NULL) ··· 2243 2246 enum pipe pipe; 2244 2247 bool enabled = false; 2245 2248 2249 + if (!HAS_PSR(dev)) { 2250 + seq_puts(m, "PSR not supported\n"); 2251 + return 0; 2252 + } 2253 + 2246 2254 intel_runtime_pm_get(dev_priv); 2247 2255 2248 2256 mutex_lock(&dev_priv->psr.lock); ··· 2260 2258 seq_printf(m, "Re-enable work scheduled: %s\n", 2261 2259 yesno(work_busy(&dev_priv->psr.work.work))); 2262 2260 2263 - if (HAS_PSR(dev)) { 2264 - if (HAS_DDI(dev)) 2265 - enabled = I915_READ(EDP_PSR_CTL(dev)) & EDP_PSR_ENABLE; 2266 - else { 2267 - 
for_each_pipe(dev_priv, pipe) { 2268 - stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) & 2269 - VLV_EDP_PSR_CURR_STATE_MASK; 2270 - if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) || 2271 - (stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE)) 2272 - enabled = true; 2273 - } 2261 + if (HAS_DDI(dev)) 2262 + enabled = I915_READ(EDP_PSR_CTL(dev)) & EDP_PSR_ENABLE; 2263 + else { 2264 + for_each_pipe(dev_priv, pipe) { 2265 + stat[pipe] = I915_READ(VLV_PSRSTAT(pipe)) & 2266 + VLV_EDP_PSR_CURR_STATE_MASK; 2267 + if ((stat[pipe] == VLV_EDP_PSR_ACTIVE_NORFB_UP) || 2268 + (stat[pipe] == VLV_EDP_PSR_ACTIVE_SF_UPDATE)) 2269 + enabled = true; 2274 2270 } 2275 2271 } 2276 2272 seq_printf(m, "HW Enabled & Active bit: %s", yesno(enabled)); ··· 2285 2285 yesno((bool)dev_priv->psr.link_standby)); 2286 2286 2287 2287 /* CHV PSR has no kind of performance counter */ 2288 - if (HAS_PSR(dev) && HAS_DDI(dev)) { 2288 + if (HAS_DDI(dev)) { 2289 2289 psrperf = I915_READ(EDP_PSR_PERF_CNT(dev)) & 2290 2290 EDP_PSR_PERF_CNT_MASK; 2291 2291 ··· 2308 2308 u8 crc[6]; 2309 2309 2310 2310 drm_modeset_lock_all(dev); 2311 - list_for_each_entry(connector, &dev->mode_config.connector_list, 2312 - base.head) { 2311 + for_each_intel_encoder(dev, connector) { 2313 2312 2314 2313 if (connector->base.dpms != DRM_MODE_DPMS_ON) 2315 2314 continue; ··· 2676 2677 active = cursor_position(dev, crtc->pipe, &x, &y); 2677 2678 seq_printf(m, "\tcursor visible? %s, position (%d, %d), size %dx%d, addr 0x%08x, active? 
%s\n", 2678 2679 yesno(crtc->cursor_base), 2679 - x, y, crtc->cursor_width, crtc->cursor_height, 2680 + x, y, crtc->base.cursor->state->crtc_w, 2681 + crtc->base.cursor->state->crtc_h, 2680 2682 crtc->cursor_addr, yesno(active)); 2681 2683 } 2682 2684 ··· 2853 2853 for_each_pipe(dev_priv, pipe) { 2854 2854 seq_printf(m, "Pipe %c\n", pipe_name(pipe)); 2855 2855 2856 - for_each_plane(pipe, plane) { 2856 + for_each_plane(dev_priv, pipe, plane) { 2857 2857 entry = &ddb->plane[pipe][plane]; 2858 2858 seq_printf(m, " Plane%-8d%8u%8u%8u\n", plane + 1, 2859 2859 entry->start, entry->end, ··· 2866 2866 } 2867 2867 2868 2868 drm_modeset_unlock_all(dev); 2869 + 2870 + return 0; 2871 + } 2872 + 2873 + static void drrs_status_per_crtc(struct seq_file *m, 2874 + struct drm_device *dev, struct intel_crtc *intel_crtc) 2875 + { 2876 + struct intel_encoder *intel_encoder; 2877 + struct drm_i915_private *dev_priv = dev->dev_private; 2878 + struct i915_drrs *drrs = &dev_priv->drrs; 2879 + int vrefresh = 0; 2880 + 2881 + for_each_encoder_on_crtc(dev, &intel_crtc->base, intel_encoder) { 2882 + /* Encoder connected on this CRTC */ 2883 + switch (intel_encoder->type) { 2884 + case INTEL_OUTPUT_EDP: 2885 + seq_puts(m, "eDP:\n"); 2886 + break; 2887 + case INTEL_OUTPUT_DSI: 2888 + seq_puts(m, "DSI:\n"); 2889 + break; 2890 + case INTEL_OUTPUT_HDMI: 2891 + seq_puts(m, "HDMI:\n"); 2892 + break; 2893 + case INTEL_OUTPUT_DISPLAYPORT: 2894 + seq_puts(m, "DP:\n"); 2895 + break; 2896 + default: 2897 + seq_printf(m, "Other encoder (id=%d).\n", 2898 + intel_encoder->type); 2899 + return; 2900 + } 2901 + } 2902 + 2903 + if (dev_priv->vbt.drrs_type == STATIC_DRRS_SUPPORT) 2904 + seq_puts(m, "\tVBT: DRRS_type: Static"); 2905 + else if (dev_priv->vbt.drrs_type == SEAMLESS_DRRS_SUPPORT) 2906 + seq_puts(m, "\tVBT: DRRS_type: Seamless"); 2907 + else if (dev_priv->vbt.drrs_type == DRRS_NOT_SUPPORTED) 2908 + seq_puts(m, "\tVBT: DRRS_type: None"); 2909 + else 2910 + seq_puts(m, "\tVBT: DRRS_type: FIXME: 
Unrecognized Value"); 2911 + 2912 + seq_puts(m, "\n\n"); 2913 + 2914 + if (intel_crtc->config->has_drrs) { 2915 + struct intel_panel *panel; 2916 + 2917 + mutex_lock(&drrs->mutex); 2918 + /* DRRS Supported */ 2919 + seq_puts(m, "\tDRRS Supported: Yes\n"); 2920 + 2921 + /* disable_drrs() will make drrs->dp NULL */ 2922 + if (!drrs->dp) { 2923 + seq_puts(m, "Idleness DRRS: Disabled"); 2924 + mutex_unlock(&drrs->mutex); 2925 + return; 2926 + } 2927 + 2928 + panel = &drrs->dp->attached_connector->panel; 2929 + seq_printf(m, "\t\tBusy_frontbuffer_bits: 0x%X", 2930 + drrs->busy_frontbuffer_bits); 2931 + 2932 + seq_puts(m, "\n\t\t"); 2933 + if (drrs->refresh_rate_type == DRRS_HIGH_RR) { 2934 + seq_puts(m, "DRRS_State: DRRS_HIGH_RR\n"); 2935 + vrefresh = panel->fixed_mode->vrefresh; 2936 + } else if (drrs->refresh_rate_type == DRRS_LOW_RR) { 2937 + seq_puts(m, "DRRS_State: DRRS_LOW_RR\n"); 2938 + vrefresh = panel->downclock_mode->vrefresh; 2939 + } else { 2940 + seq_printf(m, "DRRS_State: Unknown(%d)\n", 2941 + drrs->refresh_rate_type); 2942 + mutex_unlock(&drrs->mutex); 2943 + return; 2944 + } 2945 + seq_printf(m, "\t\tVrefresh: %d", vrefresh); 2946 + 2947 + seq_puts(m, "\n\t\t"); 2948 + mutex_unlock(&drrs->mutex); 2949 + } else { 2950 + /* DRRS not supported. 
Print the VBT parameter*/ 2951 + seq_puts(m, "\tDRRS Supported : No"); 2952 + } 2953 + seq_puts(m, "\n"); 2954 + } 2955 + 2956 + static int i915_drrs_status(struct seq_file *m, void *unused) 2957 + { 2958 + struct drm_info_node *node = m->private; 2959 + struct drm_device *dev = node->minor->dev; 2960 + struct intel_crtc *intel_crtc; 2961 + int active_crtc_cnt = 0; 2962 + 2963 + for_each_intel_crtc(dev, intel_crtc) { 2964 + drm_modeset_lock(&intel_crtc->base.mutex, NULL); 2965 + 2966 + if (intel_crtc->active) { 2967 + active_crtc_cnt++; 2968 + seq_printf(m, "\nCRTC %d: ", active_crtc_cnt); 2969 + 2970 + drrs_status_per_crtc(m, dev, intel_crtc); 2971 + } 2972 + 2973 + drm_modeset_unlock(&intel_crtc->base.mutex); 2974 + } 2975 + 2976 + if (!active_crtc_cnt) 2977 + seq_puts(m, "No active crtc found\n"); 2869 2978 2870 2979 return 0; 2871 2980 } ··· 4471 4362 struct drm_i915_private *dev_priv = dev->dev_private; 4472 4363 unsigned int s_tot = 0, ss_tot = 0, ss_per = 0, eu_tot = 0, eu_per = 0; 4473 4364 4474 - if (INTEL_INFO(dev)->gen < 9) 4365 + if ((INTEL_INFO(dev)->gen < 8) || IS_BROADWELL(dev)) 4475 4366 return -ENODEV; 4476 4367 4477 4368 seq_puts(m, "SSEU Device Info\n"); ··· 4493 4384 yesno(INTEL_INFO(dev)->has_eu_pg)); 4494 4385 4495 4386 seq_puts(m, "SSEU Device Status\n"); 4496 - if (IS_SKYLAKE(dev)) { 4387 + if (IS_CHERRYVIEW(dev)) { 4388 + const int ss_max = 2; 4389 + int ss; 4390 + u32 sig1[ss_max], sig2[ss_max]; 4391 + 4392 + sig1[0] = I915_READ(CHV_POWER_SS0_SIG1); 4393 + sig1[1] = I915_READ(CHV_POWER_SS1_SIG1); 4394 + sig2[0] = I915_READ(CHV_POWER_SS0_SIG2); 4395 + sig2[1] = I915_READ(CHV_POWER_SS1_SIG2); 4396 + 4397 + for (ss = 0; ss < ss_max; ss++) { 4398 + unsigned int eu_cnt; 4399 + 4400 + if (sig1[ss] & CHV_SS_PG_ENABLE) 4401 + /* skip disabled subslice */ 4402 + continue; 4403 + 4404 + s_tot = 1; 4405 + ss_per++; 4406 + eu_cnt = ((sig1[ss] & CHV_EU08_PG_ENABLE) ? 0 : 2) + 4407 + ((sig1[ss] & CHV_EU19_PG_ENABLE) ? 
0 : 2) + 4408 + ((sig1[ss] & CHV_EU210_PG_ENABLE) ? 0 : 2) + 4409 + ((sig2[ss] & CHV_EU311_PG_ENABLE) ? 0 : 2); 4410 + eu_tot += eu_cnt; 4411 + eu_per = max(eu_per, eu_cnt); 4412 + } 4413 + ss_tot = ss_per; 4414 + } else if (IS_SKYLAKE(dev)) { 4497 4415 const int s_max = 3, ss_max = 4; 4498 4416 int s, ss; 4499 4417 u32 s_reg[s_max], eu_reg[2*s_max], eu_mask[2]; ··· 4684 4548 {"i915_wa_registers", i915_wa_registers, 0}, 4685 4549 {"i915_ddb_info", i915_ddb_info, 0}, 4686 4550 {"i915_sseu_status", i915_sseu_status, 0}, 4551 + {"i915_drrs_status", i915_drrs_status, 0}, 4687 4552 }; 4688 4553 #define I915_DEBUGFS_ENTRIES ARRAY_SIZE(i915_debugfs_list) 4689 4554
+47 -6
drivers/gpu/drm/i915/i915_dma.c
··· 68 68 case I915_PARAM_CHIPSET_ID: 69 69 value = dev->pdev->device; 70 70 break; 71 + case I915_PARAM_REVISION: 72 + value = dev->pdev->revision; 73 + break; 71 74 case I915_PARAM_HAS_GEM: 72 75 value = 1; 73 76 break; ··· 152 149 break; 153 150 case I915_PARAM_MMAP_VERSION: 154 151 value = 1; 152 + break; 153 + case I915_PARAM_SUBSLICE_TOTAL: 154 + value = INTEL_INFO(dev)->subslice_total; 155 + if (!value) 156 + return -ENODEV; 157 + break; 158 + case I915_PARAM_EU_TOTAL: 159 + value = INTEL_INFO(dev)->eu_total; 160 + if (!value) 161 + return -ENODEV; 155 162 break; 156 163 default: 157 164 DRM_DEBUG("Unknown parameter %d\n", param->param); ··· 621 608 622 609 /* Initialize slice/subslice/EU info */ 623 610 if (IS_CHERRYVIEW(dev)) { 624 - u32 fuse, mask_eu; 611 + u32 fuse, eu_dis; 625 612 626 613 fuse = I915_READ(CHV_FUSE_GT); 627 - mask_eu = fuse & (CHV_FGT_EU_DIS_SS0_R0_MASK | 628 - CHV_FGT_EU_DIS_SS0_R1_MASK | 629 - CHV_FGT_EU_DIS_SS1_R0_MASK | 630 - CHV_FGT_EU_DIS_SS1_R1_MASK); 631 - info->eu_total = 16 - hweight32(mask_eu); 614 + 615 + info->slice_total = 1; 616 + 617 + if (!(fuse & CHV_FGT_DISABLE_SS0)) { 618 + info->subslice_per_slice++; 619 + eu_dis = fuse & (CHV_FGT_EU_DIS_SS0_R0_MASK | 620 + CHV_FGT_EU_DIS_SS0_R1_MASK); 621 + info->eu_total += 8 - hweight32(eu_dis); 622 + } 623 + 624 + if (!(fuse & CHV_FGT_DISABLE_SS1)) { 625 + info->subslice_per_slice++; 626 + eu_dis = fuse & (CHV_FGT_EU_DIS_SS1_R0_MASK | 627 + CHV_FGT_EU_DIS_SS1_R1_MASK); 628 + info->eu_total += 8 - hweight32(eu_dis); 629 + } 630 + 631 + info->subslice_total = info->subslice_per_slice; 632 + /* 633 + * CHV expected to always have a uniform distribution of EU 634 + * across subslices. 635 + */ 636 + info->eu_per_subslice = info->subslice_total ? 
637 + info->eu_total / info->subslice_total : 638 + 0; 639 + /* 640 + * CHV supports subslice power gating on devices with more than 641 + * one subslice, and supports EU power gating on devices with 642 + * more than one EU pair per subslice. 643 + */ 644 + info->has_slice_pg = 0; 645 + info->has_subslice_pg = (info->subslice_total > 1); 646 + info->has_eu_pg = (info->eu_per_subslice > 2); 632 647 } else if (IS_SKYLAKE(dev)) { 633 648 const int s_max = 3, ss_max = 4, eu_max = 8; 634 649 int s, ss;
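The reworked CHV path above derives slice/subslice/EU counts from fuse *disable* bits: each set bit in the masked fuse value disables one EU, so the enabled count is the per-subslice total minus the population count, which is what the `hweight32()` calls compute. A minimal sketch of that counting (mask and totals are hypothetical; `__builtin_popcount` is the GCC/Clang equivalent of `hweight32()`):

```c
#include <assert.h>
#include <stdint.h>

/* Count enabled EUs in one subslice from a fuse register: every set
 * bit inside dis_mask disables one EU out of per_subslice total. */
static unsigned int eus_enabled(uint32_t fuse, uint32_t dis_mask,
                                unsigned int per_subslice)
{
    return per_subslice - (unsigned int)__builtin_popcount(fuse & dis_mask);
}
```

Summing this per subslice, and skipping subslices whose own disable fuse is set, yields the totals reported through the new `I915_PARAM_SUBSLICE_TOTAL`/`I915_PARAM_EU_TOTAL` getparams.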
-7
drivers/gpu/drm/i915/i915_drv.c
··· 346 346 }; 347 347 348 348 static const struct intel_device_info intel_cherryview_info = { 349 - .is_preliminary = 1, 350 349 .gen = 8, .num_pipes = 3, 351 350 .need_gfx_hws = 1, .has_hotplug = 1, 352 351 .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING, ··· 879 880 DRM_ERROR("Failed hw init on reset %d\n", ret); 880 881 return ret; 881 882 } 882 - 883 - /* 884 - * FIXME: This races pretty badly against concurrent holders of 885 - * ring interrupts. This is possible since we've started to drop 886 - * dev->struct_mutex in select places when waiting for the gpu. 887 - */ 888 883 889 884 /* 890 885 * rps/rc6 re-init is necessary to restore state lost after the
+49 -16
drivers/gpu/drm/i915/i915_drv.h
··· 56 56 57 57 #define DRIVER_NAME "i915" 58 58 #define DRIVER_DESC "Intel Graphics" 59 - #define DRIVER_DATE "20150227" 59 + #define DRIVER_DATE "20150313" 60 60 61 61 #undef WARN_ON 62 62 /* Many gcc seem to no see through this and fall over :( */ ··· 69 69 #else 70 70 #define WARN_ON(x) WARN((x), "WARN_ON(" #x ")") 71 71 #endif 72 + 73 + #undef WARN_ON_ONCE 74 + #define WARN_ON_ONCE(x) WARN_ONCE((x), "WARN_ON_ONCE(" #x ")") 72 75 73 76 #define MISSING_CASE(x) WARN(1, "Missing switch case (%lu) in %s\n", \ 74 77 (long) (x), __func__); ··· 226 223 227 224 #define for_each_pipe(__dev_priv, __p) \ 228 225 for ((__p) = 0; (__p) < INTEL_INFO(__dev_priv)->num_pipes; (__p)++) 229 - #define for_each_plane(pipe, p) \ 230 - for ((p) = 0; (p) < INTEL_INFO(dev)->num_sprites[(pipe)] + 1; (p)++) 231 - #define for_each_sprite(p, s) for ((s) = 0; (s) < INTEL_INFO(dev)->num_sprites[(p)]; (s)++) 226 + #define for_each_plane(__dev_priv, __pipe, __p) \ 227 + for ((__p) = 0; \ 228 + (__p) < INTEL_INFO(__dev_priv)->num_sprites[(__pipe)] + 1; \ 229 + (__p)++) 230 + #define for_each_sprite(__dev_priv, __p, __s) \ 231 + for ((__s) = 0; \ 232 + (__s) < INTEL_INFO(__dev_priv)->num_sprites[(__p)]; \ 233 + (__s)++) 232 234 233 235 #define for_each_crtc(dev, crtc) \ 234 236 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) ··· 245 237 list_for_each_entry(intel_encoder, \ 246 238 &(dev)->mode_config.encoder_list, \ 247 239 base.head) 240 + 241 + #define for_each_intel_connector(dev, intel_connector) \ 242 + list_for_each_entry(intel_connector, \ 243 + &dev->mode_config.connector_list, \ 244 + base.head) 245 + 248 246 249 247 #define for_each_encoder_on_crtc(dev, __crtc, intel_encoder) \ 250 248 list_for_each_entry((intel_encoder), &(dev)->mode_config.encoder_list, base.head) \ ··· 797 783 struct list_head link; 798 784 }; 799 785 786 + enum fb_op_origin { 787 + ORIGIN_GTT, 788 + ORIGIN_CPU, 789 + ORIGIN_CS, 790 + ORIGIN_FLIP, 791 + }; 792 + 800 793 struct i915_fbc { 801 794 
unsigned long uncompressed_size; 802 795 unsigned threshold; 803 796 unsigned int fb_id; 797 + unsigned int possible_framebuffer_bits; 798 + unsigned int busy_bits; 804 799 struct intel_crtc *crtc; 805 800 int y; 806 801 ··· 821 798 /* Tracks whether the HW is actually enabled, not whether the feature is 822 799 * possible. */ 823 800 bool enabled; 824 - 825 - /* On gen8 some rings cannont perform fbc clean operation so for now 826 - * we are doing this on SW with mmio. 827 - * This variable works in the opposite information direction 828 - * of ring->fbc_dirty telling software on frontbuffer tracking 829 - * to perform the cache clean on sw side. 830 - */ 831 - bool need_sw_cache_clean; 832 801 833 802 struct intel_fbc_work { 834 803 struct delayed_work work; ··· 1068 1053 1069 1054 int c_m; 1070 1055 int r_t; 1071 - 1072 - struct drm_i915_gem_object *pwrctx; 1073 - struct drm_i915_gem_object *renderctx; 1074 1056 }; 1075 1057 1076 1058 struct drm_i915_private; ··· 1408 1396 uint32_t wm_linetime[3]; 1409 1397 bool enable_fbc_wm; 1410 1398 enum intel_ddb_partitioning partitioning; 1399 + }; 1400 + 1401 + struct vlv_wm_values { 1402 + struct { 1403 + uint16_t primary; 1404 + uint16_t sprite[2]; 1405 + uint8_t cursor; 1406 + } pipe[3]; 1407 + 1408 + struct { 1409 + uint16_t plane; 1410 + uint8_t cursor; 1411 + } sr; 1412 + 1413 + struct { 1414 + uint8_t cursor; 1415 + uint8_t sprite[2]; 1416 + uint8_t primary; 1417 + } ddl[3]; 1411 1418 }; 1412 1419 1413 1420 struct skl_ddb_entry { ··· 1791 1760 union { 1792 1761 struct ilk_wm_values hw; 1793 1762 struct skl_wm_values skl_hw; 1763 + struct vlv_wm_values vlv; 1794 1764 }; 1795 1765 } wm; 1796 1766 ··· 2428 2396 #define NUM_L3_SLICES(dev) (IS_HSW_GT3(dev) ? 
2 : HAS_L3_DPF(dev)) 2429 2397 2430 2398 #define GT_FREQUENCY_MULTIPLIER 50 2399 + #define GEN9_FREQ_SCALER 3 2431 2400 2432 2401 #include "i915_trace.h" 2433 2402 ··· 2466 2433 bool disable_display; 2467 2434 bool disable_vtd_wa; 2468 2435 int use_mmio_flip; 2469 - bool mmio_debug; 2436 + int mmio_debug; 2470 2437 bool verbose_state_checks; 2471 2438 bool nuclear_pageflip; 2472 2439 };
+40 -12
drivers/gpu/drm/i915/i915_gem.c
···
351 351 struct drm_device *dev = obj->base.dev;
352 352 void *vaddr = obj->phys_handle->vaddr + args->offset;
353 353 char __user *user_data = to_user_ptr(args->data_ptr);
354 - int ret;
354 + int ret = 0;
355 355
356 356 /* We manually control the domain here and pretend that it
357 357 * remains coherent i.e. in the GTT domain, like shmem_pwrite.
···
360 360 if (ret)
361 361 return ret;
362 362
363 + intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
363 364 if (__copy_from_user_inatomic_nocache(vaddr, user_data, args->size)) {
364 365 unsigned long unwritten;
365 366
···
371 370 mutex_unlock(&dev->struct_mutex);
372 371 unwritten = copy_from_user(vaddr, user_data, args->size);
373 372 mutex_lock(&dev->struct_mutex);
374 - if (unwritten)
375 - return -EFAULT;
373 + if (unwritten) {
374 + ret = -EFAULT;
375 + goto out;
376 + }
376 377 }
377 378
378 379 drm_clflush_virt_range(vaddr, args->size);
379 380 i915_gem_chipset_flush(dev);
380 - return 0;
381 +
382 + out:
383 + intel_fb_obj_flush(obj, false);
384 + return ret;
381 385 }
382 386
383 387 void *i915_gem_object_alloc(struct drm_device *dev)
···
816 810
817 811 offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
818 812
813 + intel_fb_obj_invalidate(obj, NULL, ORIGIN_GTT);
814 +
819 815 while (remain > 0) {
820 816 /* Operation in this page
821 817 *
···
838 830 if (fast_user_write(dev_priv->gtt.mappable, page_base,
839 831 page_offset, user_data, page_length)) {
840 832 ret = -EFAULT;
841 - goto out_unpin;
833 + goto out_flush;
842 834 }
843 835
844 836 remain -= page_length;
···
846 838 offset += page_length;
847 839 }
848 840
841 + out_flush:
842 + intel_fb_obj_flush(obj, false);
849 843 out_unpin:
850 844 i915_gem_object_ggtt_unpin(obj);
851 845 out:
···
962 952 if (ret)
963 953 return ret;
964 954
955 + intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
956 +
965 957 i915_gem_object_pin_pages(obj);
966 958
967 959 offset = args->offset;
···
1042 1030 if (needs_clflush_after)
1043 1031 i915_gem_chipset_flush(dev);
1044 1032
1033 + intel_fb_obj_flush(obj, false);
1045 1034 return ret;
1046 1035 }
1047 1036
···
2942 2929 req = obj->last_read_req;
2943 2930
2944 2931 /* Do this after OLR check to make sure we make forward progress polling
2945 - * on this IOCTL with a timeout <=0 (like busy ioctl)
2932 + * on this IOCTL with a timeout == 0 (like busy ioctl)
2946 2933 */
2947 - if (args->timeout_ns <= 0) {
2934 + if (args->timeout_ns == 0) {
2948 2935 ret = -ETIME;
2949 2936 goto out;
2950 2937 }
···
2954 2941 i915_gem_request_reference(req);
2955 2942 mutex_unlock(&dev->struct_mutex);
2956 2943
2957 - ret = __i915_wait_request(req, reset_counter, true, &args->timeout_ns,
2944 + ret = __i915_wait_request(req, reset_counter, true,
2945 + args->timeout_ns > 0 ? &args->timeout_ns : NULL,
2958 2946 file->driver_priv);
2959 2947 mutex_lock(&dev->struct_mutex);
2960 2948 i915_gem_request_unreference(req);
···
3770 3756 }
3771 3757
3772 3758 if (write)
3773 - intel_fb_obj_invalidate(obj, NULL);
3759 + intel_fb_obj_invalidate(obj, NULL, ORIGIN_GTT);
3774 3760
3775 3761 trace_i915_gem_object_change_domain(obj,
3776 3762 old_read_domains,
···
4085 4071 }
4086 4072
4087 4073 if (write)
4088 - intel_fb_obj_invalidate(obj, NULL);
4074 + intel_fb_obj_invalidate(obj, NULL, ORIGIN_CPU);
4089 4075
4090 4076 trace_i915_gem_object_change_domain(obj,
4091 4077 old_read_domains,
···
4795 4781 if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
4796 4782 return -EIO;
4797 4783
4784 + /* Double layer security blanket, see i915_gem_init() */
4785 + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
4786 +
4798 4787 if (dev_priv->ellc_size)
4799 4788 I915_WRITE(HSW_IDICR, I915_READ(HSW_IDICR) | IDIHASHMSK(0xf));
4800 4789
···
4830 4813 for_each_ring(ring, dev_priv, i) {
4831 4814 ret = ring->init_hw(ring);
4832 4815 if (ret)
4833 - return ret;
4816 + goto out;
4834 4817 }
4835 4818
4836 4819 for (i = 0; i < NUM_L3_SLICES(dev); i++)
···
4847 4830 DRM_ERROR("Context enable failed %d\n", ret);
4848 4831 i915_gem_cleanup_ringbuffer(dev);
4849 4832
4850 - return ret;
4833 + goto out;
4851 4834 }
4852 4835
4836 + out:
4837 + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
4853 4838 return ret;
4854 4839 }
4855 4840
···
4885 4866 dev_priv->gt.stop_ring = intel_logical_ring_stop;
4886 4867 }
4887 4868
4869 + /* This is just a security blanket to placate dragons.
4870 + * On some systems, we very sporadically observe that the first TLBs
4871 + * used by the CS may be stale, despite us poking the TLB reset. If
4872 + * we hold the forcewake during initialisation these problems
4873 + * just magically go away.
4874 + */
4875 + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
4876 +
4888 4877 ret = i915_gem_init_userptr(dev);
4889 4878 if (ret)
4890 4879 goto out_unlock;
···
4919 4892 }
4920 4893
4921 4894 out_unlock:
4895 + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
4922 4896 mutex_unlock(&dev->struct_mutex);
4923 4897
4924 4898 return ret;
+2 -2
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···
971 971 obj->dirty = 1;
972 972 i915_gem_request_assign(&obj->last_write_req, req);
973 973
974 - intel_fb_obj_invalidate(obj, ring);
974 + intel_fb_obj_invalidate(obj, ring, ORIGIN_CS);
975 975
976 976 /* update for the implicit flush after a batch */
977 977 obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
···
1518 1518 * - The batch is already pinned into the relevant ppgtt, so we
1519 1519 * already have the backing storage fully allocated.
1520 1520 * - No other BO uses the global gtt (well contexts, but meh),
1521 - * so we don't really have issues with mutliple objects not
1521 + * so we don't really have issues with multiple objects not
1522 1522 * fitting due to fragmentation.
1523 1523 * So this is actually safe.
1524 1524 */
+15 -6
drivers/gpu/drm/i915/i915_gem_gtt.c
···
716 716 if (size % (1<<30))
717 717 DRM_INFO("Pages will be wasted unless GTT size (%llu) is divisible by 1GB\n", size);
718 718
719 - /* 1. Do all our allocations for page directories and page tables. */
720 - ret = gen8_ppgtt_alloc(ppgtt, max_pdp);
719 + /* 1. Do all our allocations for page directories and page tables.
720 + * We allocate more than was asked so that we can point the unused parts
721 + * to valid entries that point to scratch page. Dynamic page tables
722 + * will fix this eventually.
723 + */
724 + ret = gen8_ppgtt_alloc(ppgtt, GEN8_LEGACY_PDPES);
721 725 if (ret)
722 726 return ret;
723 727
724 728 /*
725 729 * 2. Create DMA mappings for the page directories and page tables.
726 730 */
727 - for (i = 0; i < max_pdp; i++) {
731 + for (i = 0; i < GEN8_LEGACY_PDPES; i++) {
728 732 ret = gen8_ppgtt_setup_page_directories(ppgtt, i);
729 733 if (ret)
730 734 goto bail;
···
748 744 * plugged in correctly. So we do that now/here. For aliasing PPGTT, we
749 745 * will never need to touch the PDEs again.
750 746 */
751 - for (i = 0; i < max_pdp; i++) {
747 + for (i = 0; i < GEN8_LEGACY_PDPES; i++) {
752 748 struct i915_page_directory_entry *pd = ppgtt->pdp.page_directory[i];
753 749 gen8_ppgtt_pde_t *pd_vaddr;
754 750 pd_vaddr = kmap_atomic(ppgtt->pdp.page_directory[i]->page);
···
768 764 ppgtt->base.insert_entries = gen8_ppgtt_insert_entries;
769 765 ppgtt->base.cleanup = gen8_ppgtt_cleanup;
770 766 ppgtt->base.start = 0;
771 - ppgtt->base.total = ppgtt->num_pd_entries * GEN8_PTES_PER_PAGE * PAGE_SIZE;
772 767
773 - ppgtt->base.clear_range(&ppgtt->base, 0, ppgtt->base.total, true);
768 + /* This is the area that we advertise as usable for the caller */
769 + ppgtt->base.total = max_pdp * GEN8_PDES_PER_PAGE * GEN8_PTES_PER_PAGE * PAGE_SIZE;
770 +
771 + /* Set all ptes to a valid scratch page. Also above requested space */
772 + ppgtt->base.clear_range(&ppgtt->base, 0,
773 + ppgtt->num_pd_pages * GEN8_PTES_PER_PAGE * PAGE_SIZE,
774 + true);
774 775
775 776 DRM_DEBUG_DRIVER("Allocated %d pages for page directories (%d wasted)\n",
776 777 ppgtt->num_pd_pages, ppgtt->num_pd_pages - max_pdp);
+14 -10
drivers/gpu/drm/i915/i915_irq.c
···
1696 1696 * the work queue. */
1697 1697 static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
1698 1698 {
1699 - /* TODO: RPS on GEN9+ is not supported yet. */
1700 - if (WARN_ONCE(INTEL_INFO(dev_priv)->gen >= 9,
1701 - "GEN9+: unexpected RPS IRQ\n"))
1702 - return;
1703 -
1704 1699 if (pm_iir & dev_priv->pm_rps_events) {
1705 1700 spin_lock(&dev_priv->irq_lock);
1706 1701 gen6_disable_pm_irq(dev_priv, pm_iir & dev_priv->pm_rps_events);
···
3164 3169 ibx_irq_reset(dev);
3165 3170 }
3166 3171
3167 - void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv)
3172 + void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
3173 + unsigned int pipe_mask)
3168 3174 {
3169 3175 uint32_t extra_ier = GEN8_PIPE_VBLANK | GEN8_PIPE_FIFO_UNDERRUN;
3170 3176
3171 3177 spin_lock_irq(&dev_priv->irq_lock);
3172 - GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_B, dev_priv->de_irq_mask[PIPE_B],
3173 - ~dev_priv->de_irq_mask[PIPE_B] | extra_ier);
3174 - GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_C, dev_priv->de_irq_mask[PIPE_C],
3175 - ~dev_priv->de_irq_mask[PIPE_C] | extra_ier);
3178 + if (pipe_mask & 1 << PIPE_A)
3179 + GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_A,
3180 + dev_priv->de_irq_mask[PIPE_A],
3181 + ~dev_priv->de_irq_mask[PIPE_A] | extra_ier);
3182 + if (pipe_mask & 1 << PIPE_B)
3183 + GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_B,
3184 + dev_priv->de_irq_mask[PIPE_B],
3185 + ~dev_priv->de_irq_mask[PIPE_B] | extra_ier);
3186 + if (pipe_mask & 1 << PIPE_C)
3187 + GEN8_IRQ_INIT_NDX(DE_PIPE, PIPE_C,
3188 + dev_priv->de_irq_mask[PIPE_C],
3189 + ~dev_priv->de_irq_mask[PIPE_C] | extra_ier);
3176 3190 spin_unlock_irq(&dev_priv->irq_lock);
3177 3191 }
3178 3192
+3 -3
drivers/gpu/drm/i915/i915_params.c
···
171 171 MODULE_PARM_DESC(use_mmio_flip,
172 172 "use MMIO flips (-1=never, 0=driver discretion [default], 1=always)");
173 173
174 - module_param_named(mmio_debug, i915.mmio_debug, bool, 0600);
174 + module_param_named(mmio_debug, i915.mmio_debug, int, 0600);
175 175 MODULE_PARM_DESC(mmio_debug,
176 - "Enable the MMIO debug code (default: false). This may negatively "
177 - "affect performance.");
176 + "Enable the MMIO debug code for the first N failures (default: off). "
177 + "This may negatively affect performance.");
178 178
179 179 module_param_named(verbose_state_checks, i915.verbose_state_checks, bool, 0600);
180 180 MODULE_PARM_DESC(verbose_state_checks,
+61 -20
drivers/gpu/drm/i915/i915_reg.h
···
566 566 #define DSPFREQSTAT_MASK (0x3 << DSPFREQSTAT_SHIFT)
567 567 #define DSPFREQGUAR_SHIFT 14
568 568 #define DSPFREQGUAR_MASK (0x3 << DSPFREQGUAR_SHIFT)
569 + #define DSP_MAXFIFO_PM5_STATUS (1 << 22) /* chv */
570 + #define DSP_AUTO_CDCLK_GATE_DISABLE (1 << 7) /* chv */
571 + #define DSP_MAXFIFO_PM5_ENABLE (1 << 6) /* chv */
569 572 #define _DP_SSC(val, pipe) ((val) << (2 * (pipe)))
570 573 #define DP_SSC_MASK(pipe) _DP_SSC(0x3, (pipe))
571 574 #define DP_SSC_PWR_ON(pipe) _DP_SSC(0x0, (pipe))
···
643 640
644 641 #define FB_GFX_FMIN_AT_VMIN_FUSE 0x137
645 642 #define FB_GFX_FMIN_AT_VMIN_FUSE_SHIFT 8
643 +
644 + #define PUNIT_REG_DDR_SETUP2 0x139
645 + #define FORCE_DDR_FREQ_REQ_ACK (1 << 8)
646 + #define FORCE_DDR_LOW_FREQ (1 << 1)
647 + #define FORCE_DDR_HIGH_FREQ (1 << 0)
646 648
647 649 #define PUNIT_GPU_STATUS_REG 0xdb
648 650 #define PUNIT_GPU_STATUS_MAX_FREQ_SHIFT 16
···
1037 1029 #define DPIO_CHV_FIRST_MOD (0 << 8)
1038 1030 #define DPIO_CHV_SECOND_MOD (1 << 8)
1039 1031 #define DPIO_CHV_FEEDFWD_GAIN_SHIFT 0
1032 + #define DPIO_CHV_FEEDFWD_GAIN_MASK (0xF << 0)
1040 1033 #define CHV_PLL_DW3(ch) _PIPE(ch, _CHV_PLL_DW3_CH0, _CHV_PLL_DW3_CH1)
1041 1034
1042 1035 #define _CHV_PLL_DW6_CH0 0x8018
···
1049 1040
1050 1041 #define _CHV_PLL_DW8_CH0 0x8020
1051 1042 #define _CHV_PLL_DW8_CH1 0x81A0
1043 + #define DPIO_CHV_TDC_TARGET_CNT_SHIFT 0
1044 + #define DPIO_CHV_TDC_TARGET_CNT_MASK (0x3FF << 0)
1052 1045 #define CHV_PLL_DW8(ch) _PIPE(ch, _CHV_PLL_DW8_CH0, _CHV_PLL_DW8_CH1)
1053 1046
1054 1047 #define _CHV_PLL_DW9_CH0 0x8024
1055 1048 #define _CHV_PLL_DW9_CH1 0x81A4
1056 1049 #define DPIO_CHV_INT_LOCK_THRESHOLD_SHIFT 1 /* 3 bits */
1050 + #define DPIO_CHV_INT_LOCK_THRESHOLD_MASK (7 << 1)
1057 1051 #define DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE 1 /* 1: coarse & 0 : fine */
1058 1052 #define CHV_PLL_DW9(ch) _PIPE(ch, _CHV_PLL_DW9_CH0, _CHV_PLL_DW9_CH1)
1059 1053
···
1534 1522
1535 1523 /* Fuse readout registers for GT */
1536 1524 #define CHV_FUSE_GT (VLV_DISPLAY_BASE + 0x2168)
1525 + #define CHV_FGT_DISABLE_SS0 (1 << 10)
1526 + #define CHV_FGT_DISABLE_SS1 (1 << 11)
1537 1527 #define CHV_FGT_EU_DIS_SS0_R0_SHIFT 16
1538 1528 #define CHV_FGT_EU_DIS_SS0_R0_MASK (0xf << CHV_FGT_EU_DIS_SS0_R0_SHIFT)
1539 1529 #define CHV_FGT_EU_DIS_SS0_R1_SHIFT 20
···
2113 2099 #define CDCLK_FREQ_SHIFT 4
2114 2100 #define CDCLK_FREQ_MASK (0x1f << CDCLK_FREQ_SHIFT)
2115 2101 #define CZCLK_FREQ_MASK 0xf
2102 +
2103 + #define GCI_CONTROL (VLV_DISPLAY_BASE + 0x650C)
2104 + #define PFI_CREDIT_63 (9 << 28) /* chv only */
2105 + #define PFI_CREDIT_31 (8 << 28) /* chv only */
2106 + #define PFI_CREDIT(x) (((x) - 8) << 28) /* 8-15 */
2107 + #define PFI_CREDIT_RESEND (1 << 27)
2108 + #define VGA_FAST_MODE_DISABLE (1 << 14)
2109 +
2116 2110 #define GMBUSFREQ_VLV (VLV_DISPLAY_BASE + 0x6510)
2117 2111
2118 2112 /*
···
2448 2426 #define GEN6_GT_PERF_STATUS (MCHBAR_MIRROR_BASE_SNB + 0x5948)
2449 2427 #define GEN6_RP_STATE_LIMITS (MCHBAR_MIRROR_BASE_SNB + 0x5994)
2450 2428 #define GEN6_RP_STATE_CAP (MCHBAR_MIRROR_BASE_SNB + 0x5998)
2429 +
2430 + #define INTERVAL_1_28_US(us) (((us) * 100) >> 7)
2431 + #define INTERVAL_1_33_US(us) (((us) * 3) >> 2)
2432 + #define GT_INTERVAL_FROM_US(dev_priv, us) (IS_GEN9(dev_priv) ? \
2433 + INTERVAL_1_33_US(us) : \
2434 + INTERVAL_1_28_US(us))
2451 2435
2452 2436 /*
2453 2437 * Logical Context regs
···
3047 3019
3048 3020 /* Video Data Island Packet control */
3049 3021 #define VIDEO_DIP_DATA 0x61178
3050 - /* Read the description of VIDEO_DIP_DATA (before Haswel) or VIDEO_DIP_ECC
3022 + /* Read the description of VIDEO_DIP_DATA (before Haswell) or VIDEO_DIP_ECC
3051 3023 * (Haswell and newer) to see which VIDEO_DIP_DATA byte corresponds to each byte
3052 3024 * of the infoframe structure specified by CEA-861. */
3053 3025 #define VIDEO_DIP_DATA_SIZE 32
···
4093 4065 #define DPINVGTT_STATUS_MASK 0xff
4094 4066 #define DPINVGTT_STATUS_MASK_CHV 0xfff
4095 4067
4096 - #define DSPARB 0x70030
4068 + #define DSPARB (dev_priv->info.display_mmio_offset + 0x70030)
4097 4069 #define DSPARB_CSTART_MASK (0x7f << 7)
4098 4070 #define DSPARB_CSTART_SHIFT 7
4099 4071 #define DSPARB_BSTART_MASK (0x7f)
4100 4072 #define DSPARB_BSTART_SHIFT 0
4101 4073 #define DSPARB_BEND_SHIFT 9 /* on 855 */
4102 4074 #define DSPARB_AEND_SHIFT 0
4075 +
4076 + #define DSPARB2 (VLV_DISPLAY_BASE + 0x70060) /* vlv/chv */
4077 + #define DSPARB3 (VLV_DISPLAY_BASE + 0x7006c) /* chv */
4103 4078
4104 4079 /* pnv/gen4/g4x/vlv/chv */
4105 4080 #define DSPFW1 (dev_priv->info.display_mmio_offset + 0x70034)
···
4127 4096 #define DSPFW_SPRITEB_MASK_VLV (0xff<<16) /* vlv/chv */
4128 4097 #define DSPFW_CURSORA_SHIFT 8
4129 4098 #define DSPFW_CURSORA_MASK (0x3f<<8)
4130 - #define DSPFW_PLANEC_SHIFT_OLD 0
4131 - #define DSPFW_PLANEC_MASK_OLD (0x7f<<0) /* pre-gen4 sprite C */
4099 + #define DSPFW_PLANEC_OLD_SHIFT 0
4100 + #define DSPFW_PLANEC_OLD_MASK (0x7f<<0) /* pre-gen4 sprite C */
4132 4101 #define DSPFW_SPRITEA_SHIFT 0
4133 4102 #define DSPFW_SPRITEA_MASK (0x7f<<0) /* g4x */
4134 4103 #define DSPFW_SPRITEA_MASK_VLV (0xff<<0) /* vlv/chv */
···
4167 4136 #define DSPFW_SPRITED_WM1_SHIFT 24
4168 4137 #define DSPFW_SPRITED_WM1_MASK (0xff<<24)
4169 4138 #define DSPFW_SPRITED_SHIFT 16
4170 - #define DSPFW_SPRITED_MASK (0xff<<16)
4139 + #define DSPFW_SPRITED_MASK_VLV (0xff<<16)
4171 4140 #define DSPFW_SPRITEC_WM1_SHIFT 8
4172 4141 #define DSPFW_SPRITEC_WM1_MASK (0xff<<8)
4173 4142 #define DSPFW_SPRITEC_SHIFT 0
4174 - #define DSPFW_SPRITEC_MASK (0xff<<0)
4143 + #define DSPFW_SPRITEC_MASK_VLV (0xff<<0)
4175 4144 #define DSPFW8_CHV (VLV_DISPLAY_BASE + 0x700b8)
4176 4145 #define DSPFW_SPRITEF_WM1_SHIFT 24
4177 4146 #define DSPFW_SPRITEF_WM1_MASK (0xff<<24)
4178 4147 #define DSPFW_SPRITEF_SHIFT 16
4179 - #define DSPFW_SPRITEF_MASK (0xff<<16)
4148 + #define DSPFW_SPRITEF_MASK_VLV (0xff<<16)
4180 4149 #define DSPFW_SPRITEE_WM1_SHIFT 8
4181 4150 #define DSPFW_SPRITEE_WM1_MASK (0xff<<8)
4182 4151 #define DSPFW_SPRITEE_SHIFT 0
4183 - #define DSPFW_SPRITEE_MASK (0xff<<0)
4152 + #define DSPFW_SPRITEE_MASK_VLV (0xff<<0)
4184 4153 #define DSPFW9_CHV (VLV_DISPLAY_BASE + 0x7007c) /* wtf #2? */
4185 4154 #define DSPFW_PLANEC_WM1_SHIFT 24
4186 4155 #define DSPFW_PLANEC_WM1_MASK (0xff<<24)
4187 4156 #define DSPFW_PLANEC_SHIFT 16
4188 - #define DSPFW_PLANEC_MASK (0xff<<16)
4157 + #define DSPFW_PLANEC_MASK_VLV (0xff<<16)
4189 4158 #define DSPFW_CURSORC_WM1_SHIFT 8
4190 4159 #define DSPFW_CURSORC_WM1_MASK (0x3f<<16)
4191 4160 #define DSPFW_CURSORC_SHIFT 0
···
4194 4163 /* vlv/chv high order bits */
4195 4164 #define DSPHOWM (VLV_DISPLAY_BASE + 0x70064)
4196 4165 #define DSPFW_SR_HI_SHIFT 24
4197 - #define DSPFW_SR_HI_MASK (1<<24)
4166 + #define DSPFW_SR_HI_MASK (3<<24) /* 2 bits for chv, 1 for vlv */
4198 4167 #define DSPFW_SPRITEF_HI_SHIFT 23
4199 4168 #define DSPFW_SPRITEF_HI_MASK (1<<23)
4200 4169 #define DSPFW_SPRITEE_HI_SHIFT 22
···
4215 4184 #define DSPFW_PLANEA_HI_MASK (1<<0)
4216 4185 #define DSPHOWM1 (VLV_DISPLAY_BASE + 0x70068)
4217 4186 #define DSPFW_SR_WM1_HI_SHIFT 24
4218 - #define DSPFW_SR_WM1_HI_MASK (1<<24)
4187 + #define DSPFW_SR_WM1_HI_MASK (3<<24) /* 2 bits for chv, 1 for vlv */
4219 4188 #define DSPFW_SPRITEF_WM1_HI_SHIFT 23
4220 4189 #define DSPFW_SPRITEF_WM1_HI_MASK (1<<23)
4221 4190 #define DSPFW_SPRITEE_WM1_HI_SHIFT 22
···
4236 4205 #define DSPFW_PLANEA_WM1_HI_MASK (1<<0)
4237 4206
4238 4207 /* drain latency register values*/
4239 - #define DRAIN_LATENCY_PRECISION_16 16
4240 - #define DRAIN_LATENCY_PRECISION_32 32
4241 - #define DRAIN_LATENCY_PRECISION_64 64
4242 4208 #define VLV_DDL(pipe) (VLV_DISPLAY_BASE + 0x70050 + 4 * (pipe))
4243 - #define DDL_CURSOR_PRECISION_HIGH (1<<31)
4244 - #define DDL_CURSOR_PRECISION_LOW (0<<31)
4245 4209 #define DDL_CURSOR_SHIFT 24
4246 - #define DDL_SPRITE_PRECISION_HIGH(sprite) (1<<(15+8*(sprite)))
4247 - #define DDL_SPRITE_PRECISION_LOW(sprite) (0<<(15+8*(sprite)))
4248 4210 #define DDL_SPRITE_SHIFT(sprite) (8+8*(sprite))
4249 - #define DDL_PLANE_PRECISION_HIGH (1<<7)
4250 - #define DDL_PLANE_PRECISION_LOW (0<<7)
4251 4211 #define DDL_PLANE_SHIFT 0
4212 + #define DDL_PRECISION_HIGH (1<<7)
4213 + #define DDL_PRECISION_LOW (0<<7)
4252 4214 #define DRAIN_LATENCY_MASK 0x7f
4215 +
4216 + #define CBR1_VLV (VLV_DISPLAY_BASE + 0x70400)
4217 + #define CBR_PND_DEADLINE_DISABLE (1<<31)
4253 4218
4254 4219 /* FIFO watermark sizes etc */
4255 4220 #define G4X_FIFO_LINE_SIZE 64
···
6107 6080 #define GEN6_TURBO_DISABLE (1<<31)
6108 6081 #define GEN6_FREQUENCY(x) ((x)<<25)
6109 6082 #define HSW_FREQUENCY(x) ((x)<<24)
6083 + #define GEN9_FREQUENCY(x) ((x)<<23)
6110 6084 #define GEN6_OFFSET(x) ((x)<<19)
6111 6085 #define GEN6_AGGRESSIVE_TURBO (0<<15)
6112 6086 #define GEN6_RC_VIDEO_FREQ 0xA00C
···
6126 6098 #define GEN6_RPSTAT1 0xA01C
6127 6099 #define GEN6_CAGF_SHIFT 8
6128 6100 #define HSW_CAGF_SHIFT 7
6101 + #define GEN9_CAGF_SHIFT 23
6129 6102 #define GEN6_CAGF_MASK (0x7f << GEN6_CAGF_SHIFT)
6130 6103 #define HSW_CAGF_MASK (0x7f << HSW_CAGF_SHIFT)
6104 + #define GEN9_CAGF_MASK (0x1ff << GEN9_CAGF_SHIFT)
6131 6105 #define GEN6_RP_CONTROL 0xA024
6132 6106 #define GEN6_RP_MEDIA_TURBO (1<<11)
6133 6107 #define GEN6_RP_MEDIA_MODE_MASK (3<<9)
···
6254 6224 #define GEN6_RC3 2
6255 6225 #define GEN6_RC6 3
6256 6226 #define GEN6_RC7 4
6227 +
6228 + #define CHV_POWER_SS0_SIG1 0xa720
6229 + #define CHV_POWER_SS1_SIG1 0xa728
6230 + #define CHV_SS_PG_ENABLE (1<<1)
6231 + #define CHV_EU08_PG_ENABLE (1<<9)
6232 + #define CHV_EU19_PG_ENABLE (1<<17)
6233 + #define CHV_EU210_PG_ENABLE (1<<25)
6234 +
6235 + #define CHV_POWER_SS0_SIG2 0xa724
6236 + #define CHV_POWER_SS1_SIG2 0xa72c
6237 + #define CHV_EU311_PG_ENABLE (1<<1)
6257 6238
6258 6239 #define GEN9_SLICE0_PGCTL_ACK 0x804c
6259 6240 #define GEN9_SLICE1_PGCTL_ACK 0x8050
+3 -1
drivers/gpu/drm/i915/i915_sysfs.c
···
319 319 ret = intel_gpu_freq(dev_priv, (freq >> 8) & 0xff);
320 320 } else {
321 321 u32 rpstat = I915_READ(GEN6_RPSTAT1);
322 - if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
322 + if (IS_GEN9(dev_priv))
323 + ret = (rpstat & GEN9_CAGF_MASK) >> GEN9_CAGF_SHIFT;
324 + else if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
323 325 ret = (rpstat & HSW_CAGF_MASK) >> HSW_CAGF_SHIFT;
324 326 else
325 327 ret = (rpstat & GEN6_CAGF_MASK) >> GEN6_CAGF_SHIFT;
+1 -1
drivers/gpu/drm/i915/i915_vgpu.h
···
32 32 * The following structure pages are defined in GEN MMIO space
33 33 * for virtualization. (One page for now)
34 34 */
35 - #define VGT_MAGIC 0x4776544776544776 /* 'vGTvGTvG' */
35 + #define VGT_MAGIC 0x4776544776544776ULL /* 'vGTvGTvG' */
36 36 #define VGT_VERSION_MAJOR 1
37 37 #define VGT_VERSION_MINOR 0
38 38
+9 -3
drivers/gpu/drm/i915/intel_atomic.c
···
214 214 intel_crtc_duplicate_state(struct drm_crtc *crtc)
215 215 {
216 216 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
217 + struct intel_crtc_state *crtc_state;
217 218
218 219 if (WARN_ON(!intel_crtc->config))
219 - return kzalloc(sizeof(*intel_crtc->config), GFP_KERNEL);
220 + crtc_state = kzalloc(sizeof(*crtc_state), GFP_KERNEL);
221 + else
222 + crtc_state = kmemdup(intel_crtc->config,
223 + sizeof(*intel_crtc->config), GFP_KERNEL);
220 224
221 - return kmemdup(intel_crtc->config, sizeof(*intel_crtc->config),
222 - GFP_KERNEL);
225 + if (crtc_state)
226 + crtc_state->base.crtc = crtc;
227 +
228 + return &crtc_state->base;
223 229 }
224 230
225 231 /**
+23 -16
drivers/gpu/drm/i915/intel_ddi.c
···
156 156
157 157 static const struct ddi_buf_trans skl_ddi_translations_hdmi[] = {
158 158 /* Idx NT mV T mV db */
159 - { 0x00000018, 0x000000a0 }, /* 0: 400 400 0 */
160 - { 0x00004014, 0x00000098 }, /* 1: 400 600 3.5 */
161 - { 0x00006012, 0x00000088 }, /* 2: 400 800 6 */
162 - { 0x00000018, 0x0000003c }, /* 3: 450 450 0 */
163 - { 0x00000018, 0x00000098 }, /* 4: 600 600 0 */
164 - { 0x00003015, 0x00000088 }, /* 5: 600 800 2.5 */
165 - { 0x00005013, 0x00000080 }, /* 6: 600 1000 4.5 */
166 - { 0x00000018, 0x00000088 }, /* 7: 800 800 0 */
167 - { 0x00000096, 0x00000080 }, /* 8: 800 1000 2 */
168 - { 0x00000018, 0x00000080 }, /* 9: 1200 1200 0 */
159 + { 0x00004014, 0x00000087 }, /* 0: 800 1000 2 */
169 160 };
170 161
171 162 enum port intel_ddi_get_encoder_port(struct intel_encoder *intel_encoder)
···
193 202 {
194 203 struct drm_i915_private *dev_priv = dev->dev_private;
195 204 u32 reg;
196 - int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_800mV_0dB,
205 + int i, n_hdmi_entries, n_dp_entries, n_edp_entries, hdmi_default_entry,
197 206 size;
198 207 int hdmi_level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
199 208 const struct ddi_buf_trans *ddi_translations_fdi;
···
214 223 n_edp_entries = ARRAY_SIZE(skl_ddi_translations_dp);
215 224 }
216 225
226 + /*
227 + * On SKL, the recommendation from the hw team is to always use
228 + * a certain type of level shifter (and thus the corresponding
229 + * 800mV+2dB entry). Given that's the only validated entry, we
230 + * override what is in the VBT, at least until further notice.
231 + */
232 + hdmi_level = 0;
217 233 ddi_translations_hdmi = skl_ddi_translations_hdmi;
218 234 n_hdmi_entries = ARRAY_SIZE(skl_ddi_translations_hdmi);
219 - hdmi_800mV_0dB = 7;
235 + hdmi_default_entry = 0;
220 236 } else if (IS_BROADWELL(dev)) {
221 237 ddi_translations_fdi = bdw_ddi_translations_fdi;
222 238 ddi_translations_dp = bdw_ddi_translations_dp;
···
232 234 n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
233 235 n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
234 236 n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
235 - hdmi_800mV_0dB = 7;
237 + hdmi_default_entry = 7;
236 238 } else if (IS_HASWELL(dev)) {
237 239 ddi_translations_fdi = hsw_ddi_translations_fdi;
238 240 ddi_translations_dp = hsw_ddi_translations_dp;
···
240 242 ddi_translations_hdmi = hsw_ddi_translations_hdmi;
241 243 n_dp_entries = n_edp_entries = ARRAY_SIZE(hsw_ddi_translations_dp);
242 244 n_hdmi_entries = ARRAY_SIZE(hsw_ddi_translations_hdmi);
243 - hdmi_800mV_0dB = 6;
245 + hdmi_default_entry = 6;
244 246 } else {
245 247 WARN(1, "ddi translation table missing\n");
246 248 ddi_translations_edp = bdw_ddi_translations_dp;
···
250 252 n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
251 253 n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
252 254 n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
253 - hdmi_800mV_0dB = 7;
255 + hdmi_default_entry = 7;
254 256 }
255 257
256 258 switch (port) {
···
293 295 /* Choose a good default if VBT is badly populated */
294 296 if (hdmi_level == HDMI_LEVEL_SHIFT_UNKNOWN ||
295 297 hdmi_level >= n_hdmi_entries)
296 - hdmi_level = hdmi_800mV_0dB;
298 + hdmi_level = hdmi_default_entry;
297 299
298 300 /* Entry 9 is for HDMI: */
299 301 I915_WRITE(reg, ddi_translations_hdmi[hdmi_level].trans1);
···
784 786 case DPLL_CRTL1_LINK_RATE_810:
785 787 link_clock = 81000;
786 788 break;
789 + case DPLL_CRTL1_LINK_RATE_1080:
790 + link_clock = 108000;
791 + break;
787 792 case DPLL_CRTL1_LINK_RATE_1350:
788 793 link_clock = 135000;
794 + break;
795 + case DPLL_CRTL1_LINK_RATE_1620:
796 + link_clock = 162000;
797 + break;
798 + case DPLL_CRTL1_LINK_RATE_2160:
799 + link_clock = 216000;
789 800 break;
790 801 case DPLL_CRTL1_LINK_RATE_2700:
791 802 link_clock = 270000;
+219 -181
drivers/gpu/drm/i915/intel_display.c
···
37 37 #include <drm/i915_drm.h>
38 38 #include "i915_drv.h"
39 39 #include "i915_trace.h"
40 + #include <drm/drm_atomic.h>
40 41 #include <drm/drm_atomic_helper.h>
41 42 #include <drm/drm_dp_helper.h>
42 43 #include <drm/drm_crtc_helper.h>
···
391 390 * them would make no difference.
392 391 */
393 392 .dot = { .min = 25000 * 5, .max = 540000 * 5},
394 - .vco = { .min = 4860000, .max = 6480000 },
393 + .vco = { .min = 4800000, .max = 6480000 },
395 394 .n = { .min = 1, .max = 1 },
396 395 .m1 = { .min = 2, .max = 2 },
397 396 .m2 = { .min = 24 << 22, .max = 175 << 22 },
···
897 896 *
898 897 * We can ditch the crtc->primary->fb check as soon as we can
899 898 * properly reconstruct framebuffers.
899 + *
900 + * FIXME: The intel_crtc->active here should be switched to
901 + * crtc->state->active once we have proper CRTC states wired up
902 + * for atomic.
900 903 */
901 - return intel_crtc->active && crtc->primary->fb &&
904 + return intel_crtc->active && crtc->primary->state->fb &&
902 905 intel_crtc->config->base.adjusted_mode.crtc_clock;
903 906 }
904 907
···
1305 1300 u32 val;
1306 1301
1307 1302 if (INTEL_INFO(dev)->gen >= 9) {
1308 - for_each_sprite(pipe, sprite) {
1303 + for_each_sprite(dev_priv, pipe, sprite) {
1309 1304 val = I915_READ(PLANE_CTL(pipe, sprite));
1310 1305 I915_STATE_WARN(val & PLANE_CTL_ENABLE,
1311 1306 "plane %d assertion failure, should be off on pipe %c but is still active\n",
1312 1307 sprite, pipe_name(pipe));
1313 1308 }
1314 1309 } else if (IS_VALLEYVIEW(dev)) {
1315 - for_each_sprite(pipe, sprite) {
1310 + for_each_sprite(dev_priv, pipe, sprite) {
1316 1311 reg = SPCNTR(pipe, sprite);
1317 1312 val = I915_READ(reg);
1318 1313 I915_STATE_WARN(val & SP_ENABLE,
···
2538 2533 break;
2539 2534 }
2540 2535 }
2541 -
2542 2536 }
2543 2537
2544 2538 static void i9xx_update_primary_plane(struct drm_crtc *crtc,
···
2658 2654
2659 2655 I915_WRITE(reg, dspcntr);
2660 2656
2661 - DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
2662 - i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
2663 - fb->pitches[0]);
2664 2657 I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
2665 2658 if (INTEL_INFO(dev)->gen >= 4) {
2666 2659 I915_WRITE(DSPSURF(plane),
···
2759 2758
2760 2759 I915_WRITE(reg, dspcntr);
2761 2760
2762 - DRM_DEBUG_KMS("Writing base %08lX %08lX %d %d %d\n",
2763 - i915_gem_obj_ggtt_offset(obj), linear_offset, x, y,
2764 - fb->pitches[0]);
2765 2761 I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
2766 2762 I915_WRITE(DSPSURF(plane),
2767 2763 i915_gem_obj_ggtt_offset(obj) + intel_crtc->dspaddr_offset);
···
2883 2885 fb->pixel_format);
2884 2886
2885 2887 I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);
2886 -
2887 - DRM_DEBUG_KMS("Writing base %08lX %d,%d,%d,%d pitch=%d\n",
2888 - i915_gem_obj_ggtt_offset(obj),
2889 - x, y, fb->width, fb->height,
2890 - fb->pitches[0]);
2891 2888
2892 2889 I915_WRITE(PLANE_POS(pipe, 0), 0);
2893 2890 I915_WRITE(PLANE_OFFSET(pipe, 0), (y << 16) | x);
···
3139 3146 if (IS_IVYBRIDGE(dev))
3140 3147 I915_WRITE(reg, I915_READ(reg) | FDI_FS_ERRC_ENABLE |
3141 3148 FDI_FE_ERRC_ENABLE);
3142 - }
3143 -
3144 - static bool pipe_has_enabled_pch(struct intel_crtc *crtc)
3145 - {
3146 - return crtc->base.state->enable && crtc->active &&
3147 - crtc->config->has_pch_encoder;
3148 - }
3149 -
3150 - static void ivb_modeset_global_resources(struct drm_device *dev)
3151 - {
3152 - struct drm_i915_private *dev_priv = dev->dev_private;
3153 - struct intel_crtc *pipe_B_crtc =
3154 - to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
3155 - struct intel_crtc *pipe_C_crtc =
3156 - to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_C]);
3157 - uint32_t temp;
3158 -
3159 - /*
3160 - * When everything is off disable fdi C so that we could enable fdi B
3161 - * with all lanes. Note that we don't care about enabled pipes without
3162 - * an enabled pch encoder.
3163 - */
3164 - if (!pipe_has_enabled_pch(pipe_B_crtc) &&
3165 - !pipe_has_enabled_pch(pipe_C_crtc)) {
3166 - WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
3167 - WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
3168 -
3169 - temp = I915_READ(SOUTH_CHICKEN1);
3170 - temp &= ~FDI_BC_BIFURCATION_SELECT;
3171 - DRM_DEBUG_KMS("disabling fdi C rx\n");
3172 - I915_WRITE(SOUTH_CHICKEN1, temp);
3173 - }
3174 3149 }
3175 3150
3176 3151 /* The FDI link training functions for ILK/Ibexpeak. */
···
3796 3835 I915_READ(VSYNCSHIFT(cpu_transcoder)));
3797 3836 }
3798 3837
3799 - static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
3838 + static void cpt_set_fdi_bc_bifurcation(struct drm_device *dev, bool enable)
3800 3839 {
3801 3840 struct drm_i915_private *dev_priv = dev->dev_private;
3802 3841 uint32_t temp;
3803 3842
3804 3843 temp = I915_READ(SOUTH_CHICKEN1);
3805 - if (temp & FDI_BC_BIFURCATION_SELECT)
3844 + if (!!(temp & FDI_BC_BIFURCATION_SELECT) == enable)
3806 3845 return;
3807 3846
3808 3847 WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
3809 3848 WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
3810 3849
3811 - temp |= FDI_BC_BIFURCATION_SELECT;
3812 - DRM_DEBUG_KMS("enabling fdi C rx\n");
3850 + temp &= ~FDI_BC_BIFURCATION_SELECT;
3851 + if (enable)
3852 + temp |= FDI_BC_BIFURCATION_SELECT;
3853 +
3854 + DRM_DEBUG_KMS("%sabling fdi C rx\n", enable ? "en" : "dis");
3813 3855 I915_WRITE(SOUTH_CHICKEN1, temp);
3814 3856 POSTING_READ(SOUTH_CHICKEN1);
3815 3857 }
···
3820 3856 static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
3821 3857 {
3822 3858 struct drm_device *dev = intel_crtc->base.dev;
3823 - struct drm_i915_private *dev_priv = dev->dev_private;
3824 3859
3825 3860 switch (intel_crtc->pipe) {
3826 3861 case PIPE_A:
3827 3862 break;
3828 3863 case PIPE_B:
3829 3864 if (intel_crtc->config->fdi_lanes > 2)
3830 - WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
3865 + cpt_set_fdi_bc_bifurcation(dev, false);
3831 3866 else
3832 - cpt_enable_fdi_bc_bifurcation(dev);
3867 + cpt_set_fdi_bc_bifurcation(dev, true);
3833 3868
3834 3869 break;
3835 3870 case PIPE_C:
3836 - cpt_enable_fdi_bc_bifurcation(dev);
3871 + cpt_set_fdi_bc_bifurcation(dev, true);
3837 3872
3838 3873 break;
3839 3874 default:
···
4167 4204 }
4168 4205 }
4169 4206
4207 + /*
4208 + * Disable a plane internally without actually modifying the plane's state.
4209 + * This will allow us to easily restore the plane later by just reprogramming
4210 + * its state.
4211 + */
4212 + static void disable_plane_internal(struct drm_plane *plane)
4213 + {
4214 + struct intel_plane *intel_plane = to_intel_plane(plane);
4215 + struct drm_plane_state *state =
4216 + plane->funcs->atomic_duplicate_state(plane);
4217 + struct intel_plane_state *intel_state = to_intel_plane_state(state);
4218 +
4219 + intel_state->visible = false;
4220 + intel_plane->commit_plane(plane, intel_state);
4221 +
4222 + intel_plane_destroy_state(plane, state);
4223 + }
4224 +
4170 4225 static void intel_disable_sprite_planes(struct drm_crtc *crtc)
4171 4226 {
4172 4227 struct drm_device *dev = crtc->dev;
···
4194 4213
4195 4214 drm_for_each_legacy_plane(plane, &dev->mode_config.plane_list) {
4196 4215 intel_plane = to_intel_plane(plane);
4197 - if (intel_plane->pipe == pipe)
4198 - plane->funcs->disable_plane(plane);
4216 + if (plane->fb && intel_plane->pipe == pipe)
4217 + disable_plane_internal(plane);
4199 4218 }
4200 4219 }
···
4964 4983 WARN_ON(dev_priv->display.get_display_clock_speed(dev) != dev_priv->vlv_cdclk_freq);
4965 4984
4966 4985 switch (cdclk) {
4967 - case 400000:
4968 - cmd = 3;
4969 - break;
4970 4986 case 333333:
4971 4987 case 320000:
4972 - cmd = 2;
4973 - break;
4974 4988 case 266667:
4975 - cmd = 1;
4976 - break;
4977 4989 case 200000:
4978 - cmd = 0;
4979 4990 break;
4980 4991 default:
4981 4992 MISSING_CASE(cdclk);
4982 4993 return;
4983 4994 }
4995 +
4996 + /*
4997 + * Specs are full of misinformation, but testing on actual
4998 + * hardware has shown that we just need to write the desired
4999 + * CCK divider into the Punit register.
5000 + */
5001 + cmd = DIV_ROUND_CLOSEST(dev_priv->hpll_freq << 1, cdclk) - 1;
4984 5002
4985 5003 mutex_lock(&dev_priv->rps.hw_lock);
4986 5004 val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
···
5000 5020 int max_pixclk)
5001 5021 {
5002 5022 int freq_320 = (dev_priv->hpll_freq << 1) % 320000 != 0 ? 333333 : 320000;
5003 -
5004 - /* FIXME: Punit isn't quite ready yet */
5005 - if (IS_CHERRYVIEW(dev_priv->dev))
5006 - return 400000;
5023 + int limit = IS_CHERRYVIEW(dev_priv) ? 95 : 90;
5007 5024
5008 5025 /*
5009 5026 * Really only a few cases to deal with, as only 4 CDclks are supported:
5010 5027 * 200MHz
5011 5028 * 267MHz
5012 5029 * 320/333MHz (depends on HPLL freq)
5013 - * 400MHz
5014 - * So we check to see whether we're above 90% of the lower bin and
5015 - * adjust if needed.
5030 + * 400MHz (VLV only)
5031 + * So we check to see whether we're above 90% (VLV) or 95% (CHV)
5032 + * of the lower bin and adjust if needed.
5016 5033 *
5017 5034 * We seem to get an unstable or solid color picture at 200MHz.
5018 5035 * Not sure what's wrong. For now use 200MHz only when all pipes
5019 5036 * are off.
5020 5037 */
5021 - if (max_pixclk > freq_320*9/10)
5038 + if (!IS_CHERRYVIEW(dev_priv) &&
5039 + max_pixclk > freq_320*limit/100)
5022 5040 return 400000;
5023 - else if (max_pixclk > 266667*9/10)
5041 + else if (max_pixclk > 266667*limit/100)
5024 5042 return freq_320;
5025 5043 else if (max_pixclk > 0)
5026 5044 return 266667;
···
5059 5081 *prepare_pipes |= (1 << intel_crtc->pipe);
5060 5082 }
5061 5083
5084 + static void vlv_program_pfi_credits(struct drm_i915_private *dev_priv)
5085 + {
5086 + unsigned int credits, default_credits;
5087 +
5088 + if (IS_CHERRYVIEW(dev_priv))
5089 + default_credits = PFI_CREDIT(12);
5090 + else
5091 + default_credits = PFI_CREDIT(8);
5092 +
5093 + if (DIV_ROUND_CLOSEST(dev_priv->vlv_cdclk_freq, 1000) >= dev_priv->rps.cz_freq) {
5094 + /* CHV suggested value is 31 or 63 */
5095 + if (IS_CHERRYVIEW(dev_priv))
5096 + credits = PFI_CREDIT_31;
5097 + else
5098 + credits = PFI_CREDIT(15);
5099 + } else {
5100 + credits = default_credits;
5101 + }
5102 +
5103 + /*
5104 + * WA - write default credits before re-programming
5105 + * FIXME: should we also set the resend bit here?
5106 + */
5107 + I915_WRITE(GCI_CONTROL, VGA_FAST_MODE_DISABLE |
5108 + default_credits);
5109 +
5110 + I915_WRITE(GCI_CONTROL, VGA_FAST_MODE_DISABLE |
5111 + credits | PFI_CREDIT_RESEND);
5112 +
5113 + /*
5114 + * FIXME is this guaranteed to clear
5115 + * immediately or should we poll for it?
5116 + */
5117 + WARN_ON(I915_READ(GCI_CONTROL) & PFI_CREDIT_RESEND);
5118 + }
5119 +
5062 5120 static void valleyview_modeset_global_resources(struct drm_device *dev)
5063 5121 {
5064 5122 struct drm_i915_private *dev_priv = dev->dev_private;
···
5117 5103 cherryview_set_cdclk(dev, req_cdclk);
5118 5104 else
5119 5105 valleyview_set_cdclk(dev, req_cdclk);
5106 +
5107 + vlv_program_pfi_credits(dev_priv);
5120 5108
5121 5109 intel_display_power_put(dev_priv, POWER_DOMAIN_PIPE_A);
5122 5110 }
···
5533 5517 return encoder->get_hw_state(encoder, &pipe);
5534 5518 }
5535 5519
5520 + static int pipe_required_fdi_lanes(struct drm_device *dev, enum pipe pipe)
5521 + {
5522 + struct intel_crtc *crtc =
5523 + to_intel_crtc(intel_get_crtc_for_pipe(dev, pipe));
5524 +
5525 + if (crtc->base.state->enable &&
5526 + crtc->config->has_pch_encoder)
5527 + return crtc->config->fdi_lanes;
5528 +
5529 + return 0;
5530 + }
5531 +
5536 5532 static bool ironlake_check_fdi_lanes(struct drm_device *dev, enum pipe pipe,
5537 5533 struct intel_crtc_state *pipe_config)
5538 5534 {
5539 - struct drm_i915_private *dev_priv = dev->dev_private;
5540 - struct intel_crtc *pipe_B_crtc =
5541 - to_intel_crtc(dev_priv->pipe_to_crtc_mapping[PIPE_B]);
5542 -
5543 5535 DRM_DEBUG_KMS("checking fdi config on pipe %c, lanes %i\n",
5544 5536 pipe_name(pipe), pipe_config->fdi_lanes);
5545 5537 if (pipe_config->fdi_lanes > 4) {
···
5574 5550 case PIPE_A:
5575 5551 return true;
5576 5552 case PIPE_B:
5577 - if (dev_priv->pipe_to_crtc_mapping[PIPE_C]->enabled &&
5578 - pipe_config->fdi_lanes > 2) {
5553 + if (pipe_config->fdi_lanes > 2 &&
5554 + pipe_required_fdi_lanes(dev, PIPE_C) > 0) {
5579 5555 DRM_DEBUG_KMS("invalid 
shared fdi lane config on pipe %c: %i lanes\n", 5580 5556 pipe_name(pipe), pipe_config->fdi_lanes); 5581 5557 return false; 5582 5558 } 5583 5559 return true; 5584 5560 case PIPE_C: 5585 - if (!pipe_has_enabled_pch(pipe_B_crtc) || 5586 - pipe_B_crtc->config->fdi_lanes <= 2) { 5587 - if (pipe_config->fdi_lanes > 2) { 5588 - DRM_DEBUG_KMS("invalid shared fdi lane config on pipe %c: %i lanes\n", 5589 - pipe_name(pipe), pipe_config->fdi_lanes); 5590 - return false; 5591 - } 5592 - } else { 5561 + if (pipe_config->fdi_lanes > 2) { 5562 + DRM_DEBUG_KMS("only 2 lanes on pipe %c: required %i lanes\n", 5563 + pipe_name(pipe), pipe_config->fdi_lanes); 5564 + return false; 5565 + } 5566 + if (pipe_required_fdi_lanes(dev, PIPE_B) > 2) { 5593 5567 DRM_DEBUG_KMS("fdi link B uses too many lanes to enable link C\n"); 5594 5568 return false; 5595 5569 } ··· 5720 5698 struct drm_i915_private *dev_priv = dev->dev_private; 5721 5699 u32 val; 5722 5700 int divider; 5723 - 5724 - /* FIXME: Punit isn't quite ready yet */ 5725 - if (IS_CHERRYVIEW(dev)) 5726 - return 400000; 5727 5701 5728 5702 if (dev_priv->hpll_freq == 0) 5729 5703 dev_priv->hpll_freq = valleyview_get_vco(dev_priv); ··· 6162 6144 int pipe = crtc->pipe; 6163 6145 int dpll_reg = DPLL(crtc->pipe); 6164 6146 enum dpio_channel port = vlv_pipe_to_channel(pipe); 6165 - u32 loopfilter, intcoeff; 6147 + u32 loopfilter, tribuf_calcntr; 6166 6148 u32 bestn, bestm1, bestm2, bestp1, bestp2, bestm2_frac; 6167 - int refclk; 6149 + u32 dpio_val; 6150 + int vco; 6168 6151 6169 6152 bestn = pipe_config->dpll.n; 6170 6153 bestm2_frac = pipe_config->dpll.m2 & 0x3fffff; ··· 6173 6154 bestm2 = pipe_config->dpll.m2 >> 22; 6174 6155 bestp1 = pipe_config->dpll.p1; 6175 6156 bestp2 = pipe_config->dpll.p2; 6157 + vco = pipe_config->dpll.vco; 6158 + dpio_val = 0; 6159 + loopfilter = 0; 6176 6160 6177 6161 /* 6178 6162 * Enable Refclk and SSC ··· 6201 6179 1 << DPIO_CHV_N_DIV_SHIFT); 6202 6180 6203 6181 /* M2 fraction division */ 6204 - 
vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW2(port), bestm2_frac); 6182 + if (bestm2_frac) 6183 + vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW2(port), bestm2_frac); 6205 6184 6206 6185 /* M2 fraction division enable */ 6207 - vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW3(port), 6208 - DPIO_CHV_FRAC_DIV_EN | 6209 - (2 << DPIO_CHV_FEEDFWD_GAIN_SHIFT)); 6186 + dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW3(port)); 6187 + dpio_val &= ~(DPIO_CHV_FEEDFWD_GAIN_MASK | DPIO_CHV_FRAC_DIV_EN); 6188 + dpio_val |= (2 << DPIO_CHV_FEEDFWD_GAIN_SHIFT); 6189 + if (bestm2_frac) 6190 + dpio_val |= DPIO_CHV_FRAC_DIV_EN; 6191 + vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW3(port), dpio_val); 6192 + 6193 + /* Program digital lock detect threshold */ 6194 + dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW9(port)); 6195 + dpio_val &= ~(DPIO_CHV_INT_LOCK_THRESHOLD_MASK | 6196 + DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE); 6197 + dpio_val |= (0x5 << DPIO_CHV_INT_LOCK_THRESHOLD_SHIFT); 6198 + if (!bestm2_frac) 6199 + dpio_val |= DPIO_CHV_INT_LOCK_THRESHOLD_SEL_COARSE; 6200 + vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW9(port), dpio_val); 6210 6201 6211 6202 /* Loop filter */ 6212 - refclk = i9xx_get_refclk(crtc, 0); 6213 - loopfilter = 5 << DPIO_CHV_PROP_COEFF_SHIFT | 6214 - 2 << DPIO_CHV_GAIN_CTRL_SHIFT; 6215 - if (refclk == 100000) 6216 - intcoeff = 11; 6217 - else if (refclk == 38400) 6218 - intcoeff = 10; 6219 - else 6220 - intcoeff = 9; 6221 - loopfilter |= intcoeff << DPIO_CHV_INT_COEFF_SHIFT; 6203 + if (vco == 5400000) { 6204 + loopfilter |= (0x3 << DPIO_CHV_PROP_COEFF_SHIFT); 6205 + loopfilter |= (0x8 << DPIO_CHV_INT_COEFF_SHIFT); 6206 + loopfilter |= (0x1 << DPIO_CHV_GAIN_CTRL_SHIFT); 6207 + tribuf_calcntr = 0x9; 6208 + } else if (vco <= 6200000) { 6209 + loopfilter |= (0x5 << DPIO_CHV_PROP_COEFF_SHIFT); 6210 + loopfilter |= (0xB << DPIO_CHV_INT_COEFF_SHIFT); 6211 + loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT); 6212 + tribuf_calcntr = 0x9; 6213 + } else if (vco <= 6480000) { 6214 + 
loopfilter |= (0x4 << DPIO_CHV_PROP_COEFF_SHIFT); 6215 + loopfilter |= (0x9 << DPIO_CHV_INT_COEFF_SHIFT); 6216 + loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT); 6217 + tribuf_calcntr = 0x8; 6218 + } else { 6219 + /* Not supported. Apply the same limits as in the max case */ 6220 + loopfilter |= (0x4 << DPIO_CHV_PROP_COEFF_SHIFT); 6221 + loopfilter |= (0x9 << DPIO_CHV_INT_COEFF_SHIFT); 6222 + loopfilter |= (0x3 << DPIO_CHV_GAIN_CTRL_SHIFT); 6223 + tribuf_calcntr = 0; 6224 + } 6222 6225 vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW6(port), loopfilter); 6226 + 6227 + dpio_val = vlv_dpio_read(dev_priv, pipe, CHV_PLL_DW8(port)); 6228 + dpio_val &= ~DPIO_CHV_TDC_TARGET_CNT_MASK; 6229 + dpio_val |= (tribuf_calcntr << DPIO_CHV_TDC_TARGET_CNT_SHIFT); 6230 + vlv_dpio_write(dev_priv, pipe, CHV_PLL_DW8(port), dpio_val); 6223 6231 6224 6232 /* AFC Recal */ 6225 6233 vlv_dpio_write(dev_priv, pipe, CHV_CMN_DW14(port), ··· 8461 8409 uint32_t cntl = 0, size = 0; 8462 8410 8463 8411 if (base) { 8464 - unsigned int width = intel_crtc->cursor_width; 8465 - unsigned int height = intel_crtc->cursor_height; 8412 + unsigned int width = intel_crtc->base.cursor->state->crtc_w; 8413 + unsigned int height = intel_crtc->base.cursor->state->crtc_h; 8466 8414 unsigned int stride = roundup_pow_of_two(width) * 4; 8467 8415 8468 8416 switch (stride) { ··· 8526 8474 cntl = 0; 8527 8475 if (base) { 8528 8476 cntl = MCURSOR_GAMMA_ENABLE; 8529 - switch (intel_crtc->cursor_width) { 8477 + switch (intel_crtc->base.cursor->state->crtc_w) { 8530 8478 case 64: 8531 8479 cntl |= CURSOR_MODE_64_ARGB_AX; 8532 8480 break; ··· 8537 8485 cntl |= CURSOR_MODE_256_ARGB_AX; 8538 8486 break; 8539 8487 default: 8540 - MISSING_CASE(intel_crtc->cursor_width); 8488 + MISSING_CASE(intel_crtc->base.cursor->state->crtc_w); 8541 8489 return; 8542 8490 } 8543 8491 cntl |= pipe << 28; /* Connect to correct pipe */ ··· 8584 8532 base = 0; 8585 8533 8586 8534 if (x < 0) { 8587 - if (x + intel_crtc->cursor_width <= 0) 8535 + if (x + 
intel_crtc->base.cursor->state->crtc_w <= 0) 8588 8536 base = 0; 8589 8537 8590 8538 pos |= CURSOR_POS_SIGN << CURSOR_X_SHIFT; ··· 8593 8541 pos |= x << CURSOR_X_SHIFT; 8594 8542 8595 8543 if (y < 0) { 8596 - if (y + intel_crtc->cursor_height <= 0) 8544 + if (y + intel_crtc->base.cursor->state->crtc_h <= 0) 8597 8545 base = 0; 8598 8546 8599 8547 pos |= CURSOR_POS_SIGN << CURSOR_Y_SHIFT; ··· 8609 8557 /* ILK+ do this automagically */ 8610 8558 if (HAS_GMCH_DISPLAY(dev) && 8611 8559 crtc->cursor->state->rotation == BIT(DRM_ROTATE_180)) { 8612 - base += (intel_crtc->cursor_height * 8613 - intel_crtc->cursor_width - 1) * 4; 8560 + base += (intel_crtc->base.cursor->state->crtc_h * 8561 + intel_crtc->base.cursor->state->crtc_w - 1) * 4; 8614 8562 } 8615 8563 8616 8564 if (IS_845G(dev) || IS_I865G(dev)) ··· 9271 9219 mutex_lock(&dev->struct_mutex); 9272 9220 intel_unpin_fb_obj(intel_fb_obj(work->old_fb)); 9273 9221 drm_gem_object_unreference(&work->pending_flip_obj->base); 9274 - drm_framebuffer_unreference(work->old_fb); 9275 9222 9276 9223 intel_fbc_update(dev); 9277 9224 ··· 9279 9228 mutex_unlock(&dev->struct_mutex); 9280 9229 9281 9230 intel_frontbuffer_flip_complete(dev, INTEL_FRONTBUFFER_PRIMARY(pipe)); 9231 + drm_framebuffer_unreference(work->old_fb); 9282 9232 9283 9233 BUG_ON(atomic_read(&to_intel_crtc(work->crtc)->unpin_work_count) == 0); 9284 9234 atomic_dec(&to_intel_crtc(work->crtc)->unpin_work_count); ··· 9851 9799 struct drm_crtc *crtc = dev_priv->pipe_to_crtc_mapping[pipe]; 9852 9800 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 9853 9801 9854 - WARN_ON(!in_irq()); 9802 + WARN_ON(!in_interrupt()); 9855 9803 9856 9804 if (crtc == NULL) 9857 9805 return; ··· 9943 9891 if (atomic_read(&intel_crtc->unpin_work_count) >= 2) 9944 9892 flush_workqueue(dev_priv->wq); 9945 9893 9946 - ret = i915_mutex_lock_interruptible(dev); 9947 - if (ret) 9948 - goto cleanup; 9949 - 9950 9894 /* Reference the objects for the scheduled work. 
*/ 9951 9895 drm_framebuffer_reference(work->old_fb); 9952 9896 drm_gem_object_reference(&obj->base); ··· 9951 9903 update_state_fb(crtc->primary); 9952 9904 9953 9905 work->pending_flip_obj = obj; 9906 + 9907 + ret = i915_mutex_lock_interruptible(dev); 9908 + if (ret) 9909 + goto cleanup; 9954 9910 9955 9911 atomic_inc(&intel_crtc->unpin_work_count); 9956 9912 intel_crtc->reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter); ··· 10020 9968 intel_unpin_fb_obj(obj); 10021 9969 cleanup_pending: 10022 9970 atomic_dec(&intel_crtc->unpin_work_count); 9971 + mutex_unlock(&dev->struct_mutex); 9972 + cleanup: 10023 9973 crtc->primary->fb = old_fb; 10024 9974 update_state_fb(crtc->primary); 10025 - drm_framebuffer_unreference(work->old_fb); 10026 - drm_gem_object_unreference(&obj->base); 10027 - mutex_unlock(&dev->struct_mutex); 10028 9975 10029 - cleanup: 9976 + drm_gem_object_unreference_unlocked(&obj->base); 9977 + drm_framebuffer_unreference(work->old_fb); 9978 + 10030 9979 spin_lock_irq(&dev->event_lock); 10031 9980 intel_crtc->unpin_work = NULL; 10032 9981 spin_unlock_irq(&dev->event_lock); ··· 10067 10014 struct intel_encoder *encoder; 10068 10015 struct intel_connector *connector; 10069 10016 10070 - list_for_each_entry(connector, &dev->mode_config.connector_list, 10071 - base.head) { 10017 + for_each_intel_connector(dev, connector) { 10072 10018 connector->new_encoder = 10073 10019 to_intel_encoder(connector->base.encoder); 10074 10020 } ··· 10098 10046 struct intel_encoder *encoder; 10099 10047 struct intel_connector *connector; 10100 10048 10101 - list_for_each_entry(connector, &dev->mode_config.connector_list, 10102 - base.head) { 10049 + for_each_intel_connector(dev, connector) { 10103 10050 connector->base.encoder = &connector->new_encoder->base; 10104 10051 } 10105 10052 ··· 10186 10135 pipe_config->pipe_bpp = bpp; 10187 10136 10188 10137 /* Clamp display bpp to EDID value */ 10189 - list_for_each_entry(connector, &dev->mode_config.connector_list, 
10190 - base.head) { 10138 + for_each_intel_connector(dev, connector) { 10191 10139 if (!connector->new_encoder || 10192 10140 connector->new_encoder->new_crtc != crtc) 10193 10141 continue; ··· 10313 10263 * list to detect the problem on ddi platforms 10314 10264 * where there's just one encoder per digital port. 10315 10265 */ 10316 - list_for_each_entry(connector, 10317 - &dev->mode_config.connector_list, base.head) { 10266 + for_each_intel_connector(dev, connector) { 10318 10267 struct intel_encoder *encoder = connector->new_encoder; 10319 10268 10320 10269 if (!encoder) ··· 10486 10437 * to be part of the prepare_pipes mask. We don't (yet) support global 10487 10438 * modeset across multiple crtcs, so modeset_pipes will only have one 10488 10439 * bit set at most. */ 10489 - list_for_each_entry(connector, &dev->mode_config.connector_list, 10490 - base.head) { 10440 + for_each_intel_connector(dev, connector) { 10491 10441 if (connector->base.encoder == &connector->new_encoder->base) 10492 10442 continue; 10493 10443 ··· 10855 10807 continue; 10856 10808 10857 10809 /* planes */ 10858 - for_each_plane(pipe, plane) { 10810 + for_each_plane(dev_priv, pipe, plane) { 10859 10811 hw_entry = &hw_ddb.plane[pipe][plane]; 10860 10812 sw_entry = &sw_ddb->plane[pipe][plane]; 10861 10813 ··· 10889 10841 { 10890 10842 struct intel_connector *connector; 10891 10843 10892 - list_for_each_entry(connector, &dev->mode_config.connector_list, 10893 - base.head) { 10844 + for_each_intel_connector(dev, connector) { 10894 10845 /* This also checks the encoder/connector hw state with the 10895 10846 * ->get_hw_state callbacks. 
*/ 10896 10847 intel_connector_check_state(connector); ··· 10919 10872 I915_STATE_WARN(encoder->connectors_active && !encoder->base.crtc, 10920 10873 "encoder's active_connectors set, but no crtc\n"); 10921 10874 10922 - list_for_each_entry(connector, &dev->mode_config.connector_list, 10923 - base.head) { 10875 + for_each_intel_connector(dev, connector) { 10924 10876 if (connector->base.encoder != &encoder->base) 10925 10877 continue; 10926 10878 enabled = true; ··· 11440 11394 } 11441 11395 11442 11396 count = 0; 11443 - list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) { 11397 + for_each_intel_connector(dev, connector) { 11444 11398 connector->new_encoder = 11445 11399 to_intel_encoder(config->save_connector_encoders[count++]); 11446 11400 } ··· 11532 11486 WARN_ON(!set->fb && (set->num_connectors != 0)); 11533 11487 WARN_ON(set->fb && (set->num_connectors == 0)); 11534 11488 11535 - list_for_each_entry(connector, &dev->mode_config.connector_list, 11536 - base.head) { 11489 + for_each_intel_connector(dev, connector) { 11537 11490 /* Otherwise traverse passed in connector list and get encoders 11538 11491 * for them. */ 11539 11492 for (ro = 0; ro < set->num_connectors; ro++) { ··· 11557 11512 11558 11513 11559 11514 if (&connector->new_encoder->base != connector->base.encoder) { 11560 - DRM_DEBUG_KMS("encoder changed, full mode switch\n"); 11515 + DRM_DEBUG_KMS("[CONNECTOR:%d:%s] encoder changed, full mode switch\n", 11516 + connector->base.base.id, 11517 + connector->base.name); 11561 11518 config->mode_changed = true; 11562 11519 } 11563 11520 } 11564 11521 /* connector->new_encoder is now updated for all connectors. */ 11565 11522 11566 11523 /* Update crtc of enabled connectors. 
*/ 11567 - list_for_each_entry(connector, &dev->mode_config.connector_list, 11568 - base.head) { 11524 + for_each_intel_connector(dev, connector) { 11569 11525 struct drm_crtc *new_crtc; 11570 11526 11571 11527 if (!connector->new_encoder) ··· 11595 11549 /* Check for any encoders that needs to be disabled. */ 11596 11550 for_each_intel_encoder(dev, encoder) { 11597 11551 int num_connectors = 0; 11598 - list_for_each_entry(connector, 11599 - &dev->mode_config.connector_list, 11600 - base.head) { 11552 + for_each_intel_connector(dev, connector) { 11601 11553 if (connector->new_encoder == encoder) { 11602 11554 WARN_ON(!connector->new_encoder->new_crtc); 11603 11555 num_connectors++; ··· 11610 11566 /* Only now check for crtc changes so we don't miss encoders 11611 11567 * that will be disabled. */ 11612 11568 if (&encoder->new_crtc->base != encoder->base.crtc) { 11613 - DRM_DEBUG_KMS("crtc changed, full mode switch\n"); 11569 + DRM_DEBUG_KMS("[ENCODER:%d:%s] crtc changed, full mode switch\n", 11570 + encoder->base.base.id, 11571 + encoder->base.name); 11614 11572 config->mode_changed = true; 11615 11573 } 11616 11574 } 11617 11575 /* Now we've also updated encoder->new_crtc for all encoders. */ 11618 - list_for_each_entry(connector, &dev->mode_config.connector_list, 11619 - base.head) { 11576 + for_each_intel_connector(dev, connector) { 11620 11577 if (connector->new_encoder) 11621 11578 if (connector->new_encoder != connector->encoder) 11622 11579 connector->encoder = connector->new_encoder; ··· 11633 11588 } 11634 11589 11635 11590 if (crtc->new_enabled != crtc->base.state->enable) { 11636 - DRM_DEBUG_KMS("crtc %sabled, full mode switch\n", 11591 + DRM_DEBUG_KMS("[CRTC:%d] %sabled, full mode switch\n", 11592 + crtc->base.base.id, 11637 11593 crtc->new_enabled ? 
"en" : "dis"); 11638 11594 config->mode_changed = true; 11639 11595 } ··· 11657 11611 DRM_DEBUG_KMS("Trying to restore without FB -> disabling pipe %c\n", 11658 11612 pipe_name(crtc->pipe)); 11659 11613 11660 - list_for_each_entry(connector, &dev->mode_config.connector_list, base.head) { 11614 + for_each_intel_connector(dev, connector) { 11661 11615 if (connector->new_encoder && 11662 11616 connector->new_encoder->new_crtc == crtc) 11663 11617 connector->new_encoder = NULL; ··· 12228 12182 } 12229 12183 12230 12184 const struct drm_plane_funcs intel_plane_funcs = { 12231 - .update_plane = drm_atomic_helper_update_plane, 12232 - .disable_plane = drm_atomic_helper_disable_plane, 12185 + .update_plane = drm_plane_helper_update, 12186 + .disable_plane = drm_plane_helper_disable, 12233 12187 .destroy = intel_plane_destroy, 12234 12188 .set_property = drm_atomic_helper_plane_set_property, 12235 12189 .atomic_get_property = intel_plane_atomic_get_property, ··· 12348 12302 12349 12303 finish: 12350 12304 if (intel_crtc->active) { 12351 - if (intel_crtc->cursor_width != state->base.crtc_w) 12305 + if (plane->state->crtc_w != state->base.crtc_w) 12352 12306 intel_crtc->atomic.update_wm = true; 12353 12307 12354 12308 intel_crtc->atomic.fb_bits |= ··· 12391 12345 intel_crtc->cursor_addr = addr; 12392 12346 intel_crtc->cursor_bo = obj; 12393 12347 update: 12394 - intel_crtc->cursor_width = state->base.crtc_w; 12395 - intel_crtc->cursor_height = state->base.crtc_h; 12396 12348 12397 12349 if (intel_crtc->active) 12398 12350 intel_crtc_update_cursor(crtc, state->visible); ··· 12618 12574 if (HAS_DDI(dev)) { 12619 12575 int found; 12620 12576 12621 - /* Haswell uses DDI functions to detect digital outputs */ 12577 + /* 12578 + * Haswell uses DDI functions to detect digital outputs. 12579 + * On SKL pre-D0 the strap isn't connected, so we assume 12580 + * it's there. 
12581 + */ 12622 12582 found = I915_READ(DDI_BUF_CTL_A) & DDI_INIT_DISPLAY_DETECTED; 12623 - /* DDI A only supports eDP */ 12624 - if (found) 12583 + /* WaIgnoreDDIAStrap: skl */ 12584 + if (found || 12585 + (IS_SKYLAKE(dev) && INTEL_REVID(dev) < SKL_REVID_D0)) 12625 12586 intel_ddi_init(dev, PORT_A); 12626 12587 12627 12588 /* DDI B, C and D detection is indicated by the SFUSE_STRAP ··· 13117 13068 } else if (IS_IVYBRIDGE(dev)) { 13118 13069 /* FIXME: detect B0+ stepping and use auto training */ 13119 13070 dev_priv->display.fdi_link_train = ivb_manual_fdi_link_train; 13120 - dev_priv->display.modeset_global_resources = 13121 - ivb_modeset_global_resources; 13122 13071 } else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) { 13123 13072 dev_priv->display.fdi_link_train = hsw_fdi_link_train; 13124 13073 } else if (IS_VALLEYVIEW(dev)) { ··· 13412 13365 13413 13366 for_each_pipe(dev_priv, pipe) { 13414 13367 intel_crtc_init(dev, pipe); 13415 - for_each_sprite(pipe, sprite) { 13368 + for_each_sprite(dev_priv, pipe, sprite) { 13416 13369 ret = intel_plane_init(dev, pipe, sprite); 13417 13370 if (ret) 13418 13371 DRM_DEBUG_KMS("pipe %c sprite %c init failed: %d\n", ··· 13468 13421 /* We can't just switch on the pipe A, we need to set things up with a 13469 13422 * proper mode and output configuration. As a gross hack, enable pipe A 13470 13423 * by enabling the load detect pipe once. */ 13471 - list_for_each_entry(connector, 13472 - &dev->mode_config.connector_list, 13473 - base.head) { 13424 + for_each_intel_connector(dev, connector) { 13474 13425 if (connector->encoder->type == INTEL_OUTPUT_ANALOG) { 13475 13426 crt = &connector->base; 13476 13427 break; ··· 13539 13494 crtc->plane = plane; 13540 13495 13541 13496 /* ... and break all links. 
*/ 13542 - list_for_each_entry(connector, &dev->mode_config.connector_list, 13543 - base.head) { 13497 + for_each_intel_connector(dev, connector) { 13544 13498 if (connector->encoder->base.crtc != &crtc->base) 13545 13499 continue; 13546 13500 ··· 13548 13504 } 13549 13505 /* multiple connectors may have the same encoder: 13550 13506 * handle them and break crtc link separately */ 13551 - list_for_each_entry(connector, &dev->mode_config.connector_list, 13552 - base.head) 13507 + for_each_intel_connector(dev, connector) 13553 13508 if (connector->encoder->base.crtc == &crtc->base) { 13554 13509 connector->encoder->base.crtc = NULL; 13555 13510 connector->encoder->connectors_active = false; ··· 13652 13609 * a bug in one of the get_hw_state functions. Or someplace else 13653 13610 * in our code, like the register restore mess on resume. Clamp 13654 13611 * things to off as a safer default. */ 13655 - list_for_each_entry(connector, 13656 - &dev->mode_config.connector_list, 13657 - base.head) { 13612 + for_each_intel_connector(dev, connector) { 13658 13613 if (connector->encoder != encoder) 13659 13614 continue; 13660 13615 connector->base.dpms = DRM_MODE_DPMS_OFF; ··· 13767 13726 pipe_name(pipe)); 13768 13727 } 13769 13728 13770 - list_for_each_entry(connector, &dev->mode_config.connector_list, 13771 - base.head) { 13729 + for_each_intel_connector(dev, connector) { 13772 13730 if (connector->get_hw_state(connector)) { 13773 13731 connector->base.dpms = DRM_MODE_DPMS_ON; 13774 13732 connector->encoder->connectors_active = true; ··· 13946 13906 intel_unregister_dsm_handler(); 13947 13907 13948 13908 intel_fbc_disable(dev); 13949 - 13950 - ironlake_teardown_rc6(dev); 13951 13909 13952 13910 mutex_unlock(&dev->struct_mutex); 13953 13911
+169 -18
drivers/gpu/drm/i915/intel_dp.c
··· 84 84 { DP_LINK_BW_5_4, /* m2_int = 27, m2_fraction = 0 */ 85 85 { .p1 = 2, .p2 = 1, .n = 1, .m1 = 2, .m2 = 0x6c00000 } } 86 86 }; 87 + /* Skylake supports following rates */ 88 + static const uint32_t gen9_rates[] = { 162000, 216000, 270000, 324000, 89 + 432000, 540000 }; 90 + 91 + static const uint32_t default_rates[] = { 162000, 270000, 540000 }; 87 92 88 93 /** 89 94 * is_edp - is the given port attached to an eDP panel (either CPU or PCH) ··· 134 129 case DP_LINK_BW_2_7: 135 130 break; 136 131 case DP_LINK_BW_5_4: /* 1.2 capable displays may advertise higher bw */ 137 - if (((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || 132 + if (IS_SKYLAKE(dev) && INTEL_REVID(dev) <= SKL_REVID_B0) 133 + /* WaDisableHBR2:skl */ 134 + max_link_bw = DP_LINK_BW_2_7; 135 + else if (((IS_HASWELL(dev) && !IS_HSW_ULX(dev)) || 138 136 INTEL_INFO(dev)->gen >= 8) && 139 137 intel_dp->dpcd[DP_DPCD_REV] >= 0x12) 140 138 max_link_bw = DP_LINK_BW_5_4; ··· 1083 1075 } 1084 1076 1085 1077 static void 1086 - skl_edp_set_pll_config(struct intel_crtc_state *pipe_config, int link_bw) 1078 + skl_edp_set_pll_config(struct intel_crtc_state *pipe_config, int link_clock) 1087 1079 { 1088 1080 u32 ctrl1; 1089 1081 ··· 1092 1084 pipe_config->dpll_hw_state.cfgcr2 = 0; 1093 1085 1094 1086 ctrl1 = DPLL_CTRL1_OVERRIDE(SKL_DPLL0); 1095 - switch (link_bw) { 1096 - case DP_LINK_BW_1_62: 1087 + switch (link_clock / 2) { 1088 + case 81000: 1097 1089 ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_810, 1098 1090 SKL_DPLL0); 1099 1091 break; 1100 - case DP_LINK_BW_2_7: 1092 + case 135000: 1101 1093 ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1350, 1102 1094 SKL_DPLL0); 1103 1095 break; 1104 - case DP_LINK_BW_5_4: 1096 + case 270000: 1105 1097 ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_2700, 1106 1098 SKL_DPLL0); 1107 1099 break; 1100 + case 162000: 1101 + ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1620, 1102 + SKL_DPLL0); 1103 + break; 1104 + /* TBD: For DP link rates 2.16 GHz and 4.32 
GHz, VCO is 8640 which 1105 + results in CDCLK change. Need to handle the change of CDCLK by 1106 + disabling pipes and re-enabling them */ 1107 + case 108000: 1108 + ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_1080, 1109 + SKL_DPLL0); 1110 + break; 1111 + case 216000: 1112 + ctrl1 |= DPLL_CRTL1_LINK_RATE(DPLL_CRTL1_LINK_RATE_2160, 1113 + SKL_DPLL0); 1114 + break; 1115 + 1108 1116 } 1109 1117 pipe_config->dpll_hw_state.ctrl1 = ctrl1; 1110 1118 } ··· 1139 1115 pipe_config->ddi_pll_sel = PORT_CLK_SEL_LCPLL_2700; 1140 1116 break; 1141 1117 } 1118 + } 1119 + 1120 + static int 1121 + intel_read_sink_rates(struct intel_dp *intel_dp, uint32_t *sink_rates) 1122 + { 1123 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 1124 + int i = 0; 1125 + uint16_t val; 1126 + 1127 + if (INTEL_INFO(dev)->gen >= 9 && intel_dp->supported_rates[0]) { 1128 + /* 1129 + * Receiver supports only main-link rate selection by 1130 + * link rate table method, so read link rates from 1131 + * supported_link_rates 1132 + */ 1133 + for (i = 0; i < DP_MAX_SUPPORTED_RATES; ++i) { 1134 + val = le16_to_cpu(intel_dp->supported_rates[i]); 1135 + if (val == 0) 1136 + break; 1137 + 1138 + sink_rates[i] = val * 200; 1139 + } 1140 + 1141 + if (i <= 0) 1142 + DRM_ERROR("No rates in SUPPORTED_LINK_RATES"); 1143 + } 1144 + return i; 1145 + } 1146 + 1147 + static int 1148 + intel_read_source_rates(struct intel_dp *intel_dp, uint32_t *source_rates) 1149 + { 1150 + struct drm_device *dev = intel_dp_to_dev(intel_dp); 1151 + int i; 1152 + int max_default_rate; 1153 + 1154 + if (INTEL_INFO(dev)->gen >= 9 && intel_dp->supported_rates[0]) { 1155 + for (i = 0; i < ARRAY_SIZE(gen9_rates); ++i) 1156 + source_rates[i] = gen9_rates[i]; 1157 + } else { 1158 + /* Index of the max_link_bw supported + 1 */ 1159 + max_default_rate = (intel_dp_max_link_bw(intel_dp) >> 3) + 1; 1160 + for (i = 0; i < max_default_rate; ++i) 1161 + source_rates[i] = default_rates[i]; 1162 + } 1163 + return i; 1142 1164 } 1143 1165 1144 1166 
static void ··· 1220 1150 } 1221 1151 } 1222 1152 1153 + static int intel_supported_rates(const uint32_t *source_rates, int source_len, 1154 + const uint32_t *sink_rates, int sink_len, uint32_t *supported_rates) 1155 + { 1156 + int i = 0, j = 0, k = 0; 1157 + 1158 + /* For panels with edp version less than 1.4 */ 1159 + if (sink_len == 0) { 1160 + for (i = 0; i < source_len; ++i) 1161 + supported_rates[i] = source_rates[i]; 1162 + return source_len; 1163 + } 1164 + 1165 + /* For edp1.4 panels, find the common rates between source and sink */ 1166 + while (i < source_len && j < sink_len) { 1167 + if (source_rates[i] == sink_rates[j]) { 1168 + supported_rates[k] = source_rates[i]; 1169 + ++k; 1170 + ++i; 1171 + ++j; 1172 + } else if (source_rates[i] < sink_rates[j]) { 1173 + ++i; 1174 + } else { 1175 + ++j; 1176 + } 1177 + } 1178 + return k; 1179 + } 1180 + 1181 + static int rate_to_index(uint32_t find, const uint32_t *rates) 1182 + { 1183 + int i = 0; 1184 + 1185 + for (i = 0; i < DP_MAX_SUPPORTED_RATES; ++i) 1186 + if (find == rates[i]) 1187 + break; 1188 + 1189 + return i; 1190 + } 1191 + 1223 1192 bool 1224 1193 intel_dp_compute_config(struct intel_encoder *encoder, 1225 1194 struct intel_crtc_state *pipe_config) ··· 1275 1166 int max_lane_count = intel_dp_max_lane_count(intel_dp); 1276 1167 /* Conveniently, the link BW constants become indices with a shift...*/ 1277 1168 int min_clock = 0; 1278 - int max_clock = intel_dp_max_link_bw(intel_dp) >> 3; 1169 + int max_clock; 1279 1170 int bpp, mode_rate; 1280 - static int bws[] = { DP_LINK_BW_1_62, DP_LINK_BW_2_7, DP_LINK_BW_5_4 }; 1281 1171 int link_avail, link_clock; 1172 + uint32_t sink_rates[8]; 1173 + uint32_t supported_rates[8] = {0}; 1174 + uint32_t source_rates[8]; 1175 + int source_len, sink_len, supported_len; 1176 + 1177 + sink_len = intel_read_sink_rates(intel_dp, sink_rates); 1178 + 1179 + source_len = intel_read_source_rates(intel_dp, source_rates); 1180 + 1181 + supported_len = 
+	intel_supported_rates(source_rates, source_len,
+			      sink_rates, sink_len, supported_rates);
+
+	/* No common link rates between source and sink */
+	WARN_ON(supported_len <= 0);
+
+	max_clock = supported_len - 1;

 	if (HAS_PCH_SPLIT(dev) && !HAS_DDI(dev) && port != PORT_A)
 		pipe_config->has_pch_encoder = true;
···
 		return false;

 	DRM_DEBUG_KMS("DP link computation with max lane count %i "
-		      "max bw %02x pixel clock %iKHz\n",
-		      max_lane_count, bws[max_clock],
+		      "max bw %d pixel clock %iKHz\n",
+		      max_lane_count, supported_rates[max_clock],
 		      adjusted_mode->crtc_clock);

 	/* Walk through all bpp values. Luckily they're all nicely spaced with 2
···
 			   bpp);

 	for (clock = min_clock; clock <= max_clock; clock++) {
-		for (lane_count = min_lane_count; lane_count <= max_lane_count; lane_count <<= 1) {
-			link_clock = drm_dp_bw_code_to_link_rate(bws[clock]);
+		for (lane_count = min_lane_count;
+		     lane_count <= max_lane_count;
+		     lane_count <<= 1) {
+
+			link_clock = supported_rates[clock];
 			link_avail = intel_dp_max_data_rate(link_clock,
 							    lane_count);
···
 	if (intel_dp->color_range)
 		pipe_config->limited_color_range = true;

-	intel_dp->link_bw = bws[clock];
 	intel_dp->lane_count = lane_count;
+
+	intel_dp->link_bw =
+		drm_dp_link_rate_to_bw_code(supported_rates[clock]);
+
+	if (INTEL_INFO(dev)->gen >= 9 && intel_dp->supported_rates[0]) {
+		intel_dp->rate_select =
+			rate_to_index(supported_rates[clock], sink_rates);
+		intel_dp->link_bw = 0;
+	}
+
 	pipe_config->pipe_bpp = bpp;
-	pipe_config->port_clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);
+	pipe_config->port_clock = supported_rates[clock];

 	DRM_DEBUG_KMS("DP link bw %02x lane count %d clock %d bpp %d\n",
 		      intel_dp->link_bw, intel_dp->lane_count,
···
 	}

 	if (IS_SKYLAKE(dev) && is_edp(intel_dp))
-		skl_edp_set_pll_config(pipe_config, intel_dp->link_bw);
+		skl_edp_set_pll_config(pipe_config, supported_rates[clock]);
 	else if (IS_HASWELL(dev) || IS_BROADWELL(dev))
 		hsw_dp_set_ddi_pll_sel(pipe_config, intel_dp->link_bw);
 	else
···
 	if (drm_dp_enhanced_frame_cap(intel_dp->dpcd))
 		link_config[1] |= DP_LANE_COUNT_ENHANCED_FRAME_EN;
 	drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_BW_SET, link_config, 2);
+	if (INTEL_INFO(dev)->gen >= 9 && intel_dp->supported_rates[0])
+		drm_dp_dpcd_write(&intel_dp->aux, DP_LINK_RATE_SET,
+				  &intel_dp->rate_select, 1);

 	link_config[0] = 0;
 	link_config[1] = DP_SET_ANSI_8B10B;
···
 	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
 	struct drm_device *dev = dig_port->base.base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
+	uint8_t rev;

 	if (intel_dp_dpcd_read_wake(&intel_dp->aux, 0x000, intel_dp->dpcd,
 				    sizeof(intel_dp->dpcd)) < 0)
···
 	} else
 		intel_dp->use_tps3 = false;

+	/* Intermediate frequency support */
+	if (is_edp(intel_dp) &&
+	    (intel_dp->dpcd[DP_EDP_CONFIGURATION_CAP] & DP_DPCD_DISPLAY_CONTROL_CAPABLE) &&
+	    (intel_dp_dpcd_read_wake(&intel_dp->aux, DP_EDP_DPCD_REV, &rev, 1) == 1) &&
+	    (rev >= 0x03)) { /* eDp v1.4 or higher */
+		intel_dp_dpcd_read_wake(&intel_dp->aux,
+				DP_SUPPORTED_LINK_RATES,
+				intel_dp->supported_rates,
+				sizeof(intel_dp->supported_rates));
+	}
 	if (!(intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] &
 	      DP_DWN_STRM_PORT_PRESENT))
 		return true; /* native DP sink */
···
 	if (!dev_priv->drrs.dp)
 		return;

+	cancel_delayed_work_sync(&dev_priv->drrs.work);
+
 	mutex_lock(&dev_priv->drrs.mutex);
 	crtc = dp_to_dig_port(dev_priv->drrs.dp)->base.base.crtc;
 	pipe = to_intel_crtc(crtc)->pipe;

 	if (dev_priv->drrs.refresh_rate_type == DRRS_LOW_RR) {
-		cancel_delayed_work_sync(&dev_priv->drrs.work);
 		intel_dp_set_drrs_state(dev_priv->dev,
 			dev_priv->drrs.dp->attached_connector->panel.
 			fixed_mode->vrefresh);
···
 	if (!dev_priv->drrs.dp)
 		return;

+	cancel_delayed_work_sync(&dev_priv->drrs.work);
+
 	mutex_lock(&dev_priv->drrs.mutex);
 	crtc = dp_to_dig_port(dev_priv->drrs.dp)->base.base.crtc;
 	pipe = to_intel_crtc(crtc)->pipe;
 	dev_priv->drrs.busy_frontbuffer_bits &= ~frontbuffer_bits;
-
-	cancel_delayed_work_sync(&dev_priv->drrs.work);

 	if (dev_priv->drrs.refresh_rate_type != DRRS_LOW_RR &&
 	    !dev_priv->drrs.busy_frontbuffer_bits)
+2 -2
drivers/gpu/drm/i915/intel_dp_mst.c
···
 	pipe_config->pipe_bpp = 24;
 	pipe_config->port_clock = drm_dp_bw_code_to_link_rate(intel_dp->link_bw);

-	list_for_each_entry(intel_connector, &dev->mode_config.connector_list, base.head) {
+	for_each_intel_connector(dev, intel_connector) {
 		if (intel_connector->new_encoder == encoder) {
 			found = intel_connector;
 			break;
···
 	struct drm_crtc *crtc = encoder->base.crtc;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);

-	list_for_each_entry(intel_connector, &dev->mode_config.connector_list, base.head) {
+	for_each_intel_connector(dev, intel_connector) {
 		if (intel_connector->new_encoder == encoder) {
 			found = intel_connector;
 			break;
+11 -5
drivers/gpu/drm/i915/intel_drv.h
···
 	struct drm_i915_gem_object *cursor_bo;
 	uint32_t cursor_addr;
-	int16_t cursor_width, cursor_height;
 	uint32_t cursor_cntl;
 	uint32_t cursor_size;
 	uint32_t cursor_base;
···
 	uint32_t color_range;
 	bool color_range_auto;
 	uint8_t link_bw;
+	uint8_t rate_select;
 	uint8_t lane_count;
 	uint8_t dpcd[DP_RECEIVER_CAP_SIZE];
 	uint8_t psr_dpcd[EDP_PSR_RECEIVER_CAP_SIZE];
 	uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
+	__le16 supported_rates[DP_MAX_SUPPORTED_RATES];
 	struct drm_dp_aux aux;
 	uint8_t train_set[4];
 	int panel_power_up_delay;
···
 }

 int intel_get_crtc_scanline(struct intel_crtc *crtc);
-void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv);
+void gen8_irq_power_well_post_enable(struct drm_i915_private *dev_priv,
+				     unsigned int pipe_mask);

 /* intel_crt.c */
 void intel_crt_init(struct drm_device *dev);
···
 /* intel_frontbuffer.c */
 void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
-			     struct intel_engine_cs *ring);
+			     struct intel_engine_cs *ring,
+			     enum fb_op_origin origin);
 void intel_frontbuffer_flip_prepare(struct drm_device *dev,
 				    unsigned frontbuffer_bits);
 void intel_frontbuffer_flip_complete(struct drm_device *dev,
···
 void intel_fbc_update(struct drm_device *dev);
 void intel_fbc_init(struct drm_i915_private *dev_priv);
 void intel_fbc_disable(struct drm_device *dev);
-void bdw_fbc_sw_flush(struct drm_device *dev, u32 value);
+void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
+			  unsigned int frontbuffer_bits,
+			  enum fb_op_origin origin);
+void intel_fbc_flush(struct drm_i915_private *dev_priv,
+		     unsigned int frontbuffer_bits);

 /* intel_hdmi.c */
 void intel_hdmi_init(struct drm_device *dev, int hdmi_reg, enum port port);
···
 void intel_disable_gt_powersave(struct drm_device *dev);
 void intel_suspend_gt_powersave(struct drm_device *dev);
 void intel_reset_gt_powersave(struct drm_device *dev);
-void ironlake_teardown_rc6(struct drm_device *dev);
 void gen6_update_ring_freq(struct drm_device *dev);
 void gen6_rps_idle(struct drm_i915_private *dev_priv);
 void gen6_rps_boost(struct drm_i915_private *dev_priv);
+54 -37
drivers/gpu/drm/i915/intel_fbc.c
···
 	return I915_READ(DPFC_CONTROL) & DPFC_CTL_EN;
 }

-static void snb_fbc_blit_update(struct drm_device *dev)
+static void intel_fbc_nuke(struct drm_i915_private *dev_priv)
 {
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	u32 blt_ecoskpd;
-
-	/* Make sure blitter notifies FBC of writes */
-
-	/* Blitter is part of Media powerwell on VLV. No impact of
-	 * his param in other platforms for now */
-	intel_uncore_forcewake_get(dev_priv, FORCEWAKE_MEDIA);
-
-	blt_ecoskpd = I915_READ(GEN6_BLITTER_ECOSKPD);
-	blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY <<
-		GEN6_BLITTER_LOCK_SHIFT;
-	I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
-	blt_ecoskpd |= GEN6_BLITTER_FBC_NOTIFY;
-	I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
-	blt_ecoskpd &= ~(GEN6_BLITTER_FBC_NOTIFY <<
-			 GEN6_BLITTER_LOCK_SHIFT);
-	I915_WRITE(GEN6_BLITTER_ECOSKPD, blt_ecoskpd);
-	POSTING_READ(GEN6_BLITTER_ECOSKPD);
-
-	intel_uncore_forcewake_put(dev_priv, FORCEWAKE_MEDIA);
+	I915_WRITE(MSG_FBC_REND_STATE, FBC_REND_NUKE);
+	POSTING_READ(MSG_FBC_REND_STATE);
 }

 static void ilk_fbc_enable(struct drm_crtc *crtc)
···
 		I915_WRITE(SNB_DPFC_CTL_SA,
 			   SNB_CPU_FENCE_ENABLE | obj->fence_reg);
 		I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);
-		snb_fbc_blit_update(dev);
 	}
+
+	intel_fbc_nuke(dev_priv);

 	DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane));
 }
···
 		   SNB_CPU_FENCE_ENABLE | obj->fence_reg);
 	I915_WRITE(DPFC_CPU_FENCE_OFFSET, crtc->y);

-	snb_fbc_blit_update(dev);
+	intel_fbc_nuke(dev_priv);

 	DRM_DEBUG_KMS("enabled fbc on plane %c\n", plane_name(intel_crtc->plane));
 }
···
 	struct drm_i915_private *dev_priv = dev->dev_private;

 	return dev_priv->fbc.enabled;
-}
-
-void bdw_fbc_sw_flush(struct drm_device *dev, u32 value)
-{
-	struct drm_i915_private *dev_priv = dev->dev_private;
-
-	if (!IS_GEN8(dev))
-		return;
-
-	if (!intel_fbc_enabled(dev))
-		return;
-
-	I915_WRITE(MSG_FBC_REND_STATE, value);
 }

 static void intel_fbc_work_fn(struct work_struct *__work)
···
 	i915_gem_stolen_cleanup_compression(dev);
 }

+void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
+			  unsigned int frontbuffer_bits,
+			  enum fb_op_origin origin)
+{
+	struct drm_device *dev = dev_priv->dev;
+	unsigned int fbc_bits;
+
+	if (origin == ORIGIN_GTT)
+		return;
+
+	if (dev_priv->fbc.enabled)
+		fbc_bits = INTEL_FRONTBUFFER_PRIMARY(dev_priv->fbc.crtc->pipe);
+	else if (dev_priv->fbc.fbc_work)
+		fbc_bits = INTEL_FRONTBUFFER_PRIMARY(
+			to_intel_crtc(dev_priv->fbc.fbc_work->crtc)->pipe);
+	else
+		fbc_bits = dev_priv->fbc.possible_framebuffer_bits;
+
+	dev_priv->fbc.busy_bits |= (fbc_bits & frontbuffer_bits);
+
+	if (dev_priv->fbc.busy_bits)
+		intel_fbc_disable(dev);
+}
+
+void intel_fbc_flush(struct drm_i915_private *dev_priv,
+		     unsigned int frontbuffer_bits)
+{
+	struct drm_device *dev = dev_priv->dev;
+
+	if (!dev_priv->fbc.busy_bits)
+		return;
+
+	dev_priv->fbc.busy_bits &= ~frontbuffer_bits;
+
+	if (!dev_priv->fbc.busy_bits)
+		intel_fbc_update(dev);
+}
+
 /**
  * intel_fbc_init - Initialize FBC
  * @dev_priv: the i915 device
···
  */
 void intel_fbc_init(struct drm_i915_private *dev_priv)
 {
+	enum pipe pipe;
+
 	if (!HAS_FBC(dev_priv)) {
 		dev_priv->fbc.enabled = false;
 		dev_priv->fbc.no_fbc_reason = FBC_UNSUPPORTED;
 		return;
+	}
+
+	for_each_pipe(dev_priv, pipe) {
+		dev_priv->fbc.possible_framebuffer_bits |=
+				INTEL_FRONTBUFFER_PRIMARY(pipe);
+
+		if (IS_HASWELL(dev_priv) || INTEL_INFO(dev_priv)->gen >= 8)
+			break;
 	}

 	if (INTEL_INFO(dev_priv)->gen >= 7) {
+26 -1
drivers/gpu/drm/i915/intel_fbdev.c
···
 	return ret;
 }

+static int intel_fbdev_blank(int blank, struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct intel_fbdev *ifbdev =
+		container_of(fb_helper, struct intel_fbdev, helper);
+	int ret;
+
+	ret = drm_fb_helper_blank(blank, info);
+
+	if (ret == 0) {
+		/*
+		 * FIXME: fbdev presumes that all callbacks also work from
+		 * atomic contexts and relies on that for emergency oops
+		 * printing. KMS totally doesn't do that and the locking here is
+		 * by far not the only place this goes wrong. Ignore this for
+		 * now until we solve this for real.
+		 */
+		mutex_lock(&fb_helper->dev->struct_mutex);
+		intel_fb_obj_invalidate(ifbdev->fb->obj, NULL, ORIGIN_GTT);
+		mutex_unlock(&fb_helper->dev->struct_mutex);
+	}
+
+	return ret;
+}
+
 static struct fb_ops intelfb_ops = {
 	.owner = THIS_MODULE,
 	.fb_check_var = drm_fb_helper_check_var,
···
 	.fb_copyarea = cfb_copyarea,
 	.fb_imageblit = cfb_imageblit,
 	.fb_pan_display = drm_fb_helper_pan_display,
-	.fb_blank = drm_fb_helper_blank,
+	.fb_blank = intel_fbdev_blank,
 	.fb_setcmap = drm_fb_helper_setcmap,
 	.fb_debug_enter = drm_fb_helper_debug_enter,
 	.fb_debug_leave = drm_fb_helper_debug_leave,
+5 -13
drivers/gpu/drm/i915/intel_frontbuffer.c
···
 			continue;

 		intel_increase_pllclock(dev, pipe);
-		if (ring && intel_fbc_enabled(dev))
-			ring->fbc_dirty = true;
 	}
 }
···
  * intel_fb_obj_invalidate - invalidate frontbuffer object
  * @obj: GEM object to invalidate
  * @ring: set for asynchronous rendering
+ * @origin: which operation caused the invalidation
  *
  * This function gets called every time rendering on the given object starts and
  * frontbuffer caching (fbc, low refresh rate for DRRS, panel self refresh) must
···
  * scheduled.
  */
 void intel_fb_obj_invalidate(struct drm_i915_gem_object *obj,
-			     struct intel_engine_cs *ring)
+			     struct intel_engine_cs *ring,
+			     enum fb_op_origin origin)
 {
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
···
 	intel_psr_invalidate(dev, obj->frontbuffer_bits);
 	intel_edp_drrs_invalidate(dev, obj->frontbuffer_bits);
+	intel_fbc_invalidate(dev_priv, obj->frontbuffer_bits, origin);
 }

 /**
···
 	intel_edp_drrs_flush(dev, frontbuffer_bits);
 	intel_psr_flush(dev, frontbuffer_bits);
-
-	/*
-	 * FIXME: Unconditional fbc flushing here is a rather gross hack and
-	 * needs to be reworked into a proper frontbuffer tracking scheme like
-	 * psr employs.
-	 */
-	if (dev_priv->fbc.need_sw_cache_clean) {
-		dev_priv->fbc.need_sw_cache_clean = false;
-		bdw_fbc_sw_flush(dev, FBC_REND_CACHE_CLEAN);
-	}
+	intel_fbc_flush(dev_priv, frontbuffer_bits);
 }

 /**
+505 -515
drivers/gpu/drm/i915/intel_pm.c
···
 	return NULL;
 }

+static void chv_set_memory_dvfs(struct drm_i915_private *dev_priv, bool enable)
+{
+	u32 val;
+
+	mutex_lock(&dev_priv->rps.hw_lock);
+
+	val = vlv_punit_read(dev_priv, PUNIT_REG_DDR_SETUP2);
+	if (enable)
+		val &= ~FORCE_DDR_HIGH_FREQ;
+	else
+		val |= FORCE_DDR_HIGH_FREQ;
+	val &= ~FORCE_DDR_LOW_FREQ;
+	val |= FORCE_DDR_FREQ_REQ_ACK;
+	vlv_punit_write(dev_priv, PUNIT_REG_DDR_SETUP2, val);
+
+	if (wait_for((vlv_punit_read(dev_priv, PUNIT_REG_DDR_SETUP2) &
+		      FORCE_DDR_FREQ_REQ_ACK) == 0, 3))
+		DRM_ERROR("timed out waiting for Punit DDR DVFS request\n");
+
+	mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
+static void chv_set_memory_pm5(struct drm_i915_private *dev_priv, bool enable)
+{
+	u32 val;
+
+	mutex_lock(&dev_priv->rps.hw_lock);
+
+	val = vlv_punit_read(dev_priv, PUNIT_REG_DSPFREQ);
+	if (enable)
+		val |= DSP_MAXFIFO_PM5_ENABLE;
+	else
+		val &= ~DSP_MAXFIFO_PM5_ENABLE;
+	vlv_punit_write(dev_priv, PUNIT_REG_DSPFREQ, val);
+
+	mutex_unlock(&dev_priv->rps.hw_lock);
+}
+
+#define FW_WM(value, plane) \
+	(((value) << DSPFW_ ## plane ## _SHIFT) & DSPFW_ ## plane ## _MASK)
+
 void intel_set_memory_cxsr(struct drm_i915_private *dev_priv, bool enable)
 {
 	struct drm_device *dev = dev_priv->dev;
···
 	if (IS_VALLEYVIEW(dev)) {
 		I915_WRITE(FW_BLC_SELF_VLV, enable ? FW_CSPWRDWNEN : 0);
+		if (IS_CHERRYVIEW(dev))
+			chv_set_memory_pm5(dev_priv, enable);
 	} else if (IS_G4X(dev) || IS_CRESTLINE(dev)) {
 		I915_WRITE(FW_BLC_SELF, enable ? FW_BLC_SELF_EN : 0);
 	} else if (IS_PINEVIEW(dev)) {
···
 		      enable ? "enabled" : "disabled");
 }

+
 /*
  * Latency for FIFO fetches is dependent on several factors:
  *   - memory configuration (speed, channels)
···
  * platforms but not overly aggressive on lower latency configs.
  */
 static const int pessimal_latency_ns = 5000;
+
+#define VLV_FIFO_START(dsparb, dsparb2, lo_shift, hi_shift) \
+	((((dsparb) >> (lo_shift)) & 0xff) | ((((dsparb2) >> (hi_shift)) & 0x1) << 8))
+
+static int vlv_get_fifo_size(struct drm_device *dev,
+			      enum pipe pipe, int plane)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	int sprite0_start, sprite1_start, size;
+
+	switch (pipe) {
+		uint32_t dsparb, dsparb2, dsparb3;
+	case PIPE_A:
+		dsparb = I915_READ(DSPARB);
+		dsparb2 = I915_READ(DSPARB2);
+		sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 0, 0);
+		sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 8, 4);
+		break;
+	case PIPE_B:
+		dsparb = I915_READ(DSPARB);
+		dsparb2 = I915_READ(DSPARB2);
+		sprite0_start = VLV_FIFO_START(dsparb, dsparb2, 16, 8);
+		sprite1_start = VLV_FIFO_START(dsparb, dsparb2, 24, 12);
+		break;
+	case PIPE_C:
+		dsparb2 = I915_READ(DSPARB2);
+		dsparb3 = I915_READ(DSPARB3);
+		sprite0_start = VLV_FIFO_START(dsparb3, dsparb2, 0, 16);
+		sprite1_start = VLV_FIFO_START(dsparb3, dsparb2, 8, 20);
+		break;
+	default:
+		return 0;
+	}
+
+	switch (plane) {
+	case 0:
+		size = sprite0_start;
+		break;
+	case 1:
+		size = sprite1_start - sprite0_start;
+		break;
+	case 2:
+		size = 512 - 1 - sprite1_start;
+		break;
+	default:
+		return 0;
+	}
+
+	DRM_DEBUG_KMS("Pipe %c %s %c FIFO size: %d\n",
+		      pipe_name(pipe), plane == 0 ? "primary" : "sprite",
+		      plane == 0 ? plane_name(pipe) : sprite_name(pipe, plane - 1),
+		      size);
+
+	return size;
+}

 static int i9xx_get_fifo_size(struct drm_device *dev, int plane)
 {
···
 	crtc = single_enabled_crtc(dev);
 	if (crtc) {
 		const struct drm_display_mode *adjusted_mode;
-		int pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+		int pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;
 		int clock;

 		adjusted_mode = &to_intel_crtc(crtc)->config->base.adjusted_mode;
···
 					pixel_size, latency->display_sr);
 		reg = I915_READ(DSPFW1);
 		reg &= ~DSPFW_SR_MASK;
-		reg |= wm << DSPFW_SR_SHIFT;
+		reg |= FW_WM(wm, SR);
 		I915_WRITE(DSPFW1, reg);
 		DRM_DEBUG_KMS("DSPFW1 register is %x\n", reg);

···
 					pixel_size, latency->cursor_sr);
 		reg = I915_READ(DSPFW3);
 		reg &= ~DSPFW_CURSOR_SR_MASK;
-		reg |= (wm & 0x3f) << DSPFW_CURSOR_SR_SHIFT;
+		reg |= FW_WM(wm, CURSOR_SR);
 		I915_WRITE(DSPFW3, reg);

 		/* Display HPLL off SR */
···
 					pixel_size, latency->display_hpll_disable);
 		reg = I915_READ(DSPFW3);
 		reg &= ~DSPFW_HPLL_SR_MASK;
-		reg |= wm & DSPFW_HPLL_SR_MASK;
+		reg |= FW_WM(wm, HPLL_SR);
 		I915_WRITE(DSPFW3, reg);

 		/* cursor HPLL off SR */
···
 					pixel_size, latency->cursor_hpll_disable);
 		reg = I915_READ(DSPFW3);
 		reg &= ~DSPFW_HPLL_CURSOR_MASK;
-		reg |= (wm & 0x3f) << DSPFW_HPLL_CURSOR_SHIFT;
+		reg |= FW_WM(wm, HPLL_CURSOR);
 		I915_WRITE(DSPFW3, reg);
 		DRM_DEBUG_KMS("DSPFW3 register is %x\n", reg);

···
 	clock = adjusted_mode->crtc_clock;
 	htotal = adjusted_mode->crtc_htotal;
 	hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
-	pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+	pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;

 	/* Use the small buffer method to calculate plane watermark */
 	entries = ((clock * pixel_size / 1000) * display_latency_ns) / 1000;
···
 	/* Use the large buffer method to calculate cursor watermark */
 	line_time_us = max(htotal * 1000 / clock, 1);
 	line_count = (cursor_latency_ns / line_time_us + 1000) / 1000;
-	entries = line_count * to_intel_crtc(crtc)->cursor_width * pixel_size;
+	entries = line_count * crtc->cursor->state->crtc_w * pixel_size;
 	tlb_miss = cursor->fifo_size*cursor->cacheline_size - hdisplay * 8;
 	if (tlb_miss > 0)
 		entries += tlb_miss;
···
 	clock = adjusted_mode->crtc_clock;
 	htotal = adjusted_mode->crtc_htotal;
 	hdisplay = to_intel_crtc(crtc)->config->pipe_src_w;
-	pixel_size = crtc->primary->fb->bits_per_pixel / 8;
+	pixel_size = crtc->primary->state->fb->bits_per_pixel / 8;

 	line_time_us = max(htotal * 1000 / clock, 1);
 	line_count = (latency_ns / line_time_us + 1000) / 1000;
···
 	*display_wm = entries + display->guard_size;

 	/* calculate the self-refresh watermark for display cursor */
-	entries = line_count * pixel_size * to_intel_crtc(crtc)->cursor_width;
+	entries = line_count * pixel_size * crtc->cursor->state->crtc_w;
 	entries = DIV_ROUND_UP(entries, cursor->cacheline_size);
 	*cursor_wm = entries + cursor->guard_size;

···
 			      display, cursor);
 }

-static bool vlv_compute_drain_latency(struct drm_crtc *crtc,
-				      int pixel_size,
-				      int *prec_mult,
-				      int *drain_latency)
+#define FW_WM_VLV(value, plane) \
+	(((value) << DSPFW_ ## plane ## _SHIFT) & DSPFW_ ## plane ## _MASK_VLV)
+
+static void vlv_write_wm_values(struct intel_crtc *crtc,
+				const struct vlv_wm_values *wm)
+{
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	enum pipe pipe = crtc->pipe;
+
+	I915_WRITE(VLV_DDL(pipe),
+		   (wm->ddl[pipe].cursor << DDL_CURSOR_SHIFT) |
+		   (wm->ddl[pipe].sprite[1] << DDL_SPRITE_SHIFT(1)) |
+		   (wm->ddl[pipe].sprite[0] << DDL_SPRITE_SHIFT(0)) |
+		   (wm->ddl[pipe].primary << DDL_PLANE_SHIFT));
+
+	I915_WRITE(DSPFW1,
+		   FW_WM(wm->sr.plane, SR) |
+		   FW_WM(wm->pipe[PIPE_B].cursor, CURSORB) |
+		   FW_WM_VLV(wm->pipe[PIPE_B].primary, PLANEB) |
+		   FW_WM_VLV(wm->pipe[PIPE_A].primary, PLANEA));
+	I915_WRITE(DSPFW2,
+		   FW_WM_VLV(wm->pipe[PIPE_A].sprite[1], SPRITEB) |
+		   FW_WM(wm->pipe[PIPE_A].cursor, CURSORA) |
+		   FW_WM_VLV(wm->pipe[PIPE_A].sprite[0], SPRITEA));
+	I915_WRITE(DSPFW3,
+		   FW_WM(wm->sr.cursor, CURSOR_SR));
+
+	if (IS_CHERRYVIEW(dev_priv)) {
+		I915_WRITE(DSPFW7_CHV,
+			   FW_WM_VLV(wm->pipe[PIPE_B].sprite[1], SPRITED) |
+			   FW_WM_VLV(wm->pipe[PIPE_B].sprite[0], SPRITEC));
+		I915_WRITE(DSPFW8_CHV,
+			   FW_WM_VLV(wm->pipe[PIPE_C].sprite[1], SPRITEF) |
+			   FW_WM_VLV(wm->pipe[PIPE_C].sprite[0], SPRITEE));
+		I915_WRITE(DSPFW9_CHV,
+			   FW_WM_VLV(wm->pipe[PIPE_C].primary, PLANEC) |
+			   FW_WM(wm->pipe[PIPE_C].cursor, CURSORC));
+		I915_WRITE(DSPHOWM,
+			   FW_WM(wm->sr.plane >> 9, SR_HI) |
+			   FW_WM(wm->pipe[PIPE_C].sprite[1] >> 8, SPRITEF_HI) |
+			   FW_WM(wm->pipe[PIPE_C].sprite[0] >> 8, SPRITEE_HI) |
+			   FW_WM(wm->pipe[PIPE_C].primary >> 8, PLANEC_HI) |
+			   FW_WM(wm->pipe[PIPE_B].sprite[1] >> 8, SPRITED_HI) |
+			   FW_WM(wm->pipe[PIPE_B].sprite[0] >> 8, SPRITEC_HI) |
+			   FW_WM(wm->pipe[PIPE_B].primary >> 8, PLANEB_HI) |
+			   FW_WM(wm->pipe[PIPE_A].sprite[1] >> 8, SPRITEB_HI) |
+			   FW_WM(wm->pipe[PIPE_A].sprite[0] >> 8, SPRITEA_HI) |
+			   FW_WM(wm->pipe[PIPE_A].primary >> 8, PLANEA_HI));
+	} else {
+		I915_WRITE(DSPFW7,
+			   FW_WM_VLV(wm->pipe[PIPE_B].sprite[1], SPRITED) |
+			   FW_WM_VLV(wm->pipe[PIPE_B].sprite[0], SPRITEC));
+		I915_WRITE(DSPHOWM,
+			   FW_WM(wm->sr.plane >> 9, SR_HI) |
+			   FW_WM(wm->pipe[PIPE_B].sprite[1] >> 8, SPRITED_HI) |
+			   FW_WM(wm->pipe[PIPE_B].sprite[0] >> 8, SPRITEC_HI) |
+			   FW_WM(wm->pipe[PIPE_B].primary >> 8, PLANEB_HI) |
+			   FW_WM(wm->pipe[PIPE_A].sprite[1] >> 8, SPRITEB_HI) |
+			   FW_WM(wm->pipe[PIPE_A].sprite[0] >> 8, SPRITEA_HI) |
+			   FW_WM(wm->pipe[PIPE_A].primary >> 8, PLANEA_HI));
+	}
+
+	POSTING_READ(DSPFW1);
+
+	dev_priv->wm.vlv = *wm;
+}
+
+#undef FW_WM_VLV
+
+static uint8_t vlv_compute_drain_latency(struct drm_crtc *crtc,
+					 struct drm_plane *plane)
 {
 	struct drm_device *dev = crtc->dev;
-	int entries;
-	int clock = to_intel_crtc(crtc)->config->base.adjusted_mode.crtc_clock;
+	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	int entries, prec_mult, drain_latency, pixel_size;
+	int clock = intel_crtc->config->base.adjusted_mode.crtc_clock;
+	const int high_precision = IS_CHERRYVIEW(dev) ? 16 : 64;
+
+	/*
+	 * FIXME the plane might have an fb
+	 * but be invisible (eg. due to clipping)
+	 */
+	if (!intel_crtc->active || !plane->state->fb)
+		return 0;

 	if (WARN(clock == 0, "Pixel clock is zero!\n"))
-		return false;
+		return 0;
+
+	pixel_size = drm_format_plane_cpp(plane->state->fb->pixel_format, 0);

 	if (WARN(pixel_size == 0, "Pixel size is zero!\n"))
-		return false;
+		return 0;

 	entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
-	if (IS_CHERRYVIEW(dev))
-		*prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_32 :
-					       DRAIN_LATENCY_PRECISION_16;
-	else
-		*prec_mult = (entries > 128) ? DRAIN_LATENCY_PRECISION_64 :
-					       DRAIN_LATENCY_PRECISION_32;
-	*drain_latency = (64 * (*prec_mult) * 4) / entries;

-	if (*drain_latency > DRAIN_LATENCY_MASK)
-		*drain_latency = DRAIN_LATENCY_MASK;
+	prec_mult = high_precision;
+	drain_latency = 64 * prec_mult * 4 / entries;
+
+	if (drain_latency > DRAIN_LATENCY_MASK) {
+		prec_mult /= 2;
+		drain_latency = 64 * prec_mult * 4 / entries;
+	}
+
+	if (drain_latency > DRAIN_LATENCY_MASK)
+		drain_latency = DRAIN_LATENCY_MASK;
+
+	return drain_latency | (prec_mult == high_precision ?
+				DDL_PRECISION_HIGH : DDL_PRECISION_LOW);
+}
+
+static int vlv_compute_wm(struct intel_crtc *crtc,
+			  struct intel_plane *plane,
+			  int fifo_size)
+{
+	int clock, entries, pixel_size;
+
+	/*
+	 * FIXME the plane might have an fb
+	 * but be invisible (eg. due to clipping)
+	 */
+	if (!crtc->active || !plane->base.state->fb)
+		return 0;
+
+	pixel_size = drm_format_plane_cpp(plane->base.state->fb->pixel_format, 0);
+	clock = crtc->config->base.adjusted_mode.crtc_clock;
+
+	entries = DIV_ROUND_UP(clock, 1000) * pixel_size;
+
+	/*
+	 * Set up the watermark such that we don't start issuing memory
+	 * requests until we are within PND's max deadline value (256us).
+	 * Idea being to be idle as long as possible while still taking
+	 * advatange of PND's deadline scheduling. The limit of 8
+	 * cachelines (used when the FIFO will anyway drain in less time
+	 * than 256us) should match what we would be done if trickle
+	 * feed were enabled.
+	 */
+	return fifo_size - clamp(DIV_ROUND_UP(256 * entries, 64), 0, fifo_size - 8);
+}
+
+static bool vlv_compute_sr_wm(struct drm_device *dev,
+			      struct vlv_wm_values *wm)
+{
+	struct drm_i915_private *dev_priv = to_i915(dev);
+	struct drm_crtc *crtc;
+	enum pipe pipe = INVALID_PIPE;
+	int num_planes = 0;
+	int fifo_size = 0;
+	struct intel_plane *plane;
+
+	wm->sr.cursor = wm->sr.plane = 0;
+
+	crtc = single_enabled_crtc(dev);
+	/* maxfifo not supported on pipe C */
+	if (crtc && to_intel_crtc(crtc)->pipe != PIPE_C) {
+		pipe = to_intel_crtc(crtc)->pipe;
+		num_planes = !!wm->pipe[pipe].primary +
+			!!wm->pipe[pipe].sprite[0] +
+			!!wm->pipe[pipe].sprite[1];
+		fifo_size = INTEL_INFO(dev_priv)->num_pipes * 512 - 1;
+	}
+
+	if (fifo_size == 0 || num_planes > 1)
+		return false;
+
+	wm->sr.cursor = vlv_compute_wm(to_intel_crtc(crtc),
+				       to_intel_plane(crtc->cursor), 0x3f);
+
+	list_for_each_entry(plane, &dev->mode_config.plane_list, base.head) {
+		if (plane->base.type == DRM_PLANE_TYPE_CURSOR)
+			continue;
+
+		if (plane->pipe != pipe)
+			continue;
+
+		wm->sr.plane = vlv_compute_wm(to_intel_crtc(crtc),
+					      plane, fifo_size);
+		if (wm->sr.plane != 0)
+			break;
+	}

 	return true;
 }

-/*
- * Update drain latency registers of memory arbiter
- *
- * Valleyview SoC has a new memory arbiter and needs drain latency registers
- * to be programmed. Each plane has a drain latency multiplier and a drain
- * latency value.
- */
-
-static void vlv_update_drain_latency(struct drm_crtc *crtc)
+static void valleyview_update_wm(struct drm_crtc *crtc)
 {
 	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
 	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-	int pixel_size;
-	int drain_latency;
 	enum pipe pipe = intel_crtc->pipe;
-	int plane_prec, prec_mult, plane_dl;
-	const int high_precision = IS_CHERRYVIEW(dev) ?
-		DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_64;
+	bool cxsr_enabled;
+	struct vlv_wm_values wm = dev_priv->wm.vlv;

-	plane_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_PLANE_PRECISION_HIGH |
-		   DRAIN_LATENCY_MASK | DDL_CURSOR_PRECISION_HIGH |
-		   (DRAIN_LATENCY_MASK << DDL_CURSOR_SHIFT));
+	wm.ddl[pipe].primary = vlv_compute_drain_latency(crtc, crtc->primary);
+	wm.pipe[pipe].primary = vlv_compute_wm(intel_crtc,
+					       to_intel_plane(crtc->primary),
+					       vlv_get_fifo_size(dev, pipe, 0));

-	if (!intel_crtc_active(crtc)) {
-		I915_WRITE(VLV_DDL(pipe), plane_dl);
+	wm.ddl[pipe].cursor = vlv_compute_drain_latency(crtc, crtc->cursor);
+	wm.pipe[pipe].cursor = vlv_compute_wm(intel_crtc,
+					      to_intel_plane(crtc->cursor),
+					      0x3f);
+
+	cxsr_enabled = vlv_compute_sr_wm(dev, &wm);
+
+	if (memcmp(&wm, &dev_priv->wm.vlv, sizeof(wm)) == 0)
 		return;
-	}

-	/* Primary plane Drain Latency */
-	pixel_size = crtc->primary->fb->bits_per_pixel / 8;	/* BPP */
-	if (vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
-		plane_prec = (prec_mult == high_precision) ?
-					   DDL_PLANE_PRECISION_HIGH :
-					   DDL_PLANE_PRECISION_LOW;
-		plane_dl |= plane_prec | drain_latency;
-	}
+	DRM_DEBUG_KMS("Setting FIFO watermarks - %c: plane=%d, cursor=%d, "
+		      "SR: plane=%d, cursor=%d\n", pipe_name(pipe),
+		      wm.pipe[pipe].primary, wm.pipe[pipe].cursor,
+		      wm.sr.plane, wm.sr.cursor);

-	/* Cursor Drain Latency
-	 * BPP is always 4 for cursor
+	/*
+	 * FIXME DDR DVFS introduces massive memory latencies which
+	 * are not known to system agent so any deadline specified
+	 * by the display may not be respected. To support DDR DVFS
+	 * the watermark code needs to be rewritten to essentially
+	 * bypass deadline mechanism and rely solely on the
+	 * watermarks. For now disable DDR DVFS.
 	 */
-	pixel_size = 4;
+	if (IS_CHERRYVIEW(dev_priv))
+		chv_set_memory_dvfs(dev_priv, false);

-	/* Program cursor DL only if it is enabled */
-	if (intel_crtc->cursor_base &&
-	    vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, &drain_latency)) {
-		plane_prec = (prec_mult == high_precision) ?
-					   DDL_CURSOR_PRECISION_HIGH :
-					   DDL_CURSOR_PRECISION_LOW;
-		plane_dl |= plane_prec | (drain_latency << DDL_CURSOR_SHIFT);
-	}
-
-	I915_WRITE(VLV_DDL(pipe), plane_dl);
-}
-
-#define single_plane_enabled(mask) is_power_of_2(mask)
-
-static void valleyview_update_wm(struct drm_crtc *crtc)
-{
-	struct drm_device *dev = crtc->dev;
-	static const int sr_latency_ns = 12000;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int planea_wm, planeb_wm, cursora_wm, cursorb_wm;
-	int plane_sr, cursor_sr;
-	int ignore_plane_sr, ignore_cursor_sr;
-	unsigned int enabled = 0;
-	bool cxsr_enabled;
-
-	vlv_update_drain_latency(crtc);
-
-	if (g4x_compute_wm0(dev, PIPE_A,
-			    &valleyview_wm_info, pessimal_latency_ns,
-			    &valleyview_cursor_wm_info, pessimal_latency_ns,
-			    &planea_wm, &cursora_wm))
-		enabled |= 1 << PIPE_A;
-
-	if (g4x_compute_wm0(dev, PIPE_B,
-			    &valleyview_wm_info, pessimal_latency_ns,
-			    &valleyview_cursor_wm_info, pessimal_latency_ns,
-			    &planeb_wm, &cursorb_wm))
-		enabled |= 1 << PIPE_B;
-
-	if (single_plane_enabled(enabled) &&
-	    g4x_compute_srwm(dev, ffs(enabled) - 1,
-			     sr_latency_ns,
-			     &valleyview_wm_info,
-			     &valleyview_cursor_wm_info,
-			     &plane_sr, &ignore_cursor_sr) &&
-	    g4x_compute_srwm(dev, ffs(enabled) - 1,
-			     2*sr_latency_ns,
-			     &valleyview_wm_info,
-			     &valleyview_cursor_wm_info,
-			     &ignore_plane_sr, &cursor_sr)) {
-		cxsr_enabled = true;
-	} else {
-		cxsr_enabled = false;
+	if (!cxsr_enabled)
 		intel_set_memory_cxsr(dev_priv, false);
-		plane_sr = cursor_sr = 0;
-	}

-	DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
-		      "B: plane=%d, cursor=%d, SR: plane=%d, cursor=%d\n",
-		      planea_wm, cursora_wm,
-		      planeb_wm, cursorb_wm,
-		      plane_sr, cursor_sr);
-
-	I915_WRITE(DSPFW1,
-		   (plane_sr << DSPFW_SR_SHIFT) |
-		   (cursorb_wm << DSPFW_CURSORB_SHIFT) |
-		   (planeb_wm << DSPFW_PLANEB_SHIFT) |
-		   (planea_wm << DSPFW_PLANEA_SHIFT));
-	I915_WRITE(DSPFW2,
-		   (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
-		   (cursora_wm << DSPFW_CURSORA_SHIFT));
-	I915_WRITE(DSPFW3,
-		   (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
-		   (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
-
-	if (cxsr_enabled)
-		intel_set_memory_cxsr(dev_priv, true);
-}
-
-static void cherryview_update_wm(struct drm_crtc *crtc)
-{
-	struct drm_device *dev = crtc->dev;
-	static const int sr_latency_ns = 12000;
-	struct drm_i915_private *dev_priv = dev->dev_private;
-	int planea_wm, planeb_wm, planec_wm;
-	int cursora_wm, cursorb_wm, cursorc_wm;
-	int plane_sr, cursor_sr;
-	int ignore_plane_sr, ignore_cursor_sr;
-	unsigned int enabled = 0;
-	bool cxsr_enabled;
-
-	vlv_update_drain_latency(crtc);
-
-	if (g4x_compute_wm0(dev, PIPE_A,
-			    &valleyview_wm_info, pessimal_latency_ns,
-			    &valleyview_cursor_wm_info, pessimal_latency_ns,
-			    &planea_wm, &cursora_wm))
-		enabled |= 1 << PIPE_A;
-
-	if (g4x_compute_wm0(dev, PIPE_B,
-			    &valleyview_wm_info, pessimal_latency_ns,
-			    &valleyview_cursor_wm_info, pessimal_latency_ns,
-			    &planeb_wm, &cursorb_wm))
-		enabled |= 1 << PIPE_B;
-
-	if (g4x_compute_wm0(dev, PIPE_C,
-			    &valleyview_wm_info, pessimal_latency_ns,
-			    &valleyview_cursor_wm_info, pessimal_latency_ns,
-			    &planec_wm, &cursorc_wm))
-		enabled |= 1 << PIPE_C;
-
-	if (single_plane_enabled(enabled) &&
-	    g4x_compute_srwm(dev, ffs(enabled) - 1,
-			     sr_latency_ns,
-			     &valleyview_wm_info,
-			     &valleyview_cursor_wm_info,
-			     &plane_sr, &ignore_cursor_sr) &&
-	    g4x_compute_srwm(dev, ffs(enabled) - 1,
-			     2*sr_latency_ns,
-			     &valleyview_wm_info,
-			     &valleyview_cursor_wm_info,
-			     &ignore_plane_sr, &cursor_sr)) {
-		cxsr_enabled = true;
-	} else {
-		cxsr_enabled = false;
-		intel_set_memory_cxsr(dev_priv, false);
-		plane_sr = cursor_sr = 0;
-	}
-
-	DRM_DEBUG_KMS("Setting FIFO watermarks - A: plane=%d, cursor=%d, "
-		      "B: plane=%d, cursor=%d, C: plane=%d, cursor=%d, "
-		      "SR: plane=%d, cursor=%d\n",
-		      planea_wm, cursora_wm,
-		      planeb_wm, cursorb_wm,
-		      planec_wm, cursorc_wm,
-		      plane_sr, cursor_sr);
-
-	I915_WRITE(DSPFW1,
-		   (plane_sr << DSPFW_SR_SHIFT) |
-		   (cursorb_wm << DSPFW_CURSORB_SHIFT) |
-		   (planeb_wm << DSPFW_PLANEB_SHIFT) |
-		   (planea_wm << DSPFW_PLANEA_SHIFT));
-	I915_WRITE(DSPFW2,
-		   (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) |
-		   (cursora_wm << DSPFW_CURSORA_SHIFT));
-	I915_WRITE(DSPFW3,
-		   (I915_READ(DSPFW3) & ~DSPFW_CURSOR_SR_MASK) |
-		   (cursor_sr << DSPFW_CURSOR_SR_SHIFT));
-	I915_WRITE(DSPFW9_CHV,
-		   (I915_READ(DSPFW9_CHV) & ~(DSPFW_PLANEC_MASK |
-					      DSPFW_CURSORC_MASK)) |
-		   (planec_wm << DSPFW_PLANEC_SHIFT) |
-		   (cursorc_wm << DSPFW_CURSORC_SHIFT));
+	vlv_write_wm_values(intel_crtc, &wm);

 	if (cxsr_enabled)
 		intel_set_memory_cxsr(dev_priv, true);
···
 {
 	struct drm_device *dev = crtc->dev;
 	struct drm_i915_private *dev_priv = dev->dev_private;
-	int pipe = to_intel_plane(plane)->pipe;
+	struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+	enum pipe pipe = intel_crtc->pipe;
 	int sprite = to_intel_plane(plane)->plane;
-	int drain_latency;
-	int plane_prec;
-	int sprite_dl;
-	int prec_mult;
-	const int high_precision = IS_CHERRYVIEW(dev) ?
1090 - DRAIN_LATENCY_PRECISION_32 : DRAIN_LATENCY_PRECISION_64; 985 + bool cxsr_enabled; 986 + struct vlv_wm_values wm = dev_priv->wm.vlv; 1091 987 1092 - sprite_dl = I915_READ(VLV_DDL(pipe)) & ~(DDL_SPRITE_PRECISION_HIGH(sprite) | 1093 - (DRAIN_LATENCY_MASK << DDL_SPRITE_SHIFT(sprite))); 988 + if (enabled) { 989 + wm.ddl[pipe].sprite[sprite] = 990 + vlv_compute_drain_latency(crtc, plane); 1094 991 1095 - if (enabled && vlv_compute_drain_latency(crtc, pixel_size, &prec_mult, 1096 - &drain_latency)) { 1097 - plane_prec = (prec_mult == high_precision) ? 1098 - DDL_SPRITE_PRECISION_HIGH(sprite) : 1099 - DDL_SPRITE_PRECISION_LOW(sprite); 1100 - sprite_dl |= plane_prec | 1101 - (drain_latency << DDL_SPRITE_SHIFT(sprite)); 992 + wm.pipe[pipe].sprite[sprite] = 993 + vlv_compute_wm(intel_crtc, 994 + to_intel_plane(plane), 995 + vlv_get_fifo_size(dev, pipe, sprite+1)); 996 + } else { 997 + wm.ddl[pipe].sprite[sprite] = 0; 998 + wm.pipe[pipe].sprite[sprite] = 0; 1102 999 } 1103 1000 1104 - I915_WRITE(VLV_DDL(pipe), sprite_dl); 1001 + cxsr_enabled = vlv_compute_sr_wm(dev, &wm); 1002 + 1003 + if (memcmp(&wm, &dev_priv->wm.vlv, sizeof(wm)) == 0) 1004 + return; 1005 + 1006 + DRM_DEBUG_KMS("Setting FIFO watermarks - %c: sprite %c=%d, " 1007 + "SR: plane=%d, cursor=%d\n", pipe_name(pipe), 1008 + sprite_name(pipe, sprite), 1009 + wm.pipe[pipe].sprite[sprite], 1010 + wm.sr.plane, wm.sr.cursor); 1011 + 1012 + if (!cxsr_enabled) 1013 + intel_set_memory_cxsr(dev_priv, false); 1014 + 1015 + vlv_write_wm_values(intel_crtc, &wm); 1016 + 1017 + if (cxsr_enabled) 1018 + intel_set_memory_cxsr(dev_priv, true); 1105 1019 } 1020 + 1021 + #define single_plane_enabled(mask) is_power_of_2(mask) 1106 1022 1107 1023 static void g4x_update_wm(struct drm_crtc *crtc) 1108 1024 { ··· 1163 1045 plane_sr, cursor_sr); 1164 1046 1165 1047 I915_WRITE(DSPFW1, 1166 - (plane_sr << DSPFW_SR_SHIFT) | 1167 - (cursorb_wm << DSPFW_CURSORB_SHIFT) | 1168 - (planeb_wm << DSPFW_PLANEB_SHIFT) | 1169 - (planea_wm << 
DSPFW_PLANEA_SHIFT)); 1048 + FW_WM(plane_sr, SR) | 1049 + FW_WM(cursorb_wm, CURSORB) | 1050 + FW_WM(planeb_wm, PLANEB) | 1051 + FW_WM(planea_wm, PLANEA)); 1170 1052 I915_WRITE(DSPFW2, 1171 1053 (I915_READ(DSPFW2) & ~DSPFW_CURSORA_MASK) | 1172 - (cursora_wm << DSPFW_CURSORA_SHIFT)); 1054 + FW_WM(cursora_wm, CURSORA)); 1173 1055 /* HPLL off in SR has some issues on G4x... disable it */ 1174 1056 I915_WRITE(DSPFW3, 1175 1057 (I915_READ(DSPFW3) & ~(DSPFW_HPLL_SR_EN | DSPFW_CURSOR_SR_MASK)) | 1176 - (cursor_sr << DSPFW_CURSOR_SR_SHIFT)); 1058 + FW_WM(cursor_sr, CURSOR_SR)); 1177 1059 1178 1060 if (cxsr_enabled) 1179 1061 intel_set_memory_cxsr(dev_priv, true); ··· 1198 1080 int clock = adjusted_mode->crtc_clock; 1199 1081 int htotal = adjusted_mode->crtc_htotal; 1200 1082 int hdisplay = to_intel_crtc(crtc)->config->pipe_src_w; 1201 - int pixel_size = crtc->primary->fb->bits_per_pixel / 8; 1083 + int pixel_size = crtc->primary->state->fb->bits_per_pixel / 8; 1202 1084 unsigned long line_time_us; 1203 1085 int entries; 1204 1086 ··· 1216 1098 entries, srwm); 1217 1099 1218 1100 entries = (((sr_latency_ns / line_time_us) + 1000) / 1000) * 1219 - pixel_size * to_intel_crtc(crtc)->cursor_width; 1101 + pixel_size * crtc->cursor->state->crtc_w; 1220 1102 entries = DIV_ROUND_UP(entries, 1221 1103 i965_cursor_wm_info.cacheline_size); 1222 1104 cursor_sr = i965_cursor_wm_info.fifo_size - ··· 1239 1121 srwm); 1240 1122 1241 1123 /* 965 has limitations... 
*/ 1242 - I915_WRITE(DSPFW1, (srwm << DSPFW_SR_SHIFT) | 1243 - (8 << DSPFW_CURSORB_SHIFT) | 1244 - (8 << DSPFW_PLANEB_SHIFT) | 1245 - (8 << DSPFW_PLANEA_SHIFT)); 1246 - I915_WRITE(DSPFW2, (8 << DSPFW_CURSORA_SHIFT) | 1247 - (8 << DSPFW_PLANEC_SHIFT_OLD)); 1124 + I915_WRITE(DSPFW1, FW_WM(srwm, SR) | 1125 + FW_WM(8, CURSORB) | 1126 + FW_WM(8, PLANEB) | 1127 + FW_WM(8, PLANEA)); 1128 + I915_WRITE(DSPFW2, FW_WM(8, CURSORA) | 1129 + FW_WM(8, PLANEC_OLD)); 1248 1130 /* update cursor SR watermark */ 1249 - I915_WRITE(DSPFW3, (cursor_sr << DSPFW_CURSOR_SR_SHIFT)); 1131 + I915_WRITE(DSPFW3, FW_WM(cursor_sr, CURSOR_SR)); 1250 1132 1251 1133 if (cxsr_enabled) 1252 1134 intel_set_memory_cxsr(dev_priv, true); 1253 1135 } 1136 + 1137 + #undef FW_WM 1254 1138 1255 1139 static void i9xx_update_wm(struct drm_crtc *unused_crtc) 1256 1140 { ··· 1277 1157 crtc = intel_get_crtc_for_plane(dev, 0); 1278 1158 if (intel_crtc_active(crtc)) { 1279 1159 const struct drm_display_mode *adjusted_mode; 1280 - int cpp = crtc->primary->fb->bits_per_pixel / 8; 1160 + int cpp = crtc->primary->state->fb->bits_per_pixel / 8; 1281 1161 if (IS_GEN2(dev)) 1282 1162 cpp = 4; 1283 1163 ··· 1299 1179 crtc = intel_get_crtc_for_plane(dev, 1); 1300 1180 if (intel_crtc_active(crtc)) { 1301 1181 const struct drm_display_mode *adjusted_mode; 1302 - int cpp = crtc->primary->fb->bits_per_pixel / 8; 1182 + int cpp = crtc->primary->state->fb->bits_per_pixel / 8; 1303 1183 if (IS_GEN2(dev)) 1304 1184 cpp = 4; 1305 1185 ··· 1322 1202 if (IS_I915GM(dev) && enabled) { 1323 1203 struct drm_i915_gem_object *obj; 1324 1204 1325 - obj = intel_fb_obj(enabled->primary->fb); 1205 + obj = intel_fb_obj(enabled->primary->state->fb); 1326 1206 1327 1207 /* self-refresh seems busted with untiled */ 1328 1208 if (obj->tiling_mode == I915_TILING_NONE) ··· 1346 1226 int clock = adjusted_mode->crtc_clock; 1347 1227 int htotal = adjusted_mode->crtc_htotal; 1348 1228 int hdisplay = to_intel_crtc(enabled)->config->pipe_src_w; 1349 - int 
pixel_size = enabled->primary->fb->bits_per_pixel / 8; 1229 + int pixel_size = enabled->primary->state->fb->bits_per_pixel / 8; 1350 1230 unsigned long line_time_us; 1351 1231 int entries; 1352 1232 ··· 1783 1663 struct drm_display_mode *mode = &intel_crtc->config->base.adjusted_mode; 1784 1664 u32 linetime, ips_linetime; 1785 1665 1786 - if (!intel_crtc_active(crtc)) 1666 + if (!intel_crtc->active) 1787 1667 return 0; 1788 1668 1789 1669 /* The WM are computed with base on how long it takes to fill a single ··· 2038 1918 enum pipe pipe = intel_crtc->pipe; 2039 1919 struct drm_plane *plane; 2040 1920 2041 - if (!intel_crtc_active(crtc)) 1921 + if (!intel_crtc->active) 2042 1922 return; 2043 1923 2044 1924 p->active = true; 2045 1925 p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal; 2046 1926 p->pixel_rate = ilk_pipe_pixel_rate(dev, crtc); 2047 - p->pri.bytes_per_pixel = crtc->primary->fb->bits_per_pixel / 8; 2048 - p->cur.bytes_per_pixel = 4; 1927 + 1928 + if (crtc->primary->state->fb) { 1929 + p->pri.enabled = true; 1930 + p->pri.bytes_per_pixel = 1931 + crtc->primary->state->fb->bits_per_pixel / 8; 1932 + } else { 1933 + p->pri.enabled = false; 1934 + p->pri.bytes_per_pixel = 0; 1935 + } 1936 + 1937 + if (crtc->cursor->state->fb) { 1938 + p->cur.enabled = true; 1939 + p->cur.bytes_per_pixel = 4; 1940 + } else { 1941 + p->cur.enabled = false; 1942 + p->cur.bytes_per_pixel = 0; 1943 + } 2049 1944 p->pri.horiz_pixels = intel_crtc->config->pipe_src_w; 2050 - p->cur.horiz_pixels = intel_crtc->cursor_width; 2051 - /* TODO: for now, assume primary and cursor planes are always enabled. 
*/ 2052 - p->pri.enabled = true; 2053 - p->cur.enabled = true; 1945 + p->cur.horiz_pixels = intel_crtc->base.cursor->state->crtc_w; 2054 1946 2055 1947 drm_for_each_legacy_plane(plane, &dev->mode_config.plane_list) { 2056 1948 struct intel_plane *intel_plane = to_intel_plane(plane); ··· 2562 2430 2563 2431 nth_active_pipe = 0; 2564 2432 for_each_crtc(dev, crtc) { 2565 - if (!intel_crtc_active(crtc)) 2433 + if (!to_intel_crtc(crtc)->active) 2566 2434 continue; 2567 2435 2568 2436 if (crtc == for_crtc) ··· 2595 2463 void skl_ddb_get_hw_state(struct drm_i915_private *dev_priv, 2596 2464 struct skl_ddb_allocation *ddb /* out */) 2597 2465 { 2598 - struct drm_device *dev = dev_priv->dev; 2599 2466 enum pipe pipe; 2600 2467 int plane; 2601 2468 u32 val; 2602 2469 2603 2470 for_each_pipe(dev_priv, pipe) { 2604 - for_each_plane(pipe, plane) { 2471 + for_each_plane(dev_priv, pipe, plane) { 2605 2472 val = I915_READ(PLANE_BUF_CFG(pipe, plane)); 2606 2473 skl_ddb_entry_init_from_hw(&ddb->plane[pipe][plane], 2607 2474 val); ··· 2649 2518 struct skl_ddb_allocation *ddb /* out */) 2650 2519 { 2651 2520 struct drm_device *dev = crtc->dev; 2521 + struct drm_i915_private *dev_priv = dev->dev_private; 2652 2522 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 2653 2523 enum pipe pipe = intel_crtc->pipe; 2654 2524 struct skl_ddb_entry *alloc = &ddb->pipe[pipe]; ··· 2674 2542 alloc->end -= cursor_blocks; 2675 2543 2676 2544 /* 1. 
Allocate the mininum required blocks for each active plane */ 2677 - for_each_plane(pipe, plane) { 2545 + for_each_plane(dev_priv, pipe, plane) { 2678 2546 const struct intel_plane_wm_parameters *p; 2679 2547 2680 2548 p = &params->plane[plane]; ··· 2802 2670 struct drm_plane *plane; 2803 2671 2804 2672 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) 2805 - config->num_pipes_active += intel_crtc_active(crtc); 2673 + config->num_pipes_active += to_intel_crtc(crtc)->active; 2806 2674 2807 2675 /* FIXME: I don't think we need those two global parameters on SKL */ 2808 2676 list_for_each_entry(plane, &dev->mode_config.plane_list, head) { ··· 2823 2691 struct drm_framebuffer *fb; 2824 2692 int i = 1; /* Index for sprite planes start */ 2825 2693 2826 - p->active = intel_crtc_active(crtc); 2694 + p->active = intel_crtc->active; 2827 2695 if (p->active) { 2828 2696 p->pipe_htotal = intel_crtc->config->base.adjusted_mode.crtc_htotal; 2829 2697 p->pixel_rate = skl_pipe_pixel_rate(intel_crtc->config); 2830 2698 2831 - /* 2832 - * For now, assume primary and cursor planes are always enabled. 2833 - */ 2834 - p->plane[0].enabled = true; 2835 - p->plane[0].bytes_per_pixel = 2836 - crtc->primary->fb->bits_per_pixel / 8; 2699 + fb = crtc->primary->state->fb; 2700 + if (fb) { 2701 + p->plane[0].enabled = true; 2702 + p->plane[0].bytes_per_pixel = fb->bits_per_pixel / 8; 2703 + p->plane[0].tiling = fb->modifier[0]; 2704 + } else { 2705 + p->plane[0].enabled = false; 2706 + p->plane[0].bytes_per_pixel = 0; 2707 + p->plane[0].tiling = DRM_FORMAT_MOD_NONE; 2708 + } 2837 2709 p->plane[0].horiz_pixels = intel_crtc->config->pipe_src_w; 2838 2710 p->plane[0].vert_pixels = intel_crtc->config->pipe_src_h; 2839 - p->plane[0].tiling = DRM_FORMAT_MOD_NONE; 2840 - fb = crtc->primary->state->fb; 2841 - /* 2842 - * Framebuffer can be NULL on plane disable, but it does not 2843 - * matter for watermarks if we assume no tiling in that case. 
2844 - */ 2845 - if (fb) 2846 - p->plane[0].tiling = fb->modifier[0]; 2847 2711 2848 - p->cursor.enabled = true; 2849 - p->cursor.bytes_per_pixel = 4; 2850 - p->cursor.horiz_pixels = intel_crtc->cursor_width ? 2851 - intel_crtc->cursor_width : 64; 2712 + fb = crtc->cursor->state->fb; 2713 + if (fb) { 2714 + p->cursor.enabled = true; 2715 + p->cursor.bytes_per_pixel = fb->bits_per_pixel / 8; 2716 + p->cursor.horiz_pixels = crtc->cursor->state->crtc_w; 2717 + p->cursor.vert_pixels = crtc->cursor->state->crtc_h; 2718 + } else { 2719 + p->cursor.enabled = false; 2720 + p->cursor.bytes_per_pixel = 0; 2721 + p->cursor.horiz_pixels = 64; 2722 + p->cursor.vert_pixels = 64; 2723 + } 2852 2724 } 2853 2725 2854 2726 list_for_each_entry(plane, &dev->mode_config.plane_list, head) { ··· 2958 2822 static uint32_t 2959 2823 skl_compute_linetime_wm(struct drm_crtc *crtc, struct skl_pipe_wm_parameters *p) 2960 2824 { 2961 - if (!intel_crtc_active(crtc)) 2825 + if (!to_intel_crtc(crtc)->active) 2962 2826 return 0; 2963 2827 2964 2828 return DIV_ROUND_UP(8 * p->pipe_htotal * 1000, p->pixel_rate); ··· 3132 2996 static void 3133 2997 skl_wm_flush_pipe(struct drm_i915_private *dev_priv, enum pipe pipe, int pass) 3134 2998 { 3135 - struct drm_device *dev = dev_priv->dev; 3136 2999 int plane; 3137 3000 3138 3001 DRM_DEBUG_KMS("flush pipe %c (pass %d)\n", pipe_name(pipe), pass); 3139 3002 3140 - for_each_plane(pipe, plane) { 3003 + for_each_plane(dev_priv, pipe, plane) { 3141 3004 I915_WRITE(PLANE_SURF(pipe, plane), 3142 3005 I915_READ(PLANE_SURF(pipe, plane))); 3143 3006 } ··· 3505 3370 hw->plane_trans[pipe][i] = I915_READ(PLANE_WM_TRANS(pipe, i)); 3506 3371 hw->cursor_trans[pipe] = I915_READ(CUR_WM_TRANS(pipe)); 3507 3372 3508 - if (!intel_crtc_active(crtc)) 3373 + if (!intel_crtc->active) 3509 3374 return; 3510 3375 3511 3376 hw->dirty[pipe] = true; ··· 3560 3425 if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 3561 3426 hw->wm_linetime[pipe] = I915_READ(PIPE_WM_LINETIME(pipe)); 3562 3427 3563 
- active->pipe_enabled = intel_crtc_active(crtc); 3428 + active->pipe_enabled = intel_crtc->active; 3564 3429 3565 3430 if (active->pipe_enabled) { 3566 3431 u32 tmp = hw->wm_pipe[pipe]; ··· 3672 3537 dev_priv->display.update_sprite_wm(plane, crtc, 3673 3538 sprite_width, sprite_height, 3674 3539 pixel_size, enabled, scaled); 3675 - } 3676 - 3677 - static struct drm_i915_gem_object * 3678 - intel_alloc_context_page(struct drm_device *dev) 3679 - { 3680 - struct drm_i915_gem_object *ctx; 3681 - int ret; 3682 - 3683 - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 3684 - 3685 - ctx = i915_gem_alloc_object(dev, 4096); 3686 - if (!ctx) { 3687 - DRM_DEBUG("failed to alloc power context, RC6 disabled\n"); 3688 - return NULL; 3689 - } 3690 - 3691 - ret = i915_gem_obj_ggtt_pin(ctx, 4096, 0); 3692 - if (ret) { 3693 - DRM_ERROR("failed to pin power context: %d\n", ret); 3694 - goto err_unref; 3695 - } 3696 - 3697 - ret = i915_gem_object_set_to_gtt_domain(ctx, 1); 3698 - if (ret) { 3699 - DRM_ERROR("failed to set-domain on power context: %d\n", ret); 3700 - goto err_unpin; 3701 - } 3702 - 3703 - return ctx; 3704 - 3705 - err_unpin: 3706 - i915_gem_object_ggtt_unpin(ctx); 3707 - err_unref: 3708 - drm_gem_object_unreference(&ctx->base); 3709 - return NULL; 3710 3540 } 3711 3541 3712 3542 /** ··· 3806 3706 * ourselves, instead of doing a rmw cycle (which might result in us clearing 3807 3707 * all limits and the gpu stuck at whatever frequency it is at atm). 3808 3708 */ 3809 - static u32 gen6_rps_limits(struct drm_i915_private *dev_priv, u8 val) 3709 + static u32 intel_rps_limits(struct drm_i915_private *dev_priv, u8 val) 3810 3710 { 3811 3711 u32 limits; 3812 3712 ··· 3816 3716 * the hw runs at the minimal clock before selecting the desired 3817 3717 * frequency, if the down threshold expires in that window we will not 3818 3718 * receive a down interrupt. 
*/ 3819 - limits = dev_priv->rps.max_freq_softlimit << 24; 3820 - if (val <= dev_priv->rps.min_freq_softlimit) 3821 - limits |= dev_priv->rps.min_freq_softlimit << 16; 3719 + if (IS_GEN9(dev_priv->dev)) { 3720 + limits = (dev_priv->rps.max_freq_softlimit) << 23; 3721 + if (val <= dev_priv->rps.min_freq_softlimit) 3722 + limits |= (dev_priv->rps.min_freq_softlimit) << 14; 3723 + } else { 3724 + limits = dev_priv->rps.max_freq_softlimit << 24; 3725 + if (val <= dev_priv->rps.min_freq_softlimit) 3726 + limits |= dev_priv->rps.min_freq_softlimit << 16; 3727 + } 3822 3728 3823 3729 return limits; 3824 3730 } ··· 3832 3726 static void gen6_set_rps_thresholds(struct drm_i915_private *dev_priv, u8 val) 3833 3727 { 3834 3728 int new_power; 3729 + u32 threshold_up = 0, threshold_down = 0; /* in % */ 3730 + u32 ei_up = 0, ei_down = 0; 3835 3731 3836 3732 new_power = dev_priv->rps.power; 3837 3733 switch (dev_priv->rps.power) { ··· 3866 3758 switch (new_power) { 3867 3759 case LOW_POWER: 3868 3760 /* Upclock if more than 95% busy over 16ms */ 3869 - I915_WRITE(GEN6_RP_UP_EI, 12500); 3870 - I915_WRITE(GEN6_RP_UP_THRESHOLD, 11800); 3761 + ei_up = 16000; 3762 + threshold_up = 95; 3871 3763 3872 3764 /* Downclock if less than 85% busy over 32ms */ 3873 - I915_WRITE(GEN6_RP_DOWN_EI, 25000); 3874 - I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 21250); 3875 - 3876 - I915_WRITE(GEN6_RP_CONTROL, 3877 - GEN6_RP_MEDIA_TURBO | 3878 - GEN6_RP_MEDIA_HW_NORMAL_MODE | 3879 - GEN6_RP_MEDIA_IS_GFX | 3880 - GEN6_RP_ENABLE | 3881 - GEN6_RP_UP_BUSY_AVG | 3882 - GEN6_RP_DOWN_IDLE_AVG); 3765 + ei_down = 32000; 3766 + threshold_down = 85; 3883 3767 break; 3884 3768 3885 3769 case BETWEEN: 3886 3770 /* Upclock if more than 90% busy over 13ms */ 3887 - I915_WRITE(GEN6_RP_UP_EI, 10250); 3888 - I915_WRITE(GEN6_RP_UP_THRESHOLD, 9225); 3771 + ei_up = 13000; 3772 + threshold_up = 90; 3889 3773 3890 3774 /* Downclock if less than 75% busy over 32ms */ 3891 - I915_WRITE(GEN6_RP_DOWN_EI, 25000); 3892 - 
I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 18750); 3893 - 3894 - I915_WRITE(GEN6_RP_CONTROL, 3895 - GEN6_RP_MEDIA_TURBO | 3896 - GEN6_RP_MEDIA_HW_NORMAL_MODE | 3897 - GEN6_RP_MEDIA_IS_GFX | 3898 - GEN6_RP_ENABLE | 3899 - GEN6_RP_UP_BUSY_AVG | 3900 - GEN6_RP_DOWN_IDLE_AVG); 3775 + ei_down = 32000; 3776 + threshold_down = 75; 3901 3777 break; 3902 3778 3903 3779 case HIGH_POWER: 3904 3780 /* Upclock if more than 85% busy over 10ms */ 3905 - I915_WRITE(GEN6_RP_UP_EI, 8000); 3906 - I915_WRITE(GEN6_RP_UP_THRESHOLD, 6800); 3781 + ei_up = 10000; 3782 + threshold_up = 85; 3907 3783 3908 3784 /* Downclock if less than 60% busy over 32ms */ 3909 - I915_WRITE(GEN6_RP_DOWN_EI, 25000); 3910 - I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 15000); 3911 - 3912 - I915_WRITE(GEN6_RP_CONTROL, 3913 - GEN6_RP_MEDIA_TURBO | 3914 - GEN6_RP_MEDIA_HW_NORMAL_MODE | 3915 - GEN6_RP_MEDIA_IS_GFX | 3916 - GEN6_RP_ENABLE | 3917 - GEN6_RP_UP_BUSY_AVG | 3918 - GEN6_RP_DOWN_IDLE_AVG); 3785 + ei_down = 32000; 3786 + threshold_down = 60; 3919 3787 break; 3920 3788 } 3789 + 3790 + I915_WRITE(GEN6_RP_UP_EI, 3791 + GT_INTERVAL_FROM_US(dev_priv, ei_up)); 3792 + I915_WRITE(GEN6_RP_UP_THRESHOLD, 3793 + GT_INTERVAL_FROM_US(dev_priv, (ei_up * threshold_up / 100))); 3794 + 3795 + I915_WRITE(GEN6_RP_DOWN_EI, 3796 + GT_INTERVAL_FROM_US(dev_priv, ei_down)); 3797 + I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 3798 + GT_INTERVAL_FROM_US(dev_priv, (ei_down * threshold_down / 100))); 3799 + 3800 + I915_WRITE(GEN6_RP_CONTROL, 3801 + GEN6_RP_MEDIA_TURBO | 3802 + GEN6_RP_MEDIA_HW_NORMAL_MODE | 3803 + GEN6_RP_MEDIA_IS_GFX | 3804 + GEN6_RP_ENABLE | 3805 + GEN6_RP_UP_BUSY_AVG | 3806 + GEN6_RP_DOWN_IDLE_AVG); 3921 3807 3922 3808 dev_priv->rps.power = new_power; 3923 3809 dev_priv->rps.last_adj = 0; ··· 3949 3847 if (val != dev_priv->rps.cur_freq) { 3950 3848 gen6_set_rps_thresholds(dev_priv, val); 3951 3849 3952 - if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 3850 + if (IS_GEN9(dev)) 3851 + I915_WRITE(GEN6_RPNSWREQ, 3852 + GEN9_FREQUENCY(val)); 3853 + 
else if (IS_HASWELL(dev) || IS_BROADWELL(dev)) 3953 3854 I915_WRITE(GEN6_RPNSWREQ, 3954 3855 HSW_FREQUENCY(val)); 3955 3856 else ··· 3965 3860 /* Make sure we continue to get interrupts 3966 3861 * until we hit the minimum or maximum frequencies. 3967 3862 */ 3968 - I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, gen6_rps_limits(dev_priv, val)); 3863 + I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, intel_rps_limits(dev_priv, val)); 3969 3864 I915_WRITE(GEN6_PMINTRMSK, gen6_rps_pm_mask(dev_priv, val)); 3970 3865 3971 3866 POSTING_READ(GEN6_RPNSWREQ); ··· 4186 4081 dev_priv->rps.rp0_freq = (rp_state_cap >> 0) & 0xff; 4187 4082 dev_priv->rps.rp1_freq = (rp_state_cap >> 8) & 0xff; 4188 4083 dev_priv->rps.min_freq = (rp_state_cap >> 16) & 0xff; 4084 + if (IS_SKYLAKE(dev)) { 4085 + /* Store the frequency values in 16.66 MHZ units, which is 4086 + the natural hardware unit for SKL */ 4087 + dev_priv->rps.rp0_freq *= GEN9_FREQ_SCALER; 4088 + dev_priv->rps.rp1_freq *= GEN9_FREQ_SCALER; 4089 + dev_priv->rps.min_freq *= GEN9_FREQ_SCALER; 4090 + } 4189 4091 /* hw_max = RP0 until we check for overclocking */ 4190 4092 dev_priv->rps.max_freq = dev_priv->rps.rp0_freq; 4191 4093 ··· 4233 4121 4234 4122 gen6_init_rps_frequencies(dev); 4235 4123 4236 - I915_WRITE(GEN6_RPNSWREQ, 0xc800000); 4237 - I915_WRITE(GEN6_RC_VIDEO_FREQ, 0xc800000); 4124 + /* Program defaults and thresholds for RPS*/ 4125 + I915_WRITE(GEN6_RC_VIDEO_FREQ, 4126 + GEN9_FREQUENCY(dev_priv->rps.rp1_freq)); 4238 4127 4239 - I915_WRITE(GEN6_RP_DOWN_TIMEOUT, 0xf4240); 4240 - I915_WRITE(GEN6_RP_INTERRUPT_LIMITS, 0x12060000); 4241 - I915_WRITE(GEN6_RP_UP_THRESHOLD, 0xe808); 4242 - I915_WRITE(GEN6_RP_DOWN_THRESHOLD, 0x3bd08); 4243 - I915_WRITE(GEN6_RP_UP_EI, 0x101d0); 4244 - I915_WRITE(GEN6_RP_DOWN_EI, 0x55730); 4128 + /* 1 second timeout*/ 4129 + I915_WRITE(GEN6_RP_DOWN_TIMEOUT, 4130 + GT_INTERVAL_FROM_US(dev_priv, 1000000)); 4131 + 4245 4132 I915_WRITE(GEN6_RP_IDLE_HYSTERSIS, 0xa); 4246 - I915_WRITE(GEN6_PMINTRMSK, 0x6); 4247 - 
I915_WRITE(GEN6_RP_CONTROL, GEN6_RP_MEDIA_TURBO | 4248 - GEN6_RP_MEDIA_HW_MODE | GEN6_RP_MEDIA_IS_GFX | 4249 - GEN6_RP_ENABLE | GEN6_RP_UP_BUSY_AVG | 4250 - GEN6_RP_DOWN_IDLE_AVG); 4251 4133 4252 - gen6_enable_rps_interrupts(dev); 4134 + /* Leaning on the below call to gen6_set_rps to program/setup the 4135 + * Up/Down EI & threshold registers, as well as the RP_CONTROL, 4136 + * RP_INTERRUPT_LIMITS & RPNSWREQ registers */ 4137 + dev_priv->rps.power = HIGH_POWER; /* force a reset */ 4138 + gen6_set_rps(dev_priv->dev, dev_priv->rps.min_freq_softlimit); 4253 4139 4254 4140 intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 4255 4141 } ··· 5100 4990 intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); 5101 4991 } 5102 4992 5103 - void ironlake_teardown_rc6(struct drm_device *dev) 5104 - { 5105 - struct drm_i915_private *dev_priv = dev->dev_private; 5106 - 5107 - if (dev_priv->ips.renderctx) { 5108 - i915_gem_object_ggtt_unpin(dev_priv->ips.renderctx); 5109 - drm_gem_object_unreference(&dev_priv->ips.renderctx->base); 5110 - dev_priv->ips.renderctx = NULL; 5111 - } 5112 - 5113 - if (dev_priv->ips.pwrctx) { 5114 - i915_gem_object_ggtt_unpin(dev_priv->ips.pwrctx); 5115 - drm_gem_object_unreference(&dev_priv->ips.pwrctx->base); 5116 - dev_priv->ips.pwrctx = NULL; 5117 - } 5118 - } 5119 - 5120 - static void ironlake_disable_rc6(struct drm_device *dev) 5121 - { 5122 - struct drm_i915_private *dev_priv = dev->dev_private; 5123 - 5124 - if (I915_READ(PWRCTXA)) { 5125 - /* Wake the GPU, prevent RC6, then restore RSTDBYCTL */ 5126 - I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) | RCX_SW_EXIT); 5127 - wait_for(((I915_READ(RSTDBYCTL) & RSX_STATUS_MASK) == RSX_STATUS_ON), 5128 - 50); 5129 - 5130 - I915_WRITE(PWRCTXA, 0); 5131 - POSTING_READ(PWRCTXA); 5132 - 5133 - I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT); 5134 - POSTING_READ(RSTDBYCTL); 5135 - } 5136 - } 5137 - 5138 - static int ironlake_setup_rc6(struct drm_device *dev) 5139 - { 5140 - struct drm_i915_private 
*dev_priv = dev->dev_private; 5141 - 5142 - if (dev_priv->ips.renderctx == NULL) 5143 - dev_priv->ips.renderctx = intel_alloc_context_page(dev); 5144 - if (!dev_priv->ips.renderctx) 5145 - return -ENOMEM; 5146 - 5147 - if (dev_priv->ips.pwrctx == NULL) 5148 - dev_priv->ips.pwrctx = intel_alloc_context_page(dev); 5149 - if (!dev_priv->ips.pwrctx) { 5150 - ironlake_teardown_rc6(dev); 5151 - return -ENOMEM; 5152 - } 5153 - 5154 - return 0; 5155 - } 5156 - 5157 - static void ironlake_enable_rc6(struct drm_device *dev) 5158 - { 5159 - struct drm_i915_private *dev_priv = dev->dev_private; 5160 - struct intel_engine_cs *ring = &dev_priv->ring[RCS]; 5161 - bool was_interruptible; 5162 - int ret; 5163 - 5164 - /* rc6 disabled by default due to repeated reports of hanging during 5165 - * boot and resume. 5166 - */ 5167 - if (!intel_enable_rc6(dev)) 5168 - return; 5169 - 5170 - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 5171 - 5172 - ret = ironlake_setup_rc6(dev); 5173 - if (ret) 5174 - return; 5175 - 5176 - was_interruptible = dev_priv->mm.interruptible; 5177 - dev_priv->mm.interruptible = false; 5178 - 5179 - /* 5180 - * GPU can automatically power down the render unit if given a page 5181 - * to save state. 5182 - */ 5183 - ret = intel_ring_begin(ring, 6); 5184 - if (ret) { 5185 - ironlake_teardown_rc6(dev); 5186 - dev_priv->mm.interruptible = was_interruptible; 5187 - return; 5188 - } 5189 - 5190 - intel_ring_emit(ring, MI_SUSPEND_FLUSH | MI_SUSPEND_FLUSH_EN); 5191 - intel_ring_emit(ring, MI_SET_CONTEXT); 5192 - intel_ring_emit(ring, i915_gem_obj_ggtt_offset(dev_priv->ips.renderctx) | 5193 - MI_MM_SPACE_GTT | 5194 - MI_SAVE_EXT_STATE_EN | 5195 - MI_RESTORE_EXT_STATE_EN | 5196 - MI_RESTORE_INHIBIT); 5197 - intel_ring_emit(ring, MI_SUSPEND_FLUSH); 5198 - intel_ring_emit(ring, MI_NOOP); 5199 - intel_ring_emit(ring, MI_FLUSH); 5200 - intel_ring_advance(ring); 5201 - 5202 - /* 5203 - * Wait for the command parser to advance past MI_SET_CONTEXT. 
The HW 5204 - * does an implicit flush, combined with MI_FLUSH above, it should be 5205 - * safe to assume that renderctx is valid 5206 - */ 5207 - ret = intel_ring_idle(ring); 5208 - dev_priv->mm.interruptible = was_interruptible; 5209 - if (ret) { 5210 - DRM_ERROR("failed to enable ironlake power savings\n"); 5211 - ironlake_teardown_rc6(dev); 5212 - return; 5213 - } 5214 - 5215 - I915_WRITE(PWRCTXA, i915_gem_obj_ggtt_offset(dev_priv->ips.pwrctx) | PWRCTX_EN); 5216 - I915_WRITE(RSTDBYCTL, I915_READ(RSTDBYCTL) & ~RCX_SW_EXIT); 5217 - 5218 - intel_print_rc6_info(dev, GEN6_RC_CTL_RC6_ENABLE); 5219 - } 5220 - 5221 4993 static unsigned long intel_pxfreq(u32 vidfreq) 5222 4994 { 5223 4995 unsigned long freq; ··· 5612 5620 5613 5621 flush_delayed_work(&dev_priv->rps.delayed_resume_work); 5614 5622 5615 - /* 5616 - * TODO: disable RPS interrupts on GEN9+ too once RPS support 5617 - * is added for it. 5618 - */ 5619 - if (INTEL_INFO(dev)->gen < 9) 5620 - gen6_disable_rps_interrupts(dev); 5623 + gen6_disable_rps_interrupts(dev); 5621 5624 } 5622 5625 5623 5626 /** ··· 5642 5655 5643 5656 if (IS_IRONLAKE_M(dev)) { 5644 5657 ironlake_disable_drps(dev); 5645 - ironlake_disable_rc6(dev); 5646 5658 } else if (INTEL_INFO(dev)->gen >= 6) { 5647 5659 intel_suspend_gt_powersave(dev); 5648 5660 ··· 5669 5683 5670 5684 mutex_lock(&dev_priv->rps.hw_lock); 5671 5685 5672 - /* 5673 - * TODO: reset/enable RPS interrupts on GEN9+ too, once RPS support is 5674 - * added for it. 
5675 - */ 5676 - if (INTEL_INFO(dev)->gen < 9) 5677 - gen6_reset_rps_interrupts(dev); 5686 + gen6_reset_rps_interrupts(dev); 5678 5687 5679 5688 if (IS_CHERRYVIEW(dev)) { 5680 5689 cherryview_enable_rps(dev); ··· 5688 5707 } 5689 5708 dev_priv->rps.enabled = true; 5690 5709 5691 - if (INTEL_INFO(dev)->gen < 9) 5692 - gen6_enable_rps_interrupts(dev); 5710 + gen6_enable_rps_interrupts(dev); 5693 5711 5694 5712 mutex_unlock(&dev_priv->rps.hw_lock); 5695 5713 ··· 5706 5726 if (IS_IRONLAKE_M(dev)) { 5707 5727 mutex_lock(&dev->struct_mutex); 5708 5728 ironlake_enable_drps(dev); 5709 - ironlake_enable_rc6(dev); 5710 5729 intel_init_emon(dev); 5711 5730 mutex_unlock(&dev->struct_mutex); 5712 5731 } else if (INTEL_INFO(dev)->gen >= 6) { ··· 6238 6259 gen6_check_mch_setup(dev); 6239 6260 } 6240 6261 6262 + static void vlv_init_display_clock_gating(struct drm_i915_private *dev_priv) 6263 + { 6264 + I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE); 6265 + 6266 + /* 6267 + * Disable trickle feed and enable pnd deadline calculation 6268 + */ 6269 + I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE); 6270 + I915_WRITE(CBR1_VLV, 0); 6271 + } 6272 + 6241 6273 static void valleyview_init_clock_gating(struct drm_device *dev) 6242 6274 { 6243 6275 struct drm_i915_private *dev_priv = dev->dev_private; 6244 6276 6245 - I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE); 6277 + vlv_init_display_clock_gating(dev_priv); 6246 6278 6247 6279 /* WaDisableEarlyCull:vlv */ 6248 6280 I915_WRITE(_3D_CHICKEN3, ··· 6301 6311 I915_WRITE(GEN7_UCGCTL4, 6302 6312 I915_READ(GEN7_UCGCTL4) | GEN7_L3BANK2X_CLOCK_GATE_DISABLE); 6303 6313 6304 - I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE); 6305 - 6306 6314 /* 6307 6315 * BSpec says this must be set, even though 6308 6316 * WaDisable4x2SubspanOptimization isn't listed for VLV. 
··· 6337 6349 { 6338 6350 struct drm_i915_private *dev_priv = dev->dev_private; 6339 6351 6340 - I915_WRITE(DSPCLK_GATE_D, VRHUNIT_CLOCK_GATE_DISABLE); 6341 - 6342 - I915_WRITE(MI_ARB_VLV, MI_ARB_DISPLAY_TRICKLE_FEED_DISABLE); 6352 + vlv_init_display_clock_gating(dev_priv); 6343 6353 6344 6354 /* WaVSRefCountFullforceMissDisable:chv */ 6345 6355 /* WaDSRefCountFullforceMissDisable:chv */ ··· 6527 6541 else if (INTEL_INFO(dev)->gen == 8) 6528 6542 dev_priv->display.init_clock_gating = broadwell_init_clock_gating; 6529 6543 } else if (IS_CHERRYVIEW(dev)) { 6530 - dev_priv->display.update_wm = cherryview_update_wm; 6544 + dev_priv->display.update_wm = valleyview_update_wm; 6531 6545 dev_priv->display.update_sprite_wm = valleyview_update_sprite_wm; 6532 6546 dev_priv->display.init_clock_gating = 6533 6547 cherryview_init_clock_gating; ··· 6695 6709 6696 6710 int intel_gpu_freq(struct drm_i915_private *dev_priv, int val) 6697 6711 { 6698 - if (IS_CHERRYVIEW(dev_priv->dev)) 6712 + if (IS_GEN9(dev_priv->dev)) 6713 + return (val * GT_FREQUENCY_MULTIPLIER) / GEN9_FREQ_SCALER; 6714 + else if (IS_CHERRYVIEW(dev_priv->dev)) 6699 6715 return chv_gpu_freq(dev_priv, val); 6700 6716 else if (IS_VALLEYVIEW(dev_priv->dev)) 6701 6717 return byt_gpu_freq(dev_priv, val); ··· 6707 6719 6708 6720 int intel_freq_opcode(struct drm_i915_private *dev_priv, int val) 6709 6721 { 6710 - if (IS_CHERRYVIEW(dev_priv->dev)) 6722 + if (IS_GEN9(dev_priv->dev)) 6723 + return (val * GEN9_FREQ_SCALER) / GT_FREQUENCY_MULTIPLIER; 6724 + else if (IS_CHERRYVIEW(dev_priv->dev)) 6711 6725 return chv_freq_opcode(dev_priv, val); 6712 6726 else if (IS_VALLEYVIEW(dev_priv->dev)) 6713 6727 return byt_freq_opcode(dev_priv, val);
+3 -44
drivers/gpu/drm/i915/intel_ringbuffer.c
···
	return 0;
}

-static int gen7_ring_fbc_flush(struct intel_engine_cs *ring, u32 value)
-{
-	int ret;
-
-	if (!ring->fbc_dirty)
-		return 0;
-
-	ret = intel_ring_begin(ring, 6);
-	if (ret)
-		return ret;
-	/* WaFbcNukeOn3DBlt:ivb/hsw */
-	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
-	intel_ring_emit(ring, MSG_FBC_REND_STATE);
-	intel_ring_emit(ring, value);
-	intel_ring_emit(ring, MI_STORE_REGISTER_MEM(1) | MI_SRM_LRM_GLOBAL_GTT);
-	intel_ring_emit(ring, MSG_FBC_REND_STATE);
-	intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
-	intel_ring_advance(ring);
-
-	ring->fbc_dirty = false;
-	return 0;
-}
-
static int
gen7_render_ring_flush(struct intel_engine_cs *ring,
		       u32 invalidate_domains, u32 flush_domains)
···
	intel_ring_emit(ring, 0);
	intel_ring_advance(ring);

-	if (!invalidate_domains && flush_domains)
-		return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
-
	return 0;
}
···
		return ret;
	}

-	ret = gen8_emit_pipe_control(ring, flags, scratch_addr);
-	if (ret)
-		return ret;
-
-	if (!invalidate_domains && flush_domains)
-		return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
-
-	return 0;
+	return gen8_emit_pipe_control(ring, flags, scratch_addr);
}

static void ring_write_tail(struct intel_engine_cs *ring,
···
			   u32 invalidate, u32 flush)
{
	struct drm_device *dev = ring->dev;
-	struct drm_i915_private *dev_priv = dev->dev_private;
	uint32_t cmd;
	int ret;
···
		return ret;

	cmd = MI_FLUSH_DW;
-	if (INTEL_INFO(ring->dev)->gen >= 8)
+	if (INTEL_INFO(dev)->gen >= 8)
		cmd += 1;

	/* We always require a command barrier so that subsequent
···
		cmd |= MI_INVALIDATE_TLB;
	intel_ring_emit(ring, cmd);
	intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
-	if (INTEL_INFO(ring->dev)->gen >= 8) {
+	if (INTEL_INFO(dev)->gen >= 8) {
		intel_ring_emit(ring, 0); /* upper addr */
		intel_ring_emit(ring, 0); /* value */
	} else {
···
		intel_ring_emit(ring, MI_NOOP);
	}
	intel_ring_advance(ring);
-
-	if (!invalidate && flush) {
-		if (IS_GEN7(dev))
-			return gen7_ring_fbc_flush(ring, FBC_REND_CACHE_CLEAN);
-		else if (IS_BROADWELL(dev))
-			dev_priv->fbc.need_sw_cache_clean = true;
-	}

	return 0;
}
-1
drivers/gpu/drm/i915/intel_ringbuffer.h
···
	 */
	struct drm_i915_gem_request *outstanding_lazy_request;
	bool gpu_caches_dirty;
-	bool fbc_dirty;

	wait_queue_head_t irq_queue;

+46 -10
drivers/gpu/drm/i915/intel_runtime_pm.c
···
	outb(inb(VGA_MSR_READ), VGA_MSR_WRITE);
	vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);

-	if (IS_BROADWELL(dev) || (INTEL_INFO(dev)->gen >= 9))
-		gen8_irq_power_well_post_enable(dev_priv);
+	if (IS_BROADWELL(dev))
+		gen8_irq_power_well_post_enable(dev_priv,
+						1 << PIPE_C | 1 << PIPE_B);
+}
+
+static void skl_power_well_post_enable(struct drm_i915_private *dev_priv,
+				       struct i915_power_well *power_well)
+{
+	struct drm_device *dev = dev_priv->dev;
+
+	/*
+	 * After we re-enable the power well, if we touch VGA register 0x3d5
+	 * we'll get unclaimed register interrupts. This stops after we write
+	 * anything to the VGA MSR register. The vgacon module uses this
+	 * register all the time, so if we unbind our driver and, as a
+	 * consequence, bind vgacon, we'll get stuck in an infinite loop at
+	 * console_unlock(). So make here we touch the VGA MSR register, making
+	 * sure vgacon can keep working normally without triggering interrupts
+	 * and error messages.
+	 */
+	if (power_well->data == SKL_DISP_PW_2) {
+		vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO);
+		outb(inb(VGA_MSR_READ), VGA_MSR_WRITE);
+		vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
+
+		gen8_irq_power_well_post_enable(dev_priv,
+						1 << PIPE_C | 1 << PIPE_B);
+	}
+
+	if (power_well->data == SKL_DISP_PW_1) {
+		intel_prepare_ddi(dev);
+		gen8_irq_power_well_post_enable(dev_priv, 1 << PIPE_A);
+	}
}

static void hsw_set_power_well(struct drm_i915_private *dev_priv,
···
{
	uint32_t tmp, fuse_status;
	uint32_t req_mask, state_mask;
-	bool check_fuse_status = false;
+	bool is_enabled, enable_requested, check_fuse_status = false;

	tmp = I915_READ(HSW_PWR_WELL_DRIVER);
	fuse_status = I915_READ(SKL_FUSE_STATUS);
···
	}

	req_mask = SKL_POWER_WELL_REQ(power_well->data);
+	enable_requested = tmp & req_mask;
	state_mask = SKL_POWER_WELL_STATE(power_well->data);
+	is_enabled = tmp & state_mask;

	if (enable) {
-		if (!(tmp & req_mask)) {
+		if (!enable_requested) {
			I915_WRITE(HSW_PWR_WELL_DRIVER, tmp | req_mask);
-			DRM_DEBUG_KMS("Enabling %s\n", power_well->name);
		}

-		if (!(tmp & state_mask)) {
+		if (!is_enabled) {
+			DRM_DEBUG_KMS("Enabling %s\n", power_well->name);
			if (wait_for((I915_READ(HSW_PWR_WELL_DRIVER) &
				state_mask), 1))
				DRM_ERROR("%s enable timeout\n",
···
			check_fuse_status = true;
		}
	} else {
-		if (tmp & req_mask) {
+		if (enable_requested) {
			I915_WRITE(HSW_PWR_WELL_DRIVER, tmp & ~req_mask);
			POSTING_READ(HSW_PWR_WELL_DRIVER);
			DRM_DEBUG_KMS("Disabling %s\n", power_well->name);
···
			DRM_ERROR("PG2 distributing status timeout\n");
		}
	}
+
+	if (enable && !is_enabled)
+		skl_power_well_post_enable(dev_priv, power_well);
}

static void hsw_power_well_sync_hw(struct drm_i915_private *dev_priv,
···
}

/**
- * intel_aux_display_runtime_get - grab an auxilliary power domain reference
+ * intel_aux_display_runtime_get - grab an auxiliary power domain reference
 * @dev_priv: i915 device instance
 *
 * This function grabs a power domain reference for the auxiliary power domain
···
}

/**
- * intel_aux_display_runtime_put - release an auxilliary power domain reference
+ * intel_aux_display_runtime_put - release an auxiliary power domain reference
 * @dev_priv: i915 device instance
 *
- * This function drops the auxilliary power domain reference obtained by
+ * This function drops the auxiliary power domain reference obtained by
 * intel_aux_display_runtime_get() and might power down the corresponding
 * hardware block right away if this is the last reference.
 */
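The skl power-well change above samples the request bit and the state bit separately, then runs the post-enable hook only when the well actually transitions to enabled. A minimal userspace sketch of that request/ack pattern, with a simulated register and illustrative names (the bit layout and `hw_update()` are assumptions, not the real hardware behaviour):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PW_REQ   (1u << 0)   /* driver's request bit (hypothetical layout) */
#define PW_STATE (1u << 1)   /* hardware's state/ack bit */

static uint32_t pwr_reg;        /* simulated power-well control register */
static int post_enable_calls;   /* counts post-enable work */

/* Simulated hardware: the state bit simply follows the request bit. */
static void hw_update(void)
{
	if (pwr_reg & PW_REQ)
		pwr_reg |= PW_STATE;
	else
		pwr_reg &= ~PW_STATE;
}

/* Sample both bits up front, then act only on real transitions. */
static void set_power_well(bool enable)
{
	uint32_t tmp = pwr_reg;
	bool enable_requested = tmp & PW_REQ;
	bool is_enabled = tmp & PW_STATE;

	if (enable) {
		if (!enable_requested)
			pwr_reg = tmp | PW_REQ;
	} else {
		if (enable_requested)
			pwr_reg = tmp & ~PW_REQ;
	}
	hw_update();            /* stands in for the wait_for() poll */

	if (enable && !is_enabled)
		post_enable_calls++;   /* the post-enable hook fires once */
}
```

Calling `set_power_well(true)` twice runs the post-enable work only on the first call, which is the property the `is_enabled` snapshot buys.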
+1 -1
drivers/gpu/drm/i915/intel_sdvo.c
···

	switch (crtc->config->pixel_multiplier) {
	default:
-		WARN(1, "unknown pixel mutlipler specified\n");
+		WARN(1, "unknown pixel multiplier specified\n");
	case 1: rate = SDVO_CLOCK_RATE_MULT_1X; break;
	case 2: rate = SDVO_CLOCK_RATE_MULT_2X; break;
	case 4: rate = SDVO_CLOCK_RATE_MULT_4X; break;
+2 -2
drivers/gpu/drm/i915/intel_sprite.c
···

int intel_plane_restore(struct drm_plane *plane)
{
-	if (!plane->crtc || !plane->fb)
+	if (!plane->crtc || !plane->state->fb)
		return 0;

-	return plane->funcs->update_plane(plane, plane->crtc, plane->fb,
+	return plane->funcs->update_plane(plane, plane->crtc, plane->state->fb,
				  plane->state->crtc_x, plane->state->crtc_y,
				  plane->state->crtc_w, plane->state->crtc_h,
				  plane->state->src_x, plane->state->src_y,
+15 -3
drivers/gpu/drm/i915/intel_uncore.c
···
		WARN(1, "Unclaimed register detected %s %s register 0x%x\n",
		     when, op, reg);
		__raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+		i915.mmio_debug--; /* Only report the first N failures */
	}
}

static void
hsw_unclaimed_reg_detect(struct drm_i915_private *dev_priv)
{
-	if (i915.mmio_debug)
+	static bool mmio_debug_once = true;
+
+	if (i915.mmio_debug || !mmio_debug_once)
		return;

	if (__raw_i915_read32(dev_priv, FPGA_DBG) & FPGA_DBG_RM_NOCLAIM) {
-		DRM_ERROR("Unclaimed register detected. Please use the i915.mmio_debug=1 to debug this problem.");
+		DRM_DEBUG("Unclaimed register detected, "
+			  "enabling oneshot unclaimed register reporting. "
+			  "Please use i915.mmio_debug=N for more information.\n");
		__raw_i915_write32(dev_priv, FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
+		i915.mmio_debug = mmio_debug_once--;
	}
}
···

	/* We need to init first for ECOBUS access and then
	 * determine later if we want to reinit, in case of MT access is
-	 * not working
+	 * not working. In this stage we don't know which flavour this
+	 * ivb is, so it is better to reset also the gen6 fw registers
+	 * before the ecobus check.
	 */
+
+	__raw_i915_write32(dev_priv, FORCEWAKE, 0);
+	__raw_posting_read(dev_priv, ECOBUS);
+
	fw_domain_init(dev_priv, FW_DOMAIN_ID_RENDER,
		       FORCEWAKE_MT, FORCEWAKE_MT_ACK);

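The uncore change arms unclaimed-register reporting exactly once: a static flag guards re-arming, and each reported failure drains a budget counter. A small sketch of that oneshot pattern, with illustrative names standing in for `i915.mmio_debug` and `mmio_debug_once`:

```c
#include <assert.h>

static int mmio_debug;          /* report budget (i915.mmio_debug analog) */
static int mmio_debug_once = 1; /* guards re-arming; consumed on first hit */

/* Returns 1 when detection arms reporting, 0 otherwise. */
static int unclaimed_reg_detect(int unclaimed)
{
	if (mmio_debug || !mmio_debug_once)
		return 0;       /* already armed, or oneshot already spent */

	if (unclaimed) {
		/* arm exactly once: budget becomes 1, oneshot drops to 0 */
		mmio_debug = mmio_debug_once--;
		return 1;
	}
	return 0;
}

/* Each reported failure consumes one unit of the budget, mirroring
 * the `i915.mmio_debug--` added to the debug path. */
static void report_failure(void)
{
	if (mmio_debug)
		mmio_debug--;
}
```

Once the budget drains back to zero the detector stays quiet, so a flood of unclaimed accesses produces a bounded amount of log output instead of a `DRM_ERROR` per check.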
+44 -22
drivers/gpu/drm/radeon/radeon_fence.c
···
	return test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags);
}

+struct radeon_wait_cb {
+	struct fence_cb base;
+	struct task_struct *task;
+};
+
+static void
+radeon_fence_wait_cb(struct fence *fence, struct fence_cb *cb)
+{
+	struct radeon_wait_cb *wait =
+		container_of(cb, struct radeon_wait_cb, base);
+
+	wake_up_process(wait->task);
+}
+
static signed long radeon_fence_default_wait(struct fence *f, bool intr,
					     signed long t)
{
	struct radeon_fence *fence = to_radeon_fence(f);
	struct radeon_device *rdev = fence->rdev;
-	bool signaled;
+	struct radeon_wait_cb cb;

-	fence_enable_sw_signaling(&fence->base);
+	cb.task = current;

-	/*
-	 * This function has to return -EDEADLK, but cannot hold
-	 * exclusive_lock during the wait because some callers
-	 * may already hold it. This means checking needs_reset without
-	 * lock, and not fiddling with any gpu internals.
-	 *
-	 * The callback installed with fence_enable_sw_signaling will
-	 * run before our wait_event_*timeout call, so we will see
-	 * both the signaled fence and the changes to needs_reset.
-	 */
+	if (fence_add_callback(f, &cb.base, radeon_fence_wait_cb))
+		return t;

-	if (intr)
-		t = wait_event_interruptible_timeout(rdev->fence_queue,
-			((signaled = radeon_test_signaled(fence)) ||
-			 rdev->needs_reset), t);
-	else
-		t = wait_event_timeout(rdev->fence_queue,
-			((signaled = radeon_test_signaled(fence)) ||
-			 rdev->needs_reset), t);
+	while (t > 0) {
+		if (intr)
+			set_current_state(TASK_INTERRUPTIBLE);
+		else
+			set_current_state(TASK_UNINTERRUPTIBLE);

-	if (t > 0 && !signaled)
-		return -EDEADLK;
+		/*
+		 * radeon_test_signaled must be called after
+		 * set_current_state to prevent a race with wake_up_process
+		 */
+		if (radeon_test_signaled(fence))
+			break;
+
+		if (rdev->needs_reset) {
+			t = -EDEADLK;
+			break;
+		}
+
+		t = schedule_timeout(t);
+
+		if (t > 0 && intr && signal_pending(current))
+			t = -ERESTARTSYS;
+	}
+
+	__set_current_state(TASK_RUNNING);
+	fence_remove_callback(f, &cb.base);
+
	return t;
}

+2 -4
drivers/gpu/drm/radeon/si.c
···
	WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~UPLL_BYPASS_EN_MASK);

	if (!vclk || !dclk) {
-		/* keep the Bypass mode, put PLL to sleep */
-		WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK);
+		/* keep the Bypass mode */
		return 0;
	}
···
	/* set VCO_MODE to 1 */
	WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_VCO_MODE_MASK, ~UPLL_VCO_MODE_MASK);

-	/* toggle UPLL_SLEEP to 1 then back to 0 */
-	WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK);
+	/* disable sleep mode */
	WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_SLEEP_MASK);

	/* deassert UPLL_RESET */
+41 -37
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
···
		goto out_err1;
	}

-	ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM,
-			     (dev_priv->vram_size >> PAGE_SHIFT));
-	if (unlikely(ret != 0)) {
-		DRM_ERROR("Failed initializing memory manager for VRAM.\n");
-		goto out_err2;
-	}
-
-	dev_priv->has_gmr = true;
-	if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) ||
-	    refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR,
-					 VMW_PL_GMR) != 0) {
-		DRM_INFO("No GMR memory available. "
-			 "Graphics memory resources are very limited.\n");
-		dev_priv->has_gmr = false;
-	}
-
-	if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) {
-		dev_priv->has_mob = true;
-		if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB,
-				   VMW_PL_MOB) != 0) {
-			DRM_INFO("No MOB memory available. "
-				 "3D will be disabled.\n");
-			dev_priv->has_mob = false;
-		}
-	}
-
	dev_priv->mmio_mtrr = arch_phys_wc_add(dev_priv->mmio_start,
					       dev_priv->mmio_size);

···
		goto out_no_fman;
	}

+
+	ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM,
+			     (dev_priv->vram_size >> PAGE_SHIFT));
+	if (unlikely(ret != 0)) {
+		DRM_ERROR("Failed initializing memory manager for VRAM.\n");
+		goto out_no_vram;
+	}
+
+	dev_priv->has_gmr = true;
+	if (((dev_priv->capabilities & (SVGA_CAP_GMR | SVGA_CAP_GMR2)) == 0) ||
+	    refuse_dma || ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR,
+					 VMW_PL_GMR) != 0) {
+		DRM_INFO("No GMR memory available. "
+			 "Graphics memory resources are very limited.\n");
+		dev_priv->has_gmr = false;
+	}
+
+	if (dev_priv->capabilities & SVGA_CAP_GBOBJECTS) {
+		dev_priv->has_mob = true;
+		if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_MOB,
+				   VMW_PL_MOB) != 0) {
+			DRM_INFO("No MOB memory available. "
+				 "3D will be disabled.\n");
+			dev_priv->has_mob = false;
+		}
+	}
+
	vmw_kms_save_vga(dev_priv);

	/* Start kms and overlay systems, needs fifo. */
···
	vmw_kms_close(dev_priv);
out_no_kms:
	vmw_kms_restore_vga(dev_priv);
+	if (dev_priv->has_mob)
+		(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
+	if (dev_priv->has_gmr)
+		(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
+	(void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
+out_no_vram:
	vmw_fence_manager_takedown(dev_priv->fman);
out_no_fman:
	if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
···
	iounmap(dev_priv->mmio_virt);
out_err3:
	arch_phys_wc_del(dev_priv->mmio_mtrr);
-	if (dev_priv->has_mob)
-		(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
-	if (dev_priv->has_gmr)
-		(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
-	(void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
-out_err2:
	(void)ttm_bo_device_release(&dev_priv->bdev);
out_err1:
	vmw_ttm_global_release(dev_priv);
···
	}
	vmw_kms_close(dev_priv);
	vmw_overlay_close(dev_priv);
+
+	if (dev_priv->has_mob)
+		(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
+	if (dev_priv->has_gmr)
+		(void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
+	(void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
+
	vmw_fence_manager_takedown(dev_priv->fman);
	if (dev_priv->capabilities & SVGA_CAP_IRQMASK)
		drm_irq_uninstall(dev_priv->dev);
···
	ttm_object_device_release(&dev_priv->tdev);
	iounmap(dev_priv->mmio_virt);
	arch_phys_wc_del(dev_priv->mmio_mtrr);
-	if (dev_priv->has_mob)
-		(void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_MOB);
-	if (dev_priv->has_gmr)
-		(void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR);
-	(void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM);
	(void)ttm_bo_device_release(&dev_priv->bdev);
	vmw_ttm_global_release(dev_priv);

···
{
	struct drm_device *dev = pci_get_drvdata(pdev);

+	pci_disable_device(pdev);
	drm_put_dev(dev);
}

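The vmwgfx reordering above also moves the memory-manager teardown so the error-path goto ladder undoes steps in exact reverse order of initialization. A minimal sketch of that idiom, with hypothetical step names (not the driver's own):

```c
#include <assert.h>
#include <string.h>

static char teardown_log[16];   /* records teardown order for the demo */

static int init_a(void) { return 0; }
static int init_b(void) { return 0; }
static int init_c(int fail) { return fail ? -1 : 0; }
static void fini_a(void) { strcat(teardown_log, "a"); }
static void fini_b(void) { strcat(teardown_log, "b"); }

/* Each failure label undoes exactly the steps that already succeeded,
 * in reverse order; a step's own failure jumps past its cleanup. */
static int probe(int fail_c)
{
	int ret;

	if ((ret = init_a()) != 0)
		goto out;
	if ((ret = init_b()) != 0)
		goto out_a;
	if ((ret = init_c(fail_c)) != 0)
		goto out_b;
	return 0;

out_b:
	fini_b();       /* undo init_b */
out_a:
	fini_a();       /* undo init_a */
out:
	return ret;
}
```

If `init_c` fails, the unwind runs `fini_b` then `fini_a`, mirroring the `out_no_vram` label the patch introduces between the VRAM and fence-manager steps.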
+9 -9
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
···
	ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo);
	if (unlikely(ret != 0)) {
		DRM_ERROR("Could not find or use MOB buffer.\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out_no_reloc;
	}
	bo = &vmw_bo->base;
···

out_no_reloc:
	vmw_dmabuf_unreference(&vmw_bo);
-	vmw_bo_p = NULL;
+	*vmw_bo_p = NULL;
	return ret;
}
···
	ret = vmw_user_dmabuf_lookup(sw_context->fp->tfile, handle, &vmw_bo);
	if (unlikely(ret != 0)) {
		DRM_ERROR("Could not find or use GMR region.\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out_no_reloc;
	}
	bo = &vmw_bo->base;
···

out_no_reloc:
	vmw_dmabuf_unreference(&vmw_bo);
-	vmw_bo_p = NULL;
+	*vmw_bo_p = NULL;
	return ret;
}
···
				  NULL, arg->command_size, arg->throttle_us,
				  (void __user *)(unsigned long)arg->fence_rep,
				  NULL);
-
+	ttm_read_unlock(&dev_priv->reservation_sem);
	if (unlikely(ret != 0))
-		goto out_unlock;
+		return ret;

	vmw_kms_cursor_post_execbuf(dev_priv);

-out_unlock:
-	ttm_read_unlock(&dev_priv->reservation_sem);
-	return ret;
+	return 0;
}
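One of the execbuf fixes changes `vmw_bo_p = NULL` to `*vmw_bo_p = NULL`: assigning to the parameter only clears the callee's local copy, while dereferencing clears the caller's pointer. A self-contained illustration (names are mine, not the driver's):

```c
#include <stddef.h>

/* Buggy: only the local copy of the pointer argument is cleared. */
static void clear_buggy(int **out)
{
	out = NULL;             /* caller's pointer is untouched */
}

/* Fixed: dereference so the caller's pointer is cleared. */
static void clear_fixed(int **out)
{
	*out = NULL;
}

/* Returns 1 if the caller's pointer ends up NULL, 0 otherwise. */
static int buggy_clears_caller(void)
{
	int x = 1;
	int *p = &x;

	clear_buggy(&p);
	return p == NULL;
}

static int fixed_clears_caller(void)
{
	int x = 1;
	int *p = &x;

	clear_fixed(&p);
	return p == NULL;
}
```

In the driver, the buggy form left the caller holding a pointer to a buffer object that had already been unreferenced on the error path.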
+3 -11
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···
	int i;
	struct drm_mode_config *mode_config = &dev->mode_config;

-	ret = ttm_read_lock(&dev_priv->reservation_sem, true);
-	if (unlikely(ret != 0))
-		return ret;
-
	if (!arg->num_outputs) {
		struct drm_vmw_rect def_rect = {0, 0, 800, 600};
		vmw_du_update_layout(dev_priv, 1, &def_rect);
-		goto out_unlock;
+		return 0;
	}

	rects_size = arg->num_outputs * sizeof(struct drm_vmw_rect);
	rects = kcalloc(arg->num_outputs, sizeof(struct drm_vmw_rect),
			GFP_KERNEL);
-	if (unlikely(!rects)) {
-		ret = -ENOMEM;
-		goto out_unlock;
-	}
+	if (unlikely(!rects))
+		return -ENOMEM;

	user_rects = (void __user *)(unsigned long)arg->rects;
	ret = copy_from_user(rects, user_rects, rects_size);
···

out_free:
	kfree(rects);
-out_unlock:
-	ttm_read_unlock(&dev_priv->reservation_sem);
	return ret;
}
-3
drivers/i2c/i2c-core.c
···
		status = driver->remove(client);
	}

-	if (dev->of_node)
-		irq_dispose_mapping(client->irq);
-
	dev_pm_domain_detach(&client->dev, true);
	return status;
}
+3 -3
drivers/input/keyboard/tc3589x-keypad.c
···

	input_set_drvdata(input, keypad);

-	error = request_threaded_irq(irq, NULL,
-				     tc3589x_keypad_irq, plat->irqtype,
-				     "tc3589x-keypad", keypad);
+	error = request_threaded_irq(irq, NULL, tc3589x_keypad_irq,
+				     plat->irqtype | IRQF_ONESHOT,
+				     "tc3589x-keypad", keypad);
	if (error < 0) {
		dev_err(&pdev->dev,
			"Could not allocate irq %d,error %d\n",
+1
drivers/input/misc/mma8450.c
···
	idev->private = m;
	idev->input->name = MMA8450_DRV_NAME;
	idev->input->id.bustype = BUS_I2C;
+	idev->input->dev.parent = &c->dev;
	idev->poll = mma8450_poll;
	idev->poll_interval = POLL_INTERVAL;
	idev->poll_interval_max = POLL_INTERVAL_MAX;
+3 -1
drivers/input/mouse/alps.c
···
		return -ENOMEM;

	error = alps_identify(psmouse, priv);
-	if (error)
+	if (error) {
+		kfree(priv);
		return error;
+	}

	if (set_properties) {
		psmouse->vendor = "ALPS";
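The ALPS fix frees `priv` when identification fails, plugging a leak on the error path. A minimal model of the allocate/try/release-on-failure pattern, with illustrative names and error codes and an allocation counter to make the behaviour observable:

```c
#include <stdlib.h>

static int live_allocs;   /* tracks outstanding allocations */

static void *alloc_priv(void)
{
	live_allocs++;
	return malloc(16);
}

static void free_priv(void *p)
{
	live_allocs--;
	free(p);
}

static int identify(int fail)
{
	return fail ? -5 : 0;   /* stand-in for an -EIO-style error */
}

static int detect(int fail_identify)
{
	void *priv = alloc_priv();
	int error;

	if (!priv)
		return -12;     /* -ENOMEM */

	error = identify(fail_identify);
	if (error) {
		free_priv(priv);   /* the fix: don't leak on failure */
		return error;
	}

	free_priv(priv);           /* normal teardown for this demo */
	return 0;
}
```

With the `free_priv()` on the failure path, `live_allocs` returns to zero whether or not identification succeeds.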
+1 -1
drivers/input/mouse/cyapa_gen3.c
···
#include <linux/input/mt.h>
#include <linux/module.h>
#include <linux/slab.h>
-#include <linux/unaligned/access_ok.h>
+#include <asm/unaligned.h>
#include "cyapa.h"


+2 -2
drivers/input/mouse/cyapa_gen5.c
···
#include <linux/mutex.h>
#include <linux/completion.h>
#include <linux/slab.h>
-#include <linux/unaligned/access_ok.h>
+#include <asm/unaligned.h>
#include <linux/crc-itu-t.h>
#include "cyapa.h"

···
		electrodes_tx = cyapa->electrodes_x;
		max_element_cnt = ((cyapa->aligned_electrodes_rx + 7) &
				~7u) * electrodes_tx;
-	} else if (idac_data_type == GEN5_RETRIEVE_SELF_CAP_PWC_DATA) {
+	} else {
		offset = 2;
		max_element_cnt = cyapa->electrodes_x +
				cyapa->electrodes_y;
+35 -15
drivers/input/mouse/focaltech.c
···

#define FOC_MAX_FINGERS 5

-#define FOC_MAX_X 2431
-#define FOC_MAX_Y 1663
-
/*
 * Current state of a single finger on the touchpad.
 */
···
		input_mt_slot(dev, i);
		input_mt_report_slot_state(dev, MT_TOOL_FINGER, active);
		if (active) {
-			input_report_abs(dev, ABS_MT_POSITION_X, finger->x);
+			unsigned int clamped_x, clamped_y;
+			/*
+			 * The touchpad might report invalid data, so we clamp
+			 * the resulting values so that we do not confuse
+			 * userspace.
+			 */
+			clamped_x = clamp(finger->x, 0U, priv->x_max);
+			clamped_y = clamp(finger->y, 0U, priv->y_max);
+			input_report_abs(dev, ABS_MT_POSITION_X, clamped_x);
			input_report_abs(dev, ABS_MT_POSITION_Y,
-					 FOC_MAX_Y - finger->y);
+					 priv->y_max - clamped_y);
		}
	}
	input_mt_report_pointer_emulation(dev, true);
···
	}

	state->pressed = (packet[0] >> 4) & 1;
-
-	/*
-	 * packet[5] contains some kind of tool size in the most
-	 * significant nibble. 0xff is a special value (latching) that
-	 * signals a large contact area.
-	 */
-	if (packet[5] == 0xff) {
-		state->fingers[finger].valid = false;
-		return;
-	}

	state->fingers[finger].x = ((packet[1] & 0xf) << 8) | packet[2];
	state->fingers[finger].y = (packet[3] << 8) | packet[4];
···

	return 0;
}
+
+void focaltech_set_resolution(struct psmouse *psmouse, unsigned int resolution)
+{
+	/* not supported yet */
+}
+
+static void focaltech_set_rate(struct psmouse *psmouse, unsigned int rate)
+{
+	/* not supported yet */
+}
+
+static void focaltech_set_scale(struct psmouse *psmouse,
+				enum psmouse_scale scale)
+{
+	/* not supported yet */
+}
+
int focaltech_init(struct psmouse *psmouse)
{
	struct focaltech_data *priv;
···
	psmouse->cleanup = focaltech_reset;
	/* resync is not supported yet */
	psmouse->resync_time = 0;
+	/*
+	 * rate/resolution/scale changes are not supported yet, and
+	 * the generic implementations of these functions seem to
+	 * confuse some touchpads
+	 */
+	psmouse->set_resolution = focaltech_set_resolution;
+	psmouse->set_rate = focaltech_set_rate;
+	psmouse->set_scale = focaltech_set_scale;

	return 0;

+13 -1
drivers/input/mouse/psmouse-base.c
···
}

/*
+ * Here we set the mouse scaling.
+ */
+
+static void psmouse_set_scale(struct psmouse *psmouse, enum psmouse_scale scale)
+{
+	ps2_command(&psmouse->ps2dev, NULL,
+		    scale == PSMOUSE_SCALE21 ? PSMOUSE_CMD_SETSCALE21 :
+					       PSMOUSE_CMD_SETSCALE11);
+}
+
+/*
 * psmouse_poll() - default poll handler. Everyone except for ALPS uses it.
 */
···

	psmouse->set_rate = psmouse_set_rate;
	psmouse->set_resolution = psmouse_set_resolution;
+	psmouse->set_scale = psmouse_set_scale;
	psmouse->poll = psmouse_poll;
	psmouse->protocol_handler = psmouse_process_byte;
	psmouse->pktsize = 3;
···
	if (psmouse_max_proto != PSMOUSE_PS2) {
		psmouse->set_rate(psmouse, psmouse->rate);
		psmouse->set_resolution(psmouse, psmouse->resolution);
-		ps2_command(&psmouse->ps2dev, NULL, PSMOUSE_CMD_SETSCALE11);
+		psmouse->set_scale(psmouse, PSMOUSE_SCALE11);
	}
}
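The psmouse change routes scale setting through a new `set_scale` hook with a PS/2 default, so protocol drivers (like focaltech above) can override it with a no-op. A small sketch of that "ops struct with a default, optionally overridden" shape; the struct, names, and the recorded command values here are illustrative (0xE6/0xE7 follow the PS/2 set-scaling commands, but treat them as assumed for this demo):

```c
enum scale { SCALE11, SCALE21 };

struct mouse {
	void (*set_scale)(struct mouse *m, enum scale s);
	int last_cmd;   /* records what was "sent" to the device */
};

/* Default implementation: issue the PS/2 scaling command. */
static void default_set_scale(struct mouse *m, enum scale s)
{
	m->last_cmd = (s == SCALE21) ? 0xE7 : 0xE6;
}

/* A protocol that opts out (cf. focaltech_set_scale's "not supported"). */
static void noop_set_scale(struct mouse *m, enum scale s)
{
	(void)s;
	m->last_cmd = 0;
}

static void mouse_init(struct mouse *m)
{
	m->set_scale = default_set_scale;   /* default op, like psmouse */
	m->last_cmd = -1;
}
```

Callers always go through `m->set_scale(...)`, so the core never needs to know whether the default or an override is installed.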
+6
drivers/input/mouse/psmouse.h
···
	PSMOUSE_FULL_PACKET
} psmouse_ret_t;

+enum psmouse_scale {
+	PSMOUSE_SCALE11,
+	PSMOUSE_SCALE21
+};
+
struct psmouse {
	void *private;
	struct input_dev *dev;
···
	psmouse_ret_t (*protocol_handler)(struct psmouse *psmouse);
	void (*set_rate)(struct psmouse *psmouse, unsigned int rate);
	void (*set_resolution)(struct psmouse *psmouse, unsigned int resolution);
+	void (*set_scale)(struct psmouse *psmouse, enum psmouse_scale scale);

	int (*reconnect)(struct psmouse *psmouse);
	void (*disconnect)(struct psmouse *psmouse);
+1
drivers/input/touchscreen/Kconfig
···
	tristate "Allwinner sun4i resistive touchscreen controller support"
	depends on ARCH_SUNXI || COMPILE_TEST
	depends on HWMON
+	depends on THERMAL || !THERMAL_OF
	help
	  This selects support for the resistive touchscreen controller
	  found on Allwinner sunxi SoCs.
+2
drivers/iommu/Kconfig
···
config IOMMU_IO_PGTABLE_LPAE
	bool "ARMv7/v8 Long Descriptor Format"
	select IOMMU_IO_PGTABLE
+	depends on ARM || ARM64 || COMPILE_TEST
	help
	  Enable support for the ARM long descriptor pagetable format.
	  This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page
···
	bool "MSM IOMMU Support"
	depends on ARM
	depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST
+	depends on BROKEN
	select IOMMU_API
	help
	  Support for the IOMMUs found on certain Qualcomm SOCs.
+7
drivers/iommu/exynos-iommu.c
···

static int __init exynos_iommu_init(void)
{
+	struct device_node *np;
	int ret;
+
+	np = of_find_matching_node(NULL, sysmmu_of_match);
+	if (!np)
+		return 0;
+
+	of_node_put(np);

	lv2table_kmem_cache = kmem_cache_create("exynos-iommu-lv2table",
				LV2TABLE_SIZE, LV2TABLE_SIZE, 0, NULL);
+3 -2
drivers/iommu/io-pgtable-arm.c
···
	((((d)->levels - ((l) - ARM_LPAE_START_LVL(d) + 1))		\
		* (d)->bits_per_level) + (d)->pg_shift)

-#define ARM_LPAE_PAGES_PER_PGD(d)	((d)->pgd_size >> (d)->pg_shift)
+#define ARM_LPAE_PAGES_PER_PGD(d)					\
+	DIV_ROUND_UP((d)->pgd_size, 1UL << (d)->pg_shift)

/*
 * Calculate the index at level l used to map virtual address a using the
···
	((l) == ARM_LPAE_START_LVL(d) ? ilog2(ARM_LPAE_PAGES_PER_PGD(d)) : 0)

#define ARM_LPAE_LVL_IDX(a,l,d)						\
-	(((a) >> ARM_LPAE_LVL_SHIFT(l,d)) &				\
+	(((u64)(a) >> ARM_LPAE_LVL_SHIFT(l,d)) &			\
	 ((1 << ((d)->bits_per_level + ARM_LPAE_PGD_IDX(l,d))) - 1))

/* Calculate the block/page mapping size at level l for pagetable in d. */
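The `ARM_LPAE_PAGES_PER_PGD` fix replaces a plain right shift with `DIV_ROUND_UP`, so a pgd smaller than one granule still counts as one page instead of truncating to zero. A self-contained before/after demonstration (function names are mine):

```c
/* Round-up division, as defined in the kernel's kernel.h. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Old form: integer shift truncates, so 512 bytes / 4K granule -> 0. */
static unsigned long pages_per_pgd_old(unsigned long pgd_size,
				       unsigned int pg_shift)
{
	return pgd_size >> pg_shift;
}

/* New form: any non-zero pgd occupies at least one granule. */
static unsigned long pages_per_pgd_new(unsigned long pgd_size,
				       unsigned int pg_shift)
{
	return DIV_ROUND_UP(pgd_size, 1UL << pg_shift);
}
```

Since `ARM_LPAE_PGD_IDX` feeds this count into `ilog2()`, the truncated zero in the old form would have produced a nonsense index width for small pgds.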
+7
drivers/iommu/omap-iommu.c
···
	struct kmem_cache *p;
	const unsigned long flags = SLAB_HWCACHE_ALIGN;
	size_t align = 1 << 10; /* L2 pagetable alignement */
+	struct device_node *np;
+
+	np = of_find_matching_node(NULL, omap_iommu_of_match);
+	if (!np)
+		return 0;
+
+	of_node_put(np);

	p = kmem_cache_create("iopte_cache", IOPTE_TABLE_SIZE, align, flags,
			      iopte_cachep_ctor);
+7
drivers/iommu/rockchip-iommu.c
···

static int __init rk_iommu_init(void)
{
+	struct device_node *np;
	int ret;
+
+	np = of_find_matching_node(NULL, rk_iommu_dt_ids);
+	if (!np)
+		return 0;
+
+	of_node_put(np);

	ret = bus_set_iommu(&platform_bus_type, &rk_iommu_ops);
	if (ret)
+20 -1
drivers/irqchip/irq-armada-370-xp.c
···
static void __iomem *main_int_base;
static struct irq_domain *armada_370_xp_mpic_domain;
static u32 doorbell_mask_reg;
+static int parent_irq;
#ifdef CONFIG_PCI_MSI
static struct irq_domain *armada_370_xp_msi_domain;
static DECLARE_BITMAP(msi_used, PCI_MSI_DOORBELL_NR);
···
{
	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
		armada_xp_mpic_smp_cpu_init();
+
	return NOTIFY_OK;
}

static struct notifier_block armada_370_xp_mpic_cpu_notifier = {
	.notifier_call = armada_xp_mpic_secondary_init,
+	.priority = 100,
+};
+
+static int mpic_cascaded_secondary_init(struct notifier_block *nfb,
+					unsigned long action, void *hcpu)
+{
+	if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
+		enable_percpu_irq(parent_irq, IRQ_TYPE_NONE);
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block mpic_cascaded_cpu_notifier = {
+	.notifier_call = mpic_cascaded_secondary_init,
	.priority = 100,
};

···
			     struct device_node *parent)
{
	struct resource main_int_res, per_cpu_int_res;
-	int parent_irq, nr_irqs, i;
+	int nr_irqs, i;
	u32 control;

	BUG_ON(of_address_to_resource(node, 0, &main_int_res));
···
		register_cpu_notifier(&armada_370_xp_mpic_cpu_notifier);
#endif
	} else {
+#ifdef CONFIG_SMP
+		register_cpu_notifier(&mpic_cascaded_cpu_notifier);
+#endif
		irq_set_chained_handler(parent_irq,
					armada_370_xp_mpic_handle_cascade_irq);
	}
+128 -29
drivers/irqchip/irq-gic-v3-its.c
···
{
	struct its_cmd_block *cmd, *sync_cmd, *next_cmd;
	struct its_collection *sync_col;
+	unsigned long flags;

-	raw_spin_lock(&its->lock);
+	raw_spin_lock_irqsave(&its->lock, flags);

	cmd = its_allocate_entry(its);
	if (!cmd) {		/* We're soooooo screewed... */
		pr_err_ratelimited("ITS can't allocate, dropping command\n");
-		raw_spin_unlock(&its->lock);
+		raw_spin_unlock_irqrestore(&its->lock, flags);
		return;
	}
	sync_col = builder(cmd, desc);
···

post:
	next_cmd = its_post_commands(its);
-	raw_spin_unlock(&its->lock);
+	raw_spin_unlock_irqrestore(&its->lock, flags);

	its_wait_for_range_completion(its, cmd, next_cmd);
}
···
{
	int err;
	int i;
-	int psz = PAGE_SIZE;
+	int psz = SZ_64K;
	u64 shr = GITS_BASER_InnerShareable;

	for (i = 0; i < GITS_BASER_NR_REGS; i++) {
		u64 val = readq_relaxed(its->base + GITS_BASER + i * 8);
		u64 type = GITS_BASER_TYPE(val);
		u64 entry_size = GITS_BASER_ENTRY_SIZE(val);
+		int order = get_order(psz);
+		int alloc_size;
		u64 tmp;
		void *base;

		if (type == GITS_BASER_TYPE_NONE)
			continue;

-		/* We're lazy and only allocate a single page for now */
-		base = (void *)get_zeroed_page(GFP_KERNEL);
+		/*
+		 * Allocate as many entries as required to fit the
+		 * range of device IDs that the ITS can grok... The ID
+		 * space being incredibly sparse, this results in a
+		 * massive waste of memory.
+		 *
+		 * For other tables, only allocate a single page.
+		 */
+		if (type == GITS_BASER_TYPE_DEVICE) {
+			u64 typer = readq_relaxed(its->base + GITS_TYPER);
+			u32 ids = GITS_TYPER_DEVBITS(typer);
+
+			order = get_order((1UL << ids) * entry_size);
+			if (order >= MAX_ORDER) {
+				order = MAX_ORDER - 1;
+				pr_warn("%s: Device Table too large, reduce its page order to %u\n",
+					its->msi_chip.of_node->full_name, order);
+			}
+		}
+
+		alloc_size = (1 << order) * PAGE_SIZE;
+		base = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
		if (!base) {
			err = -ENOMEM;
			goto out_free;
···
			break;
		}

-		val |= (PAGE_SIZE / psz) - 1;
+		val |= (alloc_size / psz) - 1;

		writeq_relaxed(val, its->base + GITS_BASER + i * 8);
		tmp = readq_relaxed(its->base + GITS_BASER + i * 8);
···
		}

		pr_info("ITS: allocated %d %s @%lx (psz %dK, shr %d)\n",
-			(int)(PAGE_SIZE / entry_size),
+			(int)(alloc_size / entry_size),
			its_base_type_string[type],
			(unsigned long)virt_to_phys(base),
			psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT);
···
static struct its_device *its_find_device(struct its_node *its, u32 dev_id)
{
	struct its_device *its_dev = NULL, *tmp;
+	unsigned long flags;

-	raw_spin_lock(&its->lock);
+	raw_spin_lock_irqsave(&its->lock, flags);

	list_for_each_entry(tmp, &its->its_device_list, entry) {
		if (tmp->device_id == dev_id) {
···
		}
	}

-	raw_spin_unlock(&its->lock);
+	raw_spin_unlock_irqrestore(&its->lock, flags);

	return its_dev;
}
···
{
	struct its_device *dev;
	unsigned long *lpi_map;
+	unsigned long flags;
	void *itt;
	int lpi_base;
	int nr_lpis;
···
	nr_ites = max(2UL, roundup_pow_of_two(nvecs));
	sz = nr_ites * its->ite_size;
	sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
-	itt = kmalloc(sz, GFP_KERNEL);
+	itt = kzalloc(sz, GFP_KERNEL);
	lpi_map = its_lpi_alloc_chunks(nvecs, &lpi_base, &nr_lpis);

	if (!dev || !itt || !lpi_map) {
···
	dev->device_id = dev_id;
	INIT_LIST_HEAD(&dev->entry);

-	raw_spin_lock(&its->lock);
+	raw_spin_lock_irqsave(&its->lock, flags);
	list_add(&dev->entry, &its->its_device_list);
-	raw_spin_unlock(&its->lock);
+	raw_spin_unlock_irqrestore(&its->lock, flags);

	/* Bind the device to the first possible CPU */
	cpu = cpumask_first(cpu_online_mask);
···

static void its_free_device(struct its_device *its_dev)
{
-	raw_spin_lock(&its_dev->its->lock);
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&its_dev->its->lock, flags);
	list_del(&its_dev->entry);
-	raw_spin_unlock(&its_dev->its->lock);
+	raw_spin_unlock_irqrestore(&its_dev->its->lock, flags);
	kfree(its_dev->itt);
	kfree(its_dev);
}
···
	return 0;
}

+struct its_pci_alias {
+	struct pci_dev	*pdev;
+	u32		dev_id;
+	u32		count;
+};
+
+static int its_pci_msi_vec_count(struct pci_dev *pdev)
+{
+	int msi, msix;
+
+	msi = max(pci_msi_vec_count(pdev), 0);
+	msix = max(pci_msix_vec_count(pdev), 0);
+
+	return max(msi, msix);
+}
+
+static int its_get_pci_alias(struct pci_dev *pdev, u16 alias, void *data)
+{
+	struct its_pci_alias *dev_alias = data;
+
+	dev_alias->dev_id = alias;
+	if (pdev != dev_alias->pdev)
+		dev_alias->count += its_pci_msi_vec_count(dev_alias->pdev);
+
+	return 0;
+}
+
static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
			   int nvec, msi_alloc_info_t *info)
{
1145 1145 struct pci_dev *pdev; 1146 1146 struct its_node *its; 1147 - u32 dev_id; 1148 1147 struct its_device *its_dev; 1148 + struct its_pci_alias dev_alias; 1149 1149 1150 1150 if (!dev_is_pci(dev)) 1151 1151 return -EINVAL; 1152 1152 1153 1153 pdev = to_pci_dev(dev); 1154 - dev_id = PCI_DEVID(pdev->bus->number, pdev->devfn); 1154 + dev_alias.pdev = pdev; 1155 + dev_alias.count = nvec; 1156 + 1157 + pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias); 1155 1158 its = domain->parent->host_data; 1156 1159 1157 - its_dev = its_find_device(its, dev_id); 1158 - if (WARN_ON(its_dev)) 1159 - return -EINVAL; 1160 + its_dev = its_find_device(its, dev_alias.dev_id); 1161 + if (its_dev) { 1162 + /* 1163 + * We already have seen this ID, probably through 1164 + * another alias (PCI bridge of some sort). No need to 1165 + * create the device. 1166 + */ 1167 + dev_dbg(dev, "Reusing ITT for devID %x\n", dev_alias.dev_id); 1168 + goto out; 1169 + } 1160 1170 1161 - its_dev = its_create_device(its, dev_id, nvec); 1171 + its_dev = its_create_device(its, dev_alias.dev_id, dev_alias.count); 1162 1172 if (!its_dev) 1163 1173 return -ENOMEM; 1164 1174 1165 - dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", nvec, ilog2(nvec)); 1166 - 1175 + dev_dbg(&pdev->dev, "ITT %d entries, %d bits\n", 1176 + dev_alias.count, ilog2(dev_alias.count)); 1177 + out: 1167 1178 info->scratchpad[0].ptr = its_dev; 1168 1179 info->scratchpad[1].ptr = dev; 1169 1180 return 0; ··· 1320 1255 .deactivate = its_irq_domain_deactivate, 1321 1256 }; 1322 1257 1258 + static int its_force_quiescent(void __iomem *base) 1259 + { 1260 + u32 count = 1000000; /* 1s */ 1261 + u32 val; 1262 + 1263 + val = readl_relaxed(base + GITS_CTLR); 1264 + if (val & GITS_CTLR_QUIESCENT) 1265 + return 0; 1266 + 1267 + /* Disable the generation of all interrupts to this ITS */ 1268 + val &= ~GITS_CTLR_ENABLE; 1269 + writel_relaxed(val, base + GITS_CTLR); 1270 + 1271 + /* Poll GITS_CTLR and wait until ITS becomes quiescent */ 1272 
+ while (1) { 1273 + val = readl_relaxed(base + GITS_CTLR); 1274 + if (val & GITS_CTLR_QUIESCENT) 1275 + return 0; 1276 + 1277 + count--; 1278 + if (!count) 1279 + return -EBUSY; 1280 + 1281 + cpu_relax(); 1282 + udelay(1); 1283 + } 1284 + } 1285 + 1323 1286 static int its_probe(struct device_node *node, struct irq_domain *parent) 1324 1287 { 1325 1288 struct resource res; ··· 1373 1280 if (val != 0x30 && val != 0x40) { 1374 1281 pr_warn("%s: no ITS detected, giving up\n", node->full_name); 1375 1282 err = -ENODEV; 1283 + goto out_unmap; 1284 + } 1285 + 1286 + err = its_force_quiescent(its_base); 1287 + if (err) { 1288 + pr_warn("%s: failed to quiesce, giving up\n", 1289 + node->full_name); 1376 1290 goto out_unmap; 1377 1291 } 1378 1292 ··· 1423 1323 writeq_relaxed(baser, its->base + GITS_CBASER); 1424 1324 tmp = readq_relaxed(its->base + GITS_CBASER); 1425 1325 writeq_relaxed(0, its->base + GITS_CWRITER); 1426 - writel_relaxed(1, its->base + GITS_CTLR); 1326 + writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR); 1427 1327 1428 1328 if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) { 1429 1329 pr_info("ITS: using cache flushing for cmd queue\n"); ··· 1482 1382 1483 1383 int its_cpu_init(void) 1484 1384 { 1485 - if (!gic_rdists_supports_plpis()) { 1486 - pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); 1487 - return -ENXIO; 1488 - } 1489 - 1490 1385 if (!list_empty(&its_nodes)) { 1386 + if (!gic_rdists_supports_plpis()) { 1387 + pr_info("CPU%d: LPIs not supported\n", smp_processor_id()); 1388 + return -ENXIO; 1389 + } 1491 1390 its_cpu_init_lpis(); 1492 1391 its_cpu_init_collection(); 1493 1392 }
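The ITS device-table sizing above can be sketched in isolation: the table must cover the full device-ID space (`1 << DEVBITS` entries of `entry_size` bytes each), with the resulting page order capped below `MAX_ORDER`. The `PAGE_SHIFT` and `MAX_ORDER` values here are typical but illustrative, and `get_order()` is reimplemented rather than taken from the kernel:

```c
#include <assert.h>

#define PAGE_SHIFT 12    /* 4K pages; illustrative */
#define MAX_ORDER  11    /* typical buddy-allocator limit; illustrative */

/* get_order(): smallest order such that (PAGE_SIZE << order) >= size */
static int get_order(unsigned long size)
{
    int order = 0;

    size = (size - 1) >> PAGE_SHIFT;
    while (size) {
        order++;
        size >>= 1;
    }
    return order;
}

static int its_device_table_order(unsigned int ids, unsigned long entry_size)
{
    int order = get_order((1UL << ids) * entry_size);

    if (order >= MAX_ORDER)
        order = MAX_ORDER - 1;   /* too large: clamp (the driver also warns) */
    return order;
}
```

As the new comment in the hunk notes, the ID space is sparse, so covering it linearly wastes memory; the clamp keeps a pathological `DEVBITS` from requesting an impossible allocation.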
+1 -1
drivers/irqchip/irq-gic-v3.c
··· 466 466 tlist |= 1 << (mpidr & 0xf); 467 467 468 468 cpu = cpumask_next(cpu, mask); 469 - if (cpu == nr_cpu_ids) 469 + if (cpu >= nr_cpu_ids) 470 470 goto out; 471 471 472 472 mpidr = cpu_logical_map(cpu);
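The one-character change above (`==` to `>=`) is a robustness fix: a "no more CPUs" result from a mask iterator should be treated as any out-of-range value, not only the exact `nr_cpu_ids`, since the underlying bitmap search is sized by the bitmap width. A toy model with small illustrative widths:

```c
#include <assert.h>

/* Return the next set bit at or after 'from', or the bitmap width if none.
 * The width (nbits) can exceed the number of valid CPU ids. */
static int next_set_bit(unsigned int mask, int from, int nbits)
{
    int i;

    for (i = from; i < nbits; i++)
        if (mask & (1u << i))
            return i;
    return nbits;
}

static int mpidr_list_done(unsigned int mask, int from, int nbits,
                           int nr_cpu_ids)
{
    /* '>=' accepts any out-of-range result; '==' would only catch an
     * exact match and let the caller index past the logical-CPU map */
    return next_set_bit(mask, from, nbits) >= nr_cpu_ids;
}
```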
+12 -8
drivers/irqchip/irq-gic.c
··· 154 154 static void gic_mask_irq(struct irq_data *d) 155 155 { 156 156 u32 mask = 1 << (gic_irq(d) % 32); 157 + unsigned long flags; 157 158 158 - raw_spin_lock(&irq_controller_lock); 159 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 159 160 writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_CLEAR + (gic_irq(d) / 32) * 4); 160 161 if (gic_arch_extn.irq_mask) 161 162 gic_arch_extn.irq_mask(d); 162 - raw_spin_unlock(&irq_controller_lock); 163 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 163 164 } 164 165 165 166 static void gic_unmask_irq(struct irq_data *d) 166 167 { 167 168 u32 mask = 1 << (gic_irq(d) % 32); 169 + unsigned long flags; 168 170 169 - raw_spin_lock(&irq_controller_lock); 171 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 170 172 if (gic_arch_extn.irq_unmask) 171 173 gic_arch_extn.irq_unmask(d); 172 174 writel_relaxed(mask, gic_dist_base(d) + GIC_DIST_ENABLE_SET + (gic_irq(d) / 32) * 4); 173 - raw_spin_unlock(&irq_controller_lock); 175 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 174 176 } 175 177 176 178 static void gic_eoi_irq(struct irq_data *d) ··· 190 188 { 191 189 void __iomem *base = gic_dist_base(d); 192 190 unsigned int gicirq = gic_irq(d); 191 + unsigned long flags; 193 192 int ret; 194 193 195 194 /* Interrupt configuration for SGIs can't be changed */ ··· 202 199 type != IRQ_TYPE_EDGE_RISING) 203 200 return -EINVAL; 204 201 205 - raw_spin_lock(&irq_controller_lock); 202 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 206 203 207 204 if (gic_arch_extn.irq_set_type) 208 205 gic_arch_extn.irq_set_type(d, type); 209 206 210 207 ret = gic_configure_irq(gicirq, type, base, NULL); 211 208 212 - raw_spin_unlock(&irq_controller_lock); 209 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 213 210 214 211 return ret; 215 212 } ··· 230 227 void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3); 231 228 unsigned int cpu, shift = (gic_irq(d) % 4) * 8; 232 229 u32 val, 
mask, bit; 230 + unsigned long flags; 233 231 234 232 if (!force) 235 233 cpu = cpumask_any_and(mask_val, cpu_online_mask); ··· 240 236 if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids) 241 237 return -EINVAL; 242 238 243 - raw_spin_lock(&irq_controller_lock); 239 + raw_spin_lock_irqsave(&irq_controller_lock, flags); 244 240 mask = 0xff << shift; 245 241 bit = gic_cpu_map[cpu] << shift; 246 242 val = readl_relaxed(reg) & ~mask; 247 243 writel_relaxed(val | bit, reg); 248 - raw_spin_unlock(&irq_controller_lock); 244 + raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 249 245 250 246 return IRQ_SET_MASK_OK; 251 247 }
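The conversions above replace `raw_spin_lock()` with the `_irqsave` variant because these GIC callbacks may now run with interrupts either enabled or disabled; the unlock must restore whatever interrupt state the lock found, not unconditionally re-enable. A toy single-CPU model of the save/restore contract (no real spinlock involved):

```c
#include <assert.h>

static int irqs_enabled = 1;   /* models the CPU's local interrupt flag */

static void lock_irqsave(unsigned long *flags)
{
    *flags = (unsigned long)irqs_enabled;   /* save the current state... */
    irqs_enabled = 0;                       /* ...then mask interrupts */
}

static void unlock_irqrestore(unsigned long flags)
{
    irqs_enabled = (int)flags;   /* restore the saved state, not "on" */
}

static int critical_section_preserves_irq_state(int initially_enabled)
{
    unsigned long flags;
    int off_inside;

    irqs_enabled = initially_enabled;
    lock_irqsave(&flags);
    off_inside = (irqs_enabled == 0);   /* registers poked with IRQs off */
    unlock_irqrestore(flags);
    return off_inside && irqs_enabled == initially_enabled;
}
```

A plain `unlock` that re-enabled interrupts would be wrong whenever the caller already held interrupts off, which is exactly the situation the `flags` round-trip avoids.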
+1
drivers/mtd/nand/Kconfig
··· 526 526 527 527 config MTD_NAND_HISI504 528 528 tristate "Support for NAND controller on Hisilicon SoC Hip04" 529 + depends on HAS_DMA 529 530 help 530 531 Enables support for NAND controller on Hisilicon SoC Hip04. 531 532
+44 -6
drivers/mtd/nand/pxa3xx_nand.c
··· 480 480 nand_writel(info, NDCR, ndcr | int_mask); 481 481 } 482 482 483 + static void drain_fifo(struct pxa3xx_nand_info *info, void *data, int len) 484 + { 485 + if (info->ecc_bch) { 486 + int timeout; 487 + 488 + /* 489 + * According to the datasheet, when reading from NDDB 490 + * with BCH enabled, after each 32 bytes reads, we 491 + * have to make sure that the NDSR.RDDREQ bit is set. 492 + * 493 + * Drain the FIFO 8 32 bits reads at a time, and skip 494 + * the polling on the last read. 495 + */ 496 + while (len > 8) { 497 + __raw_readsl(info->mmio_base + NDDB, data, 8); 498 + 499 + for (timeout = 0; 500 + !(nand_readl(info, NDSR) & NDSR_RDDREQ); 501 + timeout++) { 502 + if (timeout >= 5) { 503 + dev_err(&info->pdev->dev, 504 + "Timeout on RDDREQ while draining the FIFO\n"); 505 + return; 506 + } 507 + 508 + mdelay(1); 509 + } 510 + 511 + data += 32; 512 + len -= 8; 513 + } 514 + } 515 + 516 + __raw_readsl(info->mmio_base + NDDB, data, len); 517 + } 518 + 483 519 static void handle_data_pio(struct pxa3xx_nand_info *info) 484 520 { 485 521 unsigned int do_bytes = min(info->data_size, info->chunk_size); ··· 532 496 DIV_ROUND_UP(info->oob_size, 4)); 533 497 break; 534 498 case STATE_PIO_READING: 535 - __raw_readsl(info->mmio_base + NDDB, 536 - info->data_buff + info->data_buff_pos, 537 - DIV_ROUND_UP(do_bytes, 4)); 499 + drain_fifo(info, 500 + info->data_buff + info->data_buff_pos, 501 + DIV_ROUND_UP(do_bytes, 4)); 538 502 539 503 if (info->oob_size > 0) 540 - __raw_readsl(info->mmio_base + NDDB, 541 - info->oob_buff + info->oob_buff_pos, 542 - DIV_ROUND_UP(info->oob_size, 4)); 504 + drain_fifo(info, 505 + info->oob_buff + info->oob_buff_pos, 506 + DIV_ROUND_UP(info->oob_size, 4)); 543 507 break; 544 508 default: 545 509 dev_err(&info->pdev->dev, "%s: invalid state %d\n", __func__, ··· 1608 1572 int ret, irq, cs; 1609 1573 1610 1574 pdata = dev_get_platdata(&pdev->dev); 1575 + if (pdata->num_cs <= 0) 1576 + return -ENODEV; 1611 1577 info = 
devm_kzalloc(&pdev->dev, sizeof(*info) + (sizeof(*mtd) + 1612 1578 sizeof(*host)) * pdata->num_cs, GFP_KERNEL); 1613 1579 if (!info)
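The `drain_fifo()` logic above can be modeled in userspace: with BCH ECC enabled the controller only guarantees 32 bytes (8 words) per ready indication, so the buffer is drained in 8-word bursts with a status poll between them, and the final short burst skips the poll. The ready flag and the advancing source pointer are simulation stand-ins (the real NDDB is a single fixed register):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static int fifo_ready(void)
{
    return 1;   /* stand-in for polling NDSR.RDDREQ; always ready here */
}

/* len is in 32-bit words, matching DIV_ROUND_UP(bytes, 4) in the driver */
static int drain_fifo_model(const uint32_t *fifo, uint32_t *data, int len)
{
    int bursts = 0;

    while (len > 8) {
        memcpy(data, fifo, 8 * sizeof(*data));
        if (!fifo_ready())
            return -1;   /* the driver logs a timeout and aborts here */
        fifo += 8;       /* simulation only: a real FIFO register is fixed */
        data += 8;
        len -= 8;
        bursts++;
    }
    memcpy(data, fifo, (size_t)len * sizeof(*data));   /* last burst: no poll */
    return bursts + 1;   /* total number of reads performed */
}

static int drain_copies_everything(int len)
{
    uint32_t src[64], dst[64];
    int i;

    for (i = 0; i < len; i++)
        src[i] = (uint32_t)i * 3u + 1u;
    if (drain_fifo_model(src, dst, len) < 0)
        return 0;
    return memcmp(src, dst, (size_t)len * sizeof(uint32_t)) == 0;
}

static int drain_burst_count(int len)
{
    uint32_t src[64] = { 0 }, dst[64];

    return drain_fifo_model(src, dst, len);
}
```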
+8
drivers/net/can/dev.c
··· 579 579 skb->pkt_type = PACKET_BROADCAST; 580 580 skb->ip_summed = CHECKSUM_UNNECESSARY; 581 581 582 + skb_reset_mac_header(skb); 583 + skb_reset_network_header(skb); 584 + skb_reset_transport_header(skb); 585 + 582 586 can_skb_reserve(skb); 583 587 can_skb_prv(skb)->ifindex = dev->ifindex; 584 588 ··· 606 602 skb->protocol = htons(ETH_P_CANFD); 607 603 skb->pkt_type = PACKET_BROADCAST; 608 604 skb->ip_summed = CHECKSUM_UNNECESSARY; 605 + 606 + skb_reset_mac_header(skb); 607 + skb_reset_network_header(skb); 608 + skb_reset_transport_header(skb); 609 609 610 610 can_skb_reserve(skb); 611 611 can_skb_prv(skb)->ifindex = dev->ifindex;
+31 -17
drivers/net/can/usb/kvaser_usb.c
··· 14 14 * Copyright (C) 2015 Valeo S.A. 15 15 */ 16 16 17 + #include <linux/kernel.h> 17 18 #include <linux/completion.h> 18 19 #include <linux/module.h> 19 20 #include <linux/netdevice.h> ··· 585 584 while (pos <= actual_len - MSG_HEADER_LEN) { 586 585 tmp = buf + pos; 587 586 588 - if (!tmp->len) 589 - break; 587 + /* Handle messages crossing the USB endpoint max packet 588 + * size boundary. Check kvaser_usb_read_bulk_callback() 589 + * for further details. 590 + */ 591 + if (tmp->len == 0) { 592 + pos = round_up(pos, 593 + dev->bulk_in->wMaxPacketSize); 594 + continue; 595 + } 590 596 591 597 if (pos + tmp->len > actual_len) { 592 598 dev_err(dev->udev->dev.parent, ··· 795 787 netdev_err(netdev, "Error transmitting URB\n"); 796 788 usb_unanchor_urb(urb); 797 789 usb_free_urb(urb); 798 - kfree(buf); 799 790 return err; 800 791 } 801 792 ··· 1324 1317 while (pos <= urb->actual_length - MSG_HEADER_LEN) { 1325 1318 msg = urb->transfer_buffer + pos; 1326 1319 1327 - if (!msg->len) 1328 - break; 1320 + /* The Kvaser firmware can only read and write messages that 1321 + * does not cross the USB's endpoint wMaxPacketSize boundary. 1322 + * If a follow-up command crosses such boundary, firmware puts 1323 + * a placeholder zero-length command in its place then aligns 1324 + * the real command to the next max packet size. 1325 + * 1326 + * Handle such cases or we're going to miss a significant 1327 + * number of events in case of a heavy rx load on the bus. 
1328 + */ 1329 + if (msg->len == 0) { 1330 + pos = round_up(pos, dev->bulk_in->wMaxPacketSize); 1331 + continue; 1332 + } 1329 1333 1330 1334 if (pos + msg->len > urb->actual_length) { 1331 1335 dev_err(dev->udev->dev.parent, "Format error\n"); ··· 1344 1326 } 1345 1327 1346 1328 kvaser_usb_handle_message(dev, msg); 1347 - 1348 1329 pos += msg->len; 1349 1330 } 1350 1331 ··· 1632 1615 struct urb *urb; 1633 1616 void *buf; 1634 1617 struct kvaser_msg *msg; 1635 - int i, err; 1636 - int ret = NETDEV_TX_OK; 1618 + int i, err, ret = NETDEV_TX_OK; 1637 1619 u8 *msg_tx_can_flags = NULL; /* GCC */ 1638 1620 1639 1621 if (can_dropped_invalid_skb(netdev, skb)) ··· 1650 1634 if (!buf) { 1651 1635 stats->tx_dropped++; 1652 1636 dev_kfree_skb(skb); 1653 - goto nobufmem; 1637 + goto freeurb; 1654 1638 } 1655 1639 1656 1640 msg = buf; ··· 1697 1681 /* This should never happen; it implies a flow control bug */ 1698 1682 if (!context) { 1699 1683 netdev_warn(netdev, "cannot find free context\n"); 1684 + 1685 + kfree(buf); 1700 1686 ret = NETDEV_TX_BUSY; 1701 - goto releasebuf; 1687 + goto freeurb; 1702 1688 } 1703 1689 1704 1690 context->priv = priv; ··· 1737 1719 else 1738 1720 netdev_warn(netdev, "Failed tx_urb %d\n", err); 1739 1721 1740 - goto releasebuf; 1722 + goto freeurb; 1741 1723 } 1742 1724 1743 - usb_free_urb(urb); 1725 + ret = NETDEV_TX_OK; 1744 1726 1745 - return NETDEV_TX_OK; 1746 - 1747 - releasebuf: 1748 - kfree(buf); 1749 - nobufmem: 1727 + freeurb: 1750 1728 usb_free_urb(urb); 1751 1729 return ret; 1752 1730 }
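The Kvaser parser change above reduces to a small model: a zero-length header is a placeholder the firmware inserts when a command would straddle a `wMaxPacketSize` boundary, so the scanner rounds its position up to the next boundary instead of giving up (the old `break` silently dropped every command after the first placeholder). The one-byte header layout and `MSG_HEADER_LEN` value below are simplifications, not the real protocol:

```c
#include <assert.h>

#define MSG_HEADER_LEN 2   /* illustrative; real headers are larger */

static unsigned int round_up_pow2(unsigned int x, unsigned int align)
{
    return (x + align - 1) & ~(align - 1);   /* align must be a power of 2 */
}

static int count_messages(const unsigned char *buf, int actual_len,
                          unsigned int max_packet)
{
    int pos = 0, count = 0;

    while (pos <= actual_len - MSG_HEADER_LEN) {
        unsigned char len = buf[pos];   /* byte 0 of a command: its length */

        if (len == 0) {
            /* placeholder: the real command starts at the next packet
             * boundary (placeholders never sit exactly on a boundary) */
            pos = (int)round_up_pow2((unsigned int)pos, max_packet);
            continue;
        }
        if (pos + len > actual_len)
            break;   /* truncated trailing command: stop, as the driver does */
        count++;
        pos += len;
    }
    return count;
}

static int demo_parse(void)
{
    /* 16-byte packets: one 8-byte command, a placeholder at offset 8,
     * then a second command aligned to offset 16 */
    unsigned char buf[24] = { 0 };

    buf[0]  = 8;
    buf[8]  = 0;   /* zero-length placeholder */
    buf[16] = 8;
    return count_messages(buf, (int)sizeof(buf), 16);
}
```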
+4
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 879 879 880 880 pdev->usb_if = ppdev->usb_if; 881 881 pdev->cmd_buffer_addr = ppdev->cmd_buffer_addr; 882 + 883 + /* do a copy of the ctrlmode[_supported] too */ 884 + dev->can.ctrlmode = ppdev->dev.can.ctrlmode; 885 + dev->can.ctrlmode_supported = ppdev->dev.can.ctrlmode_supported; 882 886 } 883 887 884 888 pdev->usb_if->dev[dev->ctrl_idx] = dev;
+1 -1
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
··· 593 593 if (!xgene_ring_mgr_init(pdata)) 594 594 return -ENODEV; 595 595 596 - if (!efi_enabled(EFI_BOOT)) { 596 + if (pdata->clk) { 597 597 clk_prepare_enable(pdata->clk); 598 598 clk_disable_unprepare(pdata->clk); 599 599 clk_prepare_enable(pdata->clk);
+4
drivers/net/ethernet/apm/xgene/xgene_enet_main.c
··· 1025 1025 #ifdef CONFIG_ACPI 1026 1026 static const struct acpi_device_id xgene_enet_acpi_match[] = { 1027 1027 { "APMC0D05", }, 1028 + { "APMC0D30", }, 1029 + { "APMC0D31", }, 1028 1030 { } 1029 1031 }; 1030 1032 MODULE_DEVICE_TABLE(acpi, xgene_enet_acpi_match); ··· 1035 1033 #ifdef CONFIG_OF 1036 1034 static struct of_device_id xgene_enet_of_match[] = { 1037 1035 {.compatible = "apm,xgene-enet",}, 1036 + {.compatible = "apm,xgene1-sgenet",}, 1037 + {.compatible = "apm,xgene1-xgenet",}, 1038 1038 {}, 1039 1039 }; 1040 1040
+4 -4
drivers/net/ethernet/broadcom/bcm63xx_enet.c
··· 486 486 { 487 487 struct bcm_enet_priv *priv; 488 488 struct net_device *dev; 489 - int tx_work_done, rx_work_done; 489 + int rx_work_done; 490 490 491 491 priv = container_of(napi, struct bcm_enet_priv, napi); 492 492 dev = priv->net_dev; ··· 498 498 ENETDMAC_IR, priv->tx_chan); 499 499 500 500 /* reclaim sent skb */ 501 - tx_work_done = bcm_enet_tx_reclaim(dev, 0); 501 + bcm_enet_tx_reclaim(dev, 0); 502 502 503 503 spin_lock(&priv->rx_lock); 504 504 rx_work_done = bcm_enet_receive_queue(dev, budget); 505 505 spin_unlock(&priv->rx_lock); 506 506 507 - if (rx_work_done >= budget || tx_work_done > 0) { 508 - /* rx/tx queue is not yet empty/clean */ 507 + if (rx_work_done >= budget) { 508 + /* rx queue is not yet empty/clean */ 509 509 return rx_work_done; 510 510 } 511 511
-7
drivers/net/ethernet/broadcom/bgmac.c
··· 302 302 slot->skb = skb; 303 303 slot->dma_addr = dma_addr; 304 304 305 - if (slot->dma_addr & 0xC0000000) 306 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 307 - 308 305 return 0; 309 306 } 310 307 ··· 502 505 ring->mmio_base); 503 506 goto err_dma_free; 504 507 } 505 - if (ring->dma_base & 0xC0000000) 506 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 507 508 508 509 ring->unaligned = bgmac_dma_unaligned(bgmac, ring, 509 510 BGMAC_DMA_RING_TX); ··· 531 536 err = -ENOMEM; 532 537 goto err_dma_free; 533 538 } 534 - if (ring->dma_base & 0xC0000000) 535 - bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n"); 536 539 537 540 ring->unaligned = bgmac_dma_unaligned(bgmac, ring, 538 541 BGMAC_DMA_RING_RX);
+3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12722 12722 pci_write_config_dword(bp->pdev, PCICFG_GRC_ADDRESS, 12723 12723 PCICFG_VENDOR_ID_OFFSET); 12724 12724 12725 + /* Set PCIe reset type to fundamental for EEH recovery */ 12726 + pdev->needs_freset = 1; 12727 + 12725 12728 /* AER (Advanced Error reporting) configuration */ 12726 12729 rc = pci_enable_pcie_error_reporting(pdev); 12727 12730 if (!rc)
+4 -2
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 73 73 if (wol->wolopts & ~(WAKE_MAGIC | WAKE_MAGICSECURE)) 74 74 return -EINVAL; 75 75 76 + reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 76 77 if (wol->wolopts & WAKE_MAGICSECURE) { 77 78 bcmgenet_umac_writel(priv, get_unaligned_be16(&wol->sopass[0]), 78 79 UMAC_MPD_PW_MS); 79 80 bcmgenet_umac_writel(priv, get_unaligned_be32(&wol->sopass[2]), 80 81 UMAC_MPD_PW_LS); 81 - reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 82 82 reg |= MPD_PW_EN; 83 - bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 83 + } else { 84 + reg &= ~MPD_PW_EN; 84 85 } 86 + bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 85 87 86 88 /* Flag the device and relevant IRQ as wakeup capable */ 87 89 if (wol->wolopts) {
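The wake-on-LAN hunk above turns a write-only path into a proper read-modify-write: the control register is now read unconditionally, and `MPD_PW_EN` is cleared when `WAKE_MAGICSECURE` is not requested, so a previously armed password check cannot linger. A pure-function sketch of the new register update (the bit position is illustrative):

```c
#include <assert.h>

#define MPD_PW_EN (1u << 5)   /* illustrative bit position, not the real one */

static unsigned int update_mpd_ctrl(unsigned int reg, int magic_secure)
{
    if (magic_secure)
        reg |= MPD_PW_EN;    /* password registers were just programmed */
    else
        reg &= ~MPD_PW_EN;   /* make sure a stale enable doesn't survive */
    return reg;              /* written back to UMAC_MPD_CTRL either way */
}
```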
+4 -4
drivers/net/ethernet/cadence/macb.c
··· 2113 2113 }; 2114 2114 2115 2115 #if defined(CONFIG_OF) 2116 - static struct macb_config pc302gem_config = { 2116 + static const struct macb_config pc302gem_config = { 2117 2117 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 2118 2118 .dma_burst_length = 16, 2119 2119 }; 2120 2120 2121 - static struct macb_config sama5d3_config = { 2121 + static const struct macb_config sama5d3_config = { 2122 2122 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 2123 2123 .dma_burst_length = 16, 2124 2124 }; 2125 2125 2126 - static struct macb_config sama5d4_config = { 2126 + static const struct macb_config sama5d4_config = { 2127 2127 .caps = 0, 2128 2128 .dma_burst_length = 4, 2129 2129 }; ··· 2154 2154 if (bp->pdev->dev.of_node) { 2155 2155 match = of_match_node(macb_dt_ids, bp->pdev->dev.of_node); 2156 2156 if (match && match->data) { 2157 - config = (const struct macb_config *)match->data; 2157 + config = match->data; 2158 2158 2159 2159 bp->caps = config->caps; 2160 2160 /*
+1 -1
drivers/net/ethernet/cadence/macb.h
··· 351 351 352 352 /* Bitfields in MID */ 353 353 #define MACB_IDNUM_OFFSET 16 354 - #define MACB_IDNUM_SIZE 16 354 + #define MACB_IDNUM_SIZE 12 355 355 #define MACB_REV_OFFSET 0 356 356 #define MACB_REV_SIZE 16 357 357
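The one-line change above narrows `MACB_IDNUM_SIZE` from 16 to 12 bits. With the driver's usual shift-and-mask field extraction (reimplemented here as `BFEXT()`), the declared width decides how much of the register the ID claims; a 16-bit mask would swallow the four bits above the ID field. The sample register value is arbitrary:

```c
#include <assert.h>
#include <stdint.h>

#define MACB_IDNUM_OFFSET 16
#define MACB_IDNUM_SIZE   12
#define MACB_REV_OFFSET   0
#define MACB_REV_SIZE     16

/* shift-and-mask extractor in the style of the driver's MACB_BFEXT() */
#define BFEXT(off, sz, val) (((val) >> (off)) & ((1u << (sz)) - 1u))

static uint32_t mid_idnum(uint32_t mid)
{
    return BFEXT(MACB_IDNUM_OFFSET, MACB_IDNUM_SIZE, mid);
}
```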
+1 -2
drivers/net/ethernet/freescale/fec_main.c
··· 1597 1597 writel(int_events, fep->hwp + FEC_IEVENT); 1598 1598 fec_enet_collect_events(fep, int_events); 1599 1599 1600 - if (fep->work_tx || fep->work_rx) { 1600 + if ((fep->work_tx || fep->work_rx) && fep->link) { 1601 1601 ret = IRQ_HANDLED; 1602 1602 1603 1603 if (napi_schedule_prep(&fep->napi)) { ··· 3383 3383 regulator_disable(fep->reg_phy); 3384 3384 if (fep->ptp_clock) 3385 3385 ptp_clock_unregister(fep->ptp_clock); 3386 - fec_enet_clk_enable(ndev, false); 3387 3386 of_node_put(fep->phy_node); 3388 3387 free_netdev(ndev); 3389 3388
+17 -2
drivers/net/ethernet/freescale/gianfar.c
··· 747 747 return 0; 748 748 } 749 749 750 + static int gfar_of_group_count(struct device_node *np) 751 + { 752 + struct device_node *child; 753 + int num = 0; 754 + 755 + for_each_available_child_of_node(np, child) 756 + if (!of_node_cmp(child->name, "queue-group")) 757 + num++; 758 + 759 + return num; 760 + } 761 + 750 762 static int gfar_of_init(struct platform_device *ofdev, struct net_device **pdev) 751 763 { 752 764 const char *model; ··· 796 784 num_rx_qs = 1; 797 785 } else { /* MQ_MG_MODE */ 798 786 /* get the actual number of supported groups */ 799 - unsigned int num_grps = of_get_available_child_count(np); 787 + unsigned int num_grps = gfar_of_group_count(np); 800 788 801 789 if (num_grps == 0 || num_grps > MAXGROUPS) { 802 790 dev_err(&ofdev->dev, "Invalid # of int groups(%d)\n", ··· 863 851 864 852 /* Parse and initialize group specific information */ 865 853 if (priv->mode == MQ_MG_MODE) { 866 - for_each_child_of_node(np, child) { 854 + for_each_available_child_of_node(np, child) { 855 + if (of_node_cmp(child->name, "queue-group")) 856 + continue; 857 + 867 858 err = gfar_parse_group(child, priv, model); 868 859 if (err) 869 860 goto err_grp_init;
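`gfar_of_group_count()` above exists because the device node can carry children other than interrupt groups, so counting every available child (as `of_get_available_child_count()` does) over-reports the group total. The same name-filtered count over a plain string array:

```c
#include <assert.h>
#include <string.h>

static int group_count(const char *const names[], int n)
{
    int i, num = 0;

    for (i = 0; i < n; i++)
        if (strcmp(names[i], "queue-group") == 0)   /* name filter */
            num++;
    return num;
}

static int demo_group_count(void)
{
    const char *names[] = { "queue-group", "mdio", "queue-group", "phy" };

    return group_count(names, 4);
}

static int demo_no_groups(void)
{
    const char *names[] = { "mdio", "phy" };

    return group_count(names, 2);
}
```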
+1
drivers/net/ethernet/smsc/smc91x.c
··· 92 92 #include "smc91x.h" 93 93 94 94 #if defined(CONFIG_ASSABET_NEPONSET) 95 + #include <mach/assabet.h> 95 96 #include <mach/neponset.h> 96 97 #endif 97 98
+36 -29
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 272 272 struct stmmac_priv *priv = NULL; 273 273 struct plat_stmmacenet_data *plat_dat = NULL; 274 274 const char *mac = NULL; 275 + int irq, wol_irq, lpi_irq; 276 + 277 + /* Get IRQ information early to have an ability to ask for deferred 278 + * probe if needed before we went too far with resource allocation. 279 + */ 280 + irq = platform_get_irq_byname(pdev, "macirq"); 281 + if (irq < 0) { 282 + if (irq != -EPROBE_DEFER) { 283 + dev_err(dev, 284 + "MAC IRQ configuration information not found\n"); 285 + } 286 + return irq; 287 + } 288 + 289 + /* On some platforms e.g. SPEAr the wake up irq differs from the mac irq 290 + * The external wake up irq can be passed through the platform code 291 + * named as "eth_wake_irq" 292 + * 293 + * In case the wake up interrupt is not passed from the platform 294 + * so the driver will continue to use the mac irq (ndev->irq) 295 + */ 296 + wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); 297 + if (wol_irq < 0) { 298 + if (wol_irq == -EPROBE_DEFER) 299 + return -EPROBE_DEFER; 300 + wol_irq = irq; 301 + } 302 + 303 + lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); 304 + if (lpi_irq == -EPROBE_DEFER) 305 + return -EPROBE_DEFER; 275 306 276 307 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 277 308 addr = devm_ioremap_resource(dev, res); ··· 354 323 return PTR_ERR(priv); 355 324 } 356 325 326 + /* Copy IRQ values to priv structure which is now avaialble */ 327 + priv->dev->irq = irq; 328 + priv->wol_irq = wol_irq; 329 + priv->lpi_irq = lpi_irq; 330 + 357 331 /* Get MAC address if available (DT) */ 358 332 if (mac) 359 333 memcpy(priv->dev->dev_addr, mac, ETH_ALEN); 360 - 361 - /* Get the MAC information */ 362 - priv->dev->irq = platform_get_irq_byname(pdev, "macirq"); 363 - if (priv->dev->irq < 0) { 364 - if (priv->dev->irq != -EPROBE_DEFER) { 365 - netdev_err(priv->dev, 366 - "MAC IRQ configuration information not found\n"); 367 - } 368 - return priv->dev->irq; 369 - } 370 - 371 - /* 372 - * On some 
platforms e.g. SPEAr the wake up irq differs from the mac irq 373 - * The external wake up irq can be passed through the platform code 374 - * named as "eth_wake_irq" 375 - * 376 - * In case the wake up interrupt is not passed from the platform 377 - * so the driver will continue to use the mac irq (ndev->irq) 378 - */ 379 - priv->wol_irq = platform_get_irq_byname(pdev, "eth_wake_irq"); 380 - if (priv->wol_irq < 0) { 381 - if (priv->wol_irq == -EPROBE_DEFER) 382 - return -EPROBE_DEFER; 383 - priv->wol_irq = priv->dev->irq; 384 - } 385 - 386 - priv->lpi_irq = platform_get_irq_byname(pdev, "eth_lpi"); 387 - if (priv->lpi_irq == -EPROBE_DEFER) 388 - return -EPROBE_DEFER; 389 334 390 335 platform_set_drvdata(pdev, priv->dev); 391 336
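The stmmac probe reshuffle above front-loads all IRQ lookups so a deferral is detected before any resources are allocated. The fallback policy for the optional wake-up IRQ is small enough to state as a pure function (517 is the kernel's `EPROBE_DEFER` errno value; the other codes in the tests are illustrative):

```c
#include <assert.h>

#define EPROBE_DEFER 517   /* the kernel's EPROBE_DEFER errno value */

static int resolve_wol_irq(int mac_irq, int wol_lookup_result)
{
    if (wol_lookup_result >= 0)
        return wol_lookup_result;   /* a dedicated eth_wake_irq exists */
    if (wol_lookup_result == -EPROBE_DEFER)
        return -EPROBE_DEFER;       /* propagate: probe again later */
    return mac_irq;                 /* optional IRQ absent: reuse macirq */
}
```

Deferring early is cheap; deferring after `devm_ioremap_resource()` and driver allocation forces all of that work to be torn down and redone on the next probe attempt.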
+3 -3
drivers/net/team/team.c
··· 1730 1730 if (dev->type == ARPHRD_ETHER && !is_valid_ether_addr(addr->sa_data)) 1731 1731 return -EADDRNOTAVAIL; 1732 1732 memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); 1733 - rcu_read_lock(); 1734 - list_for_each_entry_rcu(port, &team->port_list, list) 1733 + mutex_lock(&team->lock); 1734 + list_for_each_entry(port, &team->port_list, list) 1735 1735 if (team->ops.port_change_dev_addr) 1736 1736 team->ops.port_change_dev_addr(team, port); 1737 - rcu_read_unlock(); 1737 + mutex_unlock(&team->lock); 1738 1738 return 0; 1739 1739 } 1740 1740
+1 -2
drivers/net/xen-netback/interface.c
··· 340 340 unsigned int num_queues = vif->num_queues; 341 341 int i; 342 342 unsigned int queue_index; 343 - struct xenvif_stats *vif_stats; 344 343 345 344 for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) { 346 345 unsigned long accum = 0; 347 346 for (queue_index = 0; queue_index < num_queues; ++queue_index) { 348 - vif_stats = &vif->queues[queue_index].stats; 347 + void *vif_stats = &vif->queues[queue_index].stats; 349 348 accum += *(unsigned long *)(vif_stats + xenvif_stats[i].offset); 350 349 } 351 350 data[i] = accum;
+12 -10
drivers/net/xen-netback/netback.c
··· 1349 1349 { 1350 1350 unsigned int offset = skb_headlen(skb); 1351 1351 skb_frag_t frags[MAX_SKB_FRAGS]; 1352 - int i; 1352 + int i, f; 1353 1353 struct ubuf_info *uarg; 1354 1354 struct sk_buff *nskb = skb_shinfo(skb)->frag_list; 1355 1355 ··· 1389 1389 frags[i].page_offset = 0; 1390 1390 skb_frag_size_set(&frags[i], len); 1391 1391 } 1392 - /* swap out with old one */ 1393 - memcpy(skb_shinfo(skb)->frags, 1394 - frags, 1395 - i * sizeof(skb_frag_t)); 1396 - skb_shinfo(skb)->nr_frags = i; 1397 - skb->truesize += i * PAGE_SIZE; 1398 1392 1399 - /* remove traces of mapped pages and frag_list */ 1393 + /* Copied all the bits from the frag list -- free it. */ 1400 1394 skb_frag_list_init(skb); 1395 + xenvif_skb_zerocopy_prepare(queue, nskb); 1396 + kfree_skb(nskb); 1397 + 1398 + /* Release all the original (foreign) frags. */ 1399 + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) 1400 + skb_frag_unref(skb, f); 1401 1401 uarg = skb_shinfo(skb)->destructor_arg; 1402 1402 /* increase inflight counter to offset decrement in callback */ 1403 1403 atomic_inc(&queue->inflight_packets); 1404 1404 uarg->callback(uarg, true); 1405 1405 skb_shinfo(skb)->destructor_arg = NULL; 1406 1406 1407 - xenvif_skb_zerocopy_prepare(queue, nskb); 1408 - kfree_skb(nskb); 1407 + /* Fill the skb with the new (local) frags. */ 1408 + memcpy(skb_shinfo(skb)->frags, frags, i * sizeof(skb_frag_t)); 1409 + skb_shinfo(skb)->nr_frags = i; 1410 + skb->truesize += i * PAGE_SIZE; 1409 1411 1410 1412 return 0; 1411 1413 }
+1 -2
drivers/of/Kconfig
··· 84 84 bool 85 85 86 86 config OF_OVERLAY 87 - bool 88 - depends on OF 87 + bool "Device Tree overlays" 89 88 select OF_DYNAMIC 90 89 select OF_RESOLVE 91 90
+18 -9
drivers/of/base.c
··· 714 714 const char *path) 715 715 { 716 716 struct device_node *child; 717 - int len = strchrnul(path, '/') - path; 718 - int term; 717 + int len; 718 + const char *end; 719 719 720 + end = strchr(path, ':'); 721 + if (!end) 722 + end = strchrnul(path, '/'); 723 + 724 + len = end - path; 720 725 if (!len) 721 726 return NULL; 722 - 723 - term = strchrnul(path, ':') - path; 724 - if (term < len) 725 - len = term; 726 727 727 728 __for_each_child_of_node(parent, child) { 728 729 const char *name = strrchr(child->full_name, '/'); ··· 769 768 770 769 /* The path could begin with an alias */ 771 770 if (*path != '/') { 772 - char *p = strchrnul(path, '/'); 773 - int len = separator ? separator - path : p - path; 771 + int len; 772 + const char *p = separator; 773 + 774 + if (!p) 775 + p = strchrnul(path, '/'); 776 + len = p - path; 774 777 775 778 /* of_aliases must not be NULL */ 776 779 if (!of_aliases) ··· 799 794 path++; /* Increment past '/' delimiter */ 800 795 np = __of_find_node_by_path(np, path); 801 796 path = strchrnul(path, '/'); 797 + if (separator && separator < path) 798 + break; 802 799 } 803 800 raw_spin_unlock_irqrestore(&devtree_lock, flags); 804 801 return np; ··· 1893 1886 name = of_get_property(of_chosen, "linux,stdout-path", NULL); 1894 1887 if (IS_ENABLED(CONFIG_PPC) && !name) 1895 1888 name = of_get_property(of_aliases, "stdout", NULL); 1896 - if (name) 1889 + if (name) { 1897 1890 of_stdout = of_find_node_opts_by_path(name, &of_stdout_options); 1891 + add_preferred_console("stdout-path", 0, NULL); 1892 + } 1898 1893 } 1899 1894 1900 1895 if (!of_aliases)
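The lookup fix above changes how a path component's length is computed: everything from the first ':' onward is option text, so the component ends at the ':' whenever one is present, otherwise at the next '/'. A standalone model of the new length rule, mirroring the hunk verbatim (the caller change separately stops traversal once the separator is passed); `strchrnul()` is a GNU extension, so it is reimplemented locally:

```c
#include <assert.h>
#include <string.h>

/* local strchrnul(): returns a pointer to c, or to the trailing NUL */
static const char *strchrnul_(const char *s, int c)
{
    while (*s && *s != (char)c)
        s++;
    return s;
}

static int component_len(const char *path)
{
    const char *end = strchr(path, ':');   /* options start here, if present */

    if (!end)
        end = strchrnul_(path, '/');       /* otherwise: next path separator */
    return (int)(end - path);
}
```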
+2 -1
drivers/of/overlay.c
··· 19 19 #include <linux/string.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/err.h> 22 + #include <linux/idr.h> 22 23 23 24 #include "of_private.h" 24 25 ··· 86 85 struct device_node *target, struct device_node *child) 87 86 { 88 87 const char *cname; 89 - struct device_node *tchild, *grandchild; 88 + struct device_node *tchild; 90 89 int ret = 0; 91 90 92 91 cname = kbasename(child->full_name);
+19 -9
drivers/of/unittest.c
··· 92 92 "option path test failed\n"); 93 93 of_node_put(np); 94 94 95 + np = of_find_node_opts_by_path("/testcase-data:test/option", &options); 96 + selftest(np && !strcmp("test/option", options), 97 + "option path test, subcase #1 failed\n"); 98 + of_node_put(np); 99 + 95 100 np = of_find_node_opts_by_path("/testcase-data:testoption", NULL); 96 101 selftest(np, "NULL option path test failed\n"); 97 102 of_node_put(np); ··· 105 100 &options); 106 101 selftest(np && !strcmp("testaliasoption", options), 107 102 "option alias path test failed\n"); 103 + of_node_put(np); 104 + 105 + np = of_find_node_opts_by_path("testcase-alias:test/alias/option", 106 + &options); 107 + selftest(np && !strcmp("test/alias/option", options), 108 + "option alias path test, subcase #1 failed\n"); 108 109 of_node_put(np); 109 110 110 111 np = of_find_node_opts_by_path("testcase-alias:testaliasoption", NULL); ··· 389 378 rc = of_property_match_string(np, "phandle-list-names", "first"); 390 379 selftest(rc == 0, "first expected:0 got:%i\n", rc); 391 380 rc = of_property_match_string(np, "phandle-list-names", "second"); 392 - selftest(rc == 1, "second expected:0 got:%i\n", rc); 381 + selftest(rc == 1, "second expected:1 got:%i\n", rc); 393 382 rc = of_property_match_string(np, "phandle-list-names", "third"); 394 - selftest(rc == 2, "third expected:0 got:%i\n", rc); 383 + selftest(rc == 2, "third expected:2 got:%i\n", rc); 395 384 rc = of_property_match_string(np, "phandle-list-names", "fourth"); 396 385 selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc); 397 386 rc = of_property_match_string(np, "missing-property", "blah"); ··· 489 478 struct device_node *n1, *n2, *n21, *nremove, *parent, *np; 490 479 struct of_changeset chgset; 491 480 492 - of_changeset_init(&chgset); 493 481 n1 = __of_node_dup(NULL, "/testcase-data/changeset/n1"); 494 482 selftest(n1, "testcase setup failure\n"); 495 483 n2 = __of_node_dup(NULL, "/testcase-data/changeset/n2"); ··· 989 979 return pdev != NULL; 990 
980 } 991 981 992 - #if IS_ENABLED(CONFIG_I2C) 982 + #if IS_BUILTIN(CONFIG_I2C) 993 983 994 984 /* get the i2c client device instantiated at the path */ 995 985 static struct i2c_client *of_path_to_i2c_client(const char *path) ··· 1455 1445 return; 1456 1446 } 1457 1447 1458 - #if IS_ENABLED(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY) 1448 + #if IS_BUILTIN(CONFIG_I2C) && IS_ENABLED(CONFIG_OF_OVERLAY) 1459 1449 1460 1450 struct selftest_i2c_bus_data { 1461 1451 struct platform_device *pdev; ··· 1594 1584 .id_table = selftest_i2c_dev_id, 1595 1585 }; 1596 1586 1597 - #if IS_ENABLED(CONFIG_I2C_MUX) 1587 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1598 1588 1599 1589 struct selftest_i2c_mux_data { 1600 1590 int nchans; ··· 1705 1695 "could not register selftest i2c bus driver\n")) 1706 1696 return ret; 1707 1697 1708 - #if IS_ENABLED(CONFIG_I2C_MUX) 1698 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1709 1699 ret = i2c_add_driver(&selftest_i2c_mux_driver); 1710 1700 if (selftest(ret == 0, 1711 1701 "could not register selftest i2c mux driver\n")) ··· 1717 1707 1718 1708 static void of_selftest_overlay_i2c_cleanup(void) 1719 1709 { 1720 - #if IS_ENABLED(CONFIG_I2C_MUX) 1710 + #if IS_BUILTIN(CONFIG_I2C_MUX) 1721 1711 i2c_del_driver(&selftest_i2c_mux_driver); 1722 1712 #endif 1723 1713 platform_driver_unregister(&selftest_i2c_bus_driver); ··· 1824 1814 of_selftest_overlay_10(); 1825 1815 of_selftest_overlay_11(); 1826 1816 1827 - #if IS_ENABLED(CONFIG_I2C) 1817 + #if IS_BUILTIN(CONFIG_I2C) 1828 1818 if (selftest(of_selftest_overlay_i2c_init() == 0, "i2c init failed\n")) 1829 1819 goto out; 1830 1820
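Several unittest.c hunks above swap `IS_ENABLED()` for `IS_BUILTIN()`: the OF selftests link directly against I2C symbols, which only works when the subsystem is built in (`=y`), not when it is a module (`=m`). A self-contained sketch of the distinction, with the macro machinery paraphrased from include/linux/kconfig.h and `CONFIG_FOO`/`CONFIG_BAR` standing in for real options:

```c
#include <assert.h>

/* Pretend .config results: FOO=y (builtin), BAR=m (module). The
 * generated autoconf.h defines CONFIG_FOO to 1 for =y and
 * CONFIG_BAR_MODULE to 1 for =m. */
#define CONFIG_FOO 1
#define CONFIG_BAR_MODULE 1

/* Paraphrased from include/linux/kconfig.h: when the option expands to
 * 1, the placeholder pastes in an extra macro argument, shifting a 1
 * into the "val" slot; otherwise the 0 default is picked. */
#define __ARG_PLACEHOLDER_1 0,
#define config_enabled(cfg) _config_enabled(cfg)
#define _config_enabled(value) __config_enabled(__ARG_PLACEHOLDER_##value)
#define __config_enabled(arg1_or_junk) ___config_enabled(arg1_or_junk 1, 0, 0)
#define ___config_enabled(__ignored, val, ...) val

#define IS_BUILTIN(option) config_enabled(option)
#define IS_MODULE(option)  config_enabled(option##_MODULE)
#define IS_ENABLED(option) (IS_BUILTIN(option) || IS_MODULE(option))
```

With `CONFIG_I2C=m`, `IS_ENABLED(CONFIG_I2C)` is true but `IS_BUILTIN(CONFIG_I2C)` is false, so the selftest code that calls into I2C is now compiled out instead of failing to link.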
+2 -2
drivers/pci/host/pci-xgene.c
··· 127 127 return false; 128 128 } 129 129 130 - static int xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 130 + static void __iomem *xgene_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 131 131 int offset) 132 132 { 133 133 struct xgene_pcie_port *port = bus->sysdata; ··· 137 137 return NULL; 138 138 139 139 xgene_pcie_set_rtdid_reg(bus, devfn); 140 - return xgene_pcie_get_cfg_base(bus); 140 + return xgene_pcie_get_cfg_base(bus) + offset; 141 141 } 142 142 143 143 static struct pci_ops xgene_pcie_ops = {
+3 -2
drivers/pci/pci-sysfs.c
··· 521 521 struct pci_dev *pdev = to_pci_dev(dev); 522 522 char *driver_override, *old = pdev->driver_override, *cp; 523 523 524 - if (count > PATH_MAX) 524 + /* We need to keep extra room for a newline */ 525 + if (count >= (PAGE_SIZE - 1)) 525 526 return -EINVAL; 526 527 527 528 driver_override = kstrndup(buf, count, GFP_KERNEL); ··· 550 549 { 551 550 struct pci_dev *pdev = to_pci_dev(dev); 552 551 553 - return sprintf(buf, "%s\n", pdev->driver_override); 552 + return snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override); 554 553 } 555 554 static DEVICE_ATTR_RW(driver_override); 556 555
-7
drivers/regulator/core.c
··· 3444 3444 if (attr == &dev_attr_requested_microamps.attr) 3445 3445 return rdev->desc->type == REGULATOR_CURRENT ? mode : 0; 3446 3446 3447 - /* all the other attributes exist to support constraints; 3448 - * don't show them if there are no constraints, or if the 3449 - * relevant supporting methods are missing. 3450 - */ 3451 - if (!rdev->constraints) 3452 - return 0; 3453 - 3454 3447 /* constraints need specific supporting methods */ 3455 3448 if (attr == &dev_attr_min_microvolts.attr || 3456 3449 attr == &dev_attr_max_microvolts.attr)
+9
drivers/regulator/da9210-regulator.c
··· 152 152 config.regmap = chip->regmap; 153 153 config.of_node = dev->of_node; 154 154 155 + /* Mask all interrupt sources to deassert interrupt line */ 156 + error = regmap_write(chip->regmap, DA9210_REG_MASK_A, ~0); 157 + if (!error) 158 + error = regmap_write(chip->regmap, DA9210_REG_MASK_B, ~0); 159 + if (error) { 160 + dev_err(&i2c->dev, "Failed to write to mask reg: %d\n", error); 161 + return error; 162 + } 163 + 155 164 rdev = devm_regulator_register(&i2c->dev, &da9210_reg, &config); 156 165 if (IS_ERR(rdev)) { 157 166 dev_err(&i2c->dev, "Failed to register DA9210 regulator\n");
+8
drivers/regulator/rk808-regulator.c
··· 235 235 .vsel_mask = RK808_LDO_VSEL_MASK, 236 236 .enable_reg = RK808_LDO_EN_REG, 237 237 .enable_mask = BIT(0), 238 + .enable_time = 400, 238 239 .owner = THIS_MODULE, 239 240 }, { 240 241 .name = "LDO_REG2", ··· 250 249 .vsel_mask = RK808_LDO_VSEL_MASK, 251 250 .enable_reg = RK808_LDO_EN_REG, 252 251 .enable_mask = BIT(1), 252 + .enable_time = 400, 253 253 .owner = THIS_MODULE, 254 254 }, { 255 255 .name = "LDO_REG3", ··· 265 263 .vsel_mask = RK808_BUCK4_VSEL_MASK, 266 264 .enable_reg = RK808_LDO_EN_REG, 267 265 .enable_mask = BIT(2), 266 + .enable_time = 400, 268 267 .owner = THIS_MODULE, 269 268 }, { 270 269 .name = "LDO_REG4", ··· 280 277 .vsel_mask = RK808_LDO_VSEL_MASK, 281 278 .enable_reg = RK808_LDO_EN_REG, 282 279 .enable_mask = BIT(3), 280 + .enable_time = 400, 283 281 .owner = THIS_MODULE, 284 282 }, { 285 283 .name = "LDO_REG5", ··· 295 291 .vsel_mask = RK808_LDO_VSEL_MASK, 296 292 .enable_reg = RK808_LDO_EN_REG, 297 293 .enable_mask = BIT(4), 294 + .enable_time = 400, 298 295 .owner = THIS_MODULE, 299 296 }, { 300 297 .name = "LDO_REG6", ··· 310 305 .vsel_mask = RK808_LDO_VSEL_MASK, 311 306 .enable_reg = RK808_LDO_EN_REG, 312 307 .enable_mask = BIT(5), 308 + .enable_time = 400, 313 309 .owner = THIS_MODULE, 314 310 }, { 315 311 .name = "LDO_REG7", ··· 325 319 .vsel_mask = RK808_LDO_VSEL_MASK, 326 320 .enable_reg = RK808_LDO_EN_REG, 327 321 .enable_mask = BIT(6), 322 + .enable_time = 400, 328 323 .owner = THIS_MODULE, 329 324 }, { 330 325 .name = "LDO_REG8", ··· 340 333 .vsel_mask = RK808_LDO_VSEL_MASK, 341 334 .enable_reg = RK808_LDO_EN_REG, 342 335 .enable_mask = BIT(7), 336 + .enable_time = 400, 343 337 .owner = THIS_MODULE, 344 338 }, { 345 339 .name = "SWITCH_REG1",
+1
drivers/rtc/rtc-s3c.c
··· 849 849 850 850 static struct s3c_rtc_data const s3c6410_rtc_data = { 851 851 .max_user_freq = 32768, 852 + .needs_src_clk = true, 852 853 .irq_handler = s3c6410_rtc_irq, 853 854 .set_freq = s3c6410_rtc_setfreq, 854 855 .enable_tick = s3c6410_rtc_enable_tick,
+1 -1
drivers/s390/block/dcssblk.c
··· 547 547 * parse input 548 548 */ 549 549 num_of_segments = 0; 550 - for (i = 0; ((buf[i] != '\0') && (buf[i] != '\n') && i < count); i++) { 550 + for (i = 0; (i < count && (buf[i] != '\0') && (buf[i] != '\n')); i++) { 551 551 for (j = i; (buf[j] != ':') && 552 552 (buf[j] != '\0') && 553 553 (buf[j] != '\n') &&
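The dcssblk.c change above is purely about evaluation order: with `&&`, putting `i < count` first guarantees `buf[i]` is never read out of bounds, because the right-hand operands are skipped once the bound check fails. A minimal sketch of the corrected loop shape (`scan_len` is an illustrative name):

```c
#include <assert.h>
#include <stddef.h>

/* Counts characters before NUL/newline without ever indexing past
 * `count`: short-circuit && evaluates the bounds check first, so the
 * buf[i] reads are skipped once i == count. */
size_t scan_len(const char *buf, size_t count)
{
	size_t i;

	for (i = 0; i < count && buf[i] != '\0' && buf[i] != '\n'; i++)
		;
	return i;
}
```

With the old ordering, a buffer of exactly `count` non-terminator bytes would be dereferenced one element past its end before the index was validated.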
+1 -1
drivers/s390/block/scm_blk_cluster.c
··· 92 92 add = 0; 93 93 continue; 94 94 } 95 - for (pos = 0; pos <= iter->aob->request.msb_count; pos++) { 95 + for (pos = 0; pos < iter->aob->request.msb_count; pos++) { 96 96 if (clusters_intersect(req, iter->request[pos]) && 97 97 (rq_data_dir(req) == WRITE || 98 98 rq_data_dir(iter->request[pos]) == WRITE)) {
+4 -2
drivers/scsi/libsas/sas_discover.c
··· 500 500 struct sas_discovery_event *ev = to_sas_discovery_event(work); 501 501 struct asd_sas_port *port = ev->port; 502 502 struct sas_ha_struct *ha = port->ha; 503 + struct domain_device *ddev = port->port_dev; 503 504 504 505 /* prevent revalidation from finding sata links in recovery */ 505 506 mutex_lock(&ha->disco_mutex); ··· 515 514 SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, 516 515 task_pid_nr(current)); 517 516 518 - if (port->port_dev) 519 - res = sas_ex_revalidate_domain(port->port_dev); 517 + if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || 518 + ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE)) 519 + res = sas_ex_revalidate_domain(ddev); 520 520 521 521 SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", 522 522 port->id, task_pid_nr(current), res);
+6 -6
drivers/spi/spi-atmel.c
··· 764 764 (unsigned long long)xfer->rx_dma); 765 765 } 766 766 767 - /* REVISIT: We're waiting for ENDRX before we start the next 767 + /* REVISIT: We're waiting for RXBUFF before we start the next 768 768 * transfer because we need to handle some difficult timing 769 - * issues otherwise. If we wait for ENDTX in one transfer and 770 - * then starts waiting for ENDRX in the next, it's difficult 771 - * to tell the difference between the ENDRX interrupt we're 772 - * actually waiting for and the ENDRX interrupt of the 769 + * issues otherwise. If we wait for TXBUFE in one transfer and 770 + * then starts waiting for RXBUFF in the next, it's difficult 771 + * to tell the difference between the RXBUFF interrupt we're 772 + * actually waiting for and the RXBUFF interrupt of the 773 773 * previous transfer. 774 774 * 775 775 * It should be doable, though. Just not now... 776 776 */ 777 - spi_writel(as, IER, SPI_BIT(ENDRX) | SPI_BIT(OVRES)); 777 + spi_writel(as, IER, SPI_BIT(RXBUFF) | SPI_BIT(OVRES)); 778 778 spi_writel(as, PTCR, SPI_BIT(TXTEN) | SPI_BIT(RXTEN)); 779 779 } 780 780
+6
drivers/spi/spi-dw-mid.c
··· 139 139 1, 140 140 DMA_MEM_TO_DEV, 141 141 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 142 + if (!txdesc) 143 + return NULL; 144 + 142 145 txdesc->callback = dw_spi_dma_tx_done; 143 146 txdesc->callback_param = dws; 144 147 ··· 187 184 1, 188 185 DMA_DEV_TO_MEM, 189 186 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 187 + if (!rxdesc) 188 + return NULL; 189 + 190 190 rxdesc->callback = dw_spi_dma_rx_done; 191 191 rxdesc->callback_param = dws; 192 192
+2 -2
drivers/spi/spi-dw-pci.c
··· 36 36 37 37 static struct spi_pci_desc spi_pci_mid_desc_1 = { 38 38 .setup = dw_spi_mid_init, 39 - .num_cs = 32, 39 + .num_cs = 5, 40 40 .bus_num = 0, 41 41 }; 42 42 43 43 static struct spi_pci_desc spi_pci_mid_desc_2 = { 44 44 .setup = dw_spi_mid_init, 45 - .num_cs = 4, 45 + .num_cs = 2, 46 46 .bus_num = 1, 47 47 }; 48 48
+2 -2
drivers/spi/spi-dw.c
··· 621 621 if (!dws->fifo_len) { 622 622 u32 fifo; 623 623 624 - for (fifo = 2; fifo <= 256; fifo++) { 624 + for (fifo = 1; fifo < 256; fifo++) { 625 625 dw_writew(dws, DW_SPI_TXFLTR, fifo); 626 626 if (fifo != dw_readw(dws, DW_SPI_TXFLTR)) 627 627 break; 628 628 } 629 629 dw_writew(dws, DW_SPI_TXFLTR, 0); 630 630 631 - dws->fifo_len = (fifo == 2) ? 0 : fifo - 1; 631 + dws->fifo_len = (fifo == 1) ? 0 : fifo; 632 632 dev_dbg(dev, "Detected FIFO size: %u bytes\n", dws->fifo_len); 633 633 } 634 634 }
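The spi-dw.c hunk above fixes the FIFO depth auto-detection: probing now starts at 1 so a 256-entry FIFO is reachable, and the first threshold value that fails to latch in TXFLTR is itself the FIFO length. A simulation against a fake register (`FIFO_DEPTH` and the `reg_read`/`reg_write` pair are stand-ins for hardware that latches thresholds 0..depth-1; nothing here is a kernel API):

```c
#include <assert.h>

/* Simulated TXFLTR: thresholds 0..FIFO_DEPTH-1 latch, writes beyond
 * that do not stick (modelled here by leaving the register alone). */
enum { FIFO_DEPTH = 16 };
static unsigned int txfltr;

static void reg_write(unsigned int v)
{
	if (v < FIFO_DEPTH)
		txfltr = v;
}

static unsigned int reg_read(void)
{
	return txfltr;
}

/* Mirrors the fixed probe loop in spi-dw.c: write increasing
 * thresholds until one fails to read back; that value is the FIFO
 * length (0 when no FIFO is present at all). */
unsigned int detect_fifo_len(void)
{
	unsigned int fifo;

	for (fifo = 1; fifo < 256; fifo++) {
		reg_write(fifo);
		if (fifo != reg_read())
			break;
	}
	reg_write(0);			/* restore the threshold */
	return (fifo == 1) ? 0 : fifo;
}
```

The old loop started at 2 and subtracted 1 from the break value, which both under-reported the depth by one and made a full 256-entry FIFO undetectable.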
+7
drivers/spi/spi-img-spfi.c
··· 459 459 unsigned long flags; 460 460 int ret; 461 461 462 + if (xfer->len > SPFI_TRANSACTION_TSIZE_MASK) { 463 + dev_err(spfi->dev, 464 + "Transfer length (%d) is greater than the max supported (%d)", 465 + xfer->len, SPFI_TRANSACTION_TSIZE_MASK); 466 + return -EINVAL; 467 + } 468 + 462 469 /* 463 470 * Stop all DMA and reset the controller if the previous transaction 464 471 * timed-out and never completed it's DMA.
+1 -1
drivers/spi/spi-pl022.c
··· 534 534 pl022->cur_msg = NULL; 535 535 pl022->cur_transfer = NULL; 536 536 pl022->cur_chip = NULL; 537 - spi_finalize_current_message(pl022->master); 538 537 539 538 /* disable the SPI/SSP operation */ 540 539 writew((readw(SSP_CR1(pl022->virtbase)) & 541 540 (~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase)); 542 541 542 + spi_finalize_current_message(pl022->master); 543 543 } 544 544 545 545 /**
+22
drivers/spi/spi-ti-qspi.c
··· 101 101 #define QSPI_FLEN(n) ((n - 1) << 0) 102 102 103 103 /* STATUS REGISTER */ 104 + #define BUSY 0x01 104 105 #define WC 0x02 105 106 106 107 /* INTERRUPT REGISTER */ ··· 200 199 ti_qspi_write(qspi, ctx_reg->clkctrl, QSPI_SPI_CLOCK_CNTRL_REG); 201 200 } 202 201 202 + static inline u32 qspi_is_busy(struct ti_qspi *qspi) 203 + { 204 + u32 stat; 205 + unsigned long timeout = jiffies + QSPI_COMPLETION_TIMEOUT; 206 + 207 + stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG); 208 + while ((stat & BUSY) && time_after(timeout, jiffies)) { 209 + cpu_relax(); 210 + stat = ti_qspi_read(qspi, QSPI_SPI_STATUS_REG); 211 + } 212 + 213 + WARN(stat & BUSY, "qspi busy\n"); 214 + return stat & BUSY; 215 + } 216 + 203 217 static int qspi_write_msg(struct ti_qspi *qspi, struct spi_transfer *t) 204 218 { 205 219 int wlen, count; ··· 227 211 wlen = t->bits_per_word >> 3; /* in bytes */ 228 212 229 213 while (count) { 214 + if (qspi_is_busy(qspi)) 215 + return -EBUSY; 216 + 230 217 switch (wlen) { 231 218 case 1: 232 219 dev_dbg(qspi->dev, "tx cmd %08x dc %08x data %02x\n", ··· 285 266 286 267 while (count) { 287 268 dev_dbg(qspi->dev, "rx cmd %08x dc %08x\n", cmd, qspi->dc); 269 + if (qspi_is_busy(qspi)) 270 + return -EBUSY; 271 + 288 272 ti_qspi_write(qspi, cmd, QSPI_SPI_CMD_REG); 289 273 if (!wait_for_completion_timeout(&qspi->transfer_complete, 290 274 QSPI_COMPLETION_TIMEOUT)) {
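The ti-qspi.c hunk above introduces `qspi_is_busy()`, which polls the status register until the BUSY bit clears or a timeout expires, so a new word is never pushed while the controller is still mid-transfer. A userspace sketch of the polling shape (an iteration budget stands in for the jiffies-based timeout, and the fake status register is an assumption of this sketch):

```c
#include <assert.h>
#include <stdbool.h>

#define BUSY 0x01

/* Fake status register: reports BUSY for a configurable number of
 * polls, then idle. */
static int polls_until_idle;

void set_busy_polls(int n)
{
	polls_until_idle = n;
}

static unsigned int read_status(void)
{
	return (polls_until_idle-- > 0) ? BUSY : 0;
}

/* Mirrors qspi_is_busy(): spin on the status register until BUSY
 * drops or the budget runs out; the caller refuses to start a new
 * transfer (returns -EBUSY in the driver) while this reports busy. */
bool qspi_is_busy_sketch(unsigned int max_polls)
{
	unsigned int stat = read_status();

	while ((stat & BUSY) && max_polls--)
		stat = read_status();	/* cpu_relax() in the kernel */
	return stat & BUSY;
}
```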
+90 -136
drivers/usb/gadget/function/f_fs.c
··· 144 144 bool read; 145 145 146 146 struct kiocb *kiocb; 147 - const struct iovec *iovec; 148 - unsigned long nr_segs; 149 - char __user *buf; 150 - size_t len; 147 + struct iov_iter data; 148 + const void *to_free; 149 + char *buf; 151 150 152 151 struct mm_struct *mm; 153 152 struct work_struct work; ··· 648 649 io_data->req->actual; 649 650 650 651 if (io_data->read && ret > 0) { 651 - int i; 652 - size_t pos = 0; 653 - 654 - /* 655 - * Since req->length may be bigger than io_data->len (after 656 - * being rounded up to maxpacketsize), we may end up with more 657 - * data then user space has space for. 658 - */ 659 - ret = min_t(int, ret, io_data->len); 660 - 661 652 use_mm(io_data->mm); 662 - for (i = 0; i < io_data->nr_segs; i++) { 663 - size_t len = min_t(size_t, ret - pos, 664 - io_data->iovec[i].iov_len); 665 - if (!len) 666 - break; 667 - if (unlikely(copy_to_user(io_data->iovec[i].iov_base, 668 - &io_data->buf[pos], len))) { 669 - ret = -EFAULT; 670 - break; 671 - } 672 - pos += len; 673 - } 653 + ret = copy_to_iter(io_data->buf, ret, &io_data->data); 654 + if (iov_iter_count(&io_data->data)) 655 + ret = -EFAULT; 674 656 unuse_mm(io_data->mm); 675 657 } 676 658 ··· 664 684 665 685 io_data->kiocb->private = NULL; 666 686 if (io_data->read) 667 - kfree(io_data->iovec); 687 + kfree(io_data->to_free); 668 688 kfree(io_data->buf); 669 689 kfree(io_data); 670 690 } ··· 723 743 * before the waiting completes, so do not assign to 'gadget' earlier 724 744 */ 725 745 struct usb_gadget *gadget = epfile->ffs->gadget; 746 + size_t copied; 726 747 727 748 spin_lock_irq(&epfile->ffs->eps_lock); 728 749 /* In the meantime, endpoint got disabled or changed. */ ··· 731 750 spin_unlock_irq(&epfile->ffs->eps_lock); 732 751 return -ESHUTDOWN; 733 752 } 753 + data_len = iov_iter_count(&io_data->data); 734 754 /* 735 755 * Controller may require buffer size to be aligned to 736 756 * maxpacketsize of an out endpoint. 737 757 */ 738 - data_len = io_data->read ? 
739 - usb_ep_align_maybe(gadget, ep->ep, io_data->len) : 740 - io_data->len; 758 + if (io_data->read) 759 + data_len = usb_ep_align_maybe(gadget, ep->ep, data_len); 741 760 spin_unlock_irq(&epfile->ffs->eps_lock); 742 761 743 762 data = kmalloc(data_len, GFP_KERNEL); 744 763 if (unlikely(!data)) 745 764 return -ENOMEM; 746 - if (io_data->aio && !io_data->read) { 747 - int i; 748 - size_t pos = 0; 749 - for (i = 0; i < io_data->nr_segs; i++) { 750 - if (unlikely(copy_from_user(&data[pos], 751 - io_data->iovec[i].iov_base, 752 - io_data->iovec[i].iov_len))) { 753 - ret = -EFAULT; 754 - goto error; 755 - } 756 - pos += io_data->iovec[i].iov_len; 757 - } 758 - } else { 759 - if (!io_data->read && 760 - unlikely(__copy_from_user(data, io_data->buf, 761 - io_data->len))) { 765 + if (!io_data->read) { 766 + copied = copy_from_iter(data, data_len, &io_data->data); 767 + if (copied != data_len) { 762 768 ret = -EFAULT; 763 769 goto error; 764 770 } ··· 844 876 */ 845 877 ret = ep->status; 846 878 if (io_data->read && ret > 0) { 847 - ret = min_t(size_t, ret, io_data->len); 848 - 849 - if (unlikely(copy_to_user(io_data->buf, 850 - data, ret))) 879 + ret = copy_to_iter(data, ret, &io_data->data); 880 + if (unlikely(iov_iter_count(&io_data->data))) 851 881 ret = -EFAULT; 852 882 } 853 883 } ··· 862 896 error: 863 897 kfree(data); 864 898 return ret; 865 - } 866 - 867 - static ssize_t 868 - ffs_epfile_write(struct file *file, const char __user *buf, size_t len, 869 - loff_t *ptr) 870 - { 871 - struct ffs_io_data io_data; 872 - 873 - ENTER(); 874 - 875 - io_data.aio = false; 876 - io_data.read = false; 877 - io_data.buf = (char * __user)buf; 878 - io_data.len = len; 879 - 880 - return ffs_epfile_io(file, &io_data); 881 - } 882 - 883 - static ssize_t 884 - ffs_epfile_read(struct file *file, char __user *buf, size_t len, loff_t *ptr) 885 - { 886 - struct ffs_io_data io_data; 887 - 888 - ENTER(); 889 - 890 - io_data.aio = false; 891 - io_data.read = true; 892 - io_data.buf = buf; 
893 - io_data.len = len; 894 - 895 - return ffs_epfile_io(file, &io_data); 896 899 } 897 900 898 901 static int ··· 900 965 return value; 901 966 } 902 967 903 - static ssize_t ffs_epfile_aio_write(struct kiocb *kiocb, 904 - const struct iovec *iovec, 905 - unsigned long nr_segs, loff_t loff) 968 + static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from) 906 969 { 907 - struct ffs_io_data *io_data; 970 + struct ffs_io_data io_data, *p = &io_data; 971 + ssize_t res; 908 972 909 973 ENTER(); 910 974 911 - io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 912 - if (unlikely(!io_data)) 913 - return -ENOMEM; 914 - 915 - io_data->aio = true; 916 - io_data->read = false; 917 - io_data->kiocb = kiocb; 918 - io_data->iovec = iovec; 919 - io_data->nr_segs = nr_segs; 920 - io_data->len = kiocb->ki_nbytes; 921 - io_data->mm = current->mm; 922 - 923 - kiocb->private = io_data; 924 - 925 - kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 926 - 927 - return ffs_epfile_io(kiocb->ki_filp, io_data); 928 - } 929 - 930 - static ssize_t ffs_epfile_aio_read(struct kiocb *kiocb, 931 - const struct iovec *iovec, 932 - unsigned long nr_segs, loff_t loff) 933 - { 934 - struct ffs_io_data *io_data; 935 - struct iovec *iovec_copy; 936 - 937 - ENTER(); 938 - 939 - iovec_copy = kmalloc_array(nr_segs, sizeof(*iovec_copy), GFP_KERNEL); 940 - if (unlikely(!iovec_copy)) 941 - return -ENOMEM; 942 - 943 - memcpy(iovec_copy, iovec, sizeof(struct iovec)*nr_segs); 944 - 945 - io_data = kmalloc(sizeof(*io_data), GFP_KERNEL); 946 - if (unlikely(!io_data)) { 947 - kfree(iovec_copy); 948 - return -ENOMEM; 975 + if (!is_sync_kiocb(kiocb)) { 976 + p = kmalloc(sizeof(io_data), GFP_KERNEL); 977 + if (unlikely(!p)) 978 + return -ENOMEM; 979 + p->aio = true; 980 + } else { 981 + p->aio = false; 949 982 } 950 983 951 - io_data->aio = true; 952 - io_data->read = true; 953 - io_data->kiocb = kiocb; 954 - io_data->iovec = iovec_copy; 955 - io_data->nr_segs = nr_segs; 956 - io_data->len = 
kiocb->ki_nbytes; 957 - io_data->mm = current->mm; 984 + p->read = false; 985 + p->kiocb = kiocb; 986 + p->data = *from; 987 + p->mm = current->mm; 958 988 959 - kiocb->private = io_data; 989 + kiocb->private = p; 960 990 961 991 kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 962 992 963 - return ffs_epfile_io(kiocb->ki_filp, io_data); 993 + res = ffs_epfile_io(kiocb->ki_filp, p); 994 + if (res == -EIOCBQUEUED) 995 + return res; 996 + if (p->aio) 997 + kfree(p); 998 + else 999 + *from = p->data; 1000 + return res; 1001 + } 1002 + 1003 + static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to) 1004 + { 1005 + struct ffs_io_data io_data, *p = &io_data; 1006 + ssize_t res; 1007 + 1008 + ENTER(); 1009 + 1010 + if (!is_sync_kiocb(kiocb)) { 1011 + p = kmalloc(sizeof(io_data), GFP_KERNEL); 1012 + if (unlikely(!p)) 1013 + return -ENOMEM; 1014 + p->aio = true; 1015 + } else { 1016 + p->aio = false; 1017 + } 1018 + 1019 + p->read = true; 1020 + p->kiocb = kiocb; 1021 + if (p->aio) { 1022 + p->to_free = dup_iter(&p->data, to, GFP_KERNEL); 1023 + if (!p->to_free) { 1024 + kfree(p); 1025 + return -ENOMEM; 1026 + } 1027 + } else { 1028 + p->data = *to; 1029 + p->to_free = NULL; 1030 + } 1031 + p->mm = current->mm; 1032 + 1033 + kiocb->private = p; 1034 + 1035 + kiocb_set_cancel_fn(kiocb, ffs_aio_cancel); 1036 + 1037 + res = ffs_epfile_io(kiocb->ki_filp, p); 1038 + if (res == -EIOCBQUEUED) 1039 + return res; 1040 + 1041 + if (p->aio) { 1042 + kfree(p->to_free); 1043 + kfree(p); 1044 + } else { 1045 + *to = p->data; 1046 + } 1047 + return res; 964 1048 } 965 1049 966 1050 static int ··· 1059 1105 .llseek = no_llseek, 1060 1106 1061 1107 .open = ffs_epfile_open, 1062 - .write = ffs_epfile_write, 1063 - .read = ffs_epfile_read, 1064 - .aio_write = ffs_epfile_aio_write, 1065 - .aio_read = ffs_epfile_aio_read, 1108 + .write = new_sync_write, 1109 + .read = new_sync_read, 1110 + .write_iter = ffs_epfile_write_iter, 1111 + .read_iter = ffs_epfile_read_iter, 1066 1112 
.release = ffs_epfile_release, 1067 1113 .unlocked_ioctl = ffs_epfile_ioctl, 1068 1114 };
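The f_fs.c rewrite above replaces hand-rolled loops over an iovec array with `iov_iter` plus `copy_to_iter()`, which handles segment walking, short copies, and position tracking in one call. A userspace model of what `copy_to_iter()` does for a simple iovec-style scatter list (`iov_sketch` and `copy_to_segments` are illustrative names, not kernel types):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct iov_sketch {
	char *base;
	size_t len;
};

/* Minimal model of copy_to_iter(): scatter up to `len` bytes from
 * `src` across the segments in order and report how many bytes fit.
 * A short return is how "more data than the reader has room for"
 * (e.g. after a maxpacketsize-rounded OUT transfer) is handled. */
size_t copy_to_segments(const char *src, size_t len,
			struct iov_sketch *iov, size_t nr_segs)
{
	size_t copied = 0;

	for (size_t i = 0; i < nr_segs && copied < len; i++) {
		size_t n = iov[i].len;

		if (n > len - copied)
			n = len - copied;
		memcpy(iov[i].base, src + copied, n);
		copied += n;
	}
	return copied;
}
```

Centralizing this logic is what lets the driver delete the per-segment `copy_to_user()`/`copy_from_user()` loops and the manual `min_t()` length clamping seen in the removed lines.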
+194 -274
drivers/usb/gadget/legacy/inode.c
··· 74 74 MODULE_AUTHOR ("David Brownell"); 75 75 MODULE_LICENSE ("GPL"); 76 76 77 + static int ep_open(struct inode *, struct file *); 78 + 77 79 78 80 /*----------------------------------------------------------------------*/ 79 81 ··· 285 283 * still need dev->lock to use epdata->ep. 286 284 */ 287 285 static int 288 - get_ready_ep (unsigned f_flags, struct ep_data *epdata) 286 + get_ready_ep (unsigned f_flags, struct ep_data *epdata, bool is_write) 289 287 { 290 288 int val; 291 289 292 290 if (f_flags & O_NONBLOCK) { 293 291 if (!mutex_trylock(&epdata->lock)) 294 292 goto nonblock; 295 - if (epdata->state != STATE_EP_ENABLED) { 293 + if (epdata->state != STATE_EP_ENABLED && 294 + (!is_write || epdata->state != STATE_EP_READY)) { 296 295 mutex_unlock(&epdata->lock); 297 296 nonblock: 298 297 val = -EAGAIN; ··· 308 305 309 306 switch (epdata->state) { 310 307 case STATE_EP_ENABLED: 308 + return 0; 309 + case STATE_EP_READY: /* not configured yet */ 310 + if (is_write) 311 + return 0; 312 + // FALLTHRU 313 + case STATE_EP_UNBOUND: /* clean disconnect */ 311 314 break; 312 315 // case STATE_EP_DISABLED: /* "can't happen" */ 313 - // case STATE_EP_READY: /* "can't happen" */ 314 316 default: /* error! 
*/ 315 317 pr_debug ("%s: ep %p not available, state %d\n", 316 318 shortname, epdata, epdata->state); 317 - // FALLTHROUGH 318 - case STATE_EP_UNBOUND: /* clean disconnect */ 319 - val = -ENODEV; 320 - mutex_unlock(&epdata->lock); 321 319 } 322 - return val; 320 + mutex_unlock(&epdata->lock); 321 + return -ENODEV; 323 322 } 324 323 325 324 static ssize_t ··· 368 363 return value; 369 364 } 370 365 371 - 372 - /* handle a synchronous OUT bulk/intr/iso transfer */ 373 - static ssize_t 374 - ep_read (struct file *fd, char __user *buf, size_t len, loff_t *ptr) 375 - { 376 - struct ep_data *data = fd->private_data; 377 - void *kbuf; 378 - ssize_t value; 379 - 380 - if ((value = get_ready_ep (fd->f_flags, data)) < 0) 381 - return value; 382 - 383 - /* halt any endpoint by doing a "wrong direction" i/o call */ 384 - if (usb_endpoint_dir_in(&data->desc)) { 385 - if (usb_endpoint_xfer_isoc(&data->desc)) { 386 - mutex_unlock(&data->lock); 387 - return -EINVAL; 388 - } 389 - DBG (data->dev, "%s halt\n", data->name); 390 - spin_lock_irq (&data->dev->lock); 391 - if (likely (data->ep != NULL)) 392 - usb_ep_set_halt (data->ep); 393 - spin_unlock_irq (&data->dev->lock); 394 - mutex_unlock(&data->lock); 395 - return -EBADMSG; 396 - } 397 - 398 - /* FIXME readahead for O_NONBLOCK and poll(); careful with ZLPs */ 399 - 400 - value = -ENOMEM; 401 - kbuf = kmalloc (len, GFP_KERNEL); 402 - if (unlikely (!kbuf)) 403 - goto free1; 404 - 405 - value = ep_io (data, kbuf, len); 406 - VDEBUG (data->dev, "%s read %zu OUT, status %d\n", 407 - data->name, len, (int) value); 408 - if (value >= 0 && copy_to_user (buf, kbuf, value)) 409 - value = -EFAULT; 410 - 411 - free1: 412 - mutex_unlock(&data->lock); 413 - kfree (kbuf); 414 - return value; 415 - } 416 - 417 - /* handle a synchronous IN bulk/intr/iso transfer */ 418 - static ssize_t 419 - ep_write (struct file *fd, const char __user *buf, size_t len, loff_t *ptr) 420 - { 421 - struct ep_data *data = fd->private_data; 422 - void *kbuf; 423 - 
ssize_t value; 424 - 425 - if ((value = get_ready_ep (fd->f_flags, data)) < 0) 426 - return value; 427 - 428 - /* halt any endpoint by doing a "wrong direction" i/o call */ 429 - if (!usb_endpoint_dir_in(&data->desc)) { 430 - if (usb_endpoint_xfer_isoc(&data->desc)) { 431 - mutex_unlock(&data->lock); 432 - return -EINVAL; 433 - } 434 - DBG (data->dev, "%s halt\n", data->name); 435 - spin_lock_irq (&data->dev->lock); 436 - if (likely (data->ep != NULL)) 437 - usb_ep_set_halt (data->ep); 438 - spin_unlock_irq (&data->dev->lock); 439 - mutex_unlock(&data->lock); 440 - return -EBADMSG; 441 - } 442 - 443 - /* FIXME writebehind for O_NONBLOCK and poll(), qlen = 1 */ 444 - 445 - value = -ENOMEM; 446 - kbuf = memdup_user(buf, len); 447 - if (IS_ERR(kbuf)) { 448 - value = PTR_ERR(kbuf); 449 - kbuf = NULL; 450 - goto free1; 451 - } 452 - 453 - value = ep_io (data, kbuf, len); 454 - VDEBUG (data->dev, "%s write %zu IN, status %d\n", 455 - data->name, len, (int) value); 456 - free1: 457 - mutex_unlock(&data->lock); 458 - kfree (kbuf); 459 - return value; 460 - } 461 - 462 366 static int 463 367 ep_release (struct inode *inode, struct file *fd) 464 368 { ··· 395 481 struct ep_data *data = fd->private_data; 396 482 int status; 397 483 398 - if ((status = get_ready_ep (fd->f_flags, data)) < 0) 484 + if ((status = get_ready_ep (fd->f_flags, data, false)) < 0) 399 485 return status; 400 486 401 487 spin_lock_irq (&data->dev->lock); ··· 431 517 struct mm_struct *mm; 432 518 struct work_struct work; 433 519 void *buf; 434 - const struct iovec *iv; 435 - unsigned long nr_segs; 520 + struct iov_iter to; 521 + const void *to_free; 436 522 unsigned actual; 437 523 }; 438 524 ··· 455 541 return value; 456 542 } 457 543 458 - static ssize_t ep_copy_to_user(struct kiocb_priv *priv) 459 - { 460 - ssize_t len, total; 461 - void *to_copy; 462 - int i; 463 - 464 - /* copy stuff into user buffers */ 465 - total = priv->actual; 466 - len = 0; 467 - to_copy = priv->buf; 468 - for (i=0; i < 
priv->nr_segs; i++) { 469 - ssize_t this = min((ssize_t)(priv->iv[i].iov_len), total); 470 - 471 - if (copy_to_user(priv->iv[i].iov_base, to_copy, this)) { 472 - if (len == 0) 473 - len = -EFAULT; 474 - break; 475 - } 476 - 477 - total -= this; 478 - len += this; 479 - to_copy += this; 480 - if (total == 0) 481 - break; 482 - } 483 - 484 - return len; 485 - } 486 - 487 544 static void ep_user_copy_worker(struct work_struct *work) 488 545 { 489 546 struct kiocb_priv *priv = container_of(work, struct kiocb_priv, work); ··· 463 578 size_t ret; 464 579 465 580 use_mm(mm); 466 - ret = ep_copy_to_user(priv); 581 + ret = copy_to_iter(priv->buf, priv->actual, &priv->to); 467 582 unuse_mm(mm); 583 + if (!ret) 584 + ret = -EFAULT; 468 585 469 586 /* completing the iocb can drop the ctx and mm, don't touch mm after */ 470 587 aio_complete(iocb, ret, ret); 471 588 472 589 kfree(priv->buf); 590 + kfree(priv->to_free); 473 591 kfree(priv); 474 592 } 475 593 ··· 491 603 * don't need to copy anything to userspace, so we can 492 604 * complete the aio request immediately. 
493 605 */ 494 - if (priv->iv == NULL || unlikely(req->actual == 0)) { 606 + if (priv->to_free == NULL || unlikely(req->actual == 0)) { 495 607 kfree(req->buf); 608 + kfree(priv->to_free); 496 609 kfree(priv); 497 610 iocb->private = NULL; 498 611 /* aio_complete() reports bytes-transferred _and_ faults */ ··· 507 618 508 619 priv->buf = req->buf; 509 620 priv->actual = req->actual; 621 + INIT_WORK(&priv->work, ep_user_copy_worker); 510 622 schedule_work(&priv->work); 511 623 } 512 624 spin_unlock(&epdata->dev->lock); ··· 516 626 put_ep(epdata); 517 627 } 518 628 519 - static ssize_t 520 - ep_aio_rwtail( 521 - struct kiocb *iocb, 522 - char *buf, 523 - size_t len, 524 - struct ep_data *epdata, 525 - const struct iovec *iv, 526 - unsigned long nr_segs 527 - ) 629 + static ssize_t ep_aio(struct kiocb *iocb, 630 + struct kiocb_priv *priv, 631 + struct ep_data *epdata, 632 + char *buf, 633 + size_t len) 528 634 { 529 - struct kiocb_priv *priv; 530 - struct usb_request *req; 531 - ssize_t value; 635 + struct usb_request *req; 636 + ssize_t value; 532 637 533 - priv = kmalloc(sizeof *priv, GFP_KERNEL); 534 - if (!priv) { 535 - value = -ENOMEM; 536 - fail: 537 - kfree(buf); 538 - return value; 539 - } 540 638 iocb->private = priv; 541 639 priv->iocb = iocb; 542 - priv->iv = iv; 543 - priv->nr_segs = nr_segs; 544 - INIT_WORK(&priv->work, ep_user_copy_worker); 545 - 546 - value = get_ready_ep(iocb->ki_filp->f_flags, epdata); 547 - if (unlikely(value < 0)) { 548 - kfree(priv); 549 - goto fail; 550 - } 551 640 552 641 kiocb_set_cancel_fn(iocb, ep_aio_cancel); 553 642 get_ep(epdata); ··· 538 669 * allocate or submit those if the host disconnected. 
 	 */
 	spin_lock_irq(&epdata->dev->lock);
-	if (likely(epdata->ep)) {
-		req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
-		if (likely(req)) {
-			priv->req = req;
-			req->buf = buf;
-			req->length = len;
-			req->complete = ep_aio_complete;
-			req->context = iocb;
-			value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC);
-			if (unlikely(0 != value))
-				usb_ep_free_request(epdata->ep, req);
-		} else
-			value = -EAGAIN;
-	} else
-		value = -ENODEV;
+	value = -ENODEV;
+	if (unlikely(epdata->ep))
+		goto fail;
+
+	req = usb_ep_alloc_request(epdata->ep, GFP_ATOMIC);
+	value = -ENOMEM;
+	if (unlikely(!req))
+		goto fail;
+
+	priv->req = req;
+	req->buf = buf;
+	req->length = len;
+	req->complete = ep_aio_complete;
+	req->context = iocb;
+	value = usb_ep_queue(epdata->ep, req, GFP_ATOMIC);
+	if (unlikely(0 != value)) {
+		usb_ep_free_request(epdata->ep, req);
+		goto fail;
+	}
 	spin_unlock_irq(&epdata->dev->lock);
+	return -EIOCBQUEUED;
 
-	mutex_unlock(&epdata->lock);
-
-	if (unlikely(value)) {
-		kfree(priv);
-		put_ep(epdata);
-	} else
-		value = -EIOCBQUEUED;
+fail:
+	spin_unlock_irq(&epdata->dev->lock);
+	kfree(priv->to_free);
+	kfree(priv);
+	put_ep(epdata);
 	return value;
 }
 
 static ssize_t
-ep_aio_read(struct kiocb *iocb, const struct iovec *iov,
-		unsigned long nr_segs, loff_t o)
+ep_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
-	struct ep_data *epdata = iocb->ki_filp->private_data;
-	char *buf;
+	struct file *file = iocb->ki_filp;
+	struct ep_data *epdata = file->private_data;
+	size_t len = iov_iter_count(to);
+	ssize_t value;
+	char *buf;
 
-	if (unlikely(usb_endpoint_dir_in(&epdata->desc)))
-		return -EINVAL;
+	if ((value = get_ready_ep(file->f_flags, epdata, false)) < 0)
+		return value;
 
-	buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL);
-	if (unlikely(!buf))
+	/* halt any endpoint by doing a "wrong direction" i/o call */
+	if (usb_endpoint_dir_in(&epdata->desc)) {
+		if (usb_endpoint_xfer_isoc(&epdata->desc) ||
+		    !is_sync_kiocb(iocb)) {
+			mutex_unlock(&epdata->lock);
+			return -EINVAL;
+		}
+		DBG (epdata->dev, "%s halt\n", epdata->name);
+		spin_lock_irq(&epdata->dev->lock);
+		if (likely(epdata->ep != NULL))
+			usb_ep_set_halt(epdata->ep);
+		spin_unlock_irq(&epdata->dev->lock);
+		mutex_unlock(&epdata->lock);
+		return -EBADMSG;
+	}
+
+	buf = kmalloc(len, GFP_KERNEL);
+	if (unlikely(!buf)) {
+		mutex_unlock(&epdata->lock);
 		return -ENOMEM;
-
-	return ep_aio_rwtail(iocb, buf, iocb->ki_nbytes, epdata, iov, nr_segs);
+	}
+	if (is_sync_kiocb(iocb)) {
+		value = ep_io(epdata, buf, len);
+		if (value >= 0 && copy_to_iter(buf, value, to))
+			value = -EFAULT;
+	} else {
+		struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
+		value = -ENOMEM;
+		if (!priv)
+			goto fail;
+		priv->to_free = dup_iter(&priv->to, to, GFP_KERNEL);
+		if (!priv->to_free) {
+			kfree(priv);
+			goto fail;
+		}
+		value = ep_aio(iocb, priv, epdata, buf, len);
+		if (value == -EIOCBQUEUED)
+			buf = NULL;
+	}
+fail:
+	kfree(buf);
+	mutex_unlock(&epdata->lock);
+	return value;
 }
 
+static ssize_t ep_config(struct ep_data *, const char *, size_t);
+
 static ssize_t
-ep_aio_write(struct kiocb *iocb, const struct iovec *iov,
-		unsigned long nr_segs, loff_t o)
+ep_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
-	struct ep_data *epdata = iocb->ki_filp->private_data;
-	char *buf;
-	size_t len = 0;
-	int i = 0;
+	struct file *file = iocb->ki_filp;
+	struct ep_data *epdata = file->private_data;
+	size_t len = iov_iter_count(from);
+	bool configured;
+	ssize_t value;
+	char *buf;
 
-	if (unlikely(!usb_endpoint_dir_in(&epdata->desc)))
-		return -EINVAL;
+	if ((value = get_ready_ep(file->f_flags, epdata, true)) < 0)
+		return value;
 
-	buf = kmalloc(iocb->ki_nbytes, GFP_KERNEL);
-	if (unlikely(!buf))
-		return -ENOMEM;
+	configured = epdata->state == STATE_EP_ENABLED;
 
-	for (i=0; i < nr_segs; i++) {
-		if (unlikely(copy_from_user(&buf[len], iov[i].iov_base,
-				iov[i].iov_len) != 0)) {
-			kfree(buf);
-			return -EFAULT;
+	/* halt any endpoint by doing a "wrong direction" i/o call */
+	if (configured && !usb_endpoint_dir_in(&epdata->desc)) {
+		if (usb_endpoint_xfer_isoc(&epdata->desc) ||
+		    !is_sync_kiocb(iocb)) {
+			mutex_unlock(&epdata->lock);
+			return -EINVAL;
 		}
-		len += iov[i].iov_len;
+		DBG (epdata->dev, "%s halt\n", epdata->name);
+		spin_lock_irq(&epdata->dev->lock);
+		if (likely(epdata->ep != NULL))
+			usb_ep_set_halt(epdata->ep);
+		spin_unlock_irq(&epdata->dev->lock);
+		mutex_unlock(&epdata->lock);
+		return -EBADMSG;
 	}
-	return ep_aio_rwtail(iocb, buf, len, epdata, NULL, 0);
+
+	buf = kmalloc(len, GFP_KERNEL);
+	if (unlikely(!buf)) {
+		mutex_unlock(&epdata->lock);
+		return -ENOMEM;
+	}
+
+	if (unlikely(copy_from_iter(buf, len, from) != len)) {
+		value = -EFAULT;
+		goto out;
+	}
+
+	if (unlikely(!configured)) {
+		value = ep_config(epdata, buf, len);
+	} else if (is_sync_kiocb(iocb)) {
+		value = ep_io(epdata, buf, len);
+	} else {
+		struct kiocb_priv *priv = kzalloc(sizeof *priv, GFP_KERNEL);
+		value = -ENOMEM;
+		if (priv) {
+			value = ep_aio(iocb, priv, epdata, buf, len);
+			if (value == -EIOCBQUEUED)
+				buf = NULL;
+		}
+	}
+out:
+	kfree(buf);
+	mutex_unlock(&epdata->lock);
+	return value;
 }
 
 /*----------------------------------------------------------------------*/
···
 /* used after endpoint configuration */
 static const struct file_operations ep_io_operations = {
 	.owner =	THIS_MODULE,
-	.llseek =	no_llseek,
 
-	.read =		ep_read,
-	.write =	ep_write,
-	.unlocked_ioctl = ep_ioctl,
+	.open =		ep_open,
 	.release =	ep_release,
-
-	.aio_read =	ep_aio_read,
-	.aio_write =	ep_aio_write,
+	.llseek =	no_llseek,
+	.read =		new_sync_read,
+	.write =	new_sync_write,
+	.unlocked_ioctl = ep_ioctl,
+	.read_iter =	ep_read_iter,
+	.write_iter =	ep_write_iter,
 };
 
 /* ENDPOINT INITIALIZATION
···
  * speed descriptor, then optional high speed descriptor.
  */
 static ssize_t
-ep_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ep_config (struct ep_data *data, const char *buf, size_t len)
 {
-	struct ep_data *data = fd->private_data;
 	struct usb_ep *ep;
 	u32 tag;
 	int value, length = len;
-
-	value = mutex_lock_interruptible(&data->lock);
-	if (value < 0)
-		return value;
 
 	if (data->state != STATE_EP_READY) {
 		value = -EL2HLT;
···
 		goto fail0;
 
 	/* we might need to change message format someday */
-	if (copy_from_user (&tag, buf, 4)) {
-		goto fail1;
-	}
+	memcpy(&tag, buf, 4);
 	if (tag != 1) {
 		DBG(data->dev, "config %s, bad tag %d\n", data->name, tag);
 		goto fail0;
···
 	 */
 
 	/* full/low speed descriptor, then high speed */
-	if (copy_from_user (&data->desc, buf, USB_DT_ENDPOINT_SIZE)) {
-		goto fail1;
-	}
+	memcpy(&data->desc, buf, USB_DT_ENDPOINT_SIZE);
 	if (data->desc.bLength != USB_DT_ENDPOINT_SIZE
 			|| data->desc.bDescriptorType != USB_DT_ENDPOINT)
 		goto fail0;
 	if (len != USB_DT_ENDPOINT_SIZE) {
 		if (len != 2 * USB_DT_ENDPOINT_SIZE)
 			goto fail0;
-		if (copy_from_user (&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE,
-					USB_DT_ENDPOINT_SIZE)) {
-			goto fail1;
-		}
+		memcpy(&data->hs_desc, buf + USB_DT_ENDPOINT_SIZE,
+			USB_DT_ENDPOINT_SIZE);
 		if (data->hs_desc.bLength != USB_DT_ENDPOINT_SIZE
 				|| data->hs_desc.bDescriptorType
 					!= USB_DT_ENDPOINT) {
···
 	case USB_SPEED_LOW:
 	case USB_SPEED_FULL:
 		ep->desc = &data->desc;
-		value = usb_ep_enable(ep);
-		if (value == 0)
-			data->state = STATE_EP_ENABLED;
 		break;
 	case USB_SPEED_HIGH:
 		/* fails if caller didn't provide that descriptor... */
 		ep->desc = &data->hs_desc;
-		value = usb_ep_enable(ep);
-		if (value == 0)
-			data->state = STATE_EP_ENABLED;
 		break;
 	default:
 		DBG(data->dev, "unconnected, %s init abandoned\n",
 				data->name);
 		value = -EINVAL;
+		goto gone;
 	}
+	value = usb_ep_enable(ep);
 	if (value == 0) {
-		fd->f_op = &ep_io_operations;
+		data->state = STATE_EP_ENABLED;
 		value = length;
 	}
 gone:
···
 		data->desc.bDescriptorType = 0;
 		data->hs_desc.bDescriptorType = 0;
 	}
-	mutex_unlock(&data->lock);
 	return value;
 fail0:
 	value = -EINVAL;
-	goto fail;
-fail1:
-	value = -EFAULT;
 	goto fail;
 }
···
 	mutex_unlock(&data->lock);
 	return value;
 }
-
-/* used before endpoint configuration */
-static const struct file_operations ep_config_operations = {
-	.llseek =	no_llseek,
-
-	.open =		ep_open,
-	.write =	ep_config,
-	.release =	ep_release,
-};
 
 /*----------------------------------------------------------------------*/
···
 	enum ep0_state state;
 
 	spin_lock_irq (&dev->lock);
+	if (dev->state <= STATE_DEV_OPENED) {
+		retval = -EINVAL;
+		goto done;
+	}
 
 	/* report fd mode change before acting on it */
 	if (dev->setup_abort) {
···
 	struct dev_data *dev = fd->private_data;
 	ssize_t retval = -ESRCH;
 
-	spin_lock_irq (&dev->lock);
-
 	/* report fd mode change before acting on it */
 	if (dev->setup_abort) {
 		dev->setup_abort = 0;
···
 	} else
 		DBG (dev, "fail %s, state %d\n", __func__, dev->state);
 
-	spin_unlock_irq (&dev->lock);
 	return retval;
 }
···
 	struct dev_data *dev = fd->private_data;
 	int mask = 0;
 
+	if (dev->state <= STATE_DEV_OPENED)
+		return DEFAULT_POLLMASK;
+
 	poll_wait(fd, &dev->wait, wait);
 
 	spin_lock_irq (&dev->lock);
···
 
 	return ret;
 }
-
-/* used after device configuration */
-static const struct file_operations ep0_io_operations = {
-	.owner =	THIS_MODULE,
-	.llseek =	no_llseek,
-
-	.read =		ep0_read,
-	.write =	ep0_write,
-	.fasync =	ep0_fasync,
-	.poll =		ep0_poll,
-	.unlocked_ioctl = dev_ioctl,
-	.release =	dev_release,
-};
 
 /*----------------------------------------------------------------------*/
···
 		goto enomem1;
 
 	data->dentry = gadgetfs_create_file (dev->sb, data->name,
-			data, &ep_config_operations);
+			data, &ep_io_operations);
 	if (!data->dentry)
 		goto enomem2;
 	list_add_tail (&data->epfiles, &dev->epfiles);
···
 	u32 tag;
 	char *kbuf;
 
+	spin_lock_irq(&dev->lock);
+	if (dev->state > STATE_DEV_OPENED) {
+		value = ep0_write(fd, buf, len, ptr);
+		spin_unlock_irq(&dev->lock);
+		return value;
+	}
+	spin_unlock_irq(&dev->lock);
+
 	if (len < (USB_DT_CONFIG_SIZE + USB_DT_DEVICE_SIZE + 4))
 		return -EINVAL;
···
 		 * on, they can work ... except in cleanup paths that
 		 * kick in after the ep0 descriptor is closed.
 		 */
-		fd->f_op = &ep0_io_operations;
 		value = len;
 	}
 	return value;
···
 	return value;
 }
 
-static const struct file_operations dev_init_operations = {
+static const struct file_operations ep0_operations = {
 	.llseek =	no_llseek,
 
 	.open =		dev_open,
+	.read =		ep0_read,
 	.write =	dev_config,
 	.fasync =	ep0_fasync,
+	.poll =		ep0_poll,
 	.unlocked_ioctl = dev_ioctl,
 	.release =	dev_release,
 };
···
 		goto Enomem;
 
 	dev->sb = sb;
-	dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &dev_init_operations);
+	dev->dentry = gadgetfs_create_file(sb, CHIP, dev, &ep0_operations);
 	if (!dev->dentry) {
 		put_dev(dev);
 		goto Enomem;
+2
drivers/vfio/pci/vfio_pci_intrs.c
···
 			func = vfio_pci_set_err_trigger;
 			break;
 		}
+		break;
 	case VFIO_PCI_REQ_IRQ_INDEX:
 		switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
 		case VFIO_IRQ_SET_ACTION_TRIGGER:
 			func = vfio_pci_set_req_trigger;
 			break;
 		}
+		break;
 	}
 
 	if (!func)
+3
drivers/video/fbdev/amba-clcd.c
···
 
 	len = clcdfb_snprintf_mode(NULL, 0, mode);
 	name = devm_kzalloc(dev, len + 1, GFP_KERNEL);
+	if (!name)
+		return -ENOMEM;
+
 	clcdfb_snprintf_mode(name, len + 1, mode);
 	mode->name = name;
 
+3 -3
drivers/video/fbdev/core/fbmon.c
···
 	int num = 0, i, first = 1;
 	int ver, rev;
 
-	ver = edid[EDID_STRUCT_VERSION];
-	rev = edid[EDID_STRUCT_REVISION];
-
 	mode = kzalloc(50 * sizeof(struct fb_videomode), GFP_KERNEL);
 	if (mode == NULL)
 		return NULL;
···
 		kfree(mode);
 		return NULL;
 	}
+
+	ver = edid[EDID_STRUCT_VERSION];
+	rev = edid[EDID_STRUCT_REVISION];
 
 	*dbsize = 0;
 
+95 -84
drivers/video/fbdev/omap2/dss/display-sysfs.c
···
 #include <video/omapdss.h>
 #include "dss.h"
 
-static struct omap_dss_device *to_dss_device_sysfs(struct device *dev)
+static ssize_t display_name_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = NULL;
-
-	for_each_dss_dev(dssdev) {
-		if (dssdev->dev == dev) {
-			omap_dss_put_device(dssdev);
-			return dssdev;
-		}
-	}
-
-	return NULL;
-}
-
-static ssize_t display_name_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
-{
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
-
 	return snprintf(buf, PAGE_SIZE, "%s\n",
 			dssdev->name ?
 			dssdev->name : "");
 }
 
-static ssize_t display_enabled_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t display_enabled_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
-
 	return snprintf(buf, PAGE_SIZE, "%d\n",
 			omapdss_device_is_enabled(dssdev));
 }
 
-static ssize_t display_enabled_store(struct device *dev,
-		struct device_attribute *attr,
+static ssize_t display_enabled_store(struct omap_dss_device *dssdev,
 		const char *buf, size_t size)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	int r;
 	bool enable;
 
···
 	return size;
 }
 
-static ssize_t display_tear_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t display_tear_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	return snprintf(buf, PAGE_SIZE, "%d\n",
 			dssdev->driver->get_te ?
 			dssdev->driver->get_te(dssdev) : 0);
 }
 
-static ssize_t display_tear_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_tear_store(struct omap_dss_device *dssdev,
+		const char *buf, size_t size)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	int r;
 	bool te;
 
···
 	return size;
 }
 
-static ssize_t display_timings_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t display_timings_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	struct omap_video_timings t;
 
 	if (!dssdev->driver->get_timings)
···
 			t.y_res, t.vfp, t.vbp, t.vsw);
 }
 
-static ssize_t display_timings_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_timings_store(struct omap_dss_device *dssdev,
+		const char *buf, size_t size)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	struct omap_video_timings t = dssdev->panel.timings;
 	int r, found;
 
···
 	return size;
 }
 
-static ssize_t display_rotate_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t display_rotate_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	int rotate;
 	if (!dssdev->driver->get_rotate)
 		return -ENOENT;
···
 	return snprintf(buf, PAGE_SIZE, "%u\n", rotate);
 }
 
-static ssize_t display_rotate_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_rotate_store(struct omap_dss_device *dssdev,
+		const char *buf, size_t size)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	int rot, r;
 
 	if (!dssdev->driver->set_rotate || !dssdev->driver->get_rotate)
···
 	return size;
 }
 
-static ssize_t display_mirror_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t display_mirror_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	int mirror;
 	if (!dssdev->driver->get_mirror)
 		return -ENOENT;
···
 	return snprintf(buf, PAGE_SIZE, "%u\n", mirror);
 }
 
-static ssize_t display_mirror_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_mirror_store(struct omap_dss_device *dssdev,
+		const char *buf, size_t size)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	int r;
 	bool mirror;
 
···
 	return size;
 }
 
-static ssize_t display_wss_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
+static ssize_t display_wss_show(struct omap_dss_device *dssdev, char *buf)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	unsigned int wss;
 
 	if (!dssdev->driver->get_wss)
···
 	return snprintf(buf, PAGE_SIZE, "0x%05x\n", wss);
 }
 
-static ssize_t display_wss_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t size)
+static ssize_t display_wss_store(struct omap_dss_device *dssdev,
+		const char *buf, size_t size)
 {
-	struct omap_dss_device *dssdev = to_dss_device_sysfs(dev);
 	u32 wss;
 	int r;
 
···
 	return size;
 }
 
-static DEVICE_ATTR(display_name, S_IRUGO, display_name_show, NULL);
-static DEVICE_ATTR(enabled, S_IRUGO|S_IWUSR,
+struct display_attribute {
+	struct attribute attr;
+	ssize_t (*show)(struct omap_dss_device *, char *);
+	ssize_t (*store)(struct omap_dss_device *, const char *, size_t);
+};
+
+#define DISPLAY_ATTR(_name, _mode, _show, _store) \
+	struct display_attribute display_attr_##_name = \
+	__ATTR(_name, _mode, _show, _store)
+
+static DISPLAY_ATTR(name, S_IRUGO, display_name_show, NULL);
+static DISPLAY_ATTR(display_name, S_IRUGO, display_name_show, NULL);
+static DISPLAY_ATTR(enabled, S_IRUGO|S_IWUSR,
 		display_enabled_show, display_enabled_store);
-static DEVICE_ATTR(tear_elim, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(tear_elim, S_IRUGO|S_IWUSR,
 		display_tear_show, display_tear_store);
-static DEVICE_ATTR(timings, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(timings, S_IRUGO|S_IWUSR,
 		display_timings_show, display_timings_store);
-static DEVICE_ATTR(rotate, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(rotate, S_IRUGO|S_IWUSR,
 		display_rotate_show, display_rotate_store);
-static DEVICE_ATTR(mirror, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(mirror, S_IRUGO|S_IWUSR,
 		display_mirror_show, display_mirror_store);
-static DEVICE_ATTR(wss, S_IRUGO|S_IWUSR,
+static DISPLAY_ATTR(wss, S_IRUGO|S_IWUSR,
 		display_wss_show, display_wss_store);
 
-static const struct attribute *display_sysfs_attrs[] = {
-	&dev_attr_display_name.attr,
-	&dev_attr_enabled.attr,
-	&dev_attr_tear_elim.attr,
-	&dev_attr_timings.attr,
-	&dev_attr_rotate.attr,
-	&dev_attr_mirror.attr,
-	&dev_attr_wss.attr,
+static struct attribute *display_sysfs_attrs[] = {
+	&display_attr_name.attr,
+	&display_attr_display_name.attr,
+	&display_attr_enabled.attr,
+	&display_attr_tear_elim.attr,
+	&display_attr_timings.attr,
+	&display_attr_rotate.attr,
+	&display_attr_mirror.attr,
+	&display_attr_wss.attr,
 	NULL
+};
+
+static ssize_t display_attr_show(struct kobject *kobj, struct attribute *attr,
+		char *buf)
+{
+	struct omap_dss_device *dssdev;
+	struct display_attribute *display_attr;
+
+	dssdev = container_of(kobj, struct omap_dss_device, kobj);
+	display_attr = container_of(attr, struct display_attribute, attr);
+
+	if (!display_attr->show)
+		return -ENOENT;
+
+	return display_attr->show(dssdev, buf);
+}
+
+static ssize_t display_attr_store(struct kobject *kobj, struct attribute *attr,
+		const char *buf, size_t size)
+{
+	struct omap_dss_device *dssdev;
+	struct display_attribute *display_attr;
+
+	dssdev = container_of(kobj, struct omap_dss_device, kobj);
+	display_attr = container_of(attr, struct display_attribute, attr);
+
+	if (!display_attr->store)
+		return -ENOENT;
+
+	return display_attr->store(dssdev, buf, size);
+}
+
+static const struct sysfs_ops display_sysfs_ops = {
+	.show = display_attr_show,
+	.store = display_attr_store,
+};
+
+static struct kobj_type display_ktype = {
+	.sysfs_ops = &display_sysfs_ops,
+	.default_attrs = display_sysfs_attrs,
 };
 
 int display_init_sysfs(struct platform_device *pdev)
···
 	int r;
 
 	for_each_dss_dev(dssdev) {
-		struct kobject *kobj = &dssdev->dev->kobj;
-
-		r = sysfs_create_files(kobj, display_sysfs_attrs);
+		r = kobject_init_and_add(&dssdev->kobj, &display_ktype,
+			&pdev->dev.kobj, dssdev->alias);
 		if (r) {
 			DSSERR("failed to create sysfs files\n");
-			goto err;
-		}
-
-		r = sysfs_create_link(&pdev->dev.kobj, kobj, dssdev->alias);
-		if (r) {
-			sysfs_remove_files(kobj, display_sysfs_attrs);
-
-			DSSERR("failed to create sysfs display link\n");
+			omap_dss_put_device(dssdev);
 			goto err;
 		}
 	}
···
 	struct omap_dss_device *dssdev = NULL;
 
 	for_each_dss_dev(dssdev) {
-		sysfs_remove_link(&pdev->dev.kobj, dssdev->alias);
-		sysfs_remove_files(&dssdev->dev->kobj,
-				display_sysfs_attrs);
+		if (kobject_name(&dssdev->kobj) == NULL)
+			continue;
+
+		kobject_del(&dssdev->kobj);
+		kobject_put(&dssdev->kobj);
+
+		memset(&dssdev->kobj, 0, sizeof(dssdev->kobj));
 	}
 }
+12 -6
drivers/xen/events/events_base.c
···
 	pirq_query_unmask(irq);
 
 	rc = set_evtchn_to_irq(evtchn, irq);
-	if (rc != 0) {
-		pr_err("irq%d: Failed to set port to irq mapping (%d)\n",
-		       irq, rc);
-		xen_evtchn_close(evtchn);
-		return 0;
-	}
+	if (rc)
+		goto err;
+
 	bind_evtchn_to_cpu(evtchn, 0);
 	info->evtchn = evtchn;
+
+	rc = xen_evtchn_port_setup(info);
+	if (rc)
+		goto err;
 
 out:
 	unmask_evtchn(evtchn);
 	eoi_pirq(irq_get_irq_data(irq));
 
+	return 0;
+
+err:
+	pr_err("irq%d: Failed to set port to irq mapping (%d)\n", irq, rc);
+	xen_evtchn_close(evtchn);
 	return 0;
 }
+1 -1
drivers/xen/xen-pciback/conf_space.c
···
 #include "conf_space.h"
 #include "conf_space_quirks.h"
 
-static bool permissive;
+bool permissive;
 module_param(permissive, bool, 0644);
 
 /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word,
+2
drivers/xen/xen-pciback/conf_space.h
···
 	void *data;
 };
 
+extern bool permissive;
+
 #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset)
 
 /* Add fields to a device - the add_fields macro expects to get a pointer to
+47 -12
drivers/xen/xen-pciback/conf_space_header.c
···
 #include "pciback.h"
 #include "conf_space.h"
 
+struct pci_cmd_info {
+	u16 val;
+};
+
 struct pci_bar_info {
 	u32 val;
 	u32 len_val;
···
 #define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO))
 #define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER)
 
+/* Bits guests are allowed to control in permissive mode. */
+#define PCI_COMMAND_GUEST (PCI_COMMAND_MASTER|PCI_COMMAND_SPECIAL| \
+			   PCI_COMMAND_INVALIDATE|PCI_COMMAND_VGA_PALETTE| \
+			   PCI_COMMAND_WAIT|PCI_COMMAND_FAST_BACK)
+
+static void *command_init(struct pci_dev *dev, int offset)
+{
+	struct pci_cmd_info *cmd = kmalloc(sizeof(*cmd), GFP_KERNEL);
+	int err;
+
+	if (!cmd)
+		return ERR_PTR(-ENOMEM);
+
+	err = pci_read_config_word(dev, PCI_COMMAND, &cmd->val);
+	if (err) {
+		kfree(cmd);
+		return ERR_PTR(err);
+	}
+
+	return cmd;
+}
+
 static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data)
 {
-	int i;
-	int ret;
+	int ret = pci_read_config_word(dev, offset, value);
+	const struct pci_cmd_info *cmd = data;
 
-	ret = xen_pcibk_read_config_word(dev, offset, value, data);
-	if (!pci_is_enabled(dev))
-		return ret;
-
-	for (i = 0; i < PCI_ROM_RESOURCE; i++) {
-		if (dev->resource[i].flags & IORESOURCE_IO)
-			*value |= PCI_COMMAND_IO;
-		if (dev->resource[i].flags & IORESOURCE_MEM)
-			*value |= PCI_COMMAND_MEMORY;
-	}
+	*value &= PCI_COMMAND_GUEST;
+	*value |= cmd->val & ~PCI_COMMAND_GUEST;
 
 	return ret;
 }
···
 {
 	struct xen_pcibk_dev_data *dev_data;
 	int err;
+	u16 val;
+	struct pci_cmd_info *cmd = data;
 
 	dev_data = pci_get_drvdata(dev);
 	if (!pci_is_enabled(dev) && is_enable_cmd(value)) {
···
 			value &= ~PCI_COMMAND_INVALIDATE;
 		}
 	}
+
+	cmd->val = value;
+
+	if (!permissive && (!dev_data || !dev_data->permissive))
+		return 0;
+
+	/* Only allow the guest to control certain bits. */
+	err = pci_read_config_word(dev, offset, &val);
+	if (err || val == value)
+		return err;
+
+	value &= PCI_COMMAND_GUEST;
+	value |= val & ~PCI_COMMAND_GUEST;
 
 	return pci_write_config_word(dev, offset, value);
 }
···
 	{
 	 .offset    = PCI_COMMAND,
 	 .size      = 2,
+	 .init      = command_init,
+	 .release   = bar_release,
 	 .u.w.read  = command_read,
 	 .u.w.write = command_write,
 	},
+1 -1
fs/locks.c
···
 			break;
 		}
 	}
-	trace_generic_delete_lease(inode, fl);
+	trace_generic_delete_lease(inode, victim);
 	if (victim)
 		error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
 	spin_unlock(&ctx->flc_lock);
+4 -3
fs/nilfs2/segment.c
···
 					     struct the_nilfs *nilfs)
 {
 	struct nilfs_inode_info *ii, *n;
+	int during_mount = !(sci->sc_super->s_flags & MS_ACTIVE);
 	int defer_iput = false;
 
 	spin_lock(&nilfs->ns_inode_lock);
···
 		brelse(ii->i_bh);
 		ii->i_bh = NULL;
 		list_del_init(&ii->i_dirty);
-		if (!ii->vfs_inode.i_nlink) {
+		if (!ii->vfs_inode.i_nlink || during_mount) {
 			/*
-			 * Defer calling iput() to avoid a deadlock
-			 * over I_SYNC flag for inodes with i_nlink == 0
+			 * Defer calling iput() to avoid deadlocks if
+			 * i_nlink == 0 or mount is not yet finished.
 			 */
 			list_add_tail(&ii->i_dirty, &sci->sc_iput_queue);
 			defer_iput = true;
+2 -1
fs/notify/fanotify/fanotify.c
···
 	    !(marks_mask & FS_ISDIR & ~marks_ignored_mask))
 		return false;
 
-	if (event_mask & marks_mask & ~marks_ignored_mask)
+	if (event_mask & FAN_ALL_OUTGOING_EVENTS & marks_mask &
+	    ~marks_ignored_mask)
 		return true;
 
 	return false;
+1 -1
fs/ocfs2/ocfs2.h
···
 
 static inline int ocfs2_supports_append_dio(struct ocfs2_super *osb)
 {
-	if (osb->s_feature_ro_compat & OCFS2_FEATURE_RO_COMPAT_APPEND_DIO)
+	if (osb->s_feature_incompat & OCFS2_FEATURE_INCOMPAT_APPEND_DIO)
 		return 1;
 	return 0;
 }
+8 -7
fs/ocfs2/ocfs2_fs.h
···
 				 | OCFS2_FEATURE_INCOMPAT_INDEXED_DIRS \
 				 | OCFS2_FEATURE_INCOMPAT_REFCOUNT_TREE \
 				 | OCFS2_FEATURE_INCOMPAT_DISCONTIG_BG \
-				 | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO)
+				 | OCFS2_FEATURE_INCOMPAT_CLUSTERINFO \
+				 | OCFS2_FEATURE_INCOMPAT_APPEND_DIO)
 #define OCFS2_FEATURE_RO_COMPAT_SUPP	(OCFS2_FEATURE_RO_COMPAT_UNWRITTEN \
 					 | OCFS2_FEATURE_RO_COMPAT_USRQUOTA \
-					 | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA \
-					 | OCFS2_FEATURE_RO_COMPAT_APPEND_DIO)
+					 | OCFS2_FEATURE_RO_COMPAT_GRPQUOTA)
 
 /*
  * Heartbeat-only devices are missing journals and other files.  The
···
 #define OCFS2_FEATURE_INCOMPAT_CLUSTERINFO	0x4000
 
 /*
+ * Append Direct IO support
+ */
+#define OCFS2_FEATURE_INCOMPAT_APPEND_DIO	0x8000
+
+/*
  * backup superblock flag is used to indicate that this volume
  * has backup superblocks.
  */
···
 #define OCFS2_FEATURE_RO_COMPAT_USRQUOTA	0x0002
 #define OCFS2_FEATURE_RO_COMPAT_GRPQUOTA	0x0004
 
-/*
- * Append Direct IO support
- */
-#define OCFS2_FEATURE_RO_COMPAT_APPEND_DIO	0x0008
 
 /* The byte offset of the first backup block will be 1G.
  * The following will be 4G, 16G, 64G, 256G and 1T.
+25 -24
include/drm/i915_pciids.h
···
 #define INTEL_VLV_D_IDS(info) \
 	INTEL_VGA_DEVICE(0x0155, info)
 
-#define _INTEL_BDW_M(gt, id, info) \
-	INTEL_VGA_DEVICE((((gt) - 1) << 4) | (id), info)
-#define _INTEL_BDW_D(gt, id, info) \
-	INTEL_VGA_DEVICE((((gt) - 1) << 4) | (id), info)
-
-#define _INTEL_BDW_M_IDS(gt, info) \
-	_INTEL_BDW_M(gt, 0x1602, info), /* Halo */ \
-	_INTEL_BDW_M(gt, 0x1606, info), /* ULT */ \
-	_INTEL_BDW_M(gt, 0x160B, info), /* ULT */ \
-	_INTEL_BDW_M(gt, 0x160E, info) /* ULX */
-
-#define _INTEL_BDW_D_IDS(gt, info) \
-	_INTEL_BDW_D(gt, 0x160A, info), /* Server */ \
-	_INTEL_BDW_D(gt, 0x160D, info) /* Workstation */
-
-#define INTEL_BDW_GT12M_IDS(info) \
-	_INTEL_BDW_M_IDS(1, info), \
-	_INTEL_BDW_M_IDS(2, info)
+#define INTEL_BDW_GT12M_IDS(info) \
+	INTEL_VGA_DEVICE(0x1602, info), /* GT1 ULT */ \
+	INTEL_VGA_DEVICE(0x1606, info), /* GT1 ULT */ \
+	INTEL_VGA_DEVICE(0x160B, info), /* GT1 Iris */ \
+	INTEL_VGA_DEVICE(0x160E, info), /* GT1 ULX */ \
+	INTEL_VGA_DEVICE(0x1612, info), /* GT2 Halo */ \
+	INTEL_VGA_DEVICE(0x1616, info), /* GT2 ULT */ \
+	INTEL_VGA_DEVICE(0x161B, info), /* GT2 ULT */ \
+	INTEL_VGA_DEVICE(0x161E, info) /* GT2 ULX */
 
 #define INTEL_BDW_GT12D_IDS(info) \
-	_INTEL_BDW_D_IDS(1, info), \
-	_INTEL_BDW_D_IDS(2, info)
+	INTEL_VGA_DEVICE(0x160A, info), /* GT1 Server */ \
+	INTEL_VGA_DEVICE(0x160D, info), /* GT1 Workstation */ \
+	INTEL_VGA_DEVICE(0x161A, info), /* GT2 Server */ \
+	INTEL_VGA_DEVICE(0x161D, info) /* GT2 Workstation */
 
 #define INTEL_BDW_GT3M_IDS(info) \
-	_INTEL_BDW_M_IDS(3, info)
+	INTEL_VGA_DEVICE(0x1622, info), /* ULT */ \
+	INTEL_VGA_DEVICE(0x1626, info), /* ULT */ \
+	INTEL_VGA_DEVICE(0x162B, info), /* Iris */ \
+	INTEL_VGA_DEVICE(0x162E, info) /* ULX */
 
 #define INTEL_BDW_GT3D_IDS(info) \
-	_INTEL_BDW_D_IDS(3, info)
+	INTEL_VGA_DEVICE(0x162A, info), /* Server */ \
+	INTEL_VGA_DEVICE(0x162D, info) /* Workstation */
 
 #define INTEL_BDW_RSVDM_IDS(info) \
-	_INTEL_BDW_M_IDS(4, info)
+	INTEL_VGA_DEVICE(0x1632, info), /* ULT */ \
+	INTEL_VGA_DEVICE(0x1636, info), /* ULT */ \
+	INTEL_VGA_DEVICE(0x163B, info), /* Iris */ \
+	INTEL_VGA_DEVICE(0x163E, info) /* ULX */
 
 #define INTEL_BDW_RSVDD_IDS(info) \
-	_INTEL_BDW_D_IDS(4, info)
+	INTEL_VGA_DEVICE(0x163A, info), /* Server */ \
+	INTEL_VGA_DEVICE(0x163D, info) /* Workstation */
 
 #define INTEL_BDW_M_IDS(info) \
 	INTEL_BDW_GT12M_IDS(info), \
+2 -1
include/dt-bindings/pinctrl/am33xx.h
··· 13 13 14 14 #define PULL_DISABLE (1 << 3) 15 15 #define INPUT_EN (1 << 5) 16 - #define SLEWCTRL_FAST (1 << 6) 16 + #define SLEWCTRL_SLOW (1 << 6) 17 + #define SLEWCTRL_FAST 0 17 18 18 19 /* update macro depending on INPUT_EN and PULL_ENA */ 19 20 #undef PIN_OUTPUT
+2 -1
include/dt-bindings/pinctrl/am43xx.h
··· 18 18 #define PULL_DISABLE (1 << 16) 19 19 #define PULL_UP (1 << 17) 20 20 #define INPUT_EN (1 << 18) 21 - #define SLEWCTRL_FAST (1 << 19) 21 + #define SLEWCTRL_SLOW (1 << 19) 22 + #define SLEWCTRL_FAST 0 22 23 #define DS0_PULL_UP_DOWN_EN (1 << 27) 23 24 24 25 #define PIN_OUTPUT (PULL_DISABLE)
+18
include/linux/clk.h
··· 125 125 */ 126 126 int clk_get_phase(struct clk *clk); 127 127 128 + /** 129 + * clk_is_match - check if two clk's point to the same hardware clock 130 + * @p: clk compared against q 131 + * @q: clk compared against p 132 + * 133 + * Returns true if the two struct clk pointers both point to the same hardware 134 + * clock node. Put differently, returns true if struct clk *p and struct clk *q 135 + * share the same struct clk_core object. 136 + * 137 + * Returns false otherwise. Note that two NULL clks are treated as matching. 138 + */ 139 + bool clk_is_match(const struct clk *p, const struct clk *q); 140 + 128 141 #else 129 142 130 143 static inline long clk_get_accuracy(struct clk *clk) ··· 153 140 static inline long clk_get_phase(struct clk *clk) 154 141 { 155 142 return -ENOTSUPP; 143 + } 144 + 145 + static inline bool clk_is_match(const struct clk *p, const struct clk *q) 146 + { 147 + return p == q; 156 148 } 157 149 158 150 #endif
+5
include/linux/irqchip/arm-gic-v3.h
··· 166 166 167 167 #define GITS_TRANSLATER 0x10040 168 168 169 + #define GITS_CTLR_ENABLE (1U << 0) 170 + #define GITS_CTLR_QUIESCENT (1U << 31) 171 + 172 + #define GITS_TYPER_DEVBITS_SHIFT 13 173 + #define GITS_TYPER_DEVBITS(r) ((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1) 169 174 #define GITS_TYPER_PTA (1UL << 19) 170 175 171 176 #define GITS_CBASER_VALID (1UL << 63)
+3 -6
include/linux/kasan.h
··· 5 5 6 6 struct kmem_cache; 7 7 struct page; 8 + struct vm_struct; 8 9 9 10 #ifdef CONFIG_KASAN 10 11 ··· 50 49 void kasan_slab_alloc(struct kmem_cache *s, void *object); 51 50 void kasan_slab_free(struct kmem_cache *s, void *object); 52 51 53 - #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 54 - 55 52 int kasan_module_alloc(void *addr, size_t size); 56 - void kasan_module_free(void *addr); 53 + void kasan_free_shadow(const struct vm_struct *vm); 57 54 58 55 #else /* CONFIG_KASAN */ 59 - 60 - #define MODULE_ALIGN 1 61 56 62 57 static inline void kasan_unpoison_shadow(const void *address, size_t size) {} 63 58 ··· 79 82 static inline void kasan_slab_free(struct kmem_cache *s, void *object) {} 80 83 81 84 static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } 82 - static inline void kasan_module_free(void *addr) {} 85 + static inline void kasan_free_shadow(const struct vm_struct *vm) {} 83 86 84 87 #endif /* CONFIG_KASAN */ 85 88
+8
include/linux/moduleloader.h
··· 84 84 85 85 /* Any cleanup before freeing mod->module_init */ 86 86 void module_arch_freeing_init(struct module *mod); 87 + 88 + #ifdef CONFIG_KASAN 89 + #include <linux/kasan.h> 90 + #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) 91 + #else 92 + #define MODULE_ALIGN PAGE_SIZE 93 + #endif 94 + 87 95 #endif
+1 -1
include/linux/of_platform.h
··· 84 84 static inline void of_platform_depopulate(struct device *parent) { } 85 85 #endif 86 86 87 - #ifdef CONFIG_OF_DYNAMIC 87 + #if defined(CONFIG_OF_DYNAMIC) && defined(CONFIG_OF_ADDRESS) 88 88 extern void of_platform_register_reconfig_notifier(void); 89 89 #else 90 90 static inline void of_platform_register_reconfig_notifier(void) { }
+1 -1
include/linux/spi/spi.h
··· 649 649 * sequence completes. On some systems, many such sequences can execute as 650 650 * as single programmed DMA transfer. On all systems, these messages are 651 651 * queued, and might complete after transactions to other devices. Messages 652 - * sent to a given spi_device are alway executed in FIFO order. 652 + * sent to a given spi_device are always executed in FIFO order. 653 653 * 654 654 * The code that submits an spi_message (and its spi_transfers) 655 655 * to the lower layers is responsible for managing its memory.
+2
include/linux/uio.h
··· 98 98 size_t maxsize, size_t *start); 99 99 int iov_iter_npages(const struct iov_iter *i, int maxpages); 100 100 101 + const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags); 102 + 101 103 static inline size_t iov_iter_count(struct iov_iter *i) 102 104 { 103 105 return i->count;
+1
include/linux/vmalloc.h
··· 17 17 #define VM_VPAGES 0x00000010 /* buffer for pages was vmalloc'ed */ 18 18 #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */ 19 19 #define VM_NO_GUARD 0x00000040 /* don't add guard page */ 20 + #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */ 20 21 /* bits [20..32] reserved for arch specific ioremap internals */ 21 22 22 23 /*
+2 -1
include/linux/workqueue.h
··· 70 70 /* data contains off-queue information when !WORK_STRUCT_PWQ */ 71 71 WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT, 72 72 73 - WORK_OFFQ_CANCELING = (1 << WORK_OFFQ_FLAG_BASE), 73 + __WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE, 74 + WORK_OFFQ_CANCELING = (1 << __WORK_OFFQ_CANCELING), 74 75 75 76 /* 76 77 * When a work item is off queue, its high bits point to the last

+19 -3
include/net/netfilter/nf_tables.h
··· 119 119 const struct nft_data *data, 120 120 enum nft_data_types type); 121 121 122 + 123 + /** 124 + * struct nft_userdata - user defined data associated with an object 125 + * 126 + * @len: length of the data 127 + * @data: content 128 + * 129 + * The presence of user data is indicated in an object specific fashion, 130 + * so a length of zero can't occur and the value "len" indicates data 131 + * of length len + 1. 132 + */ 133 + struct nft_userdata { 134 + u8 len; 135 + unsigned char data[0]; 136 + }; 137 + 122 138 /** 123 139 * struct nft_set_elem - generic representation of set elements 124 140 * ··· 396 380 * @handle: rule handle 397 381 * @genmask: generation mask 398 382 * @dlen: length of expression data 399 - * @ulen: length of user data (used for comments) 383 + * @udata: user data is appended to the rule 400 384 * @data: expression data 401 385 */ 402 386 struct nft_rule { ··· 404 388 u64 handle:42, 405 389 genmask:2, 406 390 dlen:12, 407 - ulen:8; 391 + udata:1; 408 392 unsigned char data[] 409 393 __attribute__((aligned(__alignof__(struct nft_expr)))); 410 394 }; ··· 492 476 return (struct nft_expr *)&rule->data[rule->dlen]; 493 477 } 494 478 495 - static inline void *nft_userdata(const struct nft_rule *rule) 479 + static inline struct nft_userdata *nft_userdata(const struct nft_rule *rule) 496 480 { 497 481 return (void *)&rule->data[rule->dlen]; 498 482 }
+1 -1
include/soc/at91/at91sam9_ddrsdr.h
··· 92 92 #define AT91_DDRSDRC_UPD_MR (3 << 20) /* Update load mode register and extended mode register */ 93 93 94 94 #define AT91_DDRSDRC_MDR 0x20 /* Memory Device Register */ 95 - #define AT91_DDRSDRC_MD (3 << 0) /* Memory Device Type */ 95 + #define AT91_DDRSDRC_MD (7 << 0) /* Memory Device Type */ 96 96 #define AT91_DDRSDRC_MD_SDR 0 97 97 #define AT91_DDRSDRC_MD_LOW_POWER_SDR 1 98 98 #define AT91_DDRSDRC_MD_LOW_POWER_DDR 3
+1 -1
include/uapi/drm/drm_fourcc.h
··· 151 151 /* add more to the end as needed */ 152 152 153 153 #define fourcc_mod_code(vendor, val) \ 154 - ((((u64)DRM_FORMAT_MOD_VENDOR_## vendor) << 56) | (val & 0x00ffffffffffffffL)) 154 + ((((u64)DRM_FORMAT_MOD_VENDOR_## vendor) << 56) | (val & 0x00ffffffffffffffULL)) 155 155 156 156 /* 157 157 * Format Modifier tokens:
+3
include/uapi/drm/i915_drm.h
··· 347 347 #define I915_PARAM_HAS_COHERENT_PHYS_GTT 29 348 348 #define I915_PARAM_MMAP_VERSION 30 349 349 #define I915_PARAM_HAS_BSD2 31 350 + #define I915_PARAM_REVISION 32 351 + #define I915_PARAM_SUBSLICE_TOTAL 33 352 + #define I915_PARAM_EU_TOTAL 34 350 353 351 354 typedef struct drm_i915_getparam { 352 355 int param;
+1
include/video/omapdss.h
··· 689 689 }; 690 690 691 691 struct omap_dss_device { 692 + struct kobject kobj; 692 693 struct device *dev; 693 694 694 695 struct module *owner;
+2 -2
include/xen/xenbus.h
··· 114 114 const char *mod_name); 115 115 116 116 #define xenbus_register_frontend(drv) \ 117 - __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME); 117 + __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME) 118 118 #define xenbus_register_backend(drv) \ 119 - __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME); 119 + __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME) 120 120 121 121 void xenbus_unregister_driver(struct xenbus_driver *drv); 122 122
+4 -5
kernel/cpuset.c
··· 548 548 549 549 rcu_read_lock(); 550 550 cpuset_for_each_descendant_pre(cp, pos_css, root_cs) { 551 - if (cp == root_cs) 552 - continue; 553 - 554 551 /* skip the whole subtree if @cp doesn't have any CPU */ 555 552 if (cpumask_empty(cp->cpus_allowed)) { 556 553 pos_css = css_rightmost_descendant(pos_css); ··· 870 873 * If it becomes empty, inherit the effective mask of the 871 874 * parent, which is guaranteed to have some CPUs. 872 875 */ 873 - if (cpumask_empty(new_cpus)) 876 + if (cgroup_on_dfl(cp->css.cgroup) && cpumask_empty(new_cpus)) 874 877 cpumask_copy(new_cpus, parent->effective_cpus); 875 878 876 879 /* Skip the whole subtree if the cpumask remains the same. */ ··· 1126 1129 * If it becomes empty, inherit the effective mask of the 1127 1130 * parent, which is guaranteed to have some MEMs. 1128 1131 */ 1129 - if (nodes_empty(*new_mems)) 1132 + if (cgroup_on_dfl(cp->css.cgroup) && nodes_empty(*new_mems)) 1130 1133 *new_mems = parent->effective_mems; 1131 1134 1132 1135 /* Skip the whole subtree if the nodemask remains the same. */ ··· 1976 1979 1977 1980 spin_lock_irq(&callback_lock); 1978 1981 cs->mems_allowed = parent->mems_allowed; 1982 + cs->effective_mems = parent->mems_allowed; 1979 1983 cpumask_copy(cs->cpus_allowed, parent->cpus_allowed); 1984 + cpumask_copy(cs->effective_cpus, parent->cpus_allowed); 1980 1985 spin_unlock_irq(&callback_lock); 1981 1986 out_unlock: 1982 1987 mutex_unlock(&cpuset_mutex);
-2
kernel/module.c
··· 56 56 #include <linux/async.h> 57 57 #include <linux/percpu.h> 58 58 #include <linux/kmemleak.h> 59 - #include <linux/kasan.h> 60 59 #include <linux/jump_label.h> 61 60 #include <linux/pfn.h> 62 61 #include <linux/bsearch.h> ··· 1813 1814 void __weak module_memfree(void *module_region) 1814 1815 { 1815 1816 vfree(module_region); 1816 - kasan_module_free(module_region); 1817 1817 } 1818 1818 1819 1819 void __weak module_arch_cleanup(struct module *mod)
+30 -10
kernel/trace/ftrace.c
··· 1059 1059 1060 1060 static struct pid * const ftrace_swapper_pid = &init_struct_pid; 1061 1061 1062 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1063 + static int ftrace_graph_active; 1064 + #else 1065 + # define ftrace_graph_active 0 1066 + #endif 1067 + 1062 1068 #ifdef CONFIG_DYNAMIC_FTRACE 1063 1069 1064 1070 static struct ftrace_ops *removed_ops; ··· 2047 2041 if (!ftrace_rec_count(rec)) 2048 2042 rec->flags = 0; 2049 2043 else 2050 - /* Just disable the record (keep REGS state) */ 2051 - rec->flags &= ~FTRACE_FL_ENABLED; 2044 + /* 2045 + * Just disable the record, but keep the ops TRAMP 2046 + * and REGS states. The _EN flags must be disabled though. 2047 + */ 2048 + rec->flags &= ~(FTRACE_FL_ENABLED | FTRACE_FL_TRAMP_EN | 2049 + FTRACE_FL_REGS_EN); 2052 2050 } 2053 2051 2054 2052 return FTRACE_UPDATE_MAKE_NOP; ··· 2698 2688 2699 2689 static void ftrace_startup_sysctl(void) 2700 2690 { 2691 + int command; 2692 + 2701 2693 if (unlikely(ftrace_disabled)) 2702 2694 return; 2703 2695 2704 2696 /* Force update next time */ 2705 2697 saved_ftrace_func = NULL; 2706 2698 /* ftrace_start_up is true if we want ftrace running */ 2707 - if (ftrace_start_up) 2708 - ftrace_run_update_code(FTRACE_UPDATE_CALLS); 2699 + if (ftrace_start_up) { 2700 + command = FTRACE_UPDATE_CALLS; 2701 + if (ftrace_graph_active) 2702 + command |= FTRACE_START_FUNC_RET; 2703 + ftrace_startup_enable(command); 2704 + } 2709 2705 } 2710 2706 2711 2707 static void ftrace_shutdown_sysctl(void) 2712 2708 { 2709 + int command; 2710 + 2713 2711 if (unlikely(ftrace_disabled)) 2714 2712 return; 2715 2713 2716 2714 /* ftrace_start_up is true if ftrace is running */ 2717 - if (ftrace_start_up) 2718 - ftrace_run_update_code(FTRACE_DISABLE_CALLS); 2715 + if (ftrace_start_up) { 2716 + command = FTRACE_DISABLE_CALLS; 2717 + if (ftrace_graph_active) 2718 + command |= FTRACE_STOP_FUNC_RET; 2719 + ftrace_run_update_code(command); 2720 + } 2719 2721 } 2720 2722 2721 2723 static cycle_t ftrace_update_time; ··· 5580 5558 5581 5559 if (ftrace_enabled) { 5582 5560 5583 - ftrace_startup_sysctl(); 5584 - 5585 5561 /* we are starting ftrace again */ 5586 5562 if (ftrace_ops_list != &ftrace_list_end) 5587 5563 update_ftrace_function(); 5564 + 5565 + ftrace_startup_sysctl(); 5588 5566 5589 5567 } else { 5590 5568 /* stopping ftrace calls (just send to ftrace_stub) */ ··· 5611 5589 #endif 5612 5590 ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash) 5613 5591 }; 5614 - 5615 - static int ftrace_graph_active; 5616 5592 5617 5593 int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace) 5618 5594 {
+52 -4
kernel/workqueue.c
··· 2728 2728 } 2729 2729 EXPORT_SYMBOL_GPL(flush_work); 2730 2730 2731 + struct cwt_wait { 2732 + wait_queue_t wait; 2733 + struct work_struct *work; 2734 + }; 2735 + 2736 + static int cwt_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key) 2737 + { 2738 + struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait); 2739 + 2740 + if (cwait->work != key) 2741 + return 0; 2742 + return autoremove_wake_function(wait, mode, sync, key); 2743 + } 2744 + 2731 2745 static bool __cancel_work_timer(struct work_struct *work, bool is_dwork) 2732 2746 { 2747 + static DECLARE_WAIT_QUEUE_HEAD(cancel_waitq); 2733 2748 unsigned long flags; 2734 2749 int ret; 2735 2750 2736 2751 do { 2737 2752 ret = try_to_grab_pending(work, is_dwork, &flags); 2738 2753 /* 2739 - * If someone else is canceling, wait for the same event it 2740 - * would be waiting for before retrying. 2754 + * If someone else is already canceling, wait for it to 2755 + * finish. flush_work() doesn't work for PREEMPT_NONE 2756 + * because we may get scheduled between @work's completion 2757 + * and the other canceling task resuming and clearing 2758 + * CANCELING - flush_work() will return false immediately 2759 + * as @work is no longer busy, try_to_grab_pending() will 2760 + * return -ENOENT as @work is still being canceled and the 2761 + * other canceling task won't be able to clear CANCELING as 2762 + * we're hogging the CPU. 2763 + * 2764 + * Let's wait for completion using a waitqueue. As this 2765 + * may lead to the thundering herd problem, use a custom 2766 + * wake function which matches @work along with exclusive 2767 + * wait and wakeup. 2741 2768 */ 2742 - if (unlikely(ret == -ENOENT)) 2743 - flush_work(work); 2769 + if (unlikely(ret == -ENOENT)) { 2770 + struct cwt_wait cwait; 2771 + 2772 + init_wait(&cwait.wait); 2773 + cwait.wait.func = cwt_wakefn; 2774 + cwait.work = work; 2775 + 2776 + prepare_to_wait_exclusive(&cancel_waitq, &cwait.wait, 2777 + TASK_UNINTERRUPTIBLE); 2778 + if (work_is_canceling(work)) 2779 + schedule(); 2780 + finish_wait(&cancel_waitq, &cwait.wait); 2781 + } 2744 2782 } while (unlikely(ret < 0)); 2745 2783 2746 2784 /* tell other tasks trying to grab @work to back off */ ··· 2787 2749 2788 2750 flush_work(work); 2789 2751 clear_work_data(work); 2752 + 2753 + /* 2754 + * Paired with prepare_to_wait() above so that either 2755 + * waitqueue_active() is visible here or !work_is_canceling() is 2756 + * visible there. 2757 + */ 2758 + smp_mb(); 2759 + if (waitqueue_active(&cancel_waitq)) 2760 + __wake_up(&cancel_waitq, TASK_NORMAL, 1, work); 2761 + 2790 2762 return ret; 2791 2763 } 2792 2764
+1 -1
lib/Makefile
··· 24 24 25 25 obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \ 26 26 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \ 27 - gcd.o lcm.o list_sort.o uuid.o flex_array.o clz_ctz.o \ 27 + gcd.o lcm.o list_sort.o uuid.o flex_array.o iov_iter.o clz_ctz.o \ 28 28 bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \ 29 29 percpu-refcount.o percpu_ida.o rhashtable.o reciprocal_div.o 30 30 obj-y += string_helpers.o
+2 -2
lib/seq_buf.c
··· 61 61 62 62 if (s->len < s->size) { 63 63 len = vsnprintf(s->buffer + s->len, s->size - s->len, fmt, args); 64 - if (seq_buf_can_fit(s, len)) { 64 + if (s->len + len < s->size) { 65 65 s->len += len; 66 66 return 0; 67 67 } ··· 118 118 119 119 if (s->len < s->size) { 120 120 ret = bstr_printf(s->buffer + s->len, len, fmt, binary); 121 - if (seq_buf_can_fit(s, ret)) { 121 + if (s->len + ret < s->size) { 122 122 s->len += ret; 123 123 return 0; 124 124 }
+1 -1
mm/Makefile
··· 21 21 mm_init.o mmu_context.o percpu.o slab_common.o \ 22 22 compaction.o vmacache.o \ 23 23 interval_tree.o list_lru.o workingset.o \ 24 - iov_iter.o debug.o $(mmu-y) 24 + debug.o $(mmu-y) 25 25 26 26 obj-y += init-mm.o 27 27
+7 -5
mm/cma.c
··· 64 64 return (1UL << (align_order - cma->order_per_bit)) - 1; 65 65 } 66 66 67 + /* 68 + * Find a PFN aligned to the specified order and return an offset represented in 69 + * order_per_bits. 70 + */ 67 71 static unsigned long cma_bitmap_aligned_offset(struct cma *cma, int align_order) 68 72 { 69 - unsigned int alignment; 70 - 71 73 if (align_order <= cma->order_per_bit) 72 74 return 0; 73 - alignment = 1UL << (align_order - cma->order_per_bit); 74 - return ALIGN(cma->base_pfn, alignment) - 75 - (cma->base_pfn >> cma->order_per_bit); 75 + 76 + return (ALIGN(cma->base_pfn, (1UL << align_order)) 77 + - cma->base_pfn) >> cma->order_per_bit; 76 78 } 77 79 78 80 static unsigned long cma_bitmap_maxno(struct cma *cma)
+8 -3
mm/huge_memory.c
··· 1295 1295 * Avoid grouping on DSO/COW pages in specific and RO pages 1296 1296 * in general, RO pages shouldn't hurt as much anyway since 1297 1297 * they can be in shared cache state. 1298 + * 1299 + * FIXME! This checks "pmd_dirty()" as an approximation of 1300 + * "is this a read-only page", since checking "pmd_write()" 1301 + * is even more broken. We haven't actually turned this into 1302 + * a writable page, so pmd_write() will always be false. 1298 1303 */ 1299 - if (!pmd_write(pmd)) 1304 + if (!pmd_dirty(pmd)) 1300 1305 flags |= TNF_NO_GROUP; 1301 1306 1302 1307 /* ··· 1487 1482 1488 1483 if (__pmd_trans_huge_lock(pmd, vma, &ptl) == 1) { 1489 1484 pmd_t entry; 1485 + ret = 1; 1490 1486 1491 1487 /* 1492 1488 * Avoid trapping faults against the zero page. The read-only ··· 1496 1490 */ 1497 1491 if (prot_numa && is_huge_zero_pmd(*pmd)) { 1498 1492 spin_unlock(ptl); 1499 - return 0; 1493 + return ret; 1500 1494 } 1501 1495 1502 1496 if (!prot_numa || !pmd_protnone(*pmd)) { 1503 - ret = 1; 1504 1497 entry = pmdp_get_and_clear_notify(mm, addr, pmd); 1505 1498 entry = pmd_modify(entry, newprot); 1506 1499 ret = HPAGE_PMD_NR;
+3 -1
mm/hugetlb.c
··· 917 917 __SetPageHead(page); 918 918 __ClearPageReserved(page); 919 919 for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) { 920 - __SetPageTail(p); 921 920 /* 922 921 * For gigantic hugepages allocated through bootmem at 923 922 * boot, it's safer to be consistent with the not-gigantic ··· 932 933 __ClearPageReserved(p); 933 934 set_page_count(p, 0); 934 935 p->first_page = page; 936 + /* Make sure p->first_page is always valid for PageTail() */ 937 + smp_wmb(); 938 + __SetPageTail(p); 935 939 } 936 940 } 937 941
+15
mm/iov_iter.c lib/iov_iter.c
··· 751 751 return npages; 752 752 } 753 753 EXPORT_SYMBOL(iov_iter_npages); 754 + 755 + const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags) 756 + { 757 + *new = *old; 758 + if (new->type & ITER_BVEC) 759 + return new->bvec = kmemdup(new->bvec, 760 + new->nr_segs * sizeof(struct bio_vec), 761 + flags); 762 + else 763 + /* iovec and kvec have identical layout */ 764 + return new->iov = kmemdup(new->iov, 765 + new->nr_segs * sizeof(struct iovec), 766 + flags); 767 + } 768 + EXPORT_SYMBOL(dup_iter);
+11 -3
mm/kasan/kasan.c
··· 29 29 #include <linux/stacktrace.h> 30 30 #include <linux/string.h> 31 31 #include <linux/types.h> 32 + #include <linux/vmalloc.h> 32 33 #include <linux/kasan.h> 33 34 34 35 #include "kasan.h" ··· 415 414 GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO, 416 415 PAGE_KERNEL, VM_NO_GUARD, NUMA_NO_NODE, 417 416 __builtin_return_address(0)); 418 - return ret ? 0 : -ENOMEM; 417 + 418 + if (ret) { 419 + find_vm_area(addr)->flags |= VM_KASAN; 420 + return 0; 421 + } 422 + 423 + return -ENOMEM; 419 424 } 420 425 421 - void kasan_module_free(void *addr) 426 + void kasan_free_shadow(const struct vm_struct *vm) 422 427 { 423 - vfree(kasan_mem_to_shadow(addr)); 428 + if (vm->flags & VM_KASAN) 429 + vfree(kasan_mem_to_shadow(vm->addr)); 424 430 } 425 431 426 432 static void register_global(struct kasan_global *global)
+3 -1
mm/memcontrol.c
··· 5232 5232 * on for the root memcg is enough. 5233 5233 */ 5234 5234 if (cgroup_on_dfl(root_css->cgroup)) 5235 - mem_cgroup_from_css(root_css)->use_hierarchy = true; 5235 + root_mem_cgroup->use_hierarchy = true; 5236 + else 5237 + root_mem_cgroup->use_hierarchy = false; 5236 5238 } 5237 5239 5238 5240 static u64 memory_current_read(struct cgroup_subsys_state *css,
+6 -1
mm/memory.c
··· 3072 3072 * Avoid grouping on DSO/COW pages in specific and RO pages 3073 3073 * in general, RO pages shouldn't hurt as much anyway since 3074 3074 * they can be in shared cache state. 3075 + * 3076 + * FIXME! This checks "pmd_dirty()" as an approximation of 3077 + * "is this a read-only page", since checking "pmd_write()" 3078 + * is even more broken. We haven't actually turned this into 3079 + * a writable page, so pmd_write() will always be false. 3075 3080 */ 3076 - if (!pte_write(pte)) 3081 + if (!pte_dirty(pte)) 3077 3082 flags |= TNF_NO_GROUP; 3078 3083 3079 3084 /*
+2 -2
mm/mlock.c
··· 26 26 27 27 int can_do_mlock(void) 28 28 { 29 - if (capable(CAP_IPC_LOCK)) 30 - return 1; 31 29 if (rlimit(RLIMIT_MEMLOCK) != 0) 30 + return 1; 31 + if (capable(CAP_IPC_LOCK)) 32 32 return 1; 33 33 return 0; 34 34 }
+1
mm/nommu.c
··· 62 62 EXPORT_SYMBOL(high_memory); 63 63 struct page *mem_map; 64 64 unsigned long max_mapnr; 65 + EXPORT_SYMBOL(max_mapnr); 65 66 unsigned long highest_memmap_pfn; 66 67 struct percpu_counter vm_committed_as; 67 68 int sysctl_overcommit_memory = OVERCOMMIT_GUESS; /* heuristic overcommit */
+2 -1
mm/page_alloc.c
··· 2373 2373 goto out; 2374 2374 } 2375 2375 /* Exhausted what can be done so it's blamo time */ 2376 - if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false)) 2376 + if (out_of_memory(ac->zonelist, gfp_mask, order, ac->nodemask, false) 2377 + || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) 2377 2378 *did_some_progress = 1; 2378 2379 out: 2379 2380 oom_zonelist_unlock(ac->zonelist, gfp_mask);
+1
mm/vmalloc.c
··· 1418 1418 spin_unlock(&vmap_area_lock); 1419 1419 1420 1420 vmap_debug_free_range(va->va_start, va->va_end); 1421 + kasan_free_shadow(vm); 1421 1422 free_unmap_vmap_area(va); 1422 1423 vm->size -= PAGE_SIZE; 1423 1424
+3
net/can/af_can.c
··· 259 259 goto inval_skb; 260 260 } 261 261 262 + skb->ip_summed = CHECKSUM_UNNECESSARY; 263 + 264 + skb_reset_mac_header(skb); 262 265 skb_reset_network_header(skb); 263 266 skb_reset_transport_header(skb); 264 267
+7 -4
net/ipv4/ip_fragment.c
··· 659 659 struct sk_buff *ip_check_defrag(struct sk_buff *skb, u32 user) 660 660 { 661 661 struct iphdr iph; 662 + int netoff; 662 663 u32 len; 663 664 664 665 if (skb->protocol != htons(ETH_P_IP)) 665 666 return skb; 666 667 667 - if (skb_copy_bits(skb, 0, &iph, sizeof(iph)) < 0) 668 + netoff = skb_network_offset(skb); 669 + 670 + if (skb_copy_bits(skb, netoff, &iph, sizeof(iph)) < 0) 668 671 return skb; 669 672 670 673 if (iph.ihl < 5 || iph.version != 4) 671 674 return skb; 672 675 673 676 len = ntohs(iph.tot_len); 674 - if (skb->len < len || len < (iph.ihl * 4)) 677 + if (skb->len < netoff + len || len < (iph.ihl * 4)) 675 678 return skb; 676 679 677 680 if (ip_is_fragment(&iph)) { 678 681 skb = skb_share_check(skb, GFP_ATOMIC); 679 682 if (skb) { 680 - if (!pskb_may_pull(skb, iph.ihl*4)) 683 + if (!pskb_may_pull(skb, netoff + iph.ihl * 4)) 681 684 return skb; 682 - if (pskb_trim_rcsum(skb, len)) 685 + if (pskb_trim_rcsum(skb, netoff + len)) 683 686 return skb; 684 687 memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); 685 688 if (ip_defrag(skb, user))
+23 -10
net/ipv4/ip_sockglue.c
··· 432 432 kfree_skb(skb); 433 433 } 434 434 435 - static bool ipv4_pktinfo_prepare_errqueue(const struct sock *sk, 436 - const struct sk_buff *skb, 437 - int ee_origin) 435 + /* IPv4 supports cmsg on all icmp errors and some timestamps 436 + * 437 + * Timestamp code paths do not initialize the fields expected by cmsg: 438 + * the PKTINFO fields in skb->cb[]. Fill those in here. 439 + */ 440 + static bool ipv4_datagram_support_cmsg(const struct sock *sk, 441 + struct sk_buff *skb, 442 + int ee_origin) 438 443 { 439 - struct in_pktinfo *info = PKTINFO_SKB_CB(skb); 444 + struct in_pktinfo *info; 440 445 441 - if ((ee_origin != SO_EE_ORIGIN_TIMESTAMPING) || 442 - (!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) || 446 + if (ee_origin == SO_EE_ORIGIN_ICMP) 447 + return true; 448 + 449 + if (ee_origin == SO_EE_ORIGIN_LOCAL) 450 + return false; 451 + 452 + /* Support IP_PKTINFO on tstamp packets if requested, to correlate 453 + * timestamp with egress dev. Not possible for packets without dev 454 + * or without payload (SOF_TIMESTAMPING_OPT_TSONLY). 455 + */ 456 + if ((!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) || 443 457 (!skb->dev)) 444 458 return false; 445 459 460 + info = PKTINFO_SKB_CB(skb); 446 461 info->ipi_spec_dst.s_addr = ip_hdr(skb)->saddr; 447 462 info->ipi_ifindex = skb->dev->ifindex; 448 463 return true; ··· 498 483 499 484 serr = SKB_EXT_ERR(skb); 500 485 501 - if (sin && skb->len) { 486 + if (sin && serr->port) { 502 487 sin->sin_family = AF_INET; 503 488 sin->sin_addr.s_addr = *(__be32 *)(skb_network_header(skb) + 504 489 serr->addr_offset); ··· 511 496 sin = &errhdr.offender; 512 497 memset(sin, 0, sizeof(*sin)); 513 498 514 - if (skb->len && 515 - (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP || 516 - ipv4_pktinfo_prepare_errqueue(sk, skb, serr->ee.ee_origin))) { 499 + if (ipv4_datagram_support_cmsg(sk, skb, serr->ee.ee_origin)) { 517 500 sin->sin_family = AF_INET; 518 501 sin->sin_addr.s_addr = ip_hdr(skb)->saddr; 519 502 if (inet_sk(sk)->cmsg_flags)
+10 -2
net/ipv4/ping.c
··· 259 259 kgid_t low, high; 260 260 int ret = 0; 261 261 262 + if (sk->sk_family == AF_INET6) 263 + sk->sk_ipv6only = 1; 264 + 262 265 inet_get_ping_group_range_net(net, &low, &high); 263 266 if (gid_lte(low, group) && gid_lte(group, high)) 264 267 return 0; ··· 308 305 if (addr_len < sizeof(*addr)) 309 306 return -EINVAL; 310 307 308 + if (addr->sin_family != AF_INET && 309 + !(addr->sin_family == AF_UNSPEC && 310 + addr->sin_addr.s_addr == htonl(INADDR_ANY))) 311 + return -EAFNOSUPPORT; 312 + 311 313 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n", 312 314 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port)); 313 315 ··· 338 330 return -EINVAL; 339 331 340 332 if (addr->sin6_family != AF_INET6) 341 - return -EINVAL; 333 + return -EAFNOSUPPORT; 342 334 343 335 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI6c,port=%d)\n", 344 336 sk, addr->sin6_addr.s6_addr, ntohs(addr->sin6_port)); ··· 724 716 if (msg->msg_namelen < sizeof(*usin)) 725 717 return -EINVAL; 726 718 if (usin->sin_family != AF_INET) 727 - return -EINVAL; 719 + return -EAFNOSUPPORT; 728 720 daddr = usin->sin_addr.s_addr; 729 721 /* no remote port */ 730 722 } else {
+3 -7
net/ipv4/tcp.c
··· 835 835 int large_allowed) 836 836 { 837 837 struct tcp_sock *tp = tcp_sk(sk); 838 - u32 new_size_goal, size_goal, hlen; 838 + u32 new_size_goal, size_goal; 839 839 840 840 if (!large_allowed || !sk_can_gso(sk)) 841 841 return mss_now; 842 842 843 - /* Maybe we should/could use sk->sk_prot->max_header here ? */ 844 - hlen = inet_csk(sk)->icsk_af_ops->net_header_len + 845 - inet_csk(sk)->icsk_ext_hdr_len + 846 - tp->tcp_header_len; 847 - 848 - new_size_goal = sk->sk_gso_max_size - 1 - hlen; 843 + /* Note : tcp_tso_autosize() will eventually split this later */ 844 + new_size_goal = sk->sk_gso_max_size - 1 - MAX_TCP_HEADER; 849 845 new_size_goal = tcp_bound_to_half_wnd(tp, new_size_goal); 850 846 851 847 /* We try hard to avoid divides here */
+28 -11
net/ipv6/datagram.c
··· 325 325 kfree_skb(skb); 326 326 } 327 327 328 - static void ip6_datagram_prepare_pktinfo_errqueue(struct sk_buff *skb) 328 + /* IPv6 supports cmsg on all origins aside from SO_EE_ORIGIN_LOCAL. 329 + * 330 + * At one point, excluding local errors was a quick test to identify icmp/icmp6 331 + * errors. This is no longer true, but the test remained, so the v6 stack, 332 + * unlike v4, also honors cmsg requests on all wifi and timestamp errors. 333 + * 334 + * Timestamp code paths do not initialize the fields expected by cmsg: 335 + * the PKTINFO fields in skb->cb[]. Fill those in here. 336 + */ 337 + static bool ip6_datagram_support_cmsg(struct sk_buff *skb, 338 + struct sock_exterr_skb *serr) 329 339 { 330 - int ifindex = skb->dev ? skb->dev->ifindex : -1; 340 + if (serr->ee.ee_origin == SO_EE_ORIGIN_ICMP || 341 + serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6) 342 + return true; 343 + 344 + if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL) 345 + return false; 346 + 347 + if (!skb->dev) 348 + return false; 331 349 332 350 if (skb->protocol == htons(ETH_P_IPV6)) 333 - IP6CB(skb)->iif = ifindex; 351 + IP6CB(skb)->iif = skb->dev->ifindex; 334 352 else 335 - PKTINFO_SKB_CB(skb)->ipi_ifindex = ifindex; 353 + PKTINFO_SKB_CB(skb)->ipi_ifindex = skb->dev->ifindex; 354 + 355 + return true; 336 356 } 337 357 338 358 /* ··· 389 369 390 370 serr = SKB_EXT_ERR(skb); 391 371 392 - if (sin && skb->len) { 372 + if (sin && serr->port) { 393 373 const unsigned char *nh = skb_network_header(skb); 394 374 sin->sin6_family = AF_INET6; 395 375 sin->sin6_flowinfo = 0; ··· 414 394 memcpy(&errhdr.ee, &serr->ee, sizeof(struct sock_extended_err)); 415 395 sin = &errhdr.offender; 416 396 memset(sin, 0, sizeof(*sin)); 417 - if (serr->ee.ee_origin != SO_EE_ORIGIN_LOCAL && skb->len) { 397 + 398 + if (ip6_datagram_support_cmsg(skb, serr)) { 418 399 sin->sin6_family = AF_INET6; 419 - if (np->rxopt.all) { 420 - if (serr->ee.ee_origin != SO_EE_ORIGIN_ICMP && 421 - serr->ee.ee_origin != SO_EE_ORIGIN_ICMP6) 422 - ip6_datagram_prepare_pktinfo_errqueue(skb); 400 + if (np->rxopt.all) 423 401 ip6_datagram_recv_common_ctl(sk, msg, skb); 424 - } 425 402 if (skb->protocol == htons(ETH_P_IPV6)) { 426 403 sin->sin6_addr = ipv6_hdr(skb)->saddr; 427 404 if (np->rxopt.all)
+3 -2
net/ipv6/ping.c
··· 102 102 103 103 if (msg->msg_name) { 104 104 DECLARE_SOCKADDR(struct sockaddr_in6 *, u, msg->msg_name); 105 - if (msg->msg_namelen < sizeof(struct sockaddr_in6) || 106 - u->sin6_family != AF_INET6) { 105 + if (msg->msg_namelen < sizeof(*u)) 107 106 return -EINVAL; 107 + if (u->sin6_family != AF_INET6) { 108 + return -EAFNOSUPPORT; 108 109 } 109 110 if (sk->sk_bound_dev_if && 110 111 sk->sk_bound_dev_if != u->sin6_scope_id) {
+3
net/netfilter/ipvs/ip_vs_sync.c
··· 896 896 IP_VS_DBG(2, "BACKUP, add new conn. failed\n"); 897 897 return; 898 898 } 899 + if (!(flags & IP_VS_CONN_F_TEMPLATE)) 900 + kfree(param->pe_data); 899 901 } 900 902 901 903 if (opt) ··· 1171 1169 (opt_flags & IPVS_OPT_F_SEQ_DATA ? &opt : NULL) 1172 1170 ); 1173 1171 #endif 1172 + ip_vs_pe_put(param.pe); 1174 1173 return 0; 1175 1174 /* Error exit */ 1176 1175 out:
+36 -25
net/netfilter/nf_tables_api.c
··· 227 227 228 228 static inline void nft_rule_clear(struct net *net, struct nft_rule *rule) 229 229 { 230 - rule->genmask = 0; 230 + rule->genmask &= ~(1 << gencursor_next(net)); 231 231 } 232 232 233 233 static int
··· 1711 1711 } 1712 1712 nla_nest_end(skb, list); 1713 1713 1714 - if (rule->ulen && 1715 - nla_put(skb, NFTA_RULE_USERDATA, rule->ulen, nft_userdata(rule))) 1716 - goto nla_put_failure; 1714 + if (rule->udata) { 1715 + struct nft_userdata *udata = nft_userdata(rule); 1716 + if (nla_put(skb, NFTA_RULE_USERDATA, udata->len + 1, 1717 + udata->data) < 0) 1718 + goto nla_put_failure; 1719 + } 1717 1720 1718 1721 nlmsg_end(skb, nlh); 1719 1722 return 0;
··· 1899 1896 struct nft_table *table; 1900 1897 struct nft_chain *chain; 1901 1898 struct nft_rule *rule, *old_rule = NULL; 1899 + struct nft_userdata *udata; 1902 1900 struct nft_trans *trans = NULL; 1903 1901 struct nft_expr *expr; 1904 1902 struct nft_ctx ctx; 1905 1903 struct nlattr *tmp; 1906 - unsigned int size, i, n, ulen = 0; 1904 + unsigned int size, i, n, ulen = 0, usize = 0; 1907 1905 int err, rem; 1908 1906 bool create; 1909 1907 u64 handle, pos_handle;
··· 1972 1968 n++; 1973 1969 } 1974 1970 } 1971 + /* Check for overflow of dlen field */ 1972 + err = -EFBIG; 1973 + if (size >= 1 << 12) 1974 + goto err1; 1975 1975 1976 - if (nla[NFTA_RULE_USERDATA]) 1976 + if (nla[NFTA_RULE_USERDATA]) { 1977 1977 ulen = nla_len(nla[NFTA_RULE_USERDATA]); 1978 + if (ulen > 0) 1979 + usize = sizeof(struct nft_userdata) + ulen; 1980 + } 1978 1981 1979 1982 err = -ENOMEM; 1980 - rule = kzalloc(sizeof(*rule) + size + ulen, GFP_KERNEL); 1983 + rule = kzalloc(sizeof(*rule) + size + usize, GFP_KERNEL); 1981 1984 if (rule == NULL) 1982 1985 goto err1; 1983 1986
··· 1992 1981 1993 1982 rule->handle = handle; 1994 1983 rule->dlen = size; 1995 - rule->ulen = ulen; 1984 + rule->udata = ulen ? 1 : 0; 1996 1985 1997 - if (ulen) 1998 - nla_memcpy(nft_userdata(rule), nla[NFTA_RULE_USERDATA], ulen); 1986 + if (ulen) { 1987 + udata = nft_userdata(rule); 1988 + udata->len = ulen - 1; 1989 + nla_memcpy(udata->data, nla[NFTA_RULE_USERDATA], ulen); 1990 + } 1999 1991 2000 1992 expr = nft_expr_first(rule); 2001 1993 for (i = 0; i < n; i++) {
··· 2045 2031 2046 2032 err3: 2047 2033 list_del_rcu(&rule->list); 2048 - if (trans) { 2049 - list_del_rcu(&nft_trans_rule(trans)->list); 2050 - nft_rule_clear(net, nft_trans_rule(trans)); 2051 - nft_trans_destroy(trans); 2052 - chain->use++; 2053 - } 2054 2034 err2: 2055 2035 nf_tables_rule_destroy(&ctx, rule); 2056 2036 err1:
··· 3620 3612 &te->elem, 3621 3613 NFT_MSG_DELSETELEM, 0); 3622 3614 te->set->ops->get(te->set, &te->elem); 3623 - te->set->ops->remove(te->set, &te->elem); 3624 3615 nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3625 - if (te->elem.flags & NFT_SET_MAP) { 3626 - nft_data_uninit(&te->elem.data, 3627 - te->set->dtype); 3628 - } 3616 + if (te->set->flags & NFT_SET_MAP && 3617 + !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END)) 3618 + nft_data_uninit(&te->elem.data, te->set->dtype); 3619 + te->set->ops->remove(te->set, &te->elem); 3629 3620 nft_trans_destroy(trans); 3630 3621 break; 3631 3622 }
··· 3665 3658 { 3666 3659 struct net *net = sock_net(skb->sk); 3667 3660 struct nft_trans *trans, *next; 3668 - struct nft_set *set; 3661 + struct nft_trans_elem *te; 3669 3662 3670 3663 list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) {
··· 3726 3719 break; 3727 3720 case NFT_MSG_NEWSETELEM: 3728 3721 nft_trans_elem_set(trans)->nelems--; 3729 - set = nft_trans_elem_set(trans); 3730 - set->ops->get(set, &nft_trans_elem(trans)); 3731 - set->ops->remove(set, &nft_trans_elem(trans)); 3722 + te = (struct nft_trans_elem *)trans->data; 3723 + te->set->ops->get(te->set, &te->elem); 3724 + nft_data_uninit(&te->elem.key, NFT_DATA_VALUE); 3725 + if (te->set->flags & NFT_SET_MAP && 3726 + !(te->elem.flags & NFT_SET_ELEM_INTERVAL_END)) 3727 + nft_data_uninit(&te->elem.data, te->set->dtype); 3728 + te->set->ops->remove(te->set, &te->elem); 3732 3729 nft_trans_destroy(trans); 3733 3730 break; 3734 3731 case NFT_MSG_DELSETELEM:
+7 -7
net/netfilter/nft_compat.c
··· 123 123 nft_target_set_tgchk_param(struct xt_tgchk_param *par, 124 124 const struct nft_ctx *ctx, 125 125 struct xt_target *target, void *info, 126 - union nft_entry *entry, u8 proto, bool inv) 126 + union nft_entry *entry, u16 proto, bool inv) 127 127 { 128 128 par->net = ctx->net; 129 129 par->table = ctx->table->name;
··· 137 137 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 138 138 break; 139 139 case NFPROTO_BRIDGE: 140 - entry->ebt.ethproto = proto; 140 + entry->ebt.ethproto = (__force __be16)proto; 141 141 entry->ebt.invflags = inv ? EBT_IPROTO : 0; 142 142 break; 143 143 }
··· 171 171 [NFTA_RULE_COMPAT_FLAGS] = { .type = NLA_U32 }, 172 172 }; 173 173 174 - static int nft_parse_compat(const struct nlattr *attr, u8 *proto, bool *inv) 174 + static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv) 175 175 { 176 176 struct nlattr *tb[NFTA_RULE_COMPAT_MAX+1]; 177 177 u32 flags;
··· 203 203 struct xt_target *target = expr->ops->data; 204 204 struct xt_tgchk_param par; 205 205 size_t size = XT_ALIGN(nla_len(tb[NFTA_TARGET_INFO])); 206 - u8 proto = 0; 206 + u16 proto = 0; 207 207 bool inv = false; 208 208 union nft_entry e = {}; 209 209 int ret;
··· 334 334 static void 335 335 nft_match_set_mtchk_param(struct xt_mtchk_param *par, const struct nft_ctx *ctx, 336 336 struct xt_match *match, void *info, 337 - union nft_entry *entry, u8 proto, bool inv) 337 + union nft_entry *entry, u16 proto, bool inv) 338 338 { 339 339 par->net = ctx->net; 340 340 par->table = ctx->table->name;
··· 348 348 entry->e6.ipv6.invflags = inv ? IP6T_INV_PROTO : 0; 349 349 break; 350 350 case NFPROTO_BRIDGE: 351 - entry->ebt.ethproto = proto; 351 + entry->ebt.ethproto = (__force __be16)proto; 352 352 entry->ebt.invflags = inv ? EBT_IPROTO : 0; 353 353 break; 354 354 }
··· 385 385 struct xt_match *match = expr->ops->data; 386 386 struct xt_mtchk_param par; 387 387 size_t size = XT_ALIGN(nla_len(tb[NFTA_MATCH_INFO])); 388 - u8 proto = 0; 388 + u16 proto = 0; 389 389 bool inv = false; 390 390 union nft_entry e = {}; 391 391 int ret;
+14 -8
net/packet/af_packet.c
··· 3123 3123 return 0; 3124 3124 } 3125 3125 3126 - static void packet_dev_mclist(struct net_device *dev, struct packet_mclist *i, int what) 3126 + static void packet_dev_mclist_delete(struct net_device *dev, 3127 + struct packet_mclist **mlp) 3127 3128 { 3128 - for ( ; i; i = i->next) { 3129 - if (i->ifindex == dev->ifindex) 3130 - packet_dev_mc(dev, i, what); 3129 + struct packet_mclist *ml; 3130 + 3131 + while ((ml = *mlp) != NULL) { 3132 + if (ml->ifindex == dev->ifindex) { 3133 + packet_dev_mc(dev, ml, -1); 3134 + *mlp = ml->next; 3135 + kfree(ml); 3136 + } else 3137 + mlp = &ml->next; 3131 3138 } 3132 3139 } 3133 3140 ··· 3211 3204 packet_dev_mc(dev, ml, -1); 3212 3205 kfree(ml); 3213 3206 } 3214 - rtnl_unlock(); 3215 - return 0; 3207 + break; 3216 3208 } 3217 3209 } 3218 3210 rtnl_unlock(); 3219 - return -EADDRNOTAVAIL; 3211 + return 0; 3220 3212 } 3221 3213 3222 3214 static void packet_flush_mclist(struct sock *sk) ··· 3565 3559 switch (msg) { 3566 3560 case NETDEV_UNREGISTER: 3567 3561 if (po->mclist) 3568 - packet_dev_mclist(dev, po->mclist, -1); 3562 + packet_dev_mclist_delete(dev, &po->mclist); 3569 3563 /* fallthrough */ 3570 3564 3571 3565 case NETDEV_DOWN:
+2 -2
net/rxrpc/ar-error.c
··· 42 42 _leave("UDP socket errqueue empty"); 43 43 return; 44 44 } 45 - if (!skb->len) { 45 + serr = SKB_EXT_ERR(skb); 46 + if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) { 46 47 _leave("UDP empty message"); 47 48 kfree_skb(skb); 48 49 return; ··· 51 50 52 51 rxrpc_new_skb(skb); 53 52 54 - serr = SKB_EXT_ERR(skb); 55 53 addr = *(__be32 *)(skb_network_header(skb) + serr->addr_offset); 56 54 port = serr->port; 57 55
+4 -3
net/tipc/link.c
··· 464 464 /* Clean up all queues, except inputq: */ 465 465 __skb_queue_purge(&l_ptr->outqueue); 466 466 __skb_queue_purge(&l_ptr->deferred_queue); 467 - skb_queue_splice_init(&l_ptr->wakeupq, &l_ptr->inputq); 468 - if (!skb_queue_empty(&l_ptr->inputq)) 467 + if (!owner->inputq) 468 + owner->inputq = &l_ptr->inputq; 469 + skb_queue_splice_init(&l_ptr->wakeupq, owner->inputq); 470 + if (!skb_queue_empty(owner->inputq)) 469 471 owner->action_flags |= TIPC_MSG_EVT; 470 - owner->inputq = &l_ptr->inputq; 471 472 l_ptr->next_out = NULL; 472 473 l_ptr->unacked_window = 0; 473 474 l_ptr->checkpoint = 1;
+4
sound/core/control.c
··· 1170 1170 1171 1171 if (info->count < 1) 1172 1172 return -EINVAL; 1173 + if (!*info->id.name) 1174 + return -EINVAL; 1175 + if (strnlen(info->id.name, sizeof(info->id.name)) >= sizeof(info->id.name)) 1176 + return -EINVAL; 1173 1177 access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE : 1174 1178 (info->access & (SNDRV_CTL_ELEM_ACCESS_READWRITE| 1175 1179 SNDRV_CTL_ELEM_ACCESS_INACTIVE|
+9 -9
sound/firewire/dice/dice-interface.h
··· 299 299 #define RX_ISOCHRONOUS 0x008 300 300 301 301 /* 302 + * Index of first quadlet to be interpreted; read/write. If > 0, that many 303 + * quadlets at the beginning of each data block will be ignored, and all the 304 + * audio and MIDI quadlets will follow. 305 + */ 306 + #define RX_SEQ_START 0x00c 307 + 308 + /* 302 309 * The number of audio channels; read-only. There will be one quadlet per 303 310 * channel. 304 311 */ 305 - #define RX_NUMBER_AUDIO 0x00c 312 + #define RX_NUMBER_AUDIO 0x010 306 313 307 314 /* 308 315 * The number of MIDI ports, 0-8; read-only. If > 0, there will be one 309 316 * additional quadlet in each data block, following the audio quadlets. 310 317 */ 311 - #define RX_NUMBER_MIDI 0x010 312 - 313 - /* 314 - * Index of first quadlet to be interpreted; read/write. If > 0, that many 315 - * quadlets at the beginning of each data block will be ignored, and all the 316 - * audio and MIDI quadlets will follow. 317 - */ 318 - #define RX_SEQ_START 0x014 318 + #define RX_NUMBER_MIDI 0x014 319 319 320 320 /* 321 321 * Names of all audio channels; read-only. Quadlets are byte-swapped. Names
+2 -2
sound/firewire/dice/dice-proc.c
··· 99 99 } tx; 100 100 struct { 101 101 u32 iso; 102 + u32 seq_start; 102 103 u32 number_audio; 103 104 u32 number_midi; 104 - u32 seq_start; 105 105 char names[RX_NAMES_SIZE]; 106 106 u32 ac3_caps; 107 107 u32 ac3_enable; ··· 204 204 break; 205 205 snd_iprintf(buffer, "rx %u:\n", stream); 206 206 snd_iprintf(buffer, " iso channel: %d\n", (int)buf.rx.iso); 207 + snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start); 207 208 snd_iprintf(buffer, " audio channels: %u\n", 208 209 buf.rx.number_audio); 209 210 snd_iprintf(buffer, " midi ports: %u\n", buf.rx.number_midi); 210 - snd_iprintf(buffer, " sequence start: %u\n", buf.rx.seq_start); 211 211 if (quadlets >= 68) { 212 212 dice_proc_fixup_string(buf.rx.names, RX_NAMES_SIZE); 213 213 snd_iprintf(buffer, " names: %s\n", buf.rx.names);
+1 -2
sound/firewire/iso-resources.c
··· 26 26 int fw_iso_resources_init(struct fw_iso_resources *r, struct fw_unit *unit) 27 27 { 28 28 r->channels_mask = ~0uLL; 29 - r->unit = fw_unit_get(unit); 29 + r->unit = unit; 30 30 mutex_init(&r->mutex); 31 31 r->allocated = false; 32 32 ··· 42 42 { 43 43 WARN_ON(r->allocated); 44 44 mutex_destroy(&r->mutex); 45 - fw_unit_put(r->unit); 46 45 } 47 46 EXPORT_SYMBOL(fw_iso_resources_destroy); 48 47
+1 -1
sound/pci/hda/hda_controller.c
··· 1164 1164 } 1165 1165 } 1166 1166 1167 - if (!bus->no_response_fallback) 1167 + if (bus->no_response_fallback) 1168 1168 return -1; 1169 1169 1170 1170 if (!chip->polling_mode && chip->poll_count < 2) {
+22 -8
sound/pci/hda/hda_generic.c
··· 692 692 { 693 693 unsigned int caps = query_amp_caps(codec, nid, dir); 694 694 int val = get_amp_val_to_activate(codec, nid, dir, caps, false); 695 - snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); 695 + 696 + if (get_wcaps(codec, nid) & AC_WCAP_STEREO) 697 + snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); 698 + else 699 + snd_hda_codec_amp_init(codec, nid, 0, dir, idx, 0xff, val); 700 + } 701 + 702 + /* update the amp, doing in stereo or mono depending on NID */ 703 + static int update_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx, 704 + unsigned int mask, unsigned int val) 705 + { 706 + if (get_wcaps(codec, nid) & AC_WCAP_STEREO) 707 + return snd_hda_codec_amp_stereo(codec, nid, dir, idx, 708 + mask, val); 709 + else 710 + return snd_hda_codec_amp_update(codec, nid, 0, dir, idx, 711 + mask, val); 696 712 } 697 713 698 714 /* calculate amp value mask we can modify; ··· 748 732 return; 749 733 750 734 val &= mask; 751 - snd_hda_codec_amp_stereo(codec, nid, dir, idx, mask, val); 735 + update_amp(codec, nid, dir, idx, mask, val); 752 736 } 753 737 754 738 static void activate_amp_out(struct hda_codec *codec, struct nid_path *path, ··· 4440 4424 has_amp = nid_has_mute(codec, mix, HDA_INPUT); 4441 4425 for (i = 0; i < nums; i++) { 4442 4426 if (has_amp) 4443 - snd_hda_codec_amp_stereo(codec, mix, 4444 - HDA_INPUT, i, 4445 - 0xff, HDA_AMP_MUTE); 4427 + update_amp(codec, mix, HDA_INPUT, i, 4428 + 0xff, HDA_AMP_MUTE); 4446 4429 else if (nid_has_volume(codec, conn[i], HDA_OUTPUT)) 4447 - snd_hda_codec_amp_stereo(codec, conn[i], 4448 - HDA_OUTPUT, 0, 4449 - 0xff, HDA_AMP_MUTE); 4430 + update_amp(codec, conn[i], HDA_OUTPUT, 0, 4431 + 0xff, HDA_AMP_MUTE); 4450 4432 } 4451 4433 } 4452 4434
+2
sound/pci/hda/patch_cirrus.c
··· 393 393 SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81), 394 394 SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122), 395 395 SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101), 396 + SND_PCI_QUIRK(0x106b, 0x5600, "MacBookAir 5,2", CS420X_MBP81), 396 397 SND_PCI_QUIRK(0x106b, 0x5b00, "MacBookAir 4,2", CS420X_MBA42), 397 398 SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), 398 399 {} /* terminator */ ··· 585 584 return -ENOMEM; 586 585 587 586 spec->gen.automute_hook = cs_automute; 587 + codec->single_adc_amp = 1; 588 588 589 589 snd_hda_pick_fixup(codec, cs420x_models, cs420x_fixup_tbl, 590 590 cs420x_fixups);
+11
sound/pci/hda/patch_conexant.c
··· 223 223 CXT_PINCFG_LENOVO_TP410, 224 224 CXT_PINCFG_LEMOTE_A1004, 225 225 CXT_PINCFG_LEMOTE_A1205, 226 + CXT_PINCFG_COMPAQ_CQ60, 226 227 CXT_FIXUP_STEREO_DMIC, 227 228 CXT_FIXUP_INC_MIC_BOOST, 228 229 CXT_FIXUP_HEADPHONE_MIC_PIN, ··· 661 660 .type = HDA_FIXUP_PINS, 662 661 .v.pins = cxt_pincfg_lemote, 663 662 }, 663 + [CXT_PINCFG_COMPAQ_CQ60] = { 664 + .type = HDA_FIXUP_PINS, 665 + .v.pins = (const struct hda_pintbl[]) { 666 + /* 0x17 was falsely set up as a mic, it should 0x1d */ 667 + { 0x17, 0x400001f0 }, 668 + { 0x1d, 0x97a70120 }, 669 + { } 670 + } 671 + }, 664 672 [CXT_FIXUP_STEREO_DMIC] = { 665 673 .type = HDA_FIXUP_FUNC, 666 674 .v.func = cxt_fixup_stereo_dmic, ··· 779 769 }; 780 770 781 771 static const struct snd_pci_quirk cxt5051_fixups[] = { 772 + SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60), 782 773 SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200), 783 774 {} 784 775 };
+2 -2
sound/soc/fsl/fsl_spdif.c
··· 1049 1049 enum spdif_txrate index, bool round) 1050 1050 { 1051 1051 const u32 rate[] = { 32000, 44100, 48000, 96000, 192000 }; 1052 - bool is_sysclk = clk == spdif_priv->sysclk; 1052 + bool is_sysclk = clk_is_match(clk, spdif_priv->sysclk); 1053 1053 u64 rate_ideal, rate_actual, sub; 1054 1054 u32 sysclk_dfmin, sysclk_dfmax; 1055 1055 u32 txclk_df, sysclk_df, arate; ··· 1143 1143 spdif_priv->txclk_src[index], rate[index]); 1144 1144 dev_dbg(&pdev->dev, "use txclk df %d for %dHz sample rate\n", 1145 1145 spdif_priv->txclk_df[index], rate[index]); 1146 - if (spdif_priv->txclk[index] == spdif_priv->sysclk) 1146 + if (clk_is_match(spdif_priv->txclk[index], spdif_priv->sysclk)) 1147 1147 dev_dbg(&pdev->dev, "use sysclk df %d for %dHz sample rate\n", 1148 1148 spdif_priv->sysclk_df[index], rate[index]); 1149 1149 dev_dbg(&pdev->dev, "the best rate for %dHz sample rate is %dHz\n",
+1 -1
sound/soc/kirkwood/kirkwood-i2s.c
··· 579 579 if (PTR_ERR(priv->extclk) == -EPROBE_DEFER) 580 580 return -EPROBE_DEFER; 581 581 } else { 582 - if (priv->extclk == priv->clk) { 582 + if (clk_is_match(priv->extclk, priv->clk)) { 583 583 devm_clk_put(&pdev->dev, priv->extclk); 584 584 priv->extclk = ERR_PTR(-EINVAL); 585 585 } else {
+30
sound/usb/quirks-table.h
··· 1773 1773 } 1774 1774 } 1775 1775 }, 1776 + { 1777 + USB_DEVICE(0x0582, 0x0159), 1778 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 1779 + /* .vendor_name = "Roland", */ 1780 + /* .product_name = "UA-22", */ 1781 + .ifnum = QUIRK_ANY_INTERFACE, 1782 + .type = QUIRK_COMPOSITE, 1783 + .data = (const struct snd_usb_audio_quirk[]) { 1784 + { 1785 + .ifnum = 0, 1786 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1787 + }, 1788 + { 1789 + .ifnum = 1, 1790 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 1791 + }, 1792 + { 1793 + .ifnum = 2, 1794 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 1795 + .data = & (const struct snd_usb_midi_endpoint_info) { 1796 + .out_cables = 0x0001, 1797 + .in_cables = 0x0001 1798 + } 1799 + }, 1800 + { 1801 + .ifnum = -1 1802 + } 1803 + } 1804 + } 1805 + }, 1776 1806 /* this catches most recent vendor-specific Roland devices */ 1777 1807 { 1778 1808 .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
+1 -1
tools/power/cpupower/Makefile
··· 209 209 210 210 $(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_MAJ) 211 211 $(ECHO) " CC " $@ 212 - $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -Wl,-rpath=./ -lrt -lpci -L$(OUTPUT) -o $@ 212 + $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lcpupower -lrt -lpci -L$(OUTPUT) -o $@ 213 213 $(QUIET) $(STRIPCMD) $@ 214 214 215 215 $(OUTPUT)po/$(PACKAGE).pot: $(UTIL_SRC)
+9 -1
tools/testing/selftests/exec/execveat.c
··· 30 30 #ifdef __NR_execveat 31 31 return syscall(__NR_execveat, fd, path, argv, envp, flags); 32 32 #else 33 - errno = -ENOSYS; 33 + errno = ENOSYS; 34 34 return -1; 35 35 #endif 36 36 } ··· 233 233 int fd_script_ephemeral = open_or_die("script.ephemeral", O_RDONLY); 234 234 int fd_cloexec = open_or_die("execveat", O_RDONLY|O_CLOEXEC); 235 235 int fd_script_cloexec = open_or_die("script", O_RDONLY|O_CLOEXEC); 236 + 237 + /* Check if we have execveat at all, and bail early if not */ 238 + errno = 0; 239 + execveat_(-1, NULL, NULL, NULL, 0); 240 + if (errno == ENOSYS) { 241 + printf("[FAIL] ENOSYS calling execveat - no kernel support?\n"); 242 + return 1; 243 + } 236 244 237 245 /* Change file position to confirm it doesn't affect anything */ 238 246 lseek(fd, 10, SEEK_SET);