Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v3.18-rc7' into for-next

... for allowing more cleanups of hda_intel.c driver-caps where both
upstream and for-next contain the changes.

+2266 -1324
-4
Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
··· 30 30 Example: 31 31 interrupts-extended = <&intc1 5 1>, <&intc2 1 0>; 32 32 33 - A device node may contain either "interrupts" or "interrupts-extended", but not 34 - both. If both properties are present, then the operating system should log an 35 - error and use only the data in "interrupts". 36 - 37 33 2) Interrupt controller nodes 38 34 ----------------------------- 39 35
+11
Documentation/devicetree/bindings/pci/pci.txt
··· 7 7 8 8 Open Firmware Recommended Practice: Interrupt Mapping 9 9 http://www.openfirmware.org/1275/practice/imap/imap0_9d.pdf 10 + 11 + Additionally to the properties specified in the above standards a host bridge 12 + driver implementation may support the following properties: 13 + 14 + - linux,pci-domain: 15 + If present this property assigns a fixed PCI domain number to a host bridge, 16 + otherwise an unstable (across boots) unique number will be assigned. 17 + It is required to either not set this property at all or set it for all 18 + host bridges in the system, otherwise potentially conflicting domain numbers 19 + may be assigned to root buses behind different host bridges. The domain 20 + number for each host bridge in the system must be unique.
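For illustration of the new binding text (not code from this merge): a host bridge driver could fetch the optional property with the stock of_property_read_u32() helper. The function name below is invented for the sketch.

    #include <linux/of.h>

    /* Hypothetical helper: return the fixed domain from "linux,pci-domain",
     * or -1 when the property is absent and a dynamic number must be used. */
    static int example_read_pci_domain(struct device_node *np)
    {
            u32 domain;

            if (of_property_read_u32(np, "linux,pci-domain", &domain))
                    return -1;

            return domain;
    }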
+1 -1
Documentation/devicetree/bindings/pinctrl/img,tz1090-pdc-pinctrl.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - TZ1090-PDC's pin configuration nodes act as a container for an abitrary number 12 + TZ1090-PDC's pin configuration nodes act as a container for an arbitrary number 13 13 of subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/img,tz1090-pinctrl.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - TZ1090's pin configuration nodes act as a container for an abitrary number of 12 + TZ1090's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/lantiq,falcon-pinumx.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Lantiq's pin configuration nodes act as a container for an abitrary number of 12 + Lantiq's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those group(s), and two pin configuration parameters:
+1 -1
Documentation/devicetree/bindings/pinctrl/lantiq,xway-pinumx.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Lantiq's pin configuration nodes act as a container for an abitrary number of 12 + Lantiq's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those group(s), and two pin configuration parameters:
+1 -1
Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Tegra's pin configuration nodes act as a container for an abitrary number of 12 + Tegra's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/pinctrl-sirf.txt
··· 13 13 Please refer to pinctrl-bindings.txt in this directory for details of the common 14 14 pinctrl bindings used by client devices. 15 15 16 - SiRFprimaII's pinmux nodes act as a container for an abitrary number of subnodes. 16 + SiRFprimaII's pinmux nodes act as a container for an arbitrary number of subnodes. 17 17 Each of these subnodes represents some desired configuration for a group of pins. 18 18 19 19 Required subnode-properties:
+1 -1
Documentation/devicetree/bindings/pinctrl/pinctrl_spear.txt
··· 32 32 Please refer to pinctrl-bindings.txt in this directory for details of the common 33 33 pinctrl bindings used by client devices. 34 34 35 - SPEAr's pinmux nodes act as a container for an abitrary number of subnodes. Each 35 + SPEAr's pinmux nodes act as a container for an arbitrary number of subnodes. Each 36 36 of these subnodes represents muxing for a pin, a group, or a list of pins or 37 37 groups. 38 38
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,apq8064-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,apq8084-pinctrl.txt
··· 47 47 common pinctrl bindings used by client devices, including the meaning of the 48 48 phrase "pin configuration node". 49 49 50 - The pin configuration nodes act as a container for an abitrary number of 50 + The pin configuration nodes act as a container for an arbitrary number of 51 51 subnodes. Each of these subnodes represents some desired configuration for a 52 52 pin, a group, or a list of pins or groups. This configuration can include the 53 53 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,ipq8064-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,msm8960-pinctrl.txt
··· 47 47 common pinctrl bindings used by client devices, including the meaning of the 48 48 phrase "pin configuration node". 49 49 50 - The pin configuration nodes act as a container for an abitrary number of 50 + The pin configuration nodes act as a container for an arbitrary number of 51 51 subnodes. Each of these subnodes represents some desired configuration for a 52 52 pin, a group, or a list of pins or groups. This configuration can include the 53 53 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,msm8974-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+4 -1
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 34 34 chrp Common Hardware Reference Platform 35 35 chunghwa Chunghwa Picture Tubes Ltd. 36 36 cirrus Cirrus Logic, Inc. 37 + cnm Chips&Media, Inc. 37 38 cortina Cortina Systems, Inc. 38 39 crystalfontz Crystalfontz America, Inc. 39 40 dallas Maxim Integrated Products (formerly Dallas Semiconductor) ··· 93 92 mediatek MediaTek Inc. 94 93 micrel Micrel Inc. 95 94 microchip Microchip Technology Inc. 95 + micron Micron Technology Inc. 96 96 mitsubishi Mitsubishi Electric Corporation 97 97 mosaixtech Mosaix Technologies, Inc. 98 98 moxa Moxa ··· 129 127 ricoh Ricoh Co. Ltd. 130 128 rockchip Fuzhou Rockchip Electronics Co., Ltd 131 129 samsung Samsung Semiconductor 130 + sandisk Sandisk Corporation 132 131 sbs Smart Battery System 133 132 schindler Schindler 134 133 seagate Seagate Technology PLC ··· 141 138 sirf SiRF Technology, Inc. 142 139 sitronix Sitronix Technology Corporation 143 140 smsc Standard Microsystems Corporation 144 - snps Synopsys, Inc. 141 + snps Synopsys, Inc. 145 142 solidrun SolidRun 146 143 sony Sony Corporation 147 144 spansion Spansion Inc.
+1 -1
Documentation/filesystems/overlayfs.txt
··· 64 64 At mount time, the two directories given as mount options "lowerdir" and 65 65 "upperdir" are combined into a merged directory: 66 66 67 - mount -t overlayfs overlayfs -olowerdir=/lower,upperdir=/upper,\ 67 + mount -t overlay overlay -olowerdir=/lower,upperdir=/upper,\ 68 68 workdir=/work /merged 69 69 70 70 The "workdir" needs to be an empty directory on the same filesystem
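The same fix viewed from C: the filesystem type passed to mount(2) must now be "overlay". A minimal userspace sketch, assuming /lower, /upper, /work and /merged already exist:

    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
            /* "overlay" is both the fs type and the conventional source string */
            if (mount("overlay", "/merged", "overlay", 0,
                      "lowerdir=/lower,upperdir=/upper,workdir=/work")) {
                    perror("mount");
                    return 1;
            }
            return 0;
    }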
+1 -1
Documentation/networking/timestamping.txt
··· 136 136 137 137 This option is implemented only for transmit timestamps. There, the 138 138 timestamp is always looped along with a struct sock_extended_err. 139 - The option modifies field ee_info to pass an id that is unique 139 + The option modifies field ee_data to pass an id that is unique 140 140 among all possibly concurrently outstanding timestamp requests for 141 141 that socket. In practice, it is a monotonically increasing u32 142 142 (that wraps).
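For context, the id travels in the sock_extended_err that accompanies the looped-back packet on the socket error queue. A sketch of reading it for IPv4, error handling trimmed; treat it as illustrative rather than canonical:

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/errqueue.h>

    /* Read one tx timestamp notification and return its OPT_ID counter. */
    static int example_read_tstamp_id(int fd)
    {
            char control[256];
            struct msghdr msg = { .msg_control = control,
                                  .msg_controllen = sizeof(control) };
            struct cmsghdr *cm;
            struct sock_extended_err serr;

            if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
                    return -1;

            for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
                    if (cm->cmsg_level == SOL_IP && cm->cmsg_type == IP_RECVERR) {
                            memcpy(&serr, CMSG_DATA(cm), sizeof(serr));
                            if (serr.ee_origin == SO_EE_ORIGIN_TIMESTAMPING)
                                    return serr.ee_data;    /* the unique id */
                    }
            }
            return -1;
    }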
+4 -3
MAINTAINERS
··· 6888 6888 F: include/scsi/osd_* 6889 6889 F: fs/exofs/ 6890 6890 6891 - OVERLAYFS FILESYSTEM 6891 + OVERLAY FILESYSTEM 6892 6892 M: Miklos Szeredi <miklos@szeredi.hu> 6893 - L: linux-fsdevel@vger.kernel.org 6893 + L: linux-unionfs@vger.kernel.org 6894 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git 6894 6895 S: Supported 6895 - F: fs/overlayfs/* 6896 + F: fs/overlayfs/ 6896 6897 F: Documentation/filesystems/overlayfs.txt 6897 6898 6898 6899 P54 WIRELESS DRIVER
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 18 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc7 5 5 NAME = Diseased Newt 6 6 7 7 # *DOCUMENTATION*
+4
arch/arm/boot/dts/exynos5250-snow.dts
··· 624 624 num-cs = <1>; 625 625 }; 626 626 627 + &usbdrd_dwc3 { 628 + dr_mode = "host"; 629 + }; 630 + 627 631 #include "cros-ec-keyboard.dtsi"
+1 -1
arch/arm/boot/dts/exynos5250.dtsi
··· 555 555 #size-cells = <1>; 556 556 ranges; 557 557 558 - dwc3 { 558 + usbdrd_dwc3: dwc3 { 559 559 compatible = "synopsys,dwc3"; 560 560 reg = <0x12000000 0x10000>; 561 561 interrupts = <0 72 0>;
+1 -1
arch/arm/boot/dts/r8a7740.dtsi
··· 433 433 clocks = <&cpg_clocks R8A7740_CLK_S>, 434 434 <&cpg_clocks R8A7740_CLK_S>, <&sub_clk>, 435 435 <&cpg_clocks R8A7740_CLK_B>, 436 - <&sub_clk>, <&sub_clk>, 436 + <&cpg_clocks R8A7740_CLK_HPP>, <&sub_clk>, 437 437 <&cpg_clocks R8A7740_CLK_B>; 438 438 #clock-cells = <1>; 439 439 renesas,clock-indices = <
+2 -2
arch/arm/boot/dts/r8a7790.dtsi
··· 666 666 #clock-cells = <0>; 667 667 clock-output-names = "sd2"; 668 668 }; 669 - sd3_clk: sd3_clk@e615007c { 669 + sd3_clk: sd3_clk@e615026c { 670 670 compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock"; 671 - reg = <0 0xe615007c 0 4>; 671 + reg = <0 0xe615026c 0 4>; 672 672 clocks = <&pll1_div2_clk>; 673 673 #clock-cells = <0>; 674 674 clock-output-names = "sd3";
+4
arch/arm/boot/dts/sun6i-a31.dtsi
··· 361 361 clocks = <&ahb1_gates 6>; 362 362 resets = <&ahb1_rst 6>; 363 363 #dma-cells = <1>; 364 + 365 + /* DMA controller requires AHB1 clocked from PLL6 */ 366 + assigned-clocks = <&ahb1_mux>; 367 + assigned-clock-parents = <&pll6>; 364 368 }; 365 369 366 370 mmc0: mmc@01c0f000 {
+1
arch/arm/boot/dts/tegra114-dalmore.dts
··· 15 15 aliases { 16 16 rtc0 = "/i2c@7000d000/tps65913@58"; 17 17 rtc1 = "/rtc@7000e000"; 18 + serial0 = &uartd; 18 19 }; 19 20 20 21 memory {
+5 -4
arch/arm/boot/dts/tegra114-roth.dts
··· 15 15 linux,initrd-end = <0x82800000>; 16 16 }; 17 17 18 + aliases { 19 + serial0 = &uartd; 20 + }; 21 + 18 22 firmware { 19 23 trusted-foundations { 20 24 compatible = "tlm,trusted-foundations"; ··· 920 916 regulator-name = "vddio-sdmmc3"; 921 917 regulator-min-microvolt = <1800000>; 922 918 regulator-max-microvolt = <3300000>; 923 - regulator-always-on; 924 - regulator-boot-on; 925 919 }; 926 920 927 921 ldousb { ··· 964 962 sdhci@78000400 { 965 963 status = "okay"; 966 964 bus-width = <4>; 967 - vmmc-supply = <&vddio_sdmmc3>; 965 + vqmmc-supply = <&vddio_sdmmc3>; 968 966 cd-gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>; 969 967 power-gpios = <&gpio TEGRA_GPIO(H, 0) GPIO_ACTIVE_HIGH>; 970 968 }; ··· 973 971 sdhci@78000600 { 974 972 status = "okay"; 975 973 bus-width = <8>; 976 - vmmc-supply = <&vdd_1v8>; 977 974 non-removable; 978 975 }; 979 976
+4 -1
arch/arm/boot/dts/tegra114-tn7.dts
··· 15 15 linux,initrd-end = <0x82800000>; 16 16 }; 17 17 18 + aliases { 19 + serial0 = &uartd; 20 + }; 21 + 18 22 firmware { 19 23 trusted-foundations { 20 24 compatible = "tlm,trusted-foundations"; ··· 244 240 sdhci@78000600 { 245 241 status = "okay"; 246 242 bus-width = <8>; 247 - vmmc-supply = <&vdd_1v8>; 248 243 non-removable; 249 244 }; 250 245
-7
arch/arm/boot/dts/tegra114.dtsi
··· 9 9 compatible = "nvidia,tegra114"; 10 10 interrupt-parent = <&gic>; 11 11 12 - aliases { 13 - serial0 = &uarta; 14 - serial1 = &uartb; 15 - serial2 = &uartc; 16 - serial3 = &uartd; 17 - }; 18 - 19 12 host1x@50000000 { 20 13 compatible = "nvidia,tegra114-host1x", "simple-bus"; 21 14 reg = <0x50000000 0x00028000>;
+1
arch/arm/boot/dts/tegra124-jetson-tk1.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@0,7000d000/pmic@40"; 12 12 rtc1 = "/rtc@0,7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra124-nyan-big.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@0,7000d000/pmic@40"; 12 12 rtc1 = "/rtc@0,7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra124-venice2.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@0,7000d000/pmic@40"; 12 12 rtc1 = "/rtc@0,7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
+4 -4
arch/arm/boot/dts/tegra124.dtsi
··· 286 286 * the APB DMA based serial driver, the comptible is 287 287 * "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart". 288 288 */ 289 - serial@0,70006000 { 289 + uarta: serial@0,70006000 { 290 290 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 291 291 reg = <0x0 0x70006000 0x0 0x40>; 292 292 reg-shift = <2>; ··· 299 299 status = "disabled"; 300 300 }; 301 301 302 - serial@0,70006040 { 302 + uartb: serial@0,70006040 { 303 303 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 304 304 reg = <0x0 0x70006040 0x0 0x40>; 305 305 reg-shift = <2>; ··· 312 312 status = "disabled"; 313 313 }; 314 314 315 - serial@0,70006200 { 315 + uartc: serial@0,70006200 { 316 316 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 317 317 reg = <0x0 0x70006200 0x0 0x40>; 318 318 reg-shift = <2>; ··· 325 325 status = "disabled"; 326 326 }; 327 327 328 - serial@0,70006300 { 328 + uartd: serial@0,70006300 { 329 329 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 330 330 reg = <0x0 0x70006300 0x0 0x40>; 331 331 reg-shift = <2>;
+1
arch/arm/boot/dts/tegra20-harmony.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+5
arch/arm/boot/dts/tegra20-iris-512.dts
··· 6 6 model = "Toradex Colibri T20 512MB on Iris"; 7 7 compatible = "toradex,iris", "toradex,colibri_t20-512", "nvidia,tegra20"; 8 8 9 + aliases { 10 + serial0 = &uarta; 11 + serial1 = &uartd; 12 + }; 13 + 9 14 host1x@50000000 { 10 15 hdmi@54280000 { 11 16 status = "okay";
+4
arch/arm/boot/dts/tegra20-medcom-wide.dts
··· 6 6 model = "Avionic Design Medcom-Wide board"; 7 7 compatible = "ad,medcom-wide", "ad,tamonten", "nvidia,tegra20"; 8 8 9 + aliases { 10 + serial0 = &uartd; 11 + }; 12 + 9 13 pwm@7000a000 { 10 14 status = "okay"; 11 15 };
+2
arch/arm/boot/dts/tegra20-paz00.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uarta; 14 + serial1 = &uartc; 13 15 }; 14 16 15 17 memory {
+1
arch/arm/boot/dts/tegra20-seaboard.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra20-tamonten.dtsi
··· 7 7 aliases { 8 8 rtc0 = "/i2c@7000d000/tps6586x@34"; 9 9 rtc1 = "/rtc@7000e000"; 10 + serial0 = &uartd; 10 11 }; 11 12 12 13 memory {
+1
arch/arm/boot/dts/tegra20-trimslice.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000c500/rtc@56"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra20-ventana.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra20-whistler.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/max8907@3c"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
-8
arch/arm/boot/dts/tegra20.dtsi
··· 9 9 compatible = "nvidia,tegra20"; 10 10 interrupt-parent = <&intc>; 11 11 12 - aliases { 13 - serial0 = &uarta; 14 - serial1 = &uartb; 15 - serial2 = &uartc; 16 - serial3 = &uartd; 17 - serial4 = &uarte; 18 - }; 19 - 20 12 host1x@50000000 { 21 13 compatible = "nvidia,tegra20-host1x", "simple-bus"; 22 14 reg = <0x50000000 0x00024000>;
+4
arch/arm/boot/dts/tegra30-apalis-eval.dts
··· 11 11 rtc0 = "/i2c@7000c000/rtc@68"; 12 12 rtc1 = "/i2c@7000d000/tps65911@2d"; 13 13 rtc2 = "/rtc@7000e000"; 14 + serial0 = &uarta; 15 + serial1 = &uartb; 16 + serial2 = &uartc; 17 + serial3 = &uartd; 14 18 }; 15 19 16 20 pcie-controller@00003000 {
+1
arch/arm/boot/dts/tegra30-beaver.dts
··· 9 9 aliases { 10 10 rtc0 = "/i2c@7000d000/tps65911@2d"; 11 11 rtc1 = "/rtc@7000e000"; 12 + serial0 = &uarta; 12 13 }; 13 14 14 15 memory {
+2
arch/arm/boot/dts/tegra30-cardhu.dtsi
··· 30 30 aliases { 31 31 rtc0 = "/i2c@7000d000/tps65911@2d"; 32 32 rtc1 = "/rtc@7000e000"; 33 + serial0 = &uarta; 34 + serial1 = &uartc; 33 35 }; 34 36 35 37 memory {
+3
arch/arm/boot/dts/tegra30-colibri-eval-v3.dts
··· 10 10 rtc0 = "/i2c@7000c000/rtc@68"; 11 11 rtc1 = "/i2c@7000d000/tps65911@2d"; 12 12 rtc2 = "/rtc@7000e000"; 13 + serial0 = &uarta; 14 + serial1 = &uartb; 15 + serial2 = &uartd; 13 16 }; 14 17 15 18 host1x@50000000 {
-8
arch/arm/boot/dts/tegra30.dtsi
··· 9 9 compatible = "nvidia,tegra30"; 10 10 interrupt-parent = <&intc>; 11 11 12 - aliases { 13 - serial0 = &uarta; 14 - serial1 = &uartb; 15 - serial2 = &uartc; 16 - serial3 = &uartd; 17 - serial4 = &uarte; 18 - }; 19 - 20 12 pcie-controller@00003000 { 21 13 compatible = "nvidia,tegra30-pcie"; 22 14 device_type = "pci";
+2
arch/arm/configs/exynos_defconfig
··· 142 142 CONFIG_MMC_DW_EXYNOS=y 143 143 CONFIG_RTC_CLASS=y 144 144 CONFIG_RTC_DRV_MAX77686=y 145 + CONFIG_RTC_DRV_MAX77802=y 145 146 CONFIG_RTC_DRV_S5M=y 146 147 CONFIG_RTC_DRV_S3C=y 147 148 CONFIG_DMADEVICES=y 148 149 CONFIG_PL330_DMA=y 149 150 CONFIG_COMMON_CLK_MAX77686=y 151 + CONFIG_COMMON_CLK_MAX77802=y 150 152 CONFIG_COMMON_CLK_S2MPS11=y 151 153 CONFIG_EXYNOS_IOMMU=y 152 154 CONFIG_IIO=y
+1
arch/arm/configs/multi_v7_defconfig
··· 217 217 CONFIG_I2C_DESIGNWARE_PLATFORM=y 218 218 CONFIG_I2C_EXYNOS5=y 219 219 CONFIG_I2C_MV64XXX=y 220 + CONFIG_I2C_S3C2410=y 220 221 CONFIG_I2C_SIRF=y 221 222 CONFIG_I2C_TEGRA=y 222 223 CONFIG_I2C_ST=y
-11
arch/arm/include/asm/thread_info.h
··· 44 44 __u32 extra[2]; /* Xscale 'acc' register, etc */ 45 45 }; 46 46 47 - struct arm_restart_block { 48 - union { 49 - /* For user cache flushing */ 50 - struct { 51 - unsigned long start; 52 - unsigned long end; 53 - } cache; 54 - }; 55 - }; 56 - 57 47 /* 58 48 * low level task data that entry.S needs immediate access to. 59 49 * __switch_to() assumes cpu_context follows immediately after cpu_domain. ··· 69 79 unsigned long thumbee_state; /* ThumbEE Handler Base register */ 70 80 #endif 71 81 struct restart_block restart_block; 72 - struct arm_restart_block arm_restart_block; 73 82 }; 74 83 75 84 #define INIT_THREAD_INFO(tsk) \
+2 -29
arch/arm/kernel/traps.c
··· 533 533 return regs->ARM_r0; 534 534 } 535 535 536 - static long do_cache_op_restart(struct restart_block *); 537 - 538 536 static inline int 539 537 __do_cache_op(unsigned long start, unsigned long end) 540 538 { ··· 541 543 do { 542 544 unsigned long chunk = min(PAGE_SIZE, end - start); 543 545 544 - if (signal_pending(current)) { 545 - struct thread_info *ti = current_thread_info(); 546 - 547 - ti->restart_block = (struct restart_block) { 548 - .fn = do_cache_op_restart, 549 - }; 550 - 551 - ti->arm_restart_block = (struct arm_restart_block) { 552 - { 553 - .cache = { 554 - .start = start, 555 - .end = end, 556 - }, 557 - }, 558 - }; 559 - 560 - return -ERESTART_RESTARTBLOCK; 561 - } 546 + if (fatal_signal_pending(current)) 547 + return 0; 562 548 563 549 ret = flush_cache_user_range(start, start + chunk); 564 550 if (ret) ··· 553 571 } while (start < end); 554 572 555 573 return 0; 556 - } 557 - 558 - static long do_cache_op_restart(struct restart_block *unused) 559 - { 560 - struct arm_restart_block *restart_block; 561 - 562 - restart_block = &current_thread_info()->arm_restart_block; 563 - return __do_cache_op(restart_block->cache.start, 564 - restart_block->cache.end); 565 574 } 566 575 567 576 static inline int
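The loop above services ARM's private cacheflush syscall, which JIT engines invoke after writing instructions. A hedged userspace sketch; the constant below is the EABI value from asm/unistd.h and should be taken from the headers in real code:

    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef __ARM_NR_cacheflush
    #define __ARM_NR_cacheflush 0x0f0002    /* __ARM_NR_BASE + 2, EABI */
    #endif

    /* Make freshly written instructions in [start, end) visible to I-fetch. */
    static void example_flush_jit_code(void *start, void *end)
    {
            syscall(__ARM_NR_cacheflush, start, end, 0);
    }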
+8 -2
arch/arm/kvm/mmu.c
··· 197 197 pgd = pgdp + pgd_index(addr); 198 198 do { 199 199 next = kvm_pgd_addr_end(addr, end); 200 - unmap_puds(kvm, pgd, addr, next); 200 + if (!pgd_none(*pgd)) 201 + unmap_puds(kvm, pgd, addr, next); 201 202 } while (pgd++, addr = next, addr != end); 202 203 } 203 204 ··· 835 834 return kvm_vcpu_dabt_iswrite(vcpu); 836 835 } 837 836 837 + static bool kvm_is_device_pfn(unsigned long pfn) 838 + { 839 + return !pfn_valid(pfn); 840 + } 841 + 838 842 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, 839 843 struct kvm_memory_slot *memslot, unsigned long hva, 840 844 unsigned long fault_status) ··· 910 904 if (is_error_pfn(pfn)) 911 905 return -EFAULT; 912 906 913 - if (kvm_is_mmio_pfn(pfn)) 907 + if (kvm_is_device_pfn(pfn)) 914 908 mem_type = PAGE_S2_DEVICE; 915 909 916 910 spin_lock(&kvm->mmu_lock);
+2
arch/arm/mach-mvebu/coherency.c
··· 400 400 type == COHERENCY_FABRIC_TYPE_ARMADA_380) 401 401 armada_375_380_coherency_init(np); 402 402 403 + of_node_put(np); 404 + 403 405 return 0; 404 406 } 405 407
+7 -2
arch/arm/mach-shmobile/clock-r8a7740.c
··· 455 455 MSTP128, MSTP127, MSTP125, 456 456 MSTP116, MSTP111, MSTP100, MSTP117, 457 457 458 - MSTP230, 458 + MSTP230, MSTP229, 459 459 MSTP222, 460 460 MSTP218, MSTP217, MSTP216, MSTP214, 461 461 MSTP207, MSTP206, MSTP204, MSTP203, MSTP202, MSTP201, MSTP200, ··· 474 474 [MSTP127] = SH_CLK_MSTP32(&div4_clks[DIV4_S], SMSTPCR1, 27, 0), /* CEU20 */ 475 475 [MSTP125] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR1, 25, 0), /* TMU0 */ 476 476 [MSTP117] = SH_CLK_MSTP32(&div4_clks[DIV4_B], SMSTPCR1, 17, 0), /* LCDC1 */ 477 - [MSTP116] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR1, 16, 0), /* IIC0 */ 477 + [MSTP116] = SH_CLK_MSTP32(&div4_clks[DIV4_HPP], SMSTPCR1, 16, 0), /* IIC0 */ 478 478 [MSTP111] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR1, 11, 0), /* TMU1 */ 479 479 [MSTP100] = SH_CLK_MSTP32(&div4_clks[DIV4_B], SMSTPCR1, 0, 0), /* LCDC0 */ 480 480 481 481 [MSTP230] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR2, 30, 0), /* SCIFA6 */ 482 + [MSTP229] = SH_CLK_MSTP32(&div4_clks[DIV4_HP], SMSTPCR2, 29, 0), /* INTCA */ 482 483 [MSTP222] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR2, 22, 0), /* SCIFA7 */ 483 484 [MSTP218] = SH_CLK_MSTP32(&div4_clks[DIV4_HP], SMSTPCR2, 18, 0), /* DMAC1 */ 484 485 [MSTP217] = SH_CLK_MSTP32(&div4_clks[DIV4_HP], SMSTPCR2, 17, 0), /* DMAC2 */ ··· 576 575 CLKDEV_DEV_ID("sh-dma-engine.0", &mstp_clks[MSTP218]), 577 576 CLKDEV_DEV_ID("sh-sci.7", &mstp_clks[MSTP222]), 578 577 CLKDEV_DEV_ID("e6cd0000.serial", &mstp_clks[MSTP222]), 578 + CLKDEV_DEV_ID("renesas_intc_irqpin.0", &mstp_clks[MSTP229]), 579 + CLKDEV_DEV_ID("renesas_intc_irqpin.1", &mstp_clks[MSTP229]), 580 + CLKDEV_DEV_ID("renesas_intc_irqpin.2", &mstp_clks[MSTP229]), 581 + CLKDEV_DEV_ID("renesas_intc_irqpin.3", &mstp_clks[MSTP229]), 579 582 CLKDEV_DEV_ID("sh-sci.6", &mstp_clks[MSTP230]), 580 583 CLKDEV_DEV_ID("e6cc0000.serial", &mstp_clks[MSTP230]), 581 584
+1 -1
arch/arm/mach-shmobile/clock-r8a7790.c
··· 68 68 69 69 #define SDCKCR 0xE6150074 70 70 #define SD2CKCR 0xE6150078 71 - #define SD3CKCR 0xE615007C 71 + #define SD3CKCR 0xE615026C 72 72 #define MMC0CKCR 0xE6150240 73 73 #define MMC1CKCR 0xE6150244 74 74 #define SSPCKCR 0xE6150248
+20
arch/arm/mach-shmobile/setup-sh73a0.c
··· 26 26 #include <linux/of_platform.h> 27 27 #include <linux/delay.h> 28 28 #include <linux/input.h> 29 + #include <linux/i2c/i2c-sh_mobile.h> 29 30 #include <linux/io.h> 30 31 #include <linux/serial_sci.h> 31 32 #include <linux/sh_dma.h> ··· 193 192 }, 194 193 }; 195 194 195 + static struct i2c_sh_mobile_platform_data i2c_platform_data = { 196 + .clks_per_count = 2, 197 + }; 198 + 196 199 static struct platform_device i2c0_device = { 197 200 .name = "i2c-sh_mobile", 198 201 .id = 0, 199 202 .resource = i2c0_resources, 200 203 .num_resources = ARRAY_SIZE(i2c0_resources), 204 + .dev = { 205 + .platform_data = &i2c_platform_data, 206 + }, 201 207 }; 202 208 203 209 static struct platform_device i2c1_device = { ··· 212 204 .id = 1, 213 205 .resource = i2c1_resources, 214 206 .num_resources = ARRAY_SIZE(i2c1_resources), 207 + .dev = { 208 + .platform_data = &i2c_platform_data, 209 + }, 215 210 }; 216 211 217 212 static struct platform_device i2c2_device = { ··· 222 211 .id = 2, 223 212 .resource = i2c2_resources, 224 213 .num_resources = ARRAY_SIZE(i2c2_resources), 214 + .dev = { 215 + .platform_data = &i2c_platform_data, 216 + }, 225 217 }; 226 218 227 219 static struct platform_device i2c3_device = { ··· 232 218 .id = 3, 233 219 .resource = i2c3_resources, 234 220 .num_resources = ARRAY_SIZE(i2c3_resources), 221 + .dev = { 222 + .platform_data = &i2c_platform_data, 223 + }, 235 224 }; 236 225 237 226 static struct platform_device i2c4_device = { ··· 242 225 .id = 4, 243 226 .resource = i2c4_resources, 244 227 .num_resources = ARRAY_SIZE(i2c4_resources), 228 + .dev = { 229 + .platform_data = &i2c_platform_data, 230 + }, 245 231 }; 246 232 247 233 static const struct sh_dmae_slave_config sh73a0_dmae_slaves[] = {
+11 -11
arch/arm/mach-tegra/irq.c
··· 99 99 100 100 static void tegra_mask(struct irq_data *d) 101 101 { 102 - if (d->irq < FIRST_LEGACY_IRQ) 102 + if (d->hwirq < FIRST_LEGACY_IRQ) 103 103 return; 104 104 105 - tegra_irq_write_mask(d->irq, ICTLR_CPU_IER_CLR); 105 + tegra_irq_write_mask(d->hwirq, ICTLR_CPU_IER_CLR); 106 106 } 107 107 108 108 static void tegra_unmask(struct irq_data *d) 109 109 { 110 - if (d->irq < FIRST_LEGACY_IRQ) 110 + if (d->hwirq < FIRST_LEGACY_IRQ) 111 111 return; 112 112 113 - tegra_irq_write_mask(d->irq, ICTLR_CPU_IER_SET); 113 + tegra_irq_write_mask(d->hwirq, ICTLR_CPU_IER_SET); 114 114 } 115 115 116 116 static void tegra_ack(struct irq_data *d) 117 117 { 118 - if (d->irq < FIRST_LEGACY_IRQ) 118 + if (d->hwirq < FIRST_LEGACY_IRQ) 119 119 return; 120 120 121 - tegra_irq_write_mask(d->irq, ICTLR_CPU_IEP_FIR_CLR); 121 + tegra_irq_write_mask(d->hwirq, ICTLR_CPU_IEP_FIR_CLR); 122 122 } 123 123 124 124 static void tegra_eoi(struct irq_data *d) 125 125 { 126 - if (d->irq < FIRST_LEGACY_IRQ) 126 + if (d->hwirq < FIRST_LEGACY_IRQ) 127 127 return; 128 128 129 - tegra_irq_write_mask(d->irq, ICTLR_CPU_IEP_FIR_CLR); 129 + tegra_irq_write_mask(d->hwirq, ICTLR_CPU_IEP_FIR_CLR); 130 130 } 131 131 132 132 static int tegra_retrigger(struct irq_data *d) 133 133 { 134 - if (d->irq < FIRST_LEGACY_IRQ) 134 + if (d->hwirq < FIRST_LEGACY_IRQ) 135 135 return 0; 136 136 137 - tegra_irq_write_mask(d->irq, ICTLR_CPU_IEP_FIR_SET); 137 + tegra_irq_write_mask(d->hwirq, ICTLR_CPU_IEP_FIR_SET); 138 138 139 139 return 1; 140 140 } ··· 142 142 #ifdef CONFIG_PM_SLEEP 143 143 static int tegra_set_wake(struct irq_data *d, unsigned int enable) 144 144 { 145 - u32 irq = d->irq; 145 + u32 irq = d->hwirq; 146 146 u32 index, mask; 147 147 148 148 if (irq < FIRST_LEGACY_IRQ ||
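The substance of this fix: d->irq is the dynamically allocated Linux virq, while d->hwirq is the controller-local line number that actually indexes the ICTLR registers; with sparse IRQ numbering the two diverge. A generic sketch of the distinction (the callback is illustrative, not from this merge):

    #include <linux/irq.h>

    static void example_mask(struct irq_data *d)
    {
            /*
             * d->irq:   Linux virq, allocated dynamically, no relation to
             *           the hardware register layout.
             * d->hwirq: controller-local line number; this is what selects
             *           the register and bit to poke.
             */
            pr_debug("masking virq %u (hw line %lu)\n", d->irq, d->hwirq);
    }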
-2
arch/arm/mm/proc-v7.S
··· 270 270 /* Auxiliary Debug Modes Control 1 Register */ 271 271 #define PJ4B_STATIC_BP (1 << 2) /* Enable Static BP */ 272 272 #define PJ4B_INTER_PARITY (1 << 8) /* Disable Internal Parity Handling */ 273 - #define PJ4B_BCK_OFF_STREX (1 << 5) /* Enable the back off of STREX instr */ 274 273 #define PJ4B_CLEAN_LINE (1 << 16) /* Disable data transfer for clean line */ 275 274 276 275 /* Auxiliary Debug Modes Control 2 Register */ ··· 292 293 /* Auxiliary Debug Modes Control 1 Register */ 293 294 mrc p15, 1, r0, c15, c1, 1 294 295 orr r0, r0, #PJ4B_CLEAN_LINE 295 - orr r0, r0, #PJ4B_BCK_OFF_STREX 296 296 orr r0, r0, #PJ4B_INTER_PARITY 297 297 bic r0, r0, #PJ4B_STATIC_BP 298 298 mcr p15, 1, r0, c15, c1, 1
+2 -2
arch/arm/mm/proc-xscale.S
··· 535 535 mrc p15, 0, r5, c15, c1, 0 @ CP access reg 536 536 mrc p15, 0, r6, c13, c0, 0 @ PID 537 537 mrc p15, 0, r7, c3, c0, 0 @ domain ID 538 - mrc p15, 0, r8, c1, c1, 0 @ auxiliary control reg 538 + mrc p15, 0, r8, c1, c0, 1 @ auxiliary control reg 539 539 mrc p15, 0, r9, c1, c0, 0 @ control reg 540 540 bic r4, r4, #2 @ clear frequency change bit 541 541 stmia r0, {r4 - r9} @ store cp regs ··· 552 552 mcr p15, 0, r6, c13, c0, 0 @ PID 553 553 mcr p15, 0, r7, c3, c0, 0 @ domain ID 554 554 mcr p15, 0, r1, c2, c0, 0 @ translation table base addr 555 - mcr p15, 0, r8, c1, c1, 0 @ auxiliary control reg 555 + mcr p15, 0, r8, c1, c0, 1 @ auxiliary control reg 556 556 mov r0, r9 @ control register 557 557 b cpu_resume_mmu 558 558 ENDPROC(cpu_xscale_do_resume)
+9
arch/arm64/kvm/sys_regs.c
··· 424 424 /* VBAR_EL1 */ 425 425 { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b0000), Op2(0b000), 426 426 NULL, reset_val, VBAR_EL1, 0 }, 427 + 428 + /* ICC_SRE_EL1 */ 429 + { Op0(0b11), Op1(0b000), CRn(0b1100), CRm(0b1100), Op2(0b101), 430 + trap_raz_wi }, 431 + 427 432 /* CONTEXTIDR_EL1 */ 428 433 { Op0(0b11), Op1(0b000), CRn(0b1101), CRm(0b0000), Op2(0b001), 429 434 access_vm_reg, reset_val, CONTEXTIDR_EL1, 0 }, ··· 695 690 { Op1( 0), CRn(10), CRm( 2), Op2( 1), access_vm_reg, NULL, c10_NMRR }, 696 691 { Op1( 0), CRn(10), CRm( 3), Op2( 0), access_vm_reg, NULL, c10_AMAIR0 }, 697 692 { Op1( 0), CRn(10), CRm( 3), Op2( 1), access_vm_reg, NULL, c10_AMAIR1 }, 693 + 694 + /* ICC_SRE */ 695 + { Op1( 0), CRn(12), CRm(12), Op2( 5), trap_raz_wi }, 696 + 698 697 { Op1( 0), CRn(13), CRm( 0), Op2( 1), access_vm_reg, NULL, c13_CID }, 699 698 }; 700 699
+1 -1
arch/ia64/kvm/kvm-ia64.c
··· 1563 1563 1564 1564 for (i = 0; i < npages; i++) { 1565 1565 pfn = gfn_to_pfn(kvm, base_gfn + i); 1566 - if (!kvm_is_mmio_pfn(pfn)) { 1566 + if (!kvm_is_reserved_pfn(pfn)) { 1567 1567 kvm_set_pmt_entry(kvm, base_gfn + i, 1568 1568 pfn << PAGE_SHIFT, 1569 1569 _PAGE_AR_RWX | _PAGE_MA_WB);
+12 -2
arch/mips/Kconfig
··· 2101 2101 config ARCH_PHYS_ADDR_T_64BIT 2102 2102 def_bool 64BIT_PHYS_ADDR 2103 2103 2104 + choice 2105 + prompt "SmartMIPS or microMIPS ASE support" 2106 + 2107 + config CPU_NEEDS_NO_SMARTMIPS_OR_MICROMIPS 2108 + bool "None" 2109 + help 2110 + Select this if you want neither microMIPS nor SmartMIPS support 2111 + 2104 2112 config CPU_HAS_SMARTMIPS 2105 2113 depends on SYS_SUPPORTS_SMARTMIPS 2106 - bool "Support for the SmartMIPS ASE" 2114 + bool "SmartMIPS" 2107 2115 help 2108 2116 SmartMIPS is a extension of the MIPS32 architecture aimed at 2109 2117 increased security at both hardware and software level for ··· 2123 2115 2124 2116 config CPU_MICROMIPS 2125 2117 depends on SYS_SUPPORTS_MICROMIPS 2126 - bool "Build kernel using microMIPS ISA" 2118 + bool "microMIPS" 2127 2119 help 2128 2120 When this option is enabled the kernel will be built using the 2129 2121 microMIPS ISA 2122 + 2123 + endchoice 2130 2124 2131 2125 config CPU_HAS_MSA 2132 2126 bool "Support for the MIPS SIMD Architecture (EXPERIMENTAL)"
+7 -1
arch/mips/include/asm/jump_label.h
··· 20 20 #define WORD_INSN ".word" 21 21 #endif 22 22 23 + #ifdef CONFIG_CPU_MICROMIPS 24 + #define NOP_INSN "nop32" 25 + #else 26 + #define NOP_INSN "nop" 27 + #endif 28 + 23 29 static __always_inline bool arch_static_branch(struct static_key *key) 24 30 { 25 - asm_volatile_goto("1:\tnop\n\t" 31 + asm_volatile_goto("1:\t" NOP_INSN "\n\t" 26 32 "nop\n\t" 27 33 ".pushsection __jump_table, \"aw\"\n\t" 28 34 WORD_INSN " 1b, %l[l_yes], %0\n\t"
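These NOPs are the patch site that a static key rewrites at runtime. A sketch of the consumer side using the v3.18-era static_key API; do_rare_thing() is a hypothetical slow-path hook:

    #include <linux/jump_label.h>

    extern void do_rare_thing(void);        /* hypothetical slow path */

    static struct static_key fast_path_key = STATIC_KEY_INIT_FALSE;

    void example_hot_path(void)
    {
            /* Emits the NOP selected above; patched to a jump when enabled. */
            if (static_key_false(&fast_path_key))
                    do_rare_thing();
    }

    /* Elsewhere: static_key_slow_inc(&fast_path_key) flips the branch. */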
-2
arch/mips/include/asm/mach-loongson/cpu-feature-overrides.h
··· 41 41 #define cpu_has_mcheck 0 42 42 #define cpu_has_mdmx 0 43 43 #define cpu_has_mips16 0 44 - #define cpu_has_mips32r1 0 45 44 #define cpu_has_mips32r2 0 46 45 #define cpu_has_mips3d 0 47 - #define cpu_has_mips64r1 0 48 46 #define cpu_has_mips64r2 0 49 47 #define cpu_has_mipsmt 0 50 48 #define cpu_has_prefetch 0
+2
arch/mips/include/asm/mipsregs.h
··· 661 661 #define MIPS_CONF6_SYND (_ULCAST_(1) << 13) 662 662 /* proAptiv FTLB on/off bit */ 663 663 #define MIPS_CONF6_FTLBEN (_ULCAST_(1) << 15) 664 + /* FTLB probability bits */ 665 + #define MIPS_CONF6_FTLBP_SHIFT (16) 664 666 665 667 #define MIPS_CONF7_WII (_ULCAST_(1) << 31) 666 668
+4
arch/mips/include/asm/r4kcache.h
··· 257 257 */ 258 258 static inline void protected_writeback_dcache_line(unsigned long addr) 259 259 { 260 + #ifdef CONFIG_EVA 261 + protected_cachee_op(Hit_Writeback_Inv_D, addr); 262 + #else 260 263 protected_cache_op(Hit_Writeback_Inv_D, addr); 264 + #endif 261 265 } 262 266 263 267 static inline void protected_writeback_scache_line(unsigned long addr)
+10 -8
arch/mips/include/asm/uaccess.h
··· 301 301 __get_kernel_common((x), size, __gu_ptr); \ 302 302 else \ 303 303 __get_user_common((x), size, __gu_ptr); \ 304 - } \ 304 + } else \ 305 + (x) = 0; \ 305 306 \ 306 307 __gu_err; \ 307 308 }) ··· 317 316 " .insn \n" \ 318 317 " .section .fixup,\"ax\" \n" \ 319 318 "3: li %0, %4 \n" \ 319 + " move %1, $0 \n" \ 320 320 " j 2b \n" \ 321 321 " .previous \n" \ 322 322 " .section __ex_table,\"a\" \n" \ ··· 632 630 " .insn \n" \ 633 631 " .section .fixup,\"ax\" \n" \ 634 632 "3: li %0, %4 \n" \ 633 + " move %1, $0 \n" \ 635 634 " j 2b \n" \ 636 635 " .previous \n" \ 637 636 " .section __ex_table,\"a\" \n" \ ··· 776 773 "jal\t" #destination "\n\t" 777 774 #endif 778 775 779 - #ifndef CONFIG_CPU_DADDI_WORKAROUNDS 780 - #define DADDI_SCRATCH "$0" 781 - #else 776 + #if defined(CONFIG_CPU_DADDI_WORKAROUNDS) || (defined(CONFIG_EVA) && \ 777 + defined(CONFIG_CPU_HAS_PREFETCH)) 782 778 #define DADDI_SCRATCH "$3" 779 + #else 780 + #define DADDI_SCRATCH "$0" 783 781 #endif 784 782 785 783 extern size_t __copy_user(void *__to, const void *__from, size_t __n); ··· 1422 1418 } 1423 1419 1424 1420 /* 1425 - * strlen_user: - Get the size of a string in user space. 1421 + * strnlen_user: - Get the size of a string in user space. 1426 1422 * @str: The string to measure. 1427 1423 * 1428 1424 * Context: User context only. This function may sleep. ··· 1431 1427 * 1432 1428 * Returns the size of the string INCLUDING the terminating NUL. 1433 1429 * On exception, returns 0. 1434 - * 1435 - * If there is a limit on the length of a valid string, you may wish to 1436 - * consider using strnlen_user() instead. 1430 + * If the string is too long, returns a value greater than @n. 1437 1431 */ 1438 1432 static inline long strnlen_user(const char __user *s, long n) 1439 1433 {
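The added `(x) = 0` and `move %1, $0` close an information leak: on a faulting load the destination would otherwise keep stale register or stack contents. Sketch of the calling pattern being hardened (function name invented):

    #include <linux/errno.h>
    #include <linux/uaccess.h>

    long example_read_word(const long __user *uaddr)
    {
            long val;

            /*
             * On a faulting access get_user() now guarantees val == 0, so
             * even a caller that ignores the return value cannot observe
             * whatever the register or stack slot held before.
             */
            if (get_user(val, uaddr))
                    return -EFAULT;

            return val;
    }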
+1 -1
arch/mips/include/uapi/asm/unistd.h
··· 1045 1045 #define __NR_seccomp (__NR_Linux + 316) 1046 1046 #define __NR_getrandom (__NR_Linux + 317) 1047 1047 #define __NR_memfd_create (__NR_Linux + 318) 1048 - #define __NR_memfd_create (__NR_Linux + 319) 1048 + #define __NR_bpf (__NR_Linux + 319) 1049 1049 1050 1050 /* 1051 1051 * Offset of the last N32 flavoured syscall
-3
arch/mips/kernel/bmips_vec.S
··· 208 208 END(bmips_reset_nmi_vec) 209 209 210 210 .set pop 211 - .previous 212 211 213 212 /*********************************************************************** 214 213 * CPU1 warm restart vector (used for second and subsequent boots). ··· 280 281 jr ra 281 282 282 283 END(bmips_enable_xks01) 283 - 284 - .previous
+2
arch/mips/kernel/cps-vec.S
··· 229 229 nop 230 230 231 231 .set push 232 + .set mips32r2 232 233 .set mt 233 234 234 235 /* Only allow 1 TC per VPE to execute... */ ··· 346 345 nop 347 346 348 347 .set push 348 + .set mips32r2 349 349 .set mt 350 350 351 351 1: /* Enter VPE configuration state */
+37 -3
arch/mips/kernel/cpu-probe.c
··· 193 193 static char unknown_isa[] = KERN_ERR \ 194 194 "Unsupported ISA type, c0.config0: %d."; 195 195 196 + static unsigned int calculate_ftlb_probability(struct cpuinfo_mips *c) 197 + { 198 + 199 + unsigned int probability = c->tlbsize / c->tlbsizevtlb; 200 + 201 + /* 202 + * 0 = All TLBWR instructions go to FTLB 203 + * 1 = 15:1: For every 16 TBLWR instructions, 15 go to the 204 + * FTLB and 1 goes to the VTLB. 205 + * 2 = 7:1: As above with 7:1 ratio. 206 + * 3 = 3:1: As above with 3:1 ratio. 207 + * 208 + * Use the linear midpoint as the probability threshold. 209 + */ 210 + if (probability >= 12) 211 + return 1; 212 + else if (probability >= 6) 213 + return 2; 214 + else 215 + /* 216 + * So FTLB is less than 4 times bigger than VTLB. 217 + * A 3:1 ratio can still be useful though. 218 + */ 219 + return 3; 220 + } 221 + 196 222 static void set_ftlb_enable(struct cpuinfo_mips *c, int enable) 197 223 { 198 224 unsigned int config6; ··· 229 203 case CPU_P5600: 230 204 /* proAptiv & related cores use Config6 to enable the FTLB */ 231 205 config6 = read_c0_config6(); 206 + /* Clear the old probability value */ 207 + config6 &= ~(3 << MIPS_CONF6_FTLBP_SHIFT); 232 208 if (enable) 233 209 /* Enable FTLB */ 234 - write_c0_config6(config6 | MIPS_CONF6_FTLBEN); 210 + write_c0_config6(config6 | 211 + (calculate_ftlb_probability(c) 212 + << MIPS_CONF6_FTLBP_SHIFT) 213 + | MIPS_CONF6_FTLBEN); 235 214 else 236 215 /* Disable FTLB */ 237 216 write_c0_config6(config6 & ~MIPS_CONF6_FTLBEN); ··· 788 757 c->cputype = CPU_LOONGSON2; 789 758 __cpu_name[cpu] = "ICT Loongson-2"; 790 759 set_elf_platform(cpu, "loongson2e"); 760 + set_isa(c, MIPS_CPU_ISA_III); 791 761 break; 792 762 case PRID_REV_LOONGSON2F: 793 763 c->cputype = CPU_LOONGSON2; 794 764 __cpu_name[cpu] = "ICT Loongson-2"; 795 765 set_elf_platform(cpu, "loongson2f"); 766 + set_isa(c, MIPS_CPU_ISA_III); 796 767 break; 797 768 case PRID_REV_LOONGSON3A: 798 769 c->cputype = CPU_LOONGSON3; 799 - c->writecombine = _CACHE_UNCACHED_ACCELERATED; 800 770 __cpu_name[cpu] = "ICT Loongson-3"; 801 771 set_elf_platform(cpu, "loongson3a"); 772 + set_isa(c, MIPS_CPU_ISA_M64R1); 802 773 break; 803 774 case PRID_REV_LOONGSON3B_R1: 804 775 case PRID_REV_LOONGSON3B_R2: 805 776 c->cputype = CPU_LOONGSON3; 806 777 __cpu_name[cpu] = "ICT Loongson-3"; 807 778 set_elf_platform(cpu, "loongson3b"); 779 + set_isa(c, MIPS_CPU_ISA_M64R1); 808 780 break; 809 781 } 810 782 811 - set_isa(c, MIPS_CPU_ISA_III); 812 783 c->options = R4K_OPTS | 813 784 MIPS_CPU_FPU | MIPS_CPU_LLSC | 814 785 MIPS_CPU_32FPR; 815 786 c->tlbsize = 64; 787 + c->writecombine = _CACHE_UNCACHED_ACCELERATED; 816 788 break; 817 789 case PRID_IMP_LOONGSON_32: /* Loongson-1 */ 818 790 decode_configs(c);
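Worked example of the probability heuristic, assuming a 64-entry VTLB paired with a 512-entry FTLB (sizes chosen only for illustration):

    tlbsize     = 64 (VTLB) + 512 (FTLB) = 576 entries
    probability = tlbsize / tlbsizevtlb = 576 / 64 = 9
    9 < 12 but 9 >= 6, so the function returns 2, i.e. a 7:1 FTLB:VTLB write ratio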
+32 -10
arch/mips/kernel/jump_label.c
··· 18 18 19 19 #ifdef HAVE_JUMP_LABEL 20 20 21 - #define J_RANGE_MASK ((1ul << 28) - 1) 21 + /* 22 + * Define parameters for the standard MIPS and the microMIPS jump 23 + * instruction encoding respectively: 24 + * 25 + * - the ISA bit of the target, either 0 or 1 respectively, 26 + * 27 + * - the amount the jump target address is shifted right to fit in the 28 + * immediate field of the machine instruction, either 2 or 1, 29 + * 30 + * - the mask determining the size of the jump region relative to the 31 + * delay-slot instruction, either 256MB or 128MB, 32 + * 33 + * - the jump target alignment, either 4 or 2 bytes. 34 + */ 35 + #define J_ISA_BIT IS_ENABLED(CONFIG_CPU_MICROMIPS) 36 + #define J_RANGE_SHIFT (2 - J_ISA_BIT) 37 + #define J_RANGE_MASK ((1ul << (26 + J_RANGE_SHIFT)) - 1) 38 + #define J_ALIGN_MASK ((1ul << J_RANGE_SHIFT) - 1) 22 39 23 40 void arch_jump_label_transform(struct jump_entry *e, 24 41 enum jump_label_type type) 25 42 { 43 + union mips_instruction *insn_p; 26 44 union mips_instruction insn; 27 - union mips_instruction *insn_p = 28 - (union mips_instruction *)(unsigned long)e->code; 29 45 30 - /* Jump only works within a 256MB aligned region. */ 31 - BUG_ON((e->target & ~J_RANGE_MASK) != (e->code & ~J_RANGE_MASK)); 46 + insn_p = (union mips_instruction *)msk_isa16_mode(e->code); 32 47 33 - /* Target must have 4 byte alignment. */ 34 - BUG_ON((e->target & 3) != 0); 48 + /* Jump only works within an aligned region its delay slot is in. */ 49 + BUG_ON((e->target & ~J_RANGE_MASK) != ((e->code + 4) & ~J_RANGE_MASK)); 50 + 51 + /* Target must have the right alignment and ISA must be preserved. */ 52 + BUG_ON((e->target & J_ALIGN_MASK) != J_ISA_BIT); 35 53 36 54 if (type == JUMP_LABEL_ENABLE) { 37 - insn.j_format.opcode = j_op; 38 - insn.j_format.target = (e->target & J_RANGE_MASK) >> 2; 55 + insn.j_format.opcode = J_ISA_BIT ? mm_j32_op : j_op; 56 + insn.j_format.target = e->target >> J_RANGE_SHIFT; 39 57 } else { 40 58 insn.word = 0; /* nop */ 41 59 } 42 60 43 61 get_online_cpus(); 44 62 mutex_lock(&text_mutex); 45 - *insn_p = insn; 63 + if (IS_ENABLED(CONFIG_CPU_MICROMIPS)) { 64 + insn_p->halfword[0] = insn.word >> 16; 65 + insn_p->halfword[1] = insn.word; 66 + } else 67 + *insn_p = insn; 46 68 47 69 flush_icache_range((unsigned long)insn_p, 48 70 (unsigned long)insn_p + sizeof(*insn_p));
+2 -2
arch/mips/kernel/rtlx.c
··· 94 94 int ret = 0; 95 95 96 96 if (index >= RTLX_CHANNELS) { 97 - pr_debug(KERN_DEBUG "rtlx_open index out of range\n"); 97 + pr_debug("rtlx_open index out of range\n"); 98 98 return -ENOSYS; 99 99 } 100 100 101 101 if (atomic_inc_return(&channel_wqs[index].in_open) > 1) { 102 - pr_debug(KERN_DEBUG "rtlx_open channel %d already opened\n", index); 102 + pr_debug("rtlx_open channel %d already opened\n", index); 103 103 ret = -EBUSY; 104 104 goto out_fail; 105 105 }
+2 -2
arch/mips/kernel/setup.c
··· 485 485 * NOTE: historically plat_mem_setup did the entire platform initialization. 486 486 * This was rather impractical because it meant plat_mem_setup had to 487 487 * get away without any kind of memory allocator. To keep old code from 488 - * breaking plat_setup was just renamed to plat_setup and a second platform 488 + * breaking plat_setup was just renamed to plat_mem_setup and a second platform 489 489 * initialization hook for anything else was introduced. 490 490 */ 491 491 ··· 493 493 494 494 static int __init early_parse_mem(char *p) 495 495 { 496 - unsigned long start, size; 496 + phys_t start, size; 497 497 498 498 /* 499 499 * If a user specifies memory size, we
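The unsigned long to phys_t change matters on 32-bit kernels with wider physical addressing, where a mem= argument above 4GB would silently truncate. A simplified sketch of the parse path; memparse() is the real helper, and the MIPS-specific add_memory_region()/BOOT_MEM_RAM calls are quoted from asm/bootinfo.h of this era:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <asm/bootinfo.h>

    static int __init example_parse_mem(char *p)
    {
            phys_t start = 0, size;

            /* memparse() yields unsigned long long, so a 64-bit physical
             * type keeps e.g. "mem=5G" intact on 32-bit kernels. */
            size = memparse(p, &p);
            if (*p == '@')
                    start = memparse(p + 1, &p);

            add_memory_region(start, size, BOOT_MEM_RAM);
            return 0;
    }
    early_param("mem", example_parse_mem);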
+4 -4
arch/mips/kernel/signal.c
··· 658 658 save_fp_context = _save_fp_context; 659 659 restore_fp_context = _restore_fp_context; 660 660 } else { 661 - save_fp_context = copy_fp_from_sigcontext; 662 - restore_fp_context = copy_fp_to_sigcontext; 661 + save_fp_context = copy_fp_to_sigcontext; 662 + restore_fp_context = copy_fp_from_sigcontext; 663 663 } 664 664 #endif /* CONFIG_SMP */ 665 665 #else 666 - save_fp_context = copy_fp_from_sigcontext;; 667 - restore_fp_context = copy_fp_to_sigcontext; 666 + save_fp_context = copy_fp_to_sigcontext; 667 + restore_fp_context = copy_fp_from_sigcontext; 668 668 #endif 669 669 670 670 return 0;
+1
arch/mips/lib/memcpy.S
··· 503 503 STOREB(t0, NBYTES-2(dst), .Ls_exc_p1\@) 504 504 .Ldone\@: 505 505 jr ra 506 + nop 506 507 .if __memcpy == 1 507 508 END(memcpy) 508 509 .set __memcpy, 0
+2 -1
arch/mips/loongson/common/Makefile
··· 11 11 # Serial port support 12 12 # 13 13 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o 14 - obj-$(CONFIG_SERIAL_8250) += serial.o 14 + loongson-serial-$(CONFIG_SERIAL_8250) := serial.o 15 + obj-y += $(loongson-serial-m) $(loongson-serial-y) 15 16 obj-$(CONFIG_LOONGSON_UART_BASE) += uart_base.o 16 17 obj-$(CONFIG_LOONGSON_MC146818) += rtc.o 17 18
+1
arch/mips/loongson/loongson-3/numa.c
··· 33 33 34 34 static struct node_data prealloc__node_data[MAX_NUMNODES]; 35 35 unsigned char __node_distances[MAX_NUMNODES][MAX_NUMNODES]; 36 + EXPORT_SYMBOL(__node_distances); 36 37 struct node_data *__node_data[MAX_NUMNODES]; 37 38 EXPORT_SYMBOL(__node_data); 38 39
+4
arch/mips/mm/tlb-r4k.c
··· 299 299 300 300 local_irq_save(flags); 301 301 302 + htw_stop(); 302 303 pid = read_c0_entryhi() & ASID_MASK; 303 304 address &= (PAGE_MASK << 1); 304 305 write_c0_entryhi(address | pid); ··· 347 346 tlb_write_indexed(); 348 347 } 349 348 tlbw_use_hazard(); 349 + htw_start(); 350 350 flush_itlb_vm(vma); 351 351 local_irq_restore(flags); 352 352 } ··· 424 422 425 423 local_irq_save(flags); 426 424 /* Save old context and create impossible VPN2 value */ 425 + htw_stop(); 427 426 old_ctx = read_c0_entryhi(); 428 427 old_pagemask = read_c0_pagemask(); 429 428 wired = read_c0_wired(); ··· 446 443 447 444 write_c0_entryhi(old_ctx); 448 445 write_c0_pagemask(old_pagemask); 446 + htw_start(); 449 447 out: 450 448 local_irq_restore(flags); 451 449 return ret;
+9 -1
arch/mips/mm/tlbex.c
··· 1872 1872 uasm_l_smp_pgtable_change(l, *p); 1873 1873 #endif 1874 1874 iPTE_LW(p, wr.r1, wr.r2); /* get even pte */ 1875 - if (!m4kc_tlbp_war()) 1875 + if (!m4kc_tlbp_war()) { 1876 1876 build_tlb_probe_entry(p); 1877 + if (cpu_has_htw) { 1878 + /* race condition happens, leaving */ 1879 + uasm_i_ehb(p); 1880 + uasm_i_mfc0(p, wr.r3, C0_INDEX); 1881 + uasm_il_bltz(p, r, wr.r3, label_leave); 1882 + uasm_i_nop(p); 1883 + } 1884 + } 1877 1885 return wr; 1878 1886 } 1879 1887
+2 -6
arch/mips/mti-sead3/sead3-leds.c
··· 5 5 * 6 6 * Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved. 7 7 */ 8 - #include <linux/module.h> 8 + #include <linux/init.h> 9 9 #include <linux/leds.h> 10 10 #include <linux/platform_device.h> 11 11 ··· 76 76 return platform_device_register(&fled_device); 77 77 } 78 78 79 - module_init(led_init); 80 - 81 - MODULE_AUTHOR("Chris Dearman <chris@mips.com>"); 82 - MODULE_LICENSE("GPL"); 83 - MODULE_DESCRIPTION("LED probe driver for SEAD-3"); 79 + device_initcall(led_init);
+8 -4
arch/mips/netlogic/xlp/Makefile
··· 1 1 obj-y += setup.o nlm_hal.o cop2-ex.o dt.o 2 2 obj-$(CONFIG_SMP) += wakeup.o 3 - obj-$(CONFIG_USB) += usb-init.o 4 - obj-$(CONFIG_USB) += usb-init-xlp2.o 5 - obj-$(CONFIG_SATA_AHCI) += ahci-init.o 6 - obj-$(CONFIG_SATA_AHCI) += ahci-init-xlp2.o 3 + ifdef CONFIG_USB 4 + obj-y += usb-init.o 5 + obj-y += usb-init-xlp2.o 6 + endif 7 + ifdef CONFIG_SATA_AHCI 8 + obj-y += ahci-init.o 9 + obj-y += ahci-init-xlp2.o 10 + endif
+1 -1
arch/mips/oprofile/backtrace.c
··· 92 92 /* This marks the end of the previous function, 93 93 which means we overran. */ 94 94 break; 95 - stack_size = (unsigned) stack_adjustment; 95 + stack_size = (unsigned long) stack_adjustment; 96 96 } else if (is_ra_save_ins(&ip)) { 97 97 int ra_slot = ip.i_format.simmediate; 98 98 if (ra_slot < 0)
+1
arch/mips/sgi-ip27/ip27-memory.c
··· 107 107 } 108 108 109 109 unsigned char __node_distances[MAX_COMPACT_NODES][MAX_COMPACT_NODES]; 110 + EXPORT_SYMBOL(__node_distances); 110 111 111 112 static int __init compute_node_distance(nasid_t nasid_a, nasid_t nasid_b) 112 113 {
-2
arch/powerpc/include/asm/pci-bridge.h
··· 159 159 160 160 int pci_ext_config_space; /* for pci devices */ 161 161 162 - bool force_32bit_msi; 163 - 164 162 struct pci_dev *pcidev; /* back-pointer to the pci device */ 165 163 #ifdef CONFIG_EEH 166 164 struct eeh_dev *edev; /* eeh device */
+1 -1
arch/powerpc/kernel/eeh_sysfs.c
··· 65 65 return -ENODEV; 66 66 67 67 state = eeh_ops->get_state(edev->pe, NULL); 68 - return sprintf(buf, "%0x08x %0x08x\n", 68 + return sprintf(buf, "0x%08x 0x%08x\n", 69 69 state, edev->pe->state); 70 70 } 71 71
-10
arch/powerpc/kernel/pci_64.c
··· 266 266 } 267 267 EXPORT_SYMBOL(pcibus_to_node); 268 268 #endif 269 - 270 - static void quirk_radeon_32bit_msi(struct pci_dev *dev) 271 - { 272 - struct pci_dn *pdn = pci_get_pdn(dev); 273 - 274 - if (pdn) 275 - pdn->force_32bit_msi = true; 276 - } 277 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x68f2, quirk_radeon_32bit_msi); 278 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0xaa68, quirk_radeon_32bit_msi);
+2 -2
arch/powerpc/kernel/vdso32/getcpu.S
··· 30 30 V_FUNCTION_BEGIN(__kernel_getcpu) 31 31 .cfi_startproc 32 32 mfspr r5,SPRN_SPRG_VDSO_READ 33 - cmpdi cr0,r3,0 34 - cmpdi cr1,r4,0 33 + cmpwi cr0,r3,0 34 + cmpwi cr1,r4,0 35 35 clrlwi r6,r5,16 36 36 rlwinm r7,r5,16,31-15,31-0 37 37 beq cr0,1f
+1 -1
arch/powerpc/platforms/powernv/opal-hmi.c
··· 57 57 }; 58 58 59 59 /* Print things out */ 60 - if (hmi_evt->version != OpalHMIEvt_V1) { 60 + if (hmi_evt->version < OpalHMIEvt_V1) { 61 61 pr_err("HMI Interrupt, Unknown event version %d !\n", 62 62 hmi_evt->version); 63 63 return;
+2 -3
arch/powerpc/platforms/powernv/pci-ioda.c
··· 1509 1509 unsigned int is_64, struct msi_msg *msg) 1510 1510 { 1511 1511 struct pnv_ioda_pe *pe = pnv_ioda_get_pe(dev); 1512 - struct pci_dn *pdn = pci_get_pdn(dev); 1513 1512 unsigned int xive_num = hwirq - phb->msi_base; 1514 1513 __be32 data; 1515 1514 int rc; ··· 1522 1523 return -ENXIO; 1523 1524 1524 1525 /* Force 32-bit MSI on some broken devices */ 1525 - if (pdn && pdn->force_32bit_msi) 1526 + if (dev->no_64bit_msi) 1526 1527 is_64 = 0; 1527 1528 1528 1529 /* Assign XIVE to PE */ ··· 1996 1997 if (is_kdump_kernel()) { 1997 1998 pr_info(" Issue PHB reset ...\n"); 1998 1999 ioda_eeh_phb_reset(hose, EEH_RESET_FUNDAMENTAL); 1999 - ioda_eeh_phb_reset(hose, OPAL_DEASSERT_RESET); 2000 + ioda_eeh_phb_reset(hose, EEH_RESET_DEACTIVATE); 2000 2001 } 2001 2002 2002 2003 /* Configure M64 window */
+1 -2
arch/powerpc/platforms/powernv/pci.c
··· 50 50 { 51 51 struct pci_controller *hose = pci_bus_to_host(pdev->bus); 52 52 struct pnv_phb *phb = hose->private_data; 53 - struct pci_dn *pdn = pci_get_pdn(pdev); 54 53 struct msi_desc *entry; 55 54 struct msi_msg msg; 56 55 int hwirq; ··· 59 60 if (WARN_ON(!phb) || !phb->msi_bmp.bitmap) 60 61 return -ENODEV; 61 62 62 - if (pdn && pdn->force_32bit_msi && !phb->msi32_support) 63 + if (pdev->no_64bit_msi && !phb->msi32_support) 63 64 return -ENODEV; 64 65 65 66 list_for_each_entry(entry, &pdev->msi_list, list) {
+1 -1
arch/powerpc/platforms/pseries/msi.c
··· 420 420 */ 421 421 again: 422 422 if (type == PCI_CAP_ID_MSI) { 423 - if (pdn->force_32bit_msi) { 423 + if (pdev->no_64bit_msi) { 424 424 rc = rtas_change_msi(pdn, RTAS_CHANGE_32MSI_FN, nvec); 425 425 if (rc < 0) { 426 426 /*
+1 -1
arch/powerpc/sysdev/fsl_msi.c
··· 361 361 cascade_data->virq = virt_msir; 362 362 msi->cascade_array[irq_index] = cascade_data; 363 363 364 - ret = request_irq(virt_msir, fsl_msi_cascade, 0, 364 + ret = request_irq(virt_msir, fsl_msi_cascade, IRQF_NO_THREAD, 365 365 "fsl-msi-cascade", cascade_data); 366 366 if (ret) { 367 367 dev_err(&dev->dev, "failed to request_irq(%d), ret = %d\n",
+3 -3
arch/powerpc/xmon/xmon.c
··· 293 293 args.token = rtas_token("set-indicator"); 294 294 if (args.token == RTAS_UNKNOWN_SERVICE) 295 295 return; 296 - args.nargs = 3; 297 - args.nret = 1; 296 + args.nargs = cpu_to_be32(3); 297 + args.nret = cpu_to_be32(1); 298 298 args.rets = &args.args[3]; 299 - args.args[0] = SURVEILLANCE_TOKEN; 299 + args.args[0] = cpu_to_be32(SURVEILLANCE_TOKEN); 300 300 args.args[1] = 0; 301 301 args.args[2] = 0; 302 302 enter_rtas(__pa(&args));
+8
arch/sparc/include/asm/dma-mapping.h
··· 12 12 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f) 13 13 #define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h) 14 14 15 + static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, 16 + enum dma_data_direction dir) 17 + { 18 + /* Since dma_{alloc,free}_noncoherent() allocated coherent memory, this 19 + * routine can be a nop. 20 + */ 21 + } 22 + 15 23 extern struct dma_map_ops *dma_ops; 16 24 extern struct dma_map_ops *leon_dma_ops; 17 25 extern struct dma_map_ops pci32_dma_ops;
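The stub exists for drivers that pair dma_alloc_noncoherent() with dma_cache_sync(); since sparc's noncoherent allocator hands back coherent memory (see the #define above), the sync has nothing to do. Typical caller pattern, sketched with an invented helper name:

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static void *example_shared_buffer(struct device *dev, size_t size,
                                       dma_addr_t *handle)
    {
            void *buf = dma_alloc_noncoherent(dev, size, handle, GFP_KERNEL);

            if (!buf)
                    return NULL;

            /* On sparc this is now a nop: the memory is already coherent. */
            dma_cache_sync(dev, buf, size, DMA_TO_DEVICE);
            return buf;
    }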
+1 -1
arch/x86/Kconfig
··· 144 144 145 145 config PERF_EVENTS_INTEL_UNCORE 146 146 def_bool y 147 - depends on PERF_EVENTS && SUP_SUP_INTEL && PCI 147 + depends on PERF_EVENTS && CPU_SUP_INTEL && PCI 148 148 149 149 config OUTPUT_FORMAT 150 150 string
-1
arch/x86/include/asm/page_32_types.h
··· 20 20 #define THREAD_SIZE_ORDER 1 21 21 #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) 22 22 23 - #define STACKFAULT_STACK 0 24 23 #define DOUBLEFAULT_STACK 1 25 24 #define NMI_STACK 0 26 25 #define DEBUG_STACK 0
+5 -6
arch/x86/include/asm/page_64_types.h
··· 14 14 #define IRQ_STACK_ORDER 2 15 15 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER) 16 16 17 - #define STACKFAULT_STACK 1 18 - #define DOUBLEFAULT_STACK 2 19 - #define NMI_STACK 3 20 - #define DEBUG_STACK 4 21 - #define MCE_STACK 5 22 - #define N_EXCEPTION_STACKS 5 /* hw limit: 7 */ 17 + #define DOUBLEFAULT_STACK 1 18 + #define NMI_STACK 2 19 + #define DEBUG_STACK 3 20 + #define MCE_STACK 4 21 + #define N_EXCEPTION_STACKS 4 /* hw limit: 7 */ 23 22 24 23 #define PUD_PAGE_SIZE (_AC(1, UL) << PUD_SHIFT) 25 24 #define PUD_PAGE_MASK (~(PUD_PAGE_SIZE-1))
+1 -1
arch/x86/include/asm/thread_info.h
··· 141 141 /* Only used for 64 bit */ 142 142 #define _TIF_DO_NOTIFY_MASK \ 143 143 (_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME | \ 144 - _TIF_USER_RETURN_NOTIFY) 144 + _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE) 145 145 146 146 /* flags to check in __switch_to() */ 147 147 #define _TIF_WORK_CTXSW \
+1
arch/x86/include/asm/traps.h
··· 39 39 40 40 #ifdef CONFIG_TRACING 41 41 asmlinkage void trace_page_fault(void); 42 + #define trace_stack_segment stack_segment 42 43 #define trace_divide_error divide_error 43 44 #define trace_bounds bounds 44 45 #define trace_invalid_op invalid_op
+2
arch/x86/kernel/cpu/common.c
··· 146 146 147 147 static int __init x86_xsave_setup(char *s) 148 148 { 149 + if (strlen(s)) 150 + return 0; 149 151 setup_clear_cpu_cap(X86_FEATURE_XSAVE); 150 152 setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT); 151 153 setup_clear_cpu_cap(X86_FEATURE_XSAVES);
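One reading of the new strlen() guard (inferred from the change, not stated in the hunk): __setup() handlers match boot arguments by prefix, so "noxsaves" or "noxsaveopt" would also invoke the "noxsave" handler; checking strlen(s) and returning 0 declines anything but an exact match, leaving the argument for the more specific handler. Sketch of the registration pattern:

    #include <linux/init.h>
    #include <linux/string.h>
    #include <asm/cpufeature.h>

    static int __init example_noxsave_setup(char *s)
    {
            /* "noxsaves" lands here too, with s pointing at the leftover
             * "s"; returning 0 means "not consumed by this handler". */
            if (strlen(s))
                    return 0;

            setup_clear_cpu_cap(X86_FEATURE_XSAVE);  /* as in the hunk above */
            return 1;
    }
    __setup("noxsave", example_noxsave_setup);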
+8
arch/x86/kernel/cpu/microcode/core.c
··· 465 465 466 466 if (uci->valid && uci->mc) 467 467 microcode_ops->apply_microcode(cpu); 468 + else if (!uci->mc) 469 + /* 470 + * We might resume and not have applied late microcode but still 471 + * have a newer patch stashed from the early loader. We don't 472 + * have it in uci->mc so we have to load it the same way we're 473 + * applying patches early on the APs. 474 + */ 475 + load_ucode_ap(); 468 476 } 469 477 470 478 static struct syscore_ops mc_syscore_ops = {
+45 -4
arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c
··· 486 486 .attrs = snbep_uncore_qpi_formats_attr, 487 487 }; 488 488 489 - #define SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 490 - .init_box = snbep_uncore_msr_init_box, \ 489 + #define __SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 491 490 .disable_box = snbep_uncore_msr_disable_box, \ 492 491 .enable_box = snbep_uncore_msr_enable_box, \ 493 492 .disable_event = snbep_uncore_msr_disable_event, \ 494 493 .enable_event = snbep_uncore_msr_enable_event, \ 495 494 .read_counter = uncore_msr_read_counter 495 + 496 + #define SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 497 + __SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), \ 498 + .init_box = snbep_uncore_msr_init_box \ 496 499 497 500 static struct intel_uncore_ops snbep_uncore_msr_ops = { 498 501 SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), ··· 1922 1919 .format_group = &hswep_uncore_cbox_format_group, 1923 1920 }; 1924 1921 1922 + /* 1923 + * Write SBOX Initialization register bit by bit to avoid spurious #GPs 1924 + */ 1925 + static void hswep_uncore_sbox_msr_init_box(struct intel_uncore_box *box) 1926 + { 1927 + unsigned msr = uncore_msr_box_ctl(box); 1928 + 1929 + if (msr) { 1930 + u64 init = SNBEP_PMON_BOX_CTL_INT; 1931 + u64 flags = 0; 1932 + int i; 1933 + 1934 + for_each_set_bit(i, (unsigned long *)&init, 64) { 1935 + flags |= (1ULL << i); 1936 + wrmsrl(msr, flags); 1937 + } 1938 + } 1939 + } 1940 + 1941 + static struct intel_uncore_ops hswep_uncore_sbox_msr_ops = { 1942 + __SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), 1943 + .init_box = hswep_uncore_sbox_msr_init_box 1944 + }; 1945 + 1925 1946 static struct attribute *hswep_uncore_sbox_formats_attr[] = { 1926 1947 &format_attr_event.attr, 1927 1948 &format_attr_umask.attr, ··· 1971 1944 .event_mask = HSWEP_S_MSR_PMON_RAW_EVENT_MASK, 1972 1945 .box_ctl = HSWEP_S0_MSR_PMON_BOX_CTL, 1973 1946 .msr_offset = HSWEP_SBOX_MSR_OFFSET, 1974 - .ops = &snbep_uncore_msr_ops, 1947 + .ops = &hswep_uncore_sbox_msr_ops, 1975 1948 .format_group = &hswep_uncore_sbox_format_group, 1976 1949 }; 1977 1950 ··· 2052 2025 SNBEP_UNCORE_PCI_COMMON_INIT(), 2053 2026 }; 2054 2027 2028 + static unsigned hswep_uncore_irp_ctrs[] = {0xa0, 0xa8, 0xb0, 0xb8}; 2029 + 2030 + static u64 hswep_uncore_irp_read_counter(struct intel_uncore_box *box, struct perf_event *event) 2031 + { 2032 + struct pci_dev *pdev = box->pci_dev; 2033 + struct hw_perf_event *hwc = &event->hw; 2034 + u64 count = 0; 2035 + 2036 + pci_read_config_dword(pdev, hswep_uncore_irp_ctrs[hwc->idx], (u32 *)&count); 2037 + pci_read_config_dword(pdev, hswep_uncore_irp_ctrs[hwc->idx] + 4, (u32 *)&count + 1); 2038 + 2039 + return count; 2040 + } 2041 + 2055 2042 static struct intel_uncore_ops hswep_uncore_irp_ops = { 2056 2043 .init_box = snbep_uncore_pci_init_box, 2057 2044 .disable_box = snbep_uncore_pci_disable_box, 2058 2045 .enable_box = snbep_uncore_pci_enable_box, 2059 2046 .disable_event = ivbep_uncore_irp_disable_event, 2060 2047 .enable_event = ivbep_uncore_irp_enable_event, 2061 - .read_counter = ivbep_uncore_irp_read_counter, 2048 + .read_counter = hswep_uncore_irp_read_counter, 2062 2049 }; 2063 2050 2064 2051 static struct intel_uncore_type hswep_uncore_irp = {
-1
arch/x86/kernel/dumpstack_64.c
··· 24 24 [ DEBUG_STACK-1 ] = "#DB", 25 25 [ NMI_STACK-1 ] = "NMI", 26 26 [ DOUBLEFAULT_STACK-1 ] = "#DF", 27 - [ STACKFAULT_STACK-1 ] = "#SS", 28 27 [ MCE_STACK-1 ] = "#MC", 29 28 #if DEBUG_STKSZ > EXCEPTION_STKSZ 30 29 [ N_EXCEPTION_STACKS ...
+22 -59
arch/x86/kernel/entry_64.S
··· 828 828 jnz native_irq_return_ldt 829 829 #endif 830 830 831 + .global native_irq_return_iret 831 832 native_irq_return_iret: 833 + /* 834 + * This may fault. Non-paranoid faults on return to userspace are 835 + * handled by fixup_bad_iret. These include #SS, #GP, and #NP. 836 + * Double-faults due to espfix64 are handled in do_double_fault. 837 + * Other faults here are fatal. 838 + */ 832 839 iretq 833 - _ASM_EXTABLE(native_irq_return_iret, bad_iret) 834 840 835 841 #ifdef CONFIG_X86_ESPFIX64 836 842 native_irq_return_ldt: ··· 863 857 popq_cfi %rax 864 858 jmp native_irq_return_iret 865 859 #endif 866 - 867 - .section .fixup,"ax" 868 - bad_iret: 869 - /* 870 - * The iret traps when the %cs or %ss being restored is bogus. 871 - * We've lost the original trap vector and error code. 872 - * #GPF is the most likely one to get for an invalid selector. 873 - * So pretend we completed the iret and took the #GPF in user mode. 874 - * 875 - * We are now running with the kernel GS after exception recovery. 876 - * But error_entry expects us to have user GS to match the user %cs, 877 - * so swap back. 878 - */ 879 - pushq $0 880 - 881 - SWAPGS 882 - jmp general_protection 883 - 884 - .previous 885 860 886 861 /* edi: workmask, edx: work */ 887 862 retint_careful: ··· 908 921 #endif 909 922 CFI_ENDPROC 910 923 END(common_interrupt) 911 - 912 - /* 913 - * If IRET takes a fault on the espfix stack, then we 914 - * end up promoting it to a doublefault. In that case, 915 - * modify the stack to make it look like we just entered 916 - * the #GP handler from user space, similar to bad_iret. 917 - */ 918 - #ifdef CONFIG_X86_ESPFIX64 919 - ALIGN 920 - __do_double_fault: 921 - XCPT_FRAME 1 RDI+8 922 - movq RSP(%rdi),%rax /* Trap on the espfix stack? */ 923 - sarq $PGDIR_SHIFT,%rax 924 - cmpl $ESPFIX_PGD_ENTRY,%eax 925 - jne do_double_fault /* No, just deliver the fault */ 926 - cmpl $__KERNEL_CS,CS(%rdi) 927 - jne do_double_fault 928 - movq RIP(%rdi),%rax 929 - cmpq $native_irq_return_iret,%rax 930 - jne do_double_fault /* This shouldn't happen... */ 931 - movq PER_CPU_VAR(kernel_stack),%rax 932 - subq $(6*8-KERNEL_STACK_OFFSET),%rax /* Reset to original stack */ 933 - movq %rax,RSP(%rdi) 934 - movq $0,(%rax) /* Missing (lost) #GP error code */ 935 - movq $general_protection,RIP(%rdi) 936 - retq 937 - CFI_ENDPROC 938 - END(__do_double_fault) 939 - #else 940 - # define __do_double_fault do_double_fault 941 - #endif 942 924 943 925 /* 944 926 * APIC interrupts. 
··· 1080 1124 idtentry bounds do_bounds has_error_code=0 1081 1125 idtentry invalid_op do_invalid_op has_error_code=0 1082 1126 idtentry device_not_available do_device_not_available has_error_code=0 1083 - idtentry double_fault __do_double_fault has_error_code=1 paranoid=1 1127 + idtentry double_fault do_double_fault has_error_code=1 paranoid=1 1084 1128 idtentry coprocessor_segment_overrun do_coprocessor_segment_overrun has_error_code=0 1085 1129 idtentry invalid_TSS do_invalid_TSS has_error_code=1 1086 1130 idtentry segment_not_present do_segment_not_present has_error_code=1 ··· 1245 1289 1246 1290 idtentry debug do_debug has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK 1247 1291 idtentry int3 do_int3 has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK 1248 - idtentry stack_segment do_stack_segment has_error_code=1 paranoid=1 1292 + idtentry stack_segment do_stack_segment has_error_code=1 1249 1293 #ifdef CONFIG_XEN 1250 1294 idtentry xen_debug do_debug has_error_code=0 1251 1295 idtentry xen_int3 do_int3 has_error_code=0 ··· 1355 1399 1356 1400 /* 1357 1401 * There are two places in the kernel that can potentially fault with 1358 - * usergs. Handle them here. The exception handlers after iret run with 1359 - * kernel gs again, so don't set the user space flag. B stepping K8s 1360 - * sometimes report an truncated RIP for IRET exceptions returning to 1361 - * compat mode. Check for these here too. 1402 + * usergs. Handle them here. B stepping K8s sometimes report a 1403 + * truncated RIP for IRET exceptions returning to compat mode. Check 1404 + * for these here too. 1362 1405 */ 1363 1406 error_kernelspace: 1364 1407 CFI_REL_OFFSET rcx, RCX+8 1365 1408 incl %ebx 1366 1409 leaq native_irq_return_iret(%rip),%rcx 1367 1410 cmpq %rcx,RIP+8(%rsp) 1368 - je error_swapgs 1411 + je error_bad_iret 1369 1412 movl %ecx,%eax /* zero extend */ 1370 1413 cmpq %rax,RIP+8(%rsp) 1371 1414 je bstep_iret ··· 1375 1420 bstep_iret: 1376 1421 /* Fix truncated RIP */ 1377 1422 movq %rcx,RIP+8(%rsp) 1378 - jmp error_swapgs 1423 + /* fall through */ 1424 + 1425 + error_bad_iret: 1426 + SWAPGS 1427 + mov %rsp,%rdi 1428 + call fixup_bad_iret 1429 + mov %rax,%rsp 1430 + decl %ebx /* Return to usergs */ 1431 + jmp error_sti 1379 1432 CFI_ENDPROC 1380 1433 END(error_entry) 1381 1434
+1 -1
arch/x86/kernel/ptrace.c
··· 1484 1484 */ 1485 1485 if (work & _TIF_NOHZ) { 1486 1486 user_exit(); 1487 - work &= ~TIF_NOHZ; 1487 + work &= ~_TIF_NOHZ; 1488 1488 } 1489 1489 1490 1490 #ifdef CONFIG_SECCOMP
+54 -17
arch/x86/kernel/traps.c
··· 233 233 DO_ERROR(X86_TRAP_OLD_MF, SIGFPE, "coprocessor segment overrun",coprocessor_segment_overrun) 234 234 DO_ERROR(X86_TRAP_TS, SIGSEGV, "invalid TSS", invalid_TSS) 235 235 DO_ERROR(X86_TRAP_NP, SIGBUS, "segment not present", segment_not_present) 236 - #ifdef CONFIG_X86_32 237 236 DO_ERROR(X86_TRAP_SS, SIGBUS, "stack segment", stack_segment) 238 - #endif 239 237 DO_ERROR(X86_TRAP_AC, SIGBUS, "alignment check", alignment_check) 240 238 241 239 #ifdef CONFIG_X86_64 242 240 /* Runs on IST stack */ 243 - dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code) 244 - { 245 - enum ctx_state prev_state; 246 - 247 - prev_state = exception_enter(); 248 - if (notify_die(DIE_TRAP, "stack segment", regs, error_code, 249 - X86_TRAP_SS, SIGBUS) != NOTIFY_STOP) { 250 - preempt_conditional_sti(regs); 251 - do_trap(X86_TRAP_SS, SIGBUS, "stack segment", regs, error_code, NULL); 252 - preempt_conditional_cli(regs); 253 - } 254 - exception_exit(prev_state); 255 - } 256 - 257 241 dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code) 258 242 { 259 243 static const char str[] = "double fault"; 260 244 struct task_struct *tsk = current; 245 + 246 + #ifdef CONFIG_X86_ESPFIX64 247 + extern unsigned char native_irq_return_iret[]; 248 + 249 + /* 250 + * If IRET takes a non-IST fault on the espfix64 stack, then we 251 + * end up promoting it to a doublefault. In that case, modify 252 + * the stack to make it look like we just entered the #GP 253 + * handler from user space, similar to bad_iret. 254 + */ 255 + if (((long)regs->sp >> PGDIR_SHIFT) == ESPFIX_PGD_ENTRY && 256 + regs->cs == __KERNEL_CS && 257 + regs->ip == (unsigned long)native_irq_return_iret) 258 + { 259 + struct pt_regs *normal_regs = task_pt_regs(current); 260 + 261 + /* Fake a #GP(0) from userspace. */ 262 + memmove(&normal_regs->ip, (void *)regs->sp, 5*8); 263 + normal_regs->orig_ax = 0; /* Missing (lost) #GP error code */ 264 + regs->ip = (unsigned long)general_protection; 265 + regs->sp = (unsigned long)&normal_regs->orig_ax; 266 + return; 267 + } 268 + #endif 261 269 262 270 exception_enter(); 263 271 /* Return not checked because double fault cannot be ignored */ ··· 407 399 return regs; 408 400 } 409 401 NOKPROBE_SYMBOL(sync_regs); 402 + 403 + struct bad_iret_stack { 404 + void *error_entry_ret; 405 + struct pt_regs regs; 406 + }; 407 + 408 + asmlinkage __visible 409 + struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s) 410 + { 411 + /* 412 + * This is called from entry_64.S early in handling a fault 413 + * caused by a bad iret to user mode. To handle the fault 414 + * correctly, we want to move our stack frame to task_pt_regs 415 + * and we want to pretend that the exception came from the 416 + * iret target. 417 + */ 418 + struct bad_iret_stack *new_stack = 419 + container_of(task_pt_regs(current), 420 + struct bad_iret_stack, regs); 421 + 422 + /* Copy the IRET target to the new stack. */ 423 + memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8); 424 + 425 + /* Copy the remainder of the stack from the current stack. 
*/ 426 + memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip)); 427 + 428 + BUG_ON(!user_mode_vm(&new_stack->regs)); 429 + return new_stack; 430 + } 410 431 #endif 411 432 412 433 /* ··· 815 778 set_intr_gate(X86_TRAP_OLD_MF, coprocessor_segment_overrun); 816 779 set_intr_gate(X86_TRAP_TS, invalid_TSS); 817 780 set_intr_gate(X86_TRAP_NP, segment_not_present); 818 - set_intr_gate_ist(X86_TRAP_SS, &stack_segment, STACKFAULT_STACK); 781 + set_intr_gate(X86_TRAP_SS, stack_segment); 819 782 set_intr_gate(X86_TRAP_GP, general_protection); 820 783 set_intr_gate(X86_TRAP_SPURIOUS, spurious_interrupt_bug); 821 784 set_intr_gate(X86_TRAP_MF, coprocessor_error);
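Both memmove() calls in fixup_bad_iret() above move 5*8 bytes because that is exactly the hardware-pushed interrupt return frame on x86_64; &regs.ip points at the start of this block. For reference, the layout being relocated:

/* The five quadwords the CPU pushes for an inter-privilege iret on
 * x86_64 -- the "5*8" in the code above. */
struct iret_frame {
	unsigned long ip;	/* RIP    */
	unsigned long cs;	/* CS     */
	unsigned long flags;	/* RFLAGS */
	unsigned long sp;	/* RSP    */
	unsigned long ss;	/* SS     */
};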
+3 -3
arch/x86/kvm/mmu.c
··· 630 630 * kvm mmu, before reclaiming the page, we should 631 631 * unmap it from mmu first. 632 632 */ 633 - WARN_ON(!kvm_is_mmio_pfn(pfn) && !page_count(pfn_to_page(pfn))); 633 + WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn))); 634 634 635 635 if (!shadow_accessed_mask || old_spte & shadow_accessed_mask) 636 636 kvm_set_pfn_accessed(pfn); ··· 2461 2461 spte |= PT_PAGE_SIZE_MASK; 2462 2462 if (tdp_enabled) 2463 2463 spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn, 2464 - kvm_is_mmio_pfn(pfn)); 2464 + kvm_is_reserved_pfn(pfn)); 2465 2465 2466 2466 if (host_writable) 2467 2467 spte |= SPTE_HOST_WRITEABLE; ··· 2737 2737 * PT_PAGE_TABLE_LEVEL and there would be no adjustment done 2738 2738 * here. 2739 2739 */ 2740 - if (!is_error_noslot_pfn(pfn) && !kvm_is_mmio_pfn(pfn) && 2740 + if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) && 2741 2741 level == PT_PAGE_TABLE_LEVEL && 2742 2742 PageTransCompound(pfn_to_page(pfn)) && 2743 2743 !has_wrprotected_page(vcpu->kvm, gfn, PT_DIRECTORY_LEVEL)) {
+10 -1
arch/x86/mm/init_64.c
··· 1123 1123 unsigned long end = (unsigned long) &__end_rodata_hpage_align; 1124 1124 unsigned long text_end = PFN_ALIGN(&__stop___ex_table); 1125 1125 unsigned long rodata_end = PFN_ALIGN(&__end_rodata); 1126 - unsigned long all_end = PFN_ALIGN(&_end); 1126 + unsigned long all_end; 1127 1127 1128 1128 printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n", 1129 1129 (end - start) >> 10); ··· 1134 1134 /* 1135 1135 * The rodata/data/bss/brk section (but not the kernel text!) 1136 1136 * should also be not-executable. 1137 + * 1138 + * We align all_end to PMD_SIZE because the existing mapping 1139 + * is a full PMD. If we were to align _brk_end to PAGE_SIZE we 1140 + * would split the PMD and the remainder between _brk_end and the 1141 + * end of the PMD would remain mapped executable. 1142 + * 1143 + * Any PMD which was set up after the one which covers _brk_end 1144 + * has been zapped already via cleanup_highmap(). 1137 1145 */ 1146 + all_end = roundup((unsigned long)_brk_end, PMD_SIZE); 1138 1147 set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT); 1139 1148 1140 1149 rodata_test();
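Rounding _brk_end up to PMD_SIZE means set_memory_nx() always covers whole 2 MiB mappings. A quick worked example with a hypothetical _brk_end, using the kernel's roundup() definition:

#include <stdio.h>

#define PMD_SIZE 0x200000UL	/* 2 MiB with 4 KiB pages on x86_64 */
#define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))

int main(void)
{
	unsigned long brk_end = 0xffffffff81e3c000UL;	/* hypothetical */

	/* Prints 0xffffffff82000000: the next PMD boundary, so the NX
	 * range ends where the large-page mapping really ends. */
	printf("all_end = 0x%lx\n", roundup(brk_end, PMD_SIZE));
	return 0;
}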
+10 -1
arch/x86/tools/calc_run_size.pl
··· 19 19 if ($file_offset == 0) { 20 20 $file_offset = $offset; 21 21 } elsif ($file_offset != $offset) { 22 - die ".bss and .brk lack common file offset\n"; 22 + # BFD linker shows the same file offset in ELF. 23 + # Gold linker shows them as consecutive. 24 + next if ($file_offset + $mem_size == $offset + $size); 25 + 26 + printf STDERR "file_offset: 0x%lx\n", $file_offset; 27 + printf STDERR "mem_size: 0x%lx\n", $mem_size; 28 + printf STDERR "offset: 0x%lx\n", $offset; 29 + printf STDERR "size: 0x%lx\n", $size; 30 + 31 + die ".bss and .brk are non-contiguous\n"; 23 32 } 24 33 } 25 34 }
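The new acceptance test tolerates the Gold layout, where .brk is placed directly after .bss in the file rather than sharing its offset. Restated in C as a direct transliteration of the script's comparison (variable names as in the script):

#include <stdbool.h>

/* True when the two sections may be treated as contiguous: Gold lays
 * them out so that file_offset + mem_size lines up with offset + size. */
static bool gold_layout_ok(unsigned long file_offset, unsigned long mem_size,
			   unsigned long offset, unsigned long size)
{
	return file_offset + mem_size == offset + size;
}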
+1 -1
drivers/acpi/device_pm.c
··· 878 878 return 0; 879 879 880 880 target_state = acpi_target_system_state(); 881 - wakeup = device_may_wakeup(dev); 881 + wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev); 882 882 error = acpi_device_wakeup(adev, target_state, wakeup); 883 883 if (wakeup && error) 884 884 return error;
+2
drivers/atm/solos-pci.c
··· 1225 1225 card->config_regs = pci_iomap(dev, 0, CONFIG_RAM_SIZE); 1226 1226 if (!card->config_regs) { 1227 1227 dev_warn(&dev->dev, "Failed to ioremap config registers\n"); 1228 + err = -ENOMEM; 1228 1229 goto out_release_regions; 1229 1230 } 1230 1231 card->buffers = pci_iomap(dev, 1, DATA_RAM_SIZE); 1231 1232 if (!card->buffers) { 1232 1233 dev_warn(&dev->dev, "Failed to ioremap data buffers\n"); 1234 + err = -ENOMEM; 1233 1235 goto out_unmap_config; 1234 1236 } 1235 1237
+18 -17
drivers/clk/at91/clk-usb.c
··· 52 52 53 53 tmp = pmc_read(pmc, AT91_PMC_USB); 54 54 usbdiv = (tmp & AT91_PMC_OHCIUSBDIV) >> SAM9X5_USB_DIV_SHIFT; 55 - return parent_rate / (usbdiv + 1); 55 + 56 + return DIV_ROUND_CLOSEST(parent_rate, (usbdiv + 1)); 56 57 } 57 58 58 59 static long at91sam9x5_clk_usb_round_rate(struct clk_hw *hw, unsigned long rate, 59 60 unsigned long *parent_rate) 60 61 { 61 62 unsigned long div; 62 - unsigned long bestrate; 63 - unsigned long tmp; 63 + 64 + if (!rate) 65 + return -EINVAL; 64 66 65 67 if (rate >= *parent_rate) 66 68 return *parent_rate; 67 69 68 - div = *parent_rate / rate; 69 - if (div >= SAM9X5_USB_MAX_DIV) 70 - return *parent_rate / (SAM9X5_USB_MAX_DIV + 1); 70 + div = DIV_ROUND_CLOSEST(*parent_rate, rate); 71 + if (div > SAM9X5_USB_MAX_DIV + 1) 72 + div = SAM9X5_USB_MAX_DIV + 1; 71 73 72 - bestrate = *parent_rate / div; 73 - tmp = *parent_rate / (div + 1); 74 - if (bestrate - rate > rate - tmp) 75 - bestrate = tmp; 76 - 77 - return bestrate; 74 + return DIV_ROUND_CLOSEST(*parent_rate, div); 78 75 } 79 76 80 77 static int at91sam9x5_clk_usb_set_parent(struct clk_hw *hw, u8 index) ··· 103 106 u32 tmp; 104 107 struct at91sam9x5_clk_usb *usb = to_at91sam9x5_clk_usb(hw); 105 108 struct at91_pmc *pmc = usb->pmc; 106 - unsigned long div = parent_rate / rate; 109 + unsigned long div; 107 110 108 - if (parent_rate % rate || div < 1 || div >= SAM9X5_USB_MAX_DIV) 111 + if (!rate) 112 + return -EINVAL; 113 + 114 + div = DIV_ROUND_CLOSEST(parent_rate, rate); 115 + if (div > SAM9X5_USB_MAX_DIV + 1 || !div) 109 116 return -EINVAL; 110 117 111 118 tmp = pmc_read(pmc, AT91_PMC_USB) & ~AT91_PMC_OHCIUSBDIV; ··· 254 253 255 254 tmp_parent_rate = rate * usb->divisors[i]; 256 255 tmp_parent_rate = __clk_round_rate(parent, tmp_parent_rate); 257 - tmprate = tmp_parent_rate / usb->divisors[i]; 256 + tmprate = DIV_ROUND_CLOSEST(tmp_parent_rate, usb->divisors[i]); 258 257 if (tmprate < rate) 259 258 tmpdiff = rate - tmprate; 260 259 else ··· 282 281 struct at91_pmc *pmc = usb->pmc; 283 282 unsigned long div; 284 283 285 - if (!rate || parent_rate % rate) 284 + if (!rate) 286 285 return -EINVAL; 287 286 288 - div = parent_rate / rate; 287 + div = DIV_ROUND_CLOSEST(parent_rate, rate); 289 288 290 289 for (i = 0; i < RM9200_USB_DIV_TAB_SIZE; i++) { 291 290 if (usb->divisors[i] == div) {
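DIV_ROUND_CLOSEST() matters here because truncating division always rounds the divider down, which can leave the output further from the request than the next divider up. An illustration with made-up rates, using the simple unsigned form of the macro:

#include <stdio.h>

/* Unsigned-only form of the kernel's DIV_ROUND_CLOSEST(). */
#define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

int main(void)
{
	unsigned long parent = 48000000, rate = 10000000;  /* hypothetical */

	/* truncating: div 4 -> 12.0 MHz, 2.0 MHz off the request */
	/* closest:    div 5 ->  9.6 MHz, 0.4 MHz off the request */
	printf("trunc div=%lu, closest div=%lu\n",
	       parent / rate, DIV_ROUND_CLOSEST(parent, rate));
	return 0;
}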
+9 -9
drivers/clk/clk-divider.c
··· 263 263 if (!rate) 264 264 rate = 1; 265 265 266 + /* if read only, just return current value */ 267 + if (divider->flags & CLK_DIVIDER_READ_ONLY) { 268 + bestdiv = readl(divider->reg) >> divider->shift; 269 + bestdiv &= div_mask(divider); 270 + bestdiv = _get_div(divider, bestdiv); 271 + return bestdiv; 272 + } 273 + 266 274 maxdiv = _get_maxdiv(divider); 267 275 268 276 if (!(__clk_get_flags(hw->clk) & CLK_SET_RATE_PARENT)) { ··· 369 361 }; 370 362 EXPORT_SYMBOL_GPL(clk_divider_ops); 371 363 372 - const struct clk_ops clk_divider_ro_ops = { 373 - .recalc_rate = clk_divider_recalc_rate, 374 - }; 375 - EXPORT_SYMBOL_GPL(clk_divider_ro_ops); 376 - 377 364 static struct clk *_register_divider(struct device *dev, const char *name, 378 365 const char *parent_name, unsigned long flags, 379 366 void __iomem *reg, u8 shift, u8 width, ··· 394 391 } 395 392 396 393 init.name = name; 397 - if (clk_divider_flags & CLK_DIVIDER_READ_ONLY) 398 - init.ops = &clk_divider_ro_ops; 399 - else 400 - init.ops = &clk_divider_ops; 394 + init.ops = &clk_divider_ops; 401 395 init.flags = flags | CLK_IS_BASIC; 402 396 init.parent_names = (parent_name ? &parent_name: NULL); 403 397 init.num_parents = (parent_name ? 1 : 0);
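With CLK_DIVIDER_READ_ONLY the "best" divider is simply whatever the hardware already holds: shift the field down, mask it, and interpret it. A minimal sketch of that decode, assuming the driver's default encoding where the stored field is the divisor minus one (tables and power-of-two variants go through _get_div() upstream):

#include <stdint.h>

/* Decode the currently-programmed divider from a raw register value,
 * default clk-divider encoding assumed (field == divisor - 1). */
static unsigned int current_divider(uint32_t reg, uint8_t shift, uint8_t width)
{
	uint32_t field = (reg >> shift) & ((1U << width) - 1);

	return field + 1;
}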
+2 -2
drivers/clk/pxa/clk-pxa27x.c
··· 322 322 unsigned long ccsr = CCSR; 323 323 324 324 osc_forced = ccsr & (1 << CCCR_CPDIS_BIT); 325 - a = cccr & CCCR_A_BIT; 325 + a = cccr & (1 << CCCR_A_BIT); 326 326 l = ccsr & CCSR_L_MASK; 327 327 328 328 if (osc_forced || a) ··· 341 341 unsigned long ccsr = CCSR; 342 342 343 343 osc_forced = ccsr & (1 << CCCR_CPDIS_BIT); 344 - a = cccr & CCCR_A_BIT; 344 + a = cccr & (1 << CCCR_A_BIT); 345 345 if (osc_forced) 346 346 return PXA_MEM_13Mhz; 347 347 if (a)
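The two fixes above correct a classic masking slip: testing a flag with the bit index instead of the shifted bit mask. A compile-and-run demonstration with a hypothetical bit position:

#include <assert.h>

#define CCCR_A_BIT 25	/* hypothetical position of the A flag */

int main(void)
{
	unsigned long cccr = 1UL << CCCR_A_BIT;

	assert(!(cccr & CCCR_A_BIT));		/* index as mask: misses it */
	assert(cccr & (1UL << CCCR_A_BIT));	/* shifted mask: intended   */
	return 0;
}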
+1 -1
drivers/clk/qcom/mmcc-apq8084.c
··· 3122 3122 [ESC1_CLK_SRC] = &esc1_clk_src.clkr, 3123 3123 [HDMI_CLK_SRC] = &hdmi_clk_src.clkr, 3124 3124 [VSYNC_CLK_SRC] = &vsync_clk_src.clkr, 3125 - [RBCPR_CLK_SRC] = &rbcpr_clk_src.clkr, 3125 + [MMSS_RBCPR_CLK_SRC] = &rbcpr_clk_src.clkr, 3126 3126 [RBBMTIMER_CLK_SRC] = &rbbmtimer_clk_src.clkr, 3127 3127 [MAPLE_CLK_SRC] = &maple_clk_src.clkr, 3128 3128 [VDP_CLK_SRC] = &vdp_clk_src.clkr,
+1 -3
drivers/clk/rockchip/clk.c
··· 90 90 div->width = div_width; 91 91 div->lock = lock; 92 92 div->table = div_table; 93 - div_ops = (div_flags & CLK_DIVIDER_READ_ONLY) 94 - ? &clk_divider_ro_ops 95 - : &clk_divider_ops; 93 + div_ops = &clk_divider_ops; 96 94 } 97 95 98 96 clk = clk_register_composite(NULL, name, parent_names, num_parents,
+6 -6
drivers/clocksource/sun4i_timer.c
··· 182 182 /* Make sure timer is stopped before playing with interrupts */ 183 183 sun4i_clkevt_time_stop(0); 184 184 185 + sun4i_clockevent.cpumask = cpu_possible_mask; 186 + sun4i_clockevent.irq = irq; 187 + 188 + clockevents_config_and_register(&sun4i_clockevent, rate, 189 + TIMER_SYNC_TICKS, 0xffffffff); 190 + 185 191 ret = setup_irq(irq, &sun4i_timer_irq); 186 192 if (ret) 187 193 pr_warn("failed to setup irq %d\n", irq); ··· 195 189 /* Enable timer0 interrupt */ 196 190 val = readl(timer_base + TIMER_IRQ_EN_REG); 197 191 writel(val | TIMER_IRQ_EN(0), timer_base + TIMER_IRQ_EN_REG); 198 - 199 - sun4i_clockevent.cpumask = cpu_possible_mask; 200 - sun4i_clockevent.irq = irq; 201 - 202 - clockevents_config_and_register(&sun4i_clockevent, rate, 203 - TIMER_SYNC_TICKS, 0xffffffff); 204 192 } 205 193 CLOCKSOURCE_OF_DECLARE(sun4i, "allwinner,sun4i-a10-timer", 206 194 sun4i_timer_init);
+16 -7
drivers/dma/pl330.c
··· 271 271 #define DMAC_MODE_NS (1 << 0) 272 272 unsigned int mode; 273 273 unsigned int data_bus_width:10; /* In number of bits */ 274 - unsigned int data_buf_dep:10; 274 + unsigned int data_buf_dep:11; 275 275 unsigned int num_chan:4; 276 276 unsigned int num_peri:6; 277 277 u32 peri_ns; ··· 2336 2336 int burst_len; 2337 2337 2338 2338 burst_len = pl330->pcfg.data_bus_width / 8; 2339 - burst_len *= pl330->pcfg.data_buf_dep; 2339 + burst_len *= pl330->pcfg.data_buf_dep / pl330->pcfg.num_chan; 2340 2340 burst_len >>= desc->rqcfg.brst_size; 2341 2341 2342 2342 /* src/dst_burst_len can't be more than 16 */ ··· 2459 2459 /* Select max possible burst size */ 2460 2460 burst = pl330->pcfg.data_bus_width / 8; 2461 2461 2462 - while (burst > 1) { 2463 - if (!(len % burst)) 2464 - break; 2462 + /* 2463 + * Make sure we use a burst size that aligns with all the memcpy 2464 + * parameters because our DMA programming algorithm doesn't cope with 2465 + * transfers which straddle an entry in the DMA device's MFIFO. 2466 + */ 2467 + while ((src | dst | len) & (burst - 1)) 2465 2468 burst /= 2; 2466 - } 2467 2469 2468 2470 desc->rqcfg.brst_size = 0; 2469 2471 while (burst != (1 << desc->rqcfg.brst_size)) 2470 2472 desc->rqcfg.brst_size++; 2473 + 2474 + /* 2475 + * If burst size is smaller than bus width then make sure we only 2476 + * transfer one at a time to avoid a burst straddling an MFIFO entry. 2477 + */ 2478 + if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width) 2479 + desc->rqcfg.brst_len = 1; 2471 2480 2472 2481 desc->rqcfg.brst_len = get_burst_len(desc, len); 2473 2482 ··· 2741 2732 2742 2733 2743 2734 dev_info(&adev->dev, 2744 - "Loaded driver for PL330 DMAC-%d\n", adev->periphid); 2735 + "Loaded driver for PL330 DMAC-%x\n", adev->periphid); 2745 2736 dev_info(&adev->dev, 2746 2737 "\tDBUFF-%ux%ubytes Num_Chans-%u Num_Peri-%u Num_Events-%u\n", 2747 2738 pcfg->data_buf_dep, pcfg->data_bus_width / 8, pcfg->num_chan,
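The (src | dst | len) test is a compact way to find the largest power-of-two burst that divides the source address, destination address and length at once: a low bit set in any of the three forces the burst down. A standalone sketch with hypothetical values:

#include <stdio.h>

/* Largest power-of-two burst (bytes) aligned with src, dst and len
 * simultaneously, mirroring the memcpy burst selection above. */
static unsigned int pick_burst(unsigned long src, unsigned long dst,
			       unsigned long len, unsigned int bus_bytes)
{
	unsigned int burst = bus_bytes;

	while ((src | dst | len) & (burst - 1))
		burst /= 2;
	return burst;
}

int main(void)
{
	/* 0x1008 | 0x2010 | 0x24 = 0x303c, so 16 -> 8 -> 4: prints 4 */
	printf("burst = %u\n", pick_burst(0x1008, 0x2010, 0x24, 16));
	return 0;
}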
+30 -31
drivers/dma/sun6i-dma.c
··· 230 230 readl(pchan->base + DMA_CHAN_CUR_PARA)); 231 231 } 232 232 233 - static inline int convert_burst(u32 maxburst, u8 *burst) 233 + static inline s8 convert_burst(u32 maxburst) 234 234 { 235 235 switch (maxburst) { 236 236 case 1: 237 - *burst = 0; 238 - break; 237 + return 0; 239 238 case 8: 240 - *burst = 2; 241 - break; 239 + return 2; 242 240 default: 243 241 return -EINVAL; 244 242 } 245 - 246 - return 0; 247 243 } 248 244 249 - static inline int convert_buswidth(enum dma_slave_buswidth addr_width, u8 *width) 245 + static inline s8 convert_buswidth(enum dma_slave_buswidth addr_width) 250 246 { 251 247 if ((addr_width < DMA_SLAVE_BUSWIDTH_1_BYTE) || 252 248 (addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES)) 253 249 return -EINVAL; 254 250 255 - *width = addr_width >> 1; 256 - return 0; 251 + return addr_width >> 1; 257 252 } 258 253 259 254 static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev, ··· 279 284 struct dma_slave_config *config) 280 285 { 281 286 u8 src_width, dst_width, src_burst, dst_burst; 282 - int ret; 283 287 284 288 if (!config) 285 289 return -EINVAL; 286 290 287 - ret = convert_burst(config->src_maxburst, &src_burst); 288 - if (ret) 289 - return ret; 291 + src_burst = convert_burst(config->src_maxburst); 292 + if (src_burst) 293 + return src_burst; 290 294 291 - ret = convert_burst(config->dst_maxburst, &dst_burst); 292 - if (ret) 293 - return ret; 295 + dst_burst = convert_burst(config->dst_maxburst); 296 + if (dst_burst) 297 + return dst_burst; 294 298 295 - ret = convert_buswidth(config->src_addr_width, &src_width); 296 - if (ret) 297 - return ret; 299 + src_width = convert_buswidth(config->src_addr_width); 300 + if (src_width) 301 + return src_width; 298 302 299 - ret = convert_buswidth(config->dst_addr_width, &dst_width); 300 - if (ret) 301 - return ret; 303 + dst_width = convert_buswidth(config->dst_addr_width); 304 + if (dst_width) 305 + return dst_width; 302 306 303 307 lli->cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) | 304 308 DMA_CHAN_CFG_SRC_WIDTH(src_width) | ··· 536 542 { 537 543 struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device); 538 544 struct sun6i_vchan *vchan = to_sun6i_vchan(chan); 539 - struct dma_slave_config *sconfig = &vchan->cfg; 540 545 struct sun6i_dma_lli *v_lli; 541 546 struct sun6i_desc *txd; 542 547 dma_addr_t p_lli; 543 - int ret; 548 + s8 burst, width; 544 549 545 550 dev_dbg(chan2dev(chan), 546 551 "%s; chan: %d, dest: %pad, src: %pad, len: %zu. 
flags: 0x%08lx\n", ··· 558 565 goto err_txd_free; 559 566 } 560 567 561 - ret = sun6i_dma_cfg_lli(v_lli, src, dest, len, sconfig); 562 - if (ret) 563 - goto err_dma_free; 568 + v_lli->src = src; 569 + v_lli->dst = dest; 570 + v_lli->len = len; 571 + v_lli->para = NORMAL_WAIT; 564 572 573 + burst = convert_burst(8); 574 + width = convert_buswidth(DMA_SLAVE_BUSWIDTH_4_BYTES); 565 575 v_lli->cfg |= DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) | 566 576 DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) | 567 577 DMA_CHAN_CFG_DST_LINEAR_MODE | 568 - DMA_CHAN_CFG_SRC_LINEAR_MODE; 578 + DMA_CHAN_CFG_SRC_LINEAR_MODE | 579 + DMA_CHAN_CFG_SRC_BURST(burst) | 580 + DMA_CHAN_CFG_SRC_WIDTH(width) | 581 + DMA_CHAN_CFG_DST_BURST(burst) | 582 + DMA_CHAN_CFG_DST_WIDTH(width); 569 583 570 584 sun6i_dma_lli_add(NULL, v_lli, p_lli, txd); 571 585 ··· 580 580 581 581 return vchan_tx_prep(&vchan->vc, &txd->vd, flags); 582 582 583 - err_dma_free: 584 - dma_pool_free(sdev->pool, v_lli, p_lli); 585 583 err_txd_free: 586 584 kfree(txd); 587 585 return NULL; ··· 913 915 sdc->slave.device_prep_dma_memcpy = sun6i_dma_prep_dma_memcpy; 914 916 sdc->slave.device_control = sun6i_dma_control; 915 917 sdc->slave.chancnt = NR_MAX_VCHANS; 918 + sdc->slave.copy_align = 4; 916 919 917 920 sdc->slave.dev = &pdev->dev; 918 921
+8 -6
drivers/gpu/drm/i915/i915_dma.c
··· 1670 1670 goto out_regs; 1671 1671 1672 1672 if (drm_core_check_feature(dev, DRIVER_MODESET)) { 1673 - ret = i915_kick_out_vgacon(dev_priv); 1674 - if (ret) { 1675 - DRM_ERROR("failed to remove conflicting VGA console\n"); 1676 - goto out_gtt; 1677 - } 1678 - 1673 + /* WARNING: Apparently we must kick fbdev drivers before vgacon, 1674 + * otherwise the vga fbdev driver falls over. */ 1679 1675 ret = i915_kick_out_firmware_fb(dev_priv); 1680 1676 if (ret) { 1681 1677 DRM_ERROR("failed to remove conflicting framebuffer drivers\n"); 1678 + goto out_gtt; 1679 + } 1680 + 1681 + ret = i915_kick_out_vgacon(dev_priv); 1682 + if (ret) { 1683 + DRM_ERROR("failed to remove conflicting VGA console\n"); 1682 1684 goto out_gtt; 1683 1685 } 1684 1686 }
+4
drivers/gpu/drm/i915/intel_display.c
··· 9408 9408 struct drm_device *dev = crtc->base.dev; 9409 9409 struct drm_i915_private *dev_priv = dev->dev_private; 9410 9410 9411 + if (i915_reset_in_progress(&dev_priv->gpu_error) || 9412 + crtc->reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter)) 9413 + return true; 9414 + 9411 9415 /* 9412 9416 * The relevant registers don't exist on pre-ctg. 9413 9417 * As the flip done interrupt doesn't trigger for mmio
+1
drivers/gpu/drm/i915/intel_dp.c
··· 4450 4450 * vdd might still be enabled due to the delayed vdd off. 4451 4451 * Make sure vdd is actually turned off here. 4452 4452 */ 4453 + cancel_delayed_work_sync(&intel_dp->panel_vdd_work); 4453 4454 pps_lock(intel_dp); 4454 4455 edp_panel_vdd_off_sync(intel_dp); 4455 4456 pps_unlock(intel_dp);
-5
drivers/gpu/drm/i915/intel_pm.c
··· 5469 5469 I915_WRITE(_3D_CHICKEN, 5470 5470 _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB)); 5471 5471 5472 - /* WaSetupGtModeTdRowDispatch:snb */ 5473 - if (IS_SNB_GT1(dev)) 5474 - I915_WRITE(GEN6_GT_MODE, 5475 - _MASKED_BIT_ENABLE(GEN6_TD_FOUR_ROW_DISPATCH_DISABLE)); 5476 - 5477 5472 /* WaDisable_RenderCache_OperationalFlush:snb */ 5478 5473 I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE)); 5479 5474
+1 -1
drivers/gpu/drm/radeon/r600_dpm.c
··· 1256 1256 (mode_info->atom_context->bios + data_offset + 1257 1257 le16_to_cpu(ext_hdr->usPowerTuneTableOffset)); 1258 1258 rdev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 1259 - ppt->usMaximumPowerDeliveryLimit; 1259 + le16_to_cpu(ppt->usMaximumPowerDeliveryLimit); 1260 1260 pt = &ppt->power_tune_table; 1261 1261 } else { 1262 1262 ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
+18 -1
drivers/gpu/drm/radeon/radeon_connectors.c
··· 322 322 } 323 323 324 324 if (!radeon_connector->edid) { 325 + /* don't fetch the edid from the vbios if ddc fails and runpm is 326 + * enabled so we report disconnected. 327 + */ 328 + if ((rdev->flags & RADEON_IS_PX) && (radeon_runtime_pm != 0)) 329 + return; 330 + 325 331 if (rdev->is_atom_bios) { 326 332 /* some laptops provide a hardcoded edid in rom for LCDs */ 327 333 if (((connector->connector_type == DRM_MODE_CONNECTOR_LVDS) || ··· 832 826 static enum drm_connector_status 833 827 radeon_lvds_detect(struct drm_connector *connector, bool force) 834 828 { 829 + struct drm_device *dev = connector->dev; 830 + struct radeon_device *rdev = dev->dev_private; 835 831 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 836 832 struct drm_encoder *encoder = radeon_best_single_encoder(connector); 837 833 enum drm_connector_status ret = connector_status_disconnected; ··· 850 842 /* check if panel is valid */ 851 843 if (native_mode->hdisplay >= 320 && native_mode->vdisplay >= 240) 852 844 ret = connector_status_connected; 853 - 845 + /* don't fetch the edid from the vbios if ddc fails and runpm is 846 + * enabled so we report disconnected. 847 + */ 848 + if ((rdev->flags & RADEON_IS_PX) && (radeon_runtime_pm != 0)) 849 + ret = connector_status_disconnected; 854 850 } 855 851 856 852 /* check for edid as well */ ··· 1601 1589 /* check if panel is valid */ 1602 1590 if (native_mode->hdisplay >= 320 && native_mode->vdisplay >= 240) 1603 1591 ret = connector_status_connected; 1592 + /* don't fetch the edid from the vbios if ddc fails and runpm is 1593 + * enabled so we report disconnected. 1594 + */ 1595 + if ((rdev->flags & RADEON_IS_PX) && (radeon_runtime_pm != 0)) 1596 + ret = connector_status_disconnected; 1604 1597 } 1605 1598 /* eDP is always DP */ 1606 1599 radeon_dig_connector->dp_sink_type = CONNECTOR_OBJECT_ID_DISPLAYPORT;
+3
drivers/gpu/drm/radeon/radeon_encoders.c
··· 179 179 (rdev->pdev->subsystem_vendor == 0x1734) && 180 180 (rdev->pdev->subsystem_device == 0x1107)) 181 181 use_bl = false; 182 + /* disable native backlight control on older asics */ 183 + else if (rdev->family < CHIP_R600) 184 + use_bl = false; 182 185 else 183 186 use_bl = true; 184 187 }
+10
drivers/gpu/drm/radeon/radeon_irq_kms.c
··· 185 185 if (rdev->flags & RADEON_IS_AGP) 186 186 return false; 187 187 188 + /* 189 + * Older chips have a HW limitation, they can only generate 40 bits 190 + * of address for "64-bit" MSIs which breaks on some platforms, notably 191 + * IBM POWER servers, so we limit them 192 + */ 193 + if (rdev->family < CHIP_BONAIRE) { 194 + dev_info(rdev->dev, "radeon: MSI limited to 32-bit\n"); 195 + rdev->pdev->no_64bit_msi = 1; 196 + } 197 + 188 198 /* force MSI on */ 189 199 if (radeon_msi == 1) 190 200 return true;
+2 -4
drivers/hwmon/g762.c
··· 1084 1084 if (ret) 1085 1085 goto clock_dis; 1086 1086 1087 - data->hwmon_dev = devm_hwmon_device_register_with_groups(dev, 1088 - client->name, 1089 - data, 1090 - g762_groups); 1087 + data->hwmon_dev = hwmon_device_register_with_groups(dev, client->name, 1088 + data, g762_groups); 1091 1089 if (IS_ERR(data->hwmon_dev)) { 1092 1090 ret = PTR_ERR(data->hwmon_dev); 1093 1091 goto clock_dis;
+33 -7
drivers/iio/accel/bmc150-accel.c
··· 44 44 45 45 #define BMC150_ACCEL_REG_INT_STATUS_2 0x0B 46 46 #define BMC150_ACCEL_ANY_MOTION_MASK 0x07 47 + #define BMC150_ACCEL_ANY_MOTION_BIT_X BIT(0) 48 + #define BMC150_ACCEL_ANY_MOTION_BIT_Y BIT(1) 49 + #define BMC150_ACCEL_ANY_MOTION_BIT_Z BIT(2) 47 50 #define BMC150_ACCEL_ANY_MOTION_BIT_SIGN BIT(3) 48 51 49 52 #define BMC150_ACCEL_REG_PMU_LPW 0x11 ··· 95 92 #define BMC150_ACCEL_SLOPE_THRES_MASK 0xFF 96 93 97 94 /* Slope duration in terms of number of samples */ 98 - #define BMC150_ACCEL_DEF_SLOPE_DURATION 2 95 + #define BMC150_ACCEL_DEF_SLOPE_DURATION 1 99 96 /* in terms of multiples of g's/LSB, based on range */ 100 - #define BMC150_ACCEL_DEF_SLOPE_THRESHOLD 5 97 + #define BMC150_ACCEL_DEF_SLOPE_THRESHOLD 1 101 98 102 99 #define BMC150_ACCEL_REG_XOUT_L 0x02 103 100 ··· 539 536 if (ret < 0) { 540 537 dev_err(&data->client->dev, 541 538 "Failed: bmc150_accel_set_power_state for %d\n", on); 539 + if (on) 540 + pm_runtime_put_noidle(&data->client->dev); 541 + 542 542 return ret; 543 543 } 544 544 ··· 817 811 818 812 ret = bmc150_accel_setup_any_motion_interrupt(data, state); 819 813 if (ret < 0) { 814 + bmc150_accel_set_power_state(data, false); 820 815 mutex_unlock(&data->mutex); 821 816 return ret; 822 817 } ··· 853 846 854 847 static const struct iio_event_spec bmc150_accel_event = { 855 848 .type = IIO_EV_TYPE_ROC, 856 - .dir = IIO_EV_DIR_RISING | IIO_EV_DIR_FALLING, 849 + .dir = IIO_EV_DIR_EITHER, 857 850 .mask_separate = BIT(IIO_EV_INFO_VALUE) | 858 851 BIT(IIO_EV_INFO_ENABLE) | 859 852 BIT(IIO_EV_INFO_PERIOD) ··· 1061 1054 else 1062 1055 ret = bmc150_accel_setup_new_data_interrupt(data, state); 1063 1056 if (ret < 0) { 1057 + bmc150_accel_set_power_state(data, false); 1064 1058 mutex_unlock(&data->mutex); 1065 1059 return ret; 1066 1060 } ··· 1100 1092 else 1101 1093 dir = IIO_EV_DIR_RISING; 1102 1094 1103 - if (ret & BMC150_ACCEL_ANY_MOTION_MASK) 1095 + if (ret & BMC150_ACCEL_ANY_MOTION_BIT_X) 1104 1096 iio_push_event(indio_dev, IIO_MOD_EVENT_CODE(IIO_ACCEL, 1105 1097 0, 1106 - IIO_MOD_X_OR_Y_OR_Z, 1098 + IIO_MOD_X, 1107 1099 IIO_EV_TYPE_ROC, 1108 - IIO_EV_DIR_EITHER), 1100 + dir), 1101 + data->timestamp); 1102 + if (ret & BMC150_ACCEL_ANY_MOTION_BIT_Y) 1103 + iio_push_event(indio_dev, IIO_MOD_EVENT_CODE(IIO_ACCEL, 1104 + 0, 1105 + IIO_MOD_Y, 1106 + IIO_EV_TYPE_ROC, 1107 + dir), 1108 + data->timestamp); 1109 + if (ret & BMC150_ACCEL_ANY_MOTION_BIT_Z) 1110 + iio_push_event(indio_dev, IIO_MOD_EVENT_CODE(IIO_ACCEL, 1111 + 0, 1112 + IIO_MOD_Z, 1113 + IIO_EV_TYPE_ROC, 1114 + dir), 1109 1115 data->timestamp); 1110 1116 ack_intr_status: 1111 1117 if (!data->dready_trigger_on) ··· 1376 1354 { 1377 1355 struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); 1378 1356 struct bmc150_accel_data *data = iio_priv(indio_dev); 1357 + int ret; 1379 1358 1380 1359 dev_dbg(&data->client->dev, __func__); 1360 + ret = bmc150_accel_set_mode(data, BMC150_ACCEL_SLEEP_MODE_SUSPEND, 0); 1361 + if (ret < 0) 1362 + return -EAGAIN; 1381 1363 1382 - return bmc150_accel_set_mode(data, BMC150_ACCEL_SLEEP_MODE_SUSPEND, 0); 1364 + return 0; 1383 1365 } 1384 1366 1385 1367 static int bmc150_accel_runtime_resume(struct device *dev)
+2
drivers/iio/accel/kxcjk-1013.c
··· 269 269 return ret; 270 270 } 271 271 272 + ret &= ~(KXCJK1013_REG_CTRL1_BIT_GSEL0 | 273 + KXCJK1013_REG_CTRL1_BIT_GSEL1); 272 274 ret |= (KXCJK1013_scale_table[range_index].gsel_0 << 3); 273 275 ret |= (KXCJK1013_scale_table[range_index].gsel_1 << 4); 274 276
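The fix is the standard read-modify-write pattern: clear the field before OR-ing in the new value, otherwise bits from the previously selected range survive. In general form:

#include <stdint.h>

/* Replace a register field without inheriting stale bits. */
static uint8_t update_field(uint8_t reg, uint8_t mask, uint8_t newval)
{
	reg &= ~mask;		/* drop the old field bits */
	reg |= newval & mask;	/* install the new setting */
	return reg;
}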
+1
drivers/iio/adc/men_z188_adc.c
··· 152 152 153 153 static const struct mcb_device_id men_z188_ids[] = { 154 154 { .device = 0xbc }, 155 + { } 155 156 }; 156 157 MODULE_DEVICE_TABLE(mcb, men_z188_ids); 157 158
+49 -4
drivers/iio/gyro/bmg160.c
··· 67 67 #define BMG160_REG_INT_EN_0 0x15 68 68 #define BMG160_DATA_ENABLE_INT BIT(7) 69 69 70 + #define BMG160_REG_INT_EN_1 0x16 71 + #define BMG160_INT1_BIT_OD BIT(1) 72 + 70 73 #define BMG160_REG_XOUT_L 0x02 71 74 #define BMG160_AXIS_TO_REG(axis) (BMG160_REG_XOUT_L + (axis * 2)) 72 75 ··· 85 82 86 83 #define BMG160_REG_INT_STATUS_2 0x0B 87 84 #define BMG160_ANY_MOTION_MASK 0x07 85 + #define BMG160_ANY_MOTION_BIT_X BIT(0) 86 + #define BMG160_ANY_MOTION_BIT_Y BIT(1) 87 + #define BMG160_ANY_MOTION_BIT_Z BIT(2) 88 88 89 89 #define BMG160_REG_TEMP 0x08 90 90 #define BMG160_TEMP_CENTER_VAL 23 ··· 228 222 data->slope_thres = ret; 229 223 230 224 /* Set default interrupt mode */ 225 + ret = i2c_smbus_read_byte_data(data->client, BMG160_REG_INT_EN_1); 226 + if (ret < 0) { 227 + dev_err(&data->client->dev, "Error reading reg_int_en_1\n"); 228 + return ret; 229 + } 230 + ret &= ~BMG160_INT1_BIT_OD; 231 + ret = i2c_smbus_write_byte_data(data->client, 232 + BMG160_REG_INT_EN_1, ret); 233 + if (ret < 0) { 234 + dev_err(&data->client->dev, "Error writing reg_int_en_1\n"); 235 + return ret; 236 + } 237 + 231 238 ret = i2c_smbus_write_byte_data(data->client, 232 239 BMG160_REG_INT_RST_LATCH, 233 240 BMG160_INT_MODE_LATCH_INT | ··· 269 250 if (ret < 0) { 270 251 dev_err(&data->client->dev, 271 252 "Failed: bmg160_set_power_state for %d\n", on); 253 + if (on) 254 + pm_runtime_put_noidle(&data->client->dev); 255 + 272 256 return ret; 273 257 } 274 258 #endif ··· 727 705 728 706 ret = bmg160_setup_any_motion_interrupt(data, state); 729 707 if (ret < 0) { 708 + bmg160_set_power_state(data, false); 730 709 mutex_unlock(&data->mutex); 731 710 return ret; 732 711 } ··· 766 743 767 744 static const struct iio_event_spec bmg160_event = { 768 745 .type = IIO_EV_TYPE_ROC, 769 - .dir = IIO_EV_DIR_RISING | IIO_EV_DIR_FALLING, 746 + .dir = IIO_EV_DIR_EITHER, 770 747 .mask_shared_by_type = BIT(IIO_EV_INFO_VALUE) | 771 748 BIT(IIO_EV_INFO_ENABLE) 772 749 }; ··· 894 871 else 895 872 ret = bmg160_setup_new_data_interrupt(data, state); 896 873 if (ret < 0) { 874 + bmg160_set_power_state(data, false); 897 875 mutex_unlock(&data->mutex); 898 876 return ret; 899 877 } ··· 932 908 else 933 909 dir = IIO_EV_DIR_FALLING; 934 910 935 - if (ret & BMG160_ANY_MOTION_MASK) 911 + if (ret & BMG160_ANY_MOTION_BIT_X) 936 912 iio_push_event(indio_dev, IIO_MOD_EVENT_CODE(IIO_ANGL_VEL, 937 913 0, 938 - IIO_MOD_X_OR_Y_OR_Z, 914 + IIO_MOD_X, 915 + IIO_EV_TYPE_ROC, 916 + dir), 917 + data->timestamp); 918 + if (ret & BMG160_ANY_MOTION_BIT_Y) 919 + iio_push_event(indio_dev, IIO_MOD_EVENT_CODE(IIO_ANGL_VEL, 920 + 0, 921 + IIO_MOD_Y, 922 + IIO_EV_TYPE_ROC, 923 + dir), 924 + data->timestamp); 925 + if (ret & BMG160_ANY_MOTION_BIT_Z) 926 + iio_push_event(indio_dev, IIO_MOD_EVENT_CODE(IIO_ANGL_VEL, 927 + 0, 928 + IIO_MOD_Z, 939 929 IIO_EV_TYPE_ROC, 940 930 dir), 941 931 data->timestamp); ··· 1207 1169 { 1208 1170 struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); 1209 1171 struct bmg160_data *data = iio_priv(indio_dev); 1172 + int ret; 1210 1173 1211 - return bmg160_set_mode(data, BMG160_MODE_SUSPEND); 1174 + ret = bmg160_set_mode(data, BMG160_MODE_SUSPEND); 1175 + if (ret < 0) { 1176 + dev_err(&data->client->dev, "set mode failed\n"); 1177 + return -EAGAIN; 1178 + } 1179 + 1180 + return 0; 1212 1181 } 1213 1182 1214 1183 static int bmg160_runtime_resume(struct device *dev)
+30 -14
drivers/infiniband/ulp/isert/ib_isert.c
··· 115 115 attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS; 116 116 /* 117 117 * FIXME: Use devattr.max_sge - 2 for max_send_sge as 118 - * work-around for RDMA_READ.. 118 + * work-around for RDMA_READs with ConnectX-2. 119 + * 120 + * Also, still make sure to have at least two SGEs for 121 + * outgoing control PDU responses. 119 122 */ 120 123 attr.cap.max_send_sge = max(2, device->dev_attr.max_sge - 2); 121 124 isert_conn->max_sge = attr.cap.max_send_sge; 122 125 123 126 attr.cap.max_recv_sge = 1; ··· 228 225 struct isert_cq_desc *cq_desc; 229 226 struct ib_device_attr *dev_attr; 230 227 int ret = 0, i, j; 228 + int max_rx_cqe, max_tx_cqe; 231 229 232 230 dev_attr = &device->dev_attr; 233 231 ret = isert_query_device(ib_dev, dev_attr); 234 232 if (ret) 235 233 return ret; 234 + 235 + max_rx_cqe = min(ISER_MAX_RX_CQ_LEN, dev_attr->max_cqe); 236 + max_tx_cqe = min(ISER_MAX_TX_CQ_LEN, dev_attr->max_cqe); 236 237 237 238 /* assign function handlers */ 238 239 if (dev_attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS && ··· 279 272 isert_cq_rx_callback, 280 273 isert_cq_event_callback, 281 274 (void *)&cq_desc[i], 282 - ISER_MAX_RX_CQ_LEN, i); 275 + max_rx_cqe, i); 283 276 if (IS_ERR(device->dev_rx_cq[i])) { 284 277 ret = PTR_ERR(device->dev_rx_cq[i]); 285 278 device->dev_rx_cq[i] = NULL; ··· 291 284 isert_cq_tx_callback, 292 285 isert_cq_event_callback, 293 286 (void *)&cq_desc[i], 294 - ISER_MAX_TX_CQ_LEN, i); 287 + max_tx_cqe, i); 295 288 if (IS_ERR(device->dev_tx_cq[i])) { 296 289 ret = PTR_ERR(device->dev_tx_cq[i]); 297 290 device->dev_tx_cq[i] = NULL; ··· 810 803 complete(&isert_conn->conn_wait); 811 804 } 812 805 813 - static void 806 + static int 814 807 isert_disconnected_handler(struct rdma_cm_id *cma_id, bool disconnect) 815 808 { 816 - struct isert_conn *isert_conn = (struct isert_conn *)cma_id->context; 809 + struct isert_conn *isert_conn; 810 + 811 + if (!cma_id->qp) { 812 + struct isert_np *isert_np = cma_id->context; 813 + 814 + isert_np->np_cm_id = NULL; 815 + return -1; 816 + } 817 + 818 + isert_conn = (struct isert_conn *)cma_id->context; 817 819 818 820 isert_conn->disconnect = disconnect; 819 821 INIT_WORK(&isert_conn->conn_logout_work, isert_disconnect_work); 820 822 schedule_work(&isert_conn->conn_logout_work); 823 + 824 + return 0; 821 825 } 822 826 823 827 static int ··· 843 825 switch (event->event) { 844 826 case RDMA_CM_EVENT_CONNECT_REQUEST: 845 827 ret = isert_connect_request(cma_id, event); 828 + if (ret) 829 + pr_err("isert_cma_handler failed RDMA_CM_EVENT: 0x%08x %d\n", 830 + event->event, ret); 846 831 break; 847 832 case RDMA_CM_EVENT_ESTABLISHED: 848 833 isert_connected_handler(cma_id); ··· 855 834 case RDMA_CM_EVENT_DEVICE_REMOVAL: /* FALLTHRU */ 856 835 disconnect = true; 857 836 case RDMA_CM_EVENT_TIMEWAIT_EXIT: /* FALLTHRU */ 858 - isert_disconnected_handler(cma_id, disconnect); 837 + ret = isert_disconnected_handler(cma_id, disconnect); 859 838 break; 860 839 case RDMA_CM_EVENT_CONNECT_ERROR: 861 840 default: 862 841 pr_err("Unhandled RDMA CMA event: %d\n", event->event); 863 842 break; 864 - } 865 - 866 - if (ret != 0) { 867 - pr_err("isert_cma_handler failed RDMA_CM_EVENT: 0x%08x %d\n", 868 - event->event, ret); 869 - dump_stack(); 870 843 } 871 844 872 845 return ret; ··· 3205 3190 { 3206 3191 struct isert_np *isert_np = (struct isert_np *)np->np_context; 3207 3192 3208 - rdma_destroy_id(isert_np->np_cm_id); 3193 + if (isert_np->np_cm_id) 3194 + rdma_destroy_id(isert_np->np_cm_id); 3209 
3195 3210 3196 np->np_context = NULL; 3211 3197 kfree(isert_np);
+8
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 2092 2092 if (!qp_init) 2093 2093 goto out; 2094 2094 2095 + retry: 2095 2096 ch->cq = ib_create_cq(sdev->device, srpt_completion, NULL, ch, 2096 2097 ch->rq_size + srp_sq_size, 0); 2097 2098 if (IS_ERR(ch->cq)) { ··· 2116 2115 ch->qp = ib_create_qp(sdev->pd, qp_init); 2117 2116 if (IS_ERR(ch->qp)) { 2118 2117 ret = PTR_ERR(ch->qp); 2118 + if (ret == -ENOMEM) { 2119 + srp_sq_size /= 2; 2120 + if (srp_sq_size >= MIN_SRPT_SQ_SIZE) { 2121 + ib_destroy_cq(ch->cq); 2122 + goto retry; 2123 + } 2124 + } 2119 2125 printk(KERN_ERR "failed to create_qp ret= %d\n", ret); 2120 2126 goto err_destroy_cq; 2121 2127 }
+13 -3
drivers/input/joystick/xpad.c
··· 1179 1179 } 1180 1180 1181 1181 ep_irq_in = &intf->cur_altsetting->endpoint[1].desc; 1182 - usb_fill_bulk_urb(xpad->bulk_out, udev, 1183 - usb_sndbulkpipe(udev, ep_irq_in->bEndpointAddress), 1184 - xpad->bdata, XPAD_PKT_LEN, xpad_bulk_out, xpad); 1182 + if (usb_endpoint_is_bulk_out(ep_irq_in)) { 1183 + usb_fill_bulk_urb(xpad->bulk_out, udev, 1184 + usb_sndbulkpipe(udev, 1185 + ep_irq_in->bEndpointAddress), 1186 + xpad->bdata, XPAD_PKT_LEN, 1187 + xpad_bulk_out, xpad); 1188 + } else { 1189 + usb_fill_int_urb(xpad->bulk_out, udev, 1190 + usb_sndintpipe(udev, 1191 + ep_irq_in->bEndpointAddress), 1192 + xpad->bdata, XPAD_PKT_LEN, 1193 + xpad_bulk_out, xpad, 0); 1194 + } 1185 1195 1186 1196 /* 1187 1197 * Submit the int URB immediately rather than waiting for open
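usb_endpoint_is_bulk_out() keys off two descriptor fields: the transfer-type bits in bmAttributes and the direction bit in bEndpointAddress. A minimal restatement using the standard ch9 constant values:

#include <stdbool.h>
#include <stdint.h>

#define USB_ENDPOINT_XFERTYPE_MASK 0x03	/* transfer type bits       */
#define USB_ENDPOINT_XFER_BULK     0x02	/* value for bulk endpoints */
#define USB_ENDPOINT_DIR_IN        0x80	/* set = device-to-host     */

/* True when the endpoint is bulk and host-to-device, the only case in
 * which building a bulk OUT URB as above is valid. */
static bool is_bulk_out(uint8_t bmAttributes, uint8_t bEndpointAddress)
{
	return (bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) ==
			USB_ENDPOINT_XFER_BULK &&
	       !(bEndpointAddress & USB_ENDPOINT_DIR_IN);
}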
+1 -9
drivers/input/mouse/elantech.c
··· 428 428 int x, y; 429 429 u32 t; 430 430 431 - if (dev_WARN_ONCE(&psmouse->ps2dev.serio->dev, 432 - !tp_dev, 433 - psmouse_fmt("Unexpected trackpoint message\n"))) { 434 - if (etd->debug == 1) 435 - elantech_packet_dump(psmouse); 436 - return; 437 - } 438 - 439 431 t = get_unaligned_le32(&packet[0]); 440 432 441 433 switch (t & ~7U) { ··· 785 793 unsigned char packet_type = packet[3] & 0x03; 786 794 bool sanity_check; 787 795 788 - if ((packet[3] & 0x0f) == 0x06) 796 + if (etd->tp_dev && (packet[3] & 0x0f) == 0x06) 789 797 return PACKET_TRACKPOINT; 790 798 791 799 /*
+4
drivers/input/mouse/synaptics.c
··· 143 143 (const char * const []){"LEN2001", NULL}, 144 144 1024, 5022, 2508, 4832 145 145 }, 146 + { 147 + (const char * const []){"LEN2006", NULL}, 148 + 1264, 5675, 1171, 4688 149 + }, 146 150 { } 147 151 }; 148 152
+3 -3
drivers/irqchip/irq-atmel-aic-common.c
··· 217 217 } 218 218 219 219 ret = irq_alloc_domain_generic_chips(domain, 32, 1, name, 220 - handle_level_irq, 0, 0, 221 - IRQCHIP_SKIP_SET_WAKE); 220 + handle_fasteoi_irq, 221 + IRQ_NOREQUEST | IRQ_NOPROBE | 222 + IRQ_NOAUTOEN, 0, 0); 222 223 if (ret) 223 224 goto err_domain_remove; 224 225 ··· 231 230 gc->unused = 0; 232 231 gc->wake_enabled = ~0; 233 232 gc->chip_types[0].type = IRQ_TYPE_SENSE_MASK; 234 - gc->chip_types[0].handler = handle_fasteoi_irq; 235 233 gc->chip_types[0].chip.irq_eoi = irq_gc_eoi; 236 234 gc->chip_types[0].chip.irq_set_wake = irq_gc_set_wake; 237 235 gc->chip_types[0].chip.irq_shutdown = aic_common_shutdown;
+2 -2
drivers/irqchip/irq-bcm7120-l2.c
··· 101 101 int parent_irq; 102 102 103 103 parent_irq = irq_of_parse_and_map(dn, irq); 104 - if (parent_irq < 0) { 104 + if (!parent_irq) { 105 105 pr_err("failed to map interrupt %d\n", irq); 106 - return parent_irq; 106 + return -EINVAL; 107 107 } 108 108 109 109 data->irq_map_mask |= be32_to_cpup(map_mask + irq);
+2 -2
drivers/irqchip/irq-brcmstb-l2.c
··· 135 135 __raw_writel(0xffffffff, data->base + CPU_CLEAR); 136 136 137 137 data->parent_irq = irq_of_parse_and_map(np, 0); 138 - if (data->parent_irq < 0) { 138 + if (!data->parent_irq) { 139 139 pr_err("failed to find parent interrupt\n"); 140 - ret = data->parent_irq; 140 + ret = -EINVAL; 141 141 goto out_unmap; 142 142 } 143 143
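Both interrupt-controller fixes above correct the same misuse: irq_of_parse_and_map() returns an unsigned virq number and 0 on failure, never a negative errno, so a "< 0" check can never fire. The corrected calling pattern, as a kernel-context sketch:

#include <linux/of_irq.h>

/* Map the first interrupt of a DT node, translating the API's
 * "0 on failure" convention into a proper errno. */
static int map_first_irq(struct device_node *np)
{
	unsigned int virq = irq_of_parse_and_map(np, 0);

	if (!virq)		/* 0 means "no mapping", not an errno */
		return -EINVAL;
	return virq;
}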
+2 -1
drivers/net/bonding/bond_main.c
··· 2471 2471 bond_slave_state_change(bond); 2472 2472 if (BOND_MODE(bond) == BOND_MODE_XOR) 2473 2473 bond_update_slave_arr(bond, NULL); 2474 - } else if (do_failover) { 2474 + } 2475 + if (do_failover) { 2475 2476 block_netpoll_tx(); 2476 2477 bond_select_active_slave(bond); 2477 2478 unblock_netpoll_tx();
+2 -2
drivers/net/can/dev.c
··· 110 110 long rate; 111 111 u64 v64; 112 112 113 - /* Use CIA recommended sample points */ 113 + /* Use CiA recommended sample points */ 114 114 if (bt->sample_point) { 115 115 sampl_pt = bt->sample_point; 116 116 } else { ··· 382 382 BUG_ON(idx >= priv->echo_skb_max); 383 383 384 384 if (priv->echo_skb[idx]) { 385 - kfree_skb(priv->echo_skb[idx]); 385 + dev_kfree_skb_any(priv->echo_skb[idx]); 386 386 priv->echo_skb[idx] = NULL; 387 387 } 388 388 }
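can_free_echo_skb() can be reached from IRQ context, hence the switch to dev_kfree_skb_any(), which picks the context-appropriate free path. Its dispatch, paraphrased as a simplified sketch (not the exact kernel implementation):

#include <linux/netdevice.h>

/* Simplified shape of dev_kfree_skb_any(): defer to the softirq
 * completion path when freeing directly would be unsafe. */
static inline void kfree_skb_any_sketch(struct sk_buff *skb)
{
	if (in_irq() || irqs_disabled())
		dev_kfree_skb_irq(skb);	/* hard-IRQ safe, deferred */
	else
		dev_kfree_skb(skb);	/* normal process context  */
}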
+1
drivers/net/can/m_can/Kconfig
··· 1 1 config CAN_M_CAN 2 + depends on HAS_IOMEM 2 3 tristate "Bosch M_CAN devices" 3 4 ---help--- 4 5 Say Y here if you want to support for Bosch M_CAN controller.
+166 -53
drivers/net/can/m_can/m_can.c
··· 105 105 MRAM_CFG_NUM, 106 106 }; 107 107 108 + /* Fast Bit Timing & Prescaler Register (FBTP) */ 109 + #define FBTR_FBRP_MASK 0x1f 110 + #define FBTR_FBRP_SHIFT 16 111 + #define FBTR_FTSEG1_SHIFT 8 112 + #define FBTR_FTSEG1_MASK (0xf << FBTR_FTSEG1_SHIFT) 113 + #define FBTR_FTSEG2_SHIFT 4 114 + #define FBTR_FTSEG2_MASK (0x7 << FBTR_FTSEG2_SHIFT) 115 + #define FBTR_FSJW_SHIFT 0 116 + #define FBTR_FSJW_MASK 0x3 117 + 108 118 /* Test Register (TEST) */ 109 119 #define TEST_LBCK BIT(4) 110 120 111 121 /* CC Control Register(CCCR) */ 112 - #define CCCR_TEST BIT(7) 113 - #define CCCR_MON BIT(5) 114 - #define CCCR_CCE BIT(1) 115 - #define CCCR_INIT BIT(0) 122 + #define CCCR_TEST BIT(7) 123 + #define CCCR_CMR_MASK 0x3 124 + #define CCCR_CMR_SHIFT 10 125 + #define CCCR_CMR_CANFD 0x1 126 + #define CCCR_CMR_CANFD_BRS 0x2 127 + #define CCCR_CMR_CAN 0x3 128 + #define CCCR_CME_MASK 0x3 129 + #define CCCR_CME_SHIFT 8 130 + #define CCCR_CME_CAN 0 131 + #define CCCR_CME_CANFD 0x1 132 + #define CCCR_CME_CANFD_BRS 0x2 133 + #define CCCR_TEST BIT(7) 134 + #define CCCR_MON BIT(5) 135 + #define CCCR_CCE BIT(1) 136 + #define CCCR_INIT BIT(0) 137 + #define CCCR_CANFD 0x10 116 138 117 139 /* Bit Timing & Prescaler Register (BTP) */ 118 140 #define BTR_BRP_MASK 0x3ff ··· 226 204 227 205 /* Rx Buffer / FIFO Element Size Configuration (RXESC) */ 228 206 #define M_CAN_RXESC_8BYTES 0x0 207 + #define M_CAN_RXESC_64BYTES 0x777 229 208 230 209 /* Tx Buffer Configuration(TXBC) */ 231 210 #define TXBC_NDTB_OFF 16 ··· 234 211 235 212 /* Tx Buffer Element Size Configuration(TXESC) */ 236 213 #define TXESC_TBDS_8BYTES 0x0 214 + #define TXESC_TBDS_64BYTES 0x7 237 215 238 216 /* Tx Event FIFO Con.guration (TXEFC) */ 239 217 #define TXEFC_EFS_OFF 16 ··· 243 219 /* Message RAM Configuration (in bytes) */ 244 220 #define SIDF_ELEMENT_SIZE 4 245 221 #define XIDF_ELEMENT_SIZE 8 246 - #define RXF0_ELEMENT_SIZE 16 247 - #define RXF1_ELEMENT_SIZE 16 222 + #define RXF0_ELEMENT_SIZE 72 223 + #define RXF1_ELEMENT_SIZE 72 248 224 #define RXB_ELEMENT_SIZE 16 249 225 #define TXE_ELEMENT_SIZE 8 250 - #define TXB_ELEMENT_SIZE 16 226 + #define TXB_ELEMENT_SIZE 72 251 227 252 228 /* Message RAM Elements */ 253 229 #define M_CAN_FIFO_ID 0x0 ··· 255 231 #define M_CAN_FIFO_DATA(n) (0x8 + ((n) << 2)) 256 232 257 233 /* Rx Buffer Element */ 234 + /* R0 */ 258 235 #define RX_BUF_ESI BIT(31) 259 236 #define RX_BUF_XTD BIT(30) 260 237 #define RX_BUF_RTR BIT(29) 238 + /* R1 */ 239 + #define RX_BUF_ANMF BIT(31) 240 + #define RX_BUF_EDL BIT(21) 241 + #define RX_BUF_BRS BIT(20) 261 242 262 243 /* Tx Buffer Element */ 244 + /* R0 */ 263 245 #define TX_BUF_XTD BIT(30) 264 246 #define TX_BUF_RTR BIT(29) 265 247 ··· 326 296 if (enable) { 327 297 /* enable m_can configuration */ 328 298 m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT); 299 + udelay(5); 329 300 /* CCCR.CCE can only be set/reset while CCCR.INIT = '1' */ 330 301 m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT | CCCR_CCE); 331 302 } else { ··· 357 326 m_can_write(priv, M_CAN_ILE, 0x0); 358 327 } 359 328 360 - static void m_can_read_fifo(const struct net_device *dev, struct can_frame *cf, 361 - u32 rxfs) 329 + static void m_can_read_fifo(struct net_device *dev, u32 rxfs) 362 330 { 331 + struct net_device_stats *stats = &dev->stats; 363 332 struct m_can_priv *priv = netdev_priv(dev); 364 - u32 id, fgi; 333 + struct canfd_frame *cf; 334 + struct sk_buff *skb; 335 + u32 id, fgi, dlc; 336 + int i; 365 337 366 338 /* calculate the fifo get index for where to read data */ 367 339 fgi = (rxfs & 
RXFS_FGI_MASK) >> RXFS_FGI_OFF; 340 + dlc = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC); 341 + if (dlc & RX_BUF_EDL) 342 + skb = alloc_canfd_skb(dev, &cf); 343 + else 344 + skb = alloc_can_skb(dev, (struct can_frame **)&cf); 345 + if (!skb) { 346 + stats->rx_dropped++; 347 + return; 348 + } 349 + 350 + if (dlc & RX_BUF_EDL) 351 + cf->len = can_dlc2len((dlc >> 16) & 0x0F); 352 + else 353 + cf->len = get_can_dlc((dlc >> 16) & 0x0F); 354 + 368 355 id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_ID); 369 356 if (id & RX_BUF_XTD) 370 357 cf->can_id = (id & CAN_EFF_MASK) | CAN_EFF_FLAG; 371 358 else 372 359 cf->can_id = (id >> 18) & CAN_SFF_MASK; 373 360 374 - if (id & RX_BUF_RTR) { 361 + if (id & RX_BUF_ESI) { 362 + cf->flags |= CANFD_ESI; 363 + netdev_dbg(dev, "ESI Error\n"); 364 + } 365 + 366 + if (!(dlc & RX_BUF_EDL) && (id & RX_BUF_RTR)) { 375 367 cf->can_id |= CAN_RTR_FLAG; 376 368 } else { 377 - id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC); 378 - cf->can_dlc = get_can_dlc((id >> 16) & 0x0F); 379 - *(u32 *)(cf->data + 0) = m_can_fifo_read(priv, fgi, 380 - M_CAN_FIFO_DATA(0)); 381 - *(u32 *)(cf->data + 4) = m_can_fifo_read(priv, fgi, 382 - M_CAN_FIFO_DATA(1)); 369 + if (dlc & RX_BUF_BRS) 370 + cf->flags |= CANFD_BRS; 371 + 372 + for (i = 0; i < cf->len; i += 4) 373 + *(u32 *)(cf->data + i) = 374 + m_can_fifo_read(priv, fgi, 375 + M_CAN_FIFO_DATA(i / 4)); 383 376 } 384 377 385 378 /* acknowledge rx fifo 0 */ 386 379 m_can_write(priv, M_CAN_RXF0A, fgi); 380 + 381 + stats->rx_packets++; 382 + stats->rx_bytes += cf->len; 383 + 384 + netif_receive_skb(skb); 387 385 } 388 386 389 387 static int m_can_do_rx_poll(struct net_device *dev, int quota) 390 388 { 391 389 struct m_can_priv *priv = netdev_priv(dev); 392 - struct net_device_stats *stats = &dev->stats; 393 - struct sk_buff *skb; 394 - struct can_frame *frame; 395 390 u32 pkts = 0; 396 391 u32 rxfs; 397 392 ··· 431 374 if (rxfs & RXFS_RFL) 432 375 netdev_warn(dev, "Rx FIFO 0 Message Lost\n"); 433 376 434 - skb = alloc_can_skb(dev, &frame); 435 - if (!skb) { 436 - stats->rx_dropped++; 437 - return pkts; 438 - } 439 - 440 - m_can_read_fifo(dev, frame, rxfs); 441 - 442 - stats->rx_packets++; 443 - stats->rx_bytes += frame->can_dlc; 444 - 445 - netif_receive_skb(skb); 377 + m_can_read_fifo(dev, rxfs); 446 378 447 379 quota--; 448 380 pkts++; ··· 527 481 return 1; 528 482 } 529 483 484 + static int __m_can_get_berr_counter(const struct net_device *dev, 485 + struct can_berr_counter *bec) 486 + { 487 + struct m_can_priv *priv = netdev_priv(dev); 488 + unsigned int ecr; 489 + 490 + ecr = m_can_read(priv, M_CAN_ECR); 491 + bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT; 492 + bec->txerr = ecr & ECR_TEC_MASK; 493 + 494 + return 0; 495 + } 496 + 530 497 static int m_can_get_berr_counter(const struct net_device *dev, 531 498 struct can_berr_counter *bec) 532 499 { 533 500 struct m_can_priv *priv = netdev_priv(dev); 534 - unsigned int ecr; 535 501 int err; 536 502 537 503 err = clk_prepare_enable(priv->hclk); ··· 556 498 return err; 557 499 } 558 500 559 - ecr = m_can_read(priv, M_CAN_ECR); 560 - bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT; 561 - bec->txerr = ecr & ECR_TEC_MASK; 501 + __m_can_get_berr_counter(dev, bec); 562 502 563 503 clk_disable_unprepare(priv->cclk); 564 504 clk_disable_unprepare(priv->hclk); ··· 600 544 if (unlikely(!skb)) 601 545 return 0; 602 546 603 - m_can_get_berr_counter(dev, &bec); 547 + __m_can_get_berr_counter(dev, &bec); 604 548 605 549 switch (new_state) { 606 550 case CAN_STATE_ERROR_ACTIVE: ··· 652 596 653 597 if 
((psr & PSR_EP) && 654 598 (priv->can.state != CAN_STATE_ERROR_PASSIVE)) { 655 - netdev_dbg(dev, "entered error warning state\n"); 599 + netdev_dbg(dev, "entered error passive state\n"); 656 600 work_done += m_can_handle_state_change(dev, 657 601 CAN_STATE_ERROR_PASSIVE); 658 602 } 659 603 660 604 if ((psr & PSR_BO) && 661 605 (priv->can.state != CAN_STATE_BUS_OFF)) { 662 - netdev_dbg(dev, "entered error warning state\n"); 606 + netdev_dbg(dev, "entered error bus off state\n"); 663 607 work_done += m_can_handle_state_change(dev, 664 608 CAN_STATE_BUS_OFF); 665 609 } ··· 671 615 { 672 616 if (irqstatus & IR_WDI) 673 617 netdev_err(dev, "Message RAM Watchdog event due to missing READY\n"); 674 - if (irqstatus & IR_BEU) 618 + if (irqstatus & IR_ELO) 675 619 netdev_err(dev, "Error Logging Overflow\n"); 676 620 if (irqstatus & IR_BEU) 677 621 netdev_err(dev, "Bit Error Uncorrected\n"); ··· 789 733 .brp_inc = 1, 790 734 }; 791 735 736 + static const struct can_bittiming_const m_can_data_bittiming_const = { 737 + .name = KBUILD_MODNAME, 738 + .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */ 739 + .tseg1_max = 16, 740 + .tseg2_min = 1, /* Time segment 2 = phase_seg2 */ 741 + .tseg2_max = 8, 742 + .sjw_max = 4, 743 + .brp_min = 1, 744 + .brp_max = 32, 745 + .brp_inc = 1, 746 + }; 747 + 792 748 static int m_can_set_bittiming(struct net_device *dev) 793 749 { 794 750 struct m_can_priv *priv = netdev_priv(dev); 795 751 const struct can_bittiming *bt = &priv->can.bittiming; 752 + const struct can_bittiming *dbt = &priv->can.data_bittiming; 796 753 u16 brp, sjw, tseg1, tseg2; 797 754 u32 reg_btp; 798 755 ··· 816 747 reg_btp = (brp << BTR_BRP_SHIFT) | (sjw << BTR_SJW_SHIFT) | 817 748 (tseg1 << BTR_TSEG1_SHIFT) | (tseg2 << BTR_TSEG2_SHIFT); 818 749 m_can_write(priv, M_CAN_BTP, reg_btp); 819 - netdev_dbg(dev, "setting BTP 0x%x\n", reg_btp); 750 + 751 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) { 752 + brp = dbt->brp - 1; 753 + sjw = dbt->sjw - 1; 754 + tseg1 = dbt->prop_seg + dbt->phase_seg1 - 1; 755 + tseg2 = dbt->phase_seg2 - 1; 756 + reg_btp = (brp << FBTR_FBRP_SHIFT) | (sjw << FBTR_FSJW_SHIFT) | 757 + (tseg1 << FBTR_FTSEG1_SHIFT) | 758 + (tseg2 << FBTR_FTSEG2_SHIFT); 759 + m_can_write(priv, M_CAN_FBTP, reg_btp); 760 + } 820 761 821 762 return 0; 822 763 } ··· 846 767 847 768 m_can_config_endisable(priv, true); 848 769 849 - /* RX Buffer/FIFO Element Size 8 bytes data field */ 850 - m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_8BYTES); 770 + /* RX Buffer/FIFO Element Size 64 bytes data field */ 771 + m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_64BYTES); 851 772 852 773 /* Accept Non-matching Frames Into FIFO 0 */ 853 774 m_can_write(priv, M_CAN_GFC, 0x0); ··· 856 777 m_can_write(priv, M_CAN_TXBC, (1 << TXBC_NDTB_OFF) | 857 778 priv->mcfg[MRAM_TXB].off); 858 779 859 - /* only support 8 bytes firstly */ 860 - m_can_write(priv, M_CAN_TXESC, TXESC_TBDS_8BYTES); 780 + /* support 64 bytes payload */ 781 + m_can_write(priv, M_CAN_TXESC, TXESC_TBDS_64BYTES); 861 782 862 783 m_can_write(priv, M_CAN_TXEFC, (1 << TXEFC_EFS_OFF) | 863 784 priv->mcfg[MRAM_TXE].off); ··· 872 793 RXFC_FWM_1 | priv->mcfg[MRAM_RXF1].off); 873 794 874 795 cccr = m_can_read(priv, M_CAN_CCCR); 875 - cccr &= ~(CCCR_TEST | CCCR_MON); 796 + cccr &= ~(CCCR_TEST | CCCR_MON | (CCCR_CMR_MASK << CCCR_CMR_SHIFT) | 797 + (CCCR_CME_MASK << CCCR_CME_SHIFT)); 876 798 test = m_can_read(priv, M_CAN_TEST); 877 799 test &= ~TEST_LBCK; 878 800 ··· 884 804 cccr |= CCCR_TEST; 885 805 test |= TEST_LBCK; 886 806 } 807 + 808 + if (priv->can.ctrlmode & 
CAN_CTRLMODE_FD) 809 + cccr |= CCCR_CME_CANFD_BRS << CCCR_CME_SHIFT; 887 810 888 811 m_can_write(priv, M_CAN_CCCR, cccr); 889 812 m_can_write(priv, M_CAN_TEST, test); ··· 952 869 953 870 priv->dev = dev; 954 871 priv->can.bittiming_const = &m_can_bittiming_const; 872 + priv->can.data_bittiming_const = &m_can_data_bittiming_const; 955 873 priv->can.do_set_mode = m_can_set_mode; 956 874 priv->can.do_get_berr_counter = m_can_get_berr_counter; 957 875 priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK | 958 876 CAN_CTRLMODE_LISTENONLY | 959 - CAN_CTRLMODE_BERR_REPORTING; 877 + CAN_CTRLMODE_BERR_REPORTING | 878 + CAN_CTRLMODE_FD; 960 879 961 880 return dev; 962 881 } ··· 1041 956 struct net_device *dev) 1042 957 { 1043 958 struct m_can_priv *priv = netdev_priv(dev); 1044 - struct can_frame *cf = (struct can_frame *)skb->data; 1045 - u32 id; 959 + struct canfd_frame *cf = (struct canfd_frame *)skb->data; 960 + u32 id, cccr; 961 + int i; 1046 962 1047 963 if (can_dropped_invalid_skb(dev, skb)) 1048 964 return NETDEV_TX_OK; ··· 1062 976 1063 977 /* message ram configuration */ 1064 978 m_can_fifo_write(priv, 0, M_CAN_FIFO_ID, id); 1065 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC, cf->can_dlc << 16); 1066 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(0), *(u32 *)(cf->data + 0)); 1067 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(1), *(u32 *)(cf->data + 4)); 979 + m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC, can_len2dlc(cf->len) << 16); 980 + 981 + for (i = 0; i < cf->len; i += 4) 982 + m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(i / 4), 983 + *(u32 *)(cf->data + i)); 984 + 1068 985 can_put_echo_skb(skb, dev, 0); 986 + 987 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) { 988 + cccr = m_can_read(priv, M_CAN_CCCR); 989 + cccr &= ~(CCCR_CMR_MASK << CCCR_CMR_SHIFT); 990 + if (can_is_canfd_skb(skb)) { 991 + if (cf->flags & CANFD_BRS) 992 + cccr |= CCCR_CMR_CANFD_BRS << CCCR_CMR_SHIFT; 993 + else 994 + cccr |= CCCR_CMR_CANFD << CCCR_CMR_SHIFT; 995 + } else { 996 + cccr |= CCCR_CMR_CAN << CCCR_CMR_SHIFT; 997 + } 998 + m_can_write(priv, M_CAN_CCCR, cccr); 999 + } 1069 1000 1070 1001 /* enable first TX buffer to start transfer */ 1071 1002 m_can_write(priv, M_CAN_TXBTIE, 0x1); ··· 1095 992 .ndo_open = m_can_open, 1096 993 .ndo_stop = m_can_close, 1097 994 .ndo_start_xmit = m_can_start_xmit, 995 + .ndo_change_mtu = can_change_mtu, 1098 996 }; 1099 997 1100 998 static int register_m_can_dev(struct net_device *dev) ··· 1113 1009 struct resource *res; 1114 1010 void __iomem *addr; 1115 1011 u32 out_val[MRAM_CFG_LEN]; 1116 - int ret; 1012 + int i, start, end, ret; 1117 1013 1118 1014 /* message ram could be shared */ 1119 1015 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "message_ram"); ··· 1163 1059 priv->mcfg[MRAM_RXB].off, priv->mcfg[MRAM_RXB].num, 1164 1060 priv->mcfg[MRAM_TXE].off, priv->mcfg[MRAM_TXE].num, 1165 1061 priv->mcfg[MRAM_TXB].off, priv->mcfg[MRAM_TXB].num); 1062 + 1063 + /* initialize the entire Message RAM in use to avoid possible 1064 + * ECC/parity checksum errors when reading an uninitialized buffer 1065 + */ 1066 + start = priv->mcfg[MRAM_SIDF].off; 1067 + end = priv->mcfg[MRAM_TXB].off + 1068 + priv->mcfg[MRAM_TXB].num * TXB_ELEMENT_SIZE; 1069 + for (i = start; i < end; i += 4) 1070 + writel(0x0, priv->mram_base + i); 1166 1071 1167 1072 return 0; 1168 1073 }
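The can_dlc2len()/can_len2dlc() pair used in the CAN FD paths above implements the standard DLC encoding: DLCs 0-8 are the byte count itself, while 9-15 select the fixed FD payload sizes. The underlying table:

/* CAN FD DLC -> payload length in bytes (the table behind the
 * kernel's can_dlc2len()). */
static const unsigned char dlc2len[16] = {
	0, 1, 2, 3, 4, 5, 6, 7, 8,	/* classic CAN       */
	12, 16, 20, 24, 32, 48, 64	/* CAN FD extensions */
};

static unsigned char dlc_to_len(unsigned char dlc)
{
	return dlc2len[dlc & 0x0f];
}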
+1
drivers/net/can/rcar_can.c
··· 628 628 .ndo_open = rcar_can_open, 629 629 .ndo_stop = rcar_can_close, 630 630 .ndo_start_xmit = rcar_can_start_xmit, 631 + .ndo_change_mtu = can_change_mtu, 631 632 }; 632 633 633 634 static void rcar_can_rx_pkt(struct rcar_can_priv *priv)
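rcar_can here, and gs_usb and xilinx_can below, wire up .ndo_change_mtu = can_change_mtu so user space can switch a device between classic and FD frame sizes (e.g. `ip link set can0 mtu 72` while the interface is down). Roughly, the generic helper behaves like the sketch below, a paraphrase under the assumption that it accepts only the two legal CAN MTUs and refuses changes on a running interface; see drivers/net/can/dev.c for the authoritative version:

#include <errno.h>

#define CAN_MTU		16	/* sizeof(struct can_frame) */
#define CANFD_MTU	72	/* sizeof(struct canfd_frame) */
#define CTRLMODE_FD	0x20
#define IFF_UP_FLAG	0x1

struct fake_can_dev {
	int flags;
	int mtu;
	int ctrlmode;
	int ctrlmode_supported;
};

/* Sketch of the generic helper's logic: no MTU changes while the
 * interface is up, and only the two legal CAN MTUs are accepted.
 */
static int change_mtu_sketch(struct fake_can_dev *dev, int new_mtu)
{
	if (dev->flags & IFF_UP_FLAG)
		return -EBUSY;

	switch (new_mtu) {
	case CAN_MTU:
		dev->ctrlmode &= ~CTRLMODE_FD;
		break;
	case CANFD_MTU:
		if (!(dev->ctrlmode_supported & CTRLMODE_FD))
			return -EINVAL;
		dev->ctrlmode |= CTRLMODE_FD;
		break;
	default:
		return -EINVAL;
	}

	dev->mtu = new_mtu;
	return 0;
}

int main(void)
{
	struct fake_can_dev dev = { .ctrlmode_supported = CTRLMODE_FD };

	return change_mtu_sketch(&dev, CANFD_MTU);	/* 0 on success */
}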
+1 -4
drivers/net/can/sja1000/kvaser_pci.c
··· 214 214 struct net_device *dev; 215 215 struct sja1000_priv *priv; 216 216 struct kvaser_pci *board; 217 - int err, init_step; 217 + int err; 218 218 219 219 dev = alloc_sja1000dev(sizeof(struct kvaser_pci)); 220 220 if (dev == NULL) ··· 235 235 if (channel == 0) { 236 236 board->xilinx_ver = 237 237 ioread8(board->res_addr + XILINX_VERINT) >> 4; 238 - init_step = 2; 239 238 240 239 /* Assert PTADR# - we're in passive mode so the other bits are 241 240 not important */ ··· 262 263 263 264 priv->irq_flags = IRQF_SHARED; 264 265 dev->irq = pdev->irq; 265 - 266 - init_step = 4; 267 266 268 267 dev_info(&pdev->dev, "reg_base=%p conf_addr=%p irq=%d\n", 269 268 priv->reg_base, board->conf_addr, dev->irq);
+1 -2
drivers/net/can/usb/ems_usb.c
··· 434 434 if (urb->actual_length > CPC_HEADER_SIZE) { 435 435 struct ems_cpc_msg *msg; 436 436 u8 *ibuf = urb->transfer_buffer; 437 - u8 msg_count, again, start; 437 + u8 msg_count, start; 438 438 439 439 msg_count = ibuf[0] & ~0x80; 440 - again = ibuf[0] & 0x80; 441 440 442 441 start = CPC_HEADER_SIZE; 443 442
+1 -2
drivers/net/can/usb/esd_usb2.c
··· 464 464 { 465 465 struct esd_tx_urb_context *context = urb->context; 466 466 struct esd_usb2_net_priv *priv; 467 - struct esd_usb2 *dev; 468 467 struct net_device *netdev; 469 468 size_t size = sizeof(struct esd_usb2_msg); 470 469 ··· 471 472 472 473 priv = context->priv; 473 474 netdev = priv->netdev; 474 - dev = priv->usb2; 475 475 476 476 /* free up our allocated buffer */ 477 477 usb_free_coherent(urb->dev, size, ··· 1141 1143 } 1142 1144 } 1143 1145 unlink_all_urbs(dev); 1146 + kfree(dev); 1144 1147 } 1145 1148 } 1146 1149
+1
drivers/net/can/usb/gs_usb.c
··· 718 718 .ndo_open = gs_can_open, 719 719 .ndo_stop = gs_can_close, 720 720 .ndo_start_xmit = gs_can_start_xmit, 721 + .ndo_change_mtu = can_change_mtu, 721 722 }; 722 723 723 724 static struct gs_can *gs_make_candev(unsigned int channel, struct usb_interface *intf)
+3 -1
drivers/net/can/xilinx_can.c
··· 300 300 static int xcan_chip_start(struct net_device *ndev) 301 301 { 302 302 struct xcan_priv *priv = netdev_priv(ndev); 303 - u32 err, reg_msr, reg_sr_mask; 303 + u32 reg_msr, reg_sr_mask; 304 + int err; 304 305 unsigned long timeout; 305 306 306 307 /* Check if it is in reset mode */ ··· 962 961 .ndo_open = xcan_open, 963 962 .ndo_stop = xcan_close, 964 963 .ndo_start_xmit = xcan_start_xmit, 964 + .ndo_change_mtu = can_change_mtu, 965 965 }; 966 966 967 967 /**
+33 -25
drivers/net/dsa/bcm_sf2.c
··· 377 377 return IRQ_HANDLED; 378 378 } 379 379 380 + static int bcm_sf2_sw_rst(struct bcm_sf2_priv *priv) 381 + { 382 + unsigned int timeout = 1000; 383 + u32 reg; 384 + 385 + reg = core_readl(priv, CORE_WATCHDOG_CTRL); 386 + reg |= SOFTWARE_RESET | EN_CHIP_RST | EN_SW_RESET; 387 + core_writel(priv, reg, CORE_WATCHDOG_CTRL); 388 + 389 + do { 390 + reg = core_readl(priv, CORE_WATCHDOG_CTRL); 391 + if (!(reg & SOFTWARE_RESET)) 392 + break; 393 + 394 + usleep_range(1000, 2000); 395 + } while (timeout-- > 0); 396 + 397 + if (timeout == 0) 398 + return -ETIMEDOUT; 399 + 400 + return 0; 401 + } 402 + 380 403 static int bcm_sf2_sw_setup(struct dsa_switch *ds) 381 404 { 382 405 const char *reg_names[BCM_SF2_REGS_NUM] = BCM_SF2_REGS_NAME; ··· 427 404 *base = of_iomap(dn, i); 428 405 if (*base == NULL) { 429 406 pr_err("unable to find register: %s\n", reg_names[i]); 430 - return -ENODEV; 407 + ret = -ENOMEM; 408 + goto out_unmap; 431 409 } 432 410 base++; 411 + } 412 + 413 + ret = bcm_sf2_sw_rst(priv); 414 + if (ret) { 415 + pr_err("unable to software reset switch: %d\n", ret); 416 + goto out_unmap; 433 417 } 434 418 435 419 /* Disable all interrupts and request them */ ··· 514 484 out_unmap: 515 485 base = &priv->core; 516 486 for (i = 0; i < BCM_SF2_REGS_NUM; i++) { 517 - iounmap(*base); 487 + if (*base) 488 + iounmap(*base); 518 489 base++; 519 490 } 520 491 return ret; ··· 760 729 dsa_is_cpu_port(ds, port)) 761 730 bcm_sf2_port_disable(ds, port, NULL); 762 731 } 763 - 764 - return 0; 765 - } 766 - 767 - static int bcm_sf2_sw_rst(struct bcm_sf2_priv *priv) 768 - { 769 - unsigned int timeout = 1000; 770 - u32 reg; 771 - 772 - reg = core_readl(priv, CORE_WATCHDOG_CTRL); 773 - reg |= SOFTWARE_RESET | EN_CHIP_RST | EN_SW_RESET; 774 - core_writel(priv, reg, CORE_WATCHDOG_CTRL); 775 - 776 - do { 777 - reg = core_readl(priv, CORE_WATCHDOG_CTRL); 778 - if (!(reg & SOFTWARE_RESET)) 779 - break; 780 - 781 - usleep_range(1000, 2000); 782 - } while (timeout-- > 0); 783 - 784 - if (timeout == 0) 785 - return -ETIMEDOUT; 786 732 787 733 return 0; 788 734 }
+2 -1
drivers/net/ethernet/broadcom/tg3.c
··· 8563 8563 if (tnapi->rx_rcb) 8564 8564 memset(tnapi->rx_rcb, 0, TG3_RX_RCB_RING_BYTES(tp)); 8565 8565 8566 - if (tg3_rx_prodring_alloc(tp, &tnapi->prodring)) { 8566 + if (tnapi->prodring.rx_std && 8567 + tg3_rx_prodring_alloc(tp, &tnapi->prodring)) { 8567 8568 tg3_free_rings(tp); 8568 8569 return -ENOMEM; 8569 8570 }
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
··· 1082 1082 pgid = be32_to_cpu(pcmd.u.dcb.pgid.pgid); 1083 1083 1084 1084 for (i = 0; i < CXGB4_MAX_PRIORITY; i++) 1085 - pg->prio_pg[i] = (pgid >> (i * 4)) & 0xF; 1085 + pg->prio_pg[7 - i] = (pgid >> (i * 4)) & 0xF; 1086 1086 1087 1087 INIT_PORT_DCB_READ_PEER_CMD(pcmd, pi->port_id); 1088 1088 pcmd.u.dcb.pgrate.type = FW_PORT_DCB_TYPE_PGRATE;
+11
drivers/net/ethernet/emulex/benet/be_main.c
··· 4309 4309 return -EOPNOTSUPP; 4310 4310 4311 4311 br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); 4312 + if (!br_spec) 4313 + return -EINVAL; 4312 4314 4313 4315 nla_for_each_nested(attr, br_spec, rem) { 4314 4316 if (nla_type(attr) != IFLA_BRIDGE_MODE) 4315 4317 continue; 4318 + 4319 + if (nla_len(attr) < sizeof(mode)) 4320 + return -EINVAL; 4316 4321 4317 4322 mode = nla_get_u16(attr); 4318 4323 if (mode != BRIDGE_MODE_VEPA && mode != BRIDGE_MODE_VEB) ··· 4426 4421 "Disabled VxLAN offloads for UDP port %d\n", 4427 4422 be16_to_cpu(port)); 4428 4423 } 4424 + 4425 + static bool be_gso_check(struct sk_buff *skb, struct net_device *dev) 4426 + { 4427 + return vxlan_gso_check(skb); 4428 + } 4429 4429 #endif 4430 4430 4431 4431 static const struct net_device_ops be_netdev_ops = { ··· 4460 4450 #ifdef CONFIG_BE2NET_VXLAN 4461 4451 .ndo_add_vxlan_port = be_add_vxlan_port, 4462 4452 .ndo_del_vxlan_port = be_del_vxlan_port, 4453 + .ndo_gso_check = be_gso_check, 4463 4454 #endif 4464 4455 }; 4465 4456
+16 -7
drivers/net/ethernet/intel/igb/igb_main.c
··· 1012 1012 /* igb_get_stats64() might access the rings on this vector, 1013 1013 * we must wait a grace period before freeing it. 1014 1014 */ 1015 - kfree_rcu(q_vector, rcu); 1015 + if (q_vector) 1016 + kfree_rcu(q_vector, rcu); 1016 1017 } 1017 1018 1018 1019 /** ··· 1793 1792 adapter->flags &= ~IGB_FLAG_NEED_LINK_UPDATE; 1794 1793 1795 1794 for (i = 0; i < adapter->num_q_vectors; i++) { 1796 - napi_synchronize(&(adapter->q_vector[i]->napi)); 1797 - napi_disable(&(adapter->q_vector[i]->napi)); 1795 + if (adapter->q_vector[i]) { 1796 + napi_synchronize(&adapter->q_vector[i]->napi); 1797 + napi_disable(&adapter->q_vector[i]->napi); 1798 + } 1798 1799 } 1799 1800 1800 1801 ··· 3720 3717 int i; 3721 3718 3722 3719 for (i = 0; i < adapter->num_tx_queues; i++) 3723 - igb_free_tx_resources(adapter->tx_ring[i]); 3720 + if (adapter->tx_ring[i]) 3721 + igb_free_tx_resources(adapter->tx_ring[i]); 3724 3722 } 3725 3723 3726 3724 void igb_unmap_and_free_tx_resource(struct igb_ring *ring, ··· 3786 3782 int i; 3787 3783 3788 3784 for (i = 0; i < adapter->num_tx_queues; i++) 3789 - igb_clean_tx_ring(adapter->tx_ring[i]); 3785 + if (adapter->tx_ring[i]) 3786 + igb_clean_tx_ring(adapter->tx_ring[i]); 3790 3787 } 3791 3788 3792 3789 /** ··· 3824 3819 int i; 3825 3820 3826 3821 for (i = 0; i < adapter->num_rx_queues; i++) 3827 - igb_free_rx_resources(adapter->rx_ring[i]); 3822 + if (adapter->rx_ring[i]) 3823 + igb_free_rx_resources(adapter->rx_ring[i]); 3828 3824 } 3829 3825 3830 3826 /** ··· 3880 3874 int i; 3881 3875 3882 3876 for (i = 0; i < adapter->num_rx_queues; i++) 3883 - igb_clean_rx_ring(adapter->rx_ring[i]); 3877 + if (adapter->rx_ring[i]) 3878 + igb_clean_rx_ring(adapter->rx_ring[i]); 3884 3879 } 3885 3880 3886 3881 /** ··· 7411 7404 pci_restore_state(pdev); 7412 7405 pci_save_state(pdev); 7413 7406 7407 + if (!pci_device_is_present(pdev)) 7408 + return -ENODEV; 7414 7409 err = pci_enable_device_mem(pdev); 7415 7410 if (err) { 7416 7411 dev_err(&pdev->dev,
+13 -4
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 3936 3936 * if SR-IOV and VMDQ are disabled - otherwise ensure 3937 3937 * that hardware VLAN filters remain enabled. 3938 3938 */ 3939 - if (!(adapter->flags & (IXGBE_FLAG_VMDQ_ENABLED | 3940 - IXGBE_FLAG_SRIOV_ENABLED))) 3939 + if (adapter->flags & (IXGBE_FLAG_VMDQ_ENABLED | 3940 + IXGBE_FLAG_SRIOV_ENABLED)) 3941 3941 vlnctrl |= (IXGBE_VLNCTRL_VFE | IXGBE_VLNCTRL_CFIEN); 3942 3942 } else { 3943 3943 if (netdev->flags & IFF_ALLMULTI) { ··· 7669 7669 return -EOPNOTSUPP; 7670 7670 7671 7671 br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC); 7672 + if (!br_spec) 7673 + return -EINVAL; 7672 7674 7673 7675 nla_for_each_nested(attr, br_spec, rem) { 7674 7676 __u16 mode; ··· 7678 7676 7679 7677 if (nla_type(attr) != IFLA_BRIDGE_MODE) 7680 7678 continue; 7679 + 7680 + if (nla_len(attr) < sizeof(mode)) 7681 + return -EINVAL; 7681 7682 7682 7683 mode = nla_get_u16(attr); 7683 7684 if (mode == BRIDGE_MODE_VEPA) { ··· 7984 7979 int i, err, pci_using_dac, expected_gts; 7985 7980 unsigned int indices = MAX_TX_QUEUES; 7986 7981 u8 part_str[IXGBE_PBANUM_LENGTH]; 7982 + bool disable_dev = false; 7987 7983 #ifdef IXGBE_FCOE 7988 7984 u16 device_caps; 7989 7985 #endif ··· 8375 8369 iounmap(adapter->io_addr); 8376 8370 kfree(adapter->mac_table); 8377 8371 err_ioremap: 8372 + disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 8378 8373 free_netdev(netdev); 8379 8374 err_alloc_etherdev: 8380 8375 pci_release_selected_regions(pdev, 8381 8376 pci_select_bars(pdev, IORESOURCE_MEM)); 8382 8377 err_pci_reg: 8383 8378 err_dma: 8384 - if (!adapter || !test_and_set_bit(__IXGBE_DISABLED, &adapter->state)) 8379 + if (!adapter || disable_dev) 8385 8380 pci_disable_device(pdev); 8386 8381 return err; 8387 8382 } ··· 8400 8393 { 8401 8394 struct ixgbe_adapter *adapter = pci_get_drvdata(pdev); 8402 8395 struct net_device *netdev = adapter->netdev; 8396 + bool disable_dev; 8403 8397 8404 8398 ixgbe_dbg_adapter_exit(adapter); 8405 8399 ··· 8450 8442 e_dev_info("complete\n"); 8451 8443 8452 8444 kfree(adapter->mac_table); 8445 + disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 8453 8446 free_netdev(netdev); 8454 8447 8455 8448 pci_disable_pcie_error_reporting(pdev); 8456 8449 8457 - if (!test_and_set_bit(__IXGBE_DISABLED, &adapter->state)) 8450 + if (disable_dev) 8458 8451 pci_disable_device(pdev); 8459 8452 } 8460 8453
+12 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1693 1693 mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 1694 1694 1695 1695 #ifdef CONFIG_MLX4_EN_VXLAN 1696 - if (priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS) 1696 + if (priv->mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 1697 1697 vxlan_get_rx_port(dev); 1698 1698 #endif 1699 1699 priv->port_up = true; ··· 2355 2355 2356 2356 queue_work(priv->mdev->workqueue, &priv->vxlan_del_task); 2357 2357 } 2358 + 2359 + static bool mlx4_en_gso_check(struct sk_buff *skb, struct net_device *dev) 2360 + { 2361 + return vxlan_gso_check(skb); 2362 + } 2358 2363 #endif 2359 2364 2360 2365 static const struct net_device_ops mlx4_netdev_ops = { ··· 2391 2386 #ifdef CONFIG_MLX4_EN_VXLAN 2392 2387 .ndo_add_vxlan_port = mlx4_en_add_vxlan_port, 2393 2388 .ndo_del_vxlan_port = mlx4_en_del_vxlan_port, 2389 + .ndo_gso_check = mlx4_en_gso_check, 2394 2390 #endif 2395 2391 }; 2396 2392 ··· 2422 2416 .ndo_rx_flow_steer = mlx4_en_filter_rfs, 2423 2417 #endif 2424 2418 .ndo_get_phys_port_id = mlx4_en_get_phys_port_id, 2419 + #ifdef CONFIG_MLX4_EN_VXLAN 2420 + .ndo_add_vxlan_port = mlx4_en_add_vxlan_port, 2421 + .ndo_del_vxlan_port = mlx4_en_del_vxlan_port, 2422 + .ndo_gso_check = mlx4_en_gso_check, 2423 + #endif 2425 2424 }; 2426 2425 2427 2426 int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
+1 -1
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 1546 1546 1547 1547 switch (op) { 1548 1548 case RES_OP_RESERVE: 1549 - count = get_param_l(&in_param); 1549 + count = get_param_l(&in_param) & 0xffffff; 1550 1550 align = get_param_h(&in_param); 1551 1551 err = mlx4_grant_resource(dev, slave, RES_QP, count, 0); 1552 1552 if (err)
+6
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 503 503 504 504 adapter->flags |= QLCNIC_DEL_VXLAN_PORT; 505 505 } 506 + 507 + static bool qlcnic_gso_check(struct sk_buff *skb, struct net_device *dev) 508 + { 509 + return vxlan_gso_check(skb); 510 + } 506 511 #endif 507 512 508 513 static const struct net_device_ops qlcnic_netdev_ops = { ··· 531 526 #ifdef CONFIG_QLCNIC_VXLAN 532 527 .ndo_add_vxlan_port = qlcnic_add_vxlan_port, 533 528 .ndo_del_vxlan_port = qlcnic_del_vxlan_port, 529 + .ndo_gso_check = qlcnic_gso_check, 534 530 #endif 535 531 #ifdef CONFIG_NET_POLL_CONTROLLER 536 532 .ndo_poll_controller = qlcnic_poll_controller,
+7 -6
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 177 177 */ 178 178 plat->maxmtu = JUMBO_LEN; 179 179 180 - /* Set default value for multicast hash bins */ 181 - plat->multicast_filter_bins = HASH_TABLE_SIZE; 182 - 183 - /* Set default value for unicast filter entries */ 184 - plat->unicast_filter_entries = 1; 185 - 186 180 /* 187 181 * Currently only the properties needed on SPEAr600 188 182 * are provided. All other properties should be added ··· 264 270 return PTR_ERR(addr); 265 271 266 272 plat_dat = dev_get_platdata(&pdev->dev); 273 + 274 + /* Set default value for multicast hash bins */ 275 + plat_dat->multicast_filter_bins = HASH_TABLE_SIZE; 276 + 277 + /* Set default value for unicast filter entries */ 278 + plat_dat->unicast_filter_entries = 1; 279 + 267 280 if (pdev->dev.of_node) { 268 281 if (!plat_dat) 269 282 plat_dat = devm_kzalloc(&pdev->dev,
+3 -3
drivers/net/ethernet/ti/cpsw.c
··· 129 129 #define CPSW_VLAN_AWARE BIT(1) 130 130 #define CPSW_ALE_VLAN_AWARE 1 131 131 132 - #define CPSW_FIFO_NORMAL_MODE (0 << 15) 133 - #define CPSW_FIFO_DUAL_MAC_MODE (1 << 15) 134 - #define CPSW_FIFO_RATE_LIMIT_MODE (2 << 15) 132 + #define CPSW_FIFO_NORMAL_MODE (0 << 16) 133 + #define CPSW_FIFO_DUAL_MAC_MODE (1 << 16) 134 + #define CPSW_FIFO_RATE_LIMIT_MODE (2 << 16) 135 135 136 136 #define CPSW_INTPACEEN (0x3f << 16) 137 137 #define CPSW_INTPRESCALE_MASK (0x7FF << 0)
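The cpsw hunk moves the FIFO mode values from bit 15 to bit 16. A self-contained illustration of keeping such register fields honest with an explicit shift/mask pair (the 2-bit field width and the names here are assumptions for the example):

#include <assert.h>
#include <stdint.h>

#define FIFO_MODE_SHIFT	16
#define FIFO_MODE_MASK	(0x3u << FIFO_MODE_SHIFT)

/* Clear the field first, then insert the new value through the mask so
 * it can never spill into neighbouring bits.
 */
static uint32_t set_fifo_mode(uint32_t reg, uint32_t mode)
{
	reg &= ~FIFO_MODE_MASK;
	reg |= (mode << FIFO_MODE_SHIFT) & FIFO_MODE_MASK;
	return reg;
}

int main(void)
{
	uint32_t reg = set_fifo_mode(0xffffffffu, 1);	/* DUAL_MAC-style mode */

	assert((reg & FIFO_MODE_MASK) == (1u << FIFO_MODE_SHIFT));
	return 0;
}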
+8 -5
drivers/net/ieee802154/fakehard.c
··· 377 377 378 378 err = wpan_phy_register(phy); 379 379 if (err) 380 - goto out; 380 + goto err_phy_reg; 381 381 382 382 err = register_netdev(dev); 383 - if (err < 0) 384 - goto out; 383 + if (err) 384 + goto err_netdev_reg; 385 385 386 386 dev_info(&pdev->dev, "Added ieee802154 HardMAC hardware\n"); 387 387 return 0; 388 388 389 - out: 390 - unregister_netdev(dev); 389 + err_netdev_reg: 390 + wpan_phy_unregister(phy); 391 + err_phy_reg: 392 + free_netdev(dev); 393 + wpan_phy_free(phy); 391 394 return err; 392 395 } 393 396
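The fakehard probe fix above is the standard goto-unwind idiom: one label per acquired resource, released in reverse order on failure. A runnable skeleton of the pattern (the acquire/release functions are placeholders for wpan_phy_register(), register_netdev() and their counterparts):

#include <stdio.h>

static int acquire_a(void) { return 0; }	/* e.g. wpan_phy_register() */
static int acquire_b(void) { return -1; }	/* e.g. register_netdev(); fails here */
static void release_a(void) { puts("undo a"); }

static int example_probe(void)
{
	int err;

	err = acquire_a();
	if (err)
		goto err_a;

	err = acquire_b();
	if (err)
		goto err_b;

	return 0;

err_b:
	release_a();	/* undo only what already succeeded, newest first */
err_a:
	return err;
}

int main(void)
{
	return example_probe() ? 1 : 0;
}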
+3 -1
drivers/net/ppp/pptp.c
··· 506 506 int len = sizeof(struct sockaddr_pppox); 507 507 struct sockaddr_pppox sp; 508 508 509 - sp.sa_family = AF_PPPOX; 509 + memset(&sp.sa_addr, 0, sizeof(sp.sa_addr)); 510 + 511 + sp.sa_family = AF_PPPOX; 510 512 sp.sa_protocol = PX_PROTO_PPTP; 511 513 sp.sa_addr.pptp = pppox_sk(sock->sk)->proto.pptp.src_addr; 512 514
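The pptp hunk zeroes the address union before filling it, the usual cure for leaking uninitialized kernel stack (including compiler padding) to user space through a partially written structure. The idiom in isolation:

#include <string.h>

struct addr_example {
	unsigned short family;	/* 2 bytes, then likely padding */
	unsigned int   value;	/* compiler may pad before this member */
};

static void fill_addr(struct addr_example *a)
{
	memset(a, 0, sizeof(*a));	/* padding bytes now hold zero, not stale stack data */
	a->family = 1;
	a->value  = 42;
}

int main(void)
{
	struct addr_example a;

	fill_addr(&a);
	return a.family == 1 ? 0 : 1;
}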
+1
drivers/net/usb/qmi_wwan.c
··· 780 780 {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */ 781 781 {QMI_FIXED_INTF(0x413c, 0x81a8, 8)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card */ 782 782 {QMI_FIXED_INTF(0x413c, 0x81a9, 8)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */ 783 + {QMI_FIXED_INTF(0x03f0, 0x581d, 4)}, /* HP lt4112 LTE/HSPA+ Gobi 4G Module (Huawei me906e) */ 783 784 784 785 /* 4. Gobi 1000 devices */ 785 786 {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+37
drivers/net/virtio_net.c
··· 1673 1673 }; 1674 1674 #endif 1675 1675 1676 + static bool virtnet_fail_on_feature(struct virtio_device *vdev, 1677 + unsigned int fbit, 1678 + const char *fname, const char *dname) 1679 + { 1680 + if (!virtio_has_feature(vdev, fbit)) 1681 + return false; 1682 + 1683 + dev_err(&vdev->dev, "device advertises feature %s but not %s", 1684 + fname, dname); 1685 + 1686 + return true; 1687 + } 1688 + 1689 + #define VIRTNET_FAIL_ON(vdev, fbit, dbit) \ 1690 + virtnet_fail_on_feature(vdev, fbit, #fbit, dbit) 1691 + 1692 + static bool virtnet_validate_features(struct virtio_device *vdev) 1693 + { 1694 + if (!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) && 1695 + (VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_RX, 1696 + "VIRTIO_NET_F_CTRL_VQ") || 1697 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_VLAN, 1698 + "VIRTIO_NET_F_CTRL_VQ") || 1699 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE, 1700 + "VIRTIO_NET_F_CTRL_VQ") || 1701 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_MQ, "VIRTIO_NET_F_CTRL_VQ") || 1702 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR, 1703 + "VIRTIO_NET_F_CTRL_VQ"))) { 1704 + return false; 1705 + } 1706 + 1707 + return true; 1708 + } 1709 + 1676 1710 static int virtnet_probe(struct virtio_device *vdev) 1677 1711 { 1678 1712 int i, err; 1679 1713 struct net_device *dev; 1680 1714 struct virtnet_info *vi; 1681 1715 u16 max_queue_pairs; 1716 + 1717 + if (!virtnet_validate_features(vdev)) 1718 + return -EINVAL; 1682 1719 1683 1720 /* Find if host supports multiqueue virtio_net device */ 1684 1721 err = virtio_cread_feature(vdev, VIRTIO_NET_F_MQ,
+2 -8
drivers/net/vxlan.c
··· 67 67 68 68 #define VXLAN_FLAGS 0x08000000 /* struct vxlanhdr.vx_flags required value. */ 69 69 70 - /* VXLAN protocol header */ 71 - struct vxlanhdr { 72 - __be32 vx_flags; 73 - __be32 vx_vni; 74 - }; 75 - 76 70 /* UDP port for VXLAN traffic. 77 71 * The IANA assigned port is 4789, but the Linux default is 8472 78 72 * for compatibility with early adopters. ··· 2306 2312 if (ipv6) { 2307 2313 udp_conf.family = AF_INET6; 2308 2314 udp_conf.use_udp6_tx_checksums = 2309 - !!(flags & VXLAN_F_UDP_ZERO_CSUM6_TX); 2315 + !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX); 2310 2316 udp_conf.use_udp6_rx_checksums = 2311 - !!(flags & VXLAN_F_UDP_ZERO_CSUM6_RX); 2317 + !(flags & VXLAN_F_UDP_ZERO_CSUM6_RX); 2312 2318 } else { 2313 2319 udp_conf.family = AF_INET; 2314 2320 udp_conf.local_ip.s_addr = INADDR_ANY;
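The vxlan checksum fix flips !! to !: the flag requests a zero UDP6 checksum, so checksum use must be the logical inverse of the flag, not its boolean value. A self-contained demonstration (flag name shortened for the example):

#include <assert.h>
#include <stdbool.h>

#define F_UDP_ZERO_CSUM6_TX 0x1u

int main(void)
{
	unsigned int flags = F_UDP_ZERO_CSUM6_TX;	/* user asked for zero csums */

	bool before = !!(flags & F_UDP_ZERO_CSUM6_TX);	/* true: csums stayed on */
	bool after  = !(flags & F_UDP_ZERO_CSUM6_TX);	/* false: csums correctly off */

	assert(before && !after);
	return 0;
}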
+13
drivers/net/wireless/ath/ath9k/ar9003_phy.c
··· 664 664 ah->enabled_cals |= TX_CL_CAL; 665 665 else 666 666 ah->enabled_cals &= ~TX_CL_CAL; 667 + 668 + if (AR_SREV_9340(ah) || AR_SREV_9531(ah) || AR_SREV_9550(ah)) { 669 + if (ah->is_clk_25mhz) { 670 + REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1); 671 + REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7); 672 + REG_WRITE(ah, AR_SLP32_INC, 0x0001e7ae); 673 + } else { 674 + REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x261 << 1); 675 + REG_WRITE(ah, AR_SLP32_MODE, 0x0010f400); 676 + REG_WRITE(ah, AR_SLP32_INC, 0x0001e800); 677 + } 678 + udelay(100); 679 + } 667 680 } 668 681 669 682 static void ar9003_hw_prog_ini(struct ath_hw *ah,
-13
drivers/net/wireless/ath/ath9k/hw.c
··· 861 861 udelay(RTC_PLL_SETTLE_DELAY); 862 862 863 863 REG_WRITE(ah, AR_RTC_SLEEP_CLK, AR_RTC_FORCE_DERIVED_CLK); 864 - 865 - if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) { 866 - if (ah->is_clk_25mhz) { 867 - REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1); 868 - REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7); 869 - REG_WRITE(ah, AR_SLP32_INC, 0x0001e7ae); 870 - } else { 871 - REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x261 << 1); 872 - REG_WRITE(ah, AR_SLP32_MODE, 0x0010f400); 873 - REG_WRITE(ah, AR_SLP32_INC, 0x0001e800); 874 - } 875 - udelay(100); 876 - } 877 864 } 878 865 879 866 static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
+6 -3
drivers/net/wireless/ath/ath9k/main.c
··· 974 974 struct ath_vif *avp; 975 975 976 976 /* 977 - * Pick the MAC address of the first interface as the new hardware 978 - * MAC address. The hardware will use it together with the BSSID mask 979 - * when matching addresses. 977 + * The hardware will use primary station addr together with the 978 + * BSSID mask when matching addresses. 980 979 */ 981 980 memset(iter_data, 0, sizeof(*iter_data)); 982 981 memset(&iter_data->mask, 0xff, ETH_ALEN); ··· 1204 1205 list_add_tail(&avp->list, &avp->chanctx->vifs); 1205 1206 } 1206 1207 1208 + ath9k_calculate_summary_state(sc, avp->chanctx); 1209 + 1207 1210 ath9k_assign_hw_queues(hw, vif); 1208 1211 1209 1212 an->sc = sc; ··· 1274 1273 ath9k_beacon_remove_slot(sc, vif); 1275 1274 1276 1275 ath_tx_node_cleanup(sc, &avp->mcast_node); 1276 + 1277 + ath9k_calculate_summary_state(sc, avp->chanctx); 1277 1278 1278 1279 mutex_unlock(&sc->mutex); 1279 1280 }
+1 -3
drivers/net/wireless/b43/phy_common.c
··· 300 300 301 301 void b43_phy_copy(struct b43_wldev *dev, u16 destreg, u16 srcreg) 302 302 { 303 - assert_mac_suspended(dev); 304 - dev->phy.ops->phy_write(dev, destreg, 305 - dev->phy.ops->phy_read(dev, srcreg)); 303 + b43_phy_write(dev, destreg, b43_phy_read(dev, srcreg)); 306 304 } 307 305 308 306 void b43_phy_mask(struct b43_wldev *dev, u16 offset, u16 mask)
+2 -2
drivers/net/wireless/brcm80211/brcmfmac/of.c
··· 40 40 return; 41 41 42 42 irq = irq_of_parse_and_map(np, 0); 43 - if (irq < 0) { 44 - brcmf_err("interrupt could not be mapped: err=%d\n", irq); 43 + if (!irq) { 44 + brcmf_err("interrupt could not be mapped\n"); 45 45 devm_kfree(dev, sdiodev->pdata); 46 46 return; 47 47 }
+1 -1
drivers/net/wireless/brcm80211/brcmfmac/pcie.c
··· 19 19 #include <linux/pci.h> 20 20 #include <linux/vmalloc.h> 21 21 #include <linux/delay.h> 22 - #include <linux/unaligned/access_ok.h> 23 22 #include <linux/interrupt.h> 24 23 #include <linux/bcma/bcma.h> 25 24 #include <linux/sched.h> 25 + #include <asm/unaligned.h> 26 26 27 27 #include <soc.h> 28 28 #include <chipcommon.h>
+4 -2
drivers/net/wireless/brcm80211/brcmfmac/usb.c
··· 669 669 goto finalize; 670 670 } 671 671 672 - if (!brcmf_usb_ioctl_resp_wait(devinfo)) 672 + if (!brcmf_usb_ioctl_resp_wait(devinfo)) { 673 + usb_kill_urb(devinfo->ctl_urb); 673 674 ret = -ETIMEDOUT; 674 - else 675 + } else { 675 676 memcpy(buffer, tmpbuf, buflen); 677 + } 676 678 677 679 finalize: 678 680 kfree(tmpbuf);
+6
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 299 299 primary_offset = ch->center_freq1 - ch->chan->center_freq; 300 300 switch (ch->width) { 301 301 case NL80211_CHAN_WIDTH_20: 302 + case NL80211_CHAN_WIDTH_20_NOHT: 302 303 ch_inf.bw = BRCMU_CHAN_BW_20; 303 304 WARN_ON(primary_offset != 0); 304 305 break; ··· 324 323 ch_inf.sb = BRCMU_CHAN_SB_LU; 325 324 } 326 325 break; 326 + case NL80211_CHAN_WIDTH_80P80: 327 + case NL80211_CHAN_WIDTH_160: 328 + case NL80211_CHAN_WIDTH_5: 329 + case NL80211_CHAN_WIDTH_10: 327 330 default: 328 331 WARN_ON_ONCE(1); 329 332 } ··· 338 333 case IEEE80211_BAND_5GHZ: 339 334 ch_inf.band = BRCMU_CHAN_BAND_5G; 340 335 break; 336 + case IEEE80211_BAND_60GHZ: 341 337 default: 342 338 WARN_ON_ONCE(1); 343 339 }
+2
drivers/net/wireless/iwlwifi/iwl-fw.h
··· 155 155 * @IWL_UCODE_TLV_CAPA_QUIET_PERIOD_SUPPORT: supports Quiet Period requests 156 156 * @IWL_UCODE_TLV_CAPA_DQA_SUPPORT: supports dynamic queue allocation (DQA), 157 157 * which also implies support for the scheduler configuration command 158 + * @IWL_UCODE_TLV_CAPA_HOTSPOT_SUPPORT: supports Hot Spot Command 158 159 */ 159 160 enum iwl_ucode_tlv_capa { 160 161 IWL_UCODE_TLV_CAPA_D0I3_SUPPORT = BIT(0), ··· 164 163 IWL_UCODE_TLV_CAPA_WFA_TPC_REP_IE_SUPPORT = BIT(10), 165 164 IWL_UCODE_TLV_CAPA_QUIET_PERIOD_SUPPORT = BIT(11), 166 165 IWL_UCODE_TLV_CAPA_DQA_SUPPORT = BIT(12), 166 + IWL_UCODE_TLV_CAPA_HOTSPOT_SUPPORT = BIT(18), 167 167 }; 168 168 169 169 /* The default calibrate table size if not specified by firmware file */
+9 -3
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 2448 2448 2449 2449 switch (vif->type) { 2450 2450 case NL80211_IFTYPE_STATION: 2451 - /* Use aux roc framework (HS20) */ 2452 - ret = iwl_mvm_send_aux_roc_cmd(mvm, channel, 2453 - vif, duration); 2451 + if (mvm->fw->ucode_capa.capa[0] & 2452 + IWL_UCODE_TLV_CAPA_HOTSPOT_SUPPORT) { 2453 + /* Use aux roc framework (HS20) */ 2454 + ret = iwl_mvm_send_aux_roc_cmd(mvm, channel, 2455 + vif, duration); 2456 + goto out_unlock; 2457 + } 2458 + IWL_ERR(mvm, "hotspot not supported\n"); 2459 + ret = -EINVAL; 2454 2460 goto out_unlock; 2455 2461 case NL80211_IFTYPE_P2P_DEVICE: 2456 2462 /* handle below */
+10 -10
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 602 602 SCAN_COMPLETE_NOTIFICATION }; 603 603 int ret; 604 604 605 - if (mvm->scan_status == IWL_MVM_SCAN_NONE) 606 - return 0; 607 - 608 - if (iwl_mvm_is_radio_killed(mvm)) { 609 - ieee80211_scan_completed(mvm->hw, true); 610 - iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 611 - mvm->scan_status = IWL_MVM_SCAN_NONE; 612 - return 0; 613 - } 614 - 615 605 iwl_init_notification_wait(&mvm->notif_wait, &wait_scan_abort, 616 606 scan_abort_notif, 617 607 ARRAY_SIZE(scan_abort_notif), ··· 1390 1400 1391 1401 int iwl_mvm_cancel_scan(struct iwl_mvm *mvm) 1392 1402 { 1403 + if (mvm->scan_status == IWL_MVM_SCAN_NONE) 1404 + return 0; 1405 + 1406 + if (iwl_mvm_is_radio_killed(mvm)) { 1407 + ieee80211_scan_completed(mvm->hw, true); 1408 + iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 1409 + mvm->scan_status = IWL_MVM_SCAN_NONE; 1410 + return 0; 1411 + } 1412 + 1393 1413 if (mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_LMAC_SCAN) 1394 1414 return iwl_mvm_scan_offload_stop(mvm, true); 1395 1415 return iwl_mvm_cancel_regular_scan(mvm);
+1 -2
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 1894 1894 int reg; 1895 1895 __le32 *val; 1896 1896 1897 - prph_len += sizeof(*data) + sizeof(*prph) + 1898 - num_bytes_in_chunk; 1897 + prph_len += sizeof(**data) + sizeof(*prph) + num_bytes_in_chunk; 1899 1898 1900 1899 (*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_PRPH); 1901 1900 (*data)->len = cpu_to_le32(sizeof(*prph) +
+18 -44
drivers/net/wireless/rt2x00/rt2x00queue.c
··· 158 158 skb_trim(skb, frame_length); 159 159 } 160 160 161 - void rt2x00queue_insert_l2pad(struct sk_buff *skb, unsigned int header_length) 161 + /* 162 + * H/W needs L2 padding between the header and the payload if header size 163 + * is not 4 bytes aligned. 164 + */ 165 + void rt2x00queue_insert_l2pad(struct sk_buff *skb, unsigned int hdr_len) 162 166 { 163 167 - unsigned int payload_length = skb->len - header_length; 164 - unsigned int header_align = ALIGN_SIZE(skb, 0); 165 - unsigned int payload_align = ALIGN_SIZE(skb, header_length); 166 - unsigned int l2pad = payload_length ? L2PAD_SIZE(header_length) : 0; 167 - 168 - /* 169 - * Adjust the header alignment if the payload needs to be moved more 170 - * than the header. 171 - */ 172 - if (payload_align > header_align) 173 - header_align += 4; 174 - 175 - /* There is nothing to do if no alignment is needed */ 176 - if (!header_align) 177 - return; 178 - 179 - /* Reserve the amount of space needed in front of the frame */ 180 - skb_push(skb, header_align); 181 - 182 - /* 183 - * Move the header. 184 - */ 185 - memmove(skb->data, skb->data + header_align, header_length); 186 - 187 - /* Move the payload, if present and if required */ 188 - if (payload_length && payload_align) 189 - memmove(skb->data + header_length + l2pad, 190 - skb->data + header_length + l2pad + payload_align, 191 - payload_length); 192 - 193 - /* Trim the skb to the correct size */ 194 - skb_trim(skb, header_length + l2pad + payload_length); 195 - } 196 - 197 - void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length) 198 - { 199 - /* 200 - * L2 padding is only present if the skb contains more than just the 201 - * IEEE 802.11 header. 202 - */ 203 - unsigned int l2pad = (skb->len > header_length) ? 204 - L2PAD_SIZE(header_length) : 0; 167 - unsigned int l2pad = (skb->len > hdr_len) ? L2PAD_SIZE(hdr_len) : 0; 205 168 206 169 if (!l2pad) 207 170 return; 208 171 209 - memmove(skb->data + l2pad, skb->data, header_length); 172 - skb_push(skb, l2pad); 173 + memmove(skb->data, skb->data + l2pad, hdr_len); 174 + } 175 + 176 + void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int hdr_len) 177 + { 178 + unsigned int l2pad = (skb->len > hdr_len) ? L2PAD_SIZE(hdr_len) : 0; 179 + 180 + if (!l2pad) 181 + return; 182 + 183 + memmove(skb->data + l2pad, skb->data, hdr_len); 210 184 skb_pull(skb, l2pad); 211 185 } 212 186
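On the rt2x00 rewrite above: L2 padding is simply the distance from the 802.11 header length to the next 4-byte boundary. Assuming L2PAD_SIZE is defined in rt2x00.h as -(hdrlen) & 3, which matches its use here, the arithmetic works out as:

#include <assert.h>

#define L2PAD_SIZE(hdrlen)	(-(hdrlen) & 3)	/* bytes to the next 4-byte boundary */

int main(void)
{
	assert(L2PAD_SIZE(24) == 0);	/* plain data header, already aligned */
	assert(L2PAD_SIZE(26) == 2);	/* QoS data header needs 2 pad bytes */
	assert(L2PAD_SIZE(30) == 2);	/* 4-address header */
	return 0;
}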
+22 -17
drivers/net/wireless/rtlwifi/pci.c
··· 842 842 break; 843 843 } 844 844 /* handle command packet here */ 845 - if (rtlpriv->cfg->ops->rx_command_packet(hw, stats, skb)) { 845 + if (rtlpriv->cfg->ops->rx_command_packet && 846 + rtlpriv->cfg->ops->rx_command_packet(hw, stats, skb)) { 846 847 dev_kfree_skb_any(skb); 847 848 goto end; 848 849 } ··· 1128 1127 1129 1128 __skb_queue_tail(&ring->queue, pskb); 1130 1129 1131 - rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc, true, HW_DESC_OWN, 1132 - &temp_one); 1133 - 1130 + if (rtlpriv->use_new_trx_flow) { 1131 + temp_one = 4; 1132 + rtlpriv->cfg->ops->set_desc(hw, (u8 *)pbuffer_desc, true, 1133 + HW_DESC_OWN, (u8 *)&temp_one); 1134 + } else { 1135 + rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc, true, HW_DESC_OWN, 1136 + &temp_one); 1137 + } 1134 1138 return; 1135 1139 } 1136 1140 ··· 1376 1370 ring->desc = NULL; 1377 1371 if (rtlpriv->use_new_trx_flow) { 1378 1372 pci_free_consistent(rtlpci->pdev, 1379 - sizeof(*ring->desc) * ring->entries, 1373 + sizeof(*ring->buffer_desc) * ring->entries, 1380 1374 ring->buffer_desc, ring->buffer_desc_dma); 1381 - ring->desc = NULL; 1375 + ring->buffer_desc = NULL; 1382 1376 } 1383 1377 } 1384 1378 ··· 1549 1543 true, 1550 1544 HW_DESC_TXBUFF_ADDR), 1551 1545 skb->len, PCI_DMA_TODEVICE); 1552 - ring->idx = (ring->idx + 1) % ring->entries; 1553 1546 kfree_skb(skb); 1554 1547 ring->idx = (ring->idx + 1) % ring->entries; 1555 1548 } ··· 2249 2244 /*like read eeprom and so on */ 2250 2245 rtlpriv->cfg->ops->read_eeprom_info(hw); 2251 2246 2247 + if (rtlpriv->cfg->ops->init_sw_vars(hw)) { 2248 + RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Can't init_sw_vars\n"); 2249 + err = -ENODEV; 2250 + goto fail3; 2251 + } 2252 + rtlpriv->cfg->ops->init_sw_leds(hw); 2253 + 2254 + /*aspm */ 2255 + rtl_pci_init_aspm(hw); 2256 + 2252 2257 /* Init mac80211 sw */ 2253 2258 err = rtl_init_core(hw); 2254 2259 if (err) { ··· 2273 2258 RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Failed to init PCI\n"); 2274 2259 goto fail3; 2275 2260 } 2276 - 2277 - if (rtlpriv->cfg->ops->init_sw_vars(hw)) { 2278 - RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Can't init_sw_vars\n"); 2279 - err = -ENODEV; 2280 - goto fail3; 2281 - } 2282 - rtlpriv->cfg->ops->init_sw_leds(hw); 2283 - 2284 - /*aspm */ 2285 - rtl_pci_init_aspm(hw); 2286 2261 2287 2262 err = ieee80211_register_hw(hw); 2288 2263 if (err) {
+5 -2
drivers/net/wireless/rtlwifi/rtl8192se/hw.c
··· 1201 1201 1202 1202 } 1203 1203 1204 + if (type != NL80211_IFTYPE_AP && 1205 + rtlpriv->mac80211.link_state < MAC80211_LINKED) 1206 + bt_msr = rtl_read_byte(rtlpriv, MSR) & ~MSR_LINK_MASK; 1204 1207 rtl_write_byte(rtlpriv, (MSR), bt_msr); 1205 1208 1206 1209 temp = rtl_read_dword(rtlpriv, TCR); ··· 1265 1262 rtl_write_dword(rtlpriv, INTA_MASK, rtlpci->irq_mask[0]); 1266 1263 /* Support Bit 32-37(Assign as Bit 0-5) interrupt setting now */ 1267 1264 rtl_write_dword(rtlpriv, INTA_MASK + 4, rtlpci->irq_mask[1] & 0x3F); 1265 + rtlpci->irq_enabled = true; 1268 1266 } 1269 1267 1270 1268 void rtl92se_disable_interrupt(struct ieee80211_hw *hw) ··· 1280 1276 rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 1281 1277 rtl_write_dword(rtlpriv, INTA_MASK, 0); 1282 1278 rtl_write_dword(rtlpriv, INTA_MASK + 4, 0); 1283 - 1284 - synchronize_irq(rtlpci->pdev->irq); 1279 + rtlpci->irq_enabled = false; 1285 1280 } 1286 1281 1287 1282 static u8 _rtl92s_set_sysclk(struct ieee80211_hw *hw, u8 data)
+2
drivers/net/wireless/rtlwifi/rtl8192se/phy.c
··· 399 399 case 2: 400 400 currentcmd = &postcommoncmd[*step]; 401 401 break; 402 + default: 403 + return true; 402 404 } 403 405 404 406 if (currentcmd->cmdid == CMDID_END) {
+16
drivers/net/wireless/rtlwifi/rtl8192se/sw.c
··· 236 236 } 237 237 } 238 238 239 + static bool rtl92se_is_tx_desc_closed(struct ieee80211_hw *hw, u8 hw_queue, 240 + u16 index) 241 + { 242 + struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 243 + struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[hw_queue]; 244 + u8 *entry = (u8 *)(&ring->desc[ring->idx]); 245 + u8 own = (u8)rtl92se_get_desc(entry, true, HW_DESC_OWN); 246 + 247 + if (own) 248 + return false; 249 + return true; 250 + } 251 + 239 252 static struct rtl_hal_ops rtl8192se_hal_ops = { 240 253 .init_sw_vars = rtl92s_init_sw_vars, 241 254 .deinit_sw_vars = rtl92s_deinit_sw_vars, ··· 282 269 .led_control = rtl92se_led_control, 283 270 .set_desc = rtl92se_set_desc, 284 271 .get_desc = rtl92se_get_desc, 272 + .is_tx_desc_closed = rtl92se_is_tx_desc_closed, 285 273 .tx_polling = rtl92se_tx_polling, 286 274 .enable_hw_sec = rtl92se_enable_hw_security_config, 287 275 .set_key = rtl92se_set_key, ··· 320 306 .maps[MAC_RCR_ACRC32] = RCR_ACRC32, 321 307 .maps[MAC_RCR_ACF] = RCR_ACF, 322 308 .maps[MAC_RCR_AAP] = RCR_AAP, 309 + .maps[MAC_HIMR] = INTA_MASK, 310 + .maps[MAC_HIMRE] = INTA_MASK + 4, 323 311 324 312 .maps[EFUSE_TEST] = REG_EFUSE_TEST, 325 313 .maps[EFUSE_CTRL] = REG_EFUSE_CTRL,
+3 -2
drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
··· 3672 3672 mac->opmode == NL80211_IFTYPE_ADHOC) 3673 3673 macid = sta->aid + 1; 3674 3674 if (wirelessmode == WIRELESS_MODE_N_5G || 3675 - wirelessmode == WIRELESS_MODE_AC_5G) 3676 - ratr_bitmap = sta->supp_rates[NL80211_BAND_5GHZ]; 3675 + wirelessmode == WIRELESS_MODE_AC_5G || 3676 + wirelessmode == WIRELESS_MODE_A) 3677 + ratr_bitmap = sta->supp_rates[NL80211_BAND_5GHZ] << 4; 3677 3678 else 3678 3679 ratr_bitmap = sta->supp_rates[NL80211_BAND_2GHZ]; 3679 3680
+9 -6
drivers/net/xen-netback/xenbus.c
··· 39 39 static int connect_rings(struct backend_info *be, struct xenvif_queue *queue); 40 40 static void connect(struct backend_info *be); 41 41 static int read_xenbus_vif_flags(struct backend_info *be); 42 - static void backend_create_xenvif(struct backend_info *be); 42 + static int backend_create_xenvif(struct backend_info *be); 43 43 static void unregister_hotplug_status_watch(struct backend_info *be); 44 44 static void set_backend_state(struct backend_info *be, 45 45 enum xenbus_state state); ··· 352 352 be->state = XenbusStateInitWait; 353 353 354 354 /* This kicks hotplug scripts, so do it immediately. */ 355 - backend_create_xenvif(be); 355 + err = backend_create_xenvif(be); 356 + if (err) 357 + goto fail; 356 358 357 359 return 0; 358 360 ··· 399 397 } 400 398 401 399 402 - static void backend_create_xenvif(struct backend_info *be) 400 + static int backend_create_xenvif(struct backend_info *be) 403 401 { 404 402 int err; 405 403 long handle; 406 404 struct xenbus_device *dev = be->dev; 407 405 408 406 if (be->vif != NULL) 409 - return; 407 + return 0; 410 408 411 409 err = xenbus_scanf(XBT_NIL, dev->nodename, "handle", "%li", &handle); 412 410 if (err != 1) { 413 411 xenbus_dev_fatal(dev, err, "reading handle"); 414 - return; 412 + return (err < 0) ? err : -EINVAL; 415 413 } 416 414 417 415 be->vif = xenvif_alloc(&dev->dev, dev->otherend_id, handle); ··· 419 417 err = PTR_ERR(be->vif); 420 418 be->vif = NULL; 421 419 xenbus_dev_fatal(dev, err, "creating interface"); 422 - return; 420 + return err; 423 421 } 424 422 425 423 kobject_uevent(&dev->dev.kobj, KOBJ_ONLINE); 424 + return 0; 426 425 } 427 426 428 427 static void backend_disconnect(struct backend_info *be)
+16 -3
drivers/of/address.c
··· 450 450 return NULL; 451 451 } 452 452 453 + static int of_empty_ranges_quirk(void) 454 + { 455 + if (IS_ENABLED(CONFIG_PPC)) { 456 + /* To save cycles, we cache the result */ 457 + static int quirk_state = -1; 458 + 459 + if (quirk_state < 0) 460 + quirk_state = 461 + of_machine_is_compatible("Power Macintosh") || 462 + of_machine_is_compatible("MacRISC"); 463 + return quirk_state; 464 + } 465 + return false; 466 + } 467 + 453 468 static int of_translate_one(struct device_node *parent, struct of_bus *bus, 454 469 struct of_bus *pbus, __be32 *addr, 455 470 int na, int ns, int pna, const char *rprop) ··· 490 475 * This code is only enabled on powerpc. --gcl 491 476 */ 492 477 ranges = of_get_property(parent, rprop, &rlen); 493 - #if !defined(CONFIG_PPC) 494 - if (ranges == NULL) { 478 + if (ranges == NULL && !of_empty_ranges_quirk()) { 495 479 pr_err("OF: no ranges; cannot translate\n"); 496 480 return 1; 497 481 } 498 - #endif /* !defined(CONFIG_PPC) */ 499 482 if (ranges == NULL || rlen == 0) { 500 483 offset = of_read_number(addr, na); 501 484 memset(addr, 0, pna * 4);
+1 -1
drivers/of/dynamic.c
··· 247 247 * @allocflags: Allocation flags (typically pass GFP_KERNEL) 248 248 * 249 249 * Copy a property by dynamically allocating the memory of both the 250 - * property stucture and the property name & contents. The property's 250 + * property structure and the property name & contents. The property's 251 251 * flags have the OF_DYNAMIC bit set so that we can differentiate between 252 252 * dynamically allocated properties and not. 253 253 * Returns the newly allocated property or NULL on out of memory error.
+1 -1
drivers/of/fdt.c
··· 773 773 if (offset < 0) 774 774 return -ENODEV; 775 775 776 - while (match->compatible) { 776 + while (match->compatible[0]) { 777 777 unsigned long addr; 778 778 if (fdt_node_check_compatible(fdt, offset, match->compatible)) { 779 779 match++;
+8 -3
drivers/of/selftest.c
··· 896 896 return; 897 897 } 898 898 899 - while (last_node_index >= 0) { 899 + while (last_node_index-- > 0) { 900 900 if (nodes[last_node_index]) { 901 901 np = of_find_node_by_path(nodes[last_node_index]->full_name); 902 - if (strcmp(np->full_name, "/aliases") != 0) { 902 + if (np == nodes[last_node_index]) { 903 + if (of_aliases == np) { 904 + of_node_put(of_aliases); 905 + of_aliases = NULL; 906 + } 903 907 detach_node_and_children(np); 904 908 } else { 905 909 for_each_property_of_node(np, prop) { ··· 912 908 } 913 909 } 914 910 } 915 - last_node_index--; 916 911 } 917 912 } 918 913 ··· 924 921 res = selftest_data_add(); 925 922 if (res) 926 923 return res; 924 + if (!of_aliases) 925 + of_aliases = of_find_node_by_path("/aliases"); 927 926 928 927 np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a"); 929 928 if (!np) {
+1 -1
drivers/pci/access.c
··· 444 444 return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS; 445 445 } 446 446 447 - static inline bool pcie_cap_has_lnkctl(const struct pci_dev *dev) 447 + bool pcie_cap_has_lnkctl(const struct pci_dev *dev) 448 448 { 449 449 int type = pci_pcie_type(dev); 450 450
+6 -1
drivers/pci/host/pci-xgene.c
··· 631 631 if (ret) 632 632 return ret; 633 633 634 - bus = pci_scan_root_bus(&pdev->dev, 0, &xgene_pcie_ops, port, &res); 634 + bus = pci_create_root_bus(&pdev->dev, 0, 635 + &xgene_pcie_ops, port, &res); 635 636 if (!bus) 636 637 return -ENOMEM; 638 + 639 + pci_scan_child_bus(bus); 640 + pci_assign_unassigned_bus_resources(bus); 641 + pci_bus_add_devices(bus); 637 642 638 643 platform_set_drvdata(pdev, port); 639 644 return 0;
+26
drivers/pci/msi.c
··· 590 590 return entry; 591 591 } 592 592 593 + static int msi_verify_entries(struct pci_dev *dev) 594 + { 595 + struct msi_desc *entry; 596 + 597 + list_for_each_entry(entry, &dev->msi_list, list) { 598 + if (!dev->no_64bit_msi || !entry->msg.address_hi) 599 + continue; 600 + dev_err(&dev->dev, "Device has broken 64-bit MSI but arch" 601 + " tried to assign one above 4G\n"); 602 + return -EIO; 603 + } 604 + return 0; 605 + } 606 + 593 607 /** 594 608 * msi_capability_init - configure device's MSI capability structure 595 609 * @dev: pointer to the pci_dev data structure of MSI device function ··· 635 621 636 622 /* Configure MSI capability structure */ 637 623 ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI); 624 + if (ret) { 625 + msi_mask_irq(entry, mask, ~mask); 626 + free_msi_irqs(dev); 627 + return ret; 628 + } 629 + 630 + ret = msi_verify_entries(dev); 638 631 if (ret) { 639 632 msi_mask_irq(entry, mask, ~mask); 640 633 free_msi_irqs(dev); ··· 759 738 ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX); 760 739 if (ret) 761 740 goto out_avail; 741 + 742 + /* Check if all MSI entries honor device restrictions */ 743 + ret = msi_verify_entries(dev); 744 + if (ret) 745 + goto out_free; 762 746 763 747 /* 764 748 * Some devices require MSI-X to be enabled before we can touch the
+2
drivers/pci/pci.h
··· 6 6 7 7 extern const unsigned char pcie_link_speed[]; 8 8 9 + bool pcie_cap_has_lnkctl(const struct pci_dev *dev); 10 + 9 11 /* Functions internal to the PCI core code */ 10 12 11 13 int pci_create_sysfs_dev_files(struct pci_dev *pdev);
+17 -13
drivers/pci/probe.c
··· 407 407 { 408 408 struct pci_dev *dev = child->self; 409 409 u16 mem_base_lo, mem_limit_lo; 410 - unsigned long base, limit; 410 + u64 base64, limit64; 411 + dma_addr_t base, limit; 411 412 struct pci_bus_region region; 412 413 struct resource *res; 413 414 414 415 res = child->resource[2]; 415 416 pci_read_config_word(dev, PCI_PREF_MEMORY_BASE, &mem_base_lo); 416 417 pci_read_config_word(dev, PCI_PREF_MEMORY_LIMIT, &mem_limit_lo); 417 - base = ((unsigned long) mem_base_lo & PCI_PREF_RANGE_MASK) << 16; 418 - limit = ((unsigned long) mem_limit_lo & PCI_PREF_RANGE_MASK) << 16; 418 + base64 = (mem_base_lo & PCI_PREF_RANGE_MASK) << 16; 419 + limit64 = (mem_limit_lo & PCI_PREF_RANGE_MASK) << 16; 419 420 420 421 if ((mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) == PCI_PREF_RANGE_TYPE_64) { 421 422 u32 mem_base_hi, mem_limit_hi; ··· 430 429 * this, just assume they are not being used. 431 430 */ 432 431 if (mem_base_hi <= mem_limit_hi) { 433 - #if BITS_PER_LONG == 64 434 - base |= ((unsigned long) mem_base_hi) << 32; 435 - limit |= ((unsigned long) mem_limit_hi) << 32; 436 - #else 437 - if (mem_base_hi || mem_limit_hi) { 438 - dev_err(&dev->dev, "can't handle 64-bit address space for bridge\n"); 439 - return; 440 - } 441 - #endif 432 + base64 |= (u64) mem_base_hi << 32; 433 + limit64 |= (u64) mem_limit_hi << 32; 442 434 } 443 435 } 436 + 437 + base = (dma_addr_t) base64; 438 + limit = (dma_addr_t) limit64; 439 + 440 + if (base != base64) { 441 + dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n", 442 + (unsigned long long) base64); 443 + return; 444 + } 445 + 444 446 if (base <= limit) { 445 447 res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | 446 448 IORESOURCE_MEM | IORESOURCE_PREFETCH; ··· 1327 1323 ~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or); 1328 1324 1329 1325 /* Initialize Link Control Register */ 1330 - if (dev->subordinate) 1326 + if (pcie_cap_has_lnkctl(dev)) 1331 1327 pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, 1332 1328 ~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or); 1333 1329
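The bridge-window hunk computes in u64 and then detects truncation by casting to dma_addr_t and comparing back. The round-trip check in isolation, with u32 standing in for a 32-bit dma_addr_t:

#include <assert.h>
#include <stdint.h>

/* Nonzero when the 64-bit value survives a round trip through the
 * narrower type, i.e. the window is addressable on this configuration.
 */
static int fits_in_u32(uint64_t wide)
{
	uint32_t narrow = (uint32_t)wide;

	return (uint64_t)narrow == wide;
}

int main(void)
{
	assert(fits_in_u32(0xfffff000ull));	/* below 4GB: ok */
	assert(!fits_in_u32(0x1fffff000ull));	/* above 4GB: would truncate */
	return 0;
}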
+7
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 412 412 struct fc_frame_header *fh; 413 413 struct fcoe_rcv_info *fr; 414 414 struct fcoe_percpu_s *bg; 415 + struct sk_buff *tmp_skb; 415 416 unsigned short oxid; 416 417 417 418 interface = container_of(ptype, struct bnx2fc_interface, ··· 424 423 printk(KERN_ERR PFX "bnx2fc_rcv: lport is NULL\n"); 425 424 goto err; 426 425 } 426 + 427 + tmp_skb = skb_share_check(skb, GFP_ATOMIC); 428 + if (!tmp_skb) 429 + goto err; 430 + 431 + skb = tmp_skb; 427 432 428 433 if (unlikely(eth_hdr(skb)->h_proto != htons(ETH_P_FCOE))) { 429 434 printk(KERN_ERR PFX "bnx2fc_rcv: Wrong FC type frame\n");
+2
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 828 828 if (status == CPL_ERR_RTX_NEG_ADVICE) 829 829 goto rel_skb; 830 830 831 + module_put(THIS_MODULE); 832 + 831 833 if (status && status != CPL_ERR_TCAM_FULL && 832 834 status != CPL_ERR_CONN_EXIST && 833 835 status != CPL_ERR_ARP_MISS)
+1 -1
drivers/scsi/cxgbi/libcxgbi.c
··· 816 816 read_lock_bh(&csk->callback_lock); 817 817 if (csk->user_data) 818 818 iscsi_conn_failure(csk->user_data, 819 - ISCSI_ERR_CONN_FAILED); 819 + ISCSI_ERR_TCP_CONN_CLOSE); 820 820 read_unlock_bh(&csk->callback_lock); 821 821 } 822 822 }
+1
drivers/scsi/scsi_devinfo.c
··· 202 202 {"IOMEGA", "Io20S *F", NULL, BLIST_KEY}, 203 203 {"INSITE", "Floptical F*8I", NULL, BLIST_KEY}, 204 204 {"INSITE", "I325VM", NULL, BLIST_KEY}, 205 + {"Intel", "Multi-Flex", NULL, BLIST_NO_RSOC}, 205 206 {"iRiver", "iFP Mass Driver", NULL, BLIST_NOT_LOCKABLE | BLIST_INQUIRY_36}, 206 207 {"LASOUND", "CDX7405", "3.10", BLIST_MAX5LUN | BLIST_SINGLELUN}, 207 208 {"MATSHITA", "PD-1", NULL, BLIST_FORCELUN | BLIST_SINGLELUN},
+5 -10
drivers/scsi/ufs/ufshcd-pltfrm.c
··· 102 102 clkfreq = devm_kzalloc(dev, sz * sizeof(*clkfreq), 103 103 GFP_KERNEL); 104 104 if (!clkfreq) { 105 - dev_err(dev, "%s: no memory\n", "freq-table-hz"); 106 105 ret = -ENOMEM; 107 106 goto out; 108 107 } ··· 111 112 if (ret && (ret != -EINVAL)) { 112 113 dev_err(dev, "%s: error reading array %d\n", 113 114 "freq-table-hz", ret); 114 - goto free_clkfreq; 115 + return ret; 115 116 } 116 117 117 118 for (i = 0; i < sz; i += 2) { 118 119 ret = of_property_read_string_index(np, 119 120 "clock-names", i/2, (const char **)&name); 120 121 if (ret) 121 - goto free_clkfreq; 122 + goto out; 122 123 123 124 clki = devm_kzalloc(dev, sizeof(*clki), GFP_KERNEL); 124 125 if (!clki) { 125 126 ret = -ENOMEM; 126 - goto free_clkfreq; 127 + goto out; 127 128 } 128 129 129 130 clki->min_freq = clkfreq[i]; ··· 133 134 clki->min_freq, clki->max_freq, clki->name); 134 135 list_add_tail(&clki->list, &hba->clk_list_head); 135 136 } 136 - free_clkfreq: 137 - kfree(clkfreq); 138 137 out: 139 138 return ret; 140 139 } ··· 159 162 } 160 163 161 164 vreg = devm_kzalloc(dev, sizeof(*vreg), GFP_KERNEL); 162 - if (!vreg) { 163 - dev_err(dev, "No memory for %s regulator\n", name); 164 - goto out; 165 - } 165 + if (!vreg) 166 + return -ENOMEM; 166 167 167 168 vreg->name = kstrdup(name, GFP_KERNEL); 168 169
+64 -40
drivers/scsi/ufs/ufshcd.c
··· 744 744 if (!ufshcd_is_clkgating_allowed(hba)) 745 745 return; 746 746 device_remove_file(hba->dev, &hba->clk_gating.delay_attr); 747 + cancel_work_sync(&hba->clk_gating.ungate_work); 748 + cancel_delayed_work_sync(&hba->clk_gating.gate_work); 747 749 } 748 750 749 751 /* Must be called with host lock acquired */ ··· 2248 2246 return ret; 2249 2247 } 2250 2248 2249 + /** 2250 + * ufshcd_init_pwr_info - setting the POR (power on reset) 2251 + * values in hba power info 2252 + * @hba: per-adapter instance 2253 + */ 2254 + static void ufshcd_init_pwr_info(struct ufs_hba *hba) 2255 + { 2256 + hba->pwr_info.gear_rx = UFS_PWM_G1; 2257 + hba->pwr_info.gear_tx = UFS_PWM_G1; 2258 + hba->pwr_info.lane_rx = 1; 2259 + hba->pwr_info.lane_tx = 1; 2260 + hba->pwr_info.pwr_rx = SLOWAUTO_MODE; 2261 + hba->pwr_info.pwr_tx = SLOWAUTO_MODE; 2262 + hba->pwr_info.hs_rate = 0; 2263 + } 2264 + 2251 2265 /** 2252 2266 * ufshcd_get_max_pwr_mode - reads the max power mode negotiated with device 2253 2267 * @hba: per-adapter instance ··· 2862 2844 hba = shost_priv(sdev->host); 2863 2845 scsi_deactivate_tcq(sdev, hba->nutrs); 2864 2846 /* Drop the reference as it won't be needed anymore */ 2865 - if (ufshcd_scsi_to_upiu_lun(sdev->lun) == UFS_UPIU_UFS_DEVICE_WLUN) 2847 + if (ufshcd_scsi_to_upiu_lun(sdev->lun) == UFS_UPIU_UFS_DEVICE_WLUN) { 2848 + unsigned long flags; 2849 + 2850 + spin_lock_irqsave(hba->host->host_lock, flags); 2866 2851 hba->sdev_ufs_device = NULL; 2852 + spin_unlock_irqrestore(hba->host->host_lock, flags); 2853 + } 2867 2854 } 2868 2855 2869 2856 /** ··· 4085 4062 static int ufshcd_scsi_add_wlus(struct ufs_hba *hba) 4086 4063 { 4087 4064 int ret = 0; 4065 + struct scsi_device *sdev_rpmb; 4066 + struct scsi_device *sdev_boot; 4088 4067 4089 4068 hba->sdev_ufs_device = __scsi_add_device(hba->host, 0, 0, 4090 4069 ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_UFS_DEVICE_WLUN), NULL); ··· 4095 4070 hba->sdev_ufs_device = NULL; 4096 4071 goto out; 4097 4072 } 4073 + scsi_device_put(hba->sdev_ufs_device); 4098 4074 4099 - hba->sdev_boot = __scsi_add_device(hba->host, 0, 0, 4075 + sdev_boot = __scsi_add_device(hba->host, 0, 0, 4100 4076 ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_BOOT_WLUN), NULL); 4101 - if (IS_ERR(hba->sdev_boot)) { 4102 - ret = PTR_ERR(hba->sdev_boot); 4103 - hba->sdev_boot = NULL; 4077 + if (IS_ERR(sdev_boot)) { 4078 + ret = PTR_ERR(sdev_boot); 4104 4079 goto remove_sdev_ufs_device; 4105 4080 } 4081 + scsi_device_put(sdev_boot); 4106 4082 4107 - hba->sdev_rpmb = __scsi_add_device(hba->host, 0, 0, 4083 + sdev_rpmb = __scsi_add_device(hba->host, 0, 0, 4108 4084 ufshcd_upiu_wlun_to_scsi_wlun(UFS_UPIU_RPMB_WLUN), NULL); 4109 - if (IS_ERR(hba->sdev_rpmb)) { 4110 - ret = PTR_ERR(hba->sdev_rpmb); 4111 - hba->sdev_rpmb = NULL; 4085 + if (IS_ERR(sdev_rpmb)) { 4086 + ret = PTR_ERR(sdev_rpmb); 4112 4087 goto remove_sdev_boot; 4113 4088 } 4089 + scsi_device_put(sdev_rpmb); 4114 4090 goto out; 4115 4091 4116 4092 remove_sdev_boot: 4117 - scsi_remove_device(hba->sdev_boot); 4093 + scsi_remove_device(sdev_boot); 4118 4094 remove_sdev_ufs_device: 4119 4095 scsi_remove_device(hba->sdev_ufs_device); 4120 4096 out: 4121 4097 return ret; 4122 - } 4123 - 4124 - /** 4125 - * ufshcd_scsi_remove_wlus - Removes the W-LUs which were added by 4126 - * ufshcd_scsi_add_wlus() 4127 - * @hba: per-adapter instance 4128 - * 4129 - */ 4130 - static void ufshcd_scsi_remove_wlus(struct ufs_hba *hba) 4131 - { 4132 - if (hba->sdev_ufs_device) { 4133 - scsi_remove_device(hba->sdev_ufs_device); 4134 - hba->sdev_ufs_device = NULL; 
4135 - } 4136 - 4137 - if (hba->sdev_boot) { 4138 - scsi_remove_device(hba->sdev_boot); 4139 - hba->sdev_boot = NULL; 4140 - } 4141 - 4142 - if (hba->sdev_rpmb) { 4143 - scsi_remove_device(hba->sdev_rpmb); 4144 - hba->sdev_rpmb = NULL; 4145 - } 4146 4098 } 4147 4099 4148 4100 /** ··· 4135 4133 ret = ufshcd_link_startup(hba); 4136 4134 if (ret) 4137 4135 goto out; 4136 + 4137 + ufshcd_init_pwr_info(hba); 4138 4138 4139 4139 /* UniPro link is active now */ 4140 4140 ufshcd_set_link_active(hba); ··· 4268 4264 static inline int ufshcd_config_vreg_lpm(struct ufs_hba *hba, 4269 4265 struct ufs_vreg *vreg) 4270 4266 { 4267 + if (!vreg) 4268 + return 0; 4269 + 4271 4270 return ufshcd_config_vreg_load(hba->dev, vreg, UFS_VREG_LPM_LOAD_UA); 4272 4271 } 4273 4272 4274 4273 static inline int ufshcd_config_vreg_hpm(struct ufs_hba *hba, 4275 4274 struct ufs_vreg *vreg) 4276 4275 { 4276 + if (!vreg) 4277 + return 0; 4278 + 4277 4279 return ufshcd_config_vreg_load(hba->dev, vreg, vreg->max_uA); 4278 4280 } 4279 4281 ··· 4481 4471 if (!IS_ERR_OR_NULL(clki->clk) && clki->enabled) 4482 4472 clk_disable_unprepare(clki->clk); 4483 4473 } 4484 - } else if (!ret && on) { 4474 + } else if (on) { 4485 4475 spin_lock_irqsave(hba->host->host_lock, flags); 4486 4476 hba->clk_gating.state = CLKS_ON; 4487 4477 spin_unlock_irqrestore(hba->host->host_lock, flags); ··· 4685 4675 { 4686 4676 unsigned char cmd[6] = { START_STOP }; 4687 4677 struct scsi_sense_hdr sshdr; 4688 - struct scsi_device *sdp = hba->sdev_ufs_device; 4678 + struct scsi_device *sdp; 4679 + unsigned long flags; 4689 4680 int ret; 4690 4681 4691 - if (!sdp || !scsi_device_online(sdp)) 4692 - return -ENODEV; 4682 + spin_lock_irqsave(hba->host->host_lock, flags); 4683 + sdp = hba->sdev_ufs_device; 4684 + if (sdp) { 4685 + ret = scsi_device_get(sdp); 4686 + if (!ret && !scsi_device_online(sdp)) { 4687 + ret = -ENODEV; 4688 + scsi_device_put(sdp); 4689 + } 4690 + } else { 4691 + ret = -ENODEV; 4692 + } 4693 + spin_unlock_irqrestore(hba->host->host_lock, flags); 4694 + 4695 + if (ret) 4696 + return ret; 4693 4697 4694 4698 /* 4695 4699 * If scsi commands fail, the scsi mid-layer schedules scsi error- ··· 4742 4718 if (!ret) 4743 4719 hba->curr_dev_pwr_mode = pwr_mode; 4744 4720 out: 4721 + scsi_device_put(sdp); 4745 4722 hba->host->eh_noresume = 0; 4746 4723 return ret; 4747 4724 } ··· 5112 5087 int ret = 0; 5113 5088 5114 5089 if (!hba || !hba->is_powered) 5115 - goto out; 5090 + return 0; 5116 5091 5117 5092 if (pm_runtime_suspended(hba->dev)) { 5118 5093 if (hba->rpm_lvl == hba->spm_lvl) ··· 5256 5231 void ufshcd_remove(struct ufs_hba *hba) 5257 5232 { 5258 5233 scsi_remove_host(hba->host); 5259 - ufshcd_scsi_remove_wlus(hba); 5260 5234 /* disable interrupts */ 5261 5235 ufshcd_disable_intr(hba, hba->intr_mask); 5262 5236 ufshcd_hba_stop(hba);
-2
drivers/scsi/ufs/ufshcd.h
··· 392 392 * "UFS device" W-LU. 393 393 */ 394 394 struct scsi_device *sdev_ufs_device; 395 - struct scsi_device *sdev_rpmb; 396 - struct scsi_device *sdev_boot; 397 395 398 396 enum ufs_dev_pwr_mode curr_dev_pwr_mode; 399 397 enum uic_link_state uic_link_state;
+1 -5
drivers/spi/spi-dw.c
··· 376 376 chip = dws->cur_chip; 377 377 spi = message->spi; 378 378 379 - if (unlikely(!chip->clk_div)) 380 - chip->clk_div = dws->max_freq / chip->speed_hz; 381 - 382 379 if (message->state == ERROR_STATE) { 383 380 message->status = -EIO; 384 381 goto early_exit; ··· 416 419 if (transfer->speed_hz) { 417 420 speed = chip->speed_hz; 418 421 419 - if (transfer->speed_hz != speed) { 422 + if ((transfer->speed_hz != speed) || (!chip->clk_div)) { 420 423 speed = transfer->speed_hz; 421 424 422 425 /* clk_div doesn't support odd number */ ··· 578 581 dev_err(&spi->dev, "No max speed HZ parameter\n"); 579 582 return -EINVAL; 580 583 } 581 - chip->speed_hz = spi->max_speed_hz; 582 584 583 585 chip->tmode = 0; /* Tx & Rx */ 584 586 /* Default SPI mode is SCPOL = 0, SCPH = 0 */
+2 -2
drivers/spi/spi-sirf.c
··· 562 562 563 563 sspi->word_width = DIV_ROUND_UP(bits_per_word, 8); 564 564 txfifo_ctrl = SIRFSOC_SPI_FIFO_THD(SIRFSOC_SPI_FIFO_SIZE / 2) | 565 - sspi->word_width; 565 + (sspi->word_width >> 1); 566 566 rxfifo_ctrl = SIRFSOC_SPI_FIFO_THD(SIRFSOC_SPI_FIFO_SIZE / 2) | 567 - sspi->word_width; 567 + (sspi->word_width >> 1); 568 568 569 569 if (!(spi->mode & SPI_CS_HIGH)) 570 570 regval |= SIRFSOC_SPI_CS_IDLE_STAT;
+3 -3
drivers/spi/spi.c
··· 615 615 sg_free_table(sgt); 616 616 return -ENOMEM; 617 617 } 618 - sg_buf = page_address(vm_page) + 619 - ((size_t)buf & ~PAGE_MASK); 618 + sg_set_page(&sgt->sgl[i], vm_page, 619 + min, offset_in_page(buf)); 620 620 } else { 621 621 sg_buf = buf; 622 + sg_set_buf(&sgt->sgl[i], sg_buf, min); 622 623 } 623 624 624 - sg_set_buf(&sgt->sgl[i], sg_buf, min); 625 625 626 626 buf += min; 627 627 len -= min;
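The spi.c fix matters because vmalloc'ed memory is only virtually contiguous: each page must enter the scatterlist through sg_set_page() with its own page pointer and in-page offset. The offset_in_page() arithmetic it relies on, shown standalone with an assumed 4 KiB page:

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

static unsigned int offset_in_page(uintptr_t addr)
{
	return (unsigned int)(addr & (PAGE_SIZE - 1));
}

int main(void)
{
	uintptr_t buf = 0x1000f00;	/* hypothetical vmalloc address */

	assert(offset_in_page(buf) == 0xf00);
	/* the first sg entry may only cover the remainder of that page: */
	assert(PAGE_SIZE - offset_in_page(buf) == 0x100);
	return 0;
}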
+11 -11
drivers/staging/rtl8188eu/core/rtw_cmd.c
··· 275 275 if (check_fwstate(pmlmepriv, _FW_LINKED) == true) 276 276 rtw_lps_ctrl_wk_cmd(padapter, LPS_CTRL_SCAN, 1); 277 277 278 - ph2c = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 278 + ph2c = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 279 279 if (ph2c == NULL) 280 280 return _FAIL; 281 281 282 - psurveyPara = kzalloc(sizeof(struct sitesurvey_parm), GFP_KERNEL); 282 + psurveyPara = kzalloc(sizeof(struct sitesurvey_parm), GFP_ATOMIC); 283 283 if (psurveyPara == NULL) { 284 284 kfree(ph2c); 285 285 return _FAIL; ··· 405 405 else 406 406 RT_TRACE(_module_rtl871x_cmd_c_, _drv_notice_, ("+Join cmd: SSid =[%s]\n", pmlmepriv->assoc_ssid.Ssid)); 407 407 408 - pcmd = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 408 + pcmd = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 409 409 if (pcmd == NULL) { 410 410 res = _FAIL; 411 411 RT_TRACE(_module_rtl871x_cmd_c_, _drv_err_, ("rtw_joinbss_cmd: memory allocate for cmd_obj fail!!!\n")); ··· 755 755 u8 res = _SUCCESS; 756 756 757 757 758 - ph2c = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 758 + ph2c = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 759 759 if (ph2c == NULL) { 760 760 res = _FAIL; 761 761 goto exit; 762 762 } 763 763 764 - pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_KERNEL); 764 + pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_ATOMIC); 765 765 if (pdrvextra_cmd_parm == NULL) { 766 766 kfree(ph2c); 767 767 res = _FAIL; ··· 967 967 u8 res = _SUCCESS; 968 968 969 969 if (enqueue) { 970 - ph2c = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 970 + ph2c = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 971 971 if (ph2c == NULL) { 972 972 res = _FAIL; 973 973 goto exit; 974 974 } 975 975 976 - pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_KERNEL); 976 + pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_ATOMIC); 977 977 if (pdrvextra_cmd_parm == NULL) { 978 978 kfree(ph2c); 979 979 res = _FAIL; ··· 1010 1010 1011 1011 u8 res = _SUCCESS; 1012 1012 1013 - ph2c = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 1013 + ph2c = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 1014 1014 if (ph2c == NULL) { 1015 1015 res = _FAIL; 1016 1016 goto exit; 1017 1017 } 1018 1018 1019 - pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_KERNEL); 1019 + pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_ATOMIC); 1020 1020 if (pdrvextra_cmd_parm == NULL) { 1021 1021 kfree(ph2c); 1022 1022 res = _FAIL; ··· 1088 1088 1089 1089 u8 res = _SUCCESS; 1090 1090 1091 - ppscmd = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 1091 + ppscmd = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 1092 1092 if (ppscmd == NULL) { 1093 1093 res = _FAIL; 1094 1094 goto exit; 1095 1095 } 1096 1096 1097 - pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_KERNEL); 1097 + pdrvextra_cmd_parm = kzalloc(sizeof(struct drvextra_cmd_parm), GFP_ATOMIC); 1098 1098 if (pdrvextra_cmd_parm == NULL) { 1099 1099 kfree(ppscmd); 1100 1100 res = _FAIL;
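The GFP_KERNEL to GFP_ATOMIC conversions in this and the following rtl8188eu hunks apply the basic rule that atomic context (timers, softirqs, or with a spinlock held) must never sleep, while GFP_KERNEL may sleep to reclaim memory; GFP_ATOMIC never sleeps but fails more readily, so the error paths above matter. A trivial restatement of the rule (the enum and helper are illustrative only, not kernel API):

#include <stdio.h>
#include <string.h>

enum alloc_ctx {
	CTX_PROCESS,	/* may sleep: normal syscall/process context */
	CTX_ATOMIC,	/* must not sleep: IRQ, softirq, timer, spinlock held */
};

static const char *gfp_for(enum alloc_ctx ctx)
{
	return ctx == CTX_ATOMIC ? "GFP_ATOMIC"		/* never sleeps, fails sooner */
				 : "GFP_KERNEL";	/* may sleep and reclaim */
}

int main(void)
{
	/* the rtw survey/join commands run from timer callbacks: atomic */
	return strcmp(gfp_for(CTX_ATOMIC), "GFP_ATOMIC") != 0;
}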
+6 -6
drivers/staging/rtl8188eu/core/rtw_mlme_ext.c
··· 4241 4241 pcmdpriv = &padapter->cmdpriv; 4242 4242 4243 4243 4244 - pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 4244 + pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 4245 4245 if (pcmd_obj == NULL) 4246 4246 return; 4247 4247 4248 4248 cmdsz = (sizeof(struct survey_event) + sizeof(struct C2HEvent_Header)); 4249 - pevtcmd = kzalloc(cmdsz, GFP_KERNEL); 4249 + pevtcmd = kzalloc(cmdsz, GFP_ATOMIC); 4250 4250 if (pevtcmd == NULL) { 4251 4251 kfree(pcmd_obj); 4252 4252 return; ··· 4339 4339 struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info); 4340 4340 struct cmd_priv *pcmdpriv = &padapter->cmdpriv; 4341 4341 4342 - pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 4342 + pcmd_obj = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 4343 4343 if (pcmd_obj == NULL) 4344 4344 return; 4345 4345 4346 4346 cmdsz = (sizeof(struct joinbss_event) + sizeof(struct C2HEvent_Header)); 4347 - pevtcmd = kzalloc(cmdsz, GFP_KERNEL); 4347 + pevtcmd = kzalloc(cmdsz, GFP_ATOMIC); 4348 4348 if (pevtcmd == NULL) { 4349 4349 kfree(pcmd_obj); 4350 4350 return; ··· 4854 4854 pmlmeext->scan_abort = false;/* reset */ 4855 4855 } 4856 4856 4857 - ph2c = kzalloc(sizeof(struct cmd_obj), GFP_KERNEL); 4857 + ph2c = kzalloc(sizeof(struct cmd_obj), GFP_ATOMIC); 4858 4858 if (ph2c == NULL) 4859 4859 goto exit_survey_timer_hdl; 4860 4860 4861 - psurveyPara = kzalloc(sizeof(struct sitesurvey_parm), GFP_KERNEL); 4861 + psurveyPara = kzalloc(sizeof(struct sitesurvey_parm), GFP_ATOMIC); 4862 4862 if (psurveyPara == NULL) { 4863 4863 kfree(ph2c); 4864 4864 goto exit_survey_timer_hdl;
+1 -1
drivers/staging/rtl8188eu/core/rtw_wlan_util.c
··· 935 935 return true; 936 936 } 937 937 938 - bssid = kzalloc(sizeof(struct wlan_bssid_ex), GFP_KERNEL); 938 + bssid = kzalloc(sizeof(struct wlan_bssid_ex), GFP_ATOMIC); 939 939 940 940 subtype = GetFrameSubType(pframe) >> 4; 941 941
+1
drivers/staging/rtl8188eu/os_dep/usb_intf.c
··· 47 47 {USB_DEVICE(0x07b8, 0x8179)}, /* Abocom - Abocom */ 48 48 {USB_DEVICE(0x2001, 0x330F)}, /* DLink DWA-125 REV D1 */ 49 49 {USB_DEVICE(0x2001, 0x3310)}, /* Dlink DWA-123 REV D1 */ 50 + {USB_DEVICE(0x2001, 0x3311)}, /* DLink GO-USB-N150 REV B1 */ 50 51 {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ 51 52 {} /* Terminating entry */ 52 53 };
+1 -1
drivers/target/iscsi/iscsi_target.c
··· 3491 3491 len = sprintf(buf, "TargetAddress=" 3492 3492 "%s:%hu,%hu", 3493 3493 inaddr_any ? conn->local_ip : np->np_ip, 3494 - inaddr_any ? conn->local_port : np->np_port, 3494 + np->np_port, 3495 3495 tpg->tpgt); 3496 3496 len += 1; 3497 3497
+5 -4
drivers/target/target_core_pr.c
··· 2738 2738 struct t10_pr_registration *pr_reg, *pr_reg_tmp, *pr_reg_n, *pr_res_holder; 2739 2739 struct t10_reservation *pr_tmpl = &dev->t10_pr; 2740 2740 u32 pr_res_mapped_lun = 0; 2741 - int all_reg = 0, calling_it_nexus = 0, released_regs = 0; 2741 + int all_reg = 0, calling_it_nexus = 0; 2742 + bool sa_res_key_unmatched = sa_res_key != 0; 2742 2743 int prh_type = 0, prh_scope = 0; 2743 2744 2744 2745 if (!se_sess) ··· 2814 2813 if (!all_reg) { 2815 2814 if (pr_reg->pr_res_key != sa_res_key) 2816 2815 continue; 2816 + sa_res_key_unmatched = false; 2817 2817 2818 2818 calling_it_nexus = (pr_reg_n == pr_reg) ? 1 : 0; 2819 2819 pr_reg_nacl = pr_reg->pr_reg_nacl; ··· 2822 2820 __core_scsi3_free_registration(dev, pr_reg, 2823 2821 (preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : 2824 2822 NULL, calling_it_nexus); 2825 - released_regs++; 2826 2823 } else { 2827 2824 /* 2828 2825 * Case for any existing all registrants type ··· 2839 2838 if ((sa_res_key) && 2840 2839 (pr_reg->pr_res_key != sa_res_key)) 2841 2840 continue; 2841 + sa_res_key_unmatched = false; 2842 2842 2843 2843 calling_it_nexus = (pr_reg_n == pr_reg) ? 1 : 0; 2844 2844 if (calling_it_nexus) ··· 2850 2848 __core_scsi3_free_registration(dev, pr_reg, 2851 2849 (preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : 2852 2850 NULL, 0); 2853 - released_regs++; 2854 2851 } 2855 2852 if (!calling_it_nexus) 2856 2853 core_scsi3_ua_allocate(pr_reg_nacl, ··· 2864 2863 * registered reservation key, then the device server shall 2865 2864 * complete the command with RESERVATION CONFLICT status. 2866 2865 */ 2867 - if (!released_regs) { 2866 + if (sa_res_key_unmatched) { 2868 2867 spin_unlock(&dev->dev_reservation_lock); 2869 2868 core_scsi3_put_pr_reg(pr_reg_n); 2870 2869 return TCM_RESERVATION_CONFLICT;
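The target_core_pr.c change above deserves a second look: SPC-3 PREEMPT returns RESERVATION CONFLICT when the service action key matched no registration, which is not the same condition as "no registration was freed". A toy model of the corrected control flow (keys invented for the example):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    unsigned long long regs[] = { 0x1111, 0x2222, 0x3333 };
    unsigned long long sa_res_key = 0x4444;    /* matches nothing */
    bool sa_res_key_unmatched = sa_res_key != 0;
    size_t i;

    for (i = 0; i < sizeof(regs) / sizeof(regs[0]); i++) {
        if (regs[i] != sa_res_key)
            continue;
        /* a match clears the flag even when the registration
         * is not freed (the all-registrants case) */
        sa_res_key_unmatched = false;
    }

    printf("%s\n", sa_res_key_unmatched ?
           "RESERVATION CONFLICT" : "preempt proceeds");
    return 0;
}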
+1 -1
drivers/target/target_core_transport.c
··· 2292 2292 * and let it call back once the write buffers are ready. 2293 2293 */ 2294 2294 target_add_to_state_list(cmd); 2295 - if (cmd->data_direction != DMA_TO_DEVICE) { 2295 + if (cmd->data_direction != DMA_TO_DEVICE || cmd->data_length == 0) { 2296 2296 target_execute_cmd(cmd); 2297 2297 return 0; 2298 2298 }
+21 -16
drivers/thermal/cpu_cooling.c
··· 50 50 unsigned int cpufreq_state; 51 51 unsigned int cpufreq_val; 52 52 struct cpumask allowed_cpus; 53 + struct list_head node; 53 54 }; 54 55 static DEFINE_IDR(cpufreq_idr); 55 56 static DEFINE_MUTEX(cooling_cpufreq_lock); 56 57 57 58 static unsigned int cpufreq_dev_count; 58 59 59 - /* notify_table passes value to the CPUFREQ_ADJUST callback function. */ 60 - #define NOTIFY_INVALID NULL 61 - static struct cpufreq_cooling_device *notify_device; 60 + static LIST_HEAD(cpufreq_dev_list); 62 61 63 62 /** 64 63 * get_idr - function to get a unique id. ··· 286 287 287 288 cpufreq_device->cpufreq_state = cooling_state; 288 289 cpufreq_device->cpufreq_val = clip_freq; 289 - notify_device = cpufreq_device; 290 290 291 291 for_each_cpu(cpuid, mask) { 292 292 if (is_cpufreq_valid(cpuid)) 293 293 cpufreq_update_policy(cpuid); 294 294 } 295 - 296 - notify_device = NOTIFY_INVALID; 297 295 298 296 return 0; 299 297 } ··· 312 316 { 313 317 struct cpufreq_policy *policy = data; 314 318 unsigned long max_freq = 0; 319 + struct cpufreq_cooling_device *cpufreq_dev; 315 320 316 - if (event != CPUFREQ_ADJUST || notify_device == NOTIFY_INVALID) 321 + if (event != CPUFREQ_ADJUST) 317 322 return 0; 318 323 319 - if (cpumask_test_cpu(policy->cpu, &notify_device->allowed_cpus)) 320 - max_freq = notify_device->cpufreq_val; 321 - else 322 - return 0; 324 + mutex_lock(&cooling_cpufreq_lock); 325 + list_for_each_entry(cpufreq_dev, &cpufreq_dev_list, node) { 326 + if (!cpumask_test_cpu(policy->cpu, 327 + &cpufreq_dev->allowed_cpus)) 328 + continue; 323 329 324 - /* Never exceed user_policy.max */ 325 - if (max_freq > policy->user_policy.max) 326 - max_freq = policy->user_policy.max; 330 + if (!cpufreq_dev->cpufreq_val) 331 + cpufreq_dev->cpufreq_val = get_cpu_frequency( 332 + cpumask_any(&cpufreq_dev->allowed_cpus), 333 + cpufreq_dev->cpufreq_state); 327 334 328 - if (policy->max != max_freq) 329 - cpufreq_verify_within_limits(policy, 0, max_freq); 335 + max_freq = cpufreq_dev->cpufreq_val; 336 + 337 + if (policy->max != max_freq) 338 + cpufreq_verify_within_limits(policy, 0, max_freq); 339 + } 340 + mutex_unlock(&cooling_cpufreq_lock); 330 341 331 342 return 0; 332 343 } ··· 489 486 cpufreq_register_notifier(&thermal_cpufreq_notifier_block, 490 487 CPUFREQ_POLICY_NOTIFIER); 491 488 cpufreq_dev_count++; 489 + list_add(&cpufreq_dev->node, &cpufreq_dev_list); 492 490 493 491 mutex_unlock(&cooling_cpufreq_lock); 494 492 ··· 553 549 554 550 cpufreq_dev = cdev->devdata; 555 551 mutex_lock(&cooling_cpufreq_lock); 552 + list_del(&cpufreq_dev->node); 556 553 cpufreq_dev_count--; 557 554 558 555 /* Unregister the notifier for the last cpufreq cooling device */
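With the single global notify_device gone, the CPUFREQ_ADJUST handler above walks a list of cooling devices and clamps the policy against every one that covers the affected CPU. The same shape reduced to plain C, with an unsigned long standing in for struct cpumask (all names illustrative):

#include <stdio.h>

struct cooling_dev {
    unsigned long allowed_cpus;   /* bit n set => covers CPU n */
    unsigned int cpufreq_val;     /* clipped frequency, kHz */
};

int main(void)
{
    struct cooling_dev devs[] = {
        { 0x3, 1200000 },   /* CPUs 0-1 */
        { 0xc,  900000 },   /* CPUs 2-3 */
    };
    unsigned int policy_cpu = 0, policy_max = 1800000;
    size_t i;

    for (i = 0; i < sizeof(devs) / sizeof(devs[0]); i++) {
        if (!(devs[i].allowed_cpus & (1UL << policy_cpu)))
            continue;
        if (devs[i].cpufreq_val < policy_max)
            policy_max = devs[i].cpufreq_val;  /* clamp */
    }
    printf("CPU%u max clamped to %u kHz\n", policy_cpu, policy_max);
    return 0;
}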
+3 -6
drivers/thermal/samsung/exynos_thermal_common.c
··· 417 417 418 418 th_zone = sensor_conf->pzone_data; 419 419 420 - if (th_zone->therm_dev) 421 - thermal_zone_device_unregister(th_zone->therm_dev); 420 + thermal_zone_device_unregister(th_zone->therm_dev); 422 421 423 - for (i = 0; i < th_zone->cool_dev_size; i++) { 424 - if (th_zone->cool_dev[i]) 425 - cpufreq_cooling_unregister(th_zone->cool_dev[i]); 426 - } 422 + for (i = 0; i < th_zone->cool_dev_size; ++i) 423 + cpufreq_cooling_unregister(th_zone->cool_dev[i]); 427 424 428 425 dev_info(sensor_conf->dev, 429 426 "Exynos: Kernel Thermal management unregistered\n");
+3
drivers/thermal/st/st_thermal.c
··· 275 275 } 276 276 EXPORT_SYMBOL_GPL(st_thermal_unregister); 277 277 278 + #ifdef CONFIG_PM_SLEEP 278 279 static int st_thermal_suspend(struct device *dev) 279 280 { 280 281 struct platform_device *pdev = to_platform_device(dev); ··· 306 305 307 306 return 0; 308 307 } 308 + #endif 309 + 309 310 SIMPLE_DEV_PM_OPS(st_thermal_pm_ops, st_thermal_suspend, st_thermal_resume); 310 311 EXPORT_SYMBOL_GPL(st_thermal_pm_ops); 311 312
-27
drivers/tty/serial/of_serial.c
··· 240 240 return 0; 241 241 } 242 242 243 - #ifdef CONFIG_PM_SLEEP 244 - static int of_serial_suspend(struct device *dev) 245 - { 246 - struct of_serial_info *info = dev_get_drvdata(dev); 247 - 248 - serial8250_suspend_port(info->line); 249 - if (info->clk) 250 - clk_disable_unprepare(info->clk); 251 - 252 - return 0; 253 - } 254 - 255 - static int of_serial_resume(struct device *dev) 256 - { 257 - struct of_serial_info *info = dev_get_drvdata(dev); 258 - 259 - if (info->clk) 260 - clk_prepare_enable(info->clk); 261 - 262 - serial8250_resume_port(info->line); 263 - 264 - return 0; 265 - } 266 - #endif 267 - static SIMPLE_DEV_PM_OPS(of_serial_pm_ops, of_serial_suspend, of_serial_resume); 268 - 269 243 /* 270 244 * A few common types, add more as needed. 271 245 */ ··· 271 297 .name = "of_serial", 272 298 .owner = THIS_MODULE, 273 299 .of_match_table = of_platform_serial_table, 274 - .pm = &of_serial_pm_ops, 275 300 }, 276 301 .probe = of_platform_serial_probe, 277 302 .remove = of_platform_serial_remove,
+3
drivers/usb/core/quirks.c
··· 44 44 /* Creative SB Audigy 2 NX */ 45 45 { USB_DEVICE(0x041e, 0x3020), .driver_info = USB_QUIRK_RESET_RESUME }, 46 46 47 + /* Microsoft Wireless Laser Mouse 6000 Receiver */ 48 + { USB_DEVICE(0x045e, 0x00e1), .driver_info = USB_QUIRK_RESET_RESUME }, 49 + 47 50 /* Microsoft LifeCam-VX700 v2.0 */ 48 51 { USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME }, 49 52
+4 -4
drivers/usb/dwc3/ep0.c
··· 791 791 792 792 trb = dwc->ep0_trb; 793 793 794 + r = next_request(&ep0->request_list); 795 + if (!r) 796 + return; 797 + 794 798 status = DWC3_TRB_SIZE_TRBSTS(trb->size); 795 799 if (status == DWC3_TRBSTS_SETUP_PENDING) { 796 800 dwc3_trace(trace_dwc3_ep0, "Setup Pending received"); ··· 804 800 805 801 return; 806 802 } 807 - 808 - r = next_request(&ep0->request_list); 809 - if (!r) 810 - return; 811 803 812 804 ur = &r->request; 813 805
+1 -4
drivers/usb/host/xhci-hub.c
··· 22 22 23 23 24 24 #include <linux/slab.h> 25 - #include <linux/device.h> 26 25 #include <asm/unaligned.h> 27 26 28 27 #include "xhci.h" ··· 1148 1149 * including the USB 3.0 roothub, but only if CONFIG_PM_RUNTIME 1149 1150 * is enabled, so also enable remote wake here. 1150 1151 */ 1151 - if (hcd->self.root_hub->do_remote_wakeup 1152 - && device_may_wakeup(hcd->self.controller)) { 1153 - 1152 + if (hcd->self.root_hub->do_remote_wakeup) { 1154 1153 if (t1 & PORT_CONNECT) { 1155 1154 t2 |= PORT_WKOC_E | PORT_WKDISC_E; 1156 1155 t2 &= ~PORT_WKCONN_E;
+1 -1
drivers/usb/host/xhci-pci.c
··· 281 281 if (xhci->quirks & XHCI_COMP_MODE_QUIRK) 282 282 pdev->no_d3cold = true; 283 283 284 - return xhci_suspend(xhci); 284 + return xhci_suspend(xhci, do_wakeup); 285 285 } 286 286 287 287 static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated)
+9 -1
drivers/usb/host/xhci-plat.c
··· 204 204 struct usb_hcd *hcd = dev_get_drvdata(dev); 205 205 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 206 206 207 - return xhci_suspend(xhci); 207 + /* 208 + * xhci_suspend() needs `do_wakeup` to know whether host is allowed 209 + * to do wakeup during suspend. Since xhci_plat_suspend is currently 210 + * only designed for system suspend, device_may_wakeup() is enough 211 + * to determine whether host is allowed to do wakeup. Need to 212 + * reconsider this when xhci_plat_suspend enlarges its scope, e.g., 213 + * when it also applies to runtime suspend. 214 + */ 215 + return xhci_suspend(xhci, device_may_wakeup(dev)); 208 216 } 209 217 210 218 static int xhci_plat_resume(struct device *dev)
+11 -32
drivers/usb/host/xhci-ring.c
··· 1067 1067 false); 1068 1068 xhci_ring_cmd_db(xhci); 1069 1069 } else { 1070 - /* Clear our internal halted state and restart the ring(s) */ 1070 + /* Clear our internal halted state */ 1071 1071 xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_HALTED; 1072 - ring_doorbell_for_active_rings(xhci, slot_id, ep_index); 1073 1072 } 1074 1073 } 1075 1074 ··· 1822 1823 ep->stopped_td = td; 1823 1824 return 0; 1824 1825 } else { 1825 - if (trb_comp_code == COMP_STALL) { 1826 - /* The transfer is completed from the driver's 1827 - * perspective, but we need to issue a set dequeue 1828 - * command for this stalled endpoint to move the dequeue 1829 - * pointer past the TD. We can't do that here because 1830 - * the halt condition must be cleared first. Let the 1831 - * USB class driver clear the stall later. 1832 - */ 1833 - ep->stopped_td = td; 1834 - ep->stopped_stream = ep_ring->stream_id; 1835 - } else if (xhci_requires_manual_halt_cleanup(xhci, 1836 - ep_ctx, trb_comp_code)) { 1837 - /* Other types of errors halt the endpoint, but the 1838 - * class driver doesn't call usb_reset_endpoint() unless 1839 - * the error is -EPIPE. Clear the halted status in the 1840 - * xHCI hardware manually. 1826 + if (trb_comp_code == COMP_STALL || 1827 + xhci_requires_manual_halt_cleanup(xhci, ep_ctx, 1828 + trb_comp_code)) { 1829 + /* Issue a reset endpoint command to clear the host side 1830 + * halt, followed by a set dequeue command to move the 1831 + * dequeue pointer past the TD. 1832 + * The class driver clears the device side halt later. 1841 1833 */ 1842 1834 xhci_cleanup_halted_endpoint(xhci, 1843 1835 slot_id, ep_index, ep_ring->stream_id, ··· 1948 1958 else 1949 1959 td->urb->actual_length = 0; 1950 1960 1951 - xhci_cleanup_halted_endpoint(xhci, 1952 - slot_id, ep_index, 0, td, event_trb); 1953 - return finish_td(xhci, td, event_trb, event, ep, status, true); 1961 + return finish_td(xhci, td, event_trb, event, ep, status, false); 1954 1962 } 1955 1963 /* 1956 1964 * Did we transfer any data, despite the errors that might have ··· 2507 2519 if (ret) { 2508 2520 urb = td->urb; 2509 2521 urb_priv = urb->hcpriv; 2510 - /* Leave the TD around for the reset endpoint function 2511 - * to use(but only if it's not a control endpoint, 2512 - * since we already queued the Set TR dequeue pointer 2513 - * command for stalled control endpoints). 2514 - */ 2515 - if (usb_endpoint_xfer_control(&urb->ep->desc) || 2516 - (trb_comp_code != COMP_STALL && 2517 - trb_comp_code != COMP_BABBLE)) 2518 - xhci_urb_free_priv(xhci, urb_priv); 2519 - else 2520 - kfree(urb_priv); 2522 + 2523 + xhci_urb_free_priv(xhci, urb_priv); 2521 2524 2522 2525 usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb); 2523 2526 if ((urb->actual_length != urb->transfer_buffer_length &&
+56 -51
drivers/usb/host/xhci.c
··· 35 35 #define DRIVER_AUTHOR "Sarah Sharp" 36 36 #define DRIVER_DESC "'eXtensible' Host Controller (xHC) Driver" 37 37 38 + #define PORT_WAKE_BITS (PORT_WKOC_E | PORT_WKDISC_E | PORT_WKCONN_E) 39 + 40 38 /* Some 0.95 hardware can't handle the chain bit on a Link TRB being cleared */ 41 39 static int link_quirk; 42 40 module_param(link_quirk, int, S_IRUGO | S_IWUSR); ··· 853 851 xhci_set_cmd_ring_deq(xhci); 854 852 } 855 853 854 + static void xhci_disable_port_wake_on_bits(struct xhci_hcd *xhci) 855 + { 856 + int port_index; 857 + __le32 __iomem **port_array; 858 + unsigned long flags; 859 + u32 t1, t2; 860 + 861 + spin_lock_irqsave(&xhci->lock, flags); 862 + 863 + /* disable usb3 ports Wake bits */ 864 + port_index = xhci->num_usb3_ports; 865 + port_array = xhci->usb3_ports; 866 + while (port_index--) { 867 + t1 = readl(port_array[port_index]); 868 + t1 = xhci_port_state_to_neutral(t1); 869 + t2 = t1 & ~PORT_WAKE_BITS; 870 + if (t1 != t2) 871 + writel(t2, port_array[port_index]); 872 + } 873 + 874 + /* disable usb2 ports Wake bits */ 875 + port_index = xhci->num_usb2_ports; 876 + port_array = xhci->usb2_ports; 877 + while (port_index--) { 878 + t1 = readl(port_array[port_index]); 879 + t1 = xhci_port_state_to_neutral(t1); 880 + t2 = t1 & ~PORT_WAKE_BITS; 881 + if (t1 != t2) 882 + writel(t2, port_array[port_index]); 883 + } 884 + 885 + spin_unlock_irqrestore(&xhci->lock, flags); 886 + } 887 + 856 888 /* 857 889 * Stop HC (not bus-specific) 858 890 * 859 891 * This is called when the machine transitions into S3/S4 mode. 860 892 * 861 893 */ 862 - int xhci_suspend(struct xhci_hcd *xhci) 894 + int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup) 863 895 { 864 896 int rc = 0; 865 897 unsigned int delay = XHCI_MAX_HALT_USEC; ··· 903 867 if (hcd->state != HC_STATE_SUSPENDED || 904 868 xhci->shared_hcd->state != HC_STATE_SUSPENDED) 905 869 return -EINVAL; 870 + 871 + /* Clear root port wake on bits if wakeup not allowed. */ 872 + if (!do_wakeup) 873 + xhci_disable_port_wake_on_bits(xhci); 906 874 907 875 /* Don't poll the roothubs on bus suspend. */ 908 876 xhci_dbg(xhci, "%s: stopping port polling.\n", __func__); ··· 2952 2912 } 2953 2913 } 2954 2914 2955 - /* Deal with stalled endpoints. The core should have sent the control message 2956 - * to clear the halt condition. However, we need to make the xHCI hardware 2957 - * reset its sequence number, since a device will expect a sequence number of 2958 - * zero after the halt condition is cleared. 2915 + /* Called when clearing a halted device. The core should have sent the control 2916 + * message to clear the device halt condition. The host side of the halt should 2917 + * already be cleared with a reset endpoint command issued when the STALL tx 2918 + * event was received.
2919 + * 2960 2921 * Context: in_interrupt 2961 2922 */ 2922 + 2961 2923 void xhci_endpoint_reset(struct usb_hcd *hcd, 2962 2924 struct usb_host_endpoint *ep) 2963 2925 { 2964 2926 struct xhci_hcd *xhci; 2965 - struct usb_device *udev; 2966 - unsigned int ep_index; 2967 - unsigned long flags; 2968 - int ret; 2969 - struct xhci_virt_ep *virt_ep; 2970 - struct xhci_command *command; 2971 2927 2972 2928 xhci = hcd_to_xhci(hcd); 2973 - udev = (struct usb_device *) ep->hcpriv; 2974 - /* Called with a root hub endpoint (or an endpoint that wasn't added 2975 - * with xhci_add_endpoint() 2976 - */ 2977 - if (!ep->hcpriv) 2978 - return; 2979 - ep_index = xhci_get_endpoint_index(&ep->desc); 2980 - virt_ep = &xhci->devs[udev->slot_id]->eps[ep_index]; 2981 - if (!virt_ep->stopped_td) { 2982 - xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep, 2983 - "Endpoint 0x%x not halted, refusing to reset.", 2984 - ep->desc.bEndpointAddress); 2985 - return; 2986 - } 2987 - if (usb_endpoint_xfer_control(&ep->desc)) { 2988 - xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep, 2989 - "Control endpoint stall already handled."); 2990 - return; 2991 - } 2992 2929 2993 - command = xhci_alloc_command(xhci, false, false, GFP_ATOMIC); 2994 - if (!command) 2995 - return; 2996 - 2997 - xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep, 2998 - "Queueing reset endpoint command"); 2999 - spin_lock_irqsave(&xhci->lock, flags); 3000 - ret = xhci_queue_reset_ep(xhci, command, udev->slot_id, ep_index); 3001 2930 /* 3002 - * Can't change the ring dequeue pointer until it's transitioned to the 3003 - * stopped state, which is only upon a successful reset endpoint 3004 - * command. Better hope that last command worked! 2931 + * We might need to implement the config ep cmd in xhci 4.8.1 note: 2932 + * The Reset Endpoint Command may only be issued to endpoints in the 2933 + * Halted state. If software wishes to reset the Data Toggle or Sequence 2934 + * Number of an endpoint that isn't in the Halted state, then software 2935 + * may issue a Configure Endpoint Command with the Drop and Add bits set 2936 + * for the target endpoint that is in the Stopped state. 3005 2937 */ 3006 - if (!ret) { 3007 - xhci_cleanup_stalled_ring(xhci, udev, ep_index); 3008 - kfree(virt_ep->stopped_td); 3009 - xhci_ring_cmd_db(xhci); 3010 - } 3011 - virt_ep->stopped_td = NULL; 3012 - virt_ep->stopped_stream = 0; 3013 - spin_unlock_irqrestore(&xhci->lock, flags); 3014 2938 3015 - if (ret) 3016 - xhci_warn(xhci, "FIXME allocate a new ring segment\n"); 2939 + /* For now just print debug to follow the situation */ 2940 + xhci_dbg(xhci, "Endpoint 0x%x ep reset callback called\n", 2941 + ep->desc.bEndpointAddress); 3017 2942 } 3018 2943 3019 2944 static int xhci_check_streams_endpoint(struct xhci_hcd *xhci,
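xhci_disable_port_wake_on_bits() above is a read-modify-write pass over each port status register, writing back only when the mask actually changes something. The masking in isolation (bit positions lifted from the PORT_WK*_E definitions, but treat this as a sketch, not the hardware contract):

#include <stdint.h>
#include <stdio.h>

#define PORT_WKOC_E    (1u << 27)
#define PORT_WKDISC_E  (1u << 26)
#define PORT_WKCONN_E  (1u << 25)
#define PORT_WAKE_BITS (PORT_WKOC_E | PORT_WKDISC_E | PORT_WKCONN_E)

int main(void)
{
    uint32_t t1 = 0x0e000203;    /* fake PORTSC snapshot */
    uint32_t t2 = t1 & ~PORT_WAKE_BITS;

    /* mirror the driver's "only write when changed" pattern */
    if (t1 != t2)
        printf("write 0x%08x (was 0x%08x)\n", t2, t1);
    return 0;
}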
+1 -1
drivers/usb/host/xhci.h
··· 1746 1746 void xhci_init_driver(struct hc_driver *drv, int (*setup_fn)(struct usb_hcd *)); 1747 1747 1748 1748 #ifdef CONFIG_PM 1749 - int xhci_suspend(struct xhci_hcd *xhci); 1749 + int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup); 1750 1750 int xhci_resume(struct xhci_hcd *xhci, bool hibernated); 1751 1751 #else 1752 1752 #define xhci_suspend NULL
+1
drivers/usb/serial/cp210x.c
··· 120 120 { USB_DEVICE(0x10C4, 0x85F8) }, /* Virtenio Preon32 */ 121 121 { USB_DEVICE(0x10C4, 0x8664) }, /* AC-Services CAN-IF */ 122 122 { USB_DEVICE(0x10C4, 0x8665) }, /* AC-Services OBD-IF */ 123 + { USB_DEVICE(0x10C4, 0x8875) }, /* CEL MeshConnect USB Stick */ 123 124 { USB_DEVICE(0x10C4, 0x88A4) }, /* MMB Networks ZigBee USB Device */ 124 125 { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */ 125 126 { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */
+33
drivers/usb/serial/ftdi_sio.c
··· 470 470 { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_01FD_PID) }, 471 471 { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_01FE_PID) }, 472 472 { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_01FF_PID) }, 473 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_4701_PID) }, 474 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9300_PID) }, 475 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9301_PID) }, 476 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9302_PID) }, 477 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9303_PID) }, 478 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9304_PID) }, 479 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9305_PID) }, 480 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9306_PID) }, 481 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9307_PID) }, 482 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9308_PID) }, 483 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9309_PID) }, 484 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_930A_PID) }, 485 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_930B_PID) }, 486 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_930C_PID) }, 487 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_930D_PID) }, 488 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_930E_PID) }, 489 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_930F_PID) }, 490 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9310_PID) }, 491 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9311_PID) }, 492 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9312_PID) }, 493 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9313_PID) }, 494 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9314_PID) }, 495 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9315_PID) }, 496 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9316_PID) }, 497 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9317_PID) }, 498 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9318_PID) }, 499 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_9319_PID) }, 500 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_931A_PID) }, 501 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_931B_PID) }, 502 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_931C_PID) }, 503 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_931D_PID) }, 504 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_931E_PID) }, 505 + { USB_DEVICE(MTXORB_VID, MTXORB_FTDI_RANGE_931F_PID) }, 473 506 { USB_DEVICE(FTDI_VID, FTDI_PERLE_ULTRAPORT_PID) }, 474 507 { USB_DEVICE(FTDI_VID, FTDI_PIEGROUP_PID) }, 475 508 { USB_DEVICE(FTDI_VID, FTDI_TNC_X_PID) },
+35 -4
drivers/usb/serial/ftdi_sio_ids.h
··· 926 926 #define BAYER_CONTOUR_CABLE_PID 0x6001 927 927 928 928 /* 929 - * The following are the values for the Matrix Orbital FTDI Range 930 - * Anything in this range will use an FT232RL. 929 + * Matrix Orbital Intelligent USB displays. 930 + * http://www.matrixorbital.com 931 931 */ 932 932 #define MTXORB_VID 0x1B3D 933 933 #define MTXORB_FTDI_RANGE_0100_PID 0x0100 ··· 1186 1186 #define MTXORB_FTDI_RANGE_01FD_PID 0x01FD 1187 1187 #define MTXORB_FTDI_RANGE_01FE_PID 0x01FE 1188 1188 #define MTXORB_FTDI_RANGE_01FF_PID 0x01FF 1189 - 1190 - 1189 + #define MTXORB_FTDI_RANGE_4701_PID 0x4701 1190 + #define MTXORB_FTDI_RANGE_9300_PID 0x9300 1191 + #define MTXORB_FTDI_RANGE_9301_PID 0x9301 1192 + #define MTXORB_FTDI_RANGE_9302_PID 0x9302 1193 + #define MTXORB_FTDI_RANGE_9303_PID 0x9303 1194 + #define MTXORB_FTDI_RANGE_9304_PID 0x9304 1195 + #define MTXORB_FTDI_RANGE_9305_PID 0x9305 1196 + #define MTXORB_FTDI_RANGE_9306_PID 0x9306 1197 + #define MTXORB_FTDI_RANGE_9307_PID 0x9307 1198 + #define MTXORB_FTDI_RANGE_9308_PID 0x9308 1199 + #define MTXORB_FTDI_RANGE_9309_PID 0x9309 1200 + #define MTXORB_FTDI_RANGE_930A_PID 0x930A 1201 + #define MTXORB_FTDI_RANGE_930B_PID 0x930B 1202 + #define MTXORB_FTDI_RANGE_930C_PID 0x930C 1203 + #define MTXORB_FTDI_RANGE_930D_PID 0x930D 1204 + #define MTXORB_FTDI_RANGE_930E_PID 0x930E 1205 + #define MTXORB_FTDI_RANGE_930F_PID 0x930F 1206 + #define MTXORB_FTDI_RANGE_9310_PID 0x9310 1207 + #define MTXORB_FTDI_RANGE_9311_PID 0x9311 1208 + #define MTXORB_FTDI_RANGE_9312_PID 0x9312 1209 + #define MTXORB_FTDI_RANGE_9313_PID 0x9313 1210 + #define MTXORB_FTDI_RANGE_9314_PID 0x9314 1211 + #define MTXORB_FTDI_RANGE_9315_PID 0x9315 1212 + #define MTXORB_FTDI_RANGE_9316_PID 0x9316 1213 + #define MTXORB_FTDI_RANGE_9317_PID 0x9317 1214 + #define MTXORB_FTDI_RANGE_9318_PID 0x9318 1215 + #define MTXORB_FTDI_RANGE_9319_PID 0x9319 1216 + #define MTXORB_FTDI_RANGE_931A_PID 0x931A 1217 + #define MTXORB_FTDI_RANGE_931B_PID 0x931B 1218 + #define MTXORB_FTDI_RANGE_931C_PID 0x931C 1219 + #define MTXORB_FTDI_RANGE_931D_PID 0x931D 1220 + #define MTXORB_FTDI_RANGE_931E_PID 0x931E 1221 + #define MTXORB_FTDI_RANGE_931F_PID 0x931F 1191 1222 1192 1223 /* 1193 1224 * The Mobility Lab (TML)
+59 -38
drivers/usb/serial/keyspan.c
··· 311 311 if ((data[0] & 0x80) == 0) { 312 312 /* no errors on individual bytes, only 313 313 possible overrun err */ 314 - if (data[0] & RXERROR_OVERRUN) 315 - err = TTY_OVERRUN; 316 - else 317 - err = 0; 314 + if (data[0] & RXERROR_OVERRUN) { 315 + tty_insert_flip_char(&port->port, 0, 316 + TTY_OVERRUN); 317 + } 318 318 for (i = 1; i < urb->actual_length ; ++i) 319 - tty_insert_flip_char(&port->port, data[i], err); 319 + tty_insert_flip_char(&port->port, data[i], 320 + TTY_NORMAL); 320 321 } else { 321 322 /* some bytes had errors, every byte has status */ 322 323 dev_dbg(&port->dev, "%s - RX error!!!!\n", __func__); 323 324 for (i = 0; i + 1 < urb->actual_length; i += 2) { 324 - int stat = data[i], flag = 0; 325 - if (stat & RXERROR_OVERRUN) 326 - flag |= TTY_OVERRUN; 327 - if (stat & RXERROR_FRAMING) 328 - flag |= TTY_FRAME; 329 - if (stat & RXERROR_PARITY) 330 - flag |= TTY_PARITY; 325 + int stat = data[i]; 326 + int flag = TTY_NORMAL; 327 + 328 + if (stat & RXERROR_OVERRUN) { 329 + tty_insert_flip_char(&port->port, 0, 330 + TTY_OVERRUN); 331 + } 331 332 /* XXX should handle break (0x10) */ 333 + if (stat & RXERROR_PARITY) 334 + flag = TTY_PARITY; 335 + else if (stat & RXERROR_FRAMING) 336 + flag = TTY_FRAME; 337 + 332 338 tty_insert_flip_char(&port->port, data[i+1], 333 339 flag); 334 340 } ··· 655 649 } else { 656 650 /* some bytes had errors, every byte has status */ 657 651 for (i = 0; i + 1 < urb->actual_length; i += 2) { 658 - int stat = data[i], flag = 0; 659 - if (stat & RXERROR_OVERRUN) 660 - flag |= TTY_OVERRUN; 661 - if (stat & RXERROR_FRAMING) 662 - flag |= TTY_FRAME; 663 - if (stat & RXERROR_PARITY) 664 - flag |= TTY_PARITY; 652 + int stat = data[i]; 653 + int flag = TTY_NORMAL; 654 + 655 + if (stat & RXERROR_OVERRUN) { 656 + tty_insert_flip_char(&port->port, 0, 657 + TTY_OVERRUN); 658 + } 665 659 /* XXX should handle break (0x10) */ 660 + if (stat & RXERROR_PARITY) 661 + flag = TTY_PARITY; 662 + else if (stat & RXERROR_FRAMING) 663 + flag = TTY_FRAME; 664 + 666 665 tty_insert_flip_char(&port->port, data[i+1], 667 666 flag); 668 667 } ··· 724 713 */ 725 714 for (x = 0; x + 1 < len && 726 715 i + 1 < urb->actual_length; x += 2) { 727 - int stat = data[i], flag = 0; 716 + int stat = data[i]; 717 + int flag = TTY_NORMAL; 728 718 729 - if (stat & RXERROR_OVERRUN) 730 - flag |= TTY_OVERRUN; 731 - if (stat & RXERROR_FRAMING) 732 - flag |= TTY_FRAME; 733 - if (stat & RXERROR_PARITY) 734 - flag |= TTY_PARITY; 719 + if (stat & RXERROR_OVERRUN) { 720 + tty_insert_flip_char(&port->port, 0, 721 + TTY_OVERRUN); 722 + } 723 + /* XXX should handle break (0x10) */ 724 + if (stat & RXERROR_PARITY) 725 + flag = TTY_PARITY; 726 + else if (stat & RXERROR_FRAMING) 727 + flag = TTY_FRAME; 728 + 736 729 tty_insert_flip_char(&port->port, data[i+1], 737 730 flag); 738 731 i += 2; ··· 788 773 if ((data[0] & 0x80) == 0) { 789 774 /* no errors on individual bytes, only 790 775 possible overrun err*/ 791 - if (data[0] & RXERROR_OVERRUN) 792 - err = TTY_OVERRUN; 793 - else 794 - err = 0; 776 + if (data[0] & RXERROR_OVERRUN) { 777 + tty_insert_flip_char(&port->port, 0, 778 + TTY_OVERRUN); 779 + } 795 780 for (i = 1; i < urb->actual_length ; ++i) 796 781 tty_insert_flip_char(&port->port, 797 782 data[i], TTY_NORMAL); 798 783 } else { 799 784 /* some bytes had errors, every byte has status */ 800 785 dev_dbg(&port->dev, "%s - RX error!!!!\n", __func__); 801 786 for (i = 0; i + 1 < urb->actual_length; i += 2) { 802 - int stat = data[i], flag = 0; 803 - if (stat & RXERROR_OVERRUN)
804 - flag |= TTY_OVERRUN; 805 - if (stat & RXERROR_FRAMING) 806 - flag |= TTY_FRAME; 807 - if (stat & RXERROR_PARITY) 808 - flag |= TTY_PARITY; 787 + int stat = data[i]; 788 + int flag = TTY_NORMAL; 789 + 790 + if (stat & RXERROR_OVERRUN) { 791 + tty_insert_flip_char( 792 + &port->port, 0, 793 + TTY_OVERRUN); 794 + } 809 795 /* XXX should handle break (0x10) */ 796 + if (stat & RXERROR_PARITY) 797 + flag = TTY_PARITY; 798 + else if (stat & RXERROR_FRAMING) 799 + flag = TTY_FRAME; 800 + 810 801 tty_insert_flip_char(&port->port, 811 802 data[i+1], flag); 812 803 }
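One detail makes the keyspan rework above less mechanical than it looks: TTY_NORMAL, TTY_OVERRUN, TTY_FRAME and TTY_PARITY are enumerated values, not bit flags, so the old `flag |= ...` chains produced garbage flags. Overrun is a FIFO condition rather than a property of the byte, so it is reported as its own NUL character. Condensed into a standalone demo (the RXERROR_* values mirror the driver's header but are illustrative here):

#include <stdio.h>

#define RXERROR_OVERRUN 0x02
#define RXERROR_PARITY  0x04
#define RXERROR_FRAMING 0x08

enum { TTY_NORMAL, TTY_OVERRUN, TTY_FRAME, TTY_PARITY };
static const char *names[] = { "NORMAL", "OVERRUN", "FRAME", "PARITY" };

static void push_char(int stat, unsigned char c)
{
    int flag = TTY_NORMAL;

    if (stat & RXERROR_OVERRUN)   /* FIFO event: separate NUL byte */
        printf("insert 0x00 flag=%s\n", names[TTY_OVERRUN]);
    if (stat & RXERROR_PARITY)    /* parity takes precedence */
        flag = TTY_PARITY;
    else if (stat & RXERROR_FRAMING)
        flag = TTY_FRAME;

    printf("insert 0x%02x flag=%s\n", c, names[flag]);
}

int main(void)
{
    push_char(RXERROR_OVERRUN | RXERROR_FRAMING, 'A');
    return 0;
}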
+3 -8
drivers/usb/serial/ssu100.c
··· 490 490 if (*tty_flag == TTY_NORMAL) 491 491 *tty_flag = TTY_FRAME; 492 492 } 493 - if (lsr & UART_LSR_OE){ 493 + if (lsr & UART_LSR_OE) { 494 494 port->icount.overrun++; 495 - if (*tty_flag == TTY_NORMAL) 496 - *tty_flag = TTY_OVERRUN; 495 + tty_insert_flip_char(&port->port, 0, TTY_OVERRUN); 497 496 } 498 497 } 499 498 ··· 510 511 if ((len >= 4) && 511 512 (packet[0] == 0x1b) && (packet[1] == 0x1b) && 512 513 ((packet[2] == 0x00) || (packet[2] == 0x01))) { 513 - if (packet[2] == 0x00) { 514 + if (packet[2] == 0x00) 514 515 ssu100_update_lsr(port, packet[3], &flag); 515 - if (flag == TTY_OVERRUN) 516 - tty_insert_flip_char(&port->port, 0, 517 - TTY_OVERRUN); 518 - } 519 516 if (packet[2] == 0x01) 520 517 ssu100_update_msr(port, packet[3]); 521 518
+7
drivers/usb/storage/unusual_uas.h
··· 103 103 "VL711", 104 104 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 105 105 US_FL_NO_ATA_1X), 106 + 107 + /* Reported-by: Hans de Goede <hdegoede@redhat.com> */ 108 + UNUSUAL_DEV(0x4971, 0x1012, 0x0000, 0x9999, 109 + "Hitachi", 110 + "External HDD", 111 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 112 + US_FL_IGNORE_UAS),
+24
drivers/vhost/scsi.c
··· 1312 1312 vhost_scsi_set_endpoint(struct vhost_scsi *vs, 1313 1313 struct vhost_scsi_target *t) 1314 1314 { 1315 + struct se_portal_group *se_tpg; 1315 1316 struct tcm_vhost_tport *tv_tport; 1316 1317 struct tcm_vhost_tpg *tpg; 1317 1318 struct tcm_vhost_tpg **vs_tpg; ··· 1360 1359 ret = -EEXIST; 1361 1360 goto out; 1362 1361 } 1362 + /* 1363 + * In order to ensure individual vhost-scsi configfs 1364 + * groups cannot be removed while in use by vhost ioctl, 1365 + * go ahead and take an explicit se_tpg->tpg_group.cg_item 1366 + * dependency now. 1367 + */ 1368 + se_tpg = &tpg->se_tpg; 1369 + ret = configfs_depend_item(se_tpg->se_tpg_tfo->tf_subsys, 1370 + &se_tpg->tpg_group.cg_item); 1371 + if (ret) { 1372 + pr_warn("configfs_depend_item() failed: %d\n", ret); 1373 + kfree(vs_tpg); 1374 + mutex_unlock(&tpg->tv_tpg_mutex); 1375 + goto out; 1376 + } 1363 1377 tpg->tv_tpg_vhost_count++; 1364 1378 tpg->vhost_scsi = vs; 1365 1379 vs_tpg[tpg->tport_tpgt] = tpg; ··· 1417 1401 vhost_scsi_clear_endpoint(struct vhost_scsi *vs, 1418 1402 struct vhost_scsi_target *t) 1419 1403 { 1404 + struct se_portal_group *se_tpg; 1420 1405 struct tcm_vhost_tport *tv_tport; 1421 1406 struct tcm_vhost_tpg *tpg; 1422 1407 struct vhost_virtqueue *vq; ··· 1466 1449 vs->vs_tpg[target] = NULL; 1467 1450 match = true; 1468 1451 mutex_unlock(&tpg->tv_tpg_mutex); 1452 + /* 1453 + * Release se_tpg->tpg_group.cg_item configfs dependency now 1454 + * to allow vhost-scsi WWPN se_tpg->tpg_group shutdown to occur. 1455 + */ 1456 + se_tpg = &tpg->se_tpg; 1457 + configfs_undepend_item(se_tpg->se_tpg_tfo->tf_subsys, 1458 + &se_tpg->tpg_group.cg_item); 1469 1459 } 1470 1460 if (match) { 1471 1461 for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
+1 -1
fs/Makefile
··· 104 104 obj-$(CONFIG_AUTOFS4_FS) += autofs4/ 105 105 obj-$(CONFIG_ADFS_FS) += adfs/ 106 106 obj-$(CONFIG_FUSE_FS) += fuse/ 107 - obj-$(CONFIG_OVERLAYFS_FS) += overlayfs/ 107 + obj-$(CONFIG_OVERLAY_FS) += overlayfs/ 108 108 obj-$(CONFIG_UDF_FS) += udf/ 109 109 obj-$(CONFIG_SUN_OPENPROMFS) += openpromfs/ 110 110 obj-$(CONFIG_OMFS_FS) += omfs/
+14 -7
fs/aio.c
··· 165 165 static const struct file_operations aio_ring_fops; 166 166 static const struct address_space_operations aio_ctx_aops; 167 167 168 + /* Backing dev info for aio fs. 169 + * -no dirty page accounting or writeback happens 170 + */ 171 + static struct backing_dev_info aio_fs_backing_dev_info = { 172 + .name = "aiofs", 173 + .state = 0, 174 + .capabilities = BDI_CAP_NO_ACCT_AND_WRITEBACK | BDI_CAP_MAP_COPY, 175 + }; 176 + 168 177 static struct file *aio_private_file(struct kioctx *ctx, loff_t nr_pages) 169 178 { 170 179 struct qstr this = QSTR_INIT("[aio]", 5); ··· 185 176 186 177 inode->i_mapping->a_ops = &aio_ctx_aops; 187 178 inode->i_mapping->private_data = ctx; 179 + inode->i_mapping->backing_dev_info = &aio_fs_backing_dev_info; 188 180 inode->i_size = PAGE_SIZE * nr_pages; 189 181 190 182 path.dentry = d_alloc_pseudo(aio_mnt->mnt_sb, &this); ··· 229 219 aio_mnt = kern_mount(&aio_fs); 230 220 if (IS_ERR(aio_mnt)) 231 221 panic("Failed to create aio fs mount."); 222 + 223 + if (bdi_init(&aio_fs_backing_dev_info)) 224 + panic("Failed to init aio fs backing dev info."); 232 225 233 226 kiocb_cachep = KMEM_CACHE(kiocb, SLAB_HWCACHE_ALIGN|SLAB_PANIC); 234 227 kioctx_cachep = KMEM_CACHE(kioctx,SLAB_HWCACHE_ALIGN|SLAB_PANIC); ··· 293 280 static const struct file_operations aio_ring_fops = { 294 281 .mmap = aio_ring_mmap, 295 282 }; 296 - 297 - static int aio_set_page_dirty(struct page *page) 298 - { 299 - return 0; 300 - } 301 283 302 284 #if IS_ENABLED(CONFIG_MIGRATION) 303 285 static int aio_migratepage(struct address_space *mapping, struct page *new, ··· 365 357 #endif 366 358 367 359 static const struct address_space_operations aio_ctx_aops = { 368 - .set_page_dirty = aio_set_page_dirty, 360 + .set_page_dirty = __set_page_dirty_no_writeback, 369 361 #if IS_ENABLED(CONFIG_MIGRATION) 370 362 .migratepage = aio_migratepage, 371 363 #endif ··· 420 412 pr_debug("pid(%d) page[%d]->count=%d\n", 421 413 current->pid, i, page_count(page)); 422 414 SetPageUptodate(page); 423 - SetPageDirty(page); 424 415 unlock_page(page); 425 416 426 417 ctx->ring_pages[i] = page;
+31 -2
fs/btrfs/compression.c
··· 1011 1011 bytes = min(bytes, working_bytes); 1012 1012 kaddr = kmap_atomic(page_out); 1013 1013 memcpy(kaddr + *pg_offset, buf + buf_offset, bytes); 1014 - if (*pg_index == (vcnt - 1) && *pg_offset == 0) 1015 - memset(kaddr + bytes, 0, PAGE_CACHE_SIZE - bytes); 1016 1014 kunmap_atomic(kaddr); 1017 1015 flush_dcache_page(page_out); 1018 1016 ··· 1051 1053 } 1052 1054 1053 1055 return 1; 1056 + } 1057 + 1058 + /* 1059 + * When uncompressing data, we need to make sure and zero any parts of 1060 + * the biovec that were not filled in by the decompression code. pg_index 1061 + * and pg_offset indicate the last page and the last offset of that page 1062 + * that have been filled in. This will zero everything remaining in the 1063 + * biovec. 1064 + */ 1065 + void btrfs_clear_biovec_end(struct bio_vec *bvec, int vcnt, 1066 + unsigned long pg_index, 1067 + unsigned long pg_offset) 1068 + { 1069 + while (pg_index < vcnt) { 1070 + struct page *page = bvec[pg_index].bv_page; 1071 + unsigned long off = bvec[pg_index].bv_offset; 1072 + unsigned long len = bvec[pg_index].bv_len; 1073 + 1074 + if (pg_offset < off) 1075 + pg_offset = off; 1076 + if (pg_offset < off + len) { 1077 + unsigned long bytes = off + len - pg_offset; 1078 + char *kaddr; 1079 + 1080 + kaddr = kmap_atomic(page); 1081 + memset(kaddr + pg_offset, 0, bytes); 1082 + kunmap_atomic(kaddr); 1083 + } 1084 + pg_index++; 1085 + pg_offset = 0; 1086 + } 1054 1087 }
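btrfs_clear_biovec_end() zeroes every byte of the biovec past the point where the decompressor stopped filling. A simplified userspace analog over flat page-sized buffers (it drops the bv_offset/bv_len bookkeeping and kmap that the real helper needs):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

static void clear_tail(char pages[][PAGE_SIZE], int nr,
                       int pg_index, size_t pg_offset)
{
    for (; pg_index < nr; pg_index++) {
        memset(pages[pg_index] + pg_offset, 0, PAGE_SIZE - pg_offset);
        pg_offset = 0;    /* later pages are zeroed in full */
    }
}

int main(void)
{
    static char pages[3][PAGE_SIZE];

    memset(pages, 0xaa, sizeof(pages));
    clear_tail(pages, 3, 1, 100);    /* filled up to page 1, byte 100 */
    printf("page1[99]=%#x page1[100]=%#x page2[0]=%#x\n",
           pages[1][99] & 0xff, pages[1][100] & 0xff,
           pages[2][0] & 0xff);
    return 0;
}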
+3 -1
fs/btrfs/compression.h
··· 45 45 unsigned long nr_pages); 46 46 int btrfs_submit_compressed_read(struct inode *inode, struct bio *bio, 47 47 int mirror_num, unsigned long bio_flags); 48 - 48 + void btrfs_clear_biovec_end(struct bio_vec *bvec, int vcnt, 49 + unsigned long pg_index, 50 + unsigned long pg_offset); 49 51 struct btrfs_compress_op { 50 52 struct list_head *(*alloc_workspace)(void); 51 53
+2 -12
fs/btrfs/ctree.c
··· 80 80 { 81 81 int i; 82 82 83 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 84 - /* lockdep really cares that we take all of these spinlocks 85 - * in the right order. If any of the locks in the path are not 86 - * currently blocking, it is going to complain. So, make really 87 - * really sure by forcing the path to blocking before we clear 88 - * the path blocking. 89 - */ 90 83 if (held) { 91 84 btrfs_set_lock_blocking_rw(held, held_rw); 92 85 if (held_rw == BTRFS_WRITE_LOCK) ··· 88 95 held_rw = BTRFS_READ_LOCK_BLOCKING; 89 96 } 90 97 btrfs_set_path_blocking(p); 91 - #endif 92 98 93 99 for (i = BTRFS_MAX_LEVEL - 1; i >= 0; i--) { 94 100 if (p->nodes[i] && p->locks[i]) { ··· 99 107 } 100 108 } 101 109 102 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 103 110 if (held) 104 111 btrfs_clear_lock_blocking_rw(held, held_rw); 105 - #endif 106 112 } 107 113 108 114 /* this also releases the path */ ··· 2883 2893 } 2884 2894 p->locks[level] = BTRFS_WRITE_LOCK; 2885 2895 } else { 2886 - err = btrfs_try_tree_read_lock(b); 2896 + err = btrfs_tree_read_lock_atomic(b); 2887 2897 if (!err) { 2888 2898 btrfs_set_path_blocking(p); 2889 2899 btrfs_tree_read_lock(b); ··· 3015 3025 } 3016 3026 3017 3027 level = btrfs_header_level(b); 3018 - err = btrfs_try_tree_read_lock(b); 3028 + err = btrfs_tree_read_lock_atomic(b); 3019 3029 if (!err) { 3020 3030 btrfs_set_path_blocking(p); 3021 3031 btrfs_tree_read_lock(b);
+21 -3
fs/btrfs/locking.c
··· 128 128 } 129 129 130 130 /* 131 + * take a spinning read lock. 132 + * returns 1 if we get the read lock and 0 if we don't 133 + * this won't wait for blocking writers 134 + */ 135 + int btrfs_tree_read_lock_atomic(struct extent_buffer *eb) 136 + { 137 + if (atomic_read(&eb->blocking_writers)) 138 + return 0; 139 + 140 + read_lock(&eb->lock); 141 + if (atomic_read(&eb->blocking_writers)) { 142 + read_unlock(&eb->lock); 143 + return 0; 144 + } 145 + atomic_inc(&eb->read_locks); 146 + atomic_inc(&eb->spinning_readers); 147 + return 1; 148 + } 149 + 150 + /* 131 151 * returns 1 if we get the read lock and 0 if we don't 132 152 * this won't wait for blocking writers 133 153 */ ··· 178 158 atomic_read(&eb->blocking_readers)) 179 159 return 0; 180 160 181 - if (!write_trylock(&eb->lock)) 182 - return 0; 183 - 161 + write_lock(&eb->lock); 184 162 if (atomic_read(&eb->blocking_writers) || 185 163 atomic_read(&eb->blocking_readers)) { 186 164 write_unlock(&eb->lock);
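The new btrfs_tree_read_lock_atomic() takes the spinning read lock as long as no writer has gone blocking, and re-checks the writer count after acquiring the lock to close the race. A rough pthread model of that check/acquire/re-check shape, built with -pthread (the real extent_buffer lock tracks spinning and blocking readers and writers separately; this is only the skeleton):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct eb_lock {
    pthread_rwlock_t lock;
    atomic_int blocking_writers;
};

static int tree_read_lock_atomic(struct eb_lock *eb)
{
    if (atomic_load(&eb->blocking_writers))
        return 0;
    pthread_rwlock_rdlock(&eb->lock);
    if (atomic_load(&eb->blocking_writers)) {
        pthread_rwlock_unlock(&eb->lock);    /* lost the race */
        return 0;
    }
    return 1;
}

int main(void)
{
    struct eb_lock eb = { PTHREAD_RWLOCK_INITIALIZER, 0 };

    if (tree_read_lock_atomic(&eb)) {
        printf("read lock taken without waiting on a blocking writer\n");
        pthread_rwlock_unlock(&eb.lock);
    }
    return 0;
}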
+2
fs/btrfs/locking.h
··· 35 35 void btrfs_assert_tree_locked(struct extent_buffer *eb); 36 36 int btrfs_try_tree_read_lock(struct extent_buffer *eb); 37 37 int btrfs_try_tree_write_lock(struct extent_buffer *eb); 38 + int btrfs_tree_read_lock_atomic(struct extent_buffer *eb); 39 + 38 40 39 41 static inline void btrfs_tree_unlock_rw(struct extent_buffer *eb, int rw) 40 42 {
+15
fs/btrfs/lzo.c
··· 373 373 } 374 374 done: 375 375 kunmap(pages_in[page_in_index]); 376 + if (!ret) 377 + btrfs_clear_biovec_end(bvec, vcnt, page_out_index, pg_offset); 376 378 return ret; 377 379 } ··· 412 410 goto out; 413 411 } 414 412 415 + /* 416 + * the caller is already checking against PAGE_SIZE, but let's 417 + * move this check closer to the memcpy/memset 418 + */ 419 + destlen = min_t(unsigned long, destlen, PAGE_SIZE); 415 418 bytes = min_t(unsigned long, destlen, out_len - start_byte); 416 419 417 420 kaddr = kmap_atomic(dest_page); 418 421 memcpy(kaddr, workspace->buf + start_byte, bytes); 422 + 423 + /* 424 + * btrfs_getblock is doing a zero on the tail of the page too, 425 + * but this will cover anything missing from the decompressed 426 + * data. 427 + */ 428 + if (bytes < destlen) 429 + memset(kaddr+bytes, 0, destlen-bytes); 419 430 kunmap_atomic(kaddr); 420 431 out: 421 432 return ret;
+18 -2
fs/btrfs/zlib.c
··· 299 299 zlib_inflateEnd(&workspace->strm); 300 300 if (data_in) 301 301 kunmap(pages_in[page_in_index]); 302 + if (!ret) 303 + btrfs_clear_biovec_end(bvec, vcnt, page_out_index, pg_offset); 302 304 return ret; 303 305 } 304 306 ··· 312 310 struct workspace *workspace = list_entry(ws, struct workspace, list); 313 311 int ret = 0; 314 312 int wbits = MAX_WBITS; 315 - unsigned long bytes_left = destlen; 313 + unsigned long bytes_left; 316 314 unsigned long total_out = 0; 315 + unsigned long pg_offset = 0; 317 316 char *kaddr; 317 + 318 + destlen = min_t(unsigned long, destlen, PAGE_SIZE); 319 + bytes_left = destlen; 318 320 319 321 workspace->strm.next_in = data_in; 320 322 workspace->strm.avail_in = srclen; ··· 347 341 unsigned long buf_start; 348 342 unsigned long buf_offset; 349 343 unsigned long bytes; 350 - unsigned long pg_offset = 0; 351 344 352 345 ret = zlib_inflate(&workspace->strm, Z_NO_FLUSH); 353 346 if (ret != Z_OK && ret != Z_STREAM_END) ··· 389 384 ret = 0; 390 385 391 386 zlib_inflateEnd(&workspace->strm); 387 + 388 + /* 389 + * this should only happen if zlib returned fewer bytes than we 390 + * expected. btrfs_get_block is responsible for zeroing from the 391 + * end of the inline extent (destlen) to the end of the page 392 + */ 393 + if (pg_offset < destlen) { 394 + kaddr = kmap_atomic(dest_page); 395 + memset(kaddr + pg_offset, 0, destlen - pg_offset); 396 + kunmap_atomic(kaddr); 397 + } 392 398 return ret; 393 399 } 394 400
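The lzo and zlib inline-extent paths now share the same two safeguards: clamp destlen to one page right before the copy, and zero whatever the decompressor failed to produce so stale page contents never reach the caller. The clamp-then-backfill shape on its own:

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096UL
#define min_t(t, a, b) ((t)(a) < (t)(b) ? (t)(a) : (t)(b))

int main(void)
{
    static char page[PAGE_SIZE];
    unsigned long destlen = 8192;     /* over-sized request */
    unsigned long produced = 1000;    /* what inflate delivered */

    destlen = min_t(unsigned long, destlen, PAGE_SIZE);
    memset(page, 0xee, sizeof(page)); /* stand-in for stale data */
    /* ... the `produced` decompressed bytes would be copied here ... */
    if (produced < destlen)
        memset(page + produced, 0, destlen - produced);

    printf("page[999]=%#x page[1000]=%#x\n",
           page[999] & 0xff, page[1000] & 0xff);
    return 0;
}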
+1
fs/dcache.c
··· 778 778 struct dentry *parent = lock_parent(dentry); 779 779 if (likely(!dentry->d_lockref.count)) { 780 780 __dentry_kill(dentry); 781 + dput(parent); 781 782 goto restart; 782 783 } 783 784 if (parent)
+21 -21
fs/isofs/inode.c
··· 174 174 * Compute the hash for the isofs name corresponding to the dentry. 175 175 */ 176 176 static int 177 - isofs_hash_common(struct qstr *qstr, int ms) 178 - { 179 - const char *name; 180 - int len; 181 - 182 - len = qstr->len; 183 - name = qstr->name; 184 - if (ms) { 185 - while (len && name[len-1] == '.') 186 - len--; 187 - } 188 - 189 - qstr->hash = full_name_hash(name, len); 190 - 191 - return 0; 192 - } 193 - 194 - /* 195 - * Compute the hash for the isofs name corresponding to the dentry. 196 - */ 197 - static int 198 177 isofs_hashi_common(struct qstr *qstr, int ms) 199 178 { 200 179 const char *name; ··· 242 263 } 243 264 244 265 #ifdef CONFIG_JOLIET 266 + /* 267 + * Compute the hash for the isofs name corresponding to the dentry. 268 + */ 269 + static int 270 + isofs_hash_common(struct qstr *qstr, int ms) 271 + { 272 + const char *name; 273 + int len; 274 + 275 + len = qstr->len; 276 + name = qstr->name; 277 + if (ms) { 278 + while (len && name[len-1] == '.') 279 + len--; 280 + } 281 + 282 + qstr->hash = full_name_hash(name, len); 283 + 284 + return 0; 285 + } 286 + 245 287 static int 246 288 isofs_hash_ms(const struct dentry *dentry, struct qstr *qstr) 247 289 {
+6 -2
fs/nfsd/nfs4callback.c
··· 774 774 { 775 775 if (test_and_set_bit(0, &clp->cl_cb_slot_busy) != 0) { 776 776 rpc_sleep_on(&clp->cl_cb_waitq, task, NULL); 777 - dprintk("%s slot is busy\n", __func__); 778 - return false; 777 + /* Race breaker */ 778 + if (test_and_set_bit(0, &clp->cl_cb_slot_busy) != 0) { 779 + dprintk("%s slot is busy\n", __func__); 780 + return false; 781 + } 782 + rpc_wake_up_queued_task(&clp->cl_cb_waitq, task); 779 783 } 780 784 return true; 781 785 }
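The nfsd callback fix above is the classic sleep/wake race breaker: after queuing on the waitqueue, test the busy bit once more, because the holder may have released the slot between the first test and the sleep; without the re-test the task could sleep forever. The pattern with C11 atomics (the queue/wake steps are reduced to hypothetical comments):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_flag slot_busy = ATOMIC_FLAG_INIT;

static bool get_slot(void)
{
    if (atomic_flag_test_and_set(&slot_busy)) {
        /* queue ourselves for wakeup here */
        if (atomic_flag_test_and_set(&slot_busy))
            return false;    /* really busy: stay queued */
        /* slot freed in the window: wake ourselves back up */
    }
    return true;
}

int main(void)
{
    printf("first:  %s\n", get_slot() ? "got slot" : "busy");
    printf("second: %s\n", get_slot() ? "got slot" : "busy");
    return 0;
}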
+6 -3
fs/nfsd/nfsd.h
··· 335 335 (NFSD4_SUPPORTED_ATTRS_WORD2 | FATTR4_WORD2_SUPPATTR_EXCLCREAT) 336 336 337 337 #ifdef CONFIG_NFSD_V4_SECURITY_LABEL 338 - #define NFSD4_2_SUPPORTED_ATTRS_WORD2 \ 339 - (NFSD4_1_SUPPORTED_ATTRS_WORD2 | FATTR4_WORD2_SECURITY_LABEL) 338 + #define NFSD4_2_SECURITY_ATTRS FATTR4_WORD2_SECURITY_LABEL 340 339 #else 341 - #define NFSD4_2_SUPPORTED_ATTRS_WORD2 0 340 + #define NFSD4_2_SECURITY_ATTRS 0 342 341 #endif 342 + 343 + #define NFSD4_2_SUPPORTED_ATTRS_WORD2 \ 344 + (NFSD4_1_SUPPORTED_ATTRS_WORD2 | \ 345 + NFSD4_2_SECURITY_ATTRS) 343 346 344 347 static inline u32 nfsd_suppattrs0(u32 minorversion) 345 348 {
+1 -1
fs/overlayfs/Kconfig
··· 1 - config OVERLAYFS_FS 1 + config OVERLAY_FS 2 2 tristate "Overlay filesystem support" 3 3 help 4 4 An overlay filesystem combines two filesystems - an 'upper' filesystem
+2 -2
fs/overlayfs/Makefile
··· 2 2 # Makefile for the overlay filesystem. 3 3 # 4 4 5 - obj-$(CONFIG_OVERLAYFS_FS) += overlayfs.o 5 + obj-$(CONFIG_OVERLAY_FS) += overlay.o 6 6 7 - overlayfs-objs := super.o inode.o dir.o readdir.o copy_up.o 7 + overlay-objs := super.o inode.o dir.o readdir.o copy_up.o
+19 -12
fs/overlayfs/dir.c
··· 284 284 return ERR_PTR(err); 285 285 } 286 286 287 - static struct dentry *ovl_check_empty_and_clear(struct dentry *dentry, 288 - enum ovl_path_type type) 287 + static struct dentry *ovl_check_empty_and_clear(struct dentry *dentry) 289 288 { 290 289 int err; 291 290 struct dentry *ret = NULL; ··· 293 294 err = ovl_check_empty_dir(dentry, &list); 294 295 if (err) 295 296 ret = ERR_PTR(err); 296 - else if (type == OVL_PATH_MERGE) 297 - ret = ovl_clear_empty(dentry, &list); 297 + else { 298 + /* 299 + * If no upperdentry then skip clearing whiteouts. 300 + * 301 + * Can race with copy-up, since we don't hold the upperdir 302 + * mutex. Doesn't matter, since copy-up can't create a 303 + * non-empty directory from an empty one. 304 + */ 305 + if (ovl_dentry_upper(dentry)) 306 + ret = ovl_clear_empty(dentry, &list); 307 + } 298 308 299 309 ovl_cache_free(&list); 300 310 ··· 495 487 return err; 496 488 } 497 489 498 - static int ovl_remove_and_whiteout(struct dentry *dentry, 499 - enum ovl_path_type type, bool is_dir) 490 + static int ovl_remove_and_whiteout(struct dentry *dentry, bool is_dir) 500 491 { 501 492 struct dentry *workdir = ovl_workdir(dentry); 502 493 struct inode *wdir = workdir->d_inode; ··· 507 500 int err; 508 501 509 502 if (is_dir) { 510 - opaquedir = ovl_check_empty_and_clear(dentry, type); 503 + opaquedir = ovl_check_empty_and_clear(dentry); 511 504 err = PTR_ERR(opaquedir); 512 505 if (IS_ERR(opaquedir)) 513 506 goto out; ··· 522 515 if (IS_ERR(whiteout)) 523 516 goto out_unlock; 524 517 525 - if (type == OVL_PATH_LOWER) { 518 + upper = ovl_dentry_upper(dentry); 519 + if (!upper) { 526 520 upper = lookup_one_len(dentry->d_name.name, upperdir, 527 - dentry->d_name.len); 521 + dentry->d_name.len); 528 522 err = PTR_ERR(upper); 529 523 if (IS_ERR(upper)) 530 524 goto kill_whiteout; ··· 537 529 } else { 538 530 int flags = 0; 539 531 540 - upper = ovl_dentry_upper(dentry); 541 532 if (opaquedir) 542 533 upper = opaquedir; 543 534 err = -ESTALE; ··· 655 648 cap_raise(override_cred->cap_effective, CAP_CHOWN); 656 649 old_cred = override_creds(override_cred); 657 650 658 - err = ovl_remove_and_whiteout(dentry, type, is_dir); 651 + err = ovl_remove_and_whiteout(dentry, is_dir); 659 652 660 653 revert_creds(old_cred); 661 654 put_cred(override_cred); ··· 788 781 } 789 782 790 783 if (overwrite && (new_type == OVL_PATH_LOWER || new_type == OVL_PATH_MERGE) && new_is_dir) { 791 - opaquedir = ovl_check_empty_and_clear(new, new_type); 784 + opaquedir = ovl_check_empty_and_clear(new); 792 785 err = PTR_ERR(opaquedir); 793 786 if (IS_ERR(opaquedir)) { 794 787 opaquedir = NULL;
+18 -9
fs/overlayfs/inode.c
··· 235 235 return err; 236 236 } 237 237 238 + static bool ovl_need_xattr_filter(struct dentry *dentry, 239 + enum ovl_path_type type) 240 + { 241 + return type == OVL_PATH_UPPER && S_ISDIR(dentry->d_inode->i_mode); 242 + } 243 + 238 244 ssize_t ovl_getxattr(struct dentry *dentry, const char *name, 239 245 void *value, size_t size) 240 246 { 241 - if (ovl_path_type(dentry->d_parent) == OVL_PATH_MERGE && 242 - ovl_is_private_xattr(name)) 247 + struct path realpath; 248 + enum ovl_path_type type = ovl_path_real(dentry, &realpath); 249 + 250 + if (ovl_need_xattr_filter(dentry, type) && ovl_is_private_xattr(name)) 243 251 return -ENODATA; 244 252 245 - return vfs_getxattr(ovl_dentry_real(dentry), name, value, size); 253 + return vfs_getxattr(realpath.dentry, name, value, size); 246 254 } 247 255 248 256 ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size) 249 257 { 258 + struct path realpath; 259 + enum ovl_path_type type = ovl_path_real(dentry, &realpath); 250 260 ssize_t res; 251 261 int off; 252 262 253 - res = vfs_listxattr(ovl_dentry_real(dentry), list, size); 263 + res = vfs_listxattr(realpath.dentry, list, size); 254 264 if (res <= 0 || size == 0) 255 265 return res; 256 266 257 - if (ovl_path_type(dentry->d_parent) != OVL_PATH_MERGE) 267 + if (!ovl_need_xattr_filter(dentry, type)) 258 268 return res; 259 269 260 270 /* filter out private xattrs */ ··· 289 279 { 290 280 int err; 291 281 struct path realpath; 292 - enum ovl_path_type type; 282 + enum ovl_path_type type = ovl_path_real(dentry, &realpath); 293 283 294 284 err = ovl_want_write(dentry); 295 285 if (err) 296 286 goto out; 297 287 298 - if (ovl_path_type(dentry->d_parent) == OVL_PATH_MERGE && 299 - ovl_is_private_xattr(name)) 288 + err = -ENODATA; 289 + if (ovl_need_xattr_filter(dentry, type) && ovl_is_private_xattr(name)) 300 290 goto out_drop_write; 301 291 302 - type = ovl_path_real(dentry, &realpath); 303 292 if (type == OVL_PATH_LOWER) { 304 293 err = vfs_getxattr(realpath.dentry, name, NULL, 0); 305 294 if (err < 0)
+16 -23
fs/overlayfs/readdir.c
··· 274 274 return 0; 275 275 } 276 276 277 - static inline int ovl_dir_read_merged(struct path *upperpath, 278 - struct path *lowerpath, 279 - struct list_head *list) 277 + static int ovl_dir_read_merged(struct dentry *dentry, struct list_head *list) 280 278 { 281 279 int err; 280 + struct path lowerpath; 281 + struct path upperpath; 282 282 struct ovl_readdir_data rdd = { 283 283 .ctx.actor = ovl_fill_merge, 284 284 .list = list, ··· 286 286 .is_merge = false, 287 287 }; 288 288 289 - if (upperpath->dentry) { 290 - err = ovl_dir_read(upperpath, &rdd); 289 + ovl_path_lower(dentry, &lowerpath); 290 + ovl_path_upper(dentry, &upperpath); 291 + 292 + if (upperpath.dentry) { 293 + err = ovl_dir_read(&upperpath, &rdd); 291 294 if (err) 292 295 goto out; 293 296 294 - if (lowerpath->dentry) { 295 - err = ovl_dir_mark_whiteouts(upperpath->dentry, &rdd); 297 + if (lowerpath.dentry) { 298 + err = ovl_dir_mark_whiteouts(upperpath.dentry, &rdd); 296 299 if (err) 297 300 goto out; 298 301 } 299 302 } 300 - if (lowerpath->dentry) { 303 + if (lowerpath.dentry) { 301 304 /* 302 305 * Insert lowerpath entries before upperpath ones, this allows 303 306 * offsets to be reasonably constant 304 307 */ 305 308 list_add(&rdd.middle, rdd.list); 306 309 rdd.is_merge = true; 307 - err = ovl_dir_read(lowerpath, &rdd); 310 + err = ovl_dir_read(&lowerpath, &rdd); 308 311 list_del(&rdd.middle); 309 312 } 310 313 out: ··· 332 329 static struct ovl_dir_cache *ovl_cache_get(struct dentry *dentry) 333 330 { 334 331 int res; 335 - struct path lowerpath; 336 - struct path upperpath; 337 332 struct ovl_dir_cache *cache; 338 333 339 334 cache = ovl_dir_cache(dentry); ··· 348 347 cache->refcount = 1; 349 348 INIT_LIST_HEAD(&cache->entries); 350 349 351 - ovl_path_lower(dentry, &lowerpath); 352 - ovl_path_upper(dentry, &upperpath); 353 - 354 - res = ovl_dir_read_merged(&upperpath, &lowerpath, &cache->entries); 350 + res = ovl_dir_read_merged(dentry, &cache->entries); 355 351 if (res) { 356 352 ovl_cache_free(&cache->entries); 357 353 kfree(cache); ··· 450 452 /* 451 453 * Need to check if we started out being a lower dir, but got copied up 452 454 */ 453 - if (!od->is_upper && ovl_path_type(dentry) == OVL_PATH_MERGE) { 455 + if (!od->is_upper && ovl_path_type(dentry) != OVL_PATH_LOWER) { 454 456 struct inode *inode = file_inode(file); 455 457 456 - realfile =lockless_dereference(od->upperfile); 458 + realfile = lockless_dereference(od->upperfile); 457 459 if (!realfile) { 458 460 struct path upperpath; 459 461 ··· 536 538 int ovl_check_empty_dir(struct dentry *dentry, struct list_head *list) 537 539 { 538 540 int err; 539 - struct path lowerpath; 540 - struct path upperpath; 541 541 struct ovl_cache_entry *p; 542 542 543 - ovl_path_upper(dentry, &upperpath); 544 - ovl_path_lower(dentry, &lowerpath); 545 - 546 - err = ovl_dir_read_merged(&upperpath, &lowerpath, list); 543 + err = ovl_dir_read_merged(dentry, list); 547 544 if (err) 548 545 return err; 549 546
+49 -12
fs/overlayfs/super.c
··· 24 24 MODULE_DESCRIPTION("Overlay filesystem"); 25 25 MODULE_LICENSE("GPL"); 26 26 27 - #define OVERLAYFS_SUPER_MAGIC 0x794c764f 27 + #define OVERLAYFS_SUPER_MAGIC 0x794c7630 28 28 29 29 struct ovl_config { 30 30 char *lowerdir; ··· 84 84 85 85 static struct dentry *ovl_upperdentry_dereference(struct ovl_entry *oe) 86 86 { 87 - struct dentry *upperdentry = ACCESS_ONCE(oe->__upperdentry); 88 - /* 89 - * Make sure to order reads to upperdentry wrt ovl_dentry_update() 90 - */ 91 - smp_read_barrier_depends(); 92 - return upperdentry; 87 + return lockless_dereference(oe->__upperdentry); 93 88 } 94 89 95 90 void ovl_path_upper(struct dentry *dentry, struct path *path) ··· 457 462 {OPT_ERR, NULL} 458 463 }; 459 464 465 + static char *ovl_next_opt(char **s) 466 + { 467 + char *sbegin = *s; 468 + char *p; 469 + 470 + if (sbegin == NULL) 471 + return NULL; 472 + 473 + for (p = sbegin; *p; p++) { 474 + if (*p == '\\') { 475 + p++; 476 + if (!*p) 477 + break; 478 + } else if (*p == ',') { 479 + *p = '\0'; 480 + *s = p + 1; 481 + return sbegin; 482 + } 483 + } 484 + *s = NULL; 485 + return sbegin; 486 + } 487 + 460 488 static int ovl_parse_opt(char *opt, struct ovl_config *config) 461 489 { 462 490 char *p; 463 491 464 - while ((p = strsep(&opt, ",")) != NULL) { 492 + while ((p = ovl_next_opt(&opt)) != NULL) { 465 493 int token; 466 494 substring_t args[MAX_OPT_ARGS]; 467 495 ··· 572 554 goto out_unlock; 573 555 } 574 556 557 + static void ovl_unescape(char *s) 558 + { 559 + char *d = s; 560 + 561 + for (;; s++, d++) { 562 + if (*s == '\\') 563 + s++; 564 + *d = *s; 565 + if (!*s) 566 + break; 567 + } 568 + } 569 + 575 570 static int ovl_mount_dir(const char *name, struct path *path) 576 571 { 577 572 int err; 573 + char *tmp = kstrdup(name, GFP_KERNEL); 578 574 579 - err = kern_path(name, LOOKUP_FOLLOW, path); 575 + if (!tmp) 576 + return -ENOMEM; 577 + 578 + ovl_unescape(tmp); 579 + err = kern_path(tmp, LOOKUP_FOLLOW, path); 580 580 if (err) { 581 - pr_err("overlayfs: failed to resolve '%s': %i\n", name, err); 581 + pr_err("overlayfs: failed to resolve '%s': %i\n", tmp, err); 582 582 err = -EINVAL; 583 583 } 584 + kfree(tmp); 584 585 return err; 585 586 } 586 587 ··· 813 776 814 777 static struct file_system_type ovl_fs_type = { 815 778 .owner = THIS_MODULE, 816 - .name = "overlayfs", 779 + .name = "overlay", 817 780 .mount = ovl_mount, 818 781 .kill_sb = kill_anon_super, 819 782 }; 820 - MODULE_ALIAS_FS("overlayfs"); 783 + MODULE_ALIAS_FS("overlay"); 821 784 822 785 static int __init ovl_init(void) 823 786 {
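ovl_next_opt() and ovl_unescape() above replace strsep() so option values can contain commas behind a backslash. The pair works unchanged as a standalone tokenizer:

#include <stdio.h>

static char *next_opt(char **s)
{
    char *sbegin = *s, *p;

    if (!sbegin)
        return NULL;
    for (p = sbegin; *p; p++) {
        if (*p == '\\') {
            p++;               /* skip the quoted character */
            if (!*p)
                break;
        } else if (*p == ',') {
            *p = '\0';
            *s = p + 1;
            return sbegin;
        }
    }
    *s = NULL;
    return sbegin;
}

static void unescape(char *s)
{
    char *d = s;

    for (;; s++, d++) {
        if (*s == '\\')
            s++;
        *d = *s;
        if (!*s)
            break;
    }
}

int main(void)
{
    char opts[] = "lowerdir=/low\\,er,upperdir=/up";
    char *rest = opts, *tok;

    while ((tok = next_opt(&rest))) {
        unescape(tok);
        printf("option: %s\n", tok);    /* "/low,er" stays one option */
    }
    return 0;
}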
+1 -1
include/dt-bindings/clock/qcom,mmcc-apq8084.h
··· 60 60 #define ESC1_CLK_SRC 43 61 61 #define HDMI_CLK_SRC 44 62 62 #define VSYNC_CLK_SRC 45 63 - #define RBCPR_CLK_SRC 46 63 + #define MMSS_RBCPR_CLK_SRC 46 64 64 #define RBBMTIMER_CLK_SRC 47 65 65 #define MAPLE_CLK_SRC 48 66 66 #define VDP_CLK_SRC 49
+5 -2
include/linux/bitops.h
··· 18 18 * position @h. For example 19 19 * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000. 20 20 */ 21 - #define GENMASK(h, l) (((U32_C(1) << ((h) - (l) + 1)) - 1) << (l)) 22 - #define GENMASK_ULL(h, l) (((U64_C(1) << ((h) - (l) + 1)) - 1) << (l)) 21 + #define GENMASK(h, l) \ 22 + (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h)))) 23 + 24 + #define GENMASK_ULL(h, l) \ 25 + (((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) 23 26 24 27 extern unsigned int __sw_hweight8(unsigned int w); 25 28 extern unsigned int __sw_hweight16(unsigned int w);
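The GENMASK rewrite above sidesteps undefined behavior: the old form shifts 1UL left by (h - l + 1), which for a full-width mask such as GENMASK(BITS_PER_LONG - 1, 0) is a shift by the type width. The new form never shifts by more than BITS_PER_LONG - 1. Side by side (only the safe form is evaluated at full width):

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(long))

#define GENMASK_OLD(h, l) (((1UL << ((h) - (l) + 1)) - 1) << (l))
#define GENMASK_NEW(h, l) \
    (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

int main(void)
{
    printf("GENMASK_NEW(31, 21) = 0x%lx\n", GENMASK_NEW(31, 21));
    /* GENMASK_OLD(BITS_PER_LONG - 1, 0) would shift by the full
     * type width -- undefined behavior -- so only the new form
     * can produce an all-ones mask */
    printf("GENMASK_NEW(%zu, 0) = 0x%lx\n",
           BITS_PER_LONG - 1, GENMASK_NEW(BITS_PER_LONG - 1, 0));
    return 0;
}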
+6
include/linux/can/dev.h
··· 99 99 return 1; 100 100 } 101 101 102 + static inline bool can_is_canfd_skb(const struct sk_buff *skb) 103 + { 104 + /* the CAN specific type of skb is identified by its data length */ 105 + return skb->len == CANFD_MTU; 106 + } 107 + 102 108 /* get data length from can_dlc with sanitized can_dlc */ 103 109 u8 can_dlc2len(u8 can_dlc); 104 110
-1
include/linux/clk-provider.h
··· 352 352 #define CLK_DIVIDER_READ_ONLY BIT(5) 353 353 354 354 extern const struct clk_ops clk_divider_ops; 355 - extern const struct clk_ops clk_divider_ro_ops; 356 355 struct clk *clk_register_divider(struct device *dev, const char *name, 357 356 const char *parent_name, unsigned long flags, 358 357 void __iomem *reg, u8 shift, u8 width,
+1 -1
include/linux/iio/events.h
··· 72 72 73 73 #define IIO_EVENT_CODE_EXTRACT_TYPE(mask) ((mask >> 56) & 0xFF) 74 74 75 - #define IIO_EVENT_CODE_EXTRACT_DIR(mask) ((mask >> 48) & 0xCF) 75 + #define IIO_EVENT_CODE_EXTRACT_DIR(mask) ((mask >> 48) & 0x7F) 76 76 77 77 #define IIO_EVENT_CODE_EXTRACT_CHAN_TYPE(mask) ((mask >> 32) & 0xFF) 78 78
+1 -1
include/linux/inetdevice.h
··· 242 242 static __inline__ __be32 inet_make_mask(int logmask) 243 243 { 244 244 if (logmask) 245 - return htonl(~((1<<(32-logmask))-1)); 245 + return htonl(~((1U<<(32-logmask))-1)); 246 246 return 0; 247 247 } 248 248
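The 1U change above is not cosmetic: with a plain signed 1, logmask == 1 evaluates 1 << 31, which overflows a signed int and is undefined behaviour; the unsigned literal makes every shift in range well defined. A host-endian stand-in (no htonl), purely illustrative:

    #include <stdio.h>

    static unsigned int make_mask(int logmask)
    {
        if (logmask)
            return ~((1U << (32 - logmask)) - 1);
        return 0;
    }

    int main(void)
    {
        printf("/1  -> 0x%08x\n", make_mask(1));   /* 0x80000000 */
        printf("/24 -> 0x%08x\n", make_mask(24));  /* 0xffffff00 */
        printf("/0  -> 0x%08x\n", make_mask(0));   /* 0x00000000 */
        return 0;
    }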
-5
include/linux/kernel_stat.h
··· 77 77 return kstat_cpu(cpu).irqs_sum; 78 78 } 79 79 80 - /* 81 - * Lock/unlock the current runqueue - to extract task statistics: 82 - */ 83 - extern unsigned long long task_delta_exec(struct task_struct *); 84 - 85 80 extern void account_user_time(struct task_struct *, cputime_t, cputime_t); 86 81 extern void account_system_time(struct task_struct *, int, cputime_t, cputime_t); 87 82 extern void account_steal_time(cputime_t);
+1 -1
include/linux/kvm_host.h
··· 703 703 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu); 704 704 void kvm_vcpu_kick(struct kvm_vcpu *vcpu); 705 705 706 - bool kvm_is_mmio_pfn(pfn_t pfn); 706 + bool kvm_is_reserved_pfn(pfn_t pfn); 707 707 708 708 struct kvm_irq_ack_notifier { 709 709 struct hlist_node link;
+1
include/linux/pci.h
··· 331 331 unsigned int is_added:1; 332 332 unsigned int is_busmaster:1; /* device is busmaster */ 333 333 unsigned int no_msi:1; /* device may not use msi */ 334 + unsigned int no_64bit_msi:1; /* device may only use 32-bit MSIs */ 334 335 unsigned int block_cfg_access:1; /* config space access is blocked */ 335 336 unsigned int broken_parity_status:1; /* Device generates false positive parity */ 336 337 unsigned int irq_reroute_variant:2; /* device needs IRQ rerouting variant */
+7 -1
include/linux/percpu-refcount.h
··· 133 133 /* paired with smp_store_release() in percpu_ref_reinit() */ 134 134 smp_read_barrier_depends(); 135 135 136 - if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC)) 136 + /* 137 + * Theoretically, the following could test just ATOMIC; however, 138 + * then we'd have to mask off DEAD separately as DEAD may be 139 + * visible without ATOMIC if we race with percpu_ref_kill(). DEAD 140 + * implies ATOMIC anyway. Test them together. 141 + */ 142 + if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC_DEAD)) 137 143 return false; 138 144 139 145 *percpu_countp = (unsigned long __percpu *)percpu_ptr;
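The comment added above captures a general pattern: when one flag (DEAD) can become visible before another (ATOMIC) under a race, but the first implies the second, testing the OR of both in a single branch is both safe and cheap. A sketch with invented flag values, not the kernel's __PERCPU_REF_* bits:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define REF_ATOMIC      (1UL << 0)
    #define REF_DEAD        (1UL << 1)
    #define REF_ATOMIC_DEAD (REF_ATOMIC | REF_DEAD)

    static bool can_use_percpu_fastpath(uintptr_t percpu_ptr)
    {
        /* One test covers both "switched to atomic" and "killed":
         * either low bit set means the slow path must be taken. */
        return !(percpu_ptr & REF_ATOMIC_DEAD);
    }

    int main(void)
    {
        printf("%d %d\n", can_use_percpu_fastpath(0x1000),      /* 1 */
               can_use_percpu_fastpath(0x1000 | REF_DEAD));     /* 0 */
        return 0;
    }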
+2
include/net/inet_common.h
··· 37 37 int inet_ctl_sock_create(struct sock **sk, unsigned short family, 38 38 unsigned short type, unsigned char protocol, 39 39 struct net *net); 40 + int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, 41 + int *addr_len); 40 42 41 43 static inline void inet_ctl_sock_destroy(struct sock *sk) 42 44 {
-2
include/net/netfilter/nf_tables.h
··· 396 396 /** 397 397 * struct nft_trans - nf_tables object update in transaction 398 398 * 399 - * @rcu_head: rcu head to defer release of transaction data 400 399 * @list: used internally 401 400 * @msg_type: message type 402 401 * @ctx: transaction context 403 402 * @data: internal information related to the transaction 404 403 */ 405 404 struct nft_trans { 406 - struct rcu_head rcu_head; 407 405 struct list_head list; 408 406 int msg_type; 409 407 struct nft_ctx ctx;
+18
include/net/vxlan.h
··· 8 8 #define VNI_HASH_BITS 10 9 9 #define VNI_HASH_SIZE (1<<VNI_HASH_BITS) 10 10 11 + /* VXLAN protocol header */ 12 + struct vxlanhdr { 13 + __be32 vx_flags; 14 + __be32 vx_vni; 15 + }; 16 + 11 17 struct vxlan_sock; 12 18 typedef void (vxlan_rcv_t)(struct vxlan_sock *vh, struct sk_buff *skb, __be32 key); 13 19 ··· 50 44 struct rtable *rt, struct sk_buff *skb, 51 45 __be32 src, __be32 dst, __u8 tos, __u8 ttl, __be16 df, 52 46 __be16 src_port, __be16 dst_port, __be32 vni, bool xnet); 47 + 48 + static inline bool vxlan_gso_check(struct sk_buff *skb) 49 + { 50 + if ((skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL) && 51 + (skb->inner_protocol_type != ENCAP_TYPE_ETHER || 52 + skb->inner_protocol != htons(ETH_P_TEB) || 53 + (skb_inner_mac_header(skb) - skb_transport_header(skb) != 54 + sizeof(struct udphdr) + sizeof(struct vxlanhdr)))) 55 + return false; 56 + 57 + return true; 58 + } 53 59 54 60 /* IP header + UDP + VXLAN + Ethernet header */ 55 61 #define VXLAN_HEADROOM (20 + 8 + 8 + 14)
+5 -3
kernel/events/core.c
··· 1562 1562 1563 1563 if (!task) { 1564 1564 /* 1565 - * Per cpu events are removed via an smp call and 1566 - * the removal is always successful. 1565 + * Per cpu events are removed via an smp call. The removal can 1566 + * fail if the CPU is currently offline, but in that case we 1567 + * already called __perf_remove_from_context from 1568 + * perf_event_exit_cpu. 1567 1569 */ 1568 1570 cpu_function_call(event->cpu, __perf_remove_from_context, &re); 1569 1571 return; ··· 8119 8117 8120 8118 static void __perf_event_exit_context(void *__info) 8121 8119 { 8122 - struct remove_event re = { .detach_group = false }; 8120 + struct remove_event re = { .detach_group = true }; 8123 8121 struct perf_event_context *ctx = __info; 8124 8122 8125 8123 perf_pmu_rotate_stop(ctx->pmu);
-1
kernel/events/uprobes.c
··· 1640 1640 if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) { 1641 1641 utask->state = UTASK_SSTEP_TRAPPED; 1642 1642 set_tsk_thread_flag(t, TIF_UPROBE); 1643 - set_tsk_thread_flag(t, TIF_NOTIFY_RESUME); 1644 1643 } 1645 1644 } 1646 1645
+21 -42
kernel/sched/core.c
··· 2475 2475 EXPORT_PER_CPU_SYMBOL(kernel_cpustat); 2476 2476 2477 2477 /* 2478 - * Return any ns on the sched_clock that have not yet been accounted in 2479 - * @p in case that task is currently running. 2480 - * 2481 - * Called with task_rq_lock() held on @rq. 2482 - */ 2483 - static u64 do_task_delta_exec(struct task_struct *p, struct rq *rq) 2484 - { 2485 - u64 ns = 0; 2486 - 2487 - /* 2488 - * Must be ->curr _and_ ->on_rq. If dequeued, we would 2489 - * project cycles that may never be accounted to this 2490 - * thread, breaking clock_gettime(). 2491 - */ 2492 - if (task_current(rq, p) && task_on_rq_queued(p)) { 2493 - update_rq_clock(rq); 2494 - ns = rq_clock_task(rq) - p->se.exec_start; 2495 - if ((s64)ns < 0) 2496 - ns = 0; 2497 - } 2498 - 2499 - return ns; 2500 - } 2501 - 2502 - unsigned long long task_delta_exec(struct task_struct *p) 2503 - { 2504 - unsigned long flags; 2505 - struct rq *rq; 2506 - u64 ns = 0; 2507 - 2508 - rq = task_rq_lock(p, &flags); 2509 - ns = do_task_delta_exec(p, rq); 2510 - task_rq_unlock(rq, p, &flags); 2511 - 2512 - return ns; 2513 - } 2514 - 2515 - /* 2516 2478 * Return accounted runtime for the task. 2517 2479 * In case the task is currently running, return the runtime plus current's 2518 2480 * pending runtime that have not been accounted yet. ··· 2483 2521 { 2484 2522 unsigned long flags; 2485 2523 struct rq *rq; 2486 - u64 ns = 0; 2524 + u64 ns; 2487 2525 2488 2526 #if defined(CONFIG_64BIT) && defined(CONFIG_SMP) 2489 2527 /* ··· 2502 2540 #endif 2503 2541 2504 2542 rq = task_rq_lock(p, &flags); 2505 - ns = p->se.sum_exec_runtime + do_task_delta_exec(p, rq); 2543 + /* 2544 + * Must be ->curr _and_ ->on_rq. If dequeued, we would 2545 + * project cycles that may never be accounted to this 2546 + * thread, breaking clock_gettime(). 2547 + */ 2548 + if (task_current(rq, p) && task_on_rq_queued(p)) { 2549 + update_rq_clock(rq); 2550 + p->sched_class->update_curr(rq); 2551 + } 2552 + ns = p->se.sum_exec_runtime; 2506 2553 task_rq_unlock(rq, p, &flags); 2507 2554 2508 2555 return ns; ··· 6339 6368 if (!sched_debug()) 6340 6369 break; 6341 6370 } 6371 + 6372 + if (!level) 6373 + return; 6374 + 6342 6375 /* 6343 6376 * 'level' contains the number of unique distances, excluding the 6344 6377 * identity distance node_distance(i,i). ··· 7419 7444 if (unlikely(running)) 7420 7445 put_prev_task(rq, tsk); 7421 7446 7422 - tg = container_of(task_css_check(tsk, cpu_cgrp_id, 7423 - lockdep_is_held(&tsk->sighand->siglock)), 7447 + /* 7448 + * All callers are synchronized by task_rq_lock(); we do not use RCU 7449 + * which is pointless here. Thus, we pass "true" to task_css_check() 7450 + * to prevent lockdep warnings. 7451 + */ 7452 + tg = container_of(task_css_check(tsk, cpu_cgrp_id, true), 7424 7453 struct task_group, css); 7425 7454 tg = autogroup_task_group(tsk, tg); 7426 7455 tsk->sched_task_group = tg;
+2
kernel/sched/deadline.c
··· 1701 1701 .prio_changed = prio_changed_dl, 1702 1702 .switched_from = switched_from_dl, 1703 1703 .switched_to = switched_to_dl, 1704 + 1705 + .update_curr = update_curr_dl, 1704 1706 };
+14
kernel/sched/fair.c
··· 726 726 account_cfs_rq_runtime(cfs_rq, delta_exec); 727 727 } 728 728 729 + static void update_curr_fair(struct rq *rq) 730 + { 731 + update_curr(cfs_rq_of(&rq->curr->se)); 732 + } 733 + 729 734 static inline void 730 735 update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se) 731 736 { ··· 1183 1178 if ((cur->flags & PF_EXITING) || is_idle_task(cur)) 1184 1179 cur = NULL; 1185 1180 raw_spin_unlock_irq(&dst_rq->lock); 1181 + 1182 + /* 1183 + * Because we have preemption enabled we can get migrated around and 1184 + * end try selecting ourselves (current == env->p) as a swap candidate. 1185 + */ 1186 + if (cur == env->p) 1187 + goto unlock; 1186 1188 1187 1189 /* 1188 1190 * "imp" is the fault differential for the source task between the ··· 7960 7948 .switched_to = switched_to_fair, 7961 7949 7962 7950 .get_rr_interval = get_rr_interval_fair, 7951 + 7952 + .update_curr = update_curr_fair, 7963 7953 7964 7954 #ifdef CONFIG_FAIR_GROUP_SCHED 7965 7955 .task_move_group = task_move_group_fair,
+5
kernel/sched/idle_task.c
··· 75 75 return 0; 76 76 } 77 77 78 + static void update_curr_idle(struct rq *rq) 79 + { 80 + } 81 + 78 82 /* 79 83 * Simple, special scheduling class for the per-CPU idle tasks: 80 84 */ ··· 105 101 106 102 .prio_changed = prio_changed_idle, 107 103 .switched_to = switched_to_idle, 104 + .update_curr = update_curr_idle, 108 105 };
+2
kernel/sched/rt.c
··· 2128 2128 2129 2129 .prio_changed = prio_changed_rt, 2130 2130 .switched_to = switched_to_rt, 2131 + 2132 + .update_curr = update_curr_rt, 2131 2133 }; 2132 2134 2133 2135 #ifdef CONFIG_SCHED_DEBUG
+2
kernel/sched/sched.h
··· 1135 1135 unsigned int (*get_rr_interval) (struct rq *rq, 1136 1136 struct task_struct *task); 1137 1137 1138 + void (*update_curr) (struct rq *rq); 1139 + 1138 1140 #ifdef CONFIG_FAIR_GROUP_SCHED 1139 1141 void (*task_move_group) (struct task_struct *p, int on_rq); 1140 1142 #endif
+5
kernel/sched/stop_task.c
··· 102 102 return 0; 103 103 } 104 104 105 + static void update_curr_stop(struct rq *rq) 106 + { 107 + } 108 + 105 109 /* 106 110 * Simple, special scheduling class for the per-CPU stop tasks: 107 111 */ ··· 132 128 133 129 .prio_changed = prio_changed_stop, 134 130 .switched_to = switched_to_stop, 131 + .update_curr = update_curr_stop, 135 132 };
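The hunks from core.c through stop_task.c above form one series: every scheduling class gains an update_curr() callback (a no-op for idle and stop), so task_sched_runtime() can fold pending runtime into sum_exec_runtime through the ops table instead of the removed task_delta_exec() helper. A simplified sketch of that hook pattern (types and names are stand-ins for struct sched_class, not the kernel's):

    #include <stdio.h>

    struct rq;

    struct sched_class_ops {
        void (*update_curr)(struct rq *rq);
    };

    struct rq {
        const struct sched_class_ops *ops;
        unsigned long long sum_exec_runtime;
        unsigned long long pending_ns;      /* not yet accounted */
    };

    static void update_curr_fair(struct rq *rq)
    {
        rq->sum_exec_runtime += rq->pending_ns;
        rq->pending_ns = 0;
    }

    static void update_curr_idle(struct rq *rq) { } /* nothing to fold in */

    static const struct sched_class_ops fair_ops = { .update_curr = update_curr_fair };
    static const struct sched_class_ops idle_ops = { .update_curr = update_curr_idle };

    static unsigned long long task_runtime(struct rq *rq)
    {
        rq->ops->update_curr(rq);   /* fold in pending runtime first */
        return rq->sum_exec_runtime;
    }

    int main(void)
    {
        struct rq fair = { .ops = &fair_ops, .sum_exec_runtime = 1000,
                           .pending_ns = 50 };
        struct rq idle = { .ops = &idle_ops };

        printf("fair: %llu\n", task_runtime(&fair)); /* 1050 */
        printf("idle: %llu\n", task_runtime(&idle)); /* 0 */
        return 0;
    }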
+1 -1
kernel/time/posix-cpu-timers.c
··· 553 553 *sample = cputime_to_expires(cputime.utime); 554 554 break; 555 555 case CPUCLOCK_SCHED: 556 - *sample = cputime.sum_exec_runtime + task_delta_exec(p); 556 + *sample = cputime.sum_exec_runtime; 557 557 break; 558 558 } 559 559 return 0;
+2 -2
lib/Makefile
··· 10 10 lib-y := ctype.o string.o vsprintf.o cmdline.o \ 11 11 rbtree.o radix-tree.o dump_stack.o timerqueue.o\ 12 12 idr.o int_sqrt.o extable.o \ 13 - sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \ 13 + sha1.o md5.o irq_regs.o argv_split.o \ 14 14 proportions.o flex_proportions.o ratelimit.o show_mem.o \ 15 15 is_single_threaded.o plist.o decompress.o kobject_uevent.o \ 16 16 earlycpio.o ··· 26 26 bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \ 27 27 gcd.o lcm.o list_sort.o uuid.o flex_array.o iovec.o clz_ctz.o \ 28 28 bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \ 29 - percpu-refcount.o percpu_ida.o hash.o rhashtable.o 29 + percpu-refcount.o percpu_ida.o hash.o rhashtable.o reciprocal_div.o 30 30 obj-y += string_helpers.o 31 31 obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o 32 32 obj-y += kstrtox.o
+1 -2
net/bridge/br_multicast.c
··· 813 813 return; 814 814 815 815 if (port) { 816 - __skb_push(skb, sizeof(struct ethhdr)); 817 816 skb->dev = port->dev; 818 817 NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT, skb, NULL, skb->dev, 819 - dev_queue_xmit); 818 + br_dev_queue_push_xmit); 820 819 } else { 821 820 br_multicast_select_own_querier(br, ip, skb); 822 821 netif_rx(skb);
+1
net/bridge/br_netlink.c
··· 280 280 [IFLA_BRPORT_MODE] = { .type = NLA_U8 }, 281 281 [IFLA_BRPORT_GUARD] = { .type = NLA_U8 }, 282 282 [IFLA_BRPORT_PROTECT] = { .type = NLA_U8 }, 283 + [IFLA_BRPORT_FAST_LEAVE]= { .type = NLA_U8 }, 283 284 [IFLA_BRPORT_LEARNING] = { .type = NLA_U8 }, 284 285 [IFLA_BRPORT_UNICAST_FLOOD] = { .type = NLA_U8 }, 285 286 };
+18 -5
net/core/rtnetlink.c
··· 2685 2685 int idx = 0; 2686 2686 u32 portid = NETLINK_CB(cb->skb).portid; 2687 2687 u32 seq = cb->nlh->nlmsg_seq; 2688 - struct nlattr *extfilt; 2689 2688 u32 filter_mask = 0; 2690 2689 2691 - extfilt = nlmsg_find_attr(cb->nlh, sizeof(struct ifinfomsg), 2692 - IFLA_EXT_MASK); 2693 - if (extfilt) 2694 - filter_mask = nla_get_u32(extfilt); 2690 + if (nlmsg_len(cb->nlh) > sizeof(struct ifinfomsg)) { 2691 + struct nlattr *extfilt; 2692 + 2693 + extfilt = nlmsg_find_attr(cb->nlh, sizeof(struct ifinfomsg), 2694 + IFLA_EXT_MASK); 2695 + if (extfilt) { 2696 + if (nla_len(extfilt) < sizeof(filter_mask)) 2697 + return -EINVAL; 2698 + 2699 + filter_mask = nla_get_u32(extfilt); 2700 + } 2701 + } 2695 2702 2696 2703 rcu_read_lock(); 2697 2704 for_each_netdev_rcu(net, dev) { ··· 2805 2798 if (br_spec) { 2806 2799 nla_for_each_nested(attr, br_spec, rem) { 2807 2800 if (nla_type(attr) == IFLA_BRIDGE_FLAGS) { 2801 + if (nla_len(attr) < sizeof(flags)) 2802 + return -EINVAL; 2803 + 2808 2804 have_flags = true; 2809 2805 flags = nla_get_u16(attr); 2810 2806 break; ··· 2878 2868 if (br_spec) { 2879 2869 nla_for_each_nested(attr, br_spec, rem) { 2880 2870 if (nla_type(attr) == IFLA_BRIDGE_FLAGS) { 2871 + if (nla_len(attr) < sizeof(flags)) 2872 + return -EINVAL; 2873 + 2881 2874 have_flags = true; 2882 2875 flags = nla_get_u16(attr); 2883 2876 break;
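Both rtnetlink hunks above add the same guard: check nla_len() before nla_get_u32()/nla_get_u16(), so a short attribute from userspace cannot be read past its payload. A sketch of that length-check pattern with a simplified attribute type (struct attr stands in for struct nlattr):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct attr {
        uint16_t len;       /* payload length */
        const void *data;
    };

    static int get_u32_checked(const struct attr *a, uint32_t *out)
    {
        if (a->len < sizeof(*out))
            return -1;      /* short attribute: reject, never read */
        memcpy(out, a->data, sizeof(*out));
        return 0;
    }

    int main(void)
    {
        uint32_t payload = 0x1234, v;
        struct attr ok   = { .len = 4, .data = &payload };
        struct attr runt = { .len = 2, .data = &payload };

        printf("ok:   %d\n", get_u32_checked(&ok, &v));   /* 0 */
        printf("runt: %d\n", get_u32_checked(&runt, &v)); /* -1 */
        return 0;
    }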
+6 -17
net/core/skbuff.c
··· 552 552 case SKB_FCLONE_CLONE: 553 553 fclones = container_of(skb, struct sk_buff_fclones, skb2); 554 554 555 - /* Warning : We must perform the atomic_dec_and_test() before 556 - * setting skb->fclone back to SKB_FCLONE_FREE, otherwise 557 - * skb_clone() could set clone_ref to 2 before our decrement. 558 - * Anyway, if we are going to free the structure, no need to 559 - * rewrite skb->fclone. 555 + /* The clone portion is available for 556 + * fast-cloning again. 560 557 */ 561 - if (atomic_dec_and_test(&fclones->fclone_ref)) { 558 + skb->fclone = SKB_FCLONE_FREE; 559 + 560 + if (atomic_dec_and_test(&fclones->fclone_ref)) 562 561 kmem_cache_free(skbuff_fclone_cache, fclones); 563 - } else { 564 - /* The clone portion is available for 565 - * fast-cloning again. 566 - */ 567 - skb->fclone = SKB_FCLONE_FREE; 568 - } 569 562 break; 570 563 } 571 564 } ··· 880 887 if (skb->fclone == SKB_FCLONE_ORIG && 881 888 n->fclone == SKB_FCLONE_FREE) { 882 889 n->fclone = SKB_FCLONE_CLONE; 883 - /* As our fastclone was free, clone_ref must be 1 at this point. 884 - * We could use atomic_inc() here, but it is faster 885 - * to set the final value. 886 - */ 887 - atomic_set(&fclones->fclone_ref, 2); 890 + atomic_inc(&fclones->fclone_ref); 888 891 } else { 889 892 if (skb_pfmemalloc(skb)) 890 893 gfp_mask |= __GFP_MEMALLOC;
+18 -18
net/dcb/dcbnl.c
··· 1080 1080 if (!app) 1081 1081 return -EMSGSIZE; 1082 1082 1083 - spin_lock(&dcb_lock); 1083 + spin_lock_bh(&dcb_lock); 1084 1084 list_for_each_entry(itr, &dcb_app_list, list) { 1085 1085 if (itr->ifindex == netdev->ifindex) { 1086 1086 err = nla_put(skb, DCB_ATTR_IEEE_APP, sizeof(itr->app), 1087 1087 &itr->app); 1088 1088 if (err) { 1089 - spin_unlock(&dcb_lock); 1089 + spin_unlock_bh(&dcb_lock); 1090 1090 return -EMSGSIZE; 1091 1091 } 1092 1092 } ··· 1097 1097 else 1098 1098 dcbx = -EOPNOTSUPP; 1099 1099 1100 - spin_unlock(&dcb_lock); 1100 + spin_unlock_bh(&dcb_lock); 1101 1101 nla_nest_end(skb, app); 1102 1102 1103 1103 /* get peer info if available */ ··· 1234 1234 } 1235 1235 1236 1236 /* local app */ 1237 - spin_lock(&dcb_lock); 1237 + spin_lock_bh(&dcb_lock); 1238 1238 app = nla_nest_start(skb, DCB_ATTR_CEE_APP_TABLE); 1239 1239 if (!app) 1240 1240 goto dcb_unlock; ··· 1271 1271 else 1272 1272 dcbx = -EOPNOTSUPP; 1273 1273 1274 - spin_unlock(&dcb_lock); 1274 + spin_unlock_bh(&dcb_lock); 1275 1275 1276 1276 /* features flags */ 1277 1277 if (ops->getfeatcfg) { ··· 1326 1326 return 0; 1327 1327 1328 1328 dcb_unlock: 1329 - spin_unlock(&dcb_lock); 1329 + spin_unlock_bh(&dcb_lock); 1330 1330 nla_put_failure: 1331 1331 return err; 1332 1332 } ··· 1762 1762 struct dcb_app_type *itr; 1763 1763 u8 prio = 0; 1764 1764 1765 - spin_lock(&dcb_lock); 1765 + spin_lock_bh(&dcb_lock); 1766 1766 if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) 1767 1767 prio = itr->app.priority; 1768 - spin_unlock(&dcb_lock); 1768 + spin_unlock_bh(&dcb_lock); 1769 1769 1770 1770 return prio; 1771 1771 } ··· 1789 1789 if (dev->dcbnl_ops->getdcbx) 1790 1790 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1791 1791 1792 - spin_lock(&dcb_lock); 1792 + spin_lock_bh(&dcb_lock); 1793 1793 /* Search for existing match and replace */ 1794 1794 if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) { 1795 1795 if (new->priority) ··· 1804 1804 if (new->priority) 1805 1805 err = dcb_app_add(new, dev->ifindex); 1806 1806 out: 1807 - spin_unlock(&dcb_lock); 1807 + spin_unlock_bh(&dcb_lock); 1808 1808 if (!err) 1809 1809 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1810 1810 return err; ··· 1823 1823 struct dcb_app_type *itr; 1824 1824 u8 prio = 0; 1825 1825 1826 - spin_lock(&dcb_lock); 1826 + spin_lock_bh(&dcb_lock); 1827 1827 if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) 1828 1828 prio |= 1 << itr->app.priority; 1829 - spin_unlock(&dcb_lock); 1829 + spin_unlock_bh(&dcb_lock); 1830 1830 1831 1831 return prio; 1832 1832 } ··· 1850 1850 if (dev->dcbnl_ops->getdcbx) 1851 1851 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1852 1852 1853 - spin_lock(&dcb_lock); 1853 + spin_lock_bh(&dcb_lock); 1854 1854 /* Search for existing match and abort if found */ 1855 1855 if (dcb_app_lookup(new, dev->ifindex, new->priority)) { 1856 1856 err = -EEXIST; ··· 1859 1859 1860 1860 err = dcb_app_add(new, dev->ifindex); 1861 1861 out: 1862 - spin_unlock(&dcb_lock); 1862 + spin_unlock_bh(&dcb_lock); 1863 1863 if (!err) 1864 1864 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1865 1865 return err; ··· 1882 1882 if (dev->dcbnl_ops->getdcbx) 1883 1883 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1884 1884 1885 - spin_lock(&dcb_lock); 1885 + spin_lock_bh(&dcb_lock); 1886 1886 /* Search for existing match and remove it. */ 1887 1887 if ((itr = dcb_app_lookup(del, dev->ifindex, del->priority))) { 1888 1888 list_del(&itr->list); ··· 1890 1890 err = 0; 1891 1891 } 1892 1892 1893 - spin_unlock(&dcb_lock); 1893 + spin_unlock_bh(&dcb_lock); 1894 1894 if (!err) 1895 1895 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1896 1896 return err; ··· 1902 1902 struct dcb_app_type *app; 1903 1903 struct dcb_app_type *tmp; 1904 1904 1905 - spin_lock(&dcb_lock); 1905 + spin_lock_bh(&dcb_lock); 1906 1906 list_for_each_entry_safe(app, tmp, &dcb_app_list, list) { 1907 1907 list_del(&app->list); 1908 1908 kfree(app); 1909 1909 } 1910 - spin_unlock(&dcb_lock); 1910 + spin_unlock_bh(&dcb_lock); 1911 1911 } 1912 1912 1913 1913 static int __init dcbnl_init(void)
+11
net/ipv4/af_inet.c
··· 1386 1386 return pp; 1387 1387 } 1388 1388 1389 + int inet_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len) 1390 + { 1391 + if (sk->sk_family == AF_INET) 1392 + return ip_recv_error(sk, msg, len, addr_len); 1393 + #if IS_ENABLED(CONFIG_IPV6) 1394 + if (sk->sk_family == AF_INET6) 1395 + return pingv6_ops.ipv6_recv_error(sk, msg, len, addr_len); 1396 + #endif 1397 + return -EINVAL; 1398 + } 1399 + 1389 1400 static int inet_gro_complete(struct sk_buff *skb, int nhoff) 1390 1401 { 1391 1402 __be16 newlen = htons(skb->len - nhoff);
+4
net/ipv4/fib_rules.c
··· 62 62 else 63 63 res->tclassid = 0; 64 64 #endif 65 + 66 + if (err == -ESRCH) 67 + err = -ENETUNREACH; 68 + 65 69 return err; 66 70 } 67 71 EXPORT_SYMBOL_GPL(__fib_lookup);
+5 -6
net/ipv4/igmp.c
··· 318 318 return scount; 319 319 } 320 320 321 - #define igmp_skb_size(skb) (*(unsigned int *)((skb)->cb)) 322 - 323 - static struct sk_buff *igmpv3_newpack(struct net_device *dev, int size) 321 + static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu) 324 322 { 325 323 struct sk_buff *skb; 326 324 struct rtable *rt; ··· 328 330 struct flowi4 fl4; 329 331 int hlen = LL_RESERVED_SPACE(dev); 330 332 int tlen = dev->needed_tailroom; 333 + unsigned int size = mtu; 331 334 332 335 while (1) { 333 336 skb = alloc_skb(size + hlen + tlen, ··· 340 341 return NULL; 341 342 } 342 343 skb->priority = TC_PRIO_CONTROL; 343 - igmp_skb_size(skb) = size; 344 344 345 345 rt = ip_route_output_ports(net, &fl4, NULL, IGMPV3_ALL_MCR, 0, 346 346 0, 0, ··· 352 354 skb_dst_set(skb, &rt->dst); 353 355 skb->dev = dev; 354 356 357 + skb->reserved_tailroom = skb_end_offset(skb) - 358 + min(mtu, skb_end_offset(skb)); 355 359 skb_reserve(skb, hlen); 356 360 357 361 skb_reset_network_header(skb); ··· 423 423 return skb; 424 424 } 425 425 426 - #define AVAILABLE(skb) ((skb) ? ((skb)->dev ? igmp_skb_size(skb) - (skb)->len : \ 427 - skb_tailroom(skb)) : 0) 426 + #define AVAILABLE(skb) ((skb) ? skb_availroom(skb) : 0) 428 427 429 428 static struct sk_buff *add_grec(struct sk_buff *skb, struct ip_mc_list *pmc, 430 429 int type, int gdeleted, int sdeleted)
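Instead of stashing the allocation size in skb->cb, the igmp.c hunk above (and the matching mcast.c hunk further down) caps the packet at the MTU by reserving all buffer space beyond min(mtu, end offset) as tailroom, so skb_availroom() directly reports how much payload still fits. A sketch of just that arithmetic, using plain integers rather than sk_buff fields:

    #include <stdio.h>

    static unsigned int min_u(unsigned int a, unsigned int b)
    {
        return a < b ? a : b;
    }

    static unsigned int availroom(unsigned int end, unsigned int len,
                                  unsigned int reserved_tail)
    {
        return end - len - reserved_tail; /* what skb_availroom() reports */
    }

    int main(void)
    {
        unsigned int end = 2048, mtu = 1500, len = 100;
        unsigned int reserved = end - min_u(mtu, end);

        /* 1400: room up to the MTU, even though the buffer is larger */
        printf("available: %u\n", availroom(end, len, reserved));
        return 0;
    }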
+1
net/ipv4/ip_vti.c
··· 528 528 .validate = vti_tunnel_validate, 529 529 .newlink = vti_newlink, 530 530 .changelink = vti_changelink, 531 + .dellink = ip_tunnel_dellink, 531 532 .get_size = vti_get_size, 532 533 .fill_info = vti_fill_info, 533 534 };
+1
net/ipv4/netfilter/nft_masq_ipv4.c
··· 24 24 struct nf_nat_range range; 25 25 unsigned int verdict; 26 26 27 + memset(&range, 0, sizeof(range)); 27 28 range.flags = priv->flags; 28 29 29 30 verdict = nf_nat_masquerade_ipv4(pkt->skb, pkt->ops->hooknum,
+4 -10
net/ipv4/ping.c
··· 217 217 &ipv6_hdr(skb)->daddr)) 218 218 continue; 219 219 #endif 220 + } else { 221 + continue; 220 222 } 221 223 222 224 if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif) ··· 855 853 if (flags & MSG_OOB) 856 854 goto out; 857 855 858 - if (flags & MSG_ERRQUEUE) { 859 - if (family == AF_INET) { 860 - return ip_recv_error(sk, msg, len, addr_len); 861 - #if IS_ENABLED(CONFIG_IPV6) 862 - } else if (family == AF_INET6) { 863 - return pingv6_ops.ipv6_recv_error(sk, msg, len, 864 - addr_len); 865 - #endif 866 - } 867 - } 856 + if (flags & MSG_ERRQUEUE) 857 + return inet_recv_error(sk, msg, len, addr_len); 868 858 869 859 skb = skb_recv_datagram(sk, flags, noblock, &err); 870 860 if (!skb)
+1 -1
net/ipv4/tcp.c
··· 1598 1598 u32 urg_hole = 0; 1599 1599 1600 1600 if (unlikely(flags & MSG_ERRQUEUE)) 1601 - return ip_recv_error(sk, msg, len, addr_len); 1601 + return inet_recv_error(sk, msg, len, addr_len); 1602 1602 1603 1603 if (sk_can_busy_loop(sk) && skb_queue_empty(&sk->sk_receive_queue) && 1604 1604 (sk->sk_state == TCP_ESTABLISHED))
+2 -2
net/ipv4/tcp_input.c
··· 5231 5231 if (len < (th->doff << 2) || tcp_checksum_complete_user(sk, skb)) 5232 5232 goto csum_error; 5233 5233 5234 - if (!th->ack && !th->rst) 5234 + if (!th->ack && !th->rst && !th->syn) 5235 5235 goto discard; 5236 5236 5237 5237 /* ··· 5650 5650 goto discard; 5651 5651 } 5652 5652 5653 - if (!th->ack && !th->rst) 5653 + if (!th->ack && !th->rst && !th->syn) 5654 5654 goto discard; 5655 5655 5656 5656 if (!tcp_validate_incoming(sk, skb, th, 0))
+4 -1
net/ipv4/tcp_ipv4.c
··· 598 598 if (th->rst) 599 599 return; 600 600 601 - if (skb_rtable(skb)->rt_type != RTN_LOCAL) 601 + /* If sk not NULL, it means we did a successful lookup and incoming 602 + * route had to be correct. prequeue might have dropped our dst. 603 + */ 604 + if (!sk && skb_rtable(skb)->rt_type != RTN_LOCAL) 602 605 return; 603 606 604 607 /* Swap the send and the receive. */
+2 -2
net/ipv6/ip6_gre.c
··· 502 502 503 503 skb->protocol = gre_proto; 504 504 /* WCCP version 1 and 2 protocol decoding. 505 - * - Change protocol to IP 505 + * - Change protocol to IPv6 506 506 * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header 507 507 */ 508 508 if (flags == 0 && gre_proto == htons(ETH_P_WCCP)) { 509 - skb->protocol = htons(ETH_P_IP); 509 + skb->protocol = htons(ETH_P_IPV6); 510 510 if ((*(h + offset) & 0xF0) != 0x40) 511 511 offset += 4; 512 512 }
+2 -1
net/ipv6/ip6_offload.c
··· 69 69 int nhoff; 70 70 71 71 if (unlikely(skb_shinfo(skb)->gso_type & 72 - ~(SKB_GSO_UDP | 72 + ~(SKB_GSO_TCPV4 | 73 + SKB_GSO_UDP | 73 74 SKB_GSO_DODGY | 74 75 SKB_GSO_TCP_ECN | 75 76 SKB_GSO_GRE |
+1 -3
net/ipv6/ip6_udp_tunnel.c
··· 79 79 uh->source = src_port; 80 80 81 81 uh->len = htons(skb->len); 82 - uh->check = 0; 83 82 84 83 memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); 85 84 IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED 86 85 | IPSKB_REROUTED); 87 86 skb_dst_set(skb, dst); 88 87 89 - udp6_set_csum(udp_get_no_check6_tx(sk), skb, &inet6_sk(sk)->saddr, 90 - &sk->sk_v6_daddr, skb->len); 88 + udp6_set_csum(udp_get_no_check6_tx(sk), skb, saddr, daddr, skb->len); 91 89 92 90 __skb_push(skb, sizeof(*ip6h)); 93 91 skb_reset_network_header(skb);
+11
net/ipv6/ip6_vti.c
··· 905 905 return vti6_tnl_create2(dev); 906 906 } 907 907 908 + static void vti6_dellink(struct net_device *dev, struct list_head *head) 909 + { 910 + struct net *net = dev_net(dev); 911 + struct vti6_net *ip6n = net_generic(net, vti6_net_id); 912 + 913 + if (dev != ip6n->fb_tnl_dev) 914 + unregister_netdevice_queue(dev, head); 915 + } 916 + 908 917 static int vti6_changelink(struct net_device *dev, struct nlattr *tb[], 909 918 struct nlattr *data[]) 910 919 { ··· 989 980 .setup = vti6_dev_setup, 990 981 .validate = vti6_validate, 991 982 .newlink = vti6_newlink, 983 + .dellink = vti6_dellink, 992 984 .changelink = vti6_changelink, 993 985 .get_size = vti6_get_size, 994 986 .fill_info = vti6_fill_info, ··· 1030 1020 if (!ip6n->fb_tnl_dev) 1031 1021 goto err_alloc_dev; 1032 1022 dev_net_set(ip6n->fb_tnl_dev, net); 1023 + ip6n->fb_tnl_dev->rtnl_link_ops = &vti6_link_ops; 1033 1024 1034 1025 err = vti6_fb_tnl_dev_init(ip6n->fb_tnl_dev); 1035 1026 if (err < 0)
+4
net/ipv6/ip6mr.c
··· 1439 1439 1440 1440 void ip6_mr_cleanup(void) 1441 1441 { 1442 + rtnl_unregister(RTNL_FAMILY_IP6MR, RTM_GETROUTE); 1443 + #ifdef CONFIG_IPV6_PIMSM_V2 1444 + inet6_del_protocol(&pim6_protocol, IPPROTO_PIM); 1445 + #endif 1442 1446 unregister_netdevice_notifier(&ip6_mr_notifier); 1443 1447 unregister_pernet_subsys(&ip6mr_net_ops); 1444 1448 kmem_cache_destroy(mrt_cachep);
+5 -4
net/ipv6/mcast.c
··· 1550 1550 hdr->daddr = *daddr; 1551 1551 } 1552 1552 1553 - static struct sk_buff *mld_newpack(struct inet6_dev *idev, int size) 1553 + static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu) 1554 1554 { 1555 1555 struct net_device *dev = idev->dev; 1556 1556 struct net *net = dev_net(dev); ··· 1561 1561 const struct in6_addr *saddr; 1562 1562 int hlen = LL_RESERVED_SPACE(dev); 1563 1563 int tlen = dev->needed_tailroom; 1564 + unsigned int size = mtu + hlen + tlen; 1564 1565 int err; 1565 1566 u8 ra[8] = { IPPROTO_ICMPV6, 0, 1566 1567 IPV6_TLV_ROUTERALERT, 2, 0, 0, 1567 1568 IPV6_TLV_PADN, 0 }; 1568 1569 1569 1570 /* we assume size > sizeof(ra) here */ 1570 - size += hlen + tlen; 1571 1571 /* limit our allocations to order-0 page */ 1572 1572 size = min_t(int, size, SKB_MAX_ORDER(0, 0)); 1573 1573 skb = sock_alloc_send_skb(sk, size, 1, &err); ··· 1576 1576 return NULL; 1577 1577 1578 1578 skb->priority = TC_PRIO_CONTROL; 1579 + skb->reserved_tailroom = skb_end_offset(skb) - 1580 + min(mtu, skb_end_offset(skb)); 1579 1581 skb_reserve(skb, hlen); 1580 1582 1581 1583 if (__ipv6_get_lladdr(idev, &addr_buf, IFA_F_TENTATIVE)) { ··· 1692 1690 return skb; 1693 1691 } 1694 1692 1695 - #define AVAILABLE(skb) ((skb) ? ((skb)->dev ? (skb)->dev->mtu - (skb)->len : \ 1696 - skb_tailroom(skb)) : 0) 1693 + #define AVAILABLE(skb) ((skb) ? skb_availroom(skb) : 0) 1697 1694 1698 1695 static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, 1699 1696 int type, int gdeleted, int sdeleted, int crsend)
+1
net/ipv6/netfilter/nft_masq_ipv6.c
··· 25 25 struct nf_nat_range range; 26 26 unsigned int verdict; 27 27 28 + memset(&range, 0, sizeof(range)); 28 29 range.flags = priv->flags; 29 30 30 31 verdict = nf_nat_masquerade_ipv6(pkt->skb, &range, pkt->out);
+4 -1
net/ipv6/tcp_ipv6.c
··· 903 903 if (th->rst) 904 904 return; 905 905 906 - if (!ipv6_unicast_destination(skb)) 906 + /* If sk not NULL, it means we did a successful lookup and incoming 907 + * route had to be correct. prequeue might have dropped our dst. 908 + */ 909 + if (!sk && !ipv6_unicast_destination(skb)) 907 910 return; 908 911 909 912 #ifdef CONFIG_TCP_MD5SIG
+5 -1
net/ipx/af_ipx.c
··· 1764 1764 struct ipxhdr *ipx = NULL; 1765 1765 struct sk_buff *skb; 1766 1766 int copied, rc; 1767 + bool locked = true; 1767 1768 1768 1769 lock_sock(sk); 1769 1770 /* put the autobinding in */ ··· 1791 1790 if (sock_flag(sk, SOCK_ZAPPED)) 1792 1791 goto out; 1793 1792 1793 + release_sock(sk); 1794 + locked = false; 1794 1795 skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, 1795 1796 flags & MSG_DONTWAIT, &rc); 1796 1797 if (!skb) { ··· 1829 1826 out_free: 1830 1827 skb_free_datagram(sk, skb); 1831 1828 out: 1832 - release_sock(sk); 1829 + if (locked) 1830 + release_sock(sk); 1833 1831 return rc; 1834 1832 } 1835 1833
+3
net/mac80211/aes_ccm.c
··· 53 53 __aligned(__alignof__(struct aead_request)); 54 54 struct aead_request *aead_req = (void *) aead_req_data; 55 55 56 + if (data_len == 0) 57 + return -EINVAL; 58 + 56 59 memset(aead_req, 0, sizeof(aead_req_data)); 57 60 58 61 sg_init_one(&pt, data, data_len);
+6 -9
net/mac80211/rc80211_minstrel_ht.c
··· 252 252 cur_thr = mi->groups[cur_group].rates[cur_idx].cur_tp; 253 253 cur_prob = mi->groups[cur_group].rates[cur_idx].probability; 254 254 255 - tmp_group = tp_list[j - 1] / MCS_GROUP_RATES; 256 - tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES; 257 - tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp; 258 - tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability; 259 - 260 - while (j > 0 && (cur_thr > tmp_thr || 261 - (cur_thr == tmp_thr && cur_prob > tmp_prob))) { 262 - j--; 255 + do { 263 256 tmp_group = tp_list[j - 1] / MCS_GROUP_RATES; 264 257 tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES; 265 258 tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp; 266 259 tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability; 267 - } 260 + if (cur_thr < tmp_thr || 261 + (cur_thr == tmp_thr && cur_prob <= tmp_prob)) 262 + break; 263 + j--; 264 + } while (j > 0); 268 265 269 266 if (j < MAX_THR_RATES - 1) { 270 267 memmove(&tp_list[j + 1], &tp_list[j], (sizeof(*tp_list) *
+6
net/netfilter/ipset/ip_set_core.c
··· 1863 1863 if (*op < IP_SET_OP_VERSION) { 1864 1864 /* Check the version at the beginning of operations */ 1865 1865 struct ip_set_req_version *req_version = data; 1866 + 1867 + if (*len < sizeof(struct ip_set_req_version)) { 1868 + ret = -EINVAL; 1869 + goto done; 1870 + } 1871 + 1866 1872 if (req_version->version != IPSET_PROTOCOL) { 1867 1873 ret = -EPROTO; 1868 1874 goto done;
+2
net/netfilter/ipvs/ip_vs_xmit.c
··· 846 846 new_skb = skb_realloc_headroom(skb, max_headroom); 847 847 if (!new_skb) 848 848 goto error; 849 + if (skb->sk) 850 + skb_set_owner_w(new_skb, skb->sk); 849 851 consume_skb(skb); 850 852 skb = new_skb; 851 853 }
+8 -16
net/netfilter/nf_tables_api.c
··· 3484 3484 } 3485 3485 } 3486 3486 3487 - /* Schedule objects for release via rcu to make sure no packets are accesing 3488 - * removed rules. 3489 - */ 3490 - static void nf_tables_commit_release_rcu(struct rcu_head *rt) 3487 + static void nf_tables_commit_release(struct nft_trans *trans) 3491 3488 { 3492 - struct nft_trans *trans = container_of(rt, struct nft_trans, rcu_head); 3493 - 3494 3489 switch (trans->msg_type) { 3495 3490 case NFT_MSG_DELTABLE: 3496 3491 nf_tables_table_destroy(&trans->ctx); ··· 3607 3612 } 3608 3613 } 3609 3614 3615 + synchronize_rcu(); 3616 + 3610 3617 list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) { 3611 3618 list_del(&trans->list); 3612 - trans->ctx.nla = NULL; 3613 - call_rcu(&trans->rcu_head, nf_tables_commit_release_rcu); 3619 + nf_tables_commit_release(trans); 3614 3620 } 3615 3621 3616 3622 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); ··· 3619 3623 return 0; 3620 3624 } 3621 3625 3622 - /* Schedule objects for release via rcu to make sure no packets are accesing 3623 - * aborted rules. 3624 - */ 3625 - static void nf_tables_abort_release_rcu(struct rcu_head *rt) 3626 + static void nf_tables_abort_release(struct nft_trans *trans) 3626 3627 { 3627 - struct nft_trans *trans = container_of(rt, struct nft_trans, rcu_head); 3628 - 3629 3628 switch (trans->msg_type) { 3630 3629 case NFT_MSG_NEWTABLE: 3631 3630 nf_tables_table_destroy(&trans->ctx); ··· 3716 3725 } 3717 3726 } 3718 3727 3728 + synchronize_rcu(); 3729 + 3719 3730 list_for_each_entry_safe_reverse(trans, next, 3720 3731 &net->nft.commit_list, list) { 3721 3732 list_del(&trans->list); 3722 - trans->ctx.nla = NULL; 3723 - call_rcu(&trans->rcu_head, nf_tables_abort_release_rcu); 3733 + nf_tables_abort_release(trans); 3724 3734 } 3725 3735 3726 3736 return 0;
+11 -1
net/netfilter/nfnetlink.c
··· 47 47 [NFNLGRP_CONNTRACK_EXP_NEW] = NFNL_SUBSYS_CTNETLINK_EXP, 48 48 [NFNLGRP_CONNTRACK_EXP_UPDATE] = NFNL_SUBSYS_CTNETLINK_EXP, 49 49 [NFNLGRP_CONNTRACK_EXP_DESTROY] = NFNL_SUBSYS_CTNETLINK_EXP, 50 + [NFNLGRP_NFTABLES] = NFNL_SUBSYS_NFTABLES, 51 + [NFNLGRP_ACCT_QUOTA] = NFNL_SUBSYS_ACCT, 50 52 }; 51 53 52 54 void nfnl_lock(__u8 subsys_id) ··· 466 464 static int nfnetlink_bind(int group) 467 465 { 468 466 const struct nfnetlink_subsystem *ss; 469 - int type = nfnl_group2type[group]; 467 + int type; 468 + 469 + if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX) 470 + return -EINVAL; 471 + 472 + type = nfnl_group2type[group]; 470 473 471 474 rcu_read_lock(); 472 475 ss = nfnetlink_get_subsys(type); ··· 520 513 static int __init nfnetlink_init(void) 521 514 { 522 515 int i; 516 + 517 + for (i = NFNLGRP_NONE + 1; i <= NFNLGRP_MAX; i++) 518 + BUG_ON(nfnl_group2type[i] == NFNL_SUBSYS_NONE); 523 519 524 520 for (i=0; i<NFNL_SUBSYS_COUNT; i++) 525 521 mutex_init(&table[i].mutex);
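The nfnetlink change above adds two defenses around the same table: the bind hook rejects out-of-range group numbers before using them as an index, and init BUG()s if any slot in the mapping was left unset. A sketch of bounds-checked table lookup (GRP_* and group2type are illustrative, not the kernel arrays):

    #include <stdio.h>

    enum { GRP_NONE, GRP_CONNTRACK, GRP_NFTABLES, GRP_MAX = GRP_NFTABLES };

    static const int group2type[GRP_MAX + 1] = {
        [GRP_CONNTRACK] = 1,
        [GRP_NFTABLES]  = 2,
    };

    static int bind_group(int group)
    {
        if (group <= GRP_NONE || group > GRP_MAX)
            return -22;         /* -EINVAL: never index the table */
        return group2type[group];
    }

    int main(void)
    {
        printf("%d %d %d\n", bind_group(0), bind_group(GRP_NFTABLES),
               bind_group(99));     /* -22 2 -22 */
        return 0;
    }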
+6 -34
net/netfilter/nft_compat.c
··· 21 21 #include <linux/netfilter_ipv6/ip6_tables.h> 22 22 #include <net/netfilter/nf_tables.h> 23 23 24 - static const struct { 25 - const char *name; 26 - u8 type; 27 - } table_to_chaintype[] = { 28 - { "filter", NFT_CHAIN_T_DEFAULT }, 29 - { "raw", NFT_CHAIN_T_DEFAULT }, 30 - { "security", NFT_CHAIN_T_DEFAULT }, 31 - { "mangle", NFT_CHAIN_T_ROUTE }, 32 - { "nat", NFT_CHAIN_T_NAT }, 33 - { }, 34 - }; 35 - 36 - static int nft_compat_table_to_chaintype(const char *table) 37 - { 38 - int i; 39 - 40 - for (i = 0; table_to_chaintype[i].name != NULL; i++) { 41 - if (strcmp(table_to_chaintype[i].name, table) == 0) 42 - return table_to_chaintype[i].type; 43 - } 44 - 45 - return -1; 46 - } 47 - 48 24 static int nft_compat_chain_validate_dependency(const char *tablename, 49 25 const struct nft_chain *chain) 50 26 { 51 - enum nft_chain_type type; 52 27 const struct nft_base_chain *basechain; 53 28 54 29 if (!tablename || !(chain->flags & NFT_BASE_CHAIN)) 55 30 return 0; 56 31 57 - type = nft_compat_table_to_chaintype(tablename); 58 - if (type < 0) 59 - return -EINVAL; 60 - 61 32 basechain = nft_base_chain(chain); 62 - if (basechain->type->type != type) 33 + if (strcmp(tablename, "nat") == 0 && 34 + basechain->type->type != NFT_CHAIN_T_NAT) 63 35 return -EINVAL; 64 36 65 37 return 0; ··· 89 117 struct xt_target *target, void *info, 90 118 union nft_entry *entry, u8 proto, bool inv) 91 119 { 92 - par->net = &init_net; 120 + par->net = ctx->net; 93 121 par->table = ctx->table->name; 94 122 switch (ctx->afi->family) { 95 123 case AF_INET: ··· 296 324 struct xt_match *match, void *info, 297 325 union nft_entry *entry, u8 proto, bool inv) 298 326 { 299 - par->net = &init_net; 327 + par->net = ctx->net; 300 328 par->table = ctx->table->name; 301 329 switch (ctx->afi->family) { 302 330 case AF_INET: ··· 346 374 union nft_entry e = {}; 347 375 int ret; 348 376 349 - ret = nft_compat_chain_validate_dependency(match->name, ctx->chain); 377 + ret = nft_compat_chain_validate_dependency(match->table, ctx->chain); 350 378 if (ret < 0) 351 379 goto err; 352 380 ··· 420 448 if (!(hook_mask & match->hooks)) 421 449 return -EINVAL; 422 450 423 - ret = nft_compat_chain_validate_dependency(match->name, 451 + ret = nft_compat_chain_validate_dependency(match->table, 424 452 ctx->chain); 425 453 if (ret < 0) 426 454 return ret;
+6 -4
net/openvswitch/actions.c
··· 246 246 { 247 247 int transport_len = skb->len - skb_transport_offset(skb); 248 248 249 - if (l4_proto == IPPROTO_TCP) { 249 + if (l4_proto == NEXTHDR_TCP) { 250 250 if (likely(transport_len >= sizeof(struct tcphdr))) 251 251 inet_proto_csum_replace16(&tcp_hdr(skb)->check, skb, 252 252 addr, new_addr, 1); 253 - } else if (l4_proto == IPPROTO_UDP) { 253 + } else if (l4_proto == NEXTHDR_UDP) { 254 254 if (likely(transport_len >= sizeof(struct udphdr))) { 255 255 struct udphdr *uh = udp_hdr(skb); 256 256 ··· 261 261 uh->check = CSUM_MANGLED_0; 262 262 } 263 263 } 264 + } else if (l4_proto == NEXTHDR_ICMP) { 265 + if (likely(transport_len >= sizeof(struct icmp6hdr))) 266 + inet_proto_csum_replace16(&icmp6_hdr(skb)->icmp6_cksum, 267 + skb, addr, new_addr, 1); 264 268 } 265 269 } 266 270 ··· 726 722 727 723 case OVS_ACTION_ATTR_SAMPLE: 728 724 err = sample(dp, skb, key, a); 729 - if (unlikely(err)) /* skb already freed. */ 730 - return err; 731 725 break; 732 726 } 733 727
+7 -7
net/openvswitch/datapath.c
··· 1265 1265 return msgsize; 1266 1266 } 1267 1267 1268 - /* Called with ovs_mutex or RCU read lock. */ 1268 + /* Called with ovs_mutex. */ 1269 1269 static int ovs_dp_cmd_fill_info(struct datapath *dp, struct sk_buff *skb, 1270 1270 u32 portid, u32 seq, u32 flags, u8 cmd) 1271 1271 { ··· 1555 1555 if (!reply) 1556 1556 return -ENOMEM; 1557 1557 1558 - rcu_read_lock(); 1558 + ovs_lock(); 1559 1559 dp = lookup_datapath(sock_net(skb->sk), info->userhdr, info->attrs); 1560 1560 if (IS_ERR(dp)) { 1561 1561 err = PTR_ERR(dp); ··· 1564 1564 err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid, 1565 1565 info->snd_seq, 0, OVS_DP_CMD_NEW); 1566 1566 BUG_ON(err < 0); 1567 - rcu_read_unlock(); 1567 + ovs_unlock(); 1568 1568 1569 1569 return genlmsg_reply(reply, info); 1570 1570 1571 1571 err_unlock_free: 1572 - rcu_read_unlock(); 1572 + ovs_unlock(); 1573 1573 kfree_skb(reply); 1574 1574 return err; 1575 1575 } ··· 1581 1581 int skip = cb->args[0]; 1582 1582 int i = 0; 1583 1583 1584 - rcu_read_lock(); 1585 - list_for_each_entry_rcu(dp, &ovs_net->dps, list_node) { 1584 + ovs_lock(); 1585 + list_for_each_entry(dp, &ovs_net->dps, list_node) { 1586 1586 if (i >= skip && 1587 1587 ovs_dp_cmd_fill_info(dp, skb, NETLINK_CB(cb->skb).portid, 1588 1588 cb->nlh->nlmsg_seq, NLM_F_MULTI, ··· 1590 1590 break; 1591 1591 i++; 1592 1592 } 1593 - rcu_read_unlock(); 1593 + ovs_unlock(); 1594 1594 1595 1595 cb->args[0] = i; 1596 1596
+8 -1
net/openvswitch/flow_netlink.c
··· 145 145 if (match->key->eth.type == htons(ETH_P_ARP) 146 146 || match->key->eth.type == htons(ETH_P_RARP)) { 147 147 key_expected |= 1 << OVS_KEY_ATTR_ARP; 148 - if (match->mask && (match->mask->key.eth.type == htons(0xffff))) 148 + if (match->mask && (match->mask->key.tp.src == htons(0xff))) 149 149 mask_allowed |= 1 << OVS_KEY_ATTR_ARP; 150 150 } 151 151 ··· 689 689 ipv6_key->ipv6_frag, OVS_FRAG_TYPE_MAX); 690 690 return -EINVAL; 691 691 } 692 + 693 + if (!is_mask && ipv6_key->ipv6_label & htonl(0xFFF00000)) { 694 + OVS_NLERR("IPv6 flow label %x is out of range (max=%x).\n", 695 + ntohl(ipv6_key->ipv6_label), (1 << 20) - 1); 696 + return -EINVAL; 697 + } 698 + 692 699 SW_FLOW_KEY_PUT(match, ipv6.label, 693 700 ipv6_key->ipv6_label, is_mask); 694 701 SW_FLOW_KEY_PUT(match, ip.proto,
+1 -1
net/packet/af_packet.c
··· 378 378 __unregister_prot_hook(sk, sync); 379 379 } 380 380 381 - static inline __pure struct page *pgv_to_page(void *addr) 381 + static inline struct page * __pure pgv_to_page(void *addr) 382 382 { 383 383 if (is_vmalloc_addr(addr)) 384 384 return vmalloc_to_page(addr);
+16 -11
net/sunrpc/svcsock.c
··· 1019 1019 xid = *p++; 1020 1020 calldir = *p; 1021 1021 1022 - if (bc_xprt) 1023 - req = xprt_lookup_rqst(bc_xprt, xid); 1024 - 1025 - if (!req) { 1026 - printk(KERN_NOTICE 1027 - "%s: Got unrecognized reply: " 1028 - "calldir 0x%x xpt_bc_xprt %p xid %08x\n", 1029 - __func__, ntohl(calldir), 1030 - bc_xprt, ntohl(xid)); 1022 + if (!bc_xprt) 1031 1023 return -EAGAIN; 1032 - } 1024 + spin_lock_bh(&bc_xprt->transport_lock); 1025 + req = xprt_lookup_rqst(bc_xprt, xid); 1026 + if (!req) 1027 + goto unlock_notfound; 1033 1028 1034 1029 memcpy(&req->rq_private_buf, &req->rq_rcv_buf, sizeof(struct xdr_buf)); 1035 1030 /* ··· 1035 1040 dst = &req->rq_private_buf.head[0]; 1036 1041 src = &rqstp->rq_arg.head[0]; 1037 1042 if (dst->iov_len < src->iov_len) 1038 - return -EAGAIN; /* whatever; just giving up. */ 1043 + goto unlock_eagain; /* whatever; just giving up. */ 1039 1044 memcpy(dst->iov_base, src->iov_base, src->iov_len); 1040 1045 xprt_complete_rqst(req->rq_task, rqstp->rq_arg.len); 1041 1046 rqstp->rq_arg.len = 0; 1047 + spin_unlock_bh(&bc_xprt->transport_lock); 1042 1048 return 0; 1049 + unlock_notfound: 1050 + printk(KERN_NOTICE 1051 + "%s: Got unrecognized reply: " 1052 + "calldir 0x%x xpt_bc_xprt %p xid %08x\n", 1053 + __func__, ntohl(calldir), 1054 + bc_xprt, ntohl(xid)); 1055 + unlock_eagain: 1056 + spin_unlock_bh(&bc_xprt->transport_lock); 1057 + return -EAGAIN; 1043 1058 } 1044 1059 1045 1060 static int copy_pages_to_kvecs(struct kvec *vec, struct page **pages, int len)
+19 -5
sound/pci/hda/hda_intel.c
··· 302 302 303 303 /* quirks for ATI/AMD HDMI */ 304 304 #define AZX_DCAPS_PRESET_ATI_HDMI \ 305 - (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_SYNC_WRITE | AZX_DCAPS_POSFIX_LPIB) 305 + (AZX_DCAPS_NO_TCSEL | AZX_DCAPS_SYNC_WRITE | AZX_DCAPS_POSFIX_LPIB|\ 306 + AZX_DCAPS_NO_MSI64) 306 307 307 308 /* quirks for ATI HDMI with snoop off */ 308 309 #define AZX_DCAPS_PRESET_ATI_HDMI_NS \ ··· 1494 1493 struct snd_card *card = chip->card; 1495 1494 int err; 1496 1495 unsigned short gcap; 1496 + unsigned int dma_bits = 64; 1497 1497 1498 1498 #if BITS_PER_LONG != 64 1499 1499 /* Fix up base address on ULI M5461 */ ··· 1518 1516 return -ENXIO; 1519 1517 } 1520 1518 1521 - if (chip->msi) 1519 + if (chip->msi) { 1520 + if (chip->driver_caps & AZX_DCAPS_NO_MSI64) { 1521 + dev_dbg(card->dev, "Disabling 64bit MSI\n"); 1522 + pci->no_64bit_msi = true; 1523 + } 1522 1524 if (pci_enable_msi(pci) < 0) 1523 1525 chip->msi = 0; 1526 + } 1524 1527 1525 1528 if (azx_acquire_irq(chip, 0) < 0) 1526 1529 return -EBUSY; ··· 1536 1529 gcap = azx_readw(chip, GCAP); 1537 1530 dev_dbg(card->dev, "chipset global capabilities = 0x%x\n", gcap); 1538 1531 1532 + /* AMD devices support 40 or 48bit DMA, take the safe one */ 1533 + if (chip->pci->vendor == PCI_VENDOR_ID_AMD) 1534 + dma_bits = 40; 1535 + 1539 1536 /* disable SB600 64bit support for safety */ 1540 1537 if (chip->pci->vendor == PCI_VENDOR_ID_ATI) { 1541 1538 struct pci_dev *p_smbus; 1539 + dma_bits = 40; 1542 1540 p_smbus = pci_get_device(PCI_VENDOR_ID_ATI, 1543 1541 PCI_DEVICE_ID_ATI_SBX00_SMBUS, 1544 1542 NULL); ··· 1573 1561 } 1574 1562 1575 1563 /* allow 64bit DMA address if supported by H/W */ 1576 - if ((gcap & AZX_GCAP_64OK) && !pci_set_dma_mask(pci, DMA_BIT_MASK(64))) 1577 - pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(64)); 1578 - else { 1564 + if (!(gcap & AZX_GCAP_64OK)) 1565 + dma_bits = 32; 1566 + if (!pci_set_dma_mask(pci, DMA_BIT_MASK(dma_bits))) { 1567 + pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(dma_bits)); 1568 + } else { 1579 1569 pci_set_dma_mask(pci, DMA_BIT_MASK(32)); 1580 1570 pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(32)); 1581 1571 }
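The hda_intel rework above picks a DMA width up front (64-bit by default, 40-bit for AMD/ATI parts, 32-bit if GCAP lacks 64OK) and still falls back to 32-bit if setting the chosen mask fails. A sketch of that narrow-then-fallback selection, where try_set_dma_mask() is a stand-in for pci_set_dma_mask():

    #include <stdbool.h>
    #include <stdio.h>

    static bool try_set_dma_mask(unsigned int bits)
    {
        return bits <= 40;      /* pretend the platform tops out at 40 */
    }

    static unsigned int pick_dma_bits(bool hw_64ok, bool amd_quirk)
    {
        unsigned int bits = 64;

        if (amd_quirk)
            bits = 40;          /* AMD does 40/48-bit; take the safe one */
        if (!hw_64ok)
            bits = 32;          /* controller can't do >32-bit at all */
        if (!try_set_dma_mask(bits))
            bits = 32;          /* last-resort fallback, always works */
        return bits;
    }

    int main(void)
    {
        printf("%u %u %u\n", pick_dma_bits(true, false), /* 32: 64 rejected */
               pick_dma_bits(true, true),                /* 40 */
               pick_dma_bits(false, false));             /* 32 */
        return 0;
    }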
+1
sound/pci/hda/hda_priv.h
··· 170 170 #define AZX_DCAPS_PM_RUNTIME (1 << 26) /* runtime PM support */ 171 171 #define AZX_DCAPS_I915_POWERWELL (1 << 27) /* HSW i915 powerwell support */ 172 172 #define AZX_DCAPS_CORBRP_SELF_CLEAR (1 << 28) /* CORBRP clears itself after reset */ 173 + #define AZX_DCAPS_NO_MSI64 (1 << 29) /* Stick to 32-bit MSIs */ 173 174 174 175 enum { 175 176 AZX_SNOOP_TYPE_NONE ,
+4 -4
virt/kvm/arm/vgic.c
··· 1933 1933 1934 1934 int kvm_vgic_create(struct kvm *kvm) 1935 1935 { 1936 - int i, vcpu_lock_idx = -1, ret = 0; 1936 + int i, vcpu_lock_idx = -1, ret; 1937 1937 struct kvm_vcpu *vcpu; 1938 1938 1939 1939 mutex_lock(&kvm->lock); ··· 1948 1948 * vcpu->mutex. By grabbing the vcpu->mutex of all VCPUs we ensure 1949 1949 * that no other VCPUs are run while we create the vgic. 1950 1950 */ 1951 + ret = -EBUSY; 1951 1952 kvm_for_each_vcpu(i, vcpu, kvm) { 1952 1953 if (!mutex_trylock(&vcpu->mutex)) 1953 1954 goto out_unlock; ··· 1956 1955 } 1957 1956 1958 1957 kvm_for_each_vcpu(i, vcpu, kvm) { 1959 - if (vcpu->arch.has_run_once) { 1960 - ret = -EBUSY; 1958 + if (vcpu->arch.has_run_once) 1961 1959 goto out_unlock; 1962 - } 1963 1960 } 1961 + ret = 0; 1964 1962 1965 1963 spin_lock_init(&kvm->arch.vgic.lock); 1966 1964 kvm->arch.vgic.in_kernel = true;
+8 -8
virt/kvm/kvm_main.c
··· 107 107 108 108 static bool largepages_enabled = true; 109 109 110 - bool kvm_is_mmio_pfn(pfn_t pfn) 110 + bool kvm_is_reserved_pfn(pfn_t pfn) 111 111 { 112 112 if (pfn_valid(pfn)) 113 - return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)); 113 + return PageReserved(pfn_to_page(pfn)); 114 114 115 115 return true; 116 116 } ··· 1321 1321 else if ((vma->vm_flags & VM_PFNMAP)) { 1322 1322 pfn = ((addr - vma->vm_start) >> PAGE_SHIFT) + 1323 1323 vma->vm_pgoff; 1324 - BUG_ON(!kvm_is_mmio_pfn(pfn)); 1324 + BUG_ON(!kvm_is_reserved_pfn(pfn)); 1325 1325 } else { 1326 1326 if (async && vma_is_valid(vma, write_fault)) 1327 1327 *async = true; ··· 1427 1427 if (is_error_noslot_pfn(pfn)) 1428 1428 return KVM_ERR_PTR_BAD_PAGE; 1429 1429 1430 - if (kvm_is_mmio_pfn(pfn)) { 1430 + if (kvm_is_reserved_pfn(pfn)) { 1431 1431 WARN_ON(1); 1432 1432 return KVM_ERR_PTR_BAD_PAGE; 1433 1433 } ··· 1456 1456 1457 1457 void kvm_release_pfn_clean(pfn_t pfn) 1458 1458 { 1459 - if (!is_error_noslot_pfn(pfn) && !kvm_is_mmio_pfn(pfn)) 1459 + if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn)) 1460 1460 put_page(pfn_to_page(pfn)); 1461 1461 } 1462 1462 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean); ··· 1477 1477 1478 1478 void kvm_set_pfn_dirty(pfn_t pfn) 1479 1479 { 1480 - if (!kvm_is_mmio_pfn(pfn)) { 1480 + if (!kvm_is_reserved_pfn(pfn)) { 1481 1481 struct page *page = pfn_to_page(pfn); 1482 1482 if (!PageReserved(page)) 1483 1483 SetPageDirty(page); ··· 1487 1487 1488 1488 void kvm_set_pfn_accessed(pfn_t pfn) 1489 1489 { 1490 - if (!kvm_is_mmio_pfn(pfn)) 1490 + if (!kvm_is_reserved_pfn(pfn)) 1491 1491 mark_page_accessed(pfn_to_page(pfn)); 1492 1492 } 1493 1493 EXPORT_SYMBOL_GPL(kvm_set_pfn_accessed); 1494 1494 1495 1495 void kvm_get_pfn(pfn_t pfn) 1496 1496 { 1497 - if (!kvm_is_mmio_pfn(pfn)) 1497 + if (!kvm_is_reserved_pfn(pfn)) 1498 1498 get_page(pfn_to_page(pfn)); 1499 1499 } 1500 1500 EXPORT_SYMBOL_GPL(kvm_get_pfn);