Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'nfs-for-3.19-1' into nfsd for-3.19 branch

Mainly what I need is 860a0d9e511f "sunrpc: add some tracepoints in
svc_rqst handling functions", which subsequent server rpc patches from
jlayton depend on. I'm merging this later tag on the assumption that it's
more likely to be a tested and stable point.

+2564 -1029
-4
Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
··· 30 30 Example: 31 31 interrupts-extended = <&intc1 5 1>, <&intc2 1 0>; 32 32 33 - A device node may contain either "interrupts" or "interrupts-extended", but not 34 - both. If both properties are present, then the operating system should log an 35 - error and use only the data in "interrupts". 36 - 37 33 2) Interrupt controller nodes 38 34 ----------------------------- 39 35
+11
Documentation/devicetree/bindings/pci/pci.txt
··· 7 7 8 8 Open Firmware Recommended Practice: Interrupt Mapping 9 9 http://www.openfirmware.org/1275/practice/imap/imap0_9d.pdf 10 + 11 + Additionally to the properties specified in the above standards a host bridge 12 + driver implementation may support the following properties: 13 + 14 + - linux,pci-domain: 15 + If present this property assigns a fixed PCI domain number to a host bridge, 16 + otherwise an unstable (across boots) unique number will be assigned. 17 + It is required to either not set this property at all or set it for all 18 + host bridges in the system, otherwise potentially conflicting domain numbers 19 + may be assigned to root buses behind different host bridges. The domain 20 + number for each host bridge in the system must be unique.
+1 -1
Documentation/devicetree/bindings/pinctrl/img,tz1090-pdc-pinctrl.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - TZ1090-PDC's pin configuration nodes act as a container for an abitrary number 12 + TZ1090-PDC's pin configuration nodes act as a container for an arbitrary number 13 13 of subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/img,tz1090-pinctrl.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - TZ1090's pin configuration nodes act as a container for an abitrary number of 12 + TZ1090's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/lantiq,falcon-pinumx.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Lantiq's pin configuration nodes act as a container for an abitrary number of 12 + Lantiq's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those group(s), and two pin configuration parameters:
+1 -1
Documentation/devicetree/bindings/pinctrl/lantiq,xway-pinumx.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Lantiq's pin configuration nodes act as a container for an abitrary number of 12 + Lantiq's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those group(s), and two pin configuration parameters:
+1 -1
Documentation/devicetree/bindings/pinctrl/nvidia,tegra20-pinmux.txt
··· 9 9 common pinctrl bindings used by client devices, including the meaning of the 10 10 phrase "pin configuration node". 11 11 12 - Tegra's pin configuration nodes act as a container for an abitrary number of 12 + Tegra's pin configuration nodes act as a container for an arbitrary number of 13 13 subnodes. Each of these subnodes represents some desired configuration for a 14 14 pin, a group, or a list of pins or groups. This configuration can include the 15 15 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/pinctrl-sirf.txt
··· 13 13 Please refer to pinctrl-bindings.txt in this directory for details of the common 14 14 pinctrl bindings used by client devices. 15 15 16 - SiRFprimaII's pinmux nodes act as a container for an abitrary number of subnodes. 16 + SiRFprimaII's pinmux nodes act as a container for an arbitrary number of subnodes. 17 17 Each of these subnodes represents some desired configuration for a group of pins. 18 18 19 19 Required subnode-properties:
+1 -1
Documentation/devicetree/bindings/pinctrl/pinctrl_spear.txt
··· 32 32 Please refer to pinctrl-bindings.txt in this directory for details of the common 33 33 pinctrl bindings used by client devices. 34 34 35 - SPEAr's pinmux nodes act as a container for an abitrary number of subnodes. Each 35 + SPEAr's pinmux nodes act as a container for an arbitrary number of subnodes. Each 36 36 of these subnodes represents muxing for a pin, a group, or a list of pins or 37 37 groups. 38 38
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,apq8064-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,apq8084-pinctrl.txt
··· 47 47 common pinctrl bindings used by client devices, including the meaning of the 48 48 phrase "pin configuration node". 49 49 50 - The pin configuration nodes act as a container for an abitrary number of 50 + The pin configuration nodes act as a container for an arbitrary number of 51 51 subnodes. Each of these subnodes represents some desired configuration for a 52 52 pin, a group, or a list of pins or groups. This configuration can include the 53 53 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,ipq8064-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,msm8960-pinctrl.txt
··· 47 47 common pinctrl bindings used by client devices, including the meaning of the 48 48 phrase "pin configuration node". 49 49 50 - The pin configuration nodes act as a container for an abitrary number of 50 + The pin configuration nodes act as a container for an arbitrary number of 51 51 subnodes. Each of these subnodes represents some desired configuration for a 52 52 pin, a group, or a list of pins or groups. This configuration can include the 53 53 mux function to select on those pin(s)/group(s), and various pin configuration
+1 -1
Documentation/devicetree/bindings/pinctrl/qcom,msm8974-pinctrl.txt
··· 18 18 common pinctrl bindings used by client devices, including the meaning of the 19 19 phrase "pin configuration node". 20 20 21 - Qualcomm's pin configuration nodes act as a container for an abitrary number of 21 + Qualcomm's pin configuration nodes act as a container for an arbitrary number of 22 22 subnodes. Each of these subnodes represents some desired configuration for a 23 23 pin, a group, or a list of pins or groups. This configuration can include the 24 24 mux function to select on those pin(s)/group(s), and various pin configuration
+4 -1
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 34 34 chrp Common Hardware Reference Platform 35 35 chunghwa Chunghwa Picture Tubes Ltd. 36 36 cirrus Cirrus Logic, Inc. 37 + cnm Chips&Media, Inc. 37 38 cortina Cortina Systems, Inc. 38 39 crystalfontz Crystalfontz America, Inc. 39 40 dallas Maxim Integrated Products (formerly Dallas Semiconductor) ··· 93 92 mediatek MediaTek Inc. 94 93 micrel Micrel Inc. 95 94 microchip Microchip Technology Inc. 95 + micron Micron Technology Inc. 96 96 mitsubishi Mitsubishi Electric Corporation 97 97 mosaixtech Mosaix Technologies, Inc. 98 98 moxa Moxa ··· 129 127 ricoh Ricoh Co. Ltd. 130 128 rockchip Fuzhou Rockchip Electronics Co., Ltd 131 129 samsung Samsung Semiconductor 130 + sandisk Sandisk Corporation 132 131 sbs Smart Battery System 133 132 schindler Schindler 134 133 seagate Seagate Technology PLC ··· 141 138 sirf SiRF Technology, Inc. 142 139 sitronix Sitronix Technology Corporation 143 140 smsc Standard Microsystems Corporation 144 - snps Synopsys, Inc. 141 + snps Synopsys, Inc. 145 142 solidrun SolidRun 146 143 sony Sony Corporation 147 144 spansion Spansion Inc.
+1 -1
Documentation/filesystems/overlayfs.txt
··· 64 64 At mount time, the two directories given as mount options "lowerdir" and 65 65 "upperdir" are combined into a merged directory: 66 66 67 - mount -t overlayfs overlayfs -olowerdir=/lower,upperdir=/upper,\ 67 + mount -t overlay overlay -olowerdir=/lower,upperdir=/upper,\ 68 68 workdir=/work /merged 69 69 70 70 The "workdir" needs to be an empty directory on the same filesystem
+4 -3
MAINTAINERS
··· 6888 6888 F: include/scsi/osd_* 6889 6889 F: fs/exofs/ 6890 6890 6891 - OVERLAYFS FILESYSTEM 6891 + OVERLAY FILESYSTEM 6892 6892 M: Miklos Szeredi <miklos@szeredi.hu> 6893 - L: linux-fsdevel@vger.kernel.org 6893 + L: linux-unionfs@vger.kernel.org 6894 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git 6894 6895 S: Supported 6895 - F: fs/overlayfs/* 6896 + F: fs/overlayfs/ 6896 6897 F: Documentation/filesystems/overlayfs.txt 6897 6898 6898 6899 P54 WIRELESS DRIVER
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 18 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc6 5 5 NAME = Diseased Newt 6 6 7 7 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/r8a7740.dtsi
··· 433 433 clocks = <&cpg_clocks R8A7740_CLK_S>, 434 434 <&cpg_clocks R8A7740_CLK_S>, <&sub_clk>, 435 435 <&cpg_clocks R8A7740_CLK_B>, 436 - <&sub_clk>, <&sub_clk>, 436 + <&cpg_clocks R8A7740_CLK_HPP>, <&sub_clk>, 437 437 <&cpg_clocks R8A7740_CLK_B>; 438 438 #clock-cells = <1>; 439 439 renesas,clock-indices = <
+2 -2
arch/arm/boot/dts/r8a7790.dtsi
··· 666 666 #clock-cells = <0>; 667 667 clock-output-names = "sd2"; 668 668 }; 669 - sd3_clk: sd3_clk@e615007c { 669 + sd3_clk: sd3_clk@e615026c { 670 670 compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock"; 671 - reg = <0 0xe615007c 0 4>; 671 + reg = <0 0xe615026c 0 4>; 672 672 clocks = <&pll1_div2_clk>; 673 673 #clock-cells = <0>; 674 674 clock-output-names = "sd3";
+4
arch/arm/boot/dts/sun6i-a31.dtsi
··· 361 361 clocks = <&ahb1_gates 6>; 362 362 resets = <&ahb1_rst 6>; 363 363 #dma-cells = <1>; 364 + 365 + /* DMA controller requires AHB1 clocked from PLL6 */ 366 + assigned-clocks = <&ahb1_mux>; 367 + assigned-clock-parents = <&pll6>; 364 368 }; 365 369 366 370 mmc0: mmc@01c0f000 {
+1
arch/arm/boot/dts/tegra114-dalmore.dts
··· 15 15 aliases { 16 16 rtc0 = "/i2c@7000d000/tps65913@58"; 17 17 rtc1 = "/rtc@7000e000"; 18 + serial0 = &uartd; 18 19 }; 19 20 20 21 memory {
+5 -4
arch/arm/boot/dts/tegra114-roth.dts
··· 15 15 linux,initrd-end = <0x82800000>; 16 16 }; 17 17 18 + aliases { 19 + serial0 = &uartd; 20 + }; 21 + 18 22 firmware { 19 23 trusted-foundations { 20 24 compatible = "tlm,trusted-foundations"; ··· 920 916 regulator-name = "vddio-sdmmc3"; 921 917 regulator-min-microvolt = <1800000>; 922 918 regulator-max-microvolt = <3300000>; 923 - regulator-always-on; 924 - regulator-boot-on; 925 919 }; 926 920 927 921 ldousb { ··· 964 962 sdhci@78000400 { 965 963 status = "okay"; 966 964 bus-width = <4>; 967 - vmmc-supply = <&vddio_sdmmc3>; 965 + vqmmc-supply = <&vddio_sdmmc3>; 968 966 cd-gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>; 969 967 power-gpios = <&gpio TEGRA_GPIO(H, 0) GPIO_ACTIVE_HIGH>; 970 968 }; ··· 973 971 sdhci@78000600 { 974 972 status = "okay"; 975 973 bus-width = <8>; 976 - vmmc-supply = <&vdd_1v8>; 977 974 non-removable; 978 975 }; 979 976
+4 -1
arch/arm/boot/dts/tegra114-tn7.dts
··· 15 15 linux,initrd-end = <0x82800000>; 16 16 }; 17 17 18 + aliases { 19 + serial0 = &uartd; 20 + }; 21 + 18 22 firmware { 19 23 trusted-foundations { 20 24 compatible = "tlm,trusted-foundations"; ··· 244 240 sdhci@78000600 { 245 241 status = "okay"; 246 242 bus-width = <8>; 247 - vmmc-supply = <&vdd_1v8>; 248 243 non-removable; 249 244 }; 250 245
-7
arch/arm/boot/dts/tegra114.dtsi
··· 9 9 compatible = "nvidia,tegra114"; 10 10 interrupt-parent = <&gic>; 11 11 12 - aliases { 13 - serial0 = &uarta; 14 - serial1 = &uartb; 15 - serial2 = &uartc; 16 - serial3 = &uartd; 17 - }; 18 - 19 12 host1x@50000000 { 20 13 compatible = "nvidia,tegra114-host1x", "simple-bus"; 21 14 reg = <0x50000000 0x00028000>;
+1
arch/arm/boot/dts/tegra124-jetson-tk1.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@0,7000d000/pmic@40"; 12 12 rtc1 = "/rtc@0,7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra124-nyan-big.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@0,7000d000/pmic@40"; 12 12 rtc1 = "/rtc@0,7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra124-venice2.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@0,7000d000/pmic@40"; 12 12 rtc1 = "/rtc@0,7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
+4 -4
arch/arm/boot/dts/tegra124.dtsi
··· 286 286 * the APB DMA based serial driver, the comptible is 287 287 * "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart". 288 288 */ 289 - serial@0,70006000 { 289 + uarta: serial@0,70006000 { 290 290 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 291 291 reg = <0x0 0x70006000 0x0 0x40>; 292 292 reg-shift = <2>; ··· 299 299 status = "disabled"; 300 300 }; 301 301 302 - serial@0,70006040 { 302 + uartb: serial@0,70006040 { 303 303 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 304 304 reg = <0x0 0x70006040 0x0 0x40>; 305 305 reg-shift = <2>; ··· 312 312 status = "disabled"; 313 313 }; 314 314 315 - serial@0,70006200 { 315 + uartc: serial@0,70006200 { 316 316 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 317 317 reg = <0x0 0x70006200 0x0 0x40>; 318 318 reg-shift = <2>; ··· 325 325 status = "disabled"; 326 326 }; 327 327 328 - serial@0,70006300 { 328 + uartd: serial@0,70006300 { 329 329 compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart"; 330 330 reg = <0x0 0x70006300 0x0 0x40>; 331 331 reg-shift = <2>;
+1
arch/arm/boot/dts/tegra20-harmony.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+5
arch/arm/boot/dts/tegra20-iris-512.dts
··· 6 6 model = "Toradex Colibri T20 512MB on Iris"; 7 7 compatible = "toradex,iris", "toradex,colibri_t20-512", "nvidia,tegra20"; 8 8 9 + aliases { 10 + serial0 = &uarta; 11 + serial1 = &uartd; 12 + }; 13 + 9 14 host1x@50000000 { 10 15 hdmi@54280000 { 11 16 status = "okay";
+4
arch/arm/boot/dts/tegra20-medcom-wide.dts
··· 6 6 model = "Avionic Design Medcom-Wide board"; 7 7 compatible = "ad,medcom-wide", "ad,tamonten", "nvidia,tegra20"; 8 8 9 + aliases { 10 + serial0 = &uartd; 11 + }; 12 + 9 13 pwm@7000a000 { 10 14 status = "okay"; 11 15 };
+2
arch/arm/boot/dts/tegra20-paz00.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uarta; 14 + serial1 = &uartc; 13 15 }; 14 16 15 17 memory {
+1
arch/arm/boot/dts/tegra20-seaboard.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra20-tamonten.dtsi
··· 7 7 aliases { 8 8 rtc0 = "/i2c@7000d000/tps6586x@34"; 9 9 rtc1 = "/rtc@7000e000"; 10 + serial0 = &uartd; 10 11 }; 11 12 12 13 memory {
+1
arch/arm/boot/dts/tegra20-trimslice.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000c500/rtc@56"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra20-ventana.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/tps6586x@34"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uartd; 13 14 }; 14 15 15 16 memory {
+1
arch/arm/boot/dts/tegra20-whistler.dts
··· 10 10 aliases { 11 11 rtc0 = "/i2c@7000d000/max8907@3c"; 12 12 rtc1 = "/rtc@7000e000"; 13 + serial0 = &uarta; 13 14 }; 14 15 15 16 memory {
-8
arch/arm/boot/dts/tegra20.dtsi
··· 9 9 compatible = "nvidia,tegra20"; 10 10 interrupt-parent = <&intc>; 11 11 12 - aliases { 13 - serial0 = &uarta; 14 - serial1 = &uartb; 15 - serial2 = &uartc; 16 - serial3 = &uartd; 17 - serial4 = &uarte; 18 - }; 19 - 20 12 host1x@50000000 { 21 13 compatible = "nvidia,tegra20-host1x", "simple-bus"; 22 14 reg = <0x50000000 0x00024000>;
+4
arch/arm/boot/dts/tegra30-apalis-eval.dts
··· 11 11 rtc0 = "/i2c@7000c000/rtc@68"; 12 12 rtc1 = "/i2c@7000d000/tps65911@2d"; 13 13 rtc2 = "/rtc@7000e000"; 14 + serial0 = &uarta; 15 + serial1 = &uartb; 16 + serial2 = &uartc; 17 + serial3 = &uartd; 14 18 }; 15 19 16 20 pcie-controller@00003000 {
+1
arch/arm/boot/dts/tegra30-beaver.dts
··· 9 9 aliases { 10 10 rtc0 = "/i2c@7000d000/tps65911@2d"; 11 11 rtc1 = "/rtc@7000e000"; 12 + serial0 = &uarta; 12 13 }; 13 14 14 15 memory {
+2
arch/arm/boot/dts/tegra30-cardhu.dtsi
··· 30 30 aliases { 31 31 rtc0 = "/i2c@7000d000/tps65911@2d"; 32 32 rtc1 = "/rtc@7000e000"; 33 + serial0 = &uarta; 34 + serial1 = &uartc; 33 35 }; 34 36 35 37 memory {
+3
arch/arm/boot/dts/tegra30-colibri-eval-v3.dts
··· 10 10 rtc0 = "/i2c@7000c000/rtc@68"; 11 11 rtc1 = "/i2c@7000d000/tps65911@2d"; 12 12 rtc2 = "/rtc@7000e000"; 13 + serial0 = &uarta; 14 + serial1 = &uartb; 15 + serial2 = &uartd; 13 16 }; 14 17 15 18 host1x@50000000 {
-8
arch/arm/boot/dts/tegra30.dtsi
··· 9 9 compatible = "nvidia,tegra30"; 10 10 interrupt-parent = <&intc>; 11 11 12 - aliases { 13 - serial0 = &uarta; 14 - serial1 = &uartb; 15 - serial2 = &uartc; 16 - serial3 = &uartd; 17 - serial4 = &uarte; 18 - }; 19 - 20 12 pcie-controller@00003000 { 21 13 compatible = "nvidia,tegra30-pcie"; 22 14 device_type = "pci";
+1
arch/arm/configs/multi_v7_defconfig
··· 217 217 CONFIG_I2C_DESIGNWARE_PLATFORM=y 218 218 CONFIG_I2C_EXYNOS5=y 219 219 CONFIG_I2C_MV64XXX=y 220 + CONFIG_I2C_S3C2410=y 220 221 CONFIG_I2C_SIRF=y 221 222 CONFIG_I2C_TEGRA=y 222 223 CONFIG_I2C_ST=y
+7 -2
arch/arm/mach-shmobile/clock-r8a7740.c
··· 455 455 MSTP128, MSTP127, MSTP125, 456 456 MSTP116, MSTP111, MSTP100, MSTP117, 457 457 458 - MSTP230, 458 + MSTP230, MSTP229, 459 459 MSTP222, 460 460 MSTP218, MSTP217, MSTP216, MSTP214, 461 461 MSTP207, MSTP206, MSTP204, MSTP203, MSTP202, MSTP201, MSTP200, ··· 474 474 [MSTP127] = SH_CLK_MSTP32(&div4_clks[DIV4_S], SMSTPCR1, 27, 0), /* CEU20 */ 475 475 [MSTP125] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR1, 25, 0), /* TMU0 */ 476 476 [MSTP117] = SH_CLK_MSTP32(&div4_clks[DIV4_B], SMSTPCR1, 17, 0), /* LCDC1 */ 477 - [MSTP116] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR1, 16, 0), /* IIC0 */ 477 + [MSTP116] = SH_CLK_MSTP32(&div4_clks[DIV4_HPP], SMSTPCR1, 16, 0), /* IIC0 */ 478 478 [MSTP111] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR1, 11, 0), /* TMU1 */ 479 479 [MSTP100] = SH_CLK_MSTP32(&div4_clks[DIV4_B], SMSTPCR1, 0, 0), /* LCDC0 */ 480 480 481 481 [MSTP230] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR2, 30, 0), /* SCIFA6 */ 482 + [MSTP229] = SH_CLK_MSTP32(&div4_clks[DIV4_HP], SMSTPCR2, 29, 0), /* INTCA */ 482 483 [MSTP222] = SH_CLK_MSTP32(&div6_clks[DIV6_SUB], SMSTPCR2, 22, 0), /* SCIFA7 */ 483 484 [MSTP218] = SH_CLK_MSTP32(&div4_clks[DIV4_HP], SMSTPCR2, 18, 0), /* DMAC1 */ 484 485 [MSTP217] = SH_CLK_MSTP32(&div4_clks[DIV4_HP], SMSTPCR2, 17, 0), /* DMAC2 */ ··· 576 575 CLKDEV_DEV_ID("sh-dma-engine.0", &mstp_clks[MSTP218]), 577 576 CLKDEV_DEV_ID("sh-sci.7", &mstp_clks[MSTP222]), 578 577 CLKDEV_DEV_ID("e6cd0000.serial", &mstp_clks[MSTP222]), 578 + CLKDEV_DEV_ID("renesas_intc_irqpin.0", &mstp_clks[MSTP229]), 579 + CLKDEV_DEV_ID("renesas_intc_irqpin.1", &mstp_clks[MSTP229]), 580 + CLKDEV_DEV_ID("renesas_intc_irqpin.2", &mstp_clks[MSTP229]), 581 + CLKDEV_DEV_ID("renesas_intc_irqpin.3", &mstp_clks[MSTP229]), 579 582 CLKDEV_DEV_ID("sh-sci.6", &mstp_clks[MSTP230]), 580 583 CLKDEV_DEV_ID("e6cc0000.serial", &mstp_clks[MSTP230]), 581 584
+1 -1
arch/arm/mach-shmobile/clock-r8a7790.c
··· 68 68 69 69 #define SDCKCR 0xE6150074 70 70 #define SD2CKCR 0xE6150078 71 - #define SD3CKCR 0xE615007C 71 + #define SD3CKCR 0xE615026C 72 72 #define MMC0CKCR 0xE6150240 73 73 #define MMC1CKCR 0xE6150244 74 74 #define SSPCKCR 0xE6150248
+20
arch/arm/mach-shmobile/setup-sh73a0.c
··· 26 26 #include <linux/of_platform.h> 27 27 #include <linux/delay.h> 28 28 #include <linux/input.h> 29 + #include <linux/i2c/i2c-sh_mobile.h> 29 30 #include <linux/io.h> 30 31 #include <linux/serial_sci.h> 31 32 #include <linux/sh_dma.h> ··· 193 192 }, 194 193 }; 195 194 195 + static struct i2c_sh_mobile_platform_data i2c_platform_data = { 196 + .clks_per_count = 2, 197 + }; 198 + 196 199 static struct platform_device i2c0_device = { 197 200 .name = "i2c-sh_mobile", 198 201 .id = 0, 199 202 .resource = i2c0_resources, 200 203 .num_resources = ARRAY_SIZE(i2c0_resources), 204 + .dev = { 205 + .platform_data = &i2c_platform_data, 206 + }, 201 207 }; 202 208 203 209 static struct platform_device i2c1_device = { ··· 212 204 .id = 1, 213 205 .resource = i2c1_resources, 214 206 .num_resources = ARRAY_SIZE(i2c1_resources), 207 + .dev = { 208 + .platform_data = &i2c_platform_data, 209 + }, 215 210 }; 216 211 217 212 static struct platform_device i2c2_device = { ··· 222 211 .id = 2, 223 212 .resource = i2c2_resources, 224 213 .num_resources = ARRAY_SIZE(i2c2_resources), 214 + .dev = { 215 + .platform_data = &i2c_platform_data, 216 + }, 225 217 }; 226 218 227 219 static struct platform_device i2c3_device = { ··· 232 218 .id = 3, 233 219 .resource = i2c3_resources, 234 220 .num_resources = ARRAY_SIZE(i2c3_resources), 221 + .dev = { 222 + .platform_data = &i2c_platform_data, 223 + }, 235 224 }; 236 225 237 226 static struct platform_device i2c4_device = { ··· 242 225 .id = 4, 243 226 .resource = i2c4_resources, 244 227 .num_resources = ARRAY_SIZE(i2c4_resources), 228 + .dev = { 229 + .platform_data = &i2c_platform_data, 230 + }, 245 231 }; 246 232 247 233 static const struct sh_dmae_slave_config sh73a0_dmae_slaves[] = {
+7 -1
arch/mips/include/asm/jump_label.h
··· 20 20 #define WORD_INSN ".word" 21 21 #endif 22 22 23 + #ifdef CONFIG_CPU_MICROMIPS 24 + #define NOP_INSN "nop32" 25 + #else 26 + #define NOP_INSN "nop" 27 + #endif 28 + 23 29 static __always_inline bool arch_static_branch(struct static_key *key) 24 30 { 25 - asm_volatile_goto("1:\tnop\n\t" 31 + asm_volatile_goto("1:\t" NOP_INSN "\n\t" 26 32 "nop\n\t" 27 33 ".pushsection __jump_table, \"aw\"\n\t" 28 34 WORD_INSN " 1b, %l[l_yes], %0\n\t"
-2
arch/mips/include/asm/mach-loongson/cpu-feature-overrides.h
··· 41 41 #define cpu_has_mcheck 0 42 42 #define cpu_has_mdmx 0 43 43 #define cpu_has_mips16 0 44 - #define cpu_has_mips32r1 0 45 44 #define cpu_has_mips32r2 0 46 45 #define cpu_has_mips3d 0 47 - #define cpu_has_mips64r1 0 48 46 #define cpu_has_mips64r2 0 49 47 #define cpu_has_mipsmt 0 50 48 #define cpu_has_prefetch 0
+8 -4
arch/mips/include/asm/uaccess.h
··· 301 301 __get_kernel_common((x), size, __gu_ptr); \ 302 302 else \ 303 303 __get_user_common((x), size, __gu_ptr); \ 304 - } \ 304 + } else \ 305 + (x) = 0; \ 305 306 \ 306 307 __gu_err; \ 307 308 }) ··· 317 316 " .insn \n" \ 318 317 " .section .fixup,\"ax\" \n" \ 319 318 "3: li %0, %4 \n" \ 319 + " move %1, $0 \n" \ 320 320 " j 2b \n" \ 321 321 " .previous \n" \ 322 322 " .section __ex_table,\"a\" \n" \ ··· 632 630 " .insn \n" \ 633 631 " .section .fixup,\"ax\" \n" \ 634 632 "3: li %0, %4 \n" \ 633 + " move %1, $0 \n" \ 635 634 " j 2b \n" \ 636 635 " .previous \n" \ 637 636 " .section __ex_table,\"a\" \n" \ ··· 776 773 "jal\t" #destination "\n\t" 777 774 #endif 778 775 779 - #ifndef CONFIG_CPU_DADDI_WORKAROUNDS 780 - #define DADDI_SCRATCH "$0" 781 - #else 776 + #if defined(CONFIG_CPU_DADDI_WORKAROUNDS) || (defined(CONFIG_EVA) && \ 777 + defined(CONFIG_CPU_HAS_PREFETCH)) 782 778 #define DADDI_SCRATCH "$3" 779 + #else 780 + #define DADDI_SCRATCH "$0" 783 781 #endif 784 782 785 783 extern size_t __copy_user(void *__to, const void *__from, size_t __n);
+5 -2
arch/mips/kernel/cpu-probe.c
··· 757 757 c->cputype = CPU_LOONGSON2; 758 758 __cpu_name[cpu] = "ICT Loongson-2"; 759 759 set_elf_platform(cpu, "loongson2e"); 760 + set_isa(c, MIPS_CPU_ISA_III); 760 761 break; 761 762 case PRID_REV_LOONGSON2F: 762 763 c->cputype = CPU_LOONGSON2; 763 764 __cpu_name[cpu] = "ICT Loongson-2"; 764 765 set_elf_platform(cpu, "loongson2f"); 766 + set_isa(c, MIPS_CPU_ISA_III); 765 767 break; 766 768 case PRID_REV_LOONGSON3A: 767 769 c->cputype = CPU_LOONGSON3; 768 - c->writecombine = _CACHE_UNCACHED_ACCELERATED; 769 770 __cpu_name[cpu] = "ICT Loongson-3"; 770 771 set_elf_platform(cpu, "loongson3a"); 772 + set_isa(c, MIPS_CPU_ISA_M64R1); 771 773 break; 772 774 case PRID_REV_LOONGSON3B_R1: 773 775 case PRID_REV_LOONGSON3B_R2: 774 776 c->cputype = CPU_LOONGSON3; 775 777 __cpu_name[cpu] = "ICT Loongson-3"; 776 778 set_elf_platform(cpu, "loongson3b"); 779 + set_isa(c, MIPS_CPU_ISA_M64R1); 777 780 break; 778 781 } 779 782 780 - set_isa(c, MIPS_CPU_ISA_III); 781 783 c->options = R4K_OPTS | 782 784 MIPS_CPU_FPU | MIPS_CPU_LLSC | 783 785 MIPS_CPU_32FPR; 784 786 c->tlbsize = 64; 787 + c->writecombine = _CACHE_UNCACHED_ACCELERATED; 785 788 break; 786 789 case PRID_IMP_LOONGSON_32: /* Loongson-1 */ 787 790 decode_configs(c);
+32 -10
arch/mips/kernel/jump_label.c
··· 18 18 19 19 #ifdef HAVE_JUMP_LABEL 20 20 21 - #define J_RANGE_MASK ((1ul << 28) - 1) 21 + /* 22 + * Define parameters for the standard MIPS and the microMIPS jump 23 + * instruction encoding respectively: 24 + * 25 + * - the ISA bit of the target, either 0 or 1 respectively, 26 + * 27 + * - the amount the jump target address is shifted right to fit in the 28 + * immediate field of the machine instruction, either 2 or 1, 29 + * 30 + * - the mask determining the size of the jump region relative to the 31 + * delay-slot instruction, either 256MB or 128MB, 32 + * 33 + * - the jump target alignment, either 4 or 2 bytes. 34 + */ 35 + #define J_ISA_BIT IS_ENABLED(CONFIG_CPU_MICROMIPS) 36 + #define J_RANGE_SHIFT (2 - J_ISA_BIT) 37 + #define J_RANGE_MASK ((1ul << (26 + J_RANGE_SHIFT)) - 1) 38 + #define J_ALIGN_MASK ((1ul << J_RANGE_SHIFT) - 1) 22 39 23 40 void arch_jump_label_transform(struct jump_entry *e, 24 41 enum jump_label_type type) 25 42 { 43 + union mips_instruction *insn_p; 26 44 union mips_instruction insn; 27 - union mips_instruction *insn_p = 28 - (union mips_instruction *)(unsigned long)e->code; 29 45 30 - /* Jump only works within a 256MB aligned region. */ 31 - BUG_ON((e->target & ~J_RANGE_MASK) != (e->code & ~J_RANGE_MASK)); 46 + insn_p = (union mips_instruction *)msk_isa16_mode(e->code); 32 47 33 - /* Target must have 4 byte alignment. */ 34 - BUG_ON((e->target & 3) != 0); 48 + /* Jump only works within an aligned region its delay slot is in. */ 49 + BUG_ON((e->target & ~J_RANGE_MASK) != ((e->code + 4) & ~J_RANGE_MASK)); 50 + 51 + /* Target must have the right alignment and ISA must be preserved. */ 52 + BUG_ON((e->target & J_ALIGN_MASK) != J_ISA_BIT); 35 53 36 54 if (type == JUMP_LABEL_ENABLE) { 37 - insn.j_format.opcode = j_op; 38 - insn.j_format.target = (e->target & J_RANGE_MASK) >> 2; 55 + insn.j_format.opcode = J_ISA_BIT ? mm_j32_op : j_op; 56 + insn.j_format.target = e->target >> J_RANGE_SHIFT; 39 57 } else { 40 58 insn.word = 0; /* nop */ 41 59 } 42 60 43 61 get_online_cpus(); 44 62 mutex_lock(&text_mutex); 45 - *insn_p = insn; 63 + if (IS_ENABLED(CONFIG_CPU_MICROMIPS)) { 64 + insn_p->halfword[0] = insn.word >> 16; 65 + insn_p->halfword[1] = insn.word; 66 + } else 67 + *insn_p = insn; 46 68 47 69 flush_icache_range((unsigned long)insn_p, 48 70 (unsigned long)insn_p + sizeof(*insn_p));
+1
arch/mips/lib/memcpy.S
··· 503 503 STOREB(t0, NBYTES-2(dst), .Ls_exc_p1\@) 504 504 .Ldone\@: 505 505 jr ra 506 + nop 506 507 .if __memcpy == 1 507 508 END(memcpy) 508 509 .set __memcpy, 0
+1
arch/mips/loongson/loongson-3/numa.c
··· 33 33 34 34 static struct node_data prealloc__node_data[MAX_NUMNODES]; 35 35 unsigned char __node_distances[MAX_NUMNODES][MAX_NUMNODES]; 36 + EXPORT_SYMBOL(__node_distances); 36 37 struct node_data *__node_data[MAX_NUMNODES]; 37 38 EXPORT_SYMBOL(__node_data); 38 39
+4
arch/mips/mm/tlb-r4k.c
··· 299 299 300 300 local_irq_save(flags); 301 301 302 + htw_stop(); 302 303 pid = read_c0_entryhi() & ASID_MASK; 303 304 address &= (PAGE_MASK << 1); 304 305 write_c0_entryhi(address | pid); ··· 347 346 tlb_write_indexed(); 348 347 } 349 348 tlbw_use_hazard(); 349 + htw_start(); 350 350 flush_itlb_vm(vma); 351 351 local_irq_restore(flags); 352 352 } ··· 424 422 425 423 local_irq_save(flags); 426 424 /* Save old context and create impossible VPN2 value */ 425 + htw_stop(); 427 426 old_ctx = read_c0_entryhi(); 428 427 old_pagemask = read_c0_pagemask(); 429 428 wired = read_c0_wired(); ··· 446 443 447 444 write_c0_entryhi(old_ctx); 448 445 write_c0_pagemask(old_pagemask); 446 + htw_start(); 449 447 out: 450 448 local_irq_restore(flags); 451 449 return ret;
+1 -1
arch/mips/oprofile/backtrace.c
··· 92 92 /* This marks the end of the previous function, 93 93 which means we overran. */ 94 94 break; 95 - stack_size = (unsigned) stack_adjustment; 95 + stack_size = (unsigned long) stack_adjustment; 96 96 } else if (is_ra_save_ins(&ip)) { 97 97 int ra_slot = ip.i_format.simmediate; 98 98 if (ra_slot < 0)
+1
arch/mips/sgi-ip27/ip27-memory.c
··· 107 107 } 108 108 109 109 unsigned char __node_distances[MAX_COMPACT_NODES][MAX_COMPACT_NODES]; 110 + EXPORT_SYMBOL(__node_distances); 110 111 111 112 static int __init compute_node_distance(nasid_t nasid_a, nasid_t nasid_b) 112 113 {
+1 -1
arch/powerpc/sysdev/fsl_msi.c
··· 361 361 cascade_data->virq = virt_msir; 362 362 msi->cascade_array[irq_index] = cascade_data; 363 363 364 - ret = request_irq(virt_msir, fsl_msi_cascade, 0, 364 + ret = request_irq(virt_msir, fsl_msi_cascade, IRQF_NO_THREAD, 365 365 "fsl-msi-cascade", cascade_data); 366 366 if (ret) { 367 367 dev_err(&dev->dev, "failed to request_irq(%d), ret = %d\n",
+1 -1
arch/x86/Kconfig
··· 144 144 145 145 config PERF_EVENTS_INTEL_UNCORE 146 146 def_bool y 147 - depends on PERF_EVENTS && SUP_SUP_INTEL && PCI 147 + depends on PERF_EVENTS && CPU_SUP_INTEL && PCI 148 148 149 149 config OUTPUT_FORMAT 150 150 string
-1
arch/x86/include/asm/page_32_types.h
··· 20 20 #define THREAD_SIZE_ORDER 1 21 21 #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) 22 22 23 - #define STACKFAULT_STACK 0 24 23 #define DOUBLEFAULT_STACK 1 25 24 #define NMI_STACK 0 26 25 #define DEBUG_STACK 0
+5 -6
arch/x86/include/asm/page_64_types.h
··· 14 14 #define IRQ_STACK_ORDER 2 15 15 #define IRQ_STACK_SIZE (PAGE_SIZE << IRQ_STACK_ORDER) 16 16 17 - #define STACKFAULT_STACK 1 18 - #define DOUBLEFAULT_STACK 2 19 - #define NMI_STACK 3 20 - #define DEBUG_STACK 4 21 - #define MCE_STACK 5 22 - #define N_EXCEPTION_STACKS 5 /* hw limit: 7 */ 17 + #define DOUBLEFAULT_STACK 1 18 + #define NMI_STACK 2 19 + #define DEBUG_STACK 3 20 + #define MCE_STACK 4 21 + #define N_EXCEPTION_STACKS 4 /* hw limit: 7 */ 23 22 24 23 #define PUD_PAGE_SIZE (_AC(1, UL) << PUD_SHIFT) 25 24 #define PUD_PAGE_MASK (~(PUD_PAGE_SIZE-1))
+1 -1
arch/x86/include/asm/thread_info.h
··· 141 141 /* Only used for 64 bit */ 142 142 #define _TIF_DO_NOTIFY_MASK \ 143 143 (_TIF_SIGPENDING | _TIF_MCE_NOTIFY | _TIF_NOTIFY_RESUME | \ 144 - _TIF_USER_RETURN_NOTIFY) 144 + _TIF_USER_RETURN_NOTIFY | _TIF_UPROBE) 145 145 146 146 /* flags to check in __switch_to() */ 147 147 #define _TIF_WORK_CTXSW \
+1
arch/x86/include/asm/traps.h
··· 39 39 40 40 #ifdef CONFIG_TRACING 41 41 asmlinkage void trace_page_fault(void); 42 + #define trace_stack_segment stack_segment 42 43 #define trace_divide_error divide_error 43 44 #define trace_bounds bounds 44 45 #define trace_invalid_op invalid_op
+2
arch/x86/kernel/cpu/common.c
··· 146 146 147 147 static int __init x86_xsave_setup(char *s) 148 148 { 149 + if (strlen(s)) 150 + return 0; 149 151 setup_clear_cpu_cap(X86_FEATURE_XSAVE); 150 152 setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT); 151 153 setup_clear_cpu_cap(X86_FEATURE_XSAVES);
+8
arch/x86/kernel/cpu/microcode/core.c
··· 465 465 466 466 if (uci->valid && uci->mc) 467 467 microcode_ops->apply_microcode(cpu); 468 + else if (!uci->mc) 469 + /* 470 + * We might resume and not have applied late microcode but still 471 + * have a newer patch stashed from the early loader. We don't 472 + * have it in uci->mc so we have to load it the same way we're 473 + * applying patches early on the APs. 474 + */ 475 + load_ucode_ap(); 468 476 } 469 477 470 478 static struct syscore_ops mc_syscore_ops = {
+45 -4
arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c
··· 486 486 .attrs = snbep_uncore_qpi_formats_attr, 487 487 }; 488 488 489 - #define SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 490 - .init_box = snbep_uncore_msr_init_box, \ 489 + #define __SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 491 490 .disable_box = snbep_uncore_msr_disable_box, \ 492 491 .enable_box = snbep_uncore_msr_enable_box, \ 493 492 .disable_event = snbep_uncore_msr_disable_event, \ 494 493 .enable_event = snbep_uncore_msr_enable_event, \ 495 494 .read_counter = uncore_msr_read_counter 495 + 496 + #define SNBEP_UNCORE_MSR_OPS_COMMON_INIT() \ 497 + __SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), \ 498 + .init_box = snbep_uncore_msr_init_box \ 496 499 497 500 static struct intel_uncore_ops snbep_uncore_msr_ops = { 498 501 SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), ··· 1922 1919 .format_group = &hswep_uncore_cbox_format_group, 1923 1920 }; 1924 1921 1922 + /* 1923 + * Write SBOX Initialization register bit by bit to avoid spurious #GPs 1924 + */ 1925 + static void hswep_uncore_sbox_msr_init_box(struct intel_uncore_box *box) 1926 + { 1927 + unsigned msr = uncore_msr_box_ctl(box); 1928 + 1929 + if (msr) { 1930 + u64 init = SNBEP_PMON_BOX_CTL_INT; 1931 + u64 flags = 0; 1932 + int i; 1933 + 1934 + for_each_set_bit(i, (unsigned long *)&init, 64) { 1935 + flags |= (1ULL << i); 1936 + wrmsrl(msr, flags); 1937 + } 1938 + } 1939 + } 1940 + 1941 + static struct intel_uncore_ops hswep_uncore_sbox_msr_ops = { 1942 + __SNBEP_UNCORE_MSR_OPS_COMMON_INIT(), 1943 + .init_box = hswep_uncore_sbox_msr_init_box 1944 + }; 1945 + 1925 1946 static struct attribute *hswep_uncore_sbox_formats_attr[] = { 1926 1947 &format_attr_event.attr, 1927 1948 &format_attr_umask.attr, ··· 1971 1944 .event_mask = HSWEP_S_MSR_PMON_RAW_EVENT_MASK, 1972 1945 .box_ctl = HSWEP_S0_MSR_PMON_BOX_CTL, 1973 1946 .msr_offset = HSWEP_SBOX_MSR_OFFSET, 1974 - .ops = &snbep_uncore_msr_ops, 1947 + .ops = &hswep_uncore_sbox_msr_ops, 1975 1948 .format_group = &hswep_uncore_sbox_format_group, 1976 1949 }; ··· 2052 2025 SNBEP_UNCORE_PCI_COMMON_INIT(), 2053 2026 }; 2054 2027 2028 + static unsigned hswep_uncore_irp_ctrs[] = {0xa0, 0xa8, 0xb0, 0xb8}; 2029 + 2030 + static u64 hswep_uncore_irp_read_counter(struct intel_uncore_box *box, struct perf_event *event) 2031 + { 2032 + struct pci_dev *pdev = box->pci_dev; 2033 + struct hw_perf_event *hwc = &event->hw; 2034 + u64 count = 0; 2035 + 2036 + pci_read_config_dword(pdev, hswep_uncore_irp_ctrs[hwc->idx], (u32 *)&count); 2037 + pci_read_config_dword(pdev, hswep_uncore_irp_ctrs[hwc->idx] + 4, (u32 *)&count + 1); 2038 + 2039 + return count; 2040 + } 2041 + 2055 2042 static struct intel_uncore_ops hswep_uncore_irp_ops = { 2056 2043 .init_box = snbep_uncore_pci_init_box, 2057 2044 .disable_box = snbep_uncore_pci_disable_box, 2058 2045 .enable_box = snbep_uncore_pci_enable_box, 2059 2046 .disable_event = ivbep_uncore_irp_disable_event, 2060 2047 .enable_event = ivbep_uncore_irp_enable_event, 2061 - .read_counter = ivbep_uncore_irp_read_counter, 2048 + .read_counter = hswep_uncore_irp_read_counter, 2062 2049 }; 2063 2050 2064 2051 static struct intel_uncore_type hswep_uncore_irp = {
-1
arch/x86/kernel/dumpstack_64.c
··· 24 24 [ DEBUG_STACK-1 ] = "#DB", 25 25 [ NMI_STACK-1 ] = "NMI", 26 26 [ DOUBLEFAULT_STACK-1 ] = "#DF", 27 - [ STACKFAULT_STACK-1 ] = "#SS", 28 27 [ MCE_STACK-1 ] = "#MC", 29 28 #if DEBUG_STKSZ > EXCEPTION_STKSZ 30 29 [ N_EXCEPTION_STACKS ...
+22 -59
arch/x86/kernel/entry_64.S
··· 828 828 jnz native_irq_return_ldt 829 829 #endif 830 830 831 + .global native_irq_return_iret 831 832 native_irq_return_iret: 833 + /* 834 + * This may fault. Non-paranoid faults on return to userspace are 835 + * handled by fixup_bad_iret. These include #SS, #GP, and #NP. 836 + * Double-faults due to espfix64 are handled in do_double_fault. 837 + * Other faults here are fatal. 838 + */ 832 839 iretq 833 - _ASM_EXTABLE(native_irq_return_iret, bad_iret) 834 840 835 841 #ifdef CONFIG_X86_ESPFIX64 836 842 native_irq_return_ldt: ··· 863 857 popq_cfi %rax 864 858 jmp native_irq_return_iret 865 859 #endif 866 - 867 - .section .fixup,"ax" 868 - bad_iret: 869 - /* 870 - * The iret traps when the %cs or %ss being restored is bogus. 871 - * We've lost the original trap vector and error code. 872 - * #GPF is the most likely one to get for an invalid selector. 873 - * So pretend we completed the iret and took the #GPF in user mode. 874 - * 875 - * We are now running with the kernel GS after exception recovery. 876 - * But error_entry expects us to have user GS to match the user %cs, 877 - * so swap back. 878 - */ 879 - pushq $0 880 - 881 - SWAPGS 882 - jmp general_protection 883 - 884 - .previous 885 860 886 861 /* edi: workmask, edx: work */ 887 862 retint_careful: ··· 908 921 #endif 909 922 CFI_ENDPROC 910 923 END(common_interrupt) 911 - 912 - /* 913 - * If IRET takes a fault on the espfix stack, then we 914 - * end up promoting it to a doublefault. In that case, 915 - * modify the stack to make it look like we just entered 916 - * the #GP handler from user space, similar to bad_iret. 917 - */ 918 - #ifdef CONFIG_X86_ESPFIX64 919 - ALIGN 920 - __do_double_fault: 921 - XCPT_FRAME 1 RDI+8 922 - movq RSP(%rdi),%rax /* Trap on the espfix stack? */ 923 - sarq $PGDIR_SHIFT,%rax 924 - cmpl $ESPFIX_PGD_ENTRY,%eax 925 - jne do_double_fault /* No, just deliver the fault */ 926 - cmpl $__KERNEL_CS,CS(%rdi) 927 - jne do_double_fault 928 - movq RIP(%rdi),%rax 929 - cmpq $native_irq_return_iret,%rax 930 - jne do_double_fault /* This shouldn't happen... */ 931 - movq PER_CPU_VAR(kernel_stack),%rax 932 - subq $(6*8-KERNEL_STACK_OFFSET),%rax /* Reset to original stack */ 933 - movq %rax,RSP(%rdi) 934 - movq $0,(%rax) /* Missing (lost) #GP error code */ 935 - movq $general_protection,RIP(%rdi) 936 - retq 937 - CFI_ENDPROC 938 - END(__do_double_fault) 939 - #else 940 - # define __do_double_fault do_double_fault 941 - #endif 942 924 943 925 /* 944 926 * APIC interrupts. ··· 1080 1124 idtentry bounds do_bounds has_error_code=0 1081 1125 idtentry invalid_op do_invalid_op has_error_code=0 1082 1126 idtentry device_not_available do_device_not_available has_error_code=0 1083 - idtentry double_fault __do_double_fault has_error_code=1 paranoid=1 1127 + idtentry double_fault do_double_fault has_error_code=1 paranoid=1 1084 1128 idtentry coprocessor_segment_overrun do_coprocessor_segment_overrun has_error_code=0 1085 1129 idtentry invalid_TSS do_invalid_TSS has_error_code=1 1086 1130 idtentry segment_not_present do_segment_not_present has_error_code=1 ··· 1245 1289 1246 1290 idtentry debug do_debug has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK 1247 1291 idtentry int3 do_int3 has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK 1248 - idtentry stack_segment do_stack_segment has_error_code=1 paranoid=1 1292 + idtentry stack_segment do_stack_segment has_error_code=1 1249 1293 #ifdef CONFIG_XEN 1250 1294 idtentry xen_debug do_debug has_error_code=0 1251 1295 idtentry xen_int3 do_int3 has_error_code=0 ··· 1355 1399 1356 1400 /* 1357 1401 * There are two places in the kernel that can potentially fault with 1358 - * usergs. Handle them here. The exception handlers after iret run with 1359 - * kernel gs again, so don't set the user space flag. B stepping K8s 1360 - * sometimes report an truncated RIP for IRET exceptions returning to 1361 - * compat mode. Check for these here too. 1402 + * usergs. Handle them here. B stepping K8s sometimes report a 1403 + * truncated RIP for IRET exceptions returning to compat mode. Check 1404 + * for these here too. 1362 1405 */ 1363 1406 error_kernelspace: 1364 1407 CFI_REL_OFFSET rcx, RCX+8 1365 1408 incl %ebx 1366 1409 leaq native_irq_return_iret(%rip),%rcx 1367 1410 cmpq %rcx,RIP+8(%rsp) 1368 - je error_swapgs 1411 + je error_bad_iret 1369 1412 movl %ecx,%eax /* zero extend */ 1370 1413 cmpq %rax,RIP+8(%rsp) 1371 1414 je bstep_iret ··· 1375 1420 bstep_iret: 1376 1421 /* Fix truncated RIP */ 1377 1422 movq %rcx,RIP+8(%rsp) 1378 - jmp error_swapgs 1423 + /* fall through */ 1424 + 1425 + error_bad_iret: 1426 + SWAPGS 1427 + mov %rsp,%rdi 1428 + call fixup_bad_iret 1429 + mov %rax,%rsp 1430 + decl %ebx /* Return to usergs */ 1431 + jmp error_sti 1379 1432 CFI_ENDPROC 1380 1433 END(error_entry) 1381 1434
+1 -1
arch/x86/kernel/ptrace.c
··· 1484 1484 */ 1485 1485 if (work & _TIF_NOHZ) { 1486 1486 user_exit(); 1487 - work &= ~TIF_NOHZ; 1487 + work &= ~_TIF_NOHZ; 1488 1488 } 1489 1489 1490 1490 #ifdef CONFIG_SECCOMP
+54 -17
arch/x86/kernel/traps.c
··· 233 233 DO_ERROR(X86_TRAP_OLD_MF, SIGFPE, "coprocessor segment overrun",coprocessor_segment_overrun) 234 234 DO_ERROR(X86_TRAP_TS, SIGSEGV, "invalid TSS", invalid_TSS) 235 235 DO_ERROR(X86_TRAP_NP, SIGBUS, "segment not present", segment_not_present) 236 - #ifdef CONFIG_X86_32 237 236 DO_ERROR(X86_TRAP_SS, SIGBUS, "stack segment", stack_segment) 238 - #endif 239 237 DO_ERROR(X86_TRAP_AC, SIGBUS, "alignment check", alignment_check) 240 238 241 239 #ifdef CONFIG_X86_64 242 240 /* Runs on IST stack */ 243 - dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code) 244 - { 245 - enum ctx_state prev_state; 246 - 247 - prev_state = exception_enter(); 248 - if (notify_die(DIE_TRAP, "stack segment", regs, error_code, 249 - X86_TRAP_SS, SIGBUS) != NOTIFY_STOP) { 250 - preempt_conditional_sti(regs); 251 - do_trap(X86_TRAP_SS, SIGBUS, "stack segment", regs, error_code, NULL); 252 - preempt_conditional_cli(regs); 253 - } 254 - exception_exit(prev_state); 255 - } 256 - 257 241 dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code) 258 242 { 259 243 static const char str[] = "double fault"; 260 244 struct task_struct *tsk = current; 245 + 246 + #ifdef CONFIG_X86_ESPFIX64 247 + extern unsigned char native_irq_return_iret[]; 248 + 249 + /* 250 + * If IRET takes a non-IST fault on the espfix64 stack, then we 251 + * end up promoting it to a doublefault. In that case, modify 252 + * the stack to make it look like we just entered the #GP 253 + * handler from user space, similar to bad_iret. 254 + */ 255 + if (((long)regs->sp >> PGDIR_SHIFT) == ESPFIX_PGD_ENTRY && 256 + regs->cs == __KERNEL_CS && 257 + regs->ip == (unsigned long)native_irq_return_iret) 258 + { 259 + struct pt_regs *normal_regs = task_pt_regs(current); 260 + 261 + /* Fake a #GP(0) from userspace. */ 262 + memmove(&normal_regs->ip, (void *)regs->sp, 5*8); 263 + normal_regs->orig_ax = 0; /* Missing (lost) #GP error code */ 264 + regs->ip = (unsigned long)general_protection; 265 + regs->sp = (unsigned long)&normal_regs->orig_ax; 266 + return; 267 + } 268 + #endif 261 269 262 270 exception_enter(); 263 271 /* Return not checked because double check cannot be ignored */ ··· 407 399 return regs; 408 400 } 409 401 NOKPROBE_SYMBOL(sync_regs); 402 + 403 + struct bad_iret_stack { 404 + void *error_entry_ret; 405 + struct pt_regs regs; 406 + }; 407 + 408 + asmlinkage __visible 409 + struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s) 410 + { 411 + /* 412 + * This is called from entry_64.S early in handling a fault 413 + * caused by a bad iret to user mode. To handle the fault 414 + * correctly, we want move our stack frame to task_pt_regs 415 + * and we want to pretend that the exception came from the 416 + * iret target. 417 + */ 418 + struct bad_iret_stack *new_stack = 419 + container_of(task_pt_regs(current), 420 + struct bad_iret_stack, regs); 421 + 422 + /* Copy the IRET target to the new stack. */ 423 + memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8); 424 + 425 + /* Copy the remainder of the stack from the current stack. */ 426 + memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip)); 427 + 428 + BUG_ON(!user_mode_vm(&new_stack->regs)); 429 + return new_stack; 430 + } 410 431 #endif 411 432 412 433 /* ··· 815 778 set_intr_gate(X86_TRAP_OLD_MF, coprocessor_segment_overrun); 816 779 set_intr_gate(X86_TRAP_TS, invalid_TSS); 817 780 set_intr_gate(X86_TRAP_NP, segment_not_present); 818 - set_intr_gate_ist(X86_TRAP_SS, &stack_segment, STACKFAULT_STACK); 781 + set_intr_gate(X86_TRAP_SS, stack_segment); 819 782 set_intr_gate(X86_TRAP_GP, general_protection); 820 783 set_intr_gate(X86_TRAP_SPURIOUS, spurious_interrupt_bug); 821 784 set_intr_gate(X86_TRAP_MF, coprocessor_error);
+10 -1
arch/x86/mm/init_64.c
··· 1123 1123 unsigned long end = (unsigned long) &__end_rodata_hpage_align; 1124 1124 unsigned long text_end = PFN_ALIGN(&__stop___ex_table); 1125 1125 unsigned long rodata_end = PFN_ALIGN(&__end_rodata); 1126 - unsigned long all_end = PFN_ALIGN(&_end); 1126 + unsigned long all_end; 1127 1127 1128 1128 printk(KERN_INFO "Write protecting the kernel read-only data: %luk\n", 1129 1129 (end - start) >> 10); ··· 1134 1134 /* 1135 1135 * The rodata/data/bss/brk section (but not the kernel text!) 1136 1136 * should also be not-executable. 1137 + * 1138 + * We align all_end to PMD_SIZE because the existing mapping 1139 + * is a full PMD. If we would align _brk_end to PAGE_SIZE we 1140 + * split the PMD and the reminder between _brk_end and the end 1141 + * of the PMD will remain mapped executable. 1142 + * 1143 + * Any PMD which was setup after the one which covers _brk_end 1144 + * has been zapped already via cleanup_highmem(). 1137 1145 */ 1146 + all_end = roundup((unsigned long)_brk_end, PMD_SIZE); 1138 1147 set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT); 1139 1148 1140 1149 rodata_test();
+10 -1
arch/x86/tools/calc_run_size.pl
··· 19 19 if ($file_offset == 0) { 20 20 $file_offset = $offset; 21 21 } elsif ($file_offset != $offset) { 22 - die ".bss and .brk lack common file offset\n"; 22 + # BFD linker shows the same file offset in ELF. 23 + # Gold linker shows them as consecutive. 24 + next if ($file_offset + $mem_size == $offset + $size); 25 + 26 + printf STDERR "file_offset: 0x%lx\n", $file_offset; 27 + printf STDERR "mem_size: 0x%lx\n", $mem_size; 28 + printf STDERR "offset: 0x%lx\n", $offset; 29 + printf STDERR "size: 0x%lx\n", $size; 30 + 31 + die ".bss and .brk are non-contiguous\n"; 23 32 } 24 33 } 25 34 }
+1 -1
drivers/acpi/device_pm.c
··· 878 878 return 0; 879 879 880 880 target_state = acpi_target_system_state(); 881 - wakeup = device_may_wakeup(dev); 881 + wakeup = device_may_wakeup(dev) && acpi_device_can_wakeup(adev); 882 882 error = acpi_device_wakeup(adev, target_state, wakeup); 883 883 if (wakeup && error) 884 884 return error;
+6 -6
drivers/clocksource/sun4i_timer.c
··· 182 182 /* Make sure timer is stopped before playing with interrupts */ 183 183 sun4i_clkevt_time_stop(0); 184 184 185 + sun4i_clockevent.cpumask = cpu_possible_mask; 186 + sun4i_clockevent.irq = irq; 187 + 188 + clockevents_config_and_register(&sun4i_clockevent, rate, 189 + TIMER_SYNC_TICKS, 0xffffffff); 190 + 185 191 ret = setup_irq(irq, &sun4i_timer_irq); 186 192 if (ret) 187 193 pr_warn("failed to setup irq %d\n", irq); ··· 195 189 /* Enable timer0 interrupt */ 196 190 val = readl(timer_base + TIMER_IRQ_EN_REG); 197 191 writel(val | TIMER_IRQ_EN(0), timer_base + TIMER_IRQ_EN_REG); 198 - 199 - sun4i_clockevent.cpumask = cpu_possible_mask; 200 - sun4i_clockevent.irq = irq; 201 - 202 - clockevents_config_and_register(&sun4i_clockevent, rate, 203 - TIMER_SYNC_TICKS, 0xffffffff); 204 192 } 205 193 CLOCKSOURCE_OF_DECLARE(sun4i, "allwinner,sun4i-a10-timer", 206 194 sun4i_timer_init);
+16 -7
drivers/dma/pl330.c
··· 271 271 #define DMAC_MODE_NS (1 << 0) 272 272 unsigned int mode; 273 273 unsigned int data_bus_width:10; /* In number of bits */ 274 - unsigned int data_buf_dep:10; 274 + unsigned int data_buf_dep:11; 275 275 unsigned int num_chan:4; 276 276 unsigned int num_peri:6; 277 277 u32 peri_ns; ··· 2336 2336 int burst_len; 2337 2337 2338 2338 burst_len = pl330->pcfg.data_bus_width / 8; 2339 - burst_len *= pl330->pcfg.data_buf_dep; 2339 + burst_len *= pl330->pcfg.data_buf_dep / pl330->pcfg.num_chan; 2340 2340 burst_len >>= desc->rqcfg.brst_size; 2341 2341 2342 2342 /* src/dst_burst_len can't be more than 16 */ ··· 2459 2459 /* Select max possible burst size */ 2460 2460 burst = pl330->pcfg.data_bus_width / 8; 2461 2461 2462 - while (burst > 1) { 2463 - if (!(len % burst)) 2464 - break; 2462 + /* 2463 + * Make sure we use a burst size that aligns with all the memcpy 2464 + * parameters because our DMA programming algorithm doesn't cope with 2465 + * transfers which straddle an entry in the DMA device's MFIFO. 2466 + */ 2467 + while ((src | dst | len) & (burst - 1)) 2465 2468 burst /= 2; 2466 - } 2467 2469 2468 2470 desc->rqcfg.brst_size = 0; 2469 2471 while (burst != (1 << desc->rqcfg.brst_size)) 2470 2472 desc->rqcfg.brst_size++; 2473 + 2474 + /* 2475 + * If burst size is smaller than bus width then make sure we only 2476 + * transfer one at a time to avoid a burst stradling an MFIFO entry. 2477 + */ 2478 + if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width) 2479 + desc->rqcfg.brst_len = 1; 2471 2480 2472 2481 desc->rqcfg.brst_len = get_burst_len(desc, len); 2473 2482 ··· 2741 2732 2742 2733 2743 2734 dev_info(&adev->dev, 2744 - "Loaded driver for PL330 DMAC-%d\n", adev->periphid); 2735 + "Loaded driver for PL330 DMAC-%x\n", adev->periphid); 2745 2736 dev_info(&adev->dev, 2746 2737 "\tDBUFF-%ux%ubytes Num_Chans-%u Num_Peri-%u Num_Events-%u\n", 2747 2738 pcfg->data_buf_dep, pcfg->data_bus_width / 8, pcfg->num_chan,
+30 -31
drivers/dma/sun6i-dma.c
··· 230 230 readl(pchan->base + DMA_CHAN_CUR_PARA)); 231 231 } 232 232 233 - static inline int convert_burst(u32 maxburst, u8 *burst) 233 + static inline s8 convert_burst(u32 maxburst) 234 234 { 235 235 switch (maxburst) { 236 236 case 1: 237 - *burst = 0; 238 - break; 237 + return 0; 239 238 case 8: 240 - *burst = 2; 241 - break; 239 + return 2; 242 240 default: 243 241 return -EINVAL; 244 242 } 245 - 246 - return 0; 247 243 } 248 244 249 - static inline int convert_buswidth(enum dma_slave_buswidth addr_width, u8 *width) 245 + static inline s8 convert_buswidth(enum dma_slave_buswidth addr_width) 250 246 { 251 247 if ((addr_width < DMA_SLAVE_BUSWIDTH_1_BYTE) || 252 248 (addr_width > DMA_SLAVE_BUSWIDTH_4_BYTES)) 253 249 return -EINVAL; 254 250 255 - *width = addr_width >> 1; 256 - return 0; 251 + return addr_width >> 1; 257 252 } 258 253 259 254 static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev, ··· 279 284 struct dma_slave_config *config) 280 285 { 281 286 u8 src_width, dst_width, src_burst, dst_burst; 282 - int ret; 283 287 284 288 if (!config) 285 289 return -EINVAL; 286 290 287 - ret = convert_burst(config->src_maxburst, &src_burst); 288 - if (ret) 289 - return ret; 291 + src_burst = convert_burst(config->src_maxburst); 292 + if (src_burst) 293 + return src_burst; 290 294 291 - ret = convert_burst(config->dst_maxburst, &dst_burst); 292 - if (ret) 293 - return ret; 295 + dst_burst = convert_burst(config->dst_maxburst); 296 + if (dst_burst) 297 + return dst_burst; 294 298 295 - ret = convert_buswidth(config->src_addr_width, &src_width); 296 - if (ret) 297 - return ret; 299 + src_width = convert_buswidth(config->src_addr_width); 300 + if (src_width) 301 + return src_width; 298 302 299 - ret = convert_buswidth(config->dst_addr_width, &dst_width); 300 - if (ret) 301 - return ret; 303 + dst_width = convert_buswidth(config->dst_addr_width); 304 + if (dst_width) 305 + return dst_width; 302 306 303 307 lli->cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) | 304 308 DMA_CHAN_CFG_SRC_WIDTH(src_width) | ··· 536 542 { 537 543 struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device); 538 544 struct sun6i_vchan *vchan = to_sun6i_vchan(chan); 539 - struct dma_slave_config *sconfig = &vchan->cfg; 540 545 struct sun6i_dma_lli *v_lli; 541 546 struct sun6i_desc *txd; 542 547 dma_addr_t p_lli; 543 - int ret; 548 + s8 burst, width; 544 549 545 550 dev_dbg(chan2dev(chan), 546 551 "%s; chan: %d, dest: %pad, src: %pad, len: %zu. flags: 0x%08lx\n", ··· 558 565 goto err_txd_free; 559 566 } 560 567 561 - ret = sun6i_dma_cfg_lli(v_lli, src, dest, len, sconfig); 562 - if (ret) 563 - goto err_dma_free; 568 + v_lli->src = src; 569 + v_lli->dst = dest; 570 + v_lli->len = len; 571 + v_lli->para = NORMAL_WAIT; 564 572 573 + burst = convert_burst(8); 574 + width = convert_buswidth(DMA_SLAVE_BUSWIDTH_4_BYTES); 565 575 v_lli->cfg |= DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) | 566 576 DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) | 567 577 DMA_CHAN_CFG_DST_LINEAR_MODE | 568 - DMA_CHAN_CFG_SRC_LINEAR_MODE; 578 + DMA_CHAN_CFG_SRC_LINEAR_MODE | 579 + DMA_CHAN_CFG_SRC_BURST(burst) | 580 + DMA_CHAN_CFG_SRC_WIDTH(width) | 581 + DMA_CHAN_CFG_DST_BURST(burst) | 582 + DMA_CHAN_CFG_DST_WIDTH(width); 569 583 570 584 sun6i_dma_lli_add(NULL, v_lli, p_lli, txd); ··· 580 580 581 581 return vchan_tx_prep(&vchan->vc, &txd->vd, flags); 582 582 583 - err_dma_free: 584 - dma_pool_free(sdev->pool, v_lli, p_lli); 585 583 err_txd_free: 586 584 kfree(txd); 587 585 return NULL; ··· 913 915 sdc->slave.device_prep_dma_memcpy = sun6i_dma_prep_dma_memcpy; 914 916 sdc->slave.device_control = sun6i_dma_control; 915 917 sdc->slave.chancnt = NR_MAX_VCHANS; 918 + sdc->slave.copy_align = 4; 916 919 917 920 sdc->slave.dev = &pdev->dev; 918 921
+8 -6
drivers/gpu/drm/i915/i915_dma.c
··· 1670 1670 goto out_regs; 1671 1671 1672 1672 if (drm_core_check_feature(dev, DRIVER_MODESET)) { 1673 - ret = i915_kick_out_vgacon(dev_priv); 1674 - if (ret) { 1675 - DRM_ERROR("failed to remove conflicting VGA console\n"); 1676 - goto out_gtt; 1677 - } 1678 - 1673 + /* WARNING: Apparently we must kick fbdev drivers before vgacon, 1674 + * otherwise the vga fbdev driver falls over. */ 1679 1675 ret = i915_kick_out_firmware_fb(dev_priv); 1680 1676 if (ret) { 1681 1677 DRM_ERROR("failed to remove conflicting framebuffer drivers\n"); 1678 + goto out_gtt; 1679 + } 1680 + 1681 + ret = i915_kick_out_vgacon(dev_priv); 1682 + if (ret) { 1683 + DRM_ERROR("failed to remove conflicting VGA console\n"); 1682 1684 goto out_gtt; 1683 1685 } 1684 1686 }
-5
drivers/gpu/drm/i915/intel_pm.c
··· 5469 5469 I915_WRITE(_3D_CHICKEN, 5470 5470 _MASKED_BIT_ENABLE(_3D_CHICKEN_HIZ_PLANE_DISABLE_MSAA_4X_SNB)); 5471 5471 5472 - /* WaSetupGtModeTdRowDispatch:snb */ 5473 - if (IS_SNB_GT1(dev)) 5474 - I915_WRITE(GEN6_GT_MODE, 5475 - _MASKED_BIT_ENABLE(GEN6_TD_FOUR_ROW_DISPATCH_DISABLE)); 5476 - 5477 5472 /* WaDisable_RenderCache_OperationalFlush:snb */ 5478 5473 I915_WRITE(CACHE_MODE_0, _MASKED_BIT_DISABLE(RC_OP_FLUSH_ENABLE)); 5479 5474
+1 -1
drivers/gpu/drm/radeon/r600_dpm.c
··· 1256 1256 (mode_info->atom_context->bios + data_offset + 1257 1257 le16_to_cpu(ext_hdr->usPowerTuneTableOffset)); 1258 1258 rdev->pm.dpm.dyn_state.cac_tdp_table->maximum_power_delivery_limit = 1259 - ppt->usMaximumPowerDeliveryLimit; 1259 + le16_to_cpu(ppt->usMaximumPowerDeliveryLimit); 1260 1260 pt = &ppt->power_tune_table; 1261 1261 } else { 1262 1262 ATOM_PPLIB_POWERTUNE_Table *ppt = (ATOM_PPLIB_POWERTUNE_Table *)
+3
drivers/gpu/drm/radeon/radeon_encoders.c
··· 179 179 (rdev->pdev->subsystem_vendor == 0x1734) && 180 180 (rdev->pdev->subsystem_device == 0x1107)) 181 181 use_bl = false; 182 + /* disable native backlight control on older asics */ 183 + else if (rdev->family < CHIP_R600) 184 + use_bl = false; 182 185 else 183 186 use_bl = true; 184 187 }
+30 -14
drivers/infiniband/ulp/isert/ib_isert.c
··· 115 115 attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS; 116 116 /* 117 117 * FIXME: Use devattr.max_sge - 2 for max_send_sge as 118 - * work-around for RDMA_READ.. 118 + * work-around for RDMA_READs with ConnectX-2. 119 + * 120 + * Also, still make sure to have at least two SGEs for 121 + * outgoing control PDU responses. 119 122 */ 120 - attr.cap.max_send_sge = device->dev_attr.max_sge - 2; 123 + attr.cap.max_send_sge = max(2, device->dev_attr.max_sge - 2); 121 124 isert_conn->max_sge = attr.cap.max_send_sge; 122 125 123 126 attr.cap.max_recv_sge = 1; ··· 228 225 struct isert_cq_desc *cq_desc; 229 226 struct ib_device_attr *dev_attr; 230 227 int ret = 0, i, j; 228 + int max_rx_cqe, max_tx_cqe; 231 229 232 230 dev_attr = &device->dev_attr; 233 231 ret = isert_query_device(ib_dev, dev_attr); 234 232 if (ret) 235 233 return ret; 234 + 235 + max_rx_cqe = min(ISER_MAX_RX_CQ_LEN, dev_attr->max_cqe); 236 + max_tx_cqe = min(ISER_MAX_TX_CQ_LEN, dev_attr->max_cqe); 236 237 237 238 /* asign function handlers */ 238 239 if (dev_attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS && ··· 279 272 isert_cq_rx_callback, 280 273 isert_cq_event_callback, 281 274 (void *)&cq_desc[i], 282 - ISER_MAX_RX_CQ_LEN, i); 275 + max_rx_cqe, i); 283 276 if (IS_ERR(device->dev_rx_cq[i])) { 284 277 ret = PTR_ERR(device->dev_rx_cq[i]); 285 278 device->dev_rx_cq[i] = NULL; ··· 291 284 isert_cq_tx_callback, 292 285 isert_cq_event_callback, 293 286 (void *)&cq_desc[i], 294 - ISER_MAX_TX_CQ_LEN, i); 287 + max_tx_cqe, i); 295 288 if (IS_ERR(device->dev_tx_cq[i])) { 296 289 ret = PTR_ERR(device->dev_tx_cq[i]); 297 290 device->dev_tx_cq[i] = NULL; ··· 810 803 complete(&isert_conn->conn_wait); 811 804 } 812 805 813 - static void 806 + static int 814 807 isert_disconnected_handler(struct rdma_cm_id *cma_id, bool disconnect) 815 808 { 816 - struct isert_conn *isert_conn = (struct isert_conn *)cma_id->context; 809 + struct isert_conn *isert_conn; 810 + 811 + if (!cma_id->qp) { 812 + struct isert_np *isert_np = cma_id->context; 813 + 814 + isert_np->np_cm_id = NULL; 815 + return -1; 816 + } 817 + 818 + isert_conn = (struct isert_conn *)cma_id->context; 817 819 818 820 isert_conn->disconnect = disconnect; 819 821 INIT_WORK(&isert_conn->conn_logout_work, isert_disconnect_work); 820 822 schedule_work(&isert_conn->conn_logout_work); 823 + 824 + return 0; 821 825 } 822 826 823 827 static int ··· 843 825 switch (event->event) { 844 826 case RDMA_CM_EVENT_CONNECT_REQUEST: 845 827 ret = isert_connect_request(cma_id, event); 828 + if (ret) 829 + pr_err("isert_cma_handler failed RDMA_CM_EVENT: 0x%08x %d\n", 830 + event->event, ret); 846 831 break; 847 832 case RDMA_CM_EVENT_ESTABLISHED: 848 833 isert_connected_handler(cma_id); ··· 855 834 case RDMA_CM_EVENT_DEVICE_REMOVAL: /* FALLTHRU */ 856 835 disconnect = true; 857 836 case RDMA_CM_EVENT_TIMEWAIT_EXIT: /* FALLTHRU */ 858 - isert_disconnected_handler(cma_id, disconnect); 837 + ret = isert_disconnected_handler(cma_id, disconnect); 859 838 break; 860 839 case RDMA_CM_EVENT_CONNECT_ERROR: 861 840 default: 862 841 pr_err("Unhandled RDMA CMA event: %d\n", event->event); 863 842 break; 864 - } 865 - 866 - if (ret != 0) { 867 - pr_err("isert_cma_handler failed RDMA_CM_EVENT: 0x%08x %d\n", 868 - event->event, ret); 869 - dump_stack(); 870 843 } 871 844 872 845 return ret; ··· 3205 3190 { 3206 3191 struct isert_np *isert_np = (struct isert_np *)np->np_context; 3207 3192 3208 - rdma_destroy_id(isert_np->np_cm_id); 3193 + if (isert_np->np_cm_id) 3194 + rdma_destroy_id(isert_np->np_cm_id); 3209 3195 3210 3196 np->np_context = NULL; 3211 3197 kfree(isert_np);
+8
drivers/infiniband/ulp/srpt/ib_srpt.c
··· 2092 2092 if (!qp_init) 2093 2093 goto out; 2094 2094 2095 + retry: 2095 2096 ch->cq = ib_create_cq(sdev->device, srpt_completion, NULL, ch, 2096 2097 ch->rq_size + srp_sq_size, 0); 2097 2098 if (IS_ERR(ch->cq)) { ··· 2116 2115 ch->qp = ib_create_qp(sdev->pd, qp_init); 2117 2116 if (IS_ERR(ch->qp)) { 2118 2117 ret = PTR_ERR(ch->qp); 2118 + if (ret == -ENOMEM) { 2119 + srp_sq_size /= 2; 2120 + if (srp_sq_size >= MIN_SRPT_SQ_SIZE) { 2121 + ib_destroy_cq(ch->cq); 2122 + goto retry; 2123 + } 2124 + } 2119 2125 printk(KERN_ERR "failed to create_qp ret= %d\n", ret); 2120 2126 goto err_destroy_cq; 2121 2127 }
+2 -1
drivers/net/bonding/bond_main.c
··· 2471 2471 bond_slave_state_change(bond); 2472 2472 if (BOND_MODE(bond) == BOND_MODE_XOR) 2473 2473 bond_update_slave_arr(bond, NULL); 2474 - } else if (do_failover) { 2474 + } 2475 + if (do_failover) { 2475 2476 block_netpoll_tx(); 2476 2477 bond_select_active_slave(bond); 2477 2478 unblock_netpoll_tx();
+2 -2
drivers/net/can/dev.c
··· 110 110 long rate; 111 111 u64 v64; 112 112 113 - /* Use CIA recommended sample points */ 113 + /* Use CiA recommended sample points */ 114 114 if (bt->sample_point) { 115 115 sampl_pt = bt->sample_point; 116 116 } else { ··· 382 382 BUG_ON(idx >= priv->echo_skb_max); 383 383 384 384 if (priv->echo_skb[idx]) { 385 - kfree_skb(priv->echo_skb[idx]); 385 + dev_kfree_skb_any(priv->echo_skb[idx]); 386 386 priv->echo_skb[idx] = NULL; 387 387 } 388 388 }
+1
drivers/net/can/m_can/Kconfig
··· 1 1 config CAN_M_CAN 2 + depends on HAS_IOMEM 2 3 tristate "Bosch M_CAN devices" 3 4 ---help--- 4 5 Say Y here if you want to support for Bosch M_CAN controller.
+166 -53
drivers/net/can/m_can/m_can.c
··· 105 105 MRAM_CFG_NUM, 106 106 }; 107 107 108 + /* Fast Bit Timing & Prescaler Register (FBTP) */ 109 + #define FBTR_FBRP_MASK 0x1f 110 + #define FBTR_FBRP_SHIFT 16 111 + #define FBTR_FTSEG1_SHIFT 8 112 + #define FBTR_FTSEG1_MASK (0xf << FBTR_FTSEG1_SHIFT) 113 + #define FBTR_FTSEG2_SHIFT 4 114 + #define FBTR_FTSEG2_MASK (0x7 << FBTR_FTSEG2_SHIFT) 115 + #define FBTR_FSJW_SHIFT 0 116 + #define FBTR_FSJW_MASK 0x3 117 + 108 118 /* Test Register (TEST) */ 109 119 #define TEST_LBCK BIT(4) 110 120 111 121 /* CC Control Register(CCCR) */ 112 - #define CCCR_TEST BIT(7) 113 - #define CCCR_MON BIT(5) 114 - #define CCCR_CCE BIT(1) 115 - #define CCCR_INIT BIT(0) 122 + #define CCCR_TEST BIT(7) 123 + #define CCCR_CMR_MASK 0x3 124 + #define CCCR_CMR_SHIFT 10 125 + #define CCCR_CMR_CANFD 0x1 126 + #define CCCR_CMR_CANFD_BRS 0x2 127 + #define CCCR_CMR_CAN 0x3 128 + #define CCCR_CME_MASK 0x3 129 + #define CCCR_CME_SHIFT 8 130 + #define CCCR_CME_CAN 0 131 + #define CCCR_CME_CANFD 0x1 132 + #define CCCR_CME_CANFD_BRS 0x2 133 + #define CCCR_TEST BIT(7) 134 + #define CCCR_MON BIT(5) 135 + #define CCCR_CCE BIT(1) 136 + #define CCCR_INIT BIT(0) 137 + #define CCCR_CANFD 0x10 116 138 117 139 /* Bit Timing & Prescaler Register (BTP) */ 118 140 #define BTR_BRP_MASK 0x3ff ··· 226 204 227 205 /* Rx Buffer / FIFO Element Size Configuration (RXESC) */ 228 206 #define M_CAN_RXESC_8BYTES 0x0 207 + #define M_CAN_RXESC_64BYTES 0x777 229 208 230 209 /* Tx Buffer Configuration(TXBC) */ 231 210 #define TXBC_NDTB_OFF 16 ··· 234 211 235 212 /* Tx Buffer Element Size Configuration(TXESC) */ 236 213 #define TXESC_TBDS_8BYTES 0x0 214 + #define TXESC_TBDS_64BYTES 0x7 237 215 238 216 /* Tx Event FIFO Con.guration (TXEFC) */ 239 217 #define TXEFC_EFS_OFF 16 ··· 243 219 /* Message RAM Configuration (in bytes) */ 244 220 #define SIDF_ELEMENT_SIZE 4 245 221 #define XIDF_ELEMENT_SIZE 8 246 - #define RXF0_ELEMENT_SIZE 16 247 - #define RXF1_ELEMENT_SIZE 16 222 + #define RXF0_ELEMENT_SIZE 72 223 + #define RXF1_ELEMENT_SIZE 72 248 224 #define RXB_ELEMENT_SIZE 16 249 225 #define TXE_ELEMENT_SIZE 8 250 - #define TXB_ELEMENT_SIZE 16 226 + #define TXB_ELEMENT_SIZE 72 251 227 252 228 /* Message RAM Elements */ 253 229 #define M_CAN_FIFO_ID 0x0 ··· 255 231 #define M_CAN_FIFO_DATA(n) (0x8 + ((n) << 2)) 256 232 257 233 /* Rx Buffer Element */ 234 + /* R0 */ 258 235 #define RX_BUF_ESI BIT(31) 259 236 #define RX_BUF_XTD BIT(30) 260 237 #define RX_BUF_RTR BIT(29) 238 + /* R1 */ 239 + #define RX_BUF_ANMF BIT(31) 240 + #define RX_BUF_EDL BIT(21) 241 + #define RX_BUF_BRS BIT(20) 261 242 262 243 /* Tx Buffer Element */ 244 + /* R0 */ 263 245 #define TX_BUF_XTD BIT(30) 264 246 #define TX_BUF_RTR BIT(29) 265 247 ··· 326 296 if (enable) { 327 297 /* enable m_can configuration */ 328 298 m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT); 299 + udelay(5); 329 300 /* CCCR.CCE can only be set/reset while CCCR.INIT = '1' */ 330 301 m_can_write(priv, M_CAN_CCCR, cccr | CCCR_INIT | CCCR_CCE); 331 302 } else { ··· 357 326 m_can_write(priv, M_CAN_ILE, 0x0); 358 327 } 359 328 360 - static void m_can_read_fifo(const struct net_device *dev, struct can_frame *cf, 361 - u32 rxfs) 329 + static void m_can_read_fifo(struct net_device *dev, u32 rxfs) 362 330 { 331 + struct net_device_stats *stats = &dev->stats; 363 332 struct m_can_priv *priv = netdev_priv(dev); 364 - u32 id, fgi; 333 + struct canfd_frame *cf; 334 + struct sk_buff *skb; 335 + u32 id, fgi, dlc; 336 + int i; 365 337 366 338 /* calculate the fifo get index for where to read data */ 367 339 fgi = (rxfs & RXFS_FGI_MASK) >> RXFS_FGI_OFF; 340 + dlc = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC); 341 + if (dlc & RX_BUF_EDL) 342 + skb = alloc_canfd_skb(dev, &cf); 343 + else 344 + skb = alloc_can_skb(dev, (struct can_frame **)&cf); 345 + if (!skb) { 346 + stats->rx_dropped++; 347 + return; 348 + } 349 + 350 + if (dlc & RX_BUF_EDL) 351 + cf->len = can_dlc2len((dlc >> 16) & 0x0F); 352 + else 353 + cf->len = get_can_dlc((dlc >> 16) & 0x0F); 354 + 368 355 id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_ID); 369 356 if (id & RX_BUF_XTD) 370 357 cf->can_id = (id & CAN_EFF_MASK) | CAN_EFF_FLAG; 371 358 else 372 359 cf->can_id = (id >> 18) & CAN_SFF_MASK; 373 360 374 - if (id & RX_BUF_RTR) { 361 + if (id & RX_BUF_ESI) { 362 + cf->flags |= CANFD_ESI; 363 + netdev_dbg(dev, "ESI Error\n"); 364 + } 365 + 366 + if (!(dlc & RX_BUF_EDL) && (id & RX_BUF_RTR)) { 375 367 cf->can_id |= CAN_RTR_FLAG; 376 368 } else { 377 - id = m_can_fifo_read(priv, fgi, M_CAN_FIFO_DLC); 378 - cf->can_dlc = get_can_dlc((id >> 16) & 0x0F); 379 - *(u32 *)(cf->data + 0) = m_can_fifo_read(priv, fgi, 380 - M_CAN_FIFO_DATA(0)); 381 - *(u32 *)(cf->data + 4) = m_can_fifo_read(priv, fgi, 382 - M_CAN_FIFO_DATA(1)); 369 + if (dlc & RX_BUF_BRS) 370 + cf->flags |= CANFD_BRS; 371 + 372 + for (i = 0; i < cf->len; i += 4) 373 + *(u32 *)(cf->data + i) = 374 + m_can_fifo_read(priv, fgi, 375 + M_CAN_FIFO_DATA(i / 4)); 383 376 } 384 377 385 378 /* acknowledge rx fifo 0 */ 386 379 m_can_write(priv, M_CAN_RXF0A, fgi); 380 + 381 + stats->rx_packets++; 382 + stats->rx_bytes += cf->len; 383 + 384 + netif_receive_skb(skb); 387 385 } 388 386 389 387 static int m_can_do_rx_poll(struct net_device *dev, int quota) 390 388 { 391 389 struct m_can_priv *priv = netdev_priv(dev); 392 - struct net_device_stats *stats = &dev->stats; 393 - struct sk_buff *skb; 394 - struct can_frame *frame; 395 390 u32 pkts = 0; 396 391 u32 rxfs; ··· 431 374 if (rxfs & RXFS_RFL) 432 375 netdev_warn(dev, "Rx FIFO 0 Message Lost\n"); 433 376 434 - skb = alloc_can_skb(dev, &frame); 435 - if (!skb) { 436 - stats->rx_dropped++; 437 - return pkts; 438 - } 439 - 440 - m_can_read_fifo(dev, frame, rxfs); 441 - 442 - stats->rx_packets++; 443 - stats->rx_bytes += frame->can_dlc; 444 - 445 - netif_receive_skb(skb); 377 + m_can_read_fifo(dev, rxfs); 446 378 447 379 quota--; 448 380 pkts++; ··· 527 481 return 1; 528 482 } 529 483 484 + static int __m_can_get_berr_counter(const struct net_device *dev, 
struct can_berr_counter *bec) 486 + { 487 + struct m_can_priv *priv = netdev_priv(dev); 488 + unsigned int ecr; 489 + 490 + ecr = m_can_read(priv, M_CAN_ECR); 491 + bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT; 492 + bec->txerr = ecr & ECR_TEC_MASK; 493 + 494 + return 0; 495 + } 496 + 530 497 static int m_can_get_berr_counter(const struct net_device *dev, 531 498 struct can_berr_counter *bec) 532 499 { 533 500 struct m_can_priv *priv = netdev_priv(dev); 534 - unsigned int ecr; 535 501 int err; 536 502 537 503 err = clk_prepare_enable(priv->hclk); ··· 556 498 return err; 557 499 } 558 500 559 - ecr = m_can_read(priv, M_CAN_ECR); 560 - bec->rxerr = (ecr & ECR_REC_MASK) >> ECR_REC_SHIFT; 561 - bec->txerr = ecr & ECR_TEC_MASK; 501 + __m_can_get_berr_counter(dev, bec); 562 502 563 503 clk_disable_unprepare(priv->cclk); 564 504 clk_disable_unprepare(priv->hclk); ··· 600 544 if (unlikely(!skb)) 601 545 return 0; 602 546 603 - m_can_get_berr_counter(dev, &bec); 547 + __m_can_get_berr_counter(dev, &bec); 604 548 605 549 switch (new_state) { 606 550 case CAN_STATE_ERROR_ACTIVE: ··· 652 596 653 597 if ((psr & PSR_EP) && 654 598 (priv->can.state != CAN_STATE_ERROR_PASSIVE)) { 655 - netdev_dbg(dev, "entered error warning state\n"); 599 + netdev_dbg(dev, "entered error passive state\n"); 656 600 work_done += m_can_handle_state_change(dev, 657 601 CAN_STATE_ERROR_PASSIVE); 658 602 } 659 603 660 604 if ((psr & PSR_BO) && 661 605 (priv->can.state != CAN_STATE_BUS_OFF)) { 662 - netdev_dbg(dev, "entered error warning state\n"); 606 + netdev_dbg(dev, "entered error bus off state\n"); 663 607 work_done += m_can_handle_state_change(dev, 664 608 CAN_STATE_BUS_OFF); 665 609 } ··· 671 615 { 672 616 if (irqstatus & IR_WDI) 673 617 netdev_err(dev, "Message RAM Watchdog event due to missing READY\n"); 674 - if (irqstatus & IR_BEU) 618 + if (irqstatus & IR_ELO) 675 619 netdev_err(dev, "Error Logging Overflow\n"); 676 620 if (irqstatus & IR_BEU) 677 621 netdev_err(dev, "Bit Error 
Uncorrected\n"); ··· 789 733 .brp_inc = 1, 790 734 }; 791 735 736 + static const struct can_bittiming_const m_can_data_bittiming_const = { 737 + .name = KBUILD_MODNAME, 738 + .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */ 739 + .tseg1_max = 16, 740 + .tseg2_min = 1, /* Time segment 2 = phase_seg2 */ 741 + .tseg2_max = 8, 742 + .sjw_max = 4, 743 + .brp_min = 1, 744 + .brp_max = 32, 745 + .brp_inc = 1, 746 + }; 747 + 792 748 static int m_can_set_bittiming(struct net_device *dev) 793 749 { 794 750 struct m_can_priv *priv = netdev_priv(dev); 795 751 const struct can_bittiming *bt = &priv->can.bittiming; 752 + const struct can_bittiming *dbt = &priv->can.data_bittiming; 796 753 u16 brp, sjw, tseg1, tseg2; 797 754 u32 reg_btp; 798 755 ··· 816 747 reg_btp = (brp << BTR_BRP_SHIFT) | (sjw << BTR_SJW_SHIFT) | 817 748 (tseg1 << BTR_TSEG1_SHIFT) | (tseg2 << BTR_TSEG2_SHIFT); 818 749 m_can_write(priv, M_CAN_BTP, reg_btp); 819 - netdev_dbg(dev, "setting BTP 0x%x\n", reg_btp); 750 + 751 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) { 752 + brp = dbt->brp - 1; 753 + sjw = dbt->sjw - 1; 754 + tseg1 = dbt->prop_seg + dbt->phase_seg1 - 1; 755 + tseg2 = dbt->phase_seg2 - 1; 756 + reg_btp = (brp << FBTR_FBRP_SHIFT) | (sjw << FBTR_FSJW_SHIFT) | 757 + (tseg1 << FBTR_FTSEG1_SHIFT) | 758 + (tseg2 << FBTR_FTSEG2_SHIFT); 759 + m_can_write(priv, M_CAN_FBTP, reg_btp); 760 + } 820 761 821 762 return 0; 822 763 } ··· 846 767 847 768 m_can_config_endisable(priv, true); 848 769 849 - /* RX Buffer/FIFO Element Size 8 bytes data field */ 850 - m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_8BYTES); 770 + /* RX Buffer/FIFO Element Size 64 bytes data field */ 771 + m_can_write(priv, M_CAN_RXESC, M_CAN_RXESC_64BYTES); 851 772 852 773 /* Accept Non-matching Frames Into FIFO 0 */ 853 774 m_can_write(priv, M_CAN_GFC, 0x0); ··· 856 777 m_can_write(priv, M_CAN_TXBC, (1 << TXBC_NDTB_OFF) | 857 778 priv->mcfg[MRAM_TXB].off); 858 779 859 - /* only support 8 bytes firstly */ 860 - m_can_write(priv, 
M_CAN_TXESC, TXESC_TBDS_8BYTES); 780 + /* support 64 bytes payload */ 781 + m_can_write(priv, M_CAN_TXESC, TXESC_TBDS_64BYTES); 861 782 862 783 m_can_write(priv, M_CAN_TXEFC, (1 << TXEFC_EFS_OFF) | 863 784 priv->mcfg[MRAM_TXE].off); ··· 872 793 RXFC_FWM_1 | priv->mcfg[MRAM_RXF1].off); 873 794 874 795 cccr = m_can_read(priv, M_CAN_CCCR); 875 - cccr &= ~(CCCR_TEST | CCCR_MON); 796 + cccr &= ~(CCCR_TEST | CCCR_MON | (CCCR_CMR_MASK << CCCR_CMR_SHIFT) | 797 + (CCCR_CME_MASK << CCCR_CME_SHIFT)); 876 798 test = m_can_read(priv, M_CAN_TEST); 877 799 test &= ~TEST_LBCK; 878 800 ··· 884 804 cccr |= CCCR_TEST; 885 805 test |= TEST_LBCK; 886 806 } 807 + 808 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) 809 + cccr |= CCCR_CME_CANFD_BRS << CCCR_CME_SHIFT; 887 810 888 811 m_can_write(priv, M_CAN_CCCR, cccr); 889 812 m_can_write(priv, M_CAN_TEST, test); ··· 952 869 953 870 priv->dev = dev; 954 871 priv->can.bittiming_const = &m_can_bittiming_const; 872 + priv->can.data_bittiming_const = &m_can_data_bittiming_const; 955 873 priv->can.do_set_mode = m_can_set_mode; 956 874 priv->can.do_get_berr_counter = m_can_get_berr_counter; 957 875 priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK | 958 876 CAN_CTRLMODE_LISTENONLY | 959 - CAN_CTRLMODE_BERR_REPORTING; 877 + CAN_CTRLMODE_BERR_REPORTING | 878 + CAN_CTRLMODE_FD; 960 879 961 880 return dev; 962 881 } ··· 1041 956 struct net_device *dev) 1042 957 { 1043 958 struct m_can_priv *priv = netdev_priv(dev); 1044 - struct can_frame *cf = (struct can_frame *)skb->data; 1045 - u32 id; 959 + struct canfd_frame *cf = (struct canfd_frame *)skb->data; 960 + u32 id, cccr; 961 + int i; 1046 962 1047 963 if (can_dropped_invalid_skb(dev, skb)) 1048 964 return NETDEV_TX_OK; ··· 1062 976 1063 977 /* message ram configuration */ 1064 978 m_can_fifo_write(priv, 0, M_CAN_FIFO_ID, id); 1065 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC, cf->can_dlc << 16); 1066 - m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(0), *(u32 *)(cf->data + 0)); 1067 - 
m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(1), *(u32 *)(cf->data + 4)); 979 + m_can_fifo_write(priv, 0, M_CAN_FIFO_DLC, can_len2dlc(cf->len) << 16); 980 + 981 + for (i = 0; i < cf->len; i += 4) 982 + m_can_fifo_write(priv, 0, M_CAN_FIFO_DATA(i / 4), 983 + *(u32 *)(cf->data + i)); 984 + 1068 985 can_put_echo_skb(skb, dev, 0); 986 + 987 + if (priv->can.ctrlmode & CAN_CTRLMODE_FD) { 988 + cccr = m_can_read(priv, M_CAN_CCCR); 989 + cccr &= ~(CCCR_CMR_MASK << CCCR_CMR_SHIFT); 990 + if (can_is_canfd_skb(skb)) { 991 + if (cf->flags & CANFD_BRS) 992 + cccr |= CCCR_CMR_CANFD_BRS << CCCR_CMR_SHIFT; 993 + else 994 + cccr |= CCCR_CMR_CANFD << CCCR_CMR_SHIFT; 995 + } else { 996 + cccr |= CCCR_CMR_CAN << CCCR_CMR_SHIFT; 997 + } 998 + m_can_write(priv, M_CAN_CCCR, cccr); 999 + } 1069 1000 1070 1001 /* enable first TX buffer to start transfer */ 1071 1002 m_can_write(priv, M_CAN_TXBTIE, 0x1); ··· 1095 992 .ndo_open = m_can_open, 1096 993 .ndo_stop = m_can_close, 1097 994 .ndo_start_xmit = m_can_start_xmit, 995 + .ndo_change_mtu = can_change_mtu, 1098 996 }; 1099 997 1100 998 static int register_m_can_dev(struct net_device *dev) ··· 1113 1009 struct resource *res; 1114 1010 void __iomem *addr; 1115 1011 u32 out_val[MRAM_CFG_LEN]; 1116 - int ret; 1012 + int i, start, end, ret; 1117 1013 1118 1014 /* message ram could be shared */ 1119 1015 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "message_ram"); ··· 1163 1059 priv->mcfg[MRAM_RXB].off, priv->mcfg[MRAM_RXB].num, 1164 1060 priv->mcfg[MRAM_TXE].off, priv->mcfg[MRAM_TXE].num, 1165 1061 priv->mcfg[MRAM_TXB].off, priv->mcfg[MRAM_TXB].num); 1062 + 1063 + /* initialize the entire Message RAM in use to avoid possible 1064 + * ECC/parity checksum errors when reading an uninitialized buffer 1065 + */ 1066 + start = priv->mcfg[MRAM_SIDF].off; 1067 + end = priv->mcfg[MRAM_TXB].off + 1068 + priv->mcfg[MRAM_TXB].num * TXB_ELEMENT_SIZE; 1069 + for (i = start; i < end; i += 4) 1070 + writel(0x0, priv->mram_base + i); 1166 1071 1167 
1072 return 0; 1168 1073 }
+1
drivers/net/can/rcar_can.c
··· 628 628 .ndo_open = rcar_can_open, 629 629 .ndo_stop = rcar_can_close, 630 630 .ndo_start_xmit = rcar_can_start_xmit, 631 + .ndo_change_mtu = can_change_mtu, 631 632 }; 632 633 633 634 static void rcar_can_rx_pkt(struct rcar_can_priv *priv)
+1 -4
drivers/net/can/sja1000/kvaser_pci.c
··· 214 214 struct net_device *dev; 215 215 struct sja1000_priv *priv; 216 216 struct kvaser_pci *board; 217 - int err, init_step; 217 + int err; 218 218 219 219 dev = alloc_sja1000dev(sizeof(struct kvaser_pci)); 220 220 if (dev == NULL) ··· 235 235 if (channel == 0) { 236 236 board->xilinx_ver = 237 237 ioread8(board->res_addr + XILINX_VERINT) >> 4; 238 - init_step = 2; 239 238 240 239 /* Assert PTADR# - we're in passive mode so the other bits are 241 240 not important */ ··· 262 263 263 264 priv->irq_flags = IRQF_SHARED; 264 265 dev->irq = pdev->irq; 265 - 266 - init_step = 4; 267 266 268 267 dev_info(&pdev->dev, "reg_base=%p conf_addr=%p irq=%d\n", 269 268 priv->reg_base, board->conf_addr, dev->irq);
+1 -2
drivers/net/can/usb/ems_usb.c
··· 434 434 if (urb->actual_length > CPC_HEADER_SIZE) { 435 435 struct ems_cpc_msg *msg; 436 436 u8 *ibuf = urb->transfer_buffer; 437 - u8 msg_count, again, start; 437 + u8 msg_count, start; 438 438 439 439 msg_count = ibuf[0] & ~0x80; 440 - again = ibuf[0] & 0x80; 441 440 442 441 start = CPC_HEADER_SIZE; 443 442
+1 -2
drivers/net/can/usb/esd_usb2.c
··· 464 464 { 465 465 struct esd_tx_urb_context *context = urb->context; 466 466 struct esd_usb2_net_priv *priv; 467 - struct esd_usb2 *dev; 468 467 struct net_device *netdev; 469 468 size_t size = sizeof(struct esd_usb2_msg); 470 469 ··· 471 472 472 473 priv = context->priv; 473 474 netdev = priv->netdev; 474 - dev = priv->usb2; 475 475 476 476 /* free up our allocated buffer */ 477 477 usb_free_coherent(urb->dev, size, ··· 1141 1143 } 1142 1144 } 1143 1145 unlink_all_urbs(dev); 1146 + kfree(dev); 1144 1147 } 1145 1148 } 1146 1149
+1
drivers/net/can/usb/gs_usb.c
··· 718 718 .ndo_open = gs_can_open, 719 719 .ndo_stop = gs_can_close, 720 720 .ndo_start_xmit = gs_can_start_xmit, 721 + .ndo_change_mtu = can_change_mtu, 721 722 }; 722 723 723 724 static struct gs_can *gs_make_candev(unsigned int channel, struct usb_interface *intf)
+3 -1
drivers/net/can/xilinx_can.c
··· 300 300 static int xcan_chip_start(struct net_device *ndev) 301 301 { 302 302 struct xcan_priv *priv = netdev_priv(ndev); 303 - u32 err, reg_msr, reg_sr_mask; 303 + u32 reg_msr, reg_sr_mask; 304 + int err; 304 305 unsigned long timeout; 305 306 306 307 /* Check if it is in reset mode */ ··· 962 961 .ndo_open = xcan_open, 963 962 .ndo_stop = xcan_close, 964 963 .ndo_start_xmit = xcan_start_xmit, 964 + .ndo_change_mtu = can_change_mtu, 965 965 }; 966 966 967 967 /**
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
··· 1082 1082 pgid = be32_to_cpu(pcmd.u.dcb.pgid.pgid); 1083 1083 1084 1084 for (i = 0; i < CXGB4_MAX_PRIORITY; i++) 1085 - pg->prio_pg[i] = (pgid >> (i * 4)) & 0xF; 1085 + pg->prio_pg[7 - i] = (pgid >> (i * 4)) & 0xF; 1086 1086 1087 1087 INIT_PORT_DCB_READ_PEER_CMD(pcmd, pi->port_id); 1088 1088 pcmd.u.dcb.pgrate.type = FW_PORT_DCB_TYPE_PGRATE;
+6
drivers/net/ethernet/emulex/benet/be_main.c
··· 4421 4421 "Disabled VxLAN offloads for UDP port %d\n", 4422 4422 be16_to_cpu(port)); 4423 4423 } 4424 + 4425 + static bool be_gso_check(struct sk_buff *skb, struct net_device *dev) 4426 + { 4427 + return vxlan_gso_check(skb); 4428 + } 4424 4429 #endif 4425 4430 4426 4431 static const struct net_device_ops be_netdev_ops = { ··· 4455 4450 #ifdef CONFIG_BE2NET_VXLAN 4456 4451 .ndo_add_vxlan_port = be_add_vxlan_port, 4457 4452 .ndo_del_vxlan_port = be_del_vxlan_port, 4453 + .ndo_gso_check = be_gso_check, 4458 4454 #endif 4459 4455 }; 4460 4456
+12 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1693 1693 mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 1694 1694 1695 1695 #ifdef CONFIG_MLX4_EN_VXLAN 1696 - if (priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_VXLAN_OFFLOADS) 1696 + if (priv->mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) 1697 1697 vxlan_get_rx_port(dev); 1698 1698 #endif 1699 1699 priv->port_up = true; ··· 2355 2355 2356 2356 queue_work(priv->mdev->workqueue, &priv->vxlan_del_task); 2357 2357 } 2358 + 2359 + static bool mlx4_en_gso_check(struct sk_buff *skb, struct net_device *dev) 2360 + { 2361 + return vxlan_gso_check(skb); 2362 + } 2358 2363 #endif 2359 2364 2360 2365 static const struct net_device_ops mlx4_netdev_ops = { ··· 2391 2386 #ifdef CONFIG_MLX4_EN_VXLAN 2392 2387 .ndo_add_vxlan_port = mlx4_en_add_vxlan_port, 2393 2388 .ndo_del_vxlan_port = mlx4_en_del_vxlan_port, 2389 + .ndo_gso_check = mlx4_en_gso_check, 2394 2390 #endif 2395 2391 }; 2396 2392 ··· 2422 2416 .ndo_rx_flow_steer = mlx4_en_filter_rfs, 2423 2417 #endif 2424 2418 .ndo_get_phys_port_id = mlx4_en_get_phys_port_id, 2419 + #ifdef CONFIG_MLX4_EN_VXLAN 2420 + .ndo_add_vxlan_port = mlx4_en_add_vxlan_port, 2421 + .ndo_del_vxlan_port = mlx4_en_del_vxlan_port, 2422 + .ndo_gso_check = mlx4_en_gso_check, 2423 + #endif 2425 2424 }; 2426 2425 2427 2426 int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
+6
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 503 503 504 504 adapter->flags |= QLCNIC_DEL_VXLAN_PORT; 505 505 } 506 + 507 + static bool qlcnic_gso_check(struct sk_buff *skb, struct net_device *dev) 508 + { 509 + return vxlan_gso_check(skb); 510 + } 506 511 #endif 507 512 508 513 static const struct net_device_ops qlcnic_netdev_ops = { ··· 531 526 #ifdef CONFIG_QLCNIC_VXLAN 532 527 .ndo_add_vxlan_port = qlcnic_add_vxlan_port, 533 528 .ndo_del_vxlan_port = qlcnic_del_vxlan_port, 529 + .ndo_gso_check = qlcnic_gso_check, 534 530 #endif 535 531 #ifdef CONFIG_NET_POLL_CONTROLLER 536 532 .ndo_poll_controller = qlcnic_poll_controller,
+3 -3
drivers/net/ethernet/ti/cpsw.c
··· 129 129 #define CPSW_VLAN_AWARE BIT(1) 130 130 #define CPSW_ALE_VLAN_AWARE 1 131 131 132 - #define CPSW_FIFO_NORMAL_MODE (0 << 15) 133 - #define CPSW_FIFO_DUAL_MAC_MODE (1 << 15) 134 - #define CPSW_FIFO_RATE_LIMIT_MODE (2 << 15) 132 + #define CPSW_FIFO_NORMAL_MODE (0 << 16) 133 + #define CPSW_FIFO_DUAL_MAC_MODE (1 << 16) 134 + #define CPSW_FIFO_RATE_LIMIT_MODE (2 << 16) 135 135 136 136 #define CPSW_INTPACEEN (0x3f << 16) 137 137 #define CPSW_INTPRESCALE_MASK (0x7FF << 0)
+8 -5
drivers/net/ieee802154/fakehard.c
··· 377 377 378 378 err = wpan_phy_register(phy); 379 379 if (err) 380 - goto out; 380 + goto err_phy_reg; 381 381 382 382 err = register_netdev(dev); 383 - if (err < 0) 384 - goto out; 383 + if (err) 384 + goto err_netdev_reg; 385 385 386 386 dev_info(&pdev->dev, "Added ieee802154 HardMAC hardware\n"); 387 387 return 0; 388 388 389 - out: 390 - unregister_netdev(dev); 389 + err_netdev_reg: 390 + wpan_phy_unregister(phy); 391 + err_phy_reg: 392 + free_netdev(dev); 393 + wpan_phy_free(phy); 391 394 return err; 392 395 } 393 396
+3 -1
drivers/net/ppp/pptp.c
··· 506 506 int len = sizeof(struct sockaddr_pppox); 507 507 struct sockaddr_pppox sp; 508 508 509 - sp.sa_family = AF_PPPOX; 509 + memset(&sp.sa_addr, 0, sizeof(sp.sa_addr)); 510 + 511 + sp.sa_family = AF_PPPOX; 510 512 sp.sa_protocol = PX_PROTO_PPTP; 511 513 sp.sa_addr.pptp = pppox_sk(sock->sk)->proto.pptp.src_addr; 512 514
+1
drivers/net/usb/qmi_wwan.c
··· 780 780 {QMI_FIXED_INTF(0x413c, 0x81a4, 8)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */ 781 781 {QMI_FIXED_INTF(0x413c, 0x81a8, 8)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card */ 782 782 {QMI_FIXED_INTF(0x413c, 0x81a9, 8)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */ 783 + {QMI_FIXED_INTF(0x03f0, 0x581d, 4)}, /* HP lt4112 LTE/HSPA+ Gobi 4G Module (Huawei me906e) */ 783 784 784 785 /* 4. Gobi 1000 devices */ 785 786 {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+37
drivers/net/virtio_net.c
··· 1673 1673 }; 1674 1674 #endif 1675 1675 1676 + static bool virtnet_fail_on_feature(struct virtio_device *vdev, 1677 + unsigned int fbit, 1678 + const char *fname, const char *dname) 1679 + { 1680 + if (!virtio_has_feature(vdev, fbit)) 1681 + return false; 1682 + 1683 + dev_err(&vdev->dev, "device advertises feature %s but not %s", 1684 + fname, dname); 1685 + 1686 + return true; 1687 + } 1688 + 1689 + #define VIRTNET_FAIL_ON(vdev, fbit, dbit) \ 1690 + virtnet_fail_on_feature(vdev, fbit, #fbit, dbit) 1691 + 1692 + static bool virtnet_validate_features(struct virtio_device *vdev) 1693 + { 1694 + if (!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ) && 1695 + (VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_RX, 1696 + "VIRTIO_NET_F_CTRL_VQ") || 1697 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_VLAN, 1698 + "VIRTIO_NET_F_CTRL_VQ") || 1699 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_GUEST_ANNOUNCE, 1700 + "VIRTIO_NET_F_CTRL_VQ") || 1701 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_MQ, "VIRTIO_NET_F_CTRL_VQ") || 1702 + VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_CTRL_MAC_ADDR, 1703 + "VIRTIO_NET_F_CTRL_VQ"))) { 1704 + return false; 1705 + } 1706 + 1707 + return true; 1708 + } 1709 + 1676 1710 static int virtnet_probe(struct virtio_device *vdev) 1677 1711 { 1678 1712 int i, err; 1679 1713 struct net_device *dev; 1680 1714 struct virtnet_info *vi; 1681 1715 u16 max_queue_pairs; 1716 + 1717 + if (!virtnet_validate_features(vdev)) 1718 + return -EINVAL; 1682 1719 1683 1720 /* Find if host supports multiqueue virtio_net device */ 1684 1721 err = virtio_cread_feature(vdev, VIRTIO_NET_F_MQ,
-6
drivers/net/vxlan.c
··· 67 67 68 68 #define VXLAN_FLAGS 0x08000000 /* struct vxlanhdr.vx_flags required value. */ 69 69 70 - /* VXLAN protocol header */ 71 - struct vxlanhdr { 72 - __be32 vx_flags; 73 - __be32 vx_vni; 74 - }; 75 - 76 70 /* UDP port for VXLAN traffic. 77 71 * The IANA assigned port is 4789, but the Linux default is 8472 78 72 * for compatibility with early adopters.
+13
drivers/net/wireless/ath/ath9k/ar9003_phy.c
··· 664 664 ah->enabled_cals |= TX_CL_CAL; 665 665 else 666 666 ah->enabled_cals &= ~TX_CL_CAL; 667 + 668 + if (AR_SREV_9340(ah) || AR_SREV_9531(ah) || AR_SREV_9550(ah)) { 669 + if (ah->is_clk_25mhz) { 670 + REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1); 671 + REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7); 672 + REG_WRITE(ah, AR_SLP32_INC, 0x0001e7ae); 673 + } else { 674 + REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x261 << 1); 675 + REG_WRITE(ah, AR_SLP32_MODE, 0x0010f400); 676 + REG_WRITE(ah, AR_SLP32_INC, 0x0001e800); 677 + } 678 + udelay(100); 679 + } 667 680 } 668 681 669 682 static void ar9003_hw_prog_ini(struct ath_hw *ah,
-13
drivers/net/wireless/ath/ath9k/hw.c
··· 861 861 udelay(RTC_PLL_SETTLE_DELAY); 862 862 863 863 REG_WRITE(ah, AR_RTC_SLEEP_CLK, AR_RTC_FORCE_DERIVED_CLK); 864 - 865 - if (AR_SREV_9340(ah) || AR_SREV_9550(ah)) { 866 - if (ah->is_clk_25mhz) { 867 - REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x17c << 1); 868 - REG_WRITE(ah, AR_SLP32_MODE, 0x0010f3d7); 869 - REG_WRITE(ah, AR_SLP32_INC, 0x0001e7ae); 870 - } else { 871 - REG_WRITE(ah, AR_RTC_DERIVED_CLK, 0x261 << 1); 872 - REG_WRITE(ah, AR_SLP32_MODE, 0x0010f400); 873 - REG_WRITE(ah, AR_SLP32_INC, 0x0001e800); 874 - } 875 - udelay(100); 876 - } 877 864 } 878 865 879 866 static void ath9k_hw_init_interrupt_masks(struct ath_hw *ah,
+6 -3
drivers/net/wireless/ath/ath9k/main.c
··· 974 974 struct ath_vif *avp; 975 975 976 976 /* 977 - * Pick the MAC address of the first interface as the new hardware 978 - * MAC address. The hardware will use it together with the BSSID mask 979 - * when matching addresses. 977 + * The hardware will use primary station addr together with the 978 + * BSSID mask when matching addresses. 980 979 */ 981 980 memset(iter_data, 0, sizeof(*iter_data)); 982 981 memset(&iter_data->mask, 0xff, ETH_ALEN); ··· 1204 1205 list_add_tail(&avp->list, &avp->chanctx->vifs); 1205 1206 } 1206 1207 1208 + ath9k_calculate_summary_state(sc, avp->chanctx); 1209 + 1207 1210 ath9k_assign_hw_queues(hw, vif); 1208 1211 1209 1212 an->sc = sc; ··· 1274 1273 ath9k_beacon_remove_slot(sc, vif); 1275 1274 1276 1275 ath_tx_node_cleanup(sc, &avp->mcast_node); 1276 + 1277 + ath9k_calculate_summary_state(sc, avp->chanctx); 1277 1278 1278 1279 mutex_unlock(&sc->mutex); 1279 1280 }
+1 -3
drivers/net/wireless/b43/phy_common.c
··· 300 300 301 301 void b43_phy_copy(struct b43_wldev *dev, u16 destreg, u16 srcreg) 302 302 { 303 - assert_mac_suspended(dev); 304 - dev->phy.ops->phy_write(dev, destreg, 305 - dev->phy.ops->phy_read(dev, srcreg)); 303 + b43_phy_write(dev, destreg, b43_phy_read(dev, srcreg)); 306 304 } 307 305 308 306 void b43_phy_mask(struct b43_wldev *dev, u16 offset, u16 mask)
+2 -2
drivers/net/wireless/brcm80211/brcmfmac/of.c
··· 40 40 return; 41 41 42 42 irq = irq_of_parse_and_map(np, 0); 43 - if (irq < 0) { 44 - brcmf_err("interrupt could not be mapped: err=%d\n", irq); 43 + if (!irq) { 44 + brcmf_err("interrupt could not be mapped\n"); 45 45 devm_kfree(dev, sdiodev->pdata); 46 46 return; 47 47 }
+1 -1
drivers/net/wireless/brcm80211/brcmfmac/pcie.c
··· 19 19 #include <linux/pci.h> 20 20 #include <linux/vmalloc.h> 21 21 #include <linux/delay.h> 22 - #include <linux/unaligned/access_ok.h> 23 22 #include <linux/interrupt.h> 24 23 #include <linux/bcma/bcma.h> 25 24 #include <linux/sched.h> 25 + #include <asm/unaligned.h> 26 26 27 27 #include <soc.h> 28 28 #include <chipcommon.h>
+4 -2
drivers/net/wireless/brcm80211/brcmfmac/usb.c
··· 669 669 goto finalize; 670 670 } 671 671 672 - if (!brcmf_usb_ioctl_resp_wait(devinfo)) 672 + if (!brcmf_usb_ioctl_resp_wait(devinfo)) { 673 + usb_kill_urb(devinfo->ctl_urb); 673 674 ret = -ETIMEDOUT; 674 - else 675 + } else { 675 676 memcpy(buffer, tmpbuf, buflen); 677 + } 676 678 677 679 finalize: 678 680 kfree(tmpbuf);
+6
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 299 299 primary_offset = ch->center_freq1 - ch->chan->center_freq; 300 300 switch (ch->width) { 301 301 case NL80211_CHAN_WIDTH_20: 302 + case NL80211_CHAN_WIDTH_20_NOHT: 302 303 ch_inf.bw = BRCMU_CHAN_BW_20; 303 304 WARN_ON(primary_offset != 0); 304 305 break; ··· 324 323 ch_inf.sb = BRCMU_CHAN_SB_LU; 325 324 } 326 325 break; 326 + case NL80211_CHAN_WIDTH_80P80: 327 + case NL80211_CHAN_WIDTH_160: 328 + case NL80211_CHAN_WIDTH_5: 329 + case NL80211_CHAN_WIDTH_10: 327 330 default: 328 331 WARN_ON_ONCE(1); 329 332 } ··· 338 333 case IEEE80211_BAND_5GHZ: 339 334 ch_inf.band = BRCMU_CHAN_BAND_5G; 340 335 break; 336 + case IEEE80211_BAND_60GHZ: 341 337 default: 342 338 WARN_ON_ONCE(1); 343 339 }
+10 -10
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 602 602 SCAN_COMPLETE_NOTIFICATION }; 603 603 int ret; 604 604 605 - if (mvm->scan_status == IWL_MVM_SCAN_NONE) 606 - return 0; 607 - 608 - if (iwl_mvm_is_radio_killed(mvm)) { 609 - ieee80211_scan_completed(mvm->hw, true); 610 - iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 611 - mvm->scan_status = IWL_MVM_SCAN_NONE; 612 - return 0; 613 - } 614 - 615 605 iwl_init_notification_wait(&mvm->notif_wait, &wait_scan_abort, 616 606 scan_abort_notif, 617 607 ARRAY_SIZE(scan_abort_notif), ··· 1390 1400 1391 1401 int iwl_mvm_cancel_scan(struct iwl_mvm *mvm) 1392 1402 { 1403 + if (mvm->scan_status == IWL_MVM_SCAN_NONE) 1404 + return 0; 1405 + 1406 + if (iwl_mvm_is_radio_killed(mvm)) { 1407 + ieee80211_scan_completed(mvm->hw, true); 1408 + iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 1409 + mvm->scan_status = IWL_MVM_SCAN_NONE; 1410 + return 0; 1411 + } 1412 + 1393 1413 if (mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_LMAC_SCAN) 1394 1414 return iwl_mvm_scan_offload_stop(mvm, true); 1395 1415 return iwl_mvm_cancel_regular_scan(mvm);
+1 -2
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 1894 1894 int reg; 1895 1895 __le32 *val; 1896 1896 1897 - prph_len += sizeof(*data) + sizeof(*prph) + 1898 - num_bytes_in_chunk; 1897 + prph_len += sizeof(**data) + sizeof(*prph) + num_bytes_in_chunk; 1899 1898 1900 1899 (*data)->type = cpu_to_le32(IWL_FW_ERROR_DUMP_PRPH); 1901 1900 (*data)->len = cpu_to_le32(sizeof(*prph) +
+18 -44
drivers/net/wireless/rt2x00/rt2x00queue.c
··· 158 158 skb_trim(skb, frame_length); 159 159 } 160 160 161 - void rt2x00queue_insert_l2pad(struct sk_buff *skb, unsigned int header_length) 161 + /* 162 + * H/W needs L2 padding between the header and the paylod if header size 163 + * is not 4 bytes aligned. 164 + */ 165 + void rt2x00queue_insert_l2pad(struct sk_buff *skb, unsigned int hdr_len) 162 166 { 163 - unsigned int payload_length = skb->len - header_length; 164 - unsigned int header_align = ALIGN_SIZE(skb, 0); 165 - unsigned int payload_align = ALIGN_SIZE(skb, header_length); 166 - unsigned int l2pad = payload_length ? L2PAD_SIZE(header_length) : 0; 167 - 168 - /* 169 - * Adjust the header alignment if the payload needs to be moved more 170 - * than the header. 171 - */ 172 - if (payload_align > header_align) 173 - header_align += 4; 174 - 175 - /* There is nothing to do if no alignment is needed */ 176 - if (!header_align) 177 - return; 178 - 179 - /* Reserve the amount of space needed in front of the frame */ 180 - skb_push(skb, header_align); 181 - 182 - /* 183 - * Move the header. 184 - */ 185 - memmove(skb->data, skb->data + header_align, header_length); 186 - 187 - /* Move the payload, if present and if required */ 188 - if (payload_length && payload_align) 189 - memmove(skb->data + header_length + l2pad, 190 - skb->data + header_length + l2pad + payload_align, 191 - payload_length); 192 - 193 - /* Trim the skb to the correct size */ 194 - skb_trim(skb, header_length + l2pad + payload_length); 195 - } 196 - 197 - void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int header_length) 198 - { 199 - /* 200 - * L2 padding is only present if the skb contains more than just the 201 - * IEEE 802.11 header. 202 - */ 203 - unsigned int l2pad = (skb->len > header_length) ? 204 - L2PAD_SIZE(header_length) : 0; 167 + unsigned int l2pad = (skb->len > hdr_len) ? 
L2PAD_SIZE(hdr_len) : 0; 205 168 206 169 if (!l2pad) 207 170 return; 208 171 209 - memmove(skb->data + l2pad, skb->data, header_length); 172 + skb_push(skb, l2pad); 173 + memmove(skb->data, skb->data + l2pad, hdr_len); 174 + } 175 + 176 + void rt2x00queue_remove_l2pad(struct sk_buff *skb, unsigned int hdr_len) 177 + { 178 + unsigned int l2pad = (skb->len > hdr_len) ? L2PAD_SIZE(hdr_len) : 0; 179 + 180 + if (!l2pad) 181 + return; 182 + 183 + memmove(skb->data + l2pad, skb->data, hdr_len); 210 184 skb_pull(skb, l2pad); 211 185 } 212 186
+12 -7
drivers/net/wireless/rtlwifi/pci.c
··· 842 842 break; 843 843 } 844 844 /* handle command packet here */ 845 - if (rtlpriv->cfg->ops->rx_command_packet(hw, stats, skb)) { 845 + if (rtlpriv->cfg->ops->rx_command_packet && 846 + rtlpriv->cfg->ops->rx_command_packet(hw, stats, skb)) { 846 847 dev_kfree_skb_any(skb); 847 848 goto end; 848 849 } ··· 1128 1127 1129 1128 __skb_queue_tail(&ring->queue, pskb); 1130 1129 1131 - rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc, true, HW_DESC_OWN, 1132 - &temp_one); 1133 - 1130 + if (rtlpriv->use_new_trx_flow) { 1131 + temp_one = 4; 1132 + rtlpriv->cfg->ops->set_desc(hw, (u8 *)pbuffer_desc, true, 1133 + HW_DESC_OWN, (u8 *)&temp_one); 1134 + } else { 1135 + rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc, true, HW_DESC_OWN, 1136 + &temp_one); 1137 + } 1134 1138 return; 1135 1139 } 1136 1140 ··· 1376 1370 ring->desc = NULL; 1377 1371 if (rtlpriv->use_new_trx_flow) { 1378 1372 pci_free_consistent(rtlpci->pdev, 1379 - sizeof(*ring->desc) * ring->entries, 1373 + sizeof(*ring->buffer_desc) * ring->entries, 1380 1374 ring->buffer_desc, ring->buffer_desc_dma); 1381 - ring->desc = NULL; 1375 + ring->buffer_desc = NULL; 1382 1376 } 1383 1377 } 1384 1378 ··· 1549 1543 true, 1550 1544 HW_DESC_TXBUFF_ADDR), 1551 1545 skb->len, PCI_DMA_TODEVICE); 1552 - ring->idx = (ring->idx + 1) % ring->entries; 1553 1546 kfree_skb(skb); 1554 1547 ring->idx = (ring->idx + 1) % ring->entries; 1555 1548 }
+5 -2
drivers/net/wireless/rtlwifi/rtl8192se/hw.c
··· 1201 1201 1202 1202 } 1203 1203 1204 + if (type != NL80211_IFTYPE_AP && 1205 + rtlpriv->mac80211.link_state < MAC80211_LINKED) 1206 + bt_msr = rtl_read_byte(rtlpriv, MSR) & ~MSR_LINK_MASK; 1204 1207 rtl_write_byte(rtlpriv, (MSR), bt_msr); 1205 1208 1206 1209 temp = rtl_read_dword(rtlpriv, TCR); ··· 1265 1262 rtl_write_dword(rtlpriv, INTA_MASK, rtlpci->irq_mask[0]); 1266 1263 /* Support Bit 32-37(Assign as Bit 0-5) interrupt setting now */ 1267 1264 rtl_write_dword(rtlpriv, INTA_MASK + 4, rtlpci->irq_mask[1] & 0x3F); 1265 + rtlpci->irq_enabled = true; 1268 1266 } 1269 1267 1270 1268 void rtl92se_disable_interrupt(struct ieee80211_hw *hw) ··· 1280 1276 rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 1281 1277 rtl_write_dword(rtlpriv, INTA_MASK, 0); 1282 1278 rtl_write_dword(rtlpriv, INTA_MASK + 4, 0); 1283 - 1284 - synchronize_irq(rtlpci->pdev->irq); 1279 + rtlpci->irq_enabled = false; 1285 1280 } 1286 1281 1287 1282 static u8 _rtl92s_set_sysclk(struct ieee80211_hw *hw, u8 data)
+2
drivers/net/wireless/rtlwifi/rtl8192se/phy.c
··· 399 399 case 2: 400 400 currentcmd = &postcommoncmd[*step]; 401 401 break; 402 + default: 403 + return true; 402 404 } 403 405 404 406 if (currentcmd->cmdid == CMDID_END) {
+16
drivers/net/wireless/rtlwifi/rtl8192se/sw.c
··· 236 236 } 237 237 } 238 238 239 + static bool rtl92se_is_tx_desc_closed(struct ieee80211_hw *hw, u8 hw_queue, 240 + u16 index) 241 + { 242 + struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 243 + struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[hw_queue]; 244 + u8 *entry = (u8 *)(&ring->desc[ring->idx]); 245 + u8 own = (u8)rtl92se_get_desc(entry, true, HW_DESC_OWN); 246 + 247 + if (own) 248 + return false; 249 + return true; 250 + } 251 + 239 252 static struct rtl_hal_ops rtl8192se_hal_ops = { 240 253 .init_sw_vars = rtl92s_init_sw_vars, 241 254 .deinit_sw_vars = rtl92s_deinit_sw_vars, ··· 282 269 .led_control = rtl92se_led_control, 283 270 .set_desc = rtl92se_set_desc, 284 271 .get_desc = rtl92se_get_desc, 272 + .is_tx_desc_closed = rtl92se_is_tx_desc_closed, 285 273 .tx_polling = rtl92se_tx_polling, 286 274 .enable_hw_sec = rtl92se_enable_hw_security_config, 287 275 .set_key = rtl92se_set_key, ··· 320 306 .maps[MAC_RCR_ACRC32] = RCR_ACRC32, 321 307 .maps[MAC_RCR_ACF] = RCR_ACF, 322 308 .maps[MAC_RCR_AAP] = RCR_AAP, 309 + .maps[MAC_HIMR] = INTA_MASK, 310 + .maps[MAC_HIMRE] = INTA_MASK + 4, 323 311 324 312 .maps[EFUSE_TEST] = REG_EFUSE_TEST, 325 313 .maps[EFUSE_CTRL] = REG_EFUSE_CTRL,
+16 -3
drivers/of/address.c
··· 450 450 return NULL; 451 451 } 452 452 453 + static int of_empty_ranges_quirk(void) 454 + { 455 + if (IS_ENABLED(CONFIG_PPC)) { 456 + /* To save cycles, we cache the result */ 457 + static int quirk_state = -1; 458 + 459 + if (quirk_state < 0) 460 + quirk_state = 461 + of_machine_is_compatible("Power Macintosh") || 462 + of_machine_is_compatible("MacRISC"); 463 + return quirk_state; 464 + } 465 + return false; 466 + } 467 + 453 468 static int of_translate_one(struct device_node *parent, struct of_bus *bus, 454 469 struct of_bus *pbus, __be32 *addr, 455 470 int na, int ns, int pna, const char *rprop) ··· 490 475 * This code is only enabled on powerpc. --gcl 491 476 */ 492 477 ranges = of_get_property(parent, rprop, &rlen); 493 - #if !defined(CONFIG_PPC) 494 - if (ranges == NULL) { 478 + if (ranges == NULL && !of_empty_ranges_quirk()) { 495 479 pr_err("OF: no ranges; cannot translate\n"); 496 480 return 1; 497 481 } 498 - #endif /* !defined(CONFIG_PPC) */ 499 482 if (ranges == NULL || rlen == 0) { 500 483 offset = of_read_number(addr, na); 501 484 memset(addr, 0, pna * 4);
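The `of_empty_ranges_quirk()` helper above trades the old compile-time `#if !defined(CONFIG_PPC)` for a runtime check whose answer is memoized in a function-local static. A minimal user-space sketch of that idiom, with a hypothetical `platform_is_legacy()` probe standing in for `of_machine_is_compatible()`:

```c
#include <assert.h>

/* Hypothetical probe standing in for of_machine_is_compatible();
 * probe_count lets us observe how often the slow path actually runs. */
static int probe_count;

static int platform_is_legacy(void)
{
	probe_count++;
	return 1; /* pretend we detected a legacy platform */
}

/* Memoize the answer in a function-local static, as
 * of_empty_ranges_quirk() does: -1 means "not yet computed". */
static int legacy_quirk(void)
{
	static int quirk_state = -1;

	if (quirk_state < 0)
		quirk_state = platform_is_legacy();
	return quirk_state;
}
```

As in the kernel helper, a race on the first call would at worst recompute the same idempotent answer, so no locking is needed.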
+1 -1
drivers/of/dynamic.c
··· 247 247 * @allocflags: Allocation flags (typically pass GFP_KERNEL) 248 248 * 249 249 * Copy a property by dynamically allocating the memory of both the 250 - * property stucture and the property name & contents. The property's 250 + * property structure and the property name & contents. The property's 251 251 * flags have the OF_DYNAMIC bit set so that we can differentiate between 252 252 * dynamically allocated properties and not. 253 253 * Returns the newly allocated property or NULL on out of memory error.
+1 -1
drivers/of/fdt.c
··· 773 773 if (offset < 0) 774 774 return -ENODEV; 775 775 776 - while (match->compatible) { 776 + while (match->compatible[0]) { 777 777 unsigned long addr; 778 778 if (fdt_node_check_compatible(fdt, offset, match->compatible)) { 779 779 match++;
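The one-character fix above matters because `compatible` is a char array embedded in the match structure: `match->compatible` is the array's address and is never NULL, so the old loop could never stop at the empty-string sentinel. A small sketch of the bug class, with hypothetical table and field names:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical match table; compatible is an embedded char array, so
 * `match->compatible` is never NULL and cannot terminate the loop --
 * only the first character of the empty sentinel string can. */
struct match_entry {
	char compatible[32];
	int data;
};

static const struct match_entry table[] = {
	{ "vendor,chip-a", 1 },
	{ "vendor,chip-b", 2 },
	{ "" }, /* sentinel: empty compatible string */
};

static int lookup(const struct match_entry *match, const char *compat)
{
	while (match->compatible[0]) { /* `match->compatible` alone is always true */
		if (strcmp(match->compatible, compat) == 0)
			return match->data;
		match++;
	}
	return -1; /* sentinel reached: no match */
}
```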
+8 -3
drivers/of/selftest.c
··· 896 896 return; 897 897 } 898 898 899 - while (last_node_index >= 0) { 899 + while (last_node_index-- > 0) { 900 900 if (nodes[last_node_index]) { 901 901 np = of_find_node_by_path(nodes[last_node_index]->full_name); 902 - if (strcmp(np->full_name, "/aliases") != 0) { 902 + if (np == nodes[last_node_index]) { 903 + if (of_aliases == np) { 904 + of_node_put(of_aliases); 905 + of_aliases = NULL; 906 + } 903 907 detach_node_and_children(np); 904 908 } else { 905 909 for_each_property_of_node(np, prop) { ··· 912 908 } 913 909 } 914 910 } 915 - last_node_index--; 916 911 } 917 912 } 918 913 ··· 924 921 res = selftest_data_add(); 925 922 if (res) 926 923 return res; 924 + if (!of_aliases) 925 + of_aliases = of_find_node_by_path("/aliases"); 927 926 928 927 np = of_find_node_by_path("/testcase-data/phandle-tests/consumer-a"); 929 928 if (!np) {
+1 -1
drivers/pci/access.c
··· 444 444 return pcie_caps_reg(dev) & PCI_EXP_FLAGS_VERS; 445 445 } 446 446 447 - static inline bool pcie_cap_has_lnkctl(const struct pci_dev *dev) 447 + bool pcie_cap_has_lnkctl(const struct pci_dev *dev) 448 448 { 449 449 int type = pci_pcie_type(dev); 450 450
+6 -1
drivers/pci/host/pci-xgene.c
··· 631 631 if (ret) 632 632 return ret; 633 633 634 - bus = pci_scan_root_bus(&pdev->dev, 0, &xgene_pcie_ops, port, &res); 634 + bus = pci_create_root_bus(&pdev->dev, 0, 635 + &xgene_pcie_ops, port, &res); 635 636 if (!bus) 636 637 return -ENOMEM; 638 + 639 + pci_scan_child_bus(bus); 640 + pci_assign_unassigned_bus_resources(bus); 641 + pci_bus_add_devices(bus); 637 642 638 643 platform_set_drvdata(pdev, port); 639 644 return 0;
+2
drivers/pci/pci.h
··· 6 6 7 7 extern const unsigned char pcie_link_speed[]; 8 8 9 + bool pcie_cap_has_lnkctl(const struct pci_dev *dev); 10 + 9 11 /* Functions internal to the PCI core code */ 10 12 11 13 int pci_create_sysfs_dev_files(struct pci_dev *pdev);
+17 -13
drivers/pci/probe.c
··· 407 407 { 408 408 struct pci_dev *dev = child->self; 409 409 u16 mem_base_lo, mem_limit_lo; 410 - unsigned long base, limit; 410 + u64 base64, limit64; 411 + dma_addr_t base, limit; 411 412 struct pci_bus_region region; 412 413 struct resource *res; 413 414 414 415 res = child->resource[2]; 415 416 pci_read_config_word(dev, PCI_PREF_MEMORY_BASE, &mem_base_lo); 416 417 pci_read_config_word(dev, PCI_PREF_MEMORY_LIMIT, &mem_limit_lo); 417 - base = ((unsigned long) mem_base_lo & PCI_PREF_RANGE_MASK) << 16; 418 - limit = ((unsigned long) mem_limit_lo & PCI_PREF_RANGE_MASK) << 16; 418 + base64 = (mem_base_lo & PCI_PREF_RANGE_MASK) << 16; 419 + limit64 = (mem_limit_lo & PCI_PREF_RANGE_MASK) << 16; 419 420 420 421 if ((mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) == PCI_PREF_RANGE_TYPE_64) { 421 422 u32 mem_base_hi, mem_limit_hi; ··· 430 429 * this, just assume they are not being used. 431 430 */ 432 431 if (mem_base_hi <= mem_limit_hi) { 433 - #if BITS_PER_LONG == 64 434 - base |= ((unsigned long) mem_base_hi) << 32; 435 - limit |= ((unsigned long) mem_limit_hi) << 32; 436 - #else 437 - if (mem_base_hi || mem_limit_hi) { 438 - dev_err(&dev->dev, "can't handle 64-bit address space for bridge\n"); 439 - return; 440 - } 441 - #endif 432 + base64 |= (u64) mem_base_hi << 32; 433 + limit64 |= (u64) mem_limit_hi << 32; 442 434 } 443 435 } 436 + 437 + base = (dma_addr_t) base64; 438 + limit = (dma_addr_t) limit64; 439 + 440 + if (base != base64) { 441 + dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n", 442 + (unsigned long long) base64); 443 + return; 444 + } 445 + 444 446 if (base <= limit) { 445 447 res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | 446 448 IORESOURCE_MEM | IORESOURCE_PREFETCH; ··· 1327 1323 ~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or); 1328 1324 1329 1325 /* Initialize Link Control Register */ 1330 - if (dev->subordinate) 1326 + if (pcie_cap_has_lnkctl(dev)) 1331 1327 pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, 1332 1328 ~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or); 1333 1329
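The rewritten window parsing above computes in `u64` unconditionally, narrows to `dma_addr_t`, and compares the round trip to detect a bridge window the target type cannot hold, replacing the old `#if BITS_PER_LONG == 64` split. The cast-and-compare idiom, sketched with `uint32_t` standing in for a 32-bit `dma_addr_t`:

```c
#include <assert.h>
#include <stdint.h>

/* Report whether a 64-bit bus address survives narrowing to a 32-bit
 * type, mirroring the check in the hunk above; uint32_t stands in for
 * a 32-bit dma_addr_t. */
static int fits_in_32bit(uint64_t wide)
{
	uint32_t narrow = (uint32_t)wide;

	/* Widen back and compare: any truncated high bits show up here. */
	return (uint64_t)narrow == wide;
}
```

When `dma_addr_t` is 64 bits wide the comparison is always true and the compiler can discard it, which is why the preprocessor conditional is no longer needed.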
+2
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
··· 828 828 if (status == CPL_ERR_RTX_NEG_ADVICE) 829 829 goto rel_skb; 830 830 831 + module_put(THIS_MODULE); 832 + 831 833 if (status && status != CPL_ERR_TCAM_FULL && 832 834 status != CPL_ERR_CONN_EXIST && 833 835 status != CPL_ERR_ARP_MISS)
+1 -1
drivers/scsi/cxgbi/libcxgbi.c
··· 816 816 read_lock_bh(&csk->callback_lock); 817 817 if (csk->user_data) 818 818 iscsi_conn_failure(csk->user_data, 819 - ISCSI_ERR_CONN_FAILED); 819 + ISCSI_ERR_TCP_CONN_CLOSE); 820 820 read_unlock_bh(&csk->callback_lock); 821 821 } 822 822 }
+1 -1
drivers/target/iscsi/iscsi_target.c
··· 3491 3491 len = sprintf(buf, "TargetAddress=" 3492 3492 "%s:%hu,%hu", 3493 3493 inaddr_any ? conn->local_ip : np->np_ip, 3494 - inaddr_any ? conn->local_port : np->np_port, 3494 + np->np_port, 3495 3495 tpg->tpgt); 3496 3496 len += 1; 3497 3497
+5 -4
drivers/target/target_core_pr.c
··· 2738 2738 struct t10_pr_registration *pr_reg, *pr_reg_tmp, *pr_reg_n, *pr_res_holder; 2739 2739 struct t10_reservation *pr_tmpl = &dev->t10_pr; 2740 2740 u32 pr_res_mapped_lun = 0; 2741 - int all_reg = 0, calling_it_nexus = 0, released_regs = 0; 2741 + int all_reg = 0, calling_it_nexus = 0; 2742 + bool sa_res_key_unmatched = sa_res_key != 0; 2742 2743 int prh_type = 0, prh_scope = 0; 2743 2744 2744 2745 if (!se_sess) ··· 2814 2813 if (!all_reg) { 2815 2814 if (pr_reg->pr_res_key != sa_res_key) 2816 2815 continue; 2816 + sa_res_key_unmatched = false; 2817 2817 2818 2818 calling_it_nexus = (pr_reg_n == pr_reg) ? 1 : 0; 2819 2819 pr_reg_nacl = pr_reg->pr_reg_nacl; ··· 2822 2820 __core_scsi3_free_registration(dev, pr_reg, 2823 2821 (preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : 2824 2822 NULL, calling_it_nexus); 2825 - released_regs++; 2826 2823 } else { 2827 2824 /* 2828 2825 * Case for any existing all registrants type ··· 2839 2838 if ((sa_res_key) && 2840 2839 (pr_reg->pr_res_key != sa_res_key)) 2841 2840 continue; 2841 + sa_res_key_unmatched = false; 2842 2842 2843 2843 calling_it_nexus = (pr_reg_n == pr_reg) ? 1 : 0; 2844 2844 if (calling_it_nexus) ··· 2850 2848 __core_scsi3_free_registration(dev, pr_reg, 2851 2849 (preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : 2852 2850 NULL, 0); 2853 - released_regs++; 2854 2851 } 2855 2852 if (!calling_it_nexus) 2856 2853 core_scsi3_ua_allocate(pr_reg_nacl, ··· 2864 2863 * registered reservation key, then the device server shall 2865 2864 * complete the command with RESERVATION CONFLICT status. 2866 2865 */ 2867 - if (!released_regs) { 2866 + if (sa_res_key_unmatched) { 2868 2867 spin_unlock(&dev->dev_reservation_lock); 2869 2868 core_scsi3_put_pr_reg(pr_reg_n); 2870 2869 return TCM_RESERVATION_CONFLICT;
+1 -1
drivers/target/target_core_transport.c
··· 2292 2292 * and let it call back once the write buffers are ready. 2293 2293 */ 2294 2294 target_add_to_state_list(cmd); 2295 - if (cmd->data_direction != DMA_TO_DEVICE) { 2295 + if (cmd->data_direction != DMA_TO_DEVICE || cmd->data_length == 0) { 2296 2296 target_execute_cmd(cmd); 2297 2297 return 0; 2298 2298 }
+24
drivers/vhost/scsi.c
··· 1312 1312 vhost_scsi_set_endpoint(struct vhost_scsi *vs, 1313 1313 struct vhost_scsi_target *t) 1314 1314 { 1315 + struct se_portal_group *se_tpg; 1315 1316 struct tcm_vhost_tport *tv_tport; 1316 1317 struct tcm_vhost_tpg *tpg; 1317 1318 struct tcm_vhost_tpg **vs_tpg; ··· 1360 1359 ret = -EEXIST; 1361 1360 goto out; 1362 1361 } 1362 + /* 1363 + * In order to ensure individual vhost-scsi configfs 1364 + * groups cannot be removed while in use by vhost ioctl, 1365 + * go ahead and take an explicit se_tpg->tpg_group.cg_item 1366 + * dependency now. 1367 + */ 1368 + se_tpg = &tpg->se_tpg; 1369 + ret = configfs_depend_item(se_tpg->se_tpg_tfo->tf_subsys, 1370 + &se_tpg->tpg_group.cg_item); 1371 + if (ret) { 1372 + pr_warn("configfs_depend_item() failed: %d\n", ret); 1373 + kfree(vs_tpg); 1374 + mutex_unlock(&tpg->tv_tpg_mutex); 1375 + goto out; 1376 + } 1363 1377 tpg->tv_tpg_vhost_count++; 1364 1378 tpg->vhost_scsi = vs; 1365 1379 vs_tpg[tpg->tport_tpgt] = tpg; ··· 1417 1401 vhost_scsi_clear_endpoint(struct vhost_scsi *vs, 1418 1402 struct vhost_scsi_target *t) 1419 1403 { 1404 + struct se_portal_group *se_tpg; 1420 1405 struct tcm_vhost_tport *tv_tport; 1421 1406 struct tcm_vhost_tpg *tpg; 1422 1407 struct vhost_virtqueue *vq; ··· 1466 1449 vs->vs_tpg[target] = NULL; 1467 1450 match = true; 1468 1451 mutex_unlock(&tpg->tv_tpg_mutex); 1452 + /* 1453 + * Release se_tpg->tpg_group.cg_item configfs dependency now 1454 + * to allow vhost-scsi WWPN se_tpg->tpg_group shutdown to occur. 1455 + */ 1456 + se_tpg = &tpg->se_tpg; 1457 + configfs_undepend_item(se_tpg->se_tpg_tfo->tf_subsys, 1458 + &se_tpg->tpg_group.cg_item); 1469 1459 } 1470 1460 if (match) { 1471 1461 for (i = 0; i < VHOST_SCSI_MAX_VQ; i++) {
+1 -1
fs/Makefile
··· 104 104 obj-$(CONFIG_AUTOFS4_FS) += autofs4/ 105 105 obj-$(CONFIG_ADFS_FS) += adfs/ 106 106 obj-$(CONFIG_FUSE_FS) += fuse/ 107 - obj-$(CONFIG_OVERLAYFS_FS) += overlayfs/ 107 + obj-$(CONFIG_OVERLAY_FS) += overlayfs/ 108 108 obj-$(CONFIG_UDF_FS) += udf/ 109 109 obj-$(CONFIG_SUN_OPENPROMFS) += openpromfs/ 110 110 obj-$(CONFIG_OMFS_FS) += omfs/
+2 -12
fs/btrfs/ctree.c
··· 80 80 { 81 81 int i; 82 82 83 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 84 - /* lockdep really cares that we take all of these spinlocks 85 - * in the right order. If any of the locks in the path are not 86 - * currently blocking, it is going to complain. So, make really 87 - * really sure by forcing the path to blocking before we clear 88 - * the path blocking. 89 - */ 90 83 if (held) { 91 84 btrfs_set_lock_blocking_rw(held, held_rw); 92 85 if (held_rw == BTRFS_WRITE_LOCK) ··· 88 95 held_rw = BTRFS_READ_LOCK_BLOCKING; 89 96 } 90 97 btrfs_set_path_blocking(p); 91 - #endif 92 98 93 99 for (i = BTRFS_MAX_LEVEL - 1; i >= 0; i--) { 94 100 if (p->nodes[i] && p->locks[i]) { ··· 99 107 } 100 108 } 101 109 102 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 103 110 if (held) 104 111 btrfs_clear_lock_blocking_rw(held, held_rw); 105 - #endif 106 112 } 107 113 108 114 /* this also releases the path */ ··· 2883 2893 } 2884 2894 p->locks[level] = BTRFS_WRITE_LOCK; 2885 2895 } else { 2886 - err = btrfs_try_tree_read_lock(b); 2896 + err = btrfs_tree_read_lock_atomic(b); 2887 2897 if (!err) { 2888 2898 btrfs_set_path_blocking(p); 2889 2899 btrfs_tree_read_lock(b); ··· 3015 3025 } 3016 3026 3017 3027 level = btrfs_header_level(b); 3018 - err = btrfs_try_tree_read_lock(b); 3028 + err = btrfs_tree_read_lock_atomic(b); 3019 3029 if (!err) { 3020 3030 btrfs_set_path_blocking(p); 3021 3031 btrfs_tree_read_lock(b);
+21 -3
fs/btrfs/locking.c
··· 128 128 } 129 129 130 130 /* 131 + * take a spinning read lock. 132 + * returns 1 if we get the read lock and 0 if we don't 133 + * this won't wait for blocking writers 134 + */ 135 + int btrfs_tree_read_lock_atomic(struct extent_buffer *eb) 136 + { 137 + if (atomic_read(&eb->blocking_writers)) 138 + return 0; 139 + 140 + read_lock(&eb->lock); 141 + if (atomic_read(&eb->blocking_writers)) { 142 + read_unlock(&eb->lock); 143 + return 0; 144 + } 145 + atomic_inc(&eb->read_locks); 146 + atomic_inc(&eb->spinning_readers); 147 + return 1; 148 + } 149 + 150 + /* 131 151 * returns 1 if we get the read lock and 0 if we don't 132 152 * this won't wait for blocking writers 133 153 */ ··· 178 158 atomic_read(&eb->blocking_readers)) 179 159 return 0; 180 160 181 - if (!write_trylock(&eb->lock)) 182 - return 0; 183 - 161 + write_lock(&eb->lock); 184 162 if (atomic_read(&eb->blocking_writers) || 185 163 atomic_read(&eb->blocking_readers)) { 186 164 write_unlock(&eb->lock);
+2
fs/btrfs/locking.h
··· 35 35 void btrfs_assert_tree_locked(struct extent_buffer *eb); 36 36 int btrfs_try_tree_read_lock(struct extent_buffer *eb); 37 37 int btrfs_try_tree_write_lock(struct extent_buffer *eb); 38 + int btrfs_tree_read_lock_atomic(struct extent_buffer *eb); 39 + 38 40 39 41 static inline void btrfs_tree_unlock_rw(struct extent_buffer *eb, int rw) 40 42 {
+1
fs/dcache.c
··· 778 778 struct dentry *parent = lock_parent(dentry); 779 779 if (likely(!dentry->d_lockref.count)) { 780 780 __dentry_kill(dentry); 781 + dput(parent); 781 782 goto restart; 782 783 } 783 784 if (parent)
+21 -21
fs/isofs/inode.c
··· 174 174 * Compute the hash for the isofs name corresponding to the dentry. 175 175 */ 176 176 static int 177 - isofs_hash_common(struct qstr *qstr, int ms) 178 - { 179 - const char *name; 180 - int len; 181 - 182 - len = qstr->len; 183 - name = qstr->name; 184 - if (ms) { 185 - while (len && name[len-1] == '.') 186 - len--; 187 - } 188 - 189 - qstr->hash = full_name_hash(name, len); 190 - 191 - return 0; 192 - } 193 - 194 - /* 195 - * Compute the hash for the isofs name corresponding to the dentry. 196 - */ 197 - static int 198 177 isofs_hashi_common(struct qstr *qstr, int ms) 199 178 { 200 179 const char *name; ··· 242 263 } 243 264 244 265 #ifdef CONFIG_JOLIET 266 + /* 267 + * Compute the hash for the isofs name corresponding to the dentry. 268 + */ 269 + static int 270 + isofs_hash_common(struct qstr *qstr, int ms) 271 + { 272 + const char *name; 273 + int len; 274 + 275 + len = qstr->len; 276 + name = qstr->name; 277 + if (ms) { 278 + while (len && name[len-1] == '.') 279 + len--; 280 + } 281 + 282 + qstr->hash = full_name_hash(name, len); 283 + 284 + return 0; 285 + } 286 + 245 287 static int 246 288 isofs_hash_ms(const struct dentry *dentry, struct qstr *qstr) 247 289 {
+1 -1
fs/lockd/svclock.c
··· 53 53 static LIST_HEAD(nlm_blocked); 54 54 static DEFINE_SPINLOCK(nlm_blocked_lock); 55 55 56 - #ifdef LOCKD_DEBUG 56 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 57 57 static const char *nlmdbg_cookie2a(const struct nlm_cookie *cookie) 58 58 { 59 59 /*
+1 -1
fs/nfs/blocklayout/blocklayout.c
··· 812 812 813 813 /* Optimize common case that writes from 0 to end of file */ 814 814 end = DIV_ROUND_UP(i_size_read(inode), PAGE_CACHE_SIZE); 815 - if (end != NFS_I(inode)->npages) { 815 + if (end != inode->i_mapping->nrpages) { 816 816 rcu_read_lock(); 817 817 end = page_cache_next_hole(mapping, idx + 1, ULONG_MAX); 818 818 rcu_read_unlock();
+1 -1
fs/nfs/callback_proc.c
··· 49 49 goto out_iput; 50 50 res->size = i_size_read(inode); 51 51 res->change_attr = delegation->change_attr; 52 - if (nfsi->npages != 0) 52 + if (nfsi->nrequests != 0) 53 53 res->change_attr++; 54 54 res->ctime = inode->i_ctime; 55 55 res->mtime = inode->i_mtime;
+1 -2
fs/nfs/filelayout/filelayoutdev.c
··· 204 204 ifdebug(FACILITY) 205 205 print_ds(ds); 206 206 207 - if (ds->ds_clp) 208 - nfs_put_client(ds->ds_clp); 207 + nfs_put_client(ds->ds_clp); 209 208 210 209 while (!list_empty(&ds->ds_addrs)) { 211 210 da = list_first_entry(&ds->ds_addrs,
+12 -12
fs/nfs/fscache.c
··· 269 269 if (!fscache_maybe_release_page(cookie, page, gfp)) 270 270 return 0; 271 271 272 - nfs_add_fscache_stats(page->mapping->host, 273 - NFSIOS_FSCACHE_PAGES_UNCACHED, 1); 272 + nfs_inc_fscache_stats(page->mapping->host, 273 + NFSIOS_FSCACHE_PAGES_UNCACHED); 274 274 } 275 275 276 276 return 1; ··· 293 293 294 294 BUG_ON(!PageLocked(page)); 295 295 fscache_uncache_page(cookie, page); 296 - nfs_add_fscache_stats(page->mapping->host, 297 - NFSIOS_FSCACHE_PAGES_UNCACHED, 1); 296 + nfs_inc_fscache_stats(page->mapping->host, 297 + NFSIOS_FSCACHE_PAGES_UNCACHED); 298 298 } 299 299 300 300 /* ··· 343 343 case 0: /* read BIO submitted (page in fscache) */ 344 344 dfprintk(FSCACHE, 345 345 "NFS: readpage_from_fscache: BIO submitted\n"); 346 - nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK, 1); 346 + nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK); 347 347 return ret; 348 348 349 349 case -ENOBUFS: /* inode not in cache */ 350 350 case -ENODATA: /* page not in cache */ 351 - nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL, 1); 351 + nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); 352 352 dfprintk(FSCACHE, 353 353 "NFS: readpage_from_fscache %d\n", ret); 354 354 return 1; 355 355 356 356 default: 357 357 dfprintk(FSCACHE, "NFS: readpage_from_fscache %d\n", ret); 358 - nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL, 1); 358 + nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL); 359 359 } 360 360 return ret; 361 361 } ··· 429 429 430 430 if (ret != 0) { 431 431 fscache_uncache_page(nfs_i_fscache(inode), page); 432 - nfs_add_fscache_stats(inode, 433 - NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL, 1); 434 - nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_UNCACHED, 1); 432 + nfs_inc_fscache_stats(inode, 433 + NFSIOS_FSCACHE_PAGES_WRITTEN_FAIL); 434 + nfs_inc_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_UNCACHED); 435 435 } else { 436 - nfs_add_fscache_stats(inode, 437 - NFSIOS_FSCACHE_PAGES_WRITTEN_OK, 1); 436 + nfs_inc_fscache_stats(inode, 437 + NFSIOS_FSCACHE_PAGES_WRITTEN_OK); 438 438 } 439 439 }
+5 -4
fs/nfs/inode.c
··· 192 192 nfs_zap_caches_locked(inode); 193 193 spin_unlock(&inode->i_lock); 194 194 } 195 + EXPORT_SYMBOL_GPL(nfs_zap_caches); 195 196 196 197 void nfs_zap_mapping(struct inode *inode, struct address_space *mapping) 197 198 { ··· 1150 1149 if ((fattr->valid & NFS_ATTR_FATTR_PRESIZE) 1151 1150 && (fattr->valid & NFS_ATTR_FATTR_SIZE) 1152 1151 && i_size_read(inode) == nfs_size_to_loff_t(fattr->pre_size) 1153 - && nfsi->npages == 0) { 1152 + && nfsi->nrequests == 0) { 1154 1153 i_size_write(inode, nfs_size_to_loff_t(fattr->size)); 1155 1154 ret |= NFS_INO_INVALID_ATTR; 1156 1155 } ··· 1193 1192 if (fattr->valid & NFS_ATTR_FATTR_SIZE) { 1194 1193 cur_size = i_size_read(inode); 1195 1194 new_isize = nfs_size_to_loff_t(fattr->size); 1196 - if (cur_size != new_isize && nfsi->npages == 0) 1195 + if (cur_size != new_isize && nfsi->nrequests == 0) 1197 1196 invalid |= NFS_INO_INVALID_ATTR|NFS_INO_REVAL_PAGECACHE; 1198 1197 } 1199 1198 ··· 1620 1619 if (new_isize != cur_isize) { 1621 1620 /* Do we perhaps have any outstanding writes, or has 1622 1621 * the file grown beyond our last write? */ 1623 - if ((nfsi->npages == 0) || new_isize > cur_isize) { 1622 + if ((nfsi->nrequests == 0) || new_isize > cur_isize) { 1624 1623 i_size_write(inode, new_isize); 1625 1624 invalid |= NFS_INO_INVALID_ATTR|NFS_INO_INVALID_DATA; 1626 1625 invalid &= ~NFS_INO_REVAL_PAGECACHE; ··· 1785 1784 INIT_LIST_HEAD(&nfsi->access_cache_entry_lru); 1786 1785 INIT_LIST_HEAD(&nfsi->access_cache_inode_lru); 1787 1786 INIT_LIST_HEAD(&nfsi->commit_info.list); 1788 - nfsi->npages = 0; 1787 + nfsi->nrequests = 0; 1789 1788 nfsi->commit_info.ncommit = 0; 1790 1789 atomic_set(&nfsi->commit_info.rpcs_out, 0); 1791 1790 atomic_set(&nfsi->silly_count, 1);
+5
fs/nfs/iostat.h
··· 55 55 { 56 56 this_cpu_add(NFS_SERVER(inode)->io_stats->fscache[stat], addend); 57 57 } 58 + static inline void nfs_inc_fscache_stats(struct inode *inode, 59 + enum nfs_stat_fscachecounters stat) 60 + { 61 + this_cpu_inc(NFS_SERVER(inode)->io_stats->fscache[stat]); 62 + } 58 63 #endif 59 64 60 65 static inline struct nfs_iostats __percpu *nfs_alloc_iostats(void)
+2
fs/nfs/nfs42.h
··· 6 6 #define __LINUX_FS_NFS_NFS4_2_H 7 7 8 8 /* nfs4.2proc.c */ 9 + int nfs42_proc_allocate(struct file *, loff_t, loff_t); 10 + int nfs42_proc_deallocate(struct file *, loff_t, loff_t); 9 11 loff_t nfs42_proc_llseek(struct file *, loff_t, int); 10 12 11 13 /* nfs4.2xdr.h */
+76 -1
fs/nfs/nfs42proc.c
··· 32 32 return ret; 33 33 } 34 34 35 + static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, 36 + loff_t offset, loff_t len) 37 + { 38 + struct inode *inode = file_inode(filep); 39 + struct nfs42_falloc_args args = { 40 + .falloc_fh = NFS_FH(inode), 41 + .falloc_offset = offset, 42 + .falloc_length = len, 43 + }; 44 + struct nfs42_falloc_res res; 45 + struct nfs_server *server = NFS_SERVER(inode); 46 + int status; 47 + 48 + msg->rpc_argp = &args; 49 + msg->rpc_resp = &res; 50 + 51 + status = nfs42_set_rw_stateid(&args.falloc_stateid, filep, FMODE_WRITE); 52 + if (status) 53 + return status; 54 + 55 + return nfs4_call_sync(server->client, server, msg, 56 + &args.seq_args, &res.seq_res, 0); 57 + } 58 + 59 + static int nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep, 60 + loff_t offset, loff_t len) 61 + { 62 + struct nfs_server *server = NFS_SERVER(file_inode(filep)); 63 + struct nfs4_exception exception = { }; 64 + int err; 65 + 66 + do { 67 + err = _nfs42_proc_fallocate(msg, filep, offset, len); 68 + if (err == -ENOTSUPP) 69 + return -EOPNOTSUPP; 70 + err = nfs4_handle_exception(server, err, &exception); 71 + } while (exception.retry); 72 + 73 + return err; 74 + } 75 + 76 + int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len) 77 + { 78 + struct rpc_message msg = { 79 + .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE], 80 + }; 81 + struct inode *inode = file_inode(filep); 82 + int err; 83 + 84 + if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE)) 85 + return -EOPNOTSUPP; 86 + 87 + err = nfs42_proc_fallocate(&msg, filep, offset, len); 88 + if (err == -EOPNOTSUPP) 89 + NFS_SERVER(inode)->caps &= ~NFS_CAP_ALLOCATE; 90 + return err; 91 + } 92 + 93 + int nfs42_proc_deallocate(struct file *filep, loff_t offset, loff_t len) 94 + { 95 + struct rpc_message msg = { 96 + .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_DEALLOCATE], 97 + }; 98 + struct inode *inode = file_inode(filep); 99 + int err; 100 + 101 + if (!nfs_server_capable(inode, NFS_CAP_DEALLOCATE)) 102 + return -EOPNOTSUPP; 103 + 104 + err = nfs42_proc_fallocate(&msg, filep, offset, len); 105 + if (err == -EOPNOTSUPP) 106 + NFS_SERVER(inode)->caps &= ~NFS_CAP_DEALLOCATE; 107 + return err; 108 + } 109 + 35 110 loff_t nfs42_proc_llseek(struct file *filep, loff_t offset, int whence) 36 111 { 37 112 struct inode *inode = file_inode(filep); ··· 125 50 struct nfs_server *server = NFS_SERVER(inode); 126 51 int status; 127 52 128 - if (!(server->caps & NFS_CAP_SEEK)) 53 + if (!nfs_server_capable(inode, NFS_CAP_SEEK)) 129 54 return -ENOTSUPP; 130 55 131 56 status = nfs42_set_rw_stateid(&args.sa_stateid, filep, FMODE_READ);
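Both wrappers in the hunk above follow the same capability-fallback pattern: gate the call on a capability bit and clear the bit when the server answers EOPNOTSUPP, so later calls fail fast without a round trip. A self-contained sketch with a fake RPC that never supports the operation (all names hypothetical):

```c
#include <errno.h>

#define CAP_ALLOCATE 0x1u

/* Hypothetical state: one capability bit and a counter of RPCs that
 * actually went to the wire. */
static unsigned int server_caps = CAP_ALLOCATE;
static int rpc_count;

/* Fake RPC standing in for the ALLOCATE compound; this server never
 * implements the operation. */
static int allocate_rpc(void)
{
	rpc_count++;
	return -EOPNOTSUPP;
}

/* Gate on the capability bit and clear it on EOPNOTSUPP, as the
 * wrappers above do, so later callers skip the round trip. */
static int proc_allocate(void)
{
	int err;

	if (!(server_caps & CAP_ALLOCATE))
		return -EOPNOTSUPP;
	err = allocate_rpc();
	if (err == -EOPNOTSUPP)
		server_caps &= ~CAP_ALLOCATE;
	return err;
}
```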
+139
fs/nfs/nfs42xdr.c
··· 4 4 #ifndef __LINUX_FS_NFS_NFS4_2XDR_H 5 5 #define __LINUX_FS_NFS_NFS4_2XDR_H 6 6 7 + #define encode_fallocate_maxsz (encode_stateid_maxsz + \ 8 + 2 /* offset */ + \ 9 + 2 /* length */) 10 + #define encode_allocate_maxsz (op_encode_hdr_maxsz + \ 11 + encode_fallocate_maxsz) 12 + #define decode_allocate_maxsz (op_decode_hdr_maxsz) 13 + #define encode_deallocate_maxsz (op_encode_hdr_maxsz + \ 14 + encode_fallocate_maxsz) 15 + #define decode_deallocate_maxsz (op_decode_hdr_maxsz) 7 16 #define encode_seek_maxsz (op_encode_hdr_maxsz + \ 8 17 encode_stateid_maxsz + \ 9 18 2 /* offset */ + \ ··· 23 14 2 /* offset */ + \ 24 15 2 /* length */) 25 16 17 + #define NFS4_enc_allocate_sz (compound_encode_hdr_maxsz + \ 18 + encode_putfh_maxsz + \ 19 + encode_allocate_maxsz) 20 + #define NFS4_dec_allocate_sz (compound_decode_hdr_maxsz + \ 21 + decode_putfh_maxsz + \ 22 + decode_allocate_maxsz) 23 + #define NFS4_enc_deallocate_sz (compound_encode_hdr_maxsz + \ 24 + encode_putfh_maxsz + \ 25 + encode_deallocate_maxsz) 26 + #define NFS4_dec_deallocate_sz (compound_decode_hdr_maxsz + \ 27 + decode_putfh_maxsz + \ 28 + decode_deallocate_maxsz) 26 29 #define NFS4_enc_seek_sz (compound_encode_hdr_maxsz + \ 27 30 encode_putfh_maxsz + \ 28 31 encode_seek_maxsz) ··· 42 21 decode_putfh_maxsz + \ 43 22 decode_seek_maxsz) 44 23 24 + 25 + static void encode_fallocate(struct xdr_stream *xdr, 26 + struct nfs42_falloc_args *args) 27 + { 28 + encode_nfs4_stateid(xdr, &args->falloc_stateid); 29 + encode_uint64(xdr, args->falloc_offset); 30 + encode_uint64(xdr, args->falloc_length); 31 + } 32 + 33 + static void encode_allocate(struct xdr_stream *xdr, 34 + struct nfs42_falloc_args *args, 35 + struct compound_hdr *hdr) 36 + { 37 + encode_op_hdr(xdr, OP_ALLOCATE, decode_allocate_maxsz, hdr); 38 + encode_fallocate(xdr, args); 39 + } 40 + 41 + static void encode_deallocate(struct xdr_stream *xdr, 42 + struct nfs42_falloc_args *args, 43 + struct compound_hdr *hdr) 44 + { 45 + encode_op_hdr(xdr, OP_DEALLOCATE, decode_deallocate_maxsz, hdr); 46 + encode_fallocate(xdr, args); 47 + } 45 48 static void encode_seek(struct xdr_stream *xdr, 46 49 struct nfs42_seek_args *args, ··· 75 30 encode_nfs4_stateid(xdr, &args->sa_stateid); 76 31 encode_uint64(xdr, args->sa_offset); 77 32 encode_uint32(xdr, args->sa_what); 33 + } 34 + 35 + /* 36 + * Encode ALLOCATE request 37 + */ 38 + static void nfs4_xdr_enc_allocate(struct rpc_rqst *req, 39 + struct xdr_stream *xdr, 40 + struct nfs42_falloc_args *args) 41 + { 42 + struct compound_hdr hdr = { 43 + .minorversion = nfs4_xdr_minorversion(&args->seq_args), 44 + }; 45 + 46 + encode_compound_hdr(xdr, req, &hdr); 47 + encode_sequence(xdr, &args->seq_args, &hdr); 48 + encode_putfh(xdr, args->falloc_fh, &hdr); 49 + encode_allocate(xdr, args, &hdr); 50 + encode_nops(&hdr); 51 + } 52 + 53 + /* 54 + * Encode DEALLOCATE request 55 + */ 56 + static void nfs4_xdr_enc_deallocate(struct rpc_rqst *req, 57 + struct xdr_stream *xdr, 58 + struct nfs42_falloc_args *args) 59 + { 60 + struct compound_hdr hdr = { 61 + .minorversion = nfs4_xdr_minorversion(&args->seq_args), 62 + }; 63 + 64 + encode_compound_hdr(xdr, req, &hdr); 65 + encode_sequence(xdr, &args->seq_args, &hdr); 66 + encode_putfh(xdr, args->falloc_fh, &hdr); 67 + encode_deallocate(xdr, args, &hdr); 68 + encode_nops(&hdr); 78 69 } 79 70 80 71 /* ··· 129 48 encode_putfh(xdr, args->sa_fh, &hdr); 130 49 encode_seek(xdr, args, &hdr); 131 50 encode_nops(&hdr); 51 + } 52 + 53 + static int decode_allocate(struct xdr_stream *xdr, struct nfs42_falloc_res *res) 54 + { 55 + return decode_op_hdr(xdr, OP_ALLOCATE); 56 + } 57 + 58 + static int decode_deallocate(struct xdr_stream *xdr, struct nfs42_falloc_res *res) 59 + { 60 + return decode_op_hdr(xdr, OP_DEALLOCATE); 132 61 } 133 62 134 63 static int decode_seek(struct xdr_stream *xdr, struct nfs42_seek_res *res) ··· 161 70 out_overflow: 162 71 print_overflow_msg(__func__, xdr); 163 72 return -EIO; 73 + } 74 + 75 + /* 76 + * Decode ALLOCATE request 77 + */ 78 + static int nfs4_xdr_dec_allocate(struct rpc_rqst *rqstp, 79 + struct xdr_stream *xdr, 80 + struct nfs42_falloc_res *res) 81 + { 82 + struct compound_hdr hdr; 83 + int status; 84 + 85 + status = decode_compound_hdr(xdr, &hdr); 86 + if (status) 87 + goto out; 88 + status = decode_sequence(xdr, &res->seq_res, rqstp); 89 + if (status) 90 + goto out; 91 + status = decode_putfh(xdr); 92 + if (status) 93 + goto out; 94 + status = decode_allocate(xdr, res); 95 + out: 96 + return status; 97 + } 98 + 99 + /* 100 + * Decode DEALLOCATE request 101 + */ 102 + static int nfs4_xdr_dec_deallocate(struct rpc_rqst *rqstp, 103 + struct xdr_stream *xdr, 104 + struct nfs42_falloc_res *res) 105 + { 106 + struct compound_hdr hdr; 107 + int status; 108 + 109 + status = decode_compound_hdr(xdr, &hdr); 110 + if (status) 111 + goto out; 112 + status = decode_sequence(xdr, &res->seq_res, rqstp); 113 + if (status) 114 + goto out; 115 + status = decode_putfh(xdr); 116 + if (status) 117 + goto out; 118 + status = decode_deallocate(xdr, res); 119 + out: 120 + return status; 164 121 } 165 122 166 123 /*
+1
fs/nfs/nfs4_fs.h
··· 226 226 const struct nfs4_fs_locations *locations); 227 227 228 228 /* nfs4proc.c */ 229 + extern int nfs4_handle_exception(struct nfs_server *, int, struct nfs4_exception *); 229 230 extern int nfs4_call_sync(struct rpc_clnt *, struct nfs_server *, 230 231 struct rpc_message *, struct nfs4_sequence_args *, 231 232 struct nfs4_sequence_res *, int);
+19 -27
fs/nfs/nfs4client.c
··· 241 241 */ 242 242 static int nfs4_init_callback(struct nfs_client *clp) 243 243 { 244 + struct rpc_xprt *xprt; 244 245 int error; 245 246 246 - if (clp->rpc_ops->version == 4) { 247 - struct rpc_xprt *xprt; 247 248 248 249 xprt = rcu_dereference_raw(clp->cl_rpcclient->cl_xprt); 250 - 251 - if (nfs4_has_session(clp)) { 252 - error = xprt_setup_backchannel(xprt, 253 - NFS41_BC_MIN_CALLBACKS); 254 - if (error < 0) 255 - return error; 256 - } 257 - 258 - error = nfs_callback_up(clp->cl_mvops->minor_version, xprt); 259 - if (error < 0) { 260 - dprintk("%s: failed to start callback. Error = %d\n", 261 - __func__, error); 249 + if (nfs4_has_session(clp)) { 250 + error = xprt_setup_backchannel(xprt, NFS41_BC_MIN_CALLBACKS); 251 + if (error < 0) 262 252 return error; 263 - } 264 - __set_bit(NFS_CS_CALLBACK, &clp->cl_res_state); 265 253 } 254 + 255 + error = nfs_callback_up(clp->cl_mvops->minor_version, xprt); 256 + if (error < 0) { 257 + dprintk("%s: failed to start callback. Error = %d\n", 258 + __func__, error); 259 + return error; 260 + } 261 + __set_bit(NFS_CS_CALLBACK, &clp->cl_res_state); 262 + 266 263 return 0; 267 264 } 268 265 ··· 495 498 atomic_inc(&pos->cl_count); 496 499 spin_unlock(&nn->nfs_client_lock); 497 500 498 - if (prev) 499 - nfs_put_client(prev); 501 + nfs_put_client(prev); 500 502 prev = pos; 501 503 502 504 status = nfs_wait_client_init_complete(pos); ··· 513 517 atomic_inc(&pos->cl_count); 514 518 spin_unlock(&nn->nfs_client_lock); 515 519 516 - if (prev) 517 - nfs_put_client(prev); 520 + nfs_put_client(prev); 518 521 prev = pos; 519 522 520 523 status = nfs4_proc_setclientid_confirm(pos, &clid, cred); ··· 544 549 545 550 /* No match found. The server lost our clientid */ 546 551 out: 547 - if (prev) 548 - nfs_put_client(prev); 552 + nfs_put_client(prev); 549 553 dprintk("NFS: <-- %s status = %d\n", __func__, status); 550 554 return status; 551 555 } ··· 635 641 atomic_inc(&pos->cl_count); 636 642 spin_unlock(&nn->nfs_client_lock); 637 643 638 - if (prev) 639 - nfs_put_client(prev); 644 + nfs_put_client(prev); 640 645 prev = pos; 641 646 642 647 status = nfs_wait_client_init_complete(pos); ··· 668 675 /* No matching nfs_client found. */ 669 676 spin_unlock(&nn->nfs_client_lock); 670 677 dprintk("NFS: <-- %s status = %d\n", __func__, status); 671 - if (prev) 672 - nfs_put_client(prev); 678 + nfs_put_client(prev); 673 679 return status; 674 680 } 675 681 #endif /* CONFIG_NFS_V4_1 */
+31
fs/nfs/nfs4file.c
··· 3 3 * 4 4 * Copyright (C) 1992 Rick Sladkey 5 5 */ 6 + #include <linux/fs.h> 7 + #include <linux/falloc.h> 6 8 #include <linux/nfs_fs.h> 7 9 #include "internal.h" 8 10 #include "fscache.h" ··· 136 134 return nfs_file_llseek(filep, offset, whence); 137 135 } 138 136 } 137 + 138 + static long nfs42_fallocate(struct file *filep, int mode, loff_t offset, loff_t len) 139 + { 140 + struct inode *inode = file_inode(filep); 141 + long ret; 142 + 143 + if (!S_ISREG(inode->i_mode)) 144 + return -EOPNOTSUPP; 145 + 146 + if ((mode != 0) && (mode != (FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE))) 147 + return -EOPNOTSUPP; 148 + 149 + ret = inode_newsize_ok(inode, offset + len); 150 + if (ret < 0) 151 + return ret; 152 + 153 + mutex_lock(&inode->i_mutex); 154 + if (mode & FALLOC_FL_PUNCH_HOLE) 155 + ret = nfs42_proc_deallocate(filep, offset, len); 156 + else 157 + ret = nfs42_proc_allocate(filep, offset, len); 158 + mutex_unlock(&inode->i_mutex); 159 + 160 + nfs_zap_caches(inode); 161 + return ret; 162 + } 139 163 #endif /* CONFIG_NFS_V4_2 */ 140 164 141 165 const struct file_operations nfs4_file_operations = { ··· 183 155 .flock = nfs_flock, 184 156 .splice_read = nfs_file_splice_read, 185 157 .splice_write = iter_file_splice_write, 158 + #ifdef CONFIG_NFS_V4_2 159 + .fallocate = nfs42_fallocate, 160 + #endif /* CONFIG_NFS_V4_2 */ 186 161 .check_flags = nfs_check_flags, 187 162 .setlease = simple_nosetlease, 188 163 };
+6 -6
fs/nfs/nfs4proc.c
··· 158 158 return -EACCES; 159 159 case -NFS4ERR_MINOR_VERS_MISMATCH: 160 160 return -EPROTONOSUPPORT; 161 - case -NFS4ERR_ACCESS: 162 - return -EACCES; 163 161 case -NFS4ERR_FILE_OPEN: 164 162 return -EBUSY; 165 163 default: ··· 342 344 /* This is the error handling routine for processes that are allowed 343 345 * to sleep. 344 346 */ 345 - static int nfs4_handle_exception(struct nfs_server *server, int errorcode, struct nfs4_exception *exception) 347 + int nfs4_handle_exception(struct nfs_server *server, int errorcode, struct nfs4_exception *exception) 346 348 { 347 349 struct nfs_client *clp = server->nfs_client; 348 350 struct nfs4_state *state = exception->state; ··· 7702 7704 7703 7705 dprintk("--> %s\n", __func__); 7704 7706 7707 + /* nfs4_layoutget_release calls pnfs_put_layout_hdr */ 7708 + pnfs_get_layout_hdr(NFS_I(inode)->layout); 7709 + 7705 7710 lgp->args.layout.pages = nfs4_alloc_pages(max_pages, gfp_flags); 7706 7711 if (!lgp->args.layout.pages) { 7707 7712 nfs4_layoutget_release(lgp); ··· 7716 7715 lgp->res.layoutp = &lgp->args.layout; 7717 7716 lgp->res.seq_res.sr_slot = NULL; 7718 7717 nfs4_init_sequence(&lgp->args.seq_args, &lgp->res.seq_res, 0); 7719 - 7720 - /* nfs4_layoutget_release calls pnfs_put_layout_hdr */ 7721 - pnfs_get_layout_hdr(NFS_I(inode)->layout); 7722 7718 7723 7719 task = rpc_run_task(&task_setup_data); 7724 7720 if (IS_ERR(task)) ··· 8424 8426 | NFS_CAP_POSIX_LOCK 8425 8427 | NFS_CAP_STATEID_NFSV41 8426 8428 | NFS_CAP_ATOMIC_OPEN_V1 8429 + | NFS_CAP_ALLOCATE 8430 + | NFS_CAP_DEALLOCATE 8427 8431 | NFS_CAP_SEEK, 8428 8432 .init_client = nfs41_init_client, 8429 8433 .shutdown_client = nfs41_shutdown_client,
+8 -4
fs/nfs/nfs4xdr.c
··· 141 141 XDR_QUADLEN(NFS4_VERIFIER_SIZE) + \ 142 142 XDR_QUADLEN(NFS4_SETCLIENTID_NAMELEN) + \ 143 143 1 /* sc_prog */ + \ 144 - XDR_QUADLEN(RPCBIND_MAXNETIDLEN) + \ 145 - XDR_QUADLEN(RPCBIND_MAXUADDRLEN) + \ 144 + 1 + XDR_QUADLEN(RPCBIND_MAXNETIDLEN) + \ 145 + 1 + XDR_QUADLEN(RPCBIND_MAXUADDRLEN) + \ 146 146 1) /* sc_cb_ident */ 147 147 #define decode_setclientid_maxsz \ 148 148 (op_decode_hdr_maxsz + \ 149 - 2 + \ 150 - 1024) /* large value for CLID_INUSE */ 149 + 2 /* clientid */ + \ 150 + XDR_QUADLEN(NFS4_VERIFIER_SIZE) + \ 151 + 1 + XDR_QUADLEN(RPCBIND_MAXNETIDLEN) + \ 152 + 1 + XDR_QUADLEN(RPCBIND_MAXUADDRLEN)) 151 153 #define encode_setclientid_confirm_maxsz \ 152 154 (op_encode_hdr_maxsz + \ 153 155 3 + (NFS4_VERIFIER_SIZE >> 2)) ··· 7396 7394 #endif /* CONFIG_NFS_V4_1 */ 7397 7395 #ifdef CONFIG_NFS_V4_2 7398 7396 PROC(SEEK, enc_seek, dec_seek), 7397 + PROC(ALLOCATE, enc_allocate, dec_allocate), 7398 + PROC(DEALLOCATE, enc_deallocate, dec_deallocate), 7399 7399 #endif /* CONFIG_NFS_V4_2 */ 7400 7400 }; 7401 7401
+8 -3
fs/nfs/pagelist.c
··· 258 258 static inline void 259 259 nfs_page_group_init(struct nfs_page *req, struct nfs_page *prev) 260 260 { 261 + struct inode *inode; 261 262 WARN_ON_ONCE(prev == req); 262 263 263 264 if (!prev) { ··· 277 276 * nfs_page_group_destroy is called */ 278 277 kref_get(&req->wb_head->wb_kref); 279 278 280 - /* grab extra ref if head request has extra ref from 281 - * the write/commit path to handle handoff between write 282 - * and commit lists */ 279 + /* grab extra ref and bump the request count if head request 280 + * has extra ref from the write/commit path to handle handoff 281 + * between write and commit lists. */ 283 282 if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) { 283 + inode = page_file_mapping(req->wb_page)->host; 284 284 set_bit(PG_INODE_REF, &req->wb_flags); 285 285 kref_get(&req->wb_kref); 286 + spin_lock(&inode->i_lock); 287 + NFS_I(inode)->nrequests++; 288 + spin_unlock(&inode->i_lock); 286 289 } 287 290 } 288 291 }
+1 -1
fs/nfs/read.c
··· 269 269 dprintk("NFS: nfs_readpage (%p %ld@%lu)\n", 270 270 page, PAGE_CACHE_SIZE, page_file_index(page)); 271 271 nfs_inc_stats(inode, NFSIOS_VFSREADPAGE); 272 - nfs_add_stats(inode, NFSIOS_READPAGES, 1); 272 + nfs_inc_stats(inode, NFSIOS_READPAGES); 273 273 274 274 /* 275 275 * Try to flush any pending writes to the file..
+13 -6
fs/nfs/write.c
··· 575 575 int ret; 576 576 577 577 nfs_inc_stats(inode, NFSIOS_VFSWRITEPAGE); 578 - nfs_add_stats(inode, NFSIOS_WRITEPAGES, 1); 578 + nfs_inc_stats(inode, NFSIOS_WRITEPAGES); 579 579 580 580 nfs_pageio_cond_complete(pgio, page_file_index(page)); 581 581 ret = nfs_page_async_flush(pgio, page, wbc->sync_mode == WB_SYNC_NONE); ··· 670 670 nfs_lock_request(req); 671 671 672 672 spin_lock(&inode->i_lock); 673 - if (!nfsi->npages && NFS_PROTO(inode)->have_delegation(inode, FMODE_WRITE)) 673 + if (!nfsi->nrequests && 674 + NFS_PROTO(inode)->have_delegation(inode, FMODE_WRITE)) 674 675 inode->i_version++; 675 676 /* 676 677 * Swap-space should not get truncated. Hence no need to plug the race ··· 682 681 SetPagePrivate(req->wb_page); 683 682 set_page_private(req->wb_page, (unsigned long)req); 684 683 } 685 - nfsi->npages++; 684 + nfsi->nrequests++; 686 685 /* this a head request for a page group - mark it as having an 687 - * extra reference so sub groups can follow suit */ 686 + * extra reference so sub groups can follow suit. 687 + * This flag also informs pgio layer when to bump nrequests when 688 + * adding subrequests. */ 688 689 WARN_ON(test_and_set_bit(PG_INODE_REF, &req->wb_flags)); 689 690 kref_get(&req->wb_kref); 690 691 spin_unlock(&inode->i_lock); ··· 712 709 wake_up_page(head->wb_page, PG_private); 713 710 clear_bit(PG_MAPPED, &head->wb_flags); 714 711 } 715 - nfsi->npages--; 712 + nfsi->nrequests--; 713 + spin_unlock(&inode->i_lock); 714 + } else { 715 + spin_lock(&inode->i_lock); 716 + nfsi->nrequests--; 716 717 spin_unlock(&inode->i_lock); 717 718 } 718 719 ··· 1742 1735 /* Don't commit yet if this is a non-blocking flush and there 1743 1736 * are a lot of outstanding writes for this mapping. 1744 1737 */ 1745 - if (nfsi->commit_info.ncommit <= (nfsi->npages >> 1)) 1738 + if (nfsi->commit_info.ncommit <= (nfsi->nrequests >> 1)) 1746 1739 goto out_mark_dirty; 1747 1740 1748 1741 /* don't wait for the COMMIT response */
+1 -1
fs/overlayfs/Kconfig
··· 1 - config OVERLAYFS_FS 1 + config OVERLAY_FS 2 2 tristate "Overlay filesystem support" 3 3 help 4 4 An overlay filesystem combines two filesystems - an 'upper' filesystem
+2 -2
fs/overlayfs/Makefile
··· 2 2 # Makefile for the overlay filesystem. 3 3 # 4 4 5 - obj-$(CONFIG_OVERLAYFS_FS) += overlayfs.o 5 + obj-$(CONFIG_OVERLAY_FS) += overlay.o 6 6 7 - overlayfs-objs := super.o inode.o dir.o readdir.o copy_up.o 7 + overlay-objs := super.o inode.o dir.o readdir.o copy_up.o
+19 -12
fs/overlayfs/dir.c
··· 284 284 return ERR_PTR(err); 285 285 } 286 286 287 - static struct dentry *ovl_check_empty_and_clear(struct dentry *dentry, 288 - enum ovl_path_type type) 287 + static struct dentry *ovl_check_empty_and_clear(struct dentry *dentry) 289 288 { 290 289 int err; 291 290 struct dentry *ret = NULL; ··· 293 294 err = ovl_check_empty_dir(dentry, &list); 294 295 if (err) 295 296 ret = ERR_PTR(err); 296 - else if (type == OVL_PATH_MERGE) 297 - ret = ovl_clear_empty(dentry, &list); 297 + else { 298 + /* 299 + * If no upperdentry then skip clearing whiteouts. 300 + * 301 + * Can race with copy-up, since we don't hold the upperdir 302 + * mutex. Doesn't matter, since copy-up can't create a 303 + * non-empty directory from an empty one. 304 + */ 305 + if (ovl_dentry_upper(dentry)) 306 + ret = ovl_clear_empty(dentry, &list); 307 + } 298 308 299 309 ovl_cache_free(&list); 300 310 ··· 495 487 return err; 496 488 } 497 489 498 - static int ovl_remove_and_whiteout(struct dentry *dentry, 499 - enum ovl_path_type type, bool is_dir) 490 + static int ovl_remove_and_whiteout(struct dentry *dentry, bool is_dir) 500 491 { 501 492 struct dentry *workdir = ovl_workdir(dentry); 502 493 struct inode *wdir = workdir->d_inode; ··· 507 500 int err; 508 501 509 502 if (is_dir) { 510 - opaquedir = ovl_check_empty_and_clear(dentry, type); 503 + opaquedir = ovl_check_empty_and_clear(dentry); 511 504 err = PTR_ERR(opaquedir); 512 505 if (IS_ERR(opaquedir)) 513 506 goto out; ··· 522 515 if (IS_ERR(whiteout)) 523 516 goto out_unlock; 524 517 525 - if (type == OVL_PATH_LOWER) { 518 + upper = ovl_dentry_upper(dentry); 519 + if (!upper) { 526 520 upper = lookup_one_len(dentry->d_name.name, upperdir, 527 - dentry->d_name.len); 521 + dentry->d_name.len); 528 522 err = PTR_ERR(upper); 529 523 if (IS_ERR(upper)) 530 524 goto kill_whiteout; ··· 537 529 } else { 538 530 int flags = 0; 539 531 540 - upper = ovl_dentry_upper(dentry); 541 532 if (opaquedir) 542 533 upper = opaquedir; 543 534 err = -ESTALE; ··· 655 648 cap_raise(override_cred->cap_effective, CAP_CHOWN); 656 649 old_cred = override_creds(override_cred); 657 650 658 - err = ovl_remove_and_whiteout(dentry, type, is_dir); 651 + err = ovl_remove_and_whiteout(dentry, is_dir); 659 652 660 653 revert_creds(old_cred); 661 654 put_cred(override_cred); ··· 788 781 } 789 782 790 783 if (overwrite && (new_type == OVL_PATH_LOWER || new_type == OVL_PATH_MERGE) && new_is_dir) { 791 - opaquedir = ovl_check_empty_and_clear(new, new_type); 784 + opaquedir = ovl_check_empty_and_clear(new); 792 785 err = PTR_ERR(opaquedir); 793 786 if (IS_ERR(opaquedir)) { 794 787 opaquedir = NULL;
+18 -9
fs/overlayfs/inode.c
··· 235 235 return err; 236 236 } 237 237 238 + static bool ovl_need_xattr_filter(struct dentry *dentry, 239 + enum ovl_path_type type) 240 + { 241 + return type == OVL_PATH_UPPER && S_ISDIR(dentry->d_inode->i_mode); 242 + } 243 + 238 244 ssize_t ovl_getxattr(struct dentry *dentry, const char *name, 239 245 void *value, size_t size) 240 246 { 241 - if (ovl_path_type(dentry->d_parent) == OVL_PATH_MERGE && 242 - ovl_is_private_xattr(name)) 247 + struct path realpath; 248 + enum ovl_path_type type = ovl_path_real(dentry, &realpath); 249 + 250 + if (ovl_need_xattr_filter(dentry, type) && ovl_is_private_xattr(name)) 243 251 return -ENODATA; 244 252 245 - return vfs_getxattr(ovl_dentry_real(dentry), name, value, size); 253 + return vfs_getxattr(realpath.dentry, name, value, size); 246 254 } 247 255 248 256 ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size) 249 257 { 258 + struct path realpath; 259 + enum ovl_path_type type = ovl_path_real(dentry, &realpath); 250 260 ssize_t res; 251 261 int off; 252 262 253 - res = vfs_listxattr(ovl_dentry_real(dentry), list, size); 263 + res = vfs_listxattr(realpath.dentry, list, size); 254 264 if (res <= 0 || size == 0) 255 265 return res; 256 266 257 - if (ovl_path_type(dentry->d_parent) != OVL_PATH_MERGE) 267 + if (!ovl_need_xattr_filter(dentry, type)) 258 268 return res; 259 269 260 270 /* filter out private xattrs */ ··· 289 279 { 290 280 int err; 291 281 struct path realpath; 292 - enum ovl_path_type type; 282 + enum ovl_path_type type = ovl_path_real(dentry, &realpath); 293 283 294 284 err = ovl_want_write(dentry); 295 285 if (err) 296 286 goto out; 297 287 298 - if (ovl_path_type(dentry->d_parent) == OVL_PATH_MERGE && 299 - ovl_is_private_xattr(name)) 288 + err = -ENODATA; 289 + if (ovl_need_xattr_filter(dentry, type) && ovl_is_private_xattr(name)) 300 290 goto out_drop_write; 301 291 302 - type = ovl_path_real(dentry, &realpath); 303 292 if (type == OVL_PATH_LOWER) { 304 293 err = vfs_getxattr(realpath.dentry, name, NULL, 0); 305 294 if (err < 0)
+16 -23
fs/overlayfs/readdir.c
··· 274 274 return 0; 275 275 } 276 276 277 - static inline int ovl_dir_read_merged(struct path *upperpath, 278 - struct path *lowerpath, 279 - struct list_head *list) 277 + static int ovl_dir_read_merged(struct dentry *dentry, struct list_head *list) 280 278 { 281 279 int err; 280 + struct path lowerpath; 281 + struct path upperpath; 282 282 struct ovl_readdir_data rdd = { 283 283 .ctx.actor = ovl_fill_merge, 284 284 .list = list, ··· 286 286 .is_merge = false, 287 287 }; 288 288 289 - if (upperpath->dentry) { 290 - err = ovl_dir_read(upperpath, &rdd); 289 + ovl_path_lower(dentry, &lowerpath); 290 + ovl_path_upper(dentry, &upperpath); 291 + 292 + if (upperpath.dentry) { 293 + err = ovl_dir_read(&upperpath, &rdd); 291 294 if (err) 292 295 goto out; 293 296 294 - if (lowerpath->dentry) { 295 - err = ovl_dir_mark_whiteouts(upperpath->dentry, &rdd); 297 + if (lowerpath.dentry) { 298 + err = ovl_dir_mark_whiteouts(upperpath.dentry, &rdd); 296 299 if (err) 297 300 goto out; 298 301 } 299 302 } 300 - if (lowerpath->dentry) { 303 + if (lowerpath.dentry) { 301 304 /* 302 305 * Insert lowerpath entries before upperpath ones, this allows 303 306 * offsets to be reasonably constant 304 307 */ 305 308 list_add(&rdd.middle, rdd.list); 306 309 rdd.is_merge = true; 307 - err = ovl_dir_read(lowerpath, &rdd); 310 + err = ovl_dir_read(&lowerpath, &rdd); 308 311 list_del(&rdd.middle); 309 312 } 310 313 out: ··· 332 329 static struct ovl_dir_cache *ovl_cache_get(struct dentry *dentry) 333 330 { 334 331 int res; 335 - struct path lowerpath; 336 - struct path upperpath; 337 332 struct ovl_dir_cache *cache; 338 333 339 334 cache = ovl_dir_cache(dentry); ··· 348 347 cache->refcount = 1; 349 348 INIT_LIST_HEAD(&cache->entries); 350 349 351 - ovl_path_lower(dentry, &lowerpath); 352 - ovl_path_upper(dentry, &upperpath); 353 - 354 - res = ovl_dir_read_merged(&upperpath, &lowerpath, &cache->entries); 350 + res = ovl_dir_read_merged(dentry, &cache->entries); 355 351 if (res) { 356 352 ovl_cache_free(&cache->entries); 357 353 kfree(cache); ··· 450 452 /* 451 453 * Need to check if we started out being a lower dir, but got copied up 452 454 */ 453 - if (!od->is_upper && ovl_path_type(dentry) == OVL_PATH_MERGE) { 455 + if (!od->is_upper && ovl_path_type(dentry) != OVL_PATH_LOWER) { 454 456 struct inode *inode = file_inode(file); 455 457 456 - realfile =lockless_dereference(od->upperfile); 458 + realfile = lockless_dereference(od->upperfile); 457 459 if (!realfile) { 458 460 struct path upperpath; 459 461 ··· 536 538 int ovl_check_empty_dir(struct dentry *dentry, struct list_head *list) 537 539 { 538 540 int err; 539 - struct path lowerpath; 540 - struct path upperpath; 541 541 struct ovl_cache_entry *p; 542 542 543 - ovl_path_upper(dentry, &upperpath); 544 - ovl_path_lower(dentry, &lowerpath); 545 - 546 - err = ovl_dir_read_merged(&upperpath, &lowerpath, list); 543 + err = ovl_dir_read_merged(dentry, list); 547 544 if (err) 548 545 return err; 549 546
+49 -12
fs/overlayfs/super.c
··· 24 24 MODULE_DESCRIPTION("Overlay filesystem"); 25 25 MODULE_LICENSE("GPL"); 26 26 27 - #define OVERLAYFS_SUPER_MAGIC 0x794c764f 27 + #define OVERLAYFS_SUPER_MAGIC 0x794c7630 28 28 29 29 struct ovl_config { 30 30 char *lowerdir; ··· 84 84 85 85 static struct dentry *ovl_upperdentry_dereference(struct ovl_entry *oe) 86 86 { 87 - struct dentry *upperdentry = ACCESS_ONCE(oe->__upperdentry); 88 - /* 89 - * Make sure to order reads to upperdentry wrt ovl_dentry_update() 90 - */ 91 - smp_read_barrier_depends(); 92 - return upperdentry; 87 + return lockless_dereference(oe->__upperdentry); 93 88 } 94 89 95 90 void ovl_path_upper(struct dentry *dentry, struct path *path) ··· 457 462 {OPT_ERR, NULL} 458 463 }; 459 464 465 + static char *ovl_next_opt(char **s) 466 + { 467 + char *sbegin = *s; 468 + char *p; 469 + 470 + if (sbegin == NULL) 471 + return NULL; 472 + 473 + for (p = sbegin; *p; p++) { 474 + if (*p == '\\') { 475 + p++; 476 + if (!*p) 477 + break; 478 + } else if (*p == ',') { 479 + *p = '\0'; 480 + *s = p + 1; 481 + return sbegin; 482 + } 483 + } 484 + *s = NULL; 485 + return sbegin; 486 + } 487 + 460 488 static int ovl_parse_opt(char *opt, struct ovl_config *config) 461 489 { 462 490 char *p; 463 491 464 - while ((p = strsep(&opt, ",")) != NULL) { 492 + while ((p = ovl_next_opt(&opt)) != NULL) { 465 493 int token; 466 494 substring_t args[MAX_OPT_ARGS]; 467 495 ··· 572 554 goto out_unlock; 573 555 } 574 556 557 + static void ovl_unescape(char *s) 558 + { 559 + char *d = s; 560 + 561 + for (;; s++, d++) { 562 + if (*s == '\\') 563 + s++; 564 + *d = *s; 565 + if (!*s) 566 + break; 567 + } 568 + } 569 + 575 570 static int ovl_mount_dir(const char *name, struct path *path) 576 571 { 577 572 int err; 573 + char *tmp = kstrdup(name, GFP_KERNEL); 578 574 579 - err = kern_path(name, LOOKUP_FOLLOW, path); 575 + if (!tmp) 576 + return -ENOMEM; 577 + 578 + ovl_unescape(tmp); 579 + err = kern_path(tmp, LOOKUP_FOLLOW, path); 580 580 if (err) { 581 - pr_err("overlayfs: failed to resolve '%s': %i\n", name, err); 581 + pr_err("overlayfs: failed to resolve '%s': %i\n", tmp, err); 582 582 err = -EINVAL; 583 583 } 584 + kfree(tmp); 584 585 return err; 585 586 } 586 587 ··· 813 776 814 777 static struct file_system_type ovl_fs_type = { 815 778 .owner = THIS_MODULE, 816 - .name = "overlayfs", 779 + .name = "overlay", 817 780 .mount = ovl_mount, 818 781 .kill_sb = kill_anon_super, 819 782 }; 820 - MODULE_ALIAS_FS("overlayfs"); 783 + MODULE_ALIAS_FS("overlay"); 821 784 822 785 static int __init ovl_init(void) 823 786 {
+5 -2
include/linux/bitops.h
··· 18 18 * position @h. For example 19 19 * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000. 20 20 */ 21 - #define GENMASK(h, l) (((U32_C(1) << ((h) - (l) + 1)) - 1) << (l)) 22 - #define GENMASK_ULL(h, l) (((U64_C(1) << ((h) - (l) + 1)) - 1) << (l)) 21 + #define GENMASK(h, l) \ 22 + (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h)))) 23 + 24 + #define GENMASK_ULL(h, l) \ 25 + (((~0ULL) << (l)) & (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) 23 26 24 27 extern unsigned int __sw_hweight8(unsigned int w); 25 28 extern unsigned int __sw_hweight16(unsigned int w);
+6
include/linux/can/dev.h
··· 99 99 return 1; 100 100 } 101 101 102 + static inline bool can_is_canfd_skb(const struct sk_buff *skb) 103 + { 104 + /* the CAN specific type of skb is identified by its data length */ 105 + return skb->len == CANFD_MTU; 106 + } 107 + 102 108 /* get data length from can_dlc with sanitized can_dlc */ 103 109 u8 can_dlc2len(u8 can_dlc); 104 110
+1 -1
include/linux/inetdevice.h
··· 242 242 static __inline__ __be32 inet_make_mask(int logmask) 243 243 { 244 244 if (logmask) 245 - return htonl(~((1<<(32-logmask))-1)); 245 + return htonl(~((1U<<(32-logmask))-1)); 246 246 return 0; 247 247 } 248 248
-5
include/linux/kernel_stat.h
··· 77 77 return kstat_cpu(cpu).irqs_sum; 78 78 } 79 79 80 - /* 81 - * Lock/unlock the current runqueue - to extract task statistics: 82 - */ 83 - extern unsigned long long task_delta_exec(struct task_struct *); 84 - 85 80 extern void account_user_time(struct task_struct *, cputime_t, cputime_t); 86 81 extern void account_system_time(struct task_struct *, int, cputime_t, cputime_t); 87 82 extern void account_steal_time(cputime_t);
+1 -5
include/linux/lockd/debug.h
··· 17 17 * Enable lockd debugging. 18 18 * Requires RPC_DEBUG. 19 19 */ 20 - #ifdef RPC_DEBUG 21 - # define LOCKD_DEBUG 1 22 - #endif 23 - 24 20 #undef ifdebug 25 - #if defined(RPC_DEBUG) && defined(LOCKD_DEBUG) 21 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 26 22 # define ifdebug(flag) if (unlikely(nlm_debug & NLMDBG_##flag)) 27 23 #else 28 24 # define ifdebug(flag) if (0)
+2
include/linux/nfs4.h
··· 490 490 491 491 /* nfs42 */ 492 492 NFSPROC4_CLNT_SEEK, 493 + NFSPROC4_CLNT_ALLOCATE, 494 + NFSPROC4_CLNT_DEALLOCATE, 493 495 }; 494 496 495 497 /* nfs41 types */
+2 -2
include/linux/nfs_fs.h
··· 163 163 */ 164 164 __be32 cookieverf[2]; 165 165 166 - unsigned long npages; 166 + unsigned long nrequests; 167 167 struct nfs_mds_commit_info commit_info; 168 168 169 169 /* Open contexts for shared mmap writes */ ··· 520 520 static inline int 521 521 nfs_have_writebacks(struct inode *inode) 522 522 { 523 - return NFS_I(inode)->npages != 0; 523 + return NFS_I(inode)->nrequests != 0; 524 524 } 525 525 526 526 /*
+2
include/linux/nfs_fs_sb.h
··· 231 231 #define NFS_CAP_ATOMIC_OPEN_V1 (1U << 17) 232 232 #define NFS_CAP_SECURITY_LABEL (1U << 18) 233 233 #define NFS_CAP_SEEK (1U << 19) 234 + #define NFS_CAP_ALLOCATE (1U << 20) 235 + #define NFS_CAP_DEALLOCATE (1U << 21) 234 236 235 237 #endif
+14
include/linux/nfs_xdr.h
··· 1243 1243 #endif /* CONFIG_NFS_V4_1 */ 1244 1244 1245 1245 #ifdef CONFIG_NFS_V4_2 1246 + struct nfs42_falloc_args { 1247 + struct nfs4_sequence_args seq_args; 1248 + 1249 + struct nfs_fh *falloc_fh; 1250 + nfs4_stateid falloc_stateid; 1251 + u64 falloc_offset; 1252 + u64 falloc_length; 1253 + }; 1254 + 1255 + struct nfs42_falloc_res { 1256 + struct nfs4_sequence_res seq_res; 1257 + unsigned int status; 1258 + }; 1259 + 1246 1260 struct nfs42_seek_args { 1247 1261 struct nfs4_sequence_args seq_args; 1248 1262
+7 -1
include/linux/percpu-refcount.h
··· 133 133 /* paired with smp_store_release() in percpu_ref_reinit() */ 134 134 smp_read_barrier_depends(); 135 135 136 - if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC)) 136 + /* 137 + * Theoretically, the following could test just ATOMIC; however, 138 + * then we'd have to mask off DEAD separately as DEAD may be 139 + * visible without ATOMIC if we race with percpu_ref_kill(). DEAD 140 + * implies ATOMIC anyway. Test them together. 141 + */ 142 + if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC_DEAD)) 137 143 return false; 138 144 139 145 *percpu_countp = (unsigned long __percpu *)percpu_ptr;
+1 -1
include/linux/sunrpc/auth.h
··· 53 53 struct rcu_head cr_rcu; 54 54 struct rpc_auth * cr_auth; 55 55 const struct rpc_credops *cr_ops; 56 - #ifdef RPC_DEBUG 56 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 57 57 unsigned long cr_magic; /* 0x0f4aa4f0 */ 58 58 #endif 59 59 unsigned long cr_expire; /* when to gc */
+4
include/linux/sunrpc/clnt.h
··· 63 63 struct rpc_rtt cl_rtt_default; 64 64 struct rpc_timeout cl_timeout_default; 65 65 const struct rpc_program *cl_program; 66 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 67 + struct dentry *cl_debugfs; /* debugfs directory */ 68 + #endif 66 69 }; 67 70 68 71 /* ··· 179 176 const char *rpc_peeraddr2str(struct rpc_clnt *, enum rpc_display_format_t); 180 177 int rpc_localaddr(struct rpc_clnt *, struct sockaddr *, size_t); 181 178 179 + const char *rpc_proc_name(const struct rpc_task *task); 182 180 #endif /* __KERNEL__ */ 183 181 #endif /* _LINUX_SUNRPC_CLNT_H */
+49 -15
include/linux/sunrpc/debug.h
··· 10 10 11 11 #include <uapi/linux/sunrpc/debug.h> 12 12 13 - 14 - /* 15 - * Enable RPC debugging/profiling. 16 - */ 17 - #ifdef CONFIG_SUNRPC_DEBUG 18 - #define RPC_DEBUG 19 - #endif 20 - #ifdef CONFIG_TRACEPOINTS 21 - #define RPC_TRACEPOINTS 22 - #endif 23 - /* #define RPC_PROFILE */ 24 - 25 13 /* 26 14 * Debugging macros etc 27 15 */ 28 - #ifdef RPC_DEBUG 16 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 29 17 extern unsigned int rpc_debug; 30 18 extern unsigned int nfs_debug; 31 19 extern unsigned int nfsd_debug; ··· 24 36 #define dprintk_rcu(args...) dfprintk_rcu(FACILITY, ## args) 25 37 26 38 #undef ifdebug 27 - #ifdef RPC_DEBUG 39 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 28 40 # define ifdebug(fac) if (unlikely(rpc_debug & RPCDBG_##fac)) 29 41 30 42 # define dfprintk(fac, args...) \ ··· 53 65 /* 54 66 * Sysctl interface for RPC debugging 55 67 */ 56 - #ifdef RPC_DEBUG 68 + 69 + struct rpc_clnt; 70 + struct rpc_xprt; 71 + 72 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 57 73 void rpc_register_sysctl(void); 58 74 void rpc_unregister_sysctl(void); 75 + int sunrpc_debugfs_init(void); 76 + void sunrpc_debugfs_exit(void); 77 + int rpc_clnt_debugfs_register(struct rpc_clnt *); 78 + void rpc_clnt_debugfs_unregister(struct rpc_clnt *); 79 + int rpc_xprt_debugfs_register(struct rpc_xprt *); 80 + void rpc_xprt_debugfs_unregister(struct rpc_xprt *); 81 + #else 82 + static inline int 83 + sunrpc_debugfs_init(void) 84 + { 85 + return 0; 86 + } 87 + 88 + static inline void 89 + sunrpc_debugfs_exit(void) 90 + { 91 + return; 92 + } 93 + 94 + static inline int 95 + rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 96 + { 97 + return 0; 98 + } 99 + 100 + static inline void 101 + rpc_clnt_debugfs_unregister(struct rpc_clnt *clnt) 102 + { 103 + return; 104 + } 105 + 106 + static inline int 107 + rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 108 + { 109 + return 0; 110 + } 111 + 112 + static inline void 113 + rpc_xprt_debugfs_unregister(struct rpc_xprt *xprt) 114 + { 115 + return; 116 + } 59 117 #endif 60 118 61 119 #endif /* _LINUX_SUNRPC_DEBUG_H_ */
+3
include/linux/sunrpc/metrics.h
··· 27 27 28 28 #include <linux/seq_file.h> 29 29 #include <linux/ktime.h> 30 + #include <linux/spinlock.h> 30 31 31 32 #define RPC_IOSTATS_VERS "1.0" 32 33 33 34 struct rpc_iostats { 35 + spinlock_t om_lock; 36 + 34 37 /* 35 38 * These counters give an idea about how many request 36 39 * transmissions are required, on average, to complete that
+4 -4
include/linux/sunrpc/sched.h
··· 79 79 unsigned short tk_flags; /* misc flags */ 80 80 unsigned short tk_timeouts; /* maj timeouts */ 81 81 82 - #if defined(RPC_DEBUG) || defined(RPC_TRACEPOINTS) 82 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) || IS_ENABLED(CONFIG_TRACEPOINTS) 83 83 unsigned short tk_pid; /* debugging aid */ 84 84 #endif 85 85 unsigned char tk_priority : 2,/* Task priority */ ··· 187 187 unsigned char nr; /* # tasks remaining for cookie */ 188 188 unsigned short qlen; /* total # tasks waiting in queue */ 189 189 struct rpc_timer timer_list; 190 - #if defined(RPC_DEBUG) || defined(RPC_TRACEPOINTS) 190 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) || IS_ENABLED(CONFIG_TRACEPOINTS) 191 191 const char * name; 192 192 #endif 193 193 }; ··· 237 237 int rpciod_up(void); 238 238 void rpciod_down(void); 239 239 int __rpc_wait_for_completion_task(struct rpc_task *task, wait_bit_action_f *); 240 - #ifdef RPC_DEBUG 240 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 241 241 struct net; 242 242 void rpc_show_tasks(struct net *); 243 243 #endif ··· 251 251 return __rpc_wait_for_completion_task(task, NULL); 252 252 } 253 253 254 - #if defined(RPC_DEBUG) || defined (RPC_TRACEPOINTS) 254 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) || IS_ENABLED(CONFIG_TRACEPOINTS) 255 255 static inline const char * rpc_qname(const struct rpc_wait_queue *q) 256 256 { 257 257 return ((q && q->name) ? q->name : "unknown");
+3
include/linux/sunrpc/xprt.h
··· 239 239 struct net *xprt_net; 240 240 const char *servername; 241 241 const char *address_strings[RPC_DISPLAY_MAX]; 242 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 243 + struct dentry *debugfs; /* debugfs directory */ 244 + #endif 242 245 }; 243 246 244 247 #if defined(CONFIG_SUNRPC_BACKCHANNEL)
+59
include/linux/sunrpc/xprtsock.h
··· 17 17 #define RPC_DEF_MIN_RESVPORT (665U) 18 18 #define RPC_DEF_MAX_RESVPORT (1023U) 19 19 20 + struct sock_xprt { 21 + struct rpc_xprt xprt; 22 + 23 + /* 24 + * Network layer 25 + */ 26 + struct socket * sock; 27 + struct sock * inet; 28 + 29 + /* 30 + * State of TCP reply receive 31 + */ 32 + __be32 tcp_fraghdr, 33 + tcp_xid, 34 + tcp_calldir; 35 + 36 + u32 tcp_offset, 37 + tcp_reclen; 38 + 39 + unsigned long tcp_copied, 40 + tcp_flags; 41 + 42 + /* 43 + * Connection of transports 44 + */ 45 + struct delayed_work connect_worker; 46 + struct sockaddr_storage srcaddr; 47 + unsigned short srcport; 48 + 49 + /* 50 + * UDP socket buffer size parameters 51 + */ 52 + size_t rcvsize, 53 + sndsize; 54 + 55 + /* 56 + * Saved socket callback addresses 57 + */ 58 + void (*old_data_ready)(struct sock *); 59 + void (*old_state_change)(struct sock *); 60 + void (*old_write_space)(struct sock *); 61 + void (*old_error_report)(struct sock *); 62 + }; 63 + 64 + /* 65 + * TCP receive state flags 66 + */ 67 + #define TCP_RCV_LAST_FRAG (1UL << 0) 68 + #define TCP_RCV_COPY_FRAGHDR (1UL << 1) 69 + #define TCP_RCV_COPY_XID (1UL << 2) 70 + #define TCP_RCV_COPY_DATA (1UL << 3) 71 + #define TCP_RCV_READ_CALLDIR (1UL << 4) 72 + #define TCP_RCV_COPY_CALLDIR (1UL << 5) 73 + 74 + /* 75 + * TCP RPC flags 76 + */ 77 + #define TCP_RPC_REPLY (1UL << 6) 78 + 20 79 #endif /* __KERNEL__ */ 21 80 22 81 #endif /* _LINUX_SUNRPC_XPRTSOCK_H */
-2
include/net/netfilter/nf_tables.h
··· 396 396 /** 397 397 * struct nft_trans - nf_tables object update in transaction 398 398 * 399 - * @rcu_head: rcu head to defer release of transaction data 400 399 * @list: used internally 401 400 * @msg_type: message type 402 401 * @ctx: transaction context 403 402 * @data: internal information related to the transaction 404 403 */ 405 404 struct nft_trans { 406 - struct rcu_head rcu_head; 407 405 struct list_head list; 408 406 int msg_type; 409 407 struct nft_ctx ctx;
+18
include/net/vxlan.h
··· 8 8 #define VNI_HASH_BITS 10 9 9 #define VNI_HASH_SIZE (1<<VNI_HASH_BITS) 10 10 11 + /* VXLAN protocol header */ 12 + struct vxlanhdr { 13 + __be32 vx_flags; 14 + __be32 vx_vni; 15 + }; 16 + 11 17 struct vxlan_sock; 12 18 typedef void (vxlan_rcv_t)(struct vxlan_sock *vh, struct sk_buff *skb, __be32 key); 13 19 ··· 50 44 struct rtable *rt, struct sk_buff *skb, 51 45 __be32 src, __be32 dst, __u8 tos, __u8 ttl, __be16 df, 52 46 __be16 src_port, __be16 dst_port, __be32 vni, bool xnet); 47 + 48 + static inline bool vxlan_gso_check(struct sk_buff *skb) 49 + { 50 + if ((skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL) && 51 + (skb->inner_protocol_type != ENCAP_TYPE_ETHER || 52 + skb->inner_protocol != htons(ETH_P_TEB) || 53 + (skb_inner_mac_header(skb) - skb_transport_header(skb) != 54 + sizeof(struct udphdr) + sizeof(struct vxlanhdr)))) 55 + return false; 56 + 57 + return true; 58 + } 53 59 54 60 /* IP header + UDP + VXLAN + Ethernet header */ 55 61 #define VXLAN_HEADROOM (20 + 8 + 8 + 14)
+2
include/sound/soc-dpcm.h
··· 102 102 /* state and update */ 103 103 enum snd_soc_dpcm_update runtime_update; 104 104 enum snd_soc_dpcm_state state; 105 + 106 + int trigger_pending; /* trigger cmd + 1 if pending, 0 if not */ 105 107 }; 106 108 107 109 /* can this BE stop and free */
+160
include/trace/events/sunrpc.h
··· 6 6 7 7 #include <linux/sunrpc/sched.h> 8 8 #include <linux/sunrpc/clnt.h> 9 + #include <linux/sunrpc/svc.h> 10 + #include <linux/sunrpc/xprtsock.h> 9 11 #include <net/tcp_states.h> 10 12 #include <linux/net.h> 11 13 #include <linux/tracepoint.h> ··· 307 305 DEFINE_RPC_SOCKET_EVENT_DONE(rpc_socket_reset_connection); 308 306 DEFINE_RPC_SOCKET_EVENT(rpc_socket_close); 309 307 DEFINE_RPC_SOCKET_EVENT(rpc_socket_shutdown); 310 308 309 + DECLARE_EVENT_CLASS(rpc_xprt_event, 310 + TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status), 311 + 312 + TP_ARGS(xprt, xid, status), 313 + 314 + TP_STRUCT__entry( 315 + __field(__be32, xid) 316 + __field(int, status) 317 + __string(addr, xprt->address_strings[RPC_DISPLAY_ADDR]) 318 + __string(port, xprt->address_strings[RPC_DISPLAY_PORT]) 319 + ), 320 + 321 + TP_fast_assign( 322 + __entry->xid = xid; 323 + __entry->status = status; 324 + __assign_str(addr, xprt->address_strings[RPC_DISPLAY_ADDR]); 325 + __assign_str(port, xprt->address_strings[RPC_DISPLAY_PORT]); 326 + ), 327 + 328 + TP_printk("peer=[%s]:%s xid=0x%x status=%d", __get_str(addr), 329 + __get_str(port), be32_to_cpu(__entry->xid), 330 + __entry->status) 331 + ); 332 + 333 + DEFINE_EVENT(rpc_xprt_event, xprt_lookup_rqst, 334 + TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status), 335 + TP_ARGS(xprt, xid, status)); 336 + 337 + DEFINE_EVENT(rpc_xprt_event, xprt_transmit, 338 + TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status), 339 + TP_ARGS(xprt, xid, status)); 340 + 341 + DEFINE_EVENT(rpc_xprt_event, xprt_complete_rqst, 342 + TP_PROTO(struct rpc_xprt *xprt, __be32 xid, int status), 343 + TP_ARGS(xprt, xid, status)); 344 + 345 + TRACE_EVENT(xs_tcp_data_ready, 346 + TP_PROTO(struct rpc_xprt *xprt, int err, unsigned int total), 347 + 348 + TP_ARGS(xprt, err, total), 349 + 350 + TP_STRUCT__entry( 351 + __field(int, err) 352 + __field(unsigned int, total) 353 + __string(addr, xprt ? xprt->address_strings[RPC_DISPLAY_ADDR] : 354 + "(null)") 355 + __string(port, xprt ? xprt->address_strings[RPC_DISPLAY_PORT] : 356 + "(null)") 357 + ), 358 + 359 + TP_fast_assign( 360 + __entry->err = err; 361 + __entry->total = total; 362 + __assign_str(addr, xprt ? 363 + xprt->address_strings[RPC_DISPLAY_ADDR] : "(null)"); 364 + __assign_str(port, xprt ? 365 + xprt->address_strings[RPC_DISPLAY_PORT] : "(null)"); 366 + ), 367 + 368 + TP_printk("peer=[%s]:%s err=%d total=%u", __get_str(addr), 369 + __get_str(port), __entry->err, __entry->total) 370 + ); 371 + 372 + #define rpc_show_sock_xprt_flags(flags) \ 373 + __print_flags(flags, "|", \ 374 + { TCP_RCV_LAST_FRAG, "TCP_RCV_LAST_FRAG" }, \ 375 + { TCP_RCV_COPY_FRAGHDR, "TCP_RCV_COPY_FRAGHDR" }, \ 376 + { TCP_RCV_COPY_XID, "TCP_RCV_COPY_XID" }, \ 377 + { TCP_RCV_COPY_DATA, "TCP_RCV_COPY_DATA" }, \ 378 + { TCP_RCV_READ_CALLDIR, "TCP_RCV_READ_CALLDIR" }, \ 379 + { TCP_RCV_COPY_CALLDIR, "TCP_RCV_COPY_CALLDIR" }, \ 380 + { TCP_RPC_REPLY, "TCP_RPC_REPLY" }) 381 + 382 + TRACE_EVENT(xs_tcp_data_recv, 383 + TP_PROTO(struct sock_xprt *xs), 384 + 385 + TP_ARGS(xs), 386 + 387 + TP_STRUCT__entry( 388 + __string(addr, xs->xprt.address_strings[RPC_DISPLAY_ADDR]) 389 + __string(port, xs->xprt.address_strings[RPC_DISPLAY_PORT]) 390 + __field(__be32, xid) 391 + __field(unsigned long, flags) 392 + __field(unsigned long, copied) 393 + __field(unsigned int, reclen) 394 + __field(unsigned long, offset) 395 + ), 396 + 397 + TP_fast_assign( 398 + __assign_str(addr, xs->xprt.address_strings[RPC_DISPLAY_ADDR]); 399 + __assign_str(port, xs->xprt.address_strings[RPC_DISPLAY_PORT]); 400 + __entry->xid = xs->tcp_xid; 401 + __entry->flags = xs->tcp_flags; 402 + __entry->copied = xs->tcp_copied; 403 + __entry->reclen = xs->tcp_reclen; 404 + __entry->offset = xs->tcp_offset; 405 + ), 406 + 407 + TP_printk("peer=[%s]:%s xid=0x%x flags=%s copied=%lu reclen=%u offset=%lu", 408 + __get_str(addr), __get_str(port), be32_to_cpu(__entry->xid), 409 + rpc_show_sock_xprt_flags(__entry->flags), 410 + __entry->copied, __entry->reclen, __entry->offset) 411 + ); 412 + 413 + TRACE_EVENT(svc_recv, 414 + TP_PROTO(struct svc_rqst *rqst, int status), 415 + 416 + TP_ARGS(rqst, status), 417 + 418 + TP_STRUCT__entry( 419 + __field(struct sockaddr *, addr) 420 + __field(__be32, xid) 421 + __field(int, status) 422 + ), 423 + 424 + TP_fast_assign( 425 + __entry->addr = (struct sockaddr *)&rqst->rq_addr; 426 + __entry->xid = status > 0 ? rqst->rq_xid : 0; 427 + __entry->status = status; 428 + ), 429 + 430 + TP_printk("addr=%pIScp xid=0x%x status=%d", __entry->addr, 431 + be32_to_cpu(__entry->xid), __entry->status) 432 + ); 433 + 434 + DECLARE_EVENT_CLASS(svc_rqst_status, 435 + 436 + TP_PROTO(struct svc_rqst *rqst, int status), 437 + 438 + TP_ARGS(rqst, status), 439 + 440 + TP_STRUCT__entry( 441 + __field(struct sockaddr *, addr) 442 + __field(__be32, xid) 443 + __field(int, dropme) 444 + __field(int, status) 445 + ), 446 + 447 + TP_fast_assign( 448 + __entry->addr = (struct sockaddr *)&rqst->rq_addr; 449 + __entry->xid = rqst->rq_xid; 450 + __entry->dropme = (int)rqst->rq_dropme; 451 + __entry->status = status; 452 + ), 453 + 454 + TP_printk("addr=%pIScp rq_xid=0x%x dropme=%d status=%d", 455 + __entry->addr, be32_to_cpu(__entry->xid), __entry->dropme, 456 + __entry->status) 457 + ); 458 + 459 + DEFINE_EVENT(svc_rqst_status, svc_process, 460 + TP_PROTO(struct svc_rqst *rqst, int status), 461 + TP_ARGS(rqst, status)); 462 + 463 + DEFINE_EVENT(svc_rqst_status, svc_send, 464 + TP_PROTO(struct svc_rqst *rqst, int status), 465 + TP_ARGS(rqst, status)); 310 466 311 467 #endif /* _TRACE_SUNRPC_H */ 312 468
+1 -1
include/uapi/linux/nfsd/debug.h
··· 15 15 * Enable debugging for nfsd. 16 16 * Requires RPC_DEBUG. 17 17 */ 18 - #ifdef RPC_DEBUG 18 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 19 19 # define NFSD_DEBUG 1 20 20 #endif 21 21
+5 -3
kernel/events/core.c
··· 1562 1562 1563 1563 if (!task) { 1564 1564 /* 1565 - * Per cpu events are removed via an smp call and 1566 - * the removal is always successful. 1565 + * Per cpu events are removed via an smp call. The removal can 1566 + * fail if the CPU is currently offline, but in that case we 1567 + * already called __perf_remove_from_context from 1568 + * perf_event_exit_cpu. 1567 1569 */ 1568 1570 cpu_function_call(event->cpu, __perf_remove_from_context, &re); 1569 1571 return; ··· 8119 8117 8120 8118 static void __perf_event_exit_context(void *__info) 8121 8119 { 8122 - struct remove_event re = { .detach_group = false }; 8120 + struct remove_event re = { .detach_group = true }; 8123 8121 struct perf_event_context *ctx = __info; 8124 8122 8125 8123 perf_pmu_rotate_stop(ctx->pmu);
-1
kernel/events/uprobes.c
··· 1640 1640 if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) { 1641 1641 utask->state = UTASK_SSTEP_TRAPPED; 1642 1642 set_tsk_thread_flag(t, TIF_UPROBE); 1643 - set_tsk_thread_flag(t, TIF_NOTIFY_RESUME); 1644 1643 } 1645 1644 } 1646 1645
+21 -42
kernel/sched/core.c
··· 2475 2475 EXPORT_PER_CPU_SYMBOL(kernel_cpustat); 2476 2476 2477 2477 /* 2478 - * Return any ns on the sched_clock that have not yet been accounted in 2479 - * @p in case that task is currently running. 2480 - * 2481 - * Called with task_rq_lock() held on @rq. 2482 - */ 2483 - static u64 do_task_delta_exec(struct task_struct *p, struct rq *rq) 2484 - { 2485 - u64 ns = 0; 2486 - 2487 - /* 2488 - * Must be ->curr _and_ ->on_rq. If dequeued, we would 2489 - * project cycles that may never be accounted to this 2490 - * thread, breaking clock_gettime(). 2491 - */ 2492 - if (task_current(rq, p) && task_on_rq_queued(p)) { 2493 - update_rq_clock(rq); 2494 - ns = rq_clock_task(rq) - p->se.exec_start; 2495 - if ((s64)ns < 0) 2496 - ns = 0; 2497 - } 2498 - 2499 - return ns; 2500 - } 2501 - 2502 - unsigned long long task_delta_exec(struct task_struct *p) 2503 - { 2504 - unsigned long flags; 2505 - struct rq *rq; 2506 - u64 ns = 0; 2507 - 2508 - rq = task_rq_lock(p, &flags); 2509 - ns = do_task_delta_exec(p, rq); 2510 - task_rq_unlock(rq, p, &flags); 2511 - 2512 - return ns; 2513 - } 2514 - 2515 - /* 2516 2478 * Return accounted runtime for the task. 2517 2479 * In case the task is currently running, return the runtime plus current's 2518 2480 * pending runtime that have not been accounted yet. ··· 2483 2521 { 2484 2522 unsigned long flags; 2485 2523 struct rq *rq; 2486 - u64 ns = 0; 2524 + u64 ns; 2487 2525 2488 2526 #if defined(CONFIG_64BIT) && defined(CONFIG_SMP) 2489 2527 /* ··· 2502 2540 #endif 2503 2541 2504 2542 rq = task_rq_lock(p, &flags); 2505 - ns = p->se.sum_exec_runtime + do_task_delta_exec(p, rq); 2543 + /* 2544 + * Must be ->curr _and_ ->on_rq. If dequeued, we would 2545 + * project cycles that may never be accounted to this 2546 + * thread, breaking clock_gettime(). 
2547 + */ 2548 + if (task_current(rq, p) && task_on_rq_queued(p)) { 2549 + update_rq_clock(rq); 2550 + p->sched_class->update_curr(rq); 2551 + } 2552 + ns = p->se.sum_exec_runtime; 2506 2553 task_rq_unlock(rq, p, &flags); 2507 2554 2508 2555 return ns; ··· 6339 6368 if (!sched_debug()) 6340 6369 break; 6341 6370 } 6371 + 6372 + if (!level) 6373 + return; 6374 + 6342 6375 /* 6343 6376 * 'level' contains the number of unique distances, excluding the 6344 6377 * identity distance node_distance(i,i). ··· 7419 7444 if (unlikely(running)) 7420 7445 put_prev_task(rq, tsk); 7421 7446 7422 - tg = container_of(task_css_check(tsk, cpu_cgrp_id, 7423 - lockdep_is_held(&tsk->sighand->siglock)), 7447 + /* 7448 + * All callers are synchronized by task_rq_lock(); we do not use RCU 7449 + * which is pointless here. Thus, we pass "true" to task_css_check() 7450 + * to prevent lockdep warnings. 7451 + */ 7452 + tg = container_of(task_css_check(tsk, cpu_cgrp_id, true), 7424 7453 struct task_group, css); 7425 7454 tg = autogroup_task_group(tsk, tg); 7426 7455 tsk->sched_task_group = tg;
+2
kernel/sched/deadline.c
··· 1701 1701 .prio_changed = prio_changed_dl, 1702 1702 .switched_from = switched_from_dl, 1703 1703 .switched_to = switched_to_dl, 1704 + 1705 + .update_curr = update_curr_dl, 1704 1706 };
+14
kernel/sched/fair.c
··· 726 726 account_cfs_rq_runtime(cfs_rq, delta_exec); 727 727 } 728 728 729 + static void update_curr_fair(struct rq *rq) 730 + { 731 + update_curr(cfs_rq_of(&rq->curr->se)); 732 + } 733 + 729 734 static inline void 730 735 update_stats_wait_start(struct cfs_rq *cfs_rq, struct sched_entity *se) 731 736 { ··· 1183 1178 if ((cur->flags & PF_EXITING) || is_idle_task(cur)) 1184 1179 cur = NULL; 1185 1180 raw_spin_unlock_irq(&dst_rq->lock); 1181 + 1182 + /* 1183 + * Because we have preemption enabled we can get migrated around and 1184 + * end up trying to select ourselves (current == env->p) as a swap candidate. 1185 + */ 1186 + if (cur == env->p) 1187 + goto unlock; 1186 1188 1187 1189 /* 1188 1190 * "imp" is the fault differential for the source task between the ··· 7960 7948 .switched_to = switched_to_fair, 7961 7949 7962 7950 .get_rr_interval = get_rr_interval_fair, 7951 + 7952 + .update_curr = update_curr_fair, 7963 7953 7964 7954 #ifdef CONFIG_FAIR_GROUP_SCHED 7965 7955 .task_move_group = task_move_group_fair,
+5
kernel/sched/idle_task.c
··· 75 75 return 0; 76 76 } 77 77 78 + static void update_curr_idle(struct rq *rq) 79 + { 80 + } 81 + 78 82 /* 79 83 * Simple, special scheduling class for the per-CPU idle tasks: 80 84 */ ··· 105 101 106 102 .prio_changed = prio_changed_idle, 107 103 .switched_to = switched_to_idle, 104 + .update_curr = update_curr_idle, 108 105 };
+2
kernel/sched/rt.c
··· 2128 2128 2129 2129 .prio_changed = prio_changed_rt, 2130 2130 .switched_to = switched_to_rt, 2131 + 2132 + .update_curr = update_curr_rt, 2131 2133 }; 2132 2134 2133 2135 #ifdef CONFIG_SCHED_DEBUG
+2
kernel/sched/sched.h
··· 1135 1135 unsigned int (*get_rr_interval) (struct rq *rq, 1136 1136 struct task_struct *task); 1137 1137 1138 + void (*update_curr) (struct rq *rq); 1139 + 1138 1140 #ifdef CONFIG_FAIR_GROUP_SCHED 1139 1141 void (*task_move_group) (struct task_struct *p, int on_rq); 1140 1142 #endif
+5
kernel/sched/stop_task.c
··· 102 102 return 0; 103 103 } 104 104 105 + static void update_curr_stop(struct rq *rq) 106 + { 107 + } 108 + 105 109 /* 106 110 * Simple, special scheduling class for the per-CPU stop tasks: 107 111 */ ··· 132 128 133 129 .prio_changed = prio_changed_stop, 134 130 .switched_to = switched_to_stop, 131 + .update_curr = update_curr_stop, 135 132 };
+1 -1
kernel/time/posix-cpu-timers.c
··· 553 553 *sample = cputime_to_expires(cputime.utime); 554 554 break; 555 555 case CPUCLOCK_SCHED: 556 - *sample = cputime.sum_exec_runtime + task_delta_exec(p); 556 + *sample = cputime.sum_exec_runtime; 557 557 break; 558 558 } 559 559 return 0;
+2 -2
lib/Makefile
··· 10 10 lib-y := ctype.o string.o vsprintf.o cmdline.o \ 11 11 rbtree.o radix-tree.o dump_stack.o timerqueue.o\ 12 12 idr.o int_sqrt.o extable.o \ 13 - sha1.o md5.o irq_regs.o reciprocal_div.o argv_split.o \ 13 + sha1.o md5.o irq_regs.o argv_split.o \ 14 14 proportions.o flex_proportions.o ratelimit.o show_mem.o \ 15 15 is_single_threaded.o plist.o decompress.o kobject_uevent.o \ 16 16 earlycpio.o ··· 26 26 bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \ 27 27 gcd.o lcm.o list_sort.o uuid.o flex_array.o iovec.o clz_ctz.o \ 28 28 bsearch.o find_last_bit.o find_next_bit.o llist.o memweight.o kfifo.o \ 29 - percpu-refcount.o percpu_ida.o hash.o rhashtable.o 29 + percpu-refcount.o percpu_ida.o hash.o rhashtable.o reciprocal_div.o 30 30 obj-y += string_helpers.o 31 31 obj-$(CONFIG_TEST_STRING_HELPERS) += test-string_helpers.o 32 32 obj-y += kstrtox.o
+1 -2
net/bridge/br_multicast.c
··· 813 813 return; 814 814 815 815 if (port) { 816 - __skb_push(skb, sizeof(struct ethhdr)); 817 816 skb->dev = port->dev; 818 817 NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT, skb, NULL, skb->dev, 819 - dev_queue_xmit); 818 + br_dev_queue_push_xmit); 820 819 } else { 821 820 br_multicast_select_own_querier(br, ip, skb); 822 821 netif_rx(skb);
+6 -17
net/core/skbuff.c
··· 552 552 case SKB_FCLONE_CLONE: 553 553 fclones = container_of(skb, struct sk_buff_fclones, skb2); 554 554 555 - /* Warning : We must perform the atomic_dec_and_test() before 556 - * setting skb->fclone back to SKB_FCLONE_FREE, otherwise 557 - * skb_clone() could set clone_ref to 2 before our decrement. 558 - * Anyway, if we are going to free the structure, no need to 559 - * rewrite skb->fclone. 555 + /* The clone portion is available for 556 + * fast-cloning again. 560 557 */ 561 - if (atomic_dec_and_test(&fclones->fclone_ref)) { 558 + skb->fclone = SKB_FCLONE_FREE; 559 + 560 + if (atomic_dec_and_test(&fclones->fclone_ref)) 562 561 kmem_cache_free(skbuff_fclone_cache, fclones); 563 - } else { 564 - /* The clone portion is available for 565 - * fast-cloning again. 566 - */ 567 - skb->fclone = SKB_FCLONE_FREE; 568 - } 569 562 break; 570 563 } 571 564 } ··· 880 887 if (skb->fclone == SKB_FCLONE_ORIG && 881 888 n->fclone == SKB_FCLONE_FREE) { 882 889 n->fclone = SKB_FCLONE_CLONE; 883 - /* As our fastclone was free, clone_ref must be 1 at this point. 884 - * We could use atomic_inc() here, but it is faster 885 - * to set the final value. 886 - */ 887 - atomic_set(&fclones->fclone_ref, 2); 890 + atomic_inc(&fclones->fclone_ref); 888 891 } else { 889 892 if (skb_pfmemalloc(skb)) 890 893 gfp_mask |= __GFP_MEMALLOC;
+18 -18
net/dcb/dcbnl.c
··· 1080 1080 if (!app) 1081 1081 return -EMSGSIZE; 1082 1082 1083 - spin_lock(&dcb_lock); 1083 + spin_lock_bh(&dcb_lock); 1084 1084 list_for_each_entry(itr, &dcb_app_list, list) { 1085 1085 if (itr->ifindex == netdev->ifindex) { 1086 1086 err = nla_put(skb, DCB_ATTR_IEEE_APP, sizeof(itr->app), 1087 1087 &itr->app); 1088 1088 if (err) { 1089 - spin_unlock(&dcb_lock); 1089 + spin_unlock_bh(&dcb_lock); 1090 1090 return -EMSGSIZE; 1091 1091 } 1092 1092 } ··· 1097 1097 else 1098 1098 dcbx = -EOPNOTSUPP; 1099 1099 1100 - spin_unlock(&dcb_lock); 1100 + spin_unlock_bh(&dcb_lock); 1101 1101 nla_nest_end(skb, app); 1102 1102 1103 1103 /* get peer info if available */ ··· 1234 1234 } 1235 1235 1236 1236 /* local app */ 1237 - spin_lock(&dcb_lock); 1237 + spin_lock_bh(&dcb_lock); 1238 1238 app = nla_nest_start(skb, DCB_ATTR_CEE_APP_TABLE); 1239 1239 if (!app) 1240 1240 goto dcb_unlock; ··· 1271 1271 else 1272 1272 dcbx = -EOPNOTSUPP; 1273 1273 1274 - spin_unlock(&dcb_lock); 1274 + spin_unlock_bh(&dcb_lock); 1275 1275 1276 1276 /* features flags */ 1277 1277 if (ops->getfeatcfg) { ··· 1326 1326 return 0; 1327 1327 1328 1328 dcb_unlock: 1329 - spin_unlock(&dcb_lock); 1329 + spin_unlock_bh(&dcb_lock); 1330 1330 nla_put_failure: 1331 1331 return err; 1332 1332 } ··· 1762 1762 struct dcb_app_type *itr; 1763 1763 u8 prio = 0; 1764 1764 1765 - spin_lock(&dcb_lock); 1765 + spin_lock_bh(&dcb_lock); 1766 1766 if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) 1767 1767 prio = itr->app.priority; 1768 - spin_unlock(&dcb_lock); 1768 + spin_unlock_bh(&dcb_lock); 1769 1769 1770 1770 return prio; 1771 1771 } ··· 1789 1789 if (dev->dcbnl_ops->getdcbx) 1790 1790 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1791 1791 1792 - spin_lock(&dcb_lock); 1792 + spin_lock_bh(&dcb_lock); 1793 1793 /* Search for existing match and replace */ 1794 1794 if ((itr = dcb_app_lookup(new, dev->ifindex, 0))) { 1795 1795 if (new->priority) ··· 1804 1804 if (new->priority) 1805 1805 err = dcb_app_add(new, 
dev->ifindex); 1806 1806 out: 1807 - spin_unlock(&dcb_lock); 1807 + spin_unlock_bh(&dcb_lock); 1808 1808 if (!err) 1809 1809 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1810 1810 return err; ··· 1823 1823 struct dcb_app_type *itr; 1824 1824 u8 prio = 0; 1825 1825 1826 - spin_lock(&dcb_lock); 1826 + spin_lock_bh(&dcb_lock); 1827 1827 if ((itr = dcb_app_lookup(app, dev->ifindex, 0))) 1828 1828 prio |= 1 << itr->app.priority; 1829 - spin_unlock(&dcb_lock); 1829 + spin_unlock_bh(&dcb_lock); 1830 1830 1831 1831 return prio; 1832 1832 } ··· 1850 1850 if (dev->dcbnl_ops->getdcbx) 1851 1851 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1852 1852 1853 - spin_lock(&dcb_lock); 1853 + spin_lock_bh(&dcb_lock); 1854 1854 /* Search for existing match and abort if found */ 1855 1855 if (dcb_app_lookup(new, dev->ifindex, new->priority)) { 1856 1856 err = -EEXIST; ··· 1859 1859 1860 1860 err = dcb_app_add(new, dev->ifindex); 1861 1861 out: 1862 - spin_unlock(&dcb_lock); 1862 + spin_unlock_bh(&dcb_lock); 1863 1863 if (!err) 1864 1864 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1865 1865 return err; ··· 1882 1882 if (dev->dcbnl_ops->getdcbx) 1883 1883 event.dcbx = dev->dcbnl_ops->getdcbx(dev); 1884 1884 1885 - spin_lock(&dcb_lock); 1885 + spin_lock_bh(&dcb_lock); 1886 1886 /* Search for existing match and remove it. 
*/ 1887 1887 if ((itr = dcb_app_lookup(del, dev->ifindex, del->priority))) { 1888 1888 list_del(&itr->list); ··· 1890 1890 err = 0; 1891 1891 } 1892 1892 1893 - spin_unlock(&dcb_lock); 1893 + spin_unlock_bh(&dcb_lock); 1894 1894 if (!err) 1895 1895 call_dcbevent_notifiers(DCB_APP_EVENT, &event); 1896 1896 return err; ··· 1902 1902 struct dcb_app_type *app; 1903 1903 struct dcb_app_type *tmp; 1904 1904 1905 - spin_lock(&dcb_lock); 1905 + spin_lock_bh(&dcb_lock); 1906 1906 list_for_each_entry_safe(app, tmp, &dcb_app_list, list) { 1907 1907 list_del(&app->list); 1908 1908 kfree(app); 1909 1909 } 1910 - spin_unlock(&dcb_lock); 1910 + spin_unlock_bh(&dcb_lock); 1911 1911 } 1912 1912 1913 1913 static int __init dcbnl_init(void)
+4
net/ipv4/fib_rules.c
··· 62 62 else 63 63 res->tclassid = 0; 64 64 #endif 65 + 66 + if (err == -ESRCH) 67 + err = -ENETUNREACH; 68 + 65 69 return err; 66 70 } 67 71 EXPORT_SYMBOL_GPL(__fib_lookup);
+5 -6
net/ipv4/igmp.c
··· 318 318 return scount; 319 319 } 320 320 321 - #define igmp_skb_size(skb) (*(unsigned int *)((skb)->cb)) 322 - 323 - static struct sk_buff *igmpv3_newpack(struct net_device *dev, int size) 321 + static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu) 324 322 { 325 323 struct sk_buff *skb; 326 324 struct rtable *rt; ··· 328 330 struct flowi4 fl4; 329 331 int hlen = LL_RESERVED_SPACE(dev); 330 332 int tlen = dev->needed_tailroom; 333 + unsigned int size = mtu; 331 334 332 335 while (1) { 333 336 skb = alloc_skb(size + hlen + tlen, ··· 340 341 return NULL; 341 342 } 342 343 skb->priority = TC_PRIO_CONTROL; 343 - igmp_skb_size(skb) = size; 344 344 345 345 rt = ip_route_output_ports(net, &fl4, NULL, IGMPV3_ALL_MCR, 0, 346 346 0, 0, ··· 352 354 skb_dst_set(skb, &rt->dst); 353 355 skb->dev = dev; 354 356 357 + skb->reserved_tailroom = skb_end_offset(skb) - 358 + min(mtu, skb_end_offset(skb)); 355 359 skb_reserve(skb, hlen); 356 360 357 361 skb_reset_network_header(skb); ··· 423 423 return skb; 424 424 } 425 425 426 - #define AVAILABLE(skb) ((skb) ? ((skb)->dev ? igmp_skb_size(skb) - (skb)->len : \ 427 - skb_tailroom(skb)) : 0) 426 + #define AVAILABLE(skb) ((skb) ? skb_availroom(skb) : 0) 428 427 429 428 static struct sk_buff *add_grec(struct sk_buff *skb, struct ip_mc_list *pmc, 430 429 int type, int gdeleted, int sdeleted)
+1
net/ipv4/netfilter/nft_masq_ipv4.c
··· 24 24 struct nf_nat_range range; 25 25 unsigned int verdict; 26 26 27 + memset(&range, 0, sizeof(range)); 27 28 range.flags = priv->flags; 28 29 29 30 verdict = nf_nat_masquerade_ipv4(pkt->skb, pkt->ops->hooknum,
+2 -2
net/ipv4/tcp_input.c
··· 5231 5231 if (len < (th->doff << 2) || tcp_checksum_complete_user(sk, skb)) 5232 5232 goto csum_error; 5233 5233 5234 - if (!th->ack && !th->rst) 5234 + if (!th->ack && !th->rst && !th->syn) 5235 5235 goto discard; 5236 5236 5237 5237 /* ··· 5650 5650 goto discard; 5651 5651 } 5652 5652 5653 - if (!th->ack && !th->rst) 5653 + if (!th->ack && !th->rst && !th->syn) 5654 5654 goto discard; 5655 5655 5656 5656 if (!tcp_validate_incoming(sk, skb, th, 0))
+4
net/ipv6/ip6mr.c
··· 1439 1439 1440 1440 void ip6_mr_cleanup(void) 1441 1441 { 1442 + rtnl_unregister(RTNL_FAMILY_IP6MR, RTM_GETROUTE); 1443 + #ifdef CONFIG_IPV6_PIMSM_V2 1444 + inet6_del_protocol(&pim6_protocol, IPPROTO_PIM); 1445 + #endif 1442 1446 unregister_netdevice_notifier(&ip6_mr_notifier); 1443 1447 unregister_pernet_subsys(&ip6mr_net_ops); 1444 1448 kmem_cache_destroy(mrt_cachep);
+5 -4
net/ipv6/mcast.c
··· 1550 1550 hdr->daddr = *daddr; 1551 1551 } 1552 1552 1553 - static struct sk_buff *mld_newpack(struct inet6_dev *idev, int size) 1553 + static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu) 1554 1554 { 1555 1555 struct net_device *dev = idev->dev; 1556 1556 struct net *net = dev_net(dev); ··· 1561 1561 const struct in6_addr *saddr; 1562 1562 int hlen = LL_RESERVED_SPACE(dev); 1563 1563 int tlen = dev->needed_tailroom; 1564 + unsigned int size = mtu + hlen + tlen; 1564 1565 int err; 1565 1566 u8 ra[8] = { IPPROTO_ICMPV6, 0, 1566 1567 IPV6_TLV_ROUTERALERT, 2, 0, 0, 1567 1568 IPV6_TLV_PADN, 0 }; 1568 1569 1569 1570 /* we assume size > sizeof(ra) here */ 1570 - size += hlen + tlen; 1571 1571 /* limit our allocations to order-0 page */ 1572 1572 size = min_t(int, size, SKB_MAX_ORDER(0, 0)); 1573 1573 skb = sock_alloc_send_skb(sk, size, 1, &err); ··· 1576 1576 return NULL; 1577 1577 1578 1578 skb->priority = TC_PRIO_CONTROL; 1579 + skb->reserved_tailroom = skb_end_offset(skb) - 1580 + min(mtu, skb_end_offset(skb)); 1579 1581 skb_reserve(skb, hlen); 1580 1582 1581 1583 if (__ipv6_get_lladdr(idev, &addr_buf, IFA_F_TENTATIVE)) { ··· 1692 1690 return skb; 1693 1691 } 1694 1692 1695 - #define AVAILABLE(skb) ((skb) ? ((skb)->dev ? (skb)->dev->mtu - (skb)->len : \ 1696 - skb_tailroom(skb)) : 0) 1693 + #define AVAILABLE(skb) ((skb) ? skb_availroom(skb) : 0) 1697 1694 1698 1695 static struct sk_buff *add_grec(struct sk_buff *skb, struct ifmcaddr6 *pmc, 1699 1696 int type, int gdeleted, int sdeleted, int crsend)
+1
net/ipv6/netfilter/nft_masq_ipv6.c
··· 25 25 struct nf_nat_range range; 26 26 unsigned int verdict; 27 27 28 + memset(&range, 0, sizeof(range)); 28 29 range.flags = priv->flags; 29 30 30 31 verdict = nf_nat_masquerade_ipv6(pkt->skb, &range, pkt->out);
+5 -1
net/ipx/af_ipx.c
··· 1764 1764 struct ipxhdr *ipx = NULL; 1765 1765 struct sk_buff *skb; 1766 1766 int copied, rc; 1767 + bool locked = true; 1767 1768 1768 1769 lock_sock(sk); 1769 1770 /* put the autobinding in */ ··· 1791 1790 if (sock_flag(sk, SOCK_ZAPPED)) 1792 1791 goto out; 1793 1792 1793 + release_sock(sk); 1794 + locked = false; 1794 1795 skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, 1795 1796 flags & MSG_DONTWAIT, &rc); 1796 1797 if (!skb) { ··· 1829 1826 out_free: 1830 1827 skb_free_datagram(sk, skb); 1831 1828 out: 1832 - release_sock(sk); 1829 + if (locked) 1830 + release_sock(sk); 1833 1831 return rc; 1834 1832 } 1835 1833
+3
net/mac80211/aes_ccm.c
··· 53 53 __aligned(__alignof__(struct aead_request)); 54 54 struct aead_request *aead_req = (void *) aead_req_data; 55 55 56 + if (data_len == 0) 57 + return -EINVAL; 58 + 56 59 memset(aead_req, 0, sizeof(aead_req_data)); 57 60 58 61 sg_init_one(&pt, data, data_len);
+6 -9
net/mac80211/rc80211_minstrel_ht.c
··· 252 252 cur_thr = mi->groups[cur_group].rates[cur_idx].cur_tp; 253 253 cur_prob = mi->groups[cur_group].rates[cur_idx].probability; 254 254 255 - tmp_group = tp_list[j - 1] / MCS_GROUP_RATES; 256 - tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES; 257 - tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp; 258 - tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability; 259 - 260 - while (j > 0 && (cur_thr > tmp_thr || 261 - (cur_thr == tmp_thr && cur_prob > tmp_prob))) { 262 - j--; 255 + do { 263 256 tmp_group = tp_list[j - 1] / MCS_GROUP_RATES; 264 257 tmp_idx = tp_list[j - 1] % MCS_GROUP_RATES; 265 258 tmp_thr = mi->groups[tmp_group].rates[tmp_idx].cur_tp; 266 259 tmp_prob = mi->groups[tmp_group].rates[tmp_idx].probability; 267 - } 260 + if (cur_thr < tmp_thr || 261 + (cur_thr == tmp_thr && cur_prob <= tmp_prob)) 262 + break; 263 + j--; 264 + } while (j > 0); 268 265 269 266 if (j < MAX_THR_RATES - 1) { 270 267 memmove(&tp_list[j + 1], &tp_list[j], (sizeof(*tp_list) *
+6
net/netfilter/ipset/ip_set_core.c
··· 1863 1863 if (*op < IP_SET_OP_VERSION) { 1864 1864 /* Check the version at the beginning of operations */ 1865 1865 struct ip_set_req_version *req_version = data; 1866 + 1867 + if (*len < sizeof(struct ip_set_req_version)) { 1868 + ret = -EINVAL; 1869 + goto done; 1870 + } 1871 + 1866 1872 if (req_version->version != IPSET_PROTOCOL) { 1867 1873 ret = -EPROTO; 1868 1874 goto done;
+2
net/netfilter/ipvs/ip_vs_xmit.c
··· 846 846 new_skb = skb_realloc_headroom(skb, max_headroom); 847 847 if (!new_skb) 848 848 goto error; 849 + if (skb->sk) 850 + skb_set_owner_w(new_skb, skb->sk); 849 851 consume_skb(skb); 850 852 skb = new_skb; 851 853 }
+8 -6
net/netfilter/nf_conntrack_core.c
··· 611 611 */ 612 612 NF_CT_ASSERT(!nf_ct_is_confirmed(ct)); 613 613 pr_debug("Confirming conntrack %p\n", ct); 614 - /* We have to check the DYING flag inside the lock to prevent 615 - a race against nf_ct_get_next_corpse() possibly called from 616 - user context, else we insert an already 'dead' hash, blocking 617 - further use of that particular connection -JM */ 614 + 615 + /* We have to check the DYING flag after unlink to prevent 616 + * a race against nf_ct_get_next_corpse() possibly called from 617 + * user context, else we insert an already 'dead' hash, blocking 618 + * further use of that particular connection -JM. 619 + */ 620 + nf_ct_del_from_dying_or_unconfirmed_list(ct); 618 621 619 622 if (unlikely(nf_ct_is_dying(ct))) { 623 + nf_ct_add_to_dying_list(ct); 620 624 nf_conntrack_double_unlock(hash, reply_hash); 621 625 local_bh_enable(); 622 626 return NF_ACCEPT; ··· 639 635 &h->tuple) && 640 636 zone == nf_ct_zone(nf_ct_tuplehash_to_ctrack(h))) 641 637 goto out; 642 - 643 - nf_ct_del_from_dying_or_unconfirmed_list(ct); 644 638 645 639 /* Timer relative to confirmation time, not original 646 640 setting time, otherwise we'd get timer wrap in
+8 -16
net/netfilter/nf_tables_api.c
··· 3484 3484 } 3485 3485 } 3486 3486 3487 - /* Schedule objects for release via rcu to make sure no packets are accesing 3488 - * removed rules. 3489 - */ 3490 - static void nf_tables_commit_release_rcu(struct rcu_head *rt) 3487 + static void nf_tables_commit_release(struct nft_trans *trans) 3491 3488 { 3492 - struct nft_trans *trans = container_of(rt, struct nft_trans, rcu_head); 3493 - 3494 3489 switch (trans->msg_type) { 3495 3490 case NFT_MSG_DELTABLE: 3496 3491 nf_tables_table_destroy(&trans->ctx); ··· 3607 3612 } 3608 3613 } 3609 3614 3615 + synchronize_rcu(); 3616 + 3610 3617 list_for_each_entry_safe(trans, next, &net->nft.commit_list, list) { 3611 3618 list_del(&trans->list); 3612 - trans->ctx.nla = NULL; 3613 - call_rcu(&trans->rcu_head, nf_tables_commit_release_rcu); 3619 + nf_tables_commit_release(trans); 3614 3620 } 3615 3621 3616 3622 nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); ··· 3619 3623 return 0; 3620 3624 } 3621 3625 3622 - /* Schedule objects for release via rcu to make sure no packets are accesing 3623 - * aborted rules. 3624 - */ 3625 - static void nf_tables_abort_release_rcu(struct rcu_head *rt) 3626 + static void nf_tables_abort_release(struct nft_trans *trans) 3626 3627 { 3627 - struct nft_trans *trans = container_of(rt, struct nft_trans, rcu_head); 3628 - 3629 3628 switch (trans->msg_type) { 3630 3629 case NFT_MSG_NEWTABLE: 3631 3630 nf_tables_table_destroy(&trans->ctx); ··· 3716 3725 } 3717 3726 } 3718 3727 3728 + synchronize_rcu(); 3729 + 3719 3730 list_for_each_entry_safe_reverse(trans, next, 3720 3731 &net->nft.commit_list, list) { 3721 3732 list_del(&trans->list); 3722 - trans->ctx.nla = NULL; 3723 - call_rcu(&trans->rcu_head, nf_tables_abort_release_rcu); 3733 + nf_tables_abort_release(trans); 3724 3734 } 3725 3735 3726 3736 return 0;
+11 -1
net/netfilter/nfnetlink.c
··· 47 47 [NFNLGRP_CONNTRACK_EXP_NEW] = NFNL_SUBSYS_CTNETLINK_EXP, 48 48 [NFNLGRP_CONNTRACK_EXP_UPDATE] = NFNL_SUBSYS_CTNETLINK_EXP, 49 49 [NFNLGRP_CONNTRACK_EXP_DESTROY] = NFNL_SUBSYS_CTNETLINK_EXP, 50 + [NFNLGRP_NFTABLES] = NFNL_SUBSYS_NFTABLES, 51 + [NFNLGRP_ACCT_QUOTA] = NFNL_SUBSYS_ACCT, 50 52 }; 51 53 52 54 void nfnl_lock(__u8 subsys_id) ··· 466 464 static int nfnetlink_bind(int group) 467 465 { 468 466 const struct nfnetlink_subsystem *ss; 469 - int type = nfnl_group2type[group]; 467 + int type; 468 + 469 + if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX) 470 + return -EINVAL; 471 + 472 + type = nfnl_group2type[group]; 470 473 471 474 rcu_read_lock(); 472 475 ss = nfnetlink_get_subsys(type); ··· 520 513 static int __init nfnetlink_init(void) 521 514 { 522 515 int i; 516 + 517 + for (i = NFNLGRP_NONE + 1; i <= NFNLGRP_MAX; i++) 518 + BUG_ON(nfnl_group2type[i] == NFNL_SUBSYS_NONE); 523 519 524 520 for (i=0; i<NFNL_SUBSYS_COUNT; i++) 525 521 mutex_init(&table[i].mutex);
+6 -34
net/netfilter/nft_compat.c
··· 21 21 #include <linux/netfilter_ipv6/ip6_tables.h> 22 22 #include <net/netfilter/nf_tables.h> 23 23 24 - static const struct { 25 - const char *name; 26 - u8 type; 27 - } table_to_chaintype[] = { 28 - { "filter", NFT_CHAIN_T_DEFAULT }, 29 - { "raw", NFT_CHAIN_T_DEFAULT }, 30 - { "security", NFT_CHAIN_T_DEFAULT }, 31 - { "mangle", NFT_CHAIN_T_ROUTE }, 32 - { "nat", NFT_CHAIN_T_NAT }, 33 - { }, 34 - }; 35 - 36 - static int nft_compat_table_to_chaintype(const char *table) 37 - { 38 - int i; 39 - 40 - for (i = 0; table_to_chaintype[i].name != NULL; i++) { 41 - if (strcmp(table_to_chaintype[i].name, table) == 0) 42 - return table_to_chaintype[i].type; 43 - } 44 - 45 - return -1; 46 - } 47 - 48 24 static int nft_compat_chain_validate_dependency(const char *tablename, 49 25 const struct nft_chain *chain) 50 26 { 51 - enum nft_chain_type type; 52 27 const struct nft_base_chain *basechain; 53 28 54 29 if (!tablename || !(chain->flags & NFT_BASE_CHAIN)) 55 30 return 0; 56 31 57 - type = nft_compat_table_to_chaintype(tablename); 58 - if (type < 0) 59 - return -EINVAL; 60 - 61 32 basechain = nft_base_chain(chain); 62 - if (basechain->type->type != type) 33 + if (strcmp(tablename, "nat") == 0 && 34 + basechain->type->type != NFT_CHAIN_T_NAT) 63 35 return -EINVAL; 64 36 65 37 return 0; ··· 89 117 struct xt_target *target, void *info, 90 118 union nft_entry *entry, u8 proto, bool inv) 91 119 { 92 - par->net = &init_net; 120 + par->net = ctx->net; 93 121 par->table = ctx->table->name; 94 122 switch (ctx->afi->family) { 95 123 case AF_INET: ··· 296 324 struct xt_match *match, void *info, 297 325 union nft_entry *entry, u8 proto, bool inv) 298 326 { 299 - par->net = &init_net; 327 + par->net = ctx->net; 300 328 par->table = ctx->table->name; 301 329 switch (ctx->afi->family) { 302 330 case AF_INET: ··· 346 374 union nft_entry e = {}; 347 375 int ret; 348 376 349 - ret = nft_compat_chain_validate_dependency(match->name, ctx->chain); 377 + ret = 
nft_compat_chain_validate_dependency(match->table, ctx->chain); 350 378 if (ret < 0) 351 379 goto err; 352 380 ··· 420 448 if (!(hook_mask & match->hooks)) 421 449 return -EINVAL; 422 450 423 - ret = nft_compat_chain_validate_dependency(match->name, 451 + ret = nft_compat_chain_validate_dependency(match->table, 424 452 ctx->chain); 425 453 if (ret < 0) 426 454 return ret;
+6 -4
net/openvswitch/actions.c
··· 246 246 { 247 247 int transport_len = skb->len - skb_transport_offset(skb); 248 248 249 - if (l4_proto == IPPROTO_TCP) { 249 + if (l4_proto == NEXTHDR_TCP) { 250 250 if (likely(transport_len >= sizeof(struct tcphdr))) 251 251 inet_proto_csum_replace16(&tcp_hdr(skb)->check, skb, 252 252 addr, new_addr, 1); 253 - } else if (l4_proto == IPPROTO_UDP) { 253 + } else if (l4_proto == NEXTHDR_UDP) { 254 254 if (likely(transport_len >= sizeof(struct udphdr))) { 255 255 struct udphdr *uh = udp_hdr(skb); 256 256 ··· 261 261 uh->check = CSUM_MANGLED_0; 262 262 } 263 263 } 264 + } else if (l4_proto == NEXTHDR_ICMP) { 265 + if (likely(transport_len >= sizeof(struct icmp6hdr))) 266 + inet_proto_csum_replace16(&icmp6_hdr(skb)->icmp6_cksum, 267 + skb, addr, new_addr, 1); 264 268 } 265 269 } 266 270 ··· 726 722 727 723 case OVS_ACTION_ATTR_SAMPLE: 728 724 err = sample(dp, skb, key, a); 729 - if (unlikely(err)) /* skb already freed. */ 730 - return err; 731 725 break; 732 726 } 733 727
+7 -7
net/openvswitch/datapath.c
··· 1265 1265 return msgsize; 1266 1266 } 1267 1267 1268 - /* Called with ovs_mutex or RCU read lock. */ 1268 + /* Called with ovs_mutex. */ 1269 1269 static int ovs_dp_cmd_fill_info(struct datapath *dp, struct sk_buff *skb, 1270 1270 u32 portid, u32 seq, u32 flags, u8 cmd) 1271 1271 { ··· 1555 1555 if (!reply) 1556 1556 return -ENOMEM; 1557 1557 1558 - rcu_read_lock(); 1558 + ovs_lock(); 1559 1559 dp = lookup_datapath(sock_net(skb->sk), info->userhdr, info->attrs); 1560 1560 if (IS_ERR(dp)) { 1561 1561 err = PTR_ERR(dp); ··· 1564 1564 err = ovs_dp_cmd_fill_info(dp, reply, info->snd_portid, 1565 1565 info->snd_seq, 0, OVS_DP_CMD_NEW); 1566 1566 BUG_ON(err < 0); 1567 - rcu_read_unlock(); 1567 + ovs_unlock(); 1568 1568 1569 1569 return genlmsg_reply(reply, info); 1570 1570 1571 1571 err_unlock_free: 1572 - rcu_read_unlock(); 1572 + ovs_unlock(); 1573 1573 kfree_skb(reply); 1574 1574 return err; 1575 1575 } ··· 1581 1581 int skip = cb->args[0]; 1582 1582 int i = 0; 1583 1583 1584 - rcu_read_lock(); 1585 - list_for_each_entry_rcu(dp, &ovs_net->dps, list_node) { 1584 + ovs_lock(); 1585 + list_for_each_entry(dp, &ovs_net->dps, list_node) { 1586 1586 if (i >= skip && 1587 1587 ovs_dp_cmd_fill_info(dp, skb, NETLINK_CB(cb->skb).portid, 1588 1588 cb->nlh->nlmsg_seq, NLM_F_MULTI, ··· 1590 1590 break; 1591 1591 i++; 1592 1592 } 1593 - rcu_read_unlock(); 1593 + ovs_unlock(); 1594 1594 1595 1595 cb->args[0] = i; 1596 1596
+8 -1
net/openvswitch/flow_netlink.c
··· 145 145 if (match->key->eth.type == htons(ETH_P_ARP) 146 146 || match->key->eth.type == htons(ETH_P_RARP)) { 147 147 key_expected |= 1 << OVS_KEY_ATTR_ARP; 148 - if (match->mask && (match->mask->key.eth.type == htons(0xffff))) 148 + if (match->mask && (match->mask->key.tp.src == htons(0xff))) 149 149 mask_allowed |= 1 << OVS_KEY_ATTR_ARP; 150 150 } 151 151 ··· 689 689 ipv6_key->ipv6_frag, OVS_FRAG_TYPE_MAX); 690 690 return -EINVAL; 691 691 } 692 + 693 + if (!is_mask && ipv6_key->ipv6_label & htonl(0xFFF00000)) { 694 + OVS_NLERR("IPv6 flow label %x is out of range (max=%x).\n", 695 + ntohl(ipv6_key->ipv6_label), (1 << 20) - 1); 696 + return -EINVAL; 697 + } 698 + 692 699 SW_FLOW_KEY_PUT(match, ipv6.label, 693 700 ipv6_key->ipv6_label, is_mask); 694 701 SW_FLOW_KEY_PUT(match, ip.proto,
+1
net/sunrpc/Kconfig
··· 35 35 config SUNRPC_DEBUG 36 36 bool "RPC: Enable dprintk debugging" 37 37 depends on SUNRPC && SYSCTL 38 + select DEBUG_FS 38 39 help 39 40 This option enables a sysctl-based debugging interface 40 41 that is be used by the 'rpcdebug' utility to turn on or off
+1
net/sunrpc/Makefile
··· 14 14 addr.o rpcb_clnt.o timer.o xdr.o \ 15 15 sunrpc_syms.o cache.o rpc_pipe.o \ 16 16 svc_xprt.o 17 + sunrpc-$(CONFIG_SUNRPC_DEBUG) += debugfs.o 17 18 sunrpc-$(CONFIG_SUNRPC_BACKCHANNEL) += backchannel_rqst.o bc_svc.o 18 19 sunrpc-$(CONFIG_PROC_FS) += stats.o 19 20 sunrpc-$(CONFIG_SYSCTL) += sysctl.o
+2 -2
net/sunrpc/auth.c
··· 16 16 #include <linux/sunrpc/gss_api.h> 17 17 #include <linux/spinlock.h> 18 18 19 - #ifdef RPC_DEBUG 19 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 20 20 # define RPCDBG_FACILITY RPCDBG_AUTH 21 21 #endif 22 22 ··· 646 646 cred->cr_auth = auth; 647 647 cred->cr_ops = ops; 648 648 cred->cr_expire = jiffies; 649 - #ifdef RPC_DEBUG 649 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 650 650 cred->cr_magic = RPCAUTH_CRED_MAGIC; 651 651 #endif 652 652 cred->cr_uid = acred->uid;
+1 -1
net/sunrpc/auth_generic.c
··· 14 14 #include <linux/sunrpc/debug.h> 15 15 #include <linux/sunrpc/sched.h> 16 16 17 - #ifdef RPC_DEBUG 17 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 18 18 # define RPCDBG_FACILITY RPCDBG_AUTH 19 19 #endif 20 20
+1 -1
net/sunrpc/auth_gss/auth_gss.c
··· 66 66 #define GSS_KEY_EXPIRE_TIMEO 240 67 67 static unsigned int gss_key_expire_timeo = GSS_KEY_EXPIRE_TIMEO; 68 68 69 - #ifdef RPC_DEBUG 69 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 70 70 # define RPCDBG_FACILITY RPCDBG_AUTH 71 71 #endif 72 72
+1 -1
net/sunrpc/auth_gss/gss_generic_token.c
··· 38 38 #include <linux/sunrpc/gss_asn1.h> 39 39 40 40 41 - #ifdef RPC_DEBUG 41 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 42 42 # define RPCDBG_FACILITY RPCDBG_AUTH 43 43 #endif 44 44
+1 -1
net/sunrpc/auth_gss/gss_krb5_crypto.c
··· 45 45 #include <linux/sunrpc/gss_krb5.h> 46 46 #include <linux/sunrpc/xdr.h> 47 47 48 - #ifdef RPC_DEBUG 48 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 49 49 # define RPCDBG_FACILITY RPCDBG_AUTH 50 50 #endif 51 51
+1 -1
net/sunrpc/auth_gss/gss_krb5_keys.c
··· 61 61 #include <linux/sunrpc/xdr.h> 62 62 #include <linux/lcm.h> 63 63 64 - #ifdef RPC_DEBUG 64 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 65 65 # define RPCDBG_FACILITY RPCDBG_AUTH 66 66 #endif 67 67
+1 -1
net/sunrpc/auth_gss/gss_krb5_mech.c
··· 45 45 #include <linux/crypto.h> 46 46 #include <linux/sunrpc/gss_krb5_enctypes.h> 47 47 48 - #ifdef RPC_DEBUG 48 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 49 49 # define RPCDBG_FACILITY RPCDBG_AUTH 50 50 #endif 51 51
+1 -1
net/sunrpc/auth_gss/gss_krb5_seal.c
··· 64 64 #include <linux/random.h> 65 65 #include <linux/crypto.h> 66 66 67 - #ifdef RPC_DEBUG 67 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 68 68 # define RPCDBG_FACILITY RPCDBG_AUTH 69 69 #endif 70 70
+1 -1
net/sunrpc/auth_gss/gss_krb5_seqnum.c
··· 35 35 #include <linux/sunrpc/gss_krb5.h> 36 36 #include <linux/crypto.h> 37 37 38 - #ifdef RPC_DEBUG 38 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 39 39 # define RPCDBG_FACILITY RPCDBG_AUTH 40 40 #endif 41 41
+1 -1
net/sunrpc/auth_gss/gss_krb5_unseal.c
··· 62 62 #include <linux/sunrpc/gss_krb5.h> 63 63 #include <linux/crypto.h> 64 64 65 - #ifdef RPC_DEBUG 65 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 66 66 # define RPCDBG_FACILITY RPCDBG_AUTH 67 67 #endif 68 68
+1 -1
net/sunrpc/auth_gss/gss_krb5_wrap.c
··· 35 35 #include <linux/pagemap.h> 36 36 #include <linux/crypto.h> 37 37 38 - #ifdef RPC_DEBUG 38 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 39 39 # define RPCDBG_FACILITY RPCDBG_AUTH 40 40 #endif 41 41
+1 -1
net/sunrpc/auth_gss/gss_mech_switch.c
··· 46 46 #include <linux/sunrpc/gss_api.h> 47 47 #include <linux/sunrpc/clnt.h> 48 48 49 - #ifdef RPC_DEBUG 49 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 50 50 # define RPCDBG_FACILITY RPCDBG_AUTH 51 51 #endif 52 52
+1 -1
net/sunrpc/auth_gss/gss_rpc_xdr.h
··· 25 25 #include <linux/sunrpc/clnt.h> 26 26 #include <linux/sunrpc/xprtsock.h> 27 27 28 - #ifdef RPC_DEBUG 28 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 29 29 # define RPCDBG_FACILITY RPCDBG_AUTH 30 30 #endif 31 31
+1 -1
net/sunrpc/auth_gss/svcauth_gss.c
··· 51 51 #include "gss_rpc_upcall.h" 52 52 53 53 54 - #ifdef RPC_DEBUG 54 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 55 55 # define RPCDBG_FACILITY RPCDBG_AUTH 56 56 #endif 57 57
+2 -2
net/sunrpc/auth_null.c
··· 10 10 #include <linux/module.h> 11 11 #include <linux/sunrpc/clnt.h> 12 12 13 - #ifdef RPC_DEBUG 13 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 14 14 # define RPCDBG_FACILITY RPCDBG_AUTH 15 15 #endif 16 16 ··· 138 138 .cr_ops = &null_credops, 139 139 .cr_count = ATOMIC_INIT(1), 140 140 .cr_flags = 1UL << RPCAUTH_CRED_UPTODATE, 141 - #ifdef RPC_DEBUG 141 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 142 142 .cr_magic = RPCAUTH_CRED_MAGIC, 143 143 #endif 144 144 };
+1 -1
net/sunrpc/auth_unix.c
··· 25 25 26 26 #define UNX_WRITESLACK (21 + (UNX_MAXNODENAME >> 2)) 27 27 28 - #ifdef RPC_DEBUG 28 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 29 29 # define RPCDBG_FACILITY RPCDBG_AUTH 30 30 #endif 31 31
+1 -1
net/sunrpc/backchannel_rqst.c
··· 27 27 #include <linux/export.h> 28 28 #include <linux/sunrpc/bc_xprt.h> 29 29 30 - #ifdef RPC_DEBUG 30 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 31 31 #define RPCDBG_FACILITY RPCDBG_TRANS 32 32 #endif 33 33
+12 -4
net/sunrpc/clnt.c
··· 42 42 #include "sunrpc.h" 43 43 #include "netns.h" 44 44 45 - #ifdef RPC_DEBUG 45 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 46 46 # define RPCDBG_FACILITY RPCDBG_CALL 47 47 #endif 48 48 ··· 305 305 struct super_block *pipefs_sb; 306 306 int err; 307 307 308 + err = rpc_clnt_debugfs_register(clnt); 309 + if (err) 310 + return err; 311 + 308 312 pipefs_sb = rpc_get_sb_net(net); 309 313 if (pipefs_sb) { 310 314 err = rpc_setup_pipedir(pipefs_sb, clnt); ··· 335 331 out: 336 332 if (pipefs_sb) 337 333 rpc_put_sb_net(net); 334 + rpc_clnt_debugfs_unregister(clnt); 338 335 return err; 339 336 } 340 337 ··· 675 670 676 671 rpc_unregister_client(clnt); 677 672 __rpc_clnt_remove_pipedir(clnt); 673 + rpc_clnt_debugfs_unregister(clnt); 678 674 679 675 /* 680 676 * A new transport was created. "clnt" therefore ··· 777 771 rcu_dereference(clnt->cl_xprt)->servername); 778 772 if (clnt->cl_parent != clnt) 779 773 parent = clnt->cl_parent; 774 + rpc_clnt_debugfs_unregister(clnt); 780 775 rpc_clnt_remove_pipedir(clnt); 781 776 rpc_unregister_client(clnt); 782 777 rpc_free_iostats(clnt->cl_metrics); ··· 1403 1396 } 1404 1397 EXPORT_SYMBOL_GPL(rpc_restart_call); 1405 1398 1406 - #ifdef RPC_DEBUG 1407 - static const char *rpc_proc_name(const struct rpc_task *task) 1399 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 1400 + const char 1401 + *rpc_proc_name(const struct rpc_task *task) 1408 1402 { 1409 1403 const struct rpc_procinfo *proc = task->tk_msg.rpc_proc; 1410 1404 ··· 2429 2421 } 2430 2422 EXPORT_SYMBOL_GPL(rpc_call_null); 2431 2423 2432 - #ifdef RPC_DEBUG 2424 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 2433 2425 static void rpc_show_header(void) 2434 2426 { 2435 2427 printk(KERN_INFO "-pid- flgs status -client- --rqstp- "
+292
net/sunrpc/debugfs.c
··· 1 + /** 2 + * debugfs interface for sunrpc 3 + * 4 + * (c) 2014 Jeff Layton <jlayton@primarydata.com> 5 + */ 6 + 7 + #include <linux/debugfs.h> 8 + #include <linux/sunrpc/sched.h> 9 + #include <linux/sunrpc/clnt.h> 10 + #include "netns.h" 11 + 12 + static struct dentry *topdir; 13 + static struct dentry *rpc_clnt_dir; 14 + static struct dentry *rpc_xprt_dir; 15 + 16 + struct rpc_clnt_iter { 17 + struct rpc_clnt *clnt; 18 + loff_t pos; 19 + }; 20 + 21 + static int 22 + tasks_show(struct seq_file *f, void *v) 23 + { 24 + u32 xid = 0; 25 + struct rpc_task *task = v; 26 + struct rpc_clnt *clnt = task->tk_client; 27 + const char *rpc_waitq = "none"; 28 + 29 + if (RPC_IS_QUEUED(task)) 30 + rpc_waitq = rpc_qname(task->tk_waitqueue); 31 + 32 + if (task->tk_rqstp) 33 + xid = be32_to_cpu(task->tk_rqstp->rq_xid); 34 + 35 + seq_printf(f, "%5u %04x %6d 0x%x 0x%x %8ld %ps %sv%u %s a:%ps q:%s\n", 36 + task->tk_pid, task->tk_flags, task->tk_status, 37 + clnt->cl_clid, xid, task->tk_timeout, task->tk_ops, 38 + clnt->cl_program->name, clnt->cl_vers, rpc_proc_name(task), 39 + task->tk_action, rpc_waitq); 40 + return 0; 41 + } 42 + 43 + static void * 44 + tasks_start(struct seq_file *f, loff_t *ppos) 45 + __acquires(&clnt->cl_lock) 46 + { 47 + struct rpc_clnt_iter *iter = f->private; 48 + loff_t pos = *ppos; 49 + struct rpc_clnt *clnt = iter->clnt; 50 + struct rpc_task *task; 51 + 52 + iter->pos = pos + 1; 53 + spin_lock(&clnt->cl_lock); 54 + list_for_each_entry(task, &clnt->cl_tasks, tk_task) 55 + if (pos-- == 0) 56 + return task; 57 + return NULL; 58 + } 59 + 60 + static void * 61 + tasks_next(struct seq_file *f, void *v, loff_t *pos) 62 + { 63 + struct rpc_clnt_iter *iter = f->private; 64 + struct rpc_clnt *clnt = iter->clnt; 65 + struct rpc_task *task = v; 66 + struct list_head *next = task->tk_task.next; 67 + 68 + ++iter->pos; 69 + ++*pos; 70 + 71 + /* If there's another task on list, return it */ 72 + if (next == &clnt->cl_tasks) 73 + return NULL; 74 + return 
list_entry(next, struct rpc_task, tk_task); 75 + } 76 + 77 + static void 78 + tasks_stop(struct seq_file *f, void *v) 79 + __releases(&clnt->cl_lock) 80 + { 81 + struct rpc_clnt_iter *iter = f->private; 82 + struct rpc_clnt *clnt = iter->clnt; 83 + 84 + spin_unlock(&clnt->cl_lock); 85 + } 86 + 87 + static const struct seq_operations tasks_seq_operations = { 88 + .start = tasks_start, 89 + .next = tasks_next, 90 + .stop = tasks_stop, 91 + .show = tasks_show, 92 + }; 93 + 94 + static int tasks_open(struct inode *inode, struct file *filp) 95 + { 96 + int ret = seq_open_private(filp, &tasks_seq_operations, 97 + sizeof(struct rpc_clnt_iter)); 98 + 99 + if (!ret) { 100 + struct seq_file *seq = filp->private_data; 101 + struct rpc_clnt_iter *iter = seq->private; 102 + 103 + iter->clnt = inode->i_private; 104 + 105 + if (!atomic_inc_not_zero(&iter->clnt->cl_count)) { 106 + seq_release_private(inode, filp); 107 + ret = -EINVAL; 108 + } 109 + } 110 + 111 + return ret; 112 + } 113 + 114 + static int 115 + tasks_release(struct inode *inode, struct file *filp) 116 + { 117 + struct seq_file *seq = filp->private_data; 118 + struct rpc_clnt_iter *iter = seq->private; 119 + 120 + rpc_release_client(iter->clnt); 121 + return seq_release_private(inode, filp); 122 + } 123 + 124 + static const struct file_operations tasks_fops = { 125 + .owner = THIS_MODULE, 126 + .open = tasks_open, 127 + .read = seq_read, 128 + .llseek = seq_lseek, 129 + .release = tasks_release, 130 + }; 131 + 132 + int 133 + rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 134 + { 135 + int len, err; 136 + char name[24]; /* enough for "../../rpc_xprt/ + 8 hex digits + NULL */ 137 + 138 + /* Already registered? 
*/ 139 + if (clnt->cl_debugfs) 140 + return 0; 141 + 142 + len = snprintf(name, sizeof(name), "%x", clnt->cl_clid); 143 + if (len >= sizeof(name)) 144 + return -EINVAL; 145 + 146 + /* make the per-client dir */ 147 + clnt->cl_debugfs = debugfs_create_dir(name, rpc_clnt_dir); 148 + if (!clnt->cl_debugfs) 149 + return -ENOMEM; 150 + 151 + /* make tasks file */ 152 + err = -ENOMEM; 153 + if (!debugfs_create_file("tasks", S_IFREG | S_IRUSR, clnt->cl_debugfs, 154 + clnt, &tasks_fops)) 155 + goto out_err; 156 + 157 + err = -EINVAL; 158 + rcu_read_lock(); 159 + len = snprintf(name, sizeof(name), "../../rpc_xprt/%s", 160 + rcu_dereference(clnt->cl_xprt)->debugfs->d_name.name); 161 + rcu_read_unlock(); 162 + if (len >= sizeof(name)) 163 + goto out_err; 164 + 165 + err = -ENOMEM; 166 + if (!debugfs_create_symlink("xprt", clnt->cl_debugfs, name)) 167 + goto out_err; 168 + 169 + return 0; 170 + out_err: 171 + debugfs_remove_recursive(clnt->cl_debugfs); 172 + clnt->cl_debugfs = NULL; 173 + return err; 174 + } 175 + 176 + void 177 + rpc_clnt_debugfs_unregister(struct rpc_clnt *clnt) 178 + { 179 + debugfs_remove_recursive(clnt->cl_debugfs); 180 + clnt->cl_debugfs = NULL; 181 + } 182 + 183 + static int 184 + xprt_info_show(struct seq_file *f, void *v) 185 + { 186 + struct rpc_xprt *xprt = f->private; 187 + 188 + seq_printf(f, "netid: %s\n", xprt->address_strings[RPC_DISPLAY_NETID]); 189 + seq_printf(f, "addr: %s\n", xprt->address_strings[RPC_DISPLAY_ADDR]); 190 + seq_printf(f, "port: %s\n", xprt->address_strings[RPC_DISPLAY_PORT]); 191 + seq_printf(f, "state: 0x%lx\n", xprt->state); 192 + return 0; 193 + } 194 + 195 + static int 196 + xprt_info_open(struct inode *inode, struct file *filp) 197 + { 198 + int ret; 199 + struct rpc_xprt *xprt = inode->i_private; 200 + 201 + ret = single_open(filp, xprt_info_show, xprt); 202 + 203 + if (!ret) { 204 + if (!xprt_get(xprt)) { 205 + single_release(inode, filp); 206 + ret = -EINVAL; 207 + } 208 + } 209 + return ret; 210 + } 211 + 212 + 
static int 213 + xprt_info_release(struct inode *inode, struct file *filp) 214 + { 215 + struct rpc_xprt *xprt = inode->i_private; 216 + 217 + xprt_put(xprt); 218 + return single_release(inode, filp); 219 + } 220 + 221 + static const struct file_operations xprt_info_fops = { 222 + .owner = THIS_MODULE, 223 + .open = xprt_info_open, 224 + .read = seq_read, 225 + .llseek = seq_lseek, 226 + .release = xprt_info_release, 227 + }; 228 + 229 + int 230 + rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 231 + { 232 + int len, id; 233 + static atomic_t cur_id; 234 + char name[9]; /* 8 hex digits + NULL term */ 235 + 236 + id = (unsigned int)atomic_inc_return(&cur_id); 237 + 238 + len = snprintf(name, sizeof(name), "%x", id); 239 + if (len >= sizeof(name)) 240 + return -EINVAL; 241 + 242 + /* make the per-client dir */ 243 + xprt->debugfs = debugfs_create_dir(name, rpc_xprt_dir); 244 + if (!xprt->debugfs) 245 + return -ENOMEM; 246 + 247 + /* make tasks file */ 248 + if (!debugfs_create_file("info", S_IFREG | S_IRUSR, xprt->debugfs, 249 + xprt, &xprt_info_fops)) { 250 + debugfs_remove_recursive(xprt->debugfs); 251 + xprt->debugfs = NULL; 252 + return -ENOMEM; 253 + } 254 + 255 + return 0; 256 + } 257 + 258 + void 259 + rpc_xprt_debugfs_unregister(struct rpc_xprt *xprt) 260 + { 261 + debugfs_remove_recursive(xprt->debugfs); 262 + xprt->debugfs = NULL; 263 + } 264 + 265 + void __exit 266 + sunrpc_debugfs_exit(void) 267 + { 268 + debugfs_remove_recursive(topdir); 269 + } 270 + 271 + int __init 272 + sunrpc_debugfs_init(void) 273 + { 274 + topdir = debugfs_create_dir("sunrpc", NULL); 275 + if (!topdir) 276 + goto out; 277 + 278 + rpc_clnt_dir = debugfs_create_dir("rpc_clnt", topdir); 279 + if (!rpc_clnt_dir) 280 + goto out_remove; 281 + 282 + rpc_xprt_dir = debugfs_create_dir("rpc_xprt", topdir); 283 + if (!rpc_xprt_dir) 284 + goto out_remove; 285 + 286 + return 0; 287 + out_remove: 288 + debugfs_remove_recursive(topdir); 289 + topdir = NULL; 290 + out: 291 + return -ENOMEM; 292 
+ }
+1 -1
net/sunrpc/rpcb_clnt.c
··· 32 32 33 33 #include "netns.h" 34 34 35 - #ifdef RPC_DEBUG 35 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 36 36 # define RPCDBG_FACILITY RPCDBG_BIND 37 37 #endif 38 38
+2 -2
net/sunrpc/sched.c
··· 24 24 25 25 #include "sunrpc.h" 26 26 27 - #ifdef RPC_DEBUG 27 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 28 28 #define RPCDBG_FACILITY RPCDBG_SCHED 29 29 #endif 30 30 ··· 258 258 return 0; 259 259 } 260 260 261 - #if defined(RPC_DEBUG) || defined(RPC_TRACEPOINTS) 261 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) || IS_ENABLED(CONFIG_TRACEPOINTS) 262 262 static void rpc_task_set_debuginfo(struct rpc_task *task) 263 263 { 264 264 static atomic_t rpc_pid;
+16 -5
net/sunrpc/stats.c
··· 116 116 */ 117 117 struct rpc_iostats *rpc_alloc_iostats(struct rpc_clnt *clnt) 118 118 { 119 - return kcalloc(clnt->cl_maxproc, sizeof(struct rpc_iostats), GFP_KERNEL); 119 + struct rpc_iostats *stats; 120 + int i; 121 + 122 + stats = kcalloc(clnt->cl_maxproc, sizeof(*stats), GFP_KERNEL); 123 + if (stats) { 124 + for (i = 0; i < clnt->cl_maxproc; i++) 125 + spin_lock_init(&stats[i].om_lock); 126 + } 127 + return stats; 120 128 } 121 129 EXPORT_SYMBOL_GPL(rpc_alloc_iostats); 122 130 ··· 143 135 * rpc_count_iostats - tally up per-task stats 144 136 * @task: completed rpc_task 145 137 * @stats: array of stat structures 146 - * 147 - * Relies on the caller for serialization. 148 138 */ 149 139 void rpc_count_iostats(const struct rpc_task *task, struct rpc_iostats *stats) 150 140 { 151 141 struct rpc_rqst *req = task->tk_rqstp; 152 142 struct rpc_iostats *op_metrics; 153 - ktime_t delta; 143 + ktime_t delta, now; 154 144 155 145 if (!stats || !req) 156 146 return; 157 147 148 + now = ktime_get(); 158 149 op_metrics = &stats[task->tk_msg.rpc_proc->p_statidx]; 150 + 151 + spin_lock(&op_metrics->om_lock); 159 152 160 153 op_metrics->om_ops++; 161 154 op_metrics->om_ntrans += req->rq_ntrans; ··· 170 161 171 162 op_metrics->om_rtt = ktime_add(op_metrics->om_rtt, req->rq_rtt); 172 163 173 - delta = ktime_sub(ktime_get(), task->tk_start); 164 + delta = ktime_sub(now, task->tk_start); 174 165 op_metrics->om_execute = ktime_add(op_metrics->om_execute, delta); 166 + 167 + spin_unlock(&op_metrics->om_lock); 175 168 } 176 169 EXPORT_SYMBOL_GPL(rpc_count_iostats); 177 170
+10 -2
net/sunrpc/sunrpc_syms.c
··· 97 97 err = register_rpc_pipefs(); 98 98 if (err) 99 99 goto out4; 100 - #ifdef RPC_DEBUG 100 + 101 + err = sunrpc_debugfs_init(); 102 + if (err) 103 + goto out5; 104 + 105 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 101 106 rpc_register_sysctl(); 102 107 #endif 103 108 svc_init_xprt_sock(); /* svc sock transport */ 104 109 init_socket_xprt(); /* clnt sock transport */ 105 110 return 0; 106 111 112 + out5: 113 + unregister_rpc_pipefs(); 107 114 out4: 108 115 unregister_pernet_subsys(&sunrpc_net_ops); 109 116 out3: ··· 127 120 rpcauth_remove_module(); 128 121 cleanup_socket_xprt(); 129 122 svc_cleanup_xprt_sock(); 123 + sunrpc_debugfs_exit(); 130 124 unregister_rpc_pipefs(); 131 125 rpc_destroy_mempool(); 132 126 unregister_pernet_subsys(&sunrpc_net_ops); 133 - #ifdef RPC_DEBUG 127 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 134 128 rpc_unregister_sysctl(); 135 129 #endif 136 130 rcu_barrier(); /* Wait for completion of call_rcu()'s */
+13 -10
net/sunrpc/svc.c
··· 28 28 #include <linux/sunrpc/clnt.h> 29 29 #include <linux/sunrpc/bc_xprt.h> 30 30 31 + #include <trace/events/sunrpc.h> 32 + 31 33 #define RPCDBG_FACILITY RPCDBG_SVCDSP 32 34 33 35 static void svc_unregister(const struct svc_serv *serv, struct net *net); ··· 1044 1042 /* 1045 1043 * dprintk the given error with the address of the client that caused it. 1046 1044 */ 1047 - #ifdef RPC_DEBUG 1045 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 1048 1046 static __printf(2, 3) 1049 1047 void svc_printk(struct svc_rqst *rqstp, const char *fmt, ...) 1050 1048 { ··· 1318 1316 rqstp->rq_res.tail[0].iov_base = NULL; 1319 1317 rqstp->rq_res.tail[0].iov_len = 0; 1320 1318 1321 - rqstp->rq_xid = svc_getu32(argv); 1322 - 1323 1319 dir = svc_getnl(argv); 1324 1320 if (dir != 0) { 1325 1321 /* direction != CALL */ 1326 1322 svc_printk(rqstp, "bad direction %d, dropping request\n", dir); 1327 1323 serv->sv_stats->rpcbadfmt++; 1328 - svc_drop(rqstp); 1329 - return 0; 1324 + goto out_drop; 1330 1325 } 1331 1326 1332 1327 /* Returns 1 for send, 0 for drop */ 1333 - if (svc_process_common(rqstp, argv, resv)) 1334 - return svc_send(rqstp); 1335 - else { 1336 - svc_drop(rqstp); 1337 - return 0; 1328 + if (likely(svc_process_common(rqstp, argv, resv))) { 1329 + int ret = svc_send(rqstp); 1330 + 1331 + trace_svc_process(rqstp, ret); 1332 + return ret; 1338 1333 } 1334 + out_drop: 1335 + trace_svc_process(rqstp, 0); 1336 + svc_drop(rqstp); 1337 + return 0; 1339 1338 } 1340 1339 1341 1340 #if defined(CONFIG_SUNRPC_BACKCHANNEL)
+21 -10
net/sunrpc/svc_xprt.c
··· 15 15 #include <linux/sunrpc/svcsock.h> 16 16 #include <linux/sunrpc/xprt.h> 17 17 #include <linux/module.h> 18 + #include <trace/events/sunrpc.h> 18 19 19 20 #define RPCDBG_FACILITY RPCDBG_SVCXPRT 20 21 ··· 774 773 775 774 err = svc_alloc_arg(rqstp); 776 775 if (err) 777 - return err; 776 + goto out; 778 777 779 778 try_to_freeze(); 780 779 cond_resched(); 780 + err = -EINTR; 781 781 if (signalled() || kthread_should_stop()) 782 - return -EINTR; 782 + goto out; 783 783 784 784 xprt = svc_get_next_xprt(rqstp, timeout); 785 - if (IS_ERR(xprt)) 786 - return PTR_ERR(xprt); 785 + if (IS_ERR(xprt)) { 786 + err = PTR_ERR(xprt); 787 + goto out; 788 + } 787 789 788 790 len = svc_handle_xprt(rqstp, xprt); 789 791 790 792 /* No data, incomplete (TCP) read, or accept() */ 793 + err = -EAGAIN; 791 794 if (len <= 0) 792 - goto out; 795 + goto out_release; 793 796 794 797 clear_bit(XPT_OLD, &xprt->xpt_flags); 795 798 796 799 rqstp->rq_secure = xprt->xpt_ops->xpo_secure_port(rqstp); 797 800 rqstp->rq_chandle.defer = svc_defer; 801 + rqstp->rq_xid = svc_getu32(&rqstp->rq_arg.head[0]); 798 802 799 803 if (serv->sv_stats) 800 804 serv->sv_stats->netcnt++; 805 + trace_svc_recv(rqstp, len); 801 806 return len; 802 - out: 807 + out_release: 803 808 rqstp->rq_res.len = 0; 804 809 svc_xprt_release(rqstp); 805 - return -EAGAIN; 810 + out: 811 + trace_svc_recv(rqstp, err); 812 + return err; 806 813 } 807 814 EXPORT_SYMBOL_GPL(svc_recv); 808 815 ··· 830 821 int svc_send(struct svc_rqst *rqstp) 831 822 { 832 823 struct svc_xprt *xprt; 833 - int len; 824 + int len = -EFAULT; 834 825 struct xdr_buf *xb; 835 826 836 827 xprt = rqstp->rq_xprt; 837 828 if (!xprt) 838 - return -EFAULT; 829 + goto out; 839 830 840 831 /* release the receive skb before sending the reply */ 841 832 rqstp->rq_xprt->xpt_ops->xpo_release_rqst(rqstp); ··· 858 849 svc_xprt_release(rqstp); 859 850 860 851 if (len == -ECONNREFUSED || len == -ENOTCONN || len == -EAGAIN) 861 - return 0; 852 + len = 0; 853 + out: 854 + trace_svc_send(rqstp, len); 862 855 return len; 863 856 } 864 857
+1 -1
net/sunrpc/sysctl.c
··· 37 37 unsigned int nlm_debug; 38 38 EXPORT_SYMBOL_GPL(nlm_debug); 39 39 40 - #ifdef RPC_DEBUG 40 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 41 41 42 42 static struct ctl_table_header *sunrpc_table_header; 43 43 static struct ctl_table sunrpc_table[];
+17 -2
net/sunrpc/xprt.c
··· 49 49 #include <linux/sunrpc/metrics.h> 50 50 #include <linux/sunrpc/bc_xprt.h> 51 51 52 + #include <trace/events/sunrpc.h> 53 + 52 54 #include "sunrpc.h" 53 55 54 56 /* 55 57 * Local variables 56 58 */ 57 59 58 - #ifdef RPC_DEBUG 60 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 59 61 # define RPCDBG_FACILITY RPCDBG_XPRT 60 62 #endif 61 63 ··· 774 772 struct rpc_rqst *entry; 775 773 776 774 list_for_each_entry(entry, &xprt->recv, rq_list) 777 - if (entry->rq_xid == xid) 775 + if (entry->rq_xid == xid) { 776 + trace_xprt_lookup_rqst(xprt, xid, 0); 778 777 return entry; 778 + } 779 779 780 780 dprintk("RPC: xprt_lookup_rqst did not find xid %08x\n", 781 781 ntohl(xid)); 782 + trace_xprt_lookup_rqst(xprt, xid, -ENOENT); 782 783 xprt->stat.bad_xids++; 783 784 return NULL; 784 785 } ··· 815 810 816 811 dprintk("RPC: %5u xid %08x complete (%d bytes received)\n", 817 812 task->tk_pid, ntohl(req->rq_xid), copied); 813 + trace_xprt_complete_rqst(xprt, req->rq_xid, copied); 818 814 819 815 xprt->stat.recvs++; 820 816 req->rq_rtt = ktime_sub(ktime_get(), req->rq_xtime); ··· 932 926 933 927 req->rq_xtime = ktime_get(); 934 928 status = xprt->ops->send_request(task); 929 + trace_xprt_transmit(xprt, req->rq_xid, status); 935 930 if (status != 0) { 936 931 task->tk_status = status; 937 932 return; ··· 1303 1296 */ 1304 1297 struct rpc_xprt *xprt_create_transport(struct xprt_create *args) 1305 1298 { 1299 + int err; 1306 1300 struct rpc_xprt *xprt; 1307 1301 struct xprt_class *t; 1308 1302 ··· 1344 1336 return ERR_PTR(-ENOMEM); 1345 1337 } 1346 1338 1339 + err = rpc_xprt_debugfs_register(xprt); 1340 + if (err) { 1341 + xprt_destroy(xprt); 1342 + return ERR_PTR(err); 1343 + } 1344 + 1347 1345 dprintk("RPC: created transport %p with %u slots\n", xprt, 1348 1346 xprt->max_reqs); 1349 1347 out: ··· 1366 1352 dprintk("RPC: destroying transport %p\n", xprt); 1367 1353 del_timer_sync(&xprt->timer); 1368 1354 1355 + rpc_xprt_debugfs_unregister(xprt); 1369 1356 rpc_destroy_wait_queue(&xprt->binding); 1370 1357 rpc_destroy_wait_queue(&xprt->pending); 1371 1358 rpc_destroy_wait_queue(&xprt->sending);
+2 -2
net/sunrpc/xprtrdma/rpc_rdma.c
··· 49 49 50 50 #include <linux/highmem.h> 51 51 52 - #ifdef RPC_DEBUG 52 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 53 53 # define RPCDBG_FACILITY RPCDBG_TRANS 54 54 #endif 55 55 56 - #ifdef RPC_DEBUG 56 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 57 57 static const char transfertypes[][12] = { 58 58 "pure inline", /* no chunks */ 59 59 " read chunk", /* some argument via rdma read */
+6 -6
net/sunrpc/xprtrdma/transport.c
··· 55 55 56 56 #include "xprt_rdma.h" 57 57 58 - #ifdef RPC_DEBUG 58 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 59 59 # define RPCDBG_FACILITY RPCDBG_TRANS 60 60 #endif 61 61 ··· 73 73 static unsigned int xprt_rdma_max_inline_write = RPCRDMA_DEF_INLINE; 74 74 static unsigned int xprt_rdma_inline_write_padding; 75 75 static unsigned int xprt_rdma_memreg_strategy = RPCRDMA_FRMR; 76 - int xprt_rdma_pad_optimize = 0; 76 + int xprt_rdma_pad_optimize = 1; 77 77 78 - #ifdef RPC_DEBUG 78 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 79 79 80 80 static unsigned int min_slot_table_size = RPCRDMA_MIN_SLOT_TABLE; 81 81 static unsigned int max_slot_table_size = RPCRDMA_MAX_SLOT_TABLE; ··· 599 599 600 600 if (req->rl_niovs == 0) 601 601 rc = rpcrdma_marshal_req(rqst); 602 - else if (r_xprt->rx_ia.ri_memreg_strategy == RPCRDMA_FRMR) 602 + else if (r_xprt->rx_ia.ri_memreg_strategy != RPCRDMA_ALLPHYSICAL) 603 603 rc = rpcrdma_marshal_chunks(rqst, 0); 604 604 if (rc < 0) 605 605 goto failed_marshal; ··· 705 705 int rc; 706 706 707 707 dprintk("RPCRDMA Module Removed, deregister RPC RDMA transport\n"); 708 - #ifdef RPC_DEBUG 708 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 709 709 if (sunrpc_table_header) { 710 710 unregister_sysctl_table(sunrpc_table_header); 711 711 sunrpc_table_header = NULL; ··· 736 736 dprintk("\tPadding %d\n\tMemreg %d\n", 737 737 xprt_rdma_inline_write_padding, xprt_rdma_memreg_strategy); 738 738 739 - #ifdef RPC_DEBUG 739 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 740 740 if (!sunrpc_table_header) 741 741 sunrpc_table_header = register_sysctl_table(sunrpc_table); 742 742 #endif
+103 -19
net/sunrpc/xprtrdma/verbs.c
··· 57 57 * Globals/Macros 58 58 */ 59 59 60 - #ifdef RPC_DEBUG 60 + #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 61 61 # define RPCDBG_FACILITY RPCDBG_TRANS 62 62 #endif 63 63 64 64 static void rpcrdma_reset_frmrs(struct rpcrdma_ia *); 65 + static void rpcrdma_reset_fmrs(struct rpcrdma_ia *); 65 66 66 67 /* 67 68 * internal functions ··· 106 105 107 106 static DECLARE_TASKLET(rpcrdma_tasklet_g, rpcrdma_run_tasklet, 0UL); 108 107 108 + static const char * const async_event[] = { 109 + "CQ error", 110 + "QP fatal error", 111 + "QP request error", 112 + "QP access error", 113 + "communication established", 114 + "send queue drained", 115 + "path migration successful", 116 + "path mig error", 117 + "device fatal error", 118 + "port active", 119 + "port error", 120 + "LID change", 121 + "P_key change", 122 + "SM change", 123 + "SRQ error", 124 + "SRQ limit reached", 125 + "last WQE reached", 126 + "client reregister", 127 + "GID change", 128 + }; 129 + 130 + #define ASYNC_MSG(status) \ 131 + ((status) < ARRAY_SIZE(async_event) ? 
\ 132 + async_event[(status)] : "unknown async error") 133 + 134 + static void 135 + rpcrdma_schedule_tasklet(struct list_head *sched_list) 136 + { 137 + unsigned long flags; 138 + 139 + spin_lock_irqsave(&rpcrdma_tk_lock_g, flags); 140 + list_splice_tail(sched_list, &rpcrdma_tasklets_g); 141 + spin_unlock_irqrestore(&rpcrdma_tk_lock_g, flags); 142 + tasklet_schedule(&rpcrdma_tasklet_g); 143 + } 144 + 109 145 static void 110 146 rpcrdma_qp_async_error_upcall(struct ib_event *event, void *context) 111 147 { 112 148 struct rpcrdma_ep *ep = context; 113 149 114 - dprintk("RPC: %s: QP error %X on device %s ep %p\n", 115 - __func__, event->event, event->device->name, context); 150 + pr_err("RPC: %s: %s on device %s ep %p\n", 151 + __func__, ASYNC_MSG(event->event), 152 + event->device->name, context); 116 153 if (ep->rep_connected == 1) { 117 154 ep->rep_connected = -EIO; 118 155 ep->rep_func(ep); ··· 163 124 { 164 125 struct rpcrdma_ep *ep = context; 165 126 166 - dprintk("RPC: %s: CQ error %X on device %s ep %p\n", 167 - __func__, event->event, event->device->name, context); 127 + pr_err("RPC: %s: %s on device %s ep %p\n", 128 + __func__, ASYNC_MSG(event->event), 129 + event->device->name, context); 168 130 if (ep->rep_connected == 1) { 169 131 ep->rep_connected = -EIO; 170 132 ep->rep_func(ep); ··· 283 243 struct list_head sched_list; 284 244 struct ib_wc *wcs; 285 245 int budget, count, rc; 286 - unsigned long flags; 287 246 288 247 INIT_LIST_HEAD(&sched_list); 289 248 budget = RPCRDMA_WC_BUDGET / RPCRDMA_POLLSIZE; ··· 300 261 rc = 0; 301 262 302 263 out_schedule: 303 - spin_lock_irqsave(&rpcrdma_tk_lock_g, flags); 304 - list_splice_tail(&sched_list, &rpcrdma_tasklets_g); 305 - spin_unlock_irqrestore(&rpcrdma_tk_lock_g, flags); 306 - tasklet_schedule(&rpcrdma_tasklet_g); 264 + rpcrdma_schedule_tasklet(&sched_list); 307 265 return rc; 308 266 } 309 267 ··· 345 309 static void 346 310 rpcrdma_flush_cqs(struct rpcrdma_ep *ep) 347 311 { 348 - 
rpcrdma_recvcq_upcall(ep->rep_attr.recv_cq, ep);
349 -         rpcrdma_sendcq_upcall(ep->rep_attr.send_cq, ep);
312 +         struct ib_wc wc;
313 +         LIST_HEAD(sched_list);
314 +
315 +         while (ib_poll_cq(ep->rep_attr.recv_cq, 1, &wc) > 0)
316 +                 rpcrdma_recvcq_process_wc(&wc, &sched_list);
317 +         if (!list_empty(&sched_list))
318 +                 rpcrdma_schedule_tasklet(&sched_list);
319 +         while (ib_poll_cq(ep->rep_attr.send_cq, 1, &wc) > 0)
320 +                 rpcrdma_sendcq_process_wc(&wc);
350 321 }
351 322
352 -     #ifdef RPC_DEBUG
323 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
353 324 static const char * const conn[] = {
354 325         "address resolved",
355 326         "address error",
···
387 344         struct rpcrdma_xprt *xprt = id->context;
388 345         struct rpcrdma_ia *ia = &xprt->rx_ia;
389 346         struct rpcrdma_ep *ep = &xprt->rx_ep;
390 -     #ifdef RPC_DEBUG
347 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
391 348         struct sockaddr_in *addr = (struct sockaddr_in *) &ep->rep_remote_addr;
392 349 #endif
393 350         struct ib_qp_attr attr;
···
451 408                 break;
452 409         }
453 410
454 -     #ifdef RPC_DEBUG
411 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
455 412         if (connstate == 1) {
456 413                 int ird = attr.max_dest_rd_atomic;
457 414                 int tird = ep->rep_remote_cma.responder_resources;
···
776 733
777 734         /* set trigger for requesting send completion */
778 735         ep->rep_cqinit = ep->rep_attr.cap.max_send_wr/2 - 1;
779 -         if (ep->rep_cqinit <= 2)
736 +         if (ep->rep_cqinit > RPCRDMA_MAX_UNSIGNALED_SENDS)
737 +                 ep->rep_cqinit = RPCRDMA_MAX_UNSIGNALED_SENDS;
738 +         else if (ep->rep_cqinit <= 2)
780 739                 ep->rep_cqinit = 0;
781 740         INIT_CQCOUNT(ep);
782 741         ep->rep_ia = ia;
···
911 866         rpcrdma_ep_disconnect(ep, ia);
912 867         rpcrdma_flush_cqs(ep);
913 868
914 -         if (ia->ri_memreg_strategy == RPCRDMA_FRMR)
869 +         switch (ia->ri_memreg_strategy) {
870 +         case RPCRDMA_FRMR:
915 871                 rpcrdma_reset_frmrs(ia);
872 +                 break;
873 +         case RPCRDMA_MTHCAFMR:
874 +                 rpcrdma_reset_fmrs(ia);
875 +                 break;
876 +         case RPCRDMA_ALLPHYSICAL:
877 +                 break;
878 +         default:
879 +                 rc = -EIO;
880 +                 goto out;
881 +         }
916 882
917 883         xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
918 884         id = rpcrdma_create_id(xprt, ia,
···
1341 1285         }
1342 1286
1343 1287         kfree(buf->rb_pool);
1288 + }
1289 +
1290 + /* After a disconnect, unmap all FMRs.
1291 +  *
1292 +  * This is invoked only in the transport connect worker in order
1293 +  * to serialize with rpcrdma_register_fmr_external().
1294 +  */
1295 + static void
1296 + rpcrdma_reset_fmrs(struct rpcrdma_ia *ia)
1297 + {
1298 +         struct rpcrdma_xprt *r_xprt =
1299 +                                 container_of(ia, struct rpcrdma_xprt, rx_ia);
1300 +         struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
1301 +         struct list_head *pos;
1302 +         struct rpcrdma_mw *r;
1303 +         LIST_HEAD(l);
1304 +         int rc;
1305 +
1306 +         list_for_each(pos, &buf->rb_all) {
1307 +                 r = list_entry(pos, struct rpcrdma_mw, mw_all);
1308 +
1309 +                 INIT_LIST_HEAD(&l);
1310 +                 list_add(&r->r.fmr->list, &l);
1311 +                 rc = ib_unmap_fmr(&l);
1312 +                 if (rc)
1313 +                         dprintk("RPC:       %s: ib_unmap_fmr failed %i\n",
1314 +                                 __func__, rc);
1315 +         }
1344 1316 }
1345 1317
1346 1318 /* After a disconnect, a flushed FAST_REG_MR can leave an FRMR in
···
2002 1918                 break;
2003 1919
2004 1920         default:
2005 -                 return -1;
1921 +                 return -EIO;
2006 1922         }
2007 1923         if (rc)
2008 -                 return -1;
1924 +                 return rc;
2009 1925
2010 1926         return nsegs;
2011 1927 }
+6
net/sunrpc/xprtrdma/xprt_rdma.h
···
97 97         struct ib_wc            rep_recv_wcs[RPCRDMA_POLLSIZE];
98 98 };
99 99
100 + /*
101 +  * Force a signaled SEND Work Request every so often,
102 +  * in case the provider needs to do some housekeeping.
103 +  */
104 + #define RPCRDMA_MAX_UNSIGNALED_SENDS    (32)
105 +
100 106 #define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)
101 107 #define DECR_CQCOUNT(ep) atomic_sub_return(1, &(ep)->rep_cqcount)
102 108
+13 -64
net/sunrpc/xprtsock.c
···
75 75  * someone else's file names!
76 76  */
77 77
78 -     #ifdef RPC_DEBUG
78 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
79 79
80 80 static unsigned int min_slot_table_size = RPC_MIN_SLOT_TABLE;
81 81 static unsigned int max_slot_table_size = RPC_MAX_SLOT_TABLE;
···
186 186  */
187 187 #define XS_IDLE_DISC_TO         (5U * 60 * HZ)
188 188
189 -     #ifdef RPC_DEBUG
189 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
190 190 # undef  RPC_DEBUG_DATA
191 191 # define RPCDBG_FACILITY        RPCDBG_TRANS
192 192 #endif
···
215 215         /* NOP */
216 216 }
217 217 #endif
218 -
219 - struct sock_xprt {
220 -         struct rpc_xprt         xprt;
221 -
222 -         /*
223 -          * Network layer
224 -          */
225 -         struct socket *         sock;
226 -         struct sock *           inet;
227 -
228 -         /*
229 -          * State of TCP reply receive
230 -          */
231 -         __be32                  tcp_fraghdr,
232 -                                 tcp_xid,
233 -                                 tcp_calldir;
234 -
235 -         u32                     tcp_offset,
236 -                                 tcp_reclen;
237 -
238 -         unsigned long           tcp_copied,
239 -                                 tcp_flags;
240 -
241 -         /*
242 -          * Connection of transports
243 -          */
244 -         struct delayed_work     connect_worker;
245 -         struct sockaddr_storage srcaddr;
246 -         unsigned short          srcport;
247 -
248 -         /*
249 -          * UDP socket buffer size parameters
250 -          */
251 -         size_t                  rcvsize,
252 -                                 sndsize;
253 -
254 -         /*
255 -          * Saved socket callback addresses
256 -          */
257 -         void                    (*old_data_ready)(struct sock *);
258 -         void                    (*old_state_change)(struct sock *);
259 -         void                    (*old_write_space)(struct sock *);
260 -         void                    (*old_error_report)(struct sock *);
261 - };
262 -
263 - /*
264 -  * TCP receive state flags
265 -  */
266 - #define TCP_RCV_LAST_FRAG       (1UL << 0)
267 - #define TCP_RCV_COPY_FRAGHDR    (1UL << 1)
268 - #define TCP_RCV_COPY_XID        (1UL << 2)
269 - #define TCP_RCV_COPY_DATA       (1UL << 3)
270 - #define TCP_RCV_READ_CALLDIR    (1UL << 4)
271 - #define TCP_RCV_COPY_CALLDIR    (1UL << 5)
272 -
273 - /*
274 -  * TCP RPC flags
275 -  */
276 - #define TCP_RPC_REPLY           (1UL << 6)
277 218
278 219 static inline struct rpc_xprt *xprt_from_sock(struct sock *sk)
279 220 {
···
1356 1415
1357 1416         dprintk("RPC:       xs_tcp_data_recv started\n");
1358 1417         do {
1418 +                 trace_xs_tcp_data_recv(transport);
1359 1419                 /* Read in a new fragment marker if necessary */
1360 1420                 /* Can we ever really expect to get completely empty fragments? */
1361 1421                 if (transport->tcp_flags & TCP_RCV_COPY_FRAGHDR) {
···
1381 1439                 /* Skip over any trailing bytes on short reads */
1382 1440                 xs_tcp_read_discard(transport, &desc);
1383 1441         } while (desc.count);
1442 +         trace_xs_tcp_data_recv(transport);
1384 1443         dprintk("RPC:       xs_tcp_data_recv done\n");
1385 1444         return len - desc.count;
1386 1445 }
···
1397 1454         struct rpc_xprt *xprt;
1398 1455         read_descriptor_t rd_desc;
1399 1456         int read;
1457 +         unsigned long total = 0;
1400 1458
1401 1459         dprintk("RPC:       xs_tcp_data_ready...\n");
1402 1460
1403 1461         read_lock_bh(&sk->sk_callback_lock);
1404 -         if (!(xprt = xprt_from_sock(sk)))
1462 +         if (!(xprt = xprt_from_sock(sk))) {
1463 +                 read = 0;
1405 1464                 goto out;
1465 +         }
1406 1466         /* Any data means we had a useful conversation, so
1407 1467          * the we don't need to delay the next reconnect
1408 1468          */
···
1417 1471         do {
1418 1472                 rd_desc.count = 65536;
1419 1473                 read = tcp_read_sock(sk, &rd_desc, xs_tcp_data_recv);
1474 +                 if (read > 0)
1475 +                         total += read;
1420 1476         } while (read > 0);
1421 1477 out:
1478 +         trace_xs_tcp_data_ready(xprt, read, total);
1422 1479         read_unlock_bh(&sk->sk_callback_lock);
1423 1480 }
1424 1481
···
2991 3042  */
2992 3043 int init_socket_xprt(void)
2993 3044 {
2994 -     #ifdef RPC_DEBUG
3045 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
2995 3046         if (!sunrpc_table_header)
2996 3047                 sunrpc_table_header = register_sysctl_table(sunrpc_table);
2997 3048 #endif
···
3010 3061  */
3011 3062 void cleanup_socket_xprt(void)
3012 3063 {
3013 -     #ifdef RPC_DEBUG
3064 +     #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
3014 3065         if (sunrpc_table_header) {
3015 3066                 unregister_sysctl_table(sunrpc_table_header);
3016 3067                 sunrpc_table_header = NULL;
+4 -4
sound/pci/hda/patch_realtek.c
···
4520 4520         [ALC269_FIXUP_HEADSET_MODE] = {
4521 4521                 .type = HDA_FIXUP_FUNC,
4522 4522                 .v.func = alc_fixup_headset_mode,
4523 +                 .chained = true,
4524 +                 .chain_id = ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED
4523 4525         },
4524 4526         [ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC] = {
4525 4527                 .type = HDA_FIXUP_FUNC,
···
4711 4709         [ALC255_FIXUP_HEADSET_MODE] = {
4712 4710                 .type = HDA_FIXUP_FUNC,
4713 4711                 .v.func = alc_fixup_headset_mode_alc255,
4712 +                 .chained = true,
4713 +                 .chain_id = ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED
4714 4714         },
4715 4715         [ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC] = {
4716 4716                 .type = HDA_FIXUP_FUNC,
···
4748 4744         [ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED] = {
4749 4745                 .type = HDA_FIXUP_FUNC,
4750 4746                 .v.func = alc_fixup_dell_wmi,
4751 -                 .chained_before = true,
4752 -                 .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
4753 4747         },
4754 4748         [ALC282_FIXUP_ASPIRE_V5_PINS] = {
4755 4749                 .type = HDA_FIXUP_PINS,
···
4785 4783         SND_PCI_QUIRK(0x1028, 0x05f4, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
4786 4784         SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
4787 4785         SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
4788 -         SND_PCI_QUIRK(0x1028, 0x0610, "Dell", ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED),
4789 4786         SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
4790 4787         SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
4791 -         SND_PCI_QUIRK(0x1028, 0x061f, "Dell", ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED),
4792 4788         SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK),
4793 4789         SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4794 4790         SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+1
sound/soc/codecs/cs42l51-i2c.c
···
46 46         .driver = {
47 47                 .name = "cs42l51",
48 48                 .owner = THIS_MODULE,
49 +                 .of_match_table = cs42l51_of_match,
49 50         },
50 51         .probe = cs42l51_i2c_probe,
51 52         .remove = cs42l51_i2c_remove,
+3 -1
sound/soc/codecs/cs42l51.c
···
558 558 }
559 559 EXPORT_SYMBOL_GPL(cs42l51_probe);
560 560
561 -     static const struct of_device_id cs42l51_of_match[] = {
561 +     const struct of_device_id cs42l51_of_match[] = {
562 562         { .compatible = "cirrus,cs42l51", },
563 563         { }
564 564 };
565 565 MODULE_DEVICE_TABLE(of, cs42l51_of_match);
566 + EXPORT_SYMBOL_GPL(cs42l51_of_match);
567 +
566 568 MODULE_AUTHOR("Arnaud Patard <arnaud.patard@rtp-net.org>");
567 569 MODULE_DESCRIPTION("Cirrus Logic CS42L51 ALSA SoC Codec Driver");
568 570 MODULE_LICENSE("GPL");
+1
sound/soc/codecs/cs42l51.h
···
22 22
23 23 extern const struct regmap_config cs42l51_regmap;
24 24 int cs42l51_probe(struct device *dev, struct regmap *regmap);
25 + extern const struct of_device_id cs42l51_of_match[];
25 26
26 27 #define CS42L51_CHIP_ID                 0x1B
27 28 #define CS42L51_CHIP_REV_A              0x00
+1 -1
sound/soc/codecs/es8328-i2c.c
···
19 19 #include "es8328.h"
20 20
21 21 static const struct i2c_device_id es8328_id[] = {
22 -         { "everest,es8328", 0 },
22 +         { "es8328", 0 },
23 23         { }
24 24 };
25 25 MODULE_DEVICE_TABLE(i2c, es8328_id);
+3 -3
sound/soc/codecs/max98090.c
···
1941 1941          * 0x02 (when master clk is 20MHz to 40MHz)..
1942 1942          * 0x03 (when master clk is 40MHz to 60MHz)..
1943 1943          */
1944 -         if ((freq >= 10000000) && (freq < 20000000)) {
1944 +         if ((freq >= 10000000) && (freq <= 20000000)) {
1945 1945                 snd_soc_write(codec, M98090_REG_SYSTEM_CLOCK,
1946 1946                         M98090_PSCLK_DIV1);
1947 -         } else if ((freq >= 20000000) && (freq < 40000000)) {
1947 +         } else if ((freq > 20000000) && (freq <= 40000000)) {
1948 1948                 snd_soc_write(codec, M98090_REG_SYSTEM_CLOCK,
1949 1949                         M98090_PSCLK_DIV2);
1950 -         } else if ((freq >= 40000000) && (freq < 60000000)) {
1950 +         } else if ((freq > 40000000) && (freq <= 60000000)) {
1951 1951                 snd_soc_write(codec, M98090_REG_SYSTEM_CLOCK,
1952 1952                         M98090_PSCLK_DIV4);
1953 1953         } else {
+2
sound/soc/codecs/rt5645.c
···
139 139         { 0x76, 0x000a },
140 140         { 0x77, 0x0c00 },
141 141         { 0x78, 0x0000 },
142 +         { 0x79, 0x0123 },
142 143         { 0x80, 0x0000 },
143 144         { 0x81, 0x0000 },
144 145         { 0x82, 0x0000 },
···
335 334         case RT5645_DMIC_CTRL2:
336 335         case RT5645_TDM_CTRL_1:
337 336         case RT5645_TDM_CTRL_2:
337 +         case RT5645_TDM_CTRL_3:
338 338         case RT5645_GLB_CLK:
339 339         case RT5645_PLL_CTRL1:
340 340         case RT5645_PLL_CTRL2:
+18 -18
sound/soc/codecs/rt5670.c
···
100 100         { 0x4c, 0x5380 },
101 101         { 0x4f, 0x0073 },
102 102         { 0x52, 0x00d3 },
103 -         { 0x53, 0xf0f0 },
103 +         { 0x53, 0xf000 },
104 104         { 0x61, 0x0000 },
105 105         { 0x62, 0x0001 },
106 106         { 0x63, 0x00c3 },
107 107         { 0x64, 0x0000 },
108 -         { 0x65, 0x0000 },
108 +         { 0x65, 0x0001 },
109 109         { 0x66, 0x0000 },
110 110         { 0x6f, 0x8000 },
111 111         { 0x70, 0x8000 },
112 112         { 0x71, 0x8000 },
113 113         { 0x72, 0x8000 },
114 -         { 0x73, 0x1110 },
114 +         { 0x73, 0x7770 },
115 115         { 0x74, 0x0e00 },
116 116         { 0x75, 0x1505 },
117 117         { 0x76, 0x0015 },
···
125 125         { 0x83, 0x0000 },
126 126         { 0x84, 0x0000 },
127 127         { 0x85, 0x0000 },
128 -         { 0x86, 0x0008 },
128 +         { 0x86, 0x0004 },
129 129         { 0x87, 0x0000 },
130 130         { 0x88, 0x0000 },
131 131         { 0x89, 0x0000 },
132 132         { 0x8a, 0x0000 },
133 133         { 0x8b, 0x0000 },
134 -         { 0x8c, 0x0007 },
134 +         { 0x8c, 0x0003 },
135 135         { 0x8d, 0x0000 },
136 136         { 0x8e, 0x0004 },
137 137         { 0x8f, 0x1100 },
138 138         { 0x90, 0x0646 },
139 139         { 0x91, 0x0c06 },
140 140         { 0x93, 0x0000 },
141 -         { 0x94, 0x0000 },
142 -         { 0x95, 0x0000 },
141 +         { 0x94, 0x1270 },
142 +         { 0x95, 0x1000 },
143 143         { 0x97, 0x0000 },
144 144         { 0x98, 0x0000 },
145 145         { 0x99, 0x0000 },
···
150 150         { 0x9e, 0x0400 },
151 151         { 0xae, 0x7000 },
152 152         { 0xaf, 0x0000 },
153 -         { 0xb0, 0x6000 },
153 +         { 0xb0, 0x7000 },
154 154         { 0xb1, 0x0000 },
155 155         { 0xb2, 0x0000 },
156 156         { 0xb3, 0x001f },
157 -         { 0xb4, 0x2206 },
157 +         { 0xb4, 0x220c },
158 158         { 0xb5, 0x1f00 },
159 159         { 0xb6, 0x0000 },
160 160         { 0xb7, 0x0000 },
···
171 171         { 0xcf, 0x1813 },
172 172         { 0xd0, 0x0690 },
173 173         { 0xd1, 0x1c17 },
174 -         { 0xd3, 0xb320 },
174 +         { 0xd3, 0xa220 },
175 175         { 0xd4, 0x0000 },
176 176         { 0xd6, 0x0400 },
177 177         { 0xd9, 0x0809 },
178 178         { 0xda, 0x0000 },
179 179         { 0xdb, 0x0001 },
180 180         { 0xdc, 0x0049 },
181 -         { 0xdd, 0x0009 },
181 +         { 0xdd, 0x0024 },
182 182         { 0xe6, 0x8000 },
183 183         { 0xe7, 0x0000 },
184 -         { 0xec, 0xb300 },
184 +         { 0xec, 0xa200 },
185 185         { 0xed, 0x0000 },
186 -         { 0xee, 0xb300 },
186 +         { 0xee, 0xa200 },
187 187         { 0xef, 0x0000 },
188 188         { 0xf8, 0x0000 },
189 189         { 0xf9, 0x0000 },
190 190         { 0xfa, 0x8010 },
191 191         { 0xfb, 0x0033 },
192 -         { 0xfc, 0x0080 },
192 +         { 0xfc, 0x0100 },
193 193 };
194 194
195 195 static bool rt5670_volatile_register(struct device *dev, unsigned int reg)
···
1877 1877         { "DAC1 MIXR", "DAC1 Switch", "DAC1 R Mux" },
1878 1878         { "DAC1 MIXR", NULL, "DAC Stereo1 Filter" },
1879 1879
1880 +         { "DAC Stereo1 Filter", NULL, "PLL1", is_sys_clk_from_pll },
1881 +         { "DAC Mono Left Filter", NULL, "PLL1", is_sys_clk_from_pll },
1882 +         { "DAC Mono Right Filter", NULL, "PLL1", is_sys_clk_from_pll },
1883 +
1880 1884         { "DAC MIX", NULL, "DAC1 MIXL" },
1881 1885         { "DAC MIX", NULL, "DAC1 MIXR" },
···
1930 1926
1931 1927         { "DAC L1", NULL, "DAC L1 Power" },
1932 1928         { "DAC L1", NULL, "Stereo DAC MIXL" },
1933 -         { "DAC L1", NULL, "PLL1", is_sys_clk_from_pll },
1934 1929         { "DAC R1", NULL, "DAC R1 Power" },
1935 1930         { "DAC R1", NULL, "Stereo DAC MIXR" },
1936 -         { "DAC R1", NULL, "PLL1", is_sys_clk_from_pll },
1937 1931         { "DAC L2", NULL, "Mono DAC MIXL" },
1938 -         { "DAC L2", NULL, "PLL1", is_sys_clk_from_pll },
1939 1932         { "DAC R2", NULL, "Mono DAC MIXR" },
1940 -         { "DAC R2", NULL, "PLL1", is_sys_clk_from_pll },
1941 1933
1942 1934         { "OUT MIXL", "BST1 Switch", "BST1" },
1943 1935         { "OUT MIXL", "INL Switch", "INL VOL" },
+1 -2
sound/soc/codecs/sgtl5000.c
···
1299 1299
1300 1300         /* enable small pop, introduce 400ms delay in turning off */
1301 1301         snd_soc_update_bits(codec, SGTL5000_CHIP_REF_CTRL,
1302 -                                 SGTL5000_SMALL_POP,
1303 -                                 SGTL5000_SMALL_POP);
1302 +                                 SGTL5000_SMALL_POP, 1);
1304 1303
1305 1304         /* disable short cut detector */
1306 1305         snd_soc_write(codec, SGTL5000_CHIP_SHORT_CTRL, 0);
+1 -1
sound/soc/codecs/sgtl5000.h
···
275 275 #define SGTL5000_BIAS_CTRL_MASK                 0x000e
276 276 #define SGTL5000_BIAS_CTRL_SHIFT                1
277 277 #define SGTL5000_BIAS_CTRL_WIDTH                3
278 -     #define SGTL5000_SMALL_POP                      0x0001
278 +     #define SGTL5000_SMALL_POP                      0
279 279
280 280 /*
281 281  * SGTL5000_CHIP_MIC_CTRL
+1
sound/soc/codecs/wm_adsp.c
···
1355 1355                 file, blocks, pos - firmware->size);
1356 1356
1357 1357 out_fw:
1358 +         regmap_async_complete(regmap);
1358 1359         release_firmware(firmware);
1359 1360         wm_adsp_buf_free(&buf_list);
1360 1361 out:
+26
sound/soc/fsl/fsl_asrc.c
···
684 684         }
685 685 }
686 686
687 + static struct reg_default fsl_asrc_reg[] = {
688 +         { REG_ASRCTR, 0x0000 }, { REG_ASRIER, 0x0000 },
689 +         { REG_ASRCNCR, 0x0000 }, { REG_ASRCFG, 0x0000 },
690 +         { REG_ASRCSR, 0x0000 }, { REG_ASRCDR1, 0x0000 },
691 +         { REG_ASRCDR2, 0x0000 }, { REG_ASRSTR, 0x0000 },
692 +         { REG_ASRRA, 0x0000 }, { REG_ASRRB, 0x0000 },
693 +         { REG_ASRRC, 0x0000 }, { REG_ASRPM1, 0x0000 },
694 +         { REG_ASRPM2, 0x0000 }, { REG_ASRPM3, 0x0000 },
695 +         { REG_ASRPM4, 0x0000 }, { REG_ASRPM5, 0x0000 },
696 +         { REG_ASRTFR1, 0x0000 }, { REG_ASRCCR, 0x0000 },
697 +         { REG_ASRDIA, 0x0000 }, { REG_ASRDOA, 0x0000 },
698 +         { REG_ASRDIB, 0x0000 }, { REG_ASRDOB, 0x0000 },
699 +         { REG_ASRDIC, 0x0000 }, { REG_ASRDOC, 0x0000 },
700 +         { REG_ASRIDRHA, 0x0000 }, { REG_ASRIDRLA, 0x0000 },
701 +         { REG_ASRIDRHB, 0x0000 }, { REG_ASRIDRLB, 0x0000 },
702 +         { REG_ASRIDRHC, 0x0000 }, { REG_ASRIDRLC, 0x0000 },
703 +         { REG_ASR76K, 0x0A47 }, { REG_ASR56K, 0x0DF3 },
704 +         { REG_ASRMCRA, 0x0000 }, { REG_ASRFSTA, 0x0000 },
705 +         { REG_ASRMCRB, 0x0000 }, { REG_ASRFSTB, 0x0000 },
706 +         { REG_ASRMCRC, 0x0000 }, { REG_ASRFSTC, 0x0000 },
707 +         { REG_ASRMCR1A, 0x0000 }, { REG_ASRMCR1B, 0x0000 },
708 +         { REG_ASRMCR1C, 0x0000 },
709 + };
710 +
687 711 static const struct regmap_config fsl_asrc_regmap_config = {
688 712         .reg_bits = 32,
689 713         .reg_stride = 4,
690 714         .val_bits = 32,
691 715
692 716         .max_register = REG_ASRMCR1C,
717 +         .reg_defaults = fsl_asrc_reg,
718 +         .num_reg_defaults = ARRAY_SIZE(fsl_asrc_reg),
693 719         .readable_reg = fsl_asrc_readable_reg,
694 720         .volatile_reg = fsl_asrc_volatile_reg,
695 721         .writeable_reg = fsl_asrc_writeable_reg,
+3 -1
sound/soc/rockchip/rockchip_i2s.c
···
154 154                 while (val) {
155 155                         regmap_read(i2s->regmap, I2S_CLR, &val);
156 156                         retry--;
157 -                         if (!retry)
157 +                         if (!retry) {
158 158                                 dev_warn(i2s->dev, "fail to clear\n");
159 +                                 break;
160 +                         }
159 161                 }
160 162         }
161 163 }
+1
sound/soc/samsung/snow.c
···
110 110         { .compatible = "google,snow-audio-max98095", },
111 111         {},
112 112 };
113 + MODULE_DEVICE_TABLE(of, snow_of_match);
113 114
114 115 static struct platform_driver snow_driver = {
115 116         .driver = {
+1 -2
sound/soc/sh/fsi.c
···
1711 1711 static struct snd_pcm_hardware fsi_pcm_hardware = {
1712 1712         .info =         SNDRV_PCM_INFO_INTERLEAVED      |
1713 1713                         SNDRV_PCM_INFO_MMAP             |
1714 -                         SNDRV_PCM_INFO_MMAP_VALID       |
1715 -                         SNDRV_PCM_INFO_PAUSE,
1714 +                         SNDRV_PCM_INFO_MMAP_VALID,
1716 1715         .buffer_bytes_max       = 64 * 1024,
1717 1716         .period_bytes_min       = 32,
1718 1717         .period_bytes_max       = 8192,
+1 -2
sound/soc/sh/rcar/core.c
···
886 886 static struct snd_pcm_hardware rsnd_pcm_hardware = {
887 887         .info =         SNDRV_PCM_INFO_INTERLEAVED      |
888 888                         SNDRV_PCM_INFO_MMAP             |
889 -                         SNDRV_PCM_INFO_MMAP_VALID       |
890 -                         SNDRV_PCM_INFO_PAUSE,
889 +                         SNDRV_PCM_INFO_MMAP_VALID,
891 890         .buffer_bytes_max       = 64 * 1024,
892 891         .period_bytes_min       = 32,
893 892         .period_bytes_max       = 8192,
+1 -1
sound/soc/soc-core.c
···
884 884         list_for_each_entry(component, &component_list, list) {
885 885                 if (dlc->of_node && component->dev->of_node != dlc->of_node)
886 886                         continue;
887 -                 if (dlc->name && strcmp(dev_name(component->dev), dlc->name))
887 +                 if (dlc->name && strcmp(component->name, dlc->name))
888 888                         continue;
889 889                 list_for_each_entry(dai, &component->dai_list, list) {
890 890                         if (dlc->dai_name && strcmp(dai->name, dlc->dai_name))
+56 -16
sound/soc/soc-pcm.c
···
1522 1522                 dpcm_init_runtime_hw(runtime, &cpu_dai_drv->capture);
1523 1523 }
1524 1524
1525 + static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd);
1526 +
1527 + /* Set FE's runtime_update state; the state is protected via PCM stream lock
1528 +  * for avoiding the race with trigger callback.
1529 +  * If the state is unset and a trigger is pending while the previous operation,
1530 +  * process the pending trigger action here.
1531 +  */
1532 + static void dpcm_set_fe_update_state(struct snd_soc_pcm_runtime *fe,
1533 +                                      int stream, enum snd_soc_dpcm_update state)
1534 + {
1535 +         struct snd_pcm_substream *substream =
1536 +                 snd_soc_dpcm_get_substream(fe, stream);
1537 +
1538 +         snd_pcm_stream_lock_irq(substream);
1539 +         if (state == SND_SOC_DPCM_UPDATE_NO && fe->dpcm[stream].trigger_pending) {
1540 +                 dpcm_fe_dai_do_trigger(substream,
1541 +                                        fe->dpcm[stream].trigger_pending - 1);
1542 +                 fe->dpcm[stream].trigger_pending = 0;
1543 +         }
1544 +         fe->dpcm[stream].runtime_update = state;
1545 +         snd_pcm_stream_unlock_irq(substream);
1546 + }
1547 +
1525 1548 static int dpcm_fe_dai_startup(struct snd_pcm_substream *fe_substream)
1526 1549 {
1527 1550         struct snd_soc_pcm_runtime *fe = fe_substream->private_data;
1528 1551         struct snd_pcm_runtime *runtime = fe_substream->runtime;
1529 1552         int stream = fe_substream->stream, ret = 0;
1530 1553
1531 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
1554 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
1532 1555
1533 1556         ret = dpcm_be_dai_startup(fe, fe_substream->stream);
1534 1557         if (ret < 0) {
···
1573 1550         dpcm_set_fe_runtime(fe_substream);
1574 1551         snd_pcm_limit_hw_rates(runtime);
1575 1552
1576 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
1553 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
1577 1554         return 0;
1578 1555
1579 1556 unwind:
1580 1557         dpcm_be_dai_startup_unwind(fe, fe_substream->stream);
1581 1558 be_err:
1582 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
1559 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
1583 1560         return ret;
1584 1561 }
1585 1562
···
1626 1603         struct snd_soc_pcm_runtime *fe = substream->private_data;
1627 1604         int stream = substream->stream;
1628 1605
1629 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
1606 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
1630 1607
1631 1608         /* shutdown the BEs */
1632 1609         dpcm_be_dai_shutdown(fe, substream->stream);
···
1640 1617         dpcm_dapm_stream_event(fe, stream, SND_SOC_DAPM_STREAM_STOP);
1641 1618
1642 1619         fe->dpcm[stream].state = SND_SOC_DPCM_STATE_CLOSE;
1643 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
1620 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
1644 1621         return 0;
1645 1622 }
1646 1623
···
1688 1665         int err, stream = substream->stream;
1689 1666
1690 1667         mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
1691 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
1668 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
1692 1669
1693 1670         dev_dbg(fe->dev, "ASoC: hw_free FE %s\n", fe->dai_link->name);
1694 1671
···
1703 1680         err = dpcm_be_dai_hw_free(fe, stream);
1704 1681
1705 1682         fe->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_FREE;
1706 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
1683 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
1707 1684
1708 1685         mutex_unlock(&fe->card->mutex);
1709 1686         return 0;
···
1796 1773         int ret, stream = substream->stream;
1797 1774
1798 1775         mutex_lock_nested(&fe->card->mutex, SND_SOC_CARD_CLASS_RUNTIME);
1799 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
1776 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
1800 1777
1801 1778         memcpy(&fe->dpcm[substream->stream].hw_params, params,
1802 1779                         sizeof(struct snd_pcm_hw_params));
···
1819 1796         fe->dpcm[stream].state = SND_SOC_DPCM_STATE_HW_PARAMS;
1820 1797
1821 1798 out:
1822 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
1799 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
1823 1800         mutex_unlock(&fe->card->mutex);
1824 1801         return ret;
1825 1802 }
···
1933 1910 }
1934 1911 EXPORT_SYMBOL_GPL(dpcm_be_dai_trigger);
1935 1912
1936 -     static int dpcm_fe_dai_trigger(struct snd_pcm_substream *substream, int cmd)
1913 +     static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
1937 1914 {
1938 1915         struct snd_soc_pcm_runtime *fe = substream->private_data;
1939 1916         int stream = substream->stream, ret;
···
2007 1984         return ret;
2008 1985 }
2009 1986
1987 + static int dpcm_fe_dai_trigger(struct snd_pcm_substream *substream, int cmd)
1988 + {
1989 +         struct snd_soc_pcm_runtime *fe = substream->private_data;
1990 +         int stream = substream->stream;
1991 +
1992 +         /* if FE's runtime_update is already set, we're in race;
1993 +          * process this trigger later at exit
1994 +          */
1995 +         if (fe->dpcm[stream].runtime_update != SND_SOC_DPCM_UPDATE_NO) {
1996 +                 fe->dpcm[stream].trigger_pending = cmd + 1;
1997 +                 return 0; /* delayed, assuming it's successful */
1998 +         }
1999 +
2000 +         /* we're alone, let's trigger */
2001 +         return dpcm_fe_dai_do_trigger(substream, cmd);
2002 + }
2003 +
2010 2004 int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
2011 2005 {
2012 2006         struct snd_soc_dpcm *dpcm;
···
2067 2027
2068 2028         dev_dbg(fe->dev, "ASoC: prepare FE %s\n", fe->dai_link->name);
2069 2029
2070 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
2030 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_FE);
2071 2031
2072 2032         /* there is no point preparing this FE if there are no BEs */
2073 2033         if (list_empty(&fe->dpcm[stream].be_clients)) {
···
2094 2054         fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;
2095 2055
2096 2056 out:
2097 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
2057 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
2098 2058         mutex_unlock(&fe->card->mutex);
2099 2059
2100 2060         return ret;
···
2241 2201 {
2242 2202         int ret;
2243 2203
2244 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_BE;
2204 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_BE);
2245 2205         ret = dpcm_run_update_startup(fe, stream);
2246 2206         if (ret < 0)
2247 2207                 dev_err(fe->dev, "ASoC: failed to startup some BEs\n");
2248 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
2208 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
2249 2209
2250 2210         return ret;
2251 2211 }
···
2254 2214 {
2255 2215         int ret;
2256 2216
2257 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_BE;
2217 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_BE);
2258 2218         ret = dpcm_run_update_shutdown(fe, stream);
2259 2219         if (ret < 0)
2260 2220                 dev_err(fe->dev, "ASoC: failed to shutdown some BEs\n");
2261 -         fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
2221 +         dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
2262 2222
2263 2223         return ret;
2264 2224 }
+4 -3
sound/usb/mixer.c
···
2033 2033         cval->res = 1;
2034 2034         cval->initialized = 1;
2035 2035
2036 -         if (desc->bDescriptorSubtype == UAC2_CLOCK_SELECTOR)
2037 -                 cval->control = UAC2_CX_CLOCK_SELECTOR;
2038 -         else
2036 +         if (state->mixer->protocol == UAC_VERSION_1)
2039 2037                 cval->control = 0;
2038 +         else /* UAC_VERSION_2 */
2039 +                 cval->control = (desc->bDescriptorSubtype == UAC2_CLOCK_SELECTOR) ?
2040 +                         UAC2_CX_CLOCK_SELECTOR : UAC2_SU_SELECTOR;
2040 2041
2041 2042         namelist = kmalloc(sizeof(char *) * desc->bNrInPins, GFP_KERNEL);
2042 2043         if (!namelist) {
+14
sound/usb/quirks.c
···
1146 1146         if ((le16_to_cpu(dev->descriptor.idVendor) == 0x23ba) &&
1147 1147             (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
1148 1148                 mdelay(20);
1149 +
1150 +         /* Marantz/Denon devices with USB DAC functionality need a delay
1151 +          * after each class compliant request
1152 +          */
1153 +         if ((le16_to_cpu(dev->descriptor.idVendor) == 0x154e) &&
1154 +             (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS) {
1155 +
1156 +                 switch (le16_to_cpu(dev->descriptor.idProduct)) {
1157 +                 case 0x3005: /* Marantz HD-DAC1 */
1158 +                 case 0x3006: /* Marantz SA-14S1 */
1159 +                         mdelay(20);
1160 +                         break;
1161 +                 }
1162 +         }
1149 1163 }
1150 1164
1151 1165 /*