Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.8-rc6 into usb-next

We want the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+1051 -667
+1
.mailmap
··· 88 88 Kenneth W Chen <kenneth.w.chen@intel.com> 89 89 Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com> 90 90 Koushik <raghavendra.koushik@neterion.com> 91 + Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com> 91 92 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com> 92 93 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> 93 94 Leonid I Ananiev <leonid.i.ananiev@intel.com>
+10 -6
Documentation/arm/CCN.txt
··· 18 18 directory provides configuration templates for all documented 19 19 events, that can be used with perf tool. For example "xp_valid_flit" 20 20 is an equivalent of "type=0x8,event=0x4". Other parameters must be 21 - explicitly specified. For events originating from device, "node" 22 - defines its index. All crosspoint events require "xp" (index), 23 - "port" (device port number) and "vc" (virtual channel ID) and 24 - "dir" (direction). Watchpoints (special "event" value 0xfe) also 25 - require comparator values ("cmp_l" and "cmp_h") and "mask", being 26 - index of the comparator mask. 21 + explicitly specified. 27 22 23 + For events originating from device, "node" defines its index. 24 + 25 + Crosspoint PMU events require "xp" (index), "bus" (bus number) 26 + and "vc" (virtual channel ID). 27 + 28 + Crosspoint watchpoint-based events (special "event" value 0xfe) 29 + require "xp" and "vc" as above plus "port" (device port index), 30 + "dir" (transmit/receive direction), comparator values ("cmp_l" 31 + and "cmp_h") and "mask", being index of the comparator mask. 28 32 Masks are defined separately from the event description 29 33 (due to limited number of the config values) in the "cmp_mask" 30 34 directory, with first 8 configurable by user and additional
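As a concrete illustration of the split described above (a sketch only: the PMU instance name "ccn" and the xp/bus/vc values are assumed for the example, not taken from this patch), an ordinary crosspoint event would now be requested as

    perf stat -a -e ccn/xp_valid_flit,xp=1,bus=0,vc=1/ sleep 1

while a watchpoint-based event would additionally carry the "port", "dir", "cmp_l", "cmp_h" and "mask" fields listed in the text above.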
+1 -1
Documentation/cpu-freq/cpufreq-stats.txt
··· 103 103 Power management options (ACPI, APM) ---> 104 104 CPU Frequency scaling ---> 105 105 [*] CPU Frequency scaling 106 - <*> CPU frequency translation statistics 106 + [*] CPU frequency translation statistics 107 107 [*] CPU frequency translation statistics details 108 108 109 109
+5
Documentation/i2c/slave-interface
··· 145 145 146 146 * Catch the slave interrupts and send appropriate i2c_slave_events to the backend. 147 147 148 + Note that most hardware supports being master _and_ slave on the same bus. So, 149 + if you extend a bus driver, please make sure that the driver supports that as 150 + well. In almost all cases, slave support does not need to disable the master 151 + functionality. 152 + 148 153 Check the i2c-rcar driver as an example. 149 154 150 155
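To illustrate that note, below is a minimal sketch of how a bus driver's interrupt handler can forward slave traffic through i2c_slave_event() while leaving its master transfer path untouched. The foo_* names, register offsets and status bits are invented for illustration; only i2c_slave_event() and the I2C_SLAVE_* event codes are the real interface described in this document.

    #include <linux/bits.h>
    #include <linux/i2c.h>
    #include <linux/interrupt.h>
    #include <linux/io.h>

    /*
     * Hypothetical adapter driver state: the register offsets and status
     * bits used below are invented for illustration only.
     */
    struct foo_i2c {
    	void __iomem *base;
    	struct i2c_client *slave;	/* set by the adapter's reg_slave() hook */
    };

    /* Existing master-mode interrupt path, unchanged (not shown here). */
    static irqreturn_t foo_i2c_master_isr(struct foo_i2c *priv, u32 status);

    static irqreturn_t foo_i2c_isr(int irq, void *dev_id)
    {
    	struct foo_i2c *priv = dev_id;
    	u32 status = readl(priv->base + 0x10);	/* hypothetical status reg */
    	u8 value;

    	if (priv->slave && (status & BIT(0))) {	/* addressed for a write */
    		i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value);
    		return IRQ_HANDLED;
    	}

    	if (priv->slave && (status & BIT(1))) {	/* data byte arrived */
    		value = readl(priv->base + 0x14);	/* hypothetical RX reg */
    		i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_RECEIVED, &value);
    		return IRQ_HANDLED;
    	}

    	if (priv->slave && (status & BIT(2))) {	/* STOP condition seen */
    		i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
    		return IRQ_HANDLED;
    	}

    	/* Not a slave event: fall through to the untouched master path. */
    	return foo_i2c_master_isr(priv, status);
    }

The i2c-rcar driver referenced above handles its slave interrupts in a similar spirit, checking for a registered slave client before falling back to master-mode interrupt handling.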
+21 -7
MAINTAINERS
··· 1624 1624 1625 1625 ARM/SAMSUNG EXYNOS ARM ARCHITECTURES 1626 1626 M: Kukjin Kim <kgene@kernel.org> 1627 - M: Krzysztof Kozlowski <k.kozlowski@samsung.com> 1627 + M: Krzysztof Kozlowski <krzk@kernel.org> 1628 1628 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1629 1629 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) 1630 1630 S: Maintained ··· 1644 1644 F: drivers/*/*s5pv210* 1645 1645 F: drivers/memory/samsung/* 1646 1646 F: drivers/soc/samsung/* 1647 - F: drivers/spi/spi-s3c* 1648 1647 F: Documentation/arm/Samsung/ 1649 1648 F: Documentation/devicetree/bindings/arm/samsung/ 1650 1649 F: Documentation/devicetree/bindings/sram/samsung-sram.txt ··· 1831 1832 ARM/UNIPHIER ARCHITECTURE 1832 1833 M: Masahiro Yamada <yamada.masahiro@socionext.com> 1833 1834 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1835 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-uniphier.git 1834 1836 S: Maintained 1835 1837 F: arch/arm/boot/dts/uniphier* 1836 1838 F: arch/arm/include/asm/hardware/cache-uniphier.h ··· 7465 7465 F: sound/soc/codecs/max9860.* 7466 7466 7467 7467 MAXIM MUIC CHARGER DRIVERS FOR EXYNOS BASED BOARDS 7468 - M: Krzysztof Kozlowski <k.kozlowski@samsung.com> 7468 + M: Krzysztof Kozlowski <krzk@kernel.org> 7469 + M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> 7469 7470 L: linux-pm@vger.kernel.org 7470 7471 S: Supported 7471 7472 F: drivers/power/max14577_charger.c ··· 7482 7481 7483 7482 MAXIM PMIC AND MUIC DRIVERS FOR EXYNOS BASED BOARDS 7484 7483 M: Chanwoo Choi <cw00.choi@samsung.com> 7485 - M: Krzysztof Kozlowski <k.kozlowski@samsung.com> 7484 + M: Krzysztof Kozlowski <krzk@kernel.org> 7485 + M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> 7486 7486 L: linux-kernel@vger.kernel.org 7487 7487 S: Supported 7488 7488 F: drivers/*/max14577*.c ··· 9249 9247 9250 9248 PIN CONTROLLER - SAMSUNG 9251 9249 M: Tomasz Figa <tomasz.figa@gmail.com> 9252 - M: Krzysztof Kozlowski <k.kozlowski@samsung.com> 9250 + M: Krzysztof Kozlowski <krzk@kernel.org> 9253 9251 M: Sylwester Nawrocki <s.nawrocki@samsung.com> 9254 9252 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 9255 9253 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) ··· 10182 10180 F: drivers/platform/x86/samsung-laptop.c 10183 10181 10184 10182 SAMSUNG AUDIO (ASoC) DRIVERS 10185 - M: Krzysztof Kozlowski <k.kozlowski@samsung.com> 10183 + M: Krzysztof Kozlowski <krzk@kernel.org> 10186 10184 M: Sangbeom Kim <sbkim73@samsung.com> 10187 10185 M: Sylwester Nawrocki <s.nawrocki@samsung.com> 10188 10186 L: alsa-devel@alsa-project.org (moderated for non-subscribers) ··· 10197 10195 10198 10196 SAMSUNG MULTIFUNCTION PMIC DEVICE DRIVERS 10199 10197 M: Sangbeom Kim <sbkim73@samsung.com> 10200 - M: Krzysztof Kozlowski <k.kozlowski@samsung.com> 10198 + M: Krzysztof Kozlowski <krzk@kernel.org> 10199 + M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> 10201 10200 L: linux-kernel@vger.kernel.org 10202 10201 L: linux-samsung-soc@vger.kernel.org 10203 10202 S: Supported ··· 10256 10253 S: Supported 10257 10254 L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) 10258 10255 F: drivers/clk/samsung/ 10256 + 10257 + SAMSUNG SPI DRIVERS 10258 + M: Kukjin Kim <kgene@kernel.org> 10259 + M: Krzysztof Kozlowski <krzk@kernel.org> 10260 + M: Andi Shyti <andi.shyti@samsung.com> 10261 + L: linux-spi@vger.kernel.org 10262 + L: linux-samsung-soc@vger.kernel.org (moderated for non-subscribers) 10263 + S: Maintained 10264 + F: Documentation/devicetree/bindings/spi/spi-samsung.txt 10265 + F: drivers/spi/spi-s3c* 10266 + F: include/linux/platform_data/spi-s3c64xx.h 10259 10267 10260 10268 SAMSUNG SXGBE DRIVERS 10261 10269 M: Byungho An <bh74.an@samsung.com>
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 8 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc6 5 5 NAME = Psychotic Stoned Sheep 6 6 7 7 # *DOCUMENTATION*
-11
arch/Kconfig
··· 336 336 results in the system call being skipped immediately. 337 337 - seccomp syscall wired up 338 338 339 - For best performance, an arch should use seccomp_phase1 and 340 - seccomp_phase2 directly. It should call seccomp_phase1 for all 341 - syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not 342 - need to be called from a ptrace-safe context. It must then 343 - call seccomp_phase2 if seccomp_phase1 returns anything other 344 - than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP. 345 - 346 - As an additional optimization, an arch may provide seccomp_data 347 - directly to seccomp_phase1; this avoids multiple calls 348 - to the syscall_xyz helpers for every syscall. 349 - 350 339 config SECCOMP_FILTER 351 340 def_bool y 352 341 depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
+1 -1
arch/arm/boot/dts/am335x-baltos.dtsi
··· 226 226 227 227 #address-cells = <1>; 228 228 #size-cells = <1>; 229 - elm_id = <&elm>; 229 + ti,elm-id = <&elm>; 230 230 }; 231 231 }; 232 232
+1 -1
arch/arm/boot/dts/am335x-igep0033.dtsi
··· 161 161 162 162 #address-cells = <1>; 163 163 #size-cells = <1>; 164 - elm_id = <&elm>; 164 + ti,elm-id = <&elm>; 165 165 166 166 /* MTD partition table */ 167 167 partition@0 {
+1 -1
arch/arm/boot/dts/am335x-phycore-som.dtsi
··· 197 197 gpmc,wr-access-ns = <30>; 198 198 gpmc,wr-data-mux-bus-ns = <0>; 199 199 200 - elm_id = <&elm>; 200 + ti,elm-id = <&elm>; 201 201 202 202 #address-cells = <1>; 203 203 #size-cells = <1>;
+4 -4
arch/arm/boot/dts/armada-388-clearfog.dts
··· 390 390 391 391 port@0 { 392 392 reg = <0>; 393 - label = "lan1"; 393 + label = "lan5"; 394 394 }; 395 395 396 396 port@1 { 397 397 reg = <1>; 398 - label = "lan2"; 398 + label = "lan4"; 399 399 }; 400 400 401 401 port@2 { ··· 405 405 406 406 port@3 { 407 407 reg = <3>; 408 - label = "lan4"; 408 + label = "lan2"; 409 409 }; 410 410 411 411 port@4 { 412 412 reg = <4>; 413 - label = "lan5"; 413 + label = "lan1"; 414 414 }; 415 415 416 416 port@5 {
-3
arch/arm/boot/dts/exynos5410-odroidxu.dts
··· 447 447 samsung,dw-mshc-ciu-div = <3>; 448 448 samsung,dw-mshc-sdr-timing = <0 4>; 449 449 samsung,dw-mshc-ddr-timing = <0 2>; 450 - samsung,dw-mshc-hs400-timing = <0 2>; 451 - samsung,read-strobe-delay = <90>; 452 450 pinctrl-names = "default"; 453 451 pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus1 &sd0_bus4 &sd0_bus8 &sd0_cd>; 454 452 bus-width = <8>; 455 453 cap-mmc-highspeed; 456 454 mmc-hs200-1_8v; 457 - mmc-hs400-1_8v; 458 455 vmmc-supply = <&ldo20_reg>; 459 456 vqmmc-supply = <&ldo11_reg>; 460 457 };
+1 -1
arch/arm/boot/dts/imx6qdl.dtsi
··· 243 243 clocks = <&clks IMX6QDL_CLK_SPDIF_GCLK>, <&clks IMX6QDL_CLK_OSC>, 244 244 <&clks IMX6QDL_CLK_SPDIF>, <&clks IMX6QDL_CLK_ASRC>, 245 245 <&clks IMX6QDL_CLK_DUMMY>, <&clks IMX6QDL_CLK_ESAI_EXTAL>, 246 - <&clks IMX6QDL_CLK_IPG>, <&clks IMX6QDL_CLK_MLB>, 246 + <&clks IMX6QDL_CLK_IPG>, <&clks IMX6QDL_CLK_DUMMY>, 247 247 <&clks IMX6QDL_CLK_DUMMY>, <&clks IMX6QDL_CLK_SPBA>; 248 248 clock-names = "core", "rxtx0", 249 249 "rxtx1", "rxtx2",
+1 -1
arch/arm/boot/dts/imx6sx-sabreauto.dts
··· 64 64 cd-gpios = <&gpio7 11 GPIO_ACTIVE_LOW>; 65 65 no-1-8-v; 66 66 keep-power-in-suspend; 67 - enable-sdio-wakup; 67 + wakeup-source; 68 68 status = "okay"; 69 69 }; 70 70
+1 -1
arch/arm/boot/dts/imx7d-sdb.dts
··· 131 131 ti,y-min = /bits/ 16 <0>; 132 132 ti,y-max = /bits/ 16 <0>; 133 133 ti,pressure-max = /bits/ 16 <0>; 134 - ti,x-plat-ohms = /bits/ 16 <400>; 134 + ti,x-plate-ohms = /bits/ 16 <400>; 135 135 wakeup-source; 136 136 }; 137 137 };
+1 -1
arch/arm/boot/dts/kirkwood-ib62x0.dts
··· 113 113 114 114 partition@e0000 { 115 115 label = "u-boot environment"; 116 - reg = <0xe0000 0x100000>; 116 + reg = <0xe0000 0x20000>; 117 117 }; 118 118 119 119 partition@100000 {
+4
arch/arm/boot/dts/kirkwood-openrd.dtsi
··· 116 116 }; 117 117 }; 118 118 119 + &pciec { 120 + status = "okay"; 121 + }; 122 + 119 123 &pcie0 { 120 124 status = "okay"; 121 125 };
+6 -5
arch/arm/boot/dts/logicpd-som-lv.dtsi
··· 35 35 ranges = <0 0 0x00000000 0x1000000>; /* CS0: 16MB for NAND */ 36 36 37 37 nand@0,0 { 38 - linux,mtd-name = "micron,mt29f4g16abbda3w"; 38 + compatible = "ti,omap2-nand"; 39 39 reg = <0 0 4>; /* CS0, offset 0, IO size 4 */ 40 + interrupt-parent = <&gpmc>; 41 + interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */ 42 + <1 IRQ_TYPE_NONE>; /* termcount */ 43 + linux,mtd-name = "micron,mt29f4g16abbda3w"; 40 44 nand-bus-width = <16>; 41 45 ti,nand-ecc-opt = "bch8"; 46 + rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */ 42 47 gpmc,sync-clk-ps = <0>; 43 48 gpmc,cs-on-ns = <0>; 44 49 gpmc,cs-rd-off-ns = <44>; ··· 59 54 gpmc,wr-access-ns = <40>; 60 55 gpmc,wr-data-mux-bus-ns = <0>; 61 56 gpmc,device-width = <2>; 62 - 63 - gpmc,page-burst-access-ns = <5>; 64 - gpmc,cycle2cycle-delay-ns = <50>; 65 - 66 57 #address-cells = <1>; 67 58 #size-cells = <1>; 68 59
+1
arch/arm/boot/dts/logicpd-torpedo-som.dtsi
··· 46 46 linux,mtd-name = "micron,mt29f4g16abbda3w"; 47 47 nand-bus-width = <16>; 48 48 ti,nand-ecc-opt = "bch8"; 49 + rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */ 49 50 gpmc,sync-clk-ps = <0>; 50 51 gpmc,cs-on-ns = <0>; 51 52 gpmc,cs-rd-off-ns = <44>;
+3 -1
arch/arm/boot/dts/omap3-overo-base.dtsi
··· 223 223 }; 224 224 225 225 &gpmc { 226 - ranges = <0 0 0x00000000 0x20000000>; 226 + ranges = <0 0 0x30000000 0x1000000>, /* CS0 */ 227 + <4 0 0x2b000000 0x1000000>, /* CS4 */ 228 + <5 0 0x2c000000 0x1000000>; /* CS5 */ 227 229 228 230 nand@0,0 { 229 231 compatible = "ti,omap2-nand";
-2
arch/arm/boot/dts/omap3-overo-chestnut43-common.dtsi
··· 55 55 #include "omap-gpmc-smsc9221.dtsi" 56 56 57 57 &gpmc { 58 - ranges = <5 0 0x2c000000 0x1000000>; /* CS5 */ 59 - 60 58 ethernet@gpmc { 61 59 reg = <5 0 0xff>; 62 60 interrupt-parent = <&gpio6>;
-2
arch/arm/boot/dts/omap3-overo-tobi-common.dtsi
··· 27 27 #include "omap-gpmc-smsc9221.dtsi" 28 28 29 29 &gpmc { 30 - ranges = <5 0 0x2c000000 0x1000000>; /* CS5 */ 31 - 32 30 ethernet@gpmc { 33 31 reg = <5 0 0xff>; 34 32 interrupt-parent = <&gpio6>;
-3
arch/arm/boot/dts/omap3-overo-tobiduo-common.dtsi
··· 15 15 #include "omap-gpmc-smsc9221.dtsi" 16 16 17 17 &gpmc { 18 - ranges = <4 0 0x2b000000 0x1000000>, /* CS4 */ 19 - <5 0 0x2c000000 0x1000000>; /* CS5 */ 20 - 21 18 smsc1: ethernet@gpmc { 22 19 reg = <5 0 0xff>; 23 20 interrupt-parent = <&gpio6>;
+1 -1
arch/arm/boot/dts/sun5i-a13.dtsi
··· 84 84 trips { 85 85 cpu_alert0: cpu_alert0 { 86 86 /* milliCelsius */ 87 - temperature = <850000>; 87 + temperature = <85000>; 88 88 hysteresis = <2000>; 89 89 type = "passive"; 90 90 };
+1 -1
arch/arm/boot/dts/tegra114-dalmore.dts
··· 897 897 palmas: tps65913@58 { 898 898 compatible = "ti,palmas"; 899 899 reg = <0x58>; 900 - interrupts = <0 86 IRQ_TYPE_LEVEL_LOW>; 900 + interrupts = <0 86 IRQ_TYPE_LEVEL_HIGH>; 901 901 902 902 #interrupt-cells = <2>; 903 903 interrupt-controller;
+1 -1
arch/arm/boot/dts/tegra114-roth.dts
··· 802 802 palmas: pmic@58 { 803 803 compatible = "ti,palmas"; 804 804 reg = <0x58>; 805 - interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_LOW>; 805 + interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>; 806 806 807 807 #interrupt-cells = <2>; 808 808 interrupt-controller;
+1 -1
arch/arm/boot/dts/tegra114-tn7.dts
··· 63 63 palmas: pmic@58 { 64 64 compatible = "ti,palmas"; 65 65 reg = <0x58>; 66 - interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_LOW>; 66 + interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>; 67 67 68 68 #interrupt-cells = <2>; 69 69 interrupt-controller;
+2 -2
arch/arm/boot/dts/tegra124-jetson-tk1.dts
··· 1382 1382 * Pin 41: BR_UART1_TXD 1383 1383 * Pin 44: BR_UART1_RXD 1384 1384 */ 1385 - serial@0,70006000 { 1385 + serial@70006000 { 1386 1386 compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart"; 1387 1387 status = "okay"; 1388 1388 }; ··· 1394 1394 * Pin 71: UART2_CTS_L 1395 1395 * Pin 74: UART2_RTS_L 1396 1396 */ 1397 - serial@0,70006040 { 1397 + serial@70006040 { 1398 1398 compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart"; 1399 1399 status = "okay"; 1400 1400 };
+13
arch/arm/kernel/hyp-stub.S
··· 142 142 and r7, #0x1f @ Preserve HPMN 143 143 mcr p15, 4, r7, c1, c1, 1 @ HDCR 144 144 145 + @ Make sure NS-SVC is initialised appropriately 146 + mrc p15, 0, r7, c1, c0, 0 @ SCTLR 147 + orr r7, #(1 << 5) @ CP15 barriers enabled 148 + bic r7, #(3 << 7) @ Clear SED/ITD for v8 (RES0 for v7) 149 + bic r7, #(3 << 19) @ WXN and UWXN disabled 150 + mcr p15, 0, r7, c1, c0, 0 @ SCTLR 151 + 152 + mrc p15, 0, r7, c0, c0, 0 @ MIDR 153 + mcr p15, 4, r7, c0, c0, 0 @ VPIDR 154 + 155 + mrc p15, 0, r7, c0, c0, 5 @ MPIDR 156 + mcr p15, 4, r7, c0, c0, 5 @ VMPIDR 157 + 145 158 #if !defined(ZIMAGE) && defined(CONFIG_ARM_ARCH_TIMER) 146 159 @ make CNTP_* and CNTPCT accessible from PL1 147 160 mrc p15, 0, r7, c0, c1, 1 @ ID_PFR1
+1
arch/arm/mach-imx/mach-imx6ul.c
··· 64 64 if (parent == NULL) 65 65 pr_warn("failed to initialize soc device\n"); 66 66 67 + of_platform_default_populate(NULL, NULL, parent); 67 68 imx6ul_enet_init(); 68 69 imx_anatop_init(); 69 70 imx6ul_pm_init();
+2 -2
arch/arm/mach-imx/pm-imx6.c
··· 295 295 val &= ~BM_CLPCR_SBYOS; 296 296 if (cpu_is_imx6sl()) 297 297 val |= BM_CLPCR_BYPASS_PMIC_READY; 298 - if (cpu_is_imx6sl() || cpu_is_imx6sx()) 298 + if (cpu_is_imx6sl() || cpu_is_imx6sx() || cpu_is_imx6ul()) 299 299 val |= BM_CLPCR_BYP_MMDC_CH0_LPM_HS; 300 300 else 301 301 val |= BM_CLPCR_BYP_MMDC_CH1_LPM_HS; ··· 310 310 val |= 0x3 << BP_CLPCR_STBY_COUNT; 311 311 val |= BM_CLPCR_VSTBY; 312 312 val |= BM_CLPCR_SBYOS; 313 - if (cpu_is_imx6sl()) 313 + if (cpu_is_imx6sl() || cpu_is_imx6sx()) 314 314 val |= BM_CLPCR_BYPASS_PMIC_READY; 315 315 if (cpu_is_imx6sl() || cpu_is_imx6sx() || cpu_is_imx6ul()) 316 316 val |= BM_CLPCR_BYP_MMDC_CH0_LPM_HS;
-6
arch/arm/mach-omap2/cm33xx.c
··· 220 220 { 221 221 int i = 0; 222 222 223 - if (!clkctrl_offs) 224 - return 0; 225 - 226 223 omap_test_timeout(_is_module_ready(inst, clkctrl_offs), 227 224 MAX_MODULE_READY_TIME, i); 228 225 ··· 242 245 u8 bit_shift) 243 246 { 244 247 int i = 0; 245 - 246 - if (!clkctrl_offs) 247 - return 0; 248 248 249 249 omap_test_timeout((_clkctrl_idlest(inst, clkctrl_offs) == 250 250 CLKCTRL_IDLEST_DISABLED),
-6
arch/arm/mach-omap2/cminst44xx.c
··· 278 278 { 279 279 int i = 0; 280 280 281 - if (!clkctrl_offs) 282 - return 0; 283 - 284 281 omap_test_timeout(_is_module_ready(part, inst, clkctrl_offs), 285 282 MAX_MODULE_READY_TIME, i); 286 283 ··· 300 303 u8 bit_shift) 301 304 { 302 305 int i = 0; 303 - 304 - if (!clkctrl_offs) 305 - return 0; 306 306 307 307 omap_test_timeout((_clkctrl_idlest(part, inst, clkctrl_offs) == 308 308 CLKCTRL_IDLEST_DISABLED),
+8
arch/arm/mach-omap2/omap_hwmod.c
··· 1053 1053 if (oh->flags & HWMOD_NO_IDLEST) 1054 1054 return 0; 1055 1055 1056 + if (!oh->prcm.omap4.clkctrl_offs && 1057 + !(oh->prcm.omap4.flags & HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET)) 1058 + return 0; 1059 + 1056 1060 return omap_cm_wait_module_idle(oh->clkdm->prcm_partition, 1057 1061 oh->clkdm->cm_inst, 1058 1062 oh->prcm.omap4.clkctrl_offs, 0); ··· 2973 2969 return 0; 2974 2970 2975 2971 if (!_find_mpu_rt_port(oh)) 2972 + return 0; 2973 + 2974 + if (!oh->prcm.omap4.clkctrl_offs && 2975 + !(oh->prcm.omap4.flags & HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET)) 2976 2976 return 0; 2977 2977 2978 2978 /* XXX check module SIDLEMODE, hardreset status */
+4
arch/arm/mach-omap2/omap_hwmod.h
··· 443 443 * HWMOD_OMAP4_NO_CONTEXT_LOSS_BIT: Some IP blocks don't have a PRCM 444 444 * module-level context loss register associated with them; this 445 445 * flag bit should be set in those cases 446 + * HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET: Some IP blocks have a valid CLKCTRL 447 + * offset of zero; this flag bit should be set in those cases to 448 + * distinguish from hwmods that have no clkctrl offset. 446 449 */ 447 450 #define HWMOD_OMAP4_NO_CONTEXT_LOSS_BIT (1 << 0) 451 + #define HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET (1 << 1) 448 452 449 453 /** 450 454 * struct omap_hwmod_omap4_prcm - OMAP4-specific PRCM data
+2
arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
··· 29 29 #define CLKCTRL(oh, clkctrl) ((oh).prcm.omap4.clkctrl_offs = (clkctrl)) 30 30 #define RSTCTRL(oh, rstctrl) ((oh).prcm.omap4.rstctrl_offs = (rstctrl)) 31 31 #define RSTST(oh, rstst) ((oh).prcm.omap4.rstst_offs = (rstst)) 32 + #define PRCM_FLAGS(oh, flag) ((oh).prcm.omap4.flags = (flag)) 32 33 33 34 /* 34 35 * 'l3' class ··· 1297 1296 CLKCTRL(am33xx_i2c1_hwmod, AM33XX_CM_WKUP_I2C0_CLKCTRL_OFFSET); 1298 1297 CLKCTRL(am33xx_wd_timer1_hwmod, AM33XX_CM_WKUP_WDT1_CLKCTRL_OFFSET); 1299 1298 CLKCTRL(am33xx_rtc_hwmod, AM33XX_CM_RTC_RTC_CLKCTRL_OFFSET); 1299 + PRCM_FLAGS(am33xx_rtc_hwmod, HWMOD_OMAP4_ZERO_CLKCTRL_OFFSET); 1300 1300 CLKCTRL(am33xx_mmc2_hwmod, AM33XX_CM_PER_MMC2_CLKCTRL_OFFSET); 1301 1301 CLKCTRL(am33xx_gpmc_hwmod, AM33XX_CM_PER_GPMC_CLKCTRL_OFFSET); 1302 1302 CLKCTRL(am33xx_l4_ls_hwmod, AM33XX_CM_PER_L4LS_CLKCTRL_OFFSET);
+12
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 722 722 * display serial interface controller 723 723 */ 724 724 725 + static struct omap_hwmod_class_sysconfig omap3xxx_dsi_sysc = { 726 + .rev_offs = 0x0000, 727 + .sysc_offs = 0x0010, 728 + .syss_offs = 0x0014, 729 + .sysc_flags = (SYSC_HAS_AUTOIDLE | SYSC_HAS_CLOCKACTIVITY | 730 + SYSC_HAS_ENAWAKEUP | SYSC_HAS_SIDLEMODE | 731 + SYSC_HAS_SOFTRESET | SYSS_HAS_RESET_STATUS), 732 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART), 733 + .sysc_fields = &omap_hwmod_sysc_type1, 734 + }; 735 + 725 736 static struct omap_hwmod_class omap3xxx_dsi_hwmod_class = { 726 737 .name = "dsi", 738 + .sysc = &omap3xxx_dsi_sysc, 727 739 }; 728 740 729 741 static struct omap_hwmod_irq_info omap3xxx_dsi1_irqs[] = {
+3 -2
arch/arm/mach-sa1100/clock.c
··· 125 125 } 126 126 127 127 static struct clkops clk_36864_ops = { 128 + .enable = clk_cpu_enable, 129 + .disable = clk_cpu_disable, 128 130 .get_rate = clk_36864_get_rate, 129 131 }; 130 132 ··· 142 140 CLKDEV_INIT(NULL, "OSTIMER0", &clk_36864), 143 141 }; 144 142 145 - static int __init sa11xx_clk_init(void) 143 + int __init sa11xx_clk_init(void) 146 144 { 147 145 clkdev_add_table(sa11xx_clkregs, ARRAY_SIZE(sa11xx_clkregs)); 148 146 return 0; 149 147 } 150 - core_initcall(sa11xx_clk_init);
+4
arch/arm/mach-sa1100/generic.c
··· 34 34 35 35 #include <mach/hardware.h> 36 36 #include <mach/irqs.h> 37 + #include <mach/reset.h> 37 38 38 39 #include "generic.h" 39 40 #include <clocksource/pxa.h> ··· 96 95 97 96 void sa11x0_restart(enum reboot_mode mode, const char *cmd) 98 97 { 98 + clear_reset_status(RESET_STATUS_ALL); 99 + 99 100 if (mode == REBOOT_SOFT) { 100 101 /* Jump into ROM at address 0 */ 101 102 soft_restart(0); ··· 391 388 sa11x0_init_irq_nodt(IRQ_GPIO0_SC, irq_resource.start); 392 389 393 390 sa1100_init_gpio(); 391 + sa11xx_clk_init(); 394 392 } 395 393 396 394 /*
+2
arch/arm/mach-sa1100/generic.h
··· 44 44 #else 45 45 static inline int sa11x0_pm_init(void) { return 0; } 46 46 #endif 47 + 48 + int sa11xx_clk_init(void);
+1
arch/arm/mm/proc-v7.S
··· 16 16 #include <asm/hwcap.h> 17 17 #include <asm/pgtable-hwdef.h> 18 18 #include <asm/pgtable.h> 19 + #include <asm/memory.h> 19 20 20 21 #include "proc-macros.S" 21 22
+4 -4
arch/arm64/include/asm/percpu.h
··· 199 199 #define _percpu_read(pcp) \ 200 200 ({ \ 201 201 typeof(pcp) __retval; \ 202 - preempt_disable(); \ 202 + preempt_disable_notrace(); \ 203 203 __retval = (typeof(pcp))__percpu_read(raw_cpu_ptr(&(pcp)), \ 204 204 sizeof(pcp)); \ 205 - preempt_enable(); \ 205 + preempt_enable_notrace(); \ 206 206 __retval; \ 207 207 }) 208 208 209 209 #define _percpu_write(pcp, val) \ 210 210 do { \ 211 - preempt_disable(); \ 211 + preempt_disable_notrace(); \ 212 212 __percpu_write(raw_cpu_ptr(&(pcp)), (unsigned long)(val), \ 213 213 sizeof(pcp)); \ 214 - preempt_enable(); \ 214 + preempt_enable_notrace(); \ 215 215 } while(0) \ 216 216 217 217 #define _pcp_protect(operation, pcp, val) \
+10
arch/arm64/include/asm/spinlock.h
··· 363 363 #define arch_read_relax(lock) cpu_relax() 364 364 #define arch_write_relax(lock) cpu_relax() 365 365 366 + /* 367 + * Accesses appearing in program order before a spin_lock() operation 368 + * can be reordered with accesses inside the critical section, by virtue 369 + * of arch_spin_lock being constructed using acquire semantics. 370 + * 371 + * In cases where this is problematic (e.g. try_to_wake_up), an 372 + * smp_mb__before_spinlock() can restore the required ordering. 373 + */ 374 + #define smp_mb__before_spinlock() smp_mb() 375 + 366 376 #endif /* __ASM_SPINLOCK_H */
+4 -8
arch/ia64/include/asm/uaccess.h
··· 241 241 static inline unsigned long 242 242 __copy_to_user (void __user *to, const void *from, unsigned long count) 243 243 { 244 - if (!__builtin_constant_p(count)) 245 - check_object_size(from, count, true); 244 + check_object_size(from, count, true); 246 245 247 246 return __copy_user(to, (__force void __user *) from, count); 248 247 } ··· 249 250 static inline unsigned long 250 251 __copy_from_user (void *to, const void __user *from, unsigned long count) 251 252 { 252 - if (!__builtin_constant_p(count)) 253 - check_object_size(to, count, false); 253 + check_object_size(to, count, false); 254 254 255 255 return __copy_user((__force void __user *) to, from, count); 256 256 } ··· 263 265 long __cu_len = (n); \ 264 266 \ 265 267 if (__access_ok(__cu_to, __cu_len, get_fs())) { \ 266 - if (!__builtin_constant_p(n)) \ 267 - check_object_size(__cu_from, __cu_len, true); \ 268 + check_object_size(__cu_from, __cu_len, true); \ 268 269 __cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len); \ 269 270 } \ 270 271 __cu_len; \ ··· 277 280 \ 278 281 __chk_user_ptr(__cu_from); \ 279 282 if (__access_ok(__cu_from, __cu_len, get_fs())) { \ 280 - if (!__builtin_constant_p(n)) \ 281 - check_object_size(__cu_to, __cu_len, false); \ 283 + check_object_size(__cu_to, __cu_len, false); \ 282 284 __cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len); \ 283 285 } \ 284 286 __cu_len; \
+7 -12
arch/powerpc/include/asm/uaccess.h
··· 311 311 unsigned long over; 312 312 313 313 if (access_ok(VERIFY_READ, from, n)) { 314 - if (!__builtin_constant_p(n)) 315 - check_object_size(to, n, false); 314 + check_object_size(to, n, false); 316 315 return __copy_tofrom_user((__force void __user *)to, from, n); 317 316 } 318 317 if ((unsigned long)from < TASK_SIZE) { 319 318 over = (unsigned long)from + n - TASK_SIZE; 320 - if (!__builtin_constant_p(n - over)) 321 - check_object_size(to, n - over, false); 319 + check_object_size(to, n - over, false); 322 320 return __copy_tofrom_user((__force void __user *)to, from, 323 321 n - over) + over; 324 322 } ··· 329 331 unsigned long over; 330 332 331 333 if (access_ok(VERIFY_WRITE, to, n)) { 332 - if (!__builtin_constant_p(n)) 333 - check_object_size(from, n, true); 334 + check_object_size(from, n, true); 334 335 return __copy_tofrom_user(to, (__force void __user *)from, n); 335 336 } 336 337 if ((unsigned long)to < TASK_SIZE) { 337 338 over = (unsigned long)to + n - TASK_SIZE; 338 - if (!__builtin_constant_p(n)) 339 - check_object_size(from, n - over, true); 339 + check_object_size(from, n - over, true); 340 340 return __copy_tofrom_user(to, (__force void __user *)from, 341 341 n - over) + over; 342 342 } ··· 379 383 return 0; 380 384 } 381 385 382 - if (!__builtin_constant_p(n)) 383 - check_object_size(to, n, false); 386 + check_object_size(to, n, false); 384 387 385 388 return __copy_tofrom_user((__force void __user *)to, from, n); 386 389 } ··· 407 412 if (ret == 0) 408 413 return 0; 409 414 } 410 - if (!__builtin_constant_p(n)) 411 - check_object_size(from, n, true); 415 + 416 + check_object_size(from, n, true); 412 417 413 418 return __copy_tofrom_user(to, (__force const void __user *)from, n); 414 419 }
+4 -3
arch/powerpc/lib/checksum_32.S
··· 127 127 stw r7,12(r1) 128 128 stw r8,8(r1) 129 129 130 - rlwinm r0,r4,3,0x8 131 - rlwnm r6,r6,r0,0,31 /* odd destination address: rotate one byte */ 132 - cmplwi cr7,r0,0 /* is destination address even ? */ 133 130 addic r12,r6,0 134 131 addi r6,r4,-4 135 132 neg r0,r4 136 133 addi r4,r3,-4 137 134 andi. r0,r0,CACHELINE_MASK /* # bytes to start of cache line */ 135 + crset 4*cr7+eq 138 136 beq 58f 139 137 140 138 cmplw 0,r5,r0 /* is this more than total to do? */ 141 139 blt 63f /* if not much to do */ 140 + rlwinm r7,r6,3,0x8 141 + rlwnm r12,r12,r7,0,31 /* odd destination address: rotate one byte */ 142 + cmplwi cr7,r7,0 /* is destination address even ? */ 142 143 andi. r8,r0,3 /* get it word-aligned first */ 143 144 mtctr r8 144 145 beq+ 61f
+6 -1
arch/powerpc/mm/slb_low.S
··· 113 113 END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT) 114 114 b slb_finish_load_1T 115 115 116 - 0: 116 + 0: /* 117 + * For userspace addresses, make sure this is region 0. 118 + */ 119 + cmpdi r9, 0 120 + bne 8f 121 + 117 122 /* when using slices, we extract the psize off the slice bitmaps 118 123 * and then we need to get the sllp encoding off the mmu_psize_defs 119 124 * array.
+11 -7
arch/powerpc/platforms/powernv/pci-ioda.c
··· 162 162 static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe) 163 163 { 164 164 struct pnv_phb *phb = pe->phb; 165 + unsigned int pe_num = pe->pe_number; 165 166 166 167 WARN_ON(pe->pdev); 167 168 168 169 memset(pe, 0, sizeof(struct pnv_ioda_pe)); 169 - clear_bit(pe->pe_number, phb->ioda.pe_alloc); 170 + clear_bit(pe_num, phb->ioda.pe_alloc); 170 171 } 171 172 172 173 /* The default M64 BAR is shared by all PEs */ ··· 3403 3402 struct pnv_phb *phb = pe->phb; 3404 3403 struct pnv_ioda_pe *slave, *tmp; 3405 3404 3406 - /* Release slave PEs in compound PE */ 3407 - if (pe->flags & PNV_IODA_PE_MASTER) { 3408 - list_for_each_entry_safe(slave, tmp, &pe->slaves, list) 3409 - pnv_ioda_release_pe(slave); 3410 - } 3411 - 3412 3405 list_del(&pe->list); 3413 3406 switch (phb->type) { 3414 3407 case PNV_PHB_IODA1: ··· 3417 3422 3418 3423 pnv_ioda_release_pe_seg(pe); 3419 3424 pnv_ioda_deconfigure_pe(pe->phb, pe); 3425 + 3426 + /* Release slave PEs in the compound PE */ 3427 + if (pe->flags & PNV_IODA_PE_MASTER) { 3428 + list_for_each_entry_safe(slave, tmp, &pe->slaves, list) { 3429 + list_del(&slave->list); 3430 + pnv_ioda_free_pe(slave); 3431 + } 3432 + } 3433 + 3420 3434 pnv_ioda_free_pe(pe); 3421 3435 } 3422 3436
+1 -1
arch/powerpc/platforms/pseries/setup.c
··· 41 41 #include <linux/root_dev.h> 42 42 #include <linux/of.h> 43 43 #include <linux/of_pci.h> 44 - #include <linux/kexec.h> 45 44 46 45 #include <asm/mmu.h> 47 46 #include <asm/processor.h> ··· 65 66 #include <asm/eeh.h> 66 67 #include <asm/reg.h> 67 68 #include <asm/plpar_wrappers.h> 69 + #include <asm/kexec.h> 68 70 69 71 #include "pseries.h" 70 72
+7 -5
arch/powerpc/sysdev/xics/icp-opal.c
··· 23 23 24 24 static void icp_opal_teardown_cpu(void) 25 25 { 26 - int cpu = smp_processor_id(); 26 + int hw_cpu = hard_smp_processor_id(); 27 27 28 28 /* Clear any pending IPI */ 29 - opal_int_set_mfrr(cpu, 0xff); 29 + opal_int_set_mfrr(hw_cpu, 0xff); 30 30 } 31 31 32 32 static void icp_opal_flush_ipi(void) ··· 101 101 102 102 static void icp_opal_cause_ipi(int cpu, unsigned long data) 103 103 { 104 - opal_int_set_mfrr(cpu, IPI_PRIORITY); 104 + int hw_cpu = get_hard_smp_processor_id(cpu); 105 + 106 + opal_int_set_mfrr(hw_cpu, IPI_PRIORITY); 105 107 } 106 108 107 109 static irqreturn_t icp_opal_ipi_action(int irq, void *dev_id) 108 110 { 109 - int cpu = smp_processor_id(); 111 + int hw_cpu = hard_smp_processor_id(); 110 112 111 - opal_int_set_mfrr(cpu, 0xff); 113 + opal_int_set_mfrr(hw_cpu, 0xff); 112 114 113 115 return smp_ipi_demux(); 114 116 }
+3 -6
arch/sparc/include/asm/uaccess_32.h
··· 249 249 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n) 250 250 { 251 251 if (n && __access_ok((unsigned long) to, n)) { 252 - if (!__builtin_constant_p(n)) 253 - check_object_size(from, n, true); 252 + check_object_size(from, n, true); 254 253 return __copy_user(to, (__force void __user *) from, n); 255 254 } else 256 255 return n; ··· 257 258 258 259 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n) 259 260 { 260 - if (!__builtin_constant_p(n)) 261 - check_object_size(from, n, true); 261 + check_object_size(from, n, true); 262 262 return __copy_user(to, (__force void __user *) from, n); 263 263 } 264 264 265 265 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n) 266 266 { 267 267 if (n && __access_ok((unsigned long) from, n)) { 268 - if (!__builtin_constant_p(n)) 269 - check_object_size(to, n, false); 268 + check_object_size(to, n, false); 270 269 return __copy_user((__force void __user *) to, from, n); 271 270 } else 272 271 return n;
+3 -4
arch/sparc/include/asm/uaccess_64.h
··· 212 212 { 213 213 unsigned long ret; 214 214 215 - if (!__builtin_constant_p(size)) 216 - check_object_size(to, size, false); 215 + check_object_size(to, size, false); 217 216 218 217 ret = ___copy_from_user(to, from, size); 219 218 if (unlikely(ret)) ··· 232 233 { 233 234 unsigned long ret; 234 235 235 - if (!__builtin_constant_p(size)) 236 - check_object_size(from, size, true); 236 + check_object_size(from, size, true); 237 + 237 238 ret = ___copy_to_user(to, from, size); 238 239 if (unlikely(ret)) 239 240 ret = copy_to_user_fixup(to, from, size);
+3 -7
arch/um/kernel/skas/syscall.c
··· 21 21 PT_REGS_SET_SYSCALL_RETURN(regs, -ENOSYS); 22 22 23 23 if (syscall_trace_enter(regs)) 24 - return; 24 + goto out; 25 25 26 26 /* Do the seccomp check after ptrace; failures should be fast. */ 27 27 if (secure_computing(NULL) == -1) 28 - return; 28 + goto out; 29 29 30 - /* Update the syscall number after orig_ax has potentially been updated 31 - * with ptrace. 32 - */ 33 - UPT_SYSCALL_NR(r) = PT_SYSCALL_NR(r->gp); 34 30 syscall = UPT_SYSCALL_NR(r); 35 - 36 31 if (syscall >= 0 && syscall <= __NR_syscall_max) 37 32 PT_REGS_SET_SYSCALL_RETURN(regs, 38 33 EXECUTE_SYSCALL(syscall, regs)); 39 34 35 + out: 40 36 syscall_trace_leave(regs); 41 37 }
+2 -2
arch/x86/include/asm/uaccess.h
··· 705 705 WARN(1, "Buffer overflow detected (%d < %lu)!\n", size, count); 706 706 } 707 707 708 - static inline unsigned long __must_check 708 + static __always_inline unsigned long __must_check 709 709 copy_from_user(void *to, const void __user *from, unsigned long n) 710 710 { 711 711 int sz = __compiletime_object_size(to); ··· 725 725 return n; 726 726 } 727 727 728 - static inline unsigned long __must_check 728 + static __always_inline unsigned long __must_check 729 729 copy_to_user(void __user *to, const void *from, unsigned long n) 730 730 { 731 731 int sz = __compiletime_object_size(from);
+10 -7
arch/x86/mm/pat.c
··· 927 927 } 928 928 929 929 /* 930 - * prot is passed in as a parameter for the new mapping. If the vma has a 931 - * linear pfn mapping for the entire range reserve the entire vma range with 932 - * single reserve_pfn_range call. 930 + * prot is passed in as a parameter for the new mapping. If the vma has 931 + * a linear pfn mapping for the entire range, or no vma is provided, 932 + * reserve the entire pfn + size range with single reserve_pfn_range 933 + * call. 933 934 */ 934 935 int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot, 935 936 unsigned long pfn, unsigned long addr, unsigned long size) ··· 939 938 enum page_cache_mode pcm; 940 939 941 940 /* reserve the whole chunk starting from paddr */ 942 - if (addr == vma->vm_start && size == (vma->vm_end - vma->vm_start)) { 941 + if (!vma || (addr == vma->vm_start 942 + && size == (vma->vm_end - vma->vm_start))) { 943 943 int ret; 944 944 945 945 ret = reserve_pfn_range(paddr, size, prot, 0); 946 - if (!ret) 946 + if (ret == 0 && vma) 947 947 vma->vm_flags |= VM_PAT; 948 948 return ret; 949 949 } ··· 999 997 resource_size_t paddr; 1000 998 unsigned long prot; 1001 999 1002 - if (!(vma->vm_flags & VM_PAT)) 1000 + if (vma && !(vma->vm_flags & VM_PAT)) 1003 1001 return; 1004 1002 1005 1003 /* free the chunk starting from pfn or the whole chunk */ ··· 1013 1011 size = vma->vm_end - vma->vm_start; 1014 1012 } 1015 1013 free_pfn_range(paddr, size); 1016 - vma->vm_flags &= ~VM_PAT; 1014 + if (vma) 1015 + vma->vm_flags &= ~VM_PAT; 1017 1016 } 1018 1017 1019 1018 /*
+3
arch/x86/um/ptrace_32.c
··· 84 84 case EAX: 85 85 case EIP: 86 86 case UESP: 87 + break; 87 88 case ORIG_EAX: 89 + /* Update the syscall number. */ 90 + UPT_SYSCALL_NR(&child->thread.regs.regs) = value; 88 91 break; 89 92 case FS: 90 93 if (value && (value & 3) != 3)
+4
arch/x86/um/ptrace_64.c
··· 78 78 case RSI: 79 79 case RDI: 80 80 case RBP: 81 + break; 82 + 81 83 case ORIG_RAX: 84 + /* Update the syscall number. */ 85 + UPT_SYSCALL_NR(&child->thread.regs.regs) = value; 82 86 break; 83 87 84 88 case FS:
+2 -1
crypto/cryptd.c
··· 733 733 rctx = aead_request_ctx(req); 734 734 compl = rctx->complete; 735 735 736 + tfm = crypto_aead_reqtfm(req); 737 + 736 738 if (unlikely(err == -EINPROGRESS)) 737 739 goto out; 738 740 aead_request_set_tfm(req, child); 739 741 err = crypt( req ); 740 742 741 743 out: 742 - tfm = crypto_aead_reqtfm(req); 743 744 ctx = crypto_aead_ctx(tfm); 744 745 refcnt = atomic_read(&ctx->refcnt); 745 746
+1 -1
drivers/acpi/nfit/mce.c
··· 42 42 list_for_each_entry(nfit_spa, &acpi_desc->spas, list) { 43 43 struct acpi_nfit_system_address *spa = nfit_spa->spa; 44 44 45 - if (nfit_spa_type(spa) == NFIT_SPA_PM) 45 + if (nfit_spa_type(spa) != NFIT_SPA_PM) 46 46 continue; 47 47 /* find the spa that covers the mce addr */ 48 48 if (spa->address > mce->addr)
+28 -10
drivers/base/regmap/regcache-rbtree.c
··· 404 404 unsigned int new_base_reg, new_top_reg; 405 405 unsigned int min, max; 406 406 unsigned int max_dist; 407 + unsigned int dist, best_dist = UINT_MAX; 407 408 408 409 max_dist = map->reg_stride * sizeof(*rbnode_tmp) / 409 410 map->cache_word_size; ··· 424 423 &base_reg, &top_reg); 425 424 426 425 if (base_reg <= max && top_reg >= min) { 427 - new_base_reg = min(reg, base_reg); 428 - new_top_reg = max(reg, top_reg); 429 - } else { 430 - if (max < base_reg) 431 - node = node->rb_left; 426 + if (reg < base_reg) 427 + dist = base_reg - reg; 428 + else if (reg > top_reg) 429 + dist = reg - top_reg; 432 430 else 433 - node = node->rb_right; 434 - 435 - continue; 431 + dist = 0; 432 + if (dist < best_dist) { 433 + rbnode = rbnode_tmp; 434 + best_dist = dist; 435 + new_base_reg = min(reg, base_reg); 436 + new_top_reg = max(reg, top_reg); 437 + } 436 438 } 437 439 438 - ret = regcache_rbtree_insert_to_block(map, rbnode_tmp, 440 + /* 441 + * Keep looking, we want to choose the closest block, 442 + * otherwise we might end up creating overlapping 443 + * blocks, which breaks the rbtree. 444 + */ 445 + if (reg < base_reg) 446 + node = node->rb_left; 447 + else if (reg > top_reg) 448 + node = node->rb_right; 449 + else 450 + break; 451 + } 452 + 453 + if (rbnode) { 454 + ret = regcache_rbtree_insert_to_block(map, rbnode, 439 455 new_base_reg, 440 456 new_top_reg, reg, 441 457 value); 442 458 if (ret) 443 459 return ret; 444 - rbtree_ctx->cached_rbnode = rbnode_tmp; 460 + rbtree_ctx->cached_rbnode = rbnode; 445 461 return 0; 446 462 } 447 463
+3 -2
drivers/base/regmap/regcache.c
··· 38 38 39 39 /* calculate the size of reg_defaults */ 40 40 for (count = 0, i = 0; i < map->num_reg_defaults_raw; i++) 41 - if (!regmap_volatile(map, i * map->reg_stride)) 41 + if (regmap_readable(map, i * map->reg_stride) && 42 + !regmap_volatile(map, i * map->reg_stride)) 42 43 count++; 43 44 44 - /* all registers are volatile, so just bypass */ 45 + /* all registers are unreadable or volatile, so just bypass */ 45 46 if (!count) { 46 47 map->cache_bypass = true; 47 48 return 0;
+2
drivers/base/regmap/regmap.c
··· 1474 1474 ret = map->bus->write(map->bus_context, buf, len); 1475 1475 1476 1476 kfree(buf); 1477 + } else if (ret != 0 && !map->cache_bypass && map->format.parse_val) { 1478 + regcache_drop_region(map, reg, reg + 1); 1477 1479 } 1478 1480 1479 1481 trace_regmap_hw_write_done(map, reg, val_len / map->format.val_bytes);
+1 -1
drivers/bus/arm-cci.c
··· 551 551 CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_wrq, 0xB), 552 552 CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_cd_hs, 0xC), 553 553 CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_rq_stall_addr_hazard, 0xD), 554 - CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snopp_rq_stall_tt_full, 0xE), 554 + CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_stall_tt_full, 0xE), 555 555 CCI5xx_GLOBAL_EVENT_EXT_ATTR_ENTRY(cci_snoop_rq_tzmp1_prot, 0xF), 556 556 NULL 557 557 };
+79 -33
drivers/bus/arm-ccn.c
··· 187 187 struct arm_ccn_component *xp; 188 188 189 189 struct arm_ccn_dt dt; 190 + int mn_id; 190 191 }; 191 192 192 193 static DEFINE_MUTEX(arm_ccn_mutex); ··· 213 212 #define CCN_CONFIG_TYPE(_config) (((_config) >> 8) & 0xff) 214 213 #define CCN_CONFIG_EVENT(_config) (((_config) >> 16) & 0xff) 215 214 #define CCN_CONFIG_PORT(_config) (((_config) >> 24) & 0x3) 215 + #define CCN_CONFIG_BUS(_config) (((_config) >> 24) & 0x3) 216 216 #define CCN_CONFIG_VC(_config) (((_config) >> 26) & 0x7) 217 217 #define CCN_CONFIG_DIR(_config) (((_config) >> 29) & 0x1) 218 218 #define CCN_CONFIG_MASK(_config) (((_config) >> 30) & 0xf) ··· 243 241 static CCN_FORMAT_ATTR(type, "config:8-15"); 244 242 static CCN_FORMAT_ATTR(event, "config:16-23"); 245 243 static CCN_FORMAT_ATTR(port, "config:24-25"); 244 + static CCN_FORMAT_ATTR(bus, "config:24-25"); 246 245 static CCN_FORMAT_ATTR(vc, "config:26-28"); 247 246 static CCN_FORMAT_ATTR(dir, "config:29-29"); 248 247 static CCN_FORMAT_ATTR(mask, "config:30-33"); ··· 256 253 &arm_ccn_pmu_format_attr_type.attr.attr, 257 254 &arm_ccn_pmu_format_attr_event.attr.attr, 258 255 &arm_ccn_pmu_format_attr_port.attr.attr, 256 + &arm_ccn_pmu_format_attr_bus.attr.attr, 259 257 &arm_ccn_pmu_format_attr_vc.attr.attr, 260 258 &arm_ccn_pmu_format_attr_dir.attr.attr, 261 259 &arm_ccn_pmu_format_attr_mask.attr.attr, ··· 332 328 static ssize_t arm_ccn_pmu_event_show(struct device *dev, 333 329 struct device_attribute *attr, char *buf) 334 330 { 331 + struct arm_ccn *ccn = pmu_to_arm_ccn(dev_get_drvdata(dev)); 335 332 struct arm_ccn_pmu_event *event = container_of(attr, 336 333 struct arm_ccn_pmu_event, attr); 337 334 ssize_t res; ··· 354 349 break; 355 350 case CCN_TYPE_XP: 356 351 res += snprintf(buf + res, PAGE_SIZE - res, 357 - ",xp=?,port=?,vc=?,dir=?"); 352 + ",xp=?,vc=?"); 358 353 if (event->event == CCN_EVENT_WATCHPOINT) 359 354 res += snprintf(buf + res, PAGE_SIZE - res, 360 - ",cmp_l=?,cmp_h=?,mask=?"); 355 + ",port=?,dir=?,cmp_l=?,cmp_h=?,mask=?"); 356 + else 357 + res += snprintf(buf + res, PAGE_SIZE - res, 358 + ",bus=?"); 359 + 360 + break; 361 + case CCN_TYPE_MN: 362 + res += snprintf(buf + res, PAGE_SIZE - res, ",node=%d", ccn->mn_id); 361 363 break; 362 364 default: 363 365 res += snprintf(buf + res, PAGE_SIZE - res, ",node=?"); ··· 395 383 } 396 384 397 385 static struct arm_ccn_pmu_event arm_ccn_pmu_events[] = { 398 - CCN_EVENT_MN(eobarrier, "dir=0,vc=0,cmp_h=0x1c00", CCN_IDX_MASK_OPCODE), 399 - CCN_EVENT_MN(ecbarrier, "dir=0,vc=0,cmp_h=0x1e00", CCN_IDX_MASK_OPCODE), 400 - CCN_EVENT_MN(dvmop, "dir=0,vc=0,cmp_h=0x2800", CCN_IDX_MASK_OPCODE), 386 + CCN_EVENT_MN(eobarrier, "dir=1,vc=0,cmp_h=0x1c00", CCN_IDX_MASK_OPCODE), 387 + CCN_EVENT_MN(ecbarrier, "dir=1,vc=0,cmp_h=0x1e00", CCN_IDX_MASK_OPCODE), 388 + CCN_EVENT_MN(dvmop, "dir=1,vc=0,cmp_h=0x2800", CCN_IDX_MASK_OPCODE), 401 389 CCN_EVENT_HNI(txdatflits, "dir=1,vc=3", CCN_IDX_MASK_ANY), 402 390 CCN_EVENT_HNI(rxdatflits, "dir=0,vc=3", CCN_IDX_MASK_ANY), 403 391 CCN_EVENT_HNI(txreqflits, "dir=1,vc=0", CCN_IDX_MASK_ANY), ··· 745 733 746 734 if (has_branch_stack(event) || event->attr.exclude_user || 747 735 event->attr.exclude_kernel || event->attr.exclude_hv || 748 - event->attr.exclude_idle) { 736 + event->attr.exclude_idle || event->attr.exclude_host || 737 + event->attr.exclude_guest) { 749 738 dev_warn(ccn->dev, "Can't exclude execution levels!\n"); 750 - return -EOPNOTSUPP; 739 + return -EINVAL; 751 740 } 752 741 753 742 if (event->cpu < 0) { ··· 772 759 773 760 /* Validate node/xp vs topology */ 774 761 switch 
(type) { 762 + case CCN_TYPE_MN: 763 + if (node_xp != ccn->mn_id) { 764 + dev_warn(ccn->dev, "Invalid MN ID %d!\n", node_xp); 765 + return -EINVAL; 766 + } 767 + break; 775 768 case CCN_TYPE_XP: 776 769 if (node_xp >= ccn->num_xps) { 777 770 dev_warn(ccn->dev, "Invalid XP ID %d!\n", node_xp); ··· 905 886 struct arm_ccn_component *xp; 906 887 u32 val, dt_cfg; 907 888 889 + /* Nothing to do for cycle counter */ 890 + if (hw->idx == CCN_IDX_PMU_CYCLE_COUNTER) 891 + return; 892 + 908 893 if (CCN_CONFIG_TYPE(event->attr.config) == CCN_TYPE_XP) 909 894 xp = &ccn->xp[CCN_CONFIG_XP(event->attr.config)]; 910 895 else ··· 940 917 arm_ccn_pmu_read_counter(ccn, hw->idx)); 941 918 hw->state = 0; 942 919 943 - /* 944 - * Pin the timer, so that the overflows are handled by the chosen 945 - * event->cpu (this is the same one as presented in "cpumask" 946 - * attribute). 947 - */ 948 - if (!ccn->irq) 949 - hrtimer_start(&ccn->dt.hrtimer, arm_ccn_pmu_timer_period(), 950 - HRTIMER_MODE_REL_PINNED); 951 - 952 920 /* Set the DT bus input, engaging the counter */ 953 921 arm_ccn_pmu_xp_dt_config(event, 1); 954 922 } 955 923 956 924 static void arm_ccn_pmu_event_stop(struct perf_event *event, int flags) 957 925 { 958 - struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu); 959 926 struct hw_perf_event *hw = &event->hw; 960 - u64 timeout; 961 927 962 928 /* Disable counting, setting the DT bus to pass-through mode */ 963 929 arm_ccn_pmu_xp_dt_config(event, 0); 964 - 965 - if (!ccn->irq) 966 - hrtimer_cancel(&ccn->dt.hrtimer); 967 - 968 - /* Let the DT bus drain */ 969 - timeout = arm_ccn_pmu_read_counter(ccn, CCN_IDX_PMU_CYCLE_COUNTER) + 970 - ccn->num_xps; 971 - while (arm_ccn_pmu_read_counter(ccn, CCN_IDX_PMU_CYCLE_COUNTER) < 972 - timeout) 973 - cpu_relax(); 974 930 975 931 if (flags & PERF_EF_UPDATE) 976 932 arm_ccn_pmu_event_update(event); ··· 990 988 991 989 /* Comparison values */ 992 990 writel(cmp_l & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_L(wp)); 993 - writel((cmp_l >> 32) & 0xefffffff, 991 + writel((cmp_l >> 32) & 0x7fffffff, 994 992 source->base + CCN_XP_DT_CMP_VAL_L(wp) + 4); 995 993 writel(cmp_h & 0xffffffff, source->base + CCN_XP_DT_CMP_VAL_H(wp)); 996 994 writel((cmp_h >> 32) & 0x0fffffff, ··· 998 996 999 997 /* Mask */ 1000 998 writel(mask_l & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_L(wp)); 1001 - writel((mask_l >> 32) & 0xefffffff, 999 + writel((mask_l >> 32) & 0x7fffffff, 1002 1000 source->base + CCN_XP_DT_CMP_MASK_L(wp) + 4); 1003 1001 writel(mask_h & 0xffffffff, source->base + CCN_XP_DT_CMP_MASK_H(wp)); 1004 1002 writel((mask_h >> 32) & 0x0fffffff, ··· 1016 1014 hw->event_base = CCN_XP_DT_CONFIG__DT_CFG__XP_PMU_EVENT(hw->config_base); 1017 1015 1018 1016 id = (CCN_CONFIG_VC(event->attr.config) << 4) | 1019 - (CCN_CONFIG_PORT(event->attr.config) << 3) | 1017 + (CCN_CONFIG_BUS(event->attr.config) << 3) | 1020 1018 (CCN_CONFIG_EVENT(event->attr.config) << 0); 1021 1019 1022 1020 val = readl(source->base + CCN_XP_PMU_EVENT_SEL); ··· 1101 1099 spin_unlock(&ccn->dt.config_lock); 1102 1100 } 1103 1101 1102 + static int arm_ccn_pmu_active_counters(struct arm_ccn *ccn) 1103 + { 1104 + return bitmap_weight(ccn->dt.pmu_counters_mask, 1105 + CCN_NUM_PMU_EVENT_COUNTERS + 1); 1106 + } 1107 + 1104 1108 static int arm_ccn_pmu_event_add(struct perf_event *event, int flags) 1105 1109 { 1106 1110 int err; 1107 1111 struct hw_perf_event *hw = &event->hw; 1112 + struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu); 1108 1113 1109 1114 err = arm_ccn_pmu_event_alloc(event); 1110 1115 if (err) 1111 1116 return 
err; 1117 + 1118 + /* 1119 + * Pin the timer, so that the overflows are handled by the chosen 1120 + * event->cpu (this is the same one as presented in "cpumask" 1121 + * attribute). 1122 + */ 1123 + if (!ccn->irq && arm_ccn_pmu_active_counters(ccn) == 1) 1124 + hrtimer_start(&ccn->dt.hrtimer, arm_ccn_pmu_timer_period(), 1125 + HRTIMER_MODE_REL_PINNED); 1112 1126 1113 1127 arm_ccn_pmu_event_config(event); 1114 1128 ··· 1138 1120 1139 1121 static void arm_ccn_pmu_event_del(struct perf_event *event, int flags) 1140 1122 { 1123 + struct arm_ccn *ccn = pmu_to_arm_ccn(event->pmu); 1124 + 1141 1125 arm_ccn_pmu_event_stop(event, PERF_EF_UPDATE); 1142 1126 1143 1127 arm_ccn_pmu_event_release(event); 1128 + 1129 + if (!ccn->irq && arm_ccn_pmu_active_counters(ccn) == 0) 1130 + hrtimer_cancel(&ccn->dt.hrtimer); 1144 1131 } 1145 1132 1146 1133 static void arm_ccn_pmu_event_read(struct perf_event *event) 1147 1134 { 1148 1135 arm_ccn_pmu_event_update(event); 1136 + } 1137 + 1138 + static void arm_ccn_pmu_enable(struct pmu *pmu) 1139 + { 1140 + struct arm_ccn *ccn = pmu_to_arm_ccn(pmu); 1141 + 1142 + u32 val = readl(ccn->dt.base + CCN_DT_PMCR); 1143 + val |= CCN_DT_PMCR__PMU_EN; 1144 + writel(val, ccn->dt.base + CCN_DT_PMCR); 1145 + } 1146 + 1147 + static void arm_ccn_pmu_disable(struct pmu *pmu) 1148 + { 1149 + struct arm_ccn *ccn = pmu_to_arm_ccn(pmu); 1150 + 1151 + u32 val = readl(ccn->dt.base + CCN_DT_PMCR); 1152 + val &= ~CCN_DT_PMCR__PMU_EN; 1153 + writel(val, ccn->dt.base + CCN_DT_PMCR); 1149 1154 } 1150 1155 1151 1156 static irqreturn_t arm_ccn_pmu_overflow_handler(struct arm_ccn_dt *dt) ··· 1293 1252 .start = arm_ccn_pmu_event_start, 1294 1253 .stop = arm_ccn_pmu_event_stop, 1295 1254 .read = arm_ccn_pmu_event_read, 1255 + .pmu_enable = arm_ccn_pmu_enable, 1256 + .pmu_disable = arm_ccn_pmu_disable, 1296 1257 }; 1297 1258 1298 1259 /* No overflow interrupt? Have to use a timer instead. */ ··· 1404 1361 1405 1362 switch (type) { 1406 1363 case CCN_TYPE_MN: 1364 + ccn->mn_id = id; 1365 + return 0; 1407 1366 case CCN_TYPE_DT: 1408 1367 return 0; 1409 1368 case CCN_TYPE_XP: ··· 1516 1471 /* Can set 'disable' bits, so can acknowledge interrupts */ 1517 1472 writel(CCN_MN_ERRINT_STATUS__PMU_EVENTS__ENABLE, 1518 1473 ccn->base + CCN_MN_ERRINT_STATUS); 1519 - err = devm_request_irq(ccn->dev, irq, arm_ccn_irq_handler, 0, 1520 - dev_name(ccn->dev), ccn); 1474 + err = devm_request_irq(ccn->dev, irq, arm_ccn_irq_handler, 1475 + IRQF_NOBALANCING | IRQF_NO_THREAD, 1476 + dev_name(ccn->dev), ccn); 1521 1477 if (err) 1522 1478 return err; 1523 1479
+1
drivers/bus/vexpress-config.c
··· 178 178 179 179 parent = class_find_device(vexpress_config_class, NULL, bridge, 180 180 vexpress_config_node_match); 181 + of_node_put(bridge); 181 182 if (WARN_ON(!parent)) 182 183 return -ENODEV; 183 184
+15 -8
drivers/char/virtio_console.c
··· 165 165 */ 166 166 struct virtqueue *c_ivq, *c_ovq; 167 167 168 + /* 169 + * A control packet buffer for guest->host requests, protected 170 + * by c_ovq_lock. 171 + */ 172 + struct virtio_console_control cpkt; 173 + 168 174 /* Array of per-port IO virtqueues */ 169 175 struct virtqueue **in_vqs, **out_vqs; 170 176 ··· 566 560 unsigned int event, unsigned int value) 567 561 { 568 562 struct scatterlist sg[1]; 569 - struct virtio_console_control cpkt; 570 563 struct virtqueue *vq; 571 564 unsigned int len; 572 565 573 566 if (!use_multiport(portdev)) 574 567 return 0; 575 568 576 - cpkt.id = cpu_to_virtio32(portdev->vdev, port_id); 577 - cpkt.event = cpu_to_virtio16(portdev->vdev, event); 578 - cpkt.value = cpu_to_virtio16(portdev->vdev, value); 579 - 580 569 vq = portdev->c_ovq; 581 570 582 - sg_init_one(sg, &cpkt, sizeof(cpkt)); 583 - 584 571 spin_lock(&portdev->c_ovq_lock); 585 - if (virtqueue_add_outbuf(vq, sg, 1, &cpkt, GFP_ATOMIC) == 0) { 572 + 573 + portdev->cpkt.id = cpu_to_virtio32(portdev->vdev, port_id); 574 + portdev->cpkt.event = cpu_to_virtio16(portdev->vdev, event); 575 + portdev->cpkt.value = cpu_to_virtio16(portdev->vdev, value); 576 + 577 + sg_init_one(sg, &portdev->cpkt, sizeof(struct virtio_console_control)); 578 + 579 + if (virtqueue_add_outbuf(vq, sg, 1, &portdev->cpkt, GFP_ATOMIC) == 0) { 586 580 virtqueue_kick(vq); 587 581 while (!virtqueue_get_buf(vq, &len) 588 582 && !virtqueue_is_broken(vq)) 589 583 cpu_relax(); 590 584 } 585 + 591 586 spin_unlock(&portdev->c_ovq_lock); 592 587 return 0; 593 588 }
+37 -40
drivers/crypto/caam/caamalg.c
··· 556 556 557 557 /* Read and write assoclen bytes */ 558 558 append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ); 559 - append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ); 559 + if (alg->caam.geniv) 560 + append_math_add_imm_u32(desc, VARSEQOUTLEN, REG3, IMM, ivsize); 561 + else 562 + append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ); 560 563 561 564 /* Skip assoc data */ 562 565 append_seq_fifo_store(desc, 0, FIFOST_TYPE_SKIP | FIFOLDST_VLF); ··· 567 564 /* read assoc before reading payload */ 568 565 append_seq_fifo_load(desc, 0, FIFOLD_CLASS_CLASS2 | FIFOLD_TYPE_MSG | 569 566 KEY_VLF); 567 + 568 + if (alg->caam.geniv) { 569 + append_seq_load(desc, ivsize, LDST_CLASS_1_CCB | 570 + LDST_SRCDST_BYTE_CONTEXT | 571 + (ctx1_iv_off << LDST_OFFSET_SHIFT)); 572 + append_move(desc, MOVE_SRC_CLASS1CTX | MOVE_DEST_CLASS2INFIFO | 573 + (ctx1_iv_off << MOVE_OFFSET_SHIFT) | ivsize); 574 + } 570 575 571 576 /* Load Counter into CONTEXT1 reg */ 572 577 if (is_rfc3686) ··· 2161 2150 2162 2151 init_aead_job(req, edesc, all_contig, encrypt); 2163 2152 2164 - if (ivsize && (is_rfc3686 || !(alg->caam.geniv && encrypt))) 2153 + if (ivsize && ((is_rfc3686 && encrypt) || !alg->caam.geniv)) 2165 2154 append_load_as_imm(desc, req->iv, ivsize, 2166 2155 LDST_CLASS_1_CCB | 2167 2156 LDST_SRCDST_BYTE_CONTEXT | ··· 2546 2535 } 2547 2536 2548 2537 return ret; 2549 - } 2550 - 2551 - static int aead_givdecrypt(struct aead_request *req) 2552 - { 2553 - struct crypto_aead *aead = crypto_aead_reqtfm(req); 2554 - unsigned int ivsize = crypto_aead_ivsize(aead); 2555 - 2556 - if (req->cryptlen < ivsize) 2557 - return -EINVAL; 2558 - 2559 - req->cryptlen -= ivsize; 2560 - req->assoclen += ivsize; 2561 - 2562 - return aead_decrypt(req); 2563 2538 } 2564 2539 2565 2540 /* ··· 3207 3210 .setkey = aead_setkey, 3208 3211 .setauthsize = aead_setauthsize, 3209 3212 .encrypt = aead_encrypt, 3210 - .decrypt = aead_givdecrypt, 3213 + .decrypt = aead_decrypt, 3211 3214 .ivsize = AES_BLOCK_SIZE, 3212 3215 .maxauthsize = MD5_DIGEST_SIZE, 3213 3216 }, ··· 3253 3256 .setkey = aead_setkey, 3254 3257 .setauthsize = aead_setauthsize, 3255 3258 .encrypt = aead_encrypt, 3256 - .decrypt = aead_givdecrypt, 3259 + .decrypt = aead_decrypt, 3257 3260 .ivsize = AES_BLOCK_SIZE, 3258 3261 .maxauthsize = SHA1_DIGEST_SIZE, 3259 3262 }, ··· 3299 3302 .setkey = aead_setkey, 3300 3303 .setauthsize = aead_setauthsize, 3301 3304 .encrypt = aead_encrypt, 3302 - .decrypt = aead_givdecrypt, 3305 + .decrypt = aead_decrypt, 3303 3306 .ivsize = AES_BLOCK_SIZE, 3304 3307 .maxauthsize = SHA224_DIGEST_SIZE, 3305 3308 }, ··· 3345 3348 .setkey = aead_setkey, 3346 3349 .setauthsize = aead_setauthsize, 3347 3350 .encrypt = aead_encrypt, 3348 - .decrypt = aead_givdecrypt, 3351 + .decrypt = aead_decrypt, 3349 3352 .ivsize = AES_BLOCK_SIZE, 3350 3353 .maxauthsize = SHA256_DIGEST_SIZE, 3351 3354 }, ··· 3391 3394 .setkey = aead_setkey, 3392 3395 .setauthsize = aead_setauthsize, 3393 3396 .encrypt = aead_encrypt, 3394 - .decrypt = aead_givdecrypt, 3397 + .decrypt = aead_decrypt, 3395 3398 .ivsize = AES_BLOCK_SIZE, 3396 3399 .maxauthsize = SHA384_DIGEST_SIZE, 3397 3400 }, ··· 3437 3440 .setkey = aead_setkey, 3438 3441 .setauthsize = aead_setauthsize, 3439 3442 .encrypt = aead_encrypt, 3440 - .decrypt = aead_givdecrypt, 3443 + .decrypt = aead_decrypt, 3441 3444 .ivsize = AES_BLOCK_SIZE, 3442 3445 .maxauthsize = SHA512_DIGEST_SIZE, 3443 3446 }, ··· 3483 3486 .setkey = aead_setkey, 3484 3487 .setauthsize = aead_setauthsize, 3485 3488 .encrypt = 
aead_encrypt, 3486 - .decrypt = aead_givdecrypt, 3489 + .decrypt = aead_decrypt, 3487 3490 .ivsize = DES3_EDE_BLOCK_SIZE, 3488 3491 .maxauthsize = MD5_DIGEST_SIZE, 3489 3492 }, ··· 3531 3534 .setkey = aead_setkey, 3532 3535 .setauthsize = aead_setauthsize, 3533 3536 .encrypt = aead_encrypt, 3534 - .decrypt = aead_givdecrypt, 3537 + .decrypt = aead_decrypt, 3535 3538 .ivsize = DES3_EDE_BLOCK_SIZE, 3536 3539 .maxauthsize = SHA1_DIGEST_SIZE, 3537 3540 }, ··· 3579 3582 .setkey = aead_setkey, 3580 3583 .setauthsize = aead_setauthsize, 3581 3584 .encrypt = aead_encrypt, 3582 - .decrypt = aead_givdecrypt, 3585 + .decrypt = aead_decrypt, 3583 3586 .ivsize = DES3_EDE_BLOCK_SIZE, 3584 3587 .maxauthsize = SHA224_DIGEST_SIZE, 3585 3588 }, ··· 3627 3630 .setkey = aead_setkey, 3628 3631 .setauthsize = aead_setauthsize, 3629 3632 .encrypt = aead_encrypt, 3630 - .decrypt = aead_givdecrypt, 3633 + .decrypt = aead_decrypt, 3631 3634 .ivsize = DES3_EDE_BLOCK_SIZE, 3632 3635 .maxauthsize = SHA256_DIGEST_SIZE, 3633 3636 }, ··· 3675 3678 .setkey = aead_setkey, 3676 3679 .setauthsize = aead_setauthsize, 3677 3680 .encrypt = aead_encrypt, 3678 - .decrypt = aead_givdecrypt, 3681 + .decrypt = aead_decrypt, 3679 3682 .ivsize = DES3_EDE_BLOCK_SIZE, 3680 3683 .maxauthsize = SHA384_DIGEST_SIZE, 3681 3684 }, ··· 3723 3726 .setkey = aead_setkey, 3724 3727 .setauthsize = aead_setauthsize, 3725 3728 .encrypt = aead_encrypt, 3726 - .decrypt = aead_givdecrypt, 3729 + .decrypt = aead_decrypt, 3727 3730 .ivsize = DES3_EDE_BLOCK_SIZE, 3728 3731 .maxauthsize = SHA512_DIGEST_SIZE, 3729 3732 }, ··· 3769 3772 .setkey = aead_setkey, 3770 3773 .setauthsize = aead_setauthsize, 3771 3774 .encrypt = aead_encrypt, 3772 - .decrypt = aead_givdecrypt, 3775 + .decrypt = aead_decrypt, 3773 3776 .ivsize = DES_BLOCK_SIZE, 3774 3777 .maxauthsize = MD5_DIGEST_SIZE, 3775 3778 }, ··· 3815 3818 .setkey = aead_setkey, 3816 3819 .setauthsize = aead_setauthsize, 3817 3820 .encrypt = aead_encrypt, 3818 - .decrypt = aead_givdecrypt, 3821 + .decrypt = aead_decrypt, 3819 3822 .ivsize = DES_BLOCK_SIZE, 3820 3823 .maxauthsize = SHA1_DIGEST_SIZE, 3821 3824 }, ··· 3861 3864 .setkey = aead_setkey, 3862 3865 .setauthsize = aead_setauthsize, 3863 3866 .encrypt = aead_encrypt, 3864 - .decrypt = aead_givdecrypt, 3867 + .decrypt = aead_decrypt, 3865 3868 .ivsize = DES_BLOCK_SIZE, 3866 3869 .maxauthsize = SHA224_DIGEST_SIZE, 3867 3870 }, ··· 3907 3910 .setkey = aead_setkey, 3908 3911 .setauthsize = aead_setauthsize, 3909 3912 .encrypt = aead_encrypt, 3910 - .decrypt = aead_givdecrypt, 3913 + .decrypt = aead_decrypt, 3911 3914 .ivsize = DES_BLOCK_SIZE, 3912 3915 .maxauthsize = SHA256_DIGEST_SIZE, 3913 3916 }, ··· 3953 3956 .setkey = aead_setkey, 3954 3957 .setauthsize = aead_setauthsize, 3955 3958 .encrypt = aead_encrypt, 3956 - .decrypt = aead_givdecrypt, 3959 + .decrypt = aead_decrypt, 3957 3960 .ivsize = DES_BLOCK_SIZE, 3958 3961 .maxauthsize = SHA384_DIGEST_SIZE, 3959 3962 }, ··· 3999 4002 .setkey = aead_setkey, 4000 4003 .setauthsize = aead_setauthsize, 4001 4004 .encrypt = aead_encrypt, 4002 - .decrypt = aead_givdecrypt, 4005 + .decrypt = aead_decrypt, 4003 4006 .ivsize = DES_BLOCK_SIZE, 4004 4007 .maxauthsize = SHA512_DIGEST_SIZE, 4005 4008 }, ··· 4048 4051 .setkey = aead_setkey, 4049 4052 .setauthsize = aead_setauthsize, 4050 4053 .encrypt = aead_encrypt, 4051 - .decrypt = aead_givdecrypt, 4054 + .decrypt = aead_decrypt, 4052 4055 .ivsize = CTR_RFC3686_IV_SIZE, 4053 4056 .maxauthsize = MD5_DIGEST_SIZE, 4054 4057 }, ··· 4099 4102 .setkey = aead_setkey, 4100 
4103 .setauthsize = aead_setauthsize, 4101 4104 .encrypt = aead_encrypt, 4102 - .decrypt = aead_givdecrypt, 4105 + .decrypt = aead_decrypt, 4103 4106 .ivsize = CTR_RFC3686_IV_SIZE, 4104 4107 .maxauthsize = SHA1_DIGEST_SIZE, 4105 4108 }, ··· 4150 4153 .setkey = aead_setkey, 4151 4154 .setauthsize = aead_setauthsize, 4152 4155 .encrypt = aead_encrypt, 4153 - .decrypt = aead_givdecrypt, 4156 + .decrypt = aead_decrypt, 4154 4157 .ivsize = CTR_RFC3686_IV_SIZE, 4155 4158 .maxauthsize = SHA224_DIGEST_SIZE, 4156 4159 }, ··· 4201 4204 .setkey = aead_setkey, 4202 4205 .setauthsize = aead_setauthsize, 4203 4206 .encrypt = aead_encrypt, 4204 - .decrypt = aead_givdecrypt, 4207 + .decrypt = aead_decrypt, 4205 4208 .ivsize = CTR_RFC3686_IV_SIZE, 4206 4209 .maxauthsize = SHA256_DIGEST_SIZE, 4207 4210 }, ··· 4252 4255 .setkey = aead_setkey, 4253 4256 .setauthsize = aead_setauthsize, 4254 4257 .encrypt = aead_encrypt, 4255 - .decrypt = aead_givdecrypt, 4258 + .decrypt = aead_decrypt, 4256 4259 .ivsize = CTR_RFC3686_IV_SIZE, 4257 4260 .maxauthsize = SHA384_DIGEST_SIZE, 4258 4261 }, ··· 4303 4306 .setkey = aead_setkey, 4304 4307 .setauthsize = aead_setauthsize, 4305 4308 .encrypt = aead_encrypt, 4306 - .decrypt = aead_givdecrypt, 4309 + .decrypt = aead_decrypt, 4307 4310 .ivsize = CTR_RFC3686_IV_SIZE, 4308 4311 .maxauthsize = SHA512_DIGEST_SIZE, 4309 4312 },
+1 -1
drivers/dax/dax.c
··· 459 459 } 460 460 461 461 pgoff = linear_page_index(vma, pmd_addr); 462 - phys = pgoff_to_phys(dax_dev, pgoff, PAGE_SIZE); 462 + phys = pgoff_to_phys(dax_dev, pgoff, PMD_SIZE); 463 463 if (phys == -1) { 464 464 dev_dbg(dev, "%s: phys_to_pgoff(%#lx) failed\n", __func__, 465 465 pgoff);
+3 -2
drivers/firmware/arm_scpi.c
··· 709 709 struct mbox_client *cl = &pchan->cl; 710 710 struct device_node *shmem = of_parse_phandle(np, "shmem", idx); 711 711 712 - if (of_address_to_resource(shmem, 0, &res)) { 712 + ret = of_address_to_resource(shmem, 0, &res); 713 + of_node_put(shmem); 714 + if (ret) { 713 715 dev_err(dev, "failed to get SCPI payload mem resource\n"); 714 - ret = -EINVAL; 715 716 goto err; 716 717 } 717 718
+4 -4
drivers/firmware/dmi-id.c
··· 229 229 230 230 ret = device_register(dmi_dev); 231 231 if (ret) 232 - goto fail_free_dmi_dev; 232 + goto fail_put_dmi_dev; 233 233 234 234 return 0; 235 235 236 - fail_free_dmi_dev: 237 - kfree(dmi_dev); 238 - fail_class_unregister: 236 + fail_put_dmi_dev: 237 + put_device(dmi_dev); 239 238 239 + fail_class_unregister: 240 240 class_unregister(&dmi_class); 241 241 242 242 return ret;
+1
drivers/gpio/Kconfig
··· 1131 1131 1132 1132 config GPIO_MCP23S08 1133 1133 tristate "Microchip MCP23xxx I/O expander" 1134 + depends on OF_GPIO 1134 1135 select GPIOLIB_IRQCHIP 1135 1136 help 1136 1137 SPI/I2C driver for Microchip MCP23S08/MCP23S17/MCP23008/MCP23017
+1 -1
drivers/gpio/gpio-mcp23s08.c
··· 564 564 mcp->chip.direction_output = mcp23s08_direction_output; 565 565 mcp->chip.set = mcp23s08_set; 566 566 mcp->chip.dbg_show = mcp23s08_dbg_show; 567 - #ifdef CONFIG_OF 567 + #ifdef CONFIG_OF_GPIO 568 568 mcp->chip.of_gpio_n_cells = 2; 569 569 mcp->chip.of_node = dev->of_node; 570 570 #endif
+1 -1
drivers/gpio/gpio-sa1100.c
··· 155 155 { 156 156 irq_set_chip_and_handler(irq, &sa1100_gpio_irq_chip, 157 157 handle_edge_irq); 158 - irq_set_noprobe(irq); 158 + irq_set_probe(irq); 159 159 160 160 return 0; 161 161 }
-1
drivers/gpio/gpiolib-of.c
··· 16 16 #include <linux/errno.h> 17 17 #include <linux/module.h> 18 18 #include <linux/io.h> 19 - #include <linux/io-mapping.h> 20 19 #include <linux/gpio/consumer.h> 21 20 #include <linux/of.h> 22 21 #include <linux/of_address.h>
+1 -1
drivers/i2c/busses/i2c-bcm-kona.c
··· 643 643 if (rc < 0) { 644 644 dev_err(dev->device, 645 645 "restart cmd failed rc = %d\n", rc); 646 - goto xfer_send_stop; 646 + goto xfer_send_stop; 647 647 } 648 648 } 649 649
+1 -1
drivers/i2c/busses/i2c-cadence.c
··· 767 767 * depending on the scaling direction. 768 768 * 769 769 * Return: NOTIFY_STOP if the rate change should be aborted, NOTIFY_OK 770 - * to acknowedge the change, NOTIFY_DONE if the notification is 770 + * to acknowledge the change, NOTIFY_DONE if the notification is 771 771 * considered irrelevant. 772 772 */ 773 773 static int cdns_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
+10 -6
drivers/i2c/busses/i2c-designware-core.c
··· 367 367 dev_dbg(dev->dev, "Fast-mode HCNT:LCNT = %d:%d\n", hcnt, lcnt); 368 368 369 369 /* Configure SDA Hold Time if required */ 370 - if (dev->sda_hold_time) { 371 - reg = dw_readl(dev, DW_IC_COMP_VERSION); 372 - if (reg >= DW_IC_SDA_HOLD_MIN_VERS) 370 + reg = dw_readl(dev, DW_IC_COMP_VERSION); 371 + if (reg >= DW_IC_SDA_HOLD_MIN_VERS) { 372 + if (dev->sda_hold_time) { 373 373 dw_writel(dev, dev->sda_hold_time, DW_IC_SDA_HOLD); 374 - else 375 - dev_warn(dev->dev, 376 - "Hardware too old to adjust SDA hold time."); 374 + } else { 375 + /* Keep previous hold time setting if no one set it */ 376 + dev->sda_hold_time = dw_readl(dev, DW_IC_SDA_HOLD); 377 + } 378 + } else { 379 + dev_warn(dev->dev, 380 + "Hardware too old to adjust SDA hold time.\n"); 377 381 } 378 382 379 383 /* Configure Tx/Rx FIFO threshold levels */
+1 -1
drivers/i2c/busses/i2c-rcar.c
··· 378 378 } 379 379 380 380 dma_addr = dma_map_single(chan->device->dev, buf, len, dir); 381 - if (dma_mapping_error(dev, dma_addr)) { 381 + if (dma_mapping_error(chan->device->dev, dma_addr)) { 382 382 dev_dbg(dev, "dma map failed, using PIO\n"); 383 383 return; 384 384 }
+13 -1
drivers/i2c/busses/i2c-rk3x.c
··· 918 918 * Code adapted from i2c-cadence.c. 919 919 * 920 920 * Return: NOTIFY_STOP if the rate change should be aborted, NOTIFY_OK 921 - * to acknowedge the change, NOTIFY_DONE if the notification is 921 + * to acknowledge the change, NOTIFY_DONE if the notification is 922 922 * considered irrelevant. 923 923 */ 924 924 static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long ··· 1109 1109 spin_unlock_irqrestore(&i2c->lock, flags); 1110 1110 1111 1111 return ret < 0 ? ret : num; 1112 + } 1113 + 1114 + static __maybe_unused int rk3x_i2c_resume(struct device *dev) 1115 + { 1116 + struct rk3x_i2c *i2c = dev_get_drvdata(dev); 1117 + 1118 + rk3x_i2c_adapt_div(i2c, clk_get_rate(i2c->clk)); 1119 + 1120 + return 0; 1112 1121 } 1113 1122 1114 1123 static u32 rk3x_i2c_func(struct i2c_adapter *adap) ··· 1343 1334 return 0; 1344 1335 } 1345 1336 1337 + static SIMPLE_DEV_PM_OPS(rk3x_i2c_pm_ops, NULL, rk3x_i2c_resume); 1338 + 1346 1339 static struct platform_driver rk3x_i2c_driver = { 1347 1340 .probe = rk3x_i2c_probe, 1348 1341 .remove = rk3x_i2c_remove, 1349 1342 .driver = { 1350 1343 .name = "rk3x-i2c", 1351 1344 .of_match_table = rk3x_i2c_match, 1345 + .pm = &rk3x_i2c_pm_ops, 1352 1346 }, 1353 1347 }; 1354 1348
+1 -1
drivers/i2c/busses/i2c-sh_mobile.c
··· 610 610 return; 611 611 612 612 dma_addr = dma_map_single(chan->device->dev, pd->msg->buf, pd->msg->len, dir); 613 - if (dma_mapping_error(pd->dev, dma_addr)) { 613 + if (dma_mapping_error(chan->device->dev, dma_addr)) { 614 614 dev_dbg(pd->dev, "dma map failed, using PIO\n"); 615 615 return; 616 616 }
+11 -4
drivers/i2c/muxes/i2c-demux-pinctrl.c
··· 37 37 struct i2c_demux_pinctrl_chan chan[]; 38 38 }; 39 39 40 - static struct property status_okay = { .name = "status", .length = 3, .value = "ok" }; 41 - 42 40 static int i2c_demux_master_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num) 43 41 { 44 42 struct i2c_demux_pinctrl_priv *priv = adap->algo_data; ··· 105 107 of_changeset_revert(&priv->chan[new_chan].chgset); 106 108 err: 107 109 dev_err(priv->dev, "failed to setup demux-adapter %d (%d)\n", new_chan, ret); 110 + priv->cur_chan = -EINVAL; 108 111 return ret; 109 112 } 110 113 ··· 191 192 { 192 193 struct device_node *np = pdev->dev.of_node; 193 194 struct i2c_demux_pinctrl_priv *priv; 195 + struct property *props; 194 196 int num_chan, i, j, err; 195 197 196 198 num_chan = of_count_phandle_with_args(np, "i2c-parent", NULL); ··· 202 202 203 203 priv = devm_kzalloc(&pdev->dev, sizeof(*priv) 204 204 + num_chan * sizeof(struct i2c_demux_pinctrl_chan), GFP_KERNEL); 205 - if (!priv) 205 + 206 + props = devm_kcalloc(&pdev->dev, num_chan, sizeof(*props), GFP_KERNEL); 207 + 208 + if (!priv || !props) 206 209 return -ENOMEM; 207 210 208 211 err = of_property_read_string(np, "i2c-bus-name", &priv->bus_name); ··· 223 220 } 224 221 priv->chan[i].parent_np = adap_np; 225 222 223 + props[i].name = devm_kstrdup(&pdev->dev, "status", GFP_KERNEL); 224 + props[i].value = devm_kstrdup(&pdev->dev, "ok", GFP_KERNEL); 225 + props[i].length = 3; 226 + 226 227 of_changeset_init(&priv->chan[i].chgset); 227 - of_changeset_update_property(&priv->chan[i].chgset, adap_np, &status_okay); 228 + of_changeset_update_property(&priv->chan[i].chgset, adap_np, &props[i]); 228 229 } 229 230 230 231 priv->num_chan = num_chan;
+11
drivers/iio/accel/bmc150-accel-core.c
··· 67 67 #define BMC150_ACCEL_REG_PMU_BW 0x10 68 68 #define BMC150_ACCEL_DEF_BW 125 69 69 70 + #define BMC150_ACCEL_REG_RESET 0x14 71 + #define BMC150_ACCEL_RESET_VAL 0xB6 72 + 70 73 #define BMC150_ACCEL_REG_INT_MAP_0 0x19 71 74 #define BMC150_ACCEL_INT_MAP_0_BIT_SLOPE BIT(2) 72 75 ··· 1499 1496 struct device *dev = regmap_get_device(data->regmap); 1500 1497 int ret, i; 1501 1498 unsigned int val; 1499 + 1500 + /* 1501 + * Reset chip to get it in a known good state. A delay of 1.8ms after 1502 + * reset is required according to the data sheets of supported chips. 1503 + */ 1504 + regmap_write(data->regmap, BMC150_ACCEL_REG_RESET, 1505 + BMC150_ACCEL_RESET_VAL); 1506 + usleep_range(1800, 2500); 1502 1507 1503 1508 ret = regmap_read(data->regmap, BMC150_ACCEL_REG_CHIP_ID, &val); 1504 1509 if (ret < 0) {
+1
drivers/iio/accel/kxsd9.c
··· 166 166 ret = spi_w8r8(st->us, KXSD9_READ(KXSD9_REG_CTRL_C)); 167 167 if (ret < 0) 168 168 goto error_ret; 169 + *val = 0; 169 170 *val2 = kxsd9_micro_scales[ret & KXSD9_FS_MASK]; 170 171 ret = IIO_VAL_INT_PLUS_MICRO; 171 172 break;
+2 -2
drivers/iio/common/hid-sensors/hid-sensor-attributes.c
··· 56 56 {HID_USAGE_SENSOR_ALS, 0, 1, 0}, 57 57 {HID_USAGE_SENSOR_ALS, HID_USAGE_SENSOR_UNITS_LUX, 1, 0}, 58 58 59 - {HID_USAGE_SENSOR_PRESSURE, 0, 100000, 0}, 60 - {HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 1, 0}, 59 + {HID_USAGE_SENSOR_PRESSURE, 0, 100, 0}, 60 + {HID_USAGE_SENSOR_PRESSURE, HID_USAGE_SENSOR_UNITS_PASCAL, 0, 1000}, 61 61 }; 62 62 63 63 static int pow_10(unsigned power)
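The two corrected pressure rows make sense once you note that IIO reports pressure in kilopascal and that this table's scale pairs read as an integer plus microunits: one pascal is 0.001 kPa, which encodes as 0 and 1000 (0 + 1000 x 10^-6 = 0.001), and the unitless default row scaling by 100 is consistent with the HID default pressure unit being bar (1 bar = 100 kPa). The old rows (100000 and 1) effectively reported pascal instead.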
+2 -2
drivers/iio/industrialio-buffer.c
··· 110 110 DEFINE_WAIT_FUNC(wait, woken_wake_function); 111 111 size_t datum_size; 112 112 size_t to_wait; 113 - int ret; 113 + int ret = 0; 114 114 115 115 if (!indio_dev->info) 116 116 return -ENODEV; ··· 153 153 ret = rb->access->read_first_n(rb, n, buf); 154 154 if (ret == 0 && (filp->f_flags & O_NONBLOCK)) 155 155 ret = -EAGAIN; 156 - } while (ret == 0); 156 + } while (ret == 0); 157 157 remove_wait_queue(&rb->pollq, &wait); 158 158 159 159 return ret;
+2 -3
drivers/iio/industrialio-core.c
··· 613 613 return sprintf(buf, "%d.%09u\n", vals[0], vals[1]); 614 614 case IIO_VAL_FRACTIONAL: 615 615 tmp = div_s64((s64)vals[0] * 1000000000LL, vals[1]); 616 - vals[1] = do_div(tmp, 1000000000LL); 617 - vals[0] = tmp; 618 - return sprintf(buf, "%d.%09u\n", vals[0], vals[1]); 616 + vals[0] = (int)div_s64_rem(tmp, 1000000000, &vals[1]); 617 + return sprintf(buf, "%d.%09u\n", vals[0], abs(vals[1])); 619 618 case IIO_VAL_FRACTIONAL_LOG2: 620 619 tmp = (s64)vals[0] * 1000000000LL >> vals[1]; 621 620 vals[1] = do_div(tmp, 1000000000LL);
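The industrialio-core.c hunk swaps do_div(), which treats its 64-bit dividend as unsigned, for div_s64_rem() so that negative IIO_VAL_FRACTIONAL values keep their sign, with the remainder printed as an absolute value. A self-contained userspace sketch of the same signed split (plain C, not the kernel helpers; fmt_fractional() and the sample values are invented for illustration):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

/* Split val/div into an integer part and a nine-digit fractional part the
 * way the fixed code does: signed divide, then print |remainder|. */
static void fmt_fractional(int val, int div, char *buf, size_t len)
{
        int64_t tmp = (int64_t)val * 1000000000LL / div;
        int32_t whole = (int32_t)(tmp / 1000000000LL);
        int32_t rem = (int32_t)(tmp % 1000000000LL);

        snprintf(buf, len, "%d.%09u", whole, (unsigned int)abs(rem));
}

int main(void)
{
        char buf[32];

        fmt_fractional(-5, 4, buf, sizeof(buf));        /* prints -1.250000000 */
        puts(buf);

        /* An unsigned 64-bit division of the same intermediate, which is
         * roughly what the old do_div() path did, turns the negative value
         * into junk. */
        uint64_t utmp = (uint64_t)((int64_t)-5 * 1000000000LL / 4);
        printf("unsigned: %llu\n", (unsigned long long)(utmp / 1000000000ULL));
        return 0;
}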
+2 -11
drivers/infiniband/core/multicast.c
··· 106 106 atomic_t refcount; 107 107 enum mcast_group_state state; 108 108 struct ib_sa_query *query; 109 - int query_id; 110 109 u16 pkey_index; 111 110 u8 leave_state; 112 111 int retries; ··· 339 340 member->multicast.comp_mask, 340 341 3000, GFP_KERNEL, join_handler, group, 341 342 &group->query); 342 - if (ret >= 0) { 343 - group->query_id = ret; 344 - ret = 0; 345 - } 346 - return ret; 343 + return (ret > 0) ? 0 : ret; 347 344 } 348 345 349 346 static int send_leave(struct mcast_group *group, u8 leave_state) ··· 359 364 IB_SA_MCMEMBER_REC_JOIN_STATE, 360 365 3000, GFP_KERNEL, leave_handler, 361 366 group, &group->query); 362 - if (ret >= 0) { 363 - group->query_id = ret; 364 - ret = 0; 365 - } 366 - return ret; 367 + return (ret > 0) ? 0 : ret; 367 368 } 368 369 369 370 static void join_group(struct mcast_group *group, struct mcast_member *member,
+1 -1
drivers/infiniband/hw/cxgb4/qp.c
··· 683 683 return 0; 684 684 } 685 685 686 - void _free_qp(struct kref *kref) 686 + static void _free_qp(struct kref *kref) 687 687 { 688 688 struct c4iw_qp *qhp; 689 689
+81 -11
drivers/infiniband/hw/hfi1/chip.c
··· 9490 9490 write_csr(dd, DC_LCB_CFG_TX_FIFOS_RESET, 0x00); 9491 9491 } 9492 9492 9493 + /* 9494 + * Perform a test read on the QSFP. Return 0 on success, -ERRNO 9495 + * on error. 9496 + */ 9497 + static int test_qsfp_read(struct hfi1_pportdata *ppd) 9498 + { 9499 + int ret; 9500 + u8 status; 9501 + 9502 + /* report success if not a QSFP */ 9503 + if (ppd->port_type != PORT_TYPE_QSFP) 9504 + return 0; 9505 + 9506 + /* read byte 2, the status byte */ 9507 + ret = one_qsfp_read(ppd, ppd->dd->hfi1_id, 2, &status, 1); 9508 + if (ret < 0) 9509 + return ret; 9510 + if (ret != 1) 9511 + return -EIO; 9512 + 9513 + return 0; /* success */ 9514 + } 9515 + 9516 + /* 9517 + * Values for QSFP retry. 9518 + * 9519 + * Give up after 10s (20 x 500ms). The overall timeout was empirically 9520 + * arrived at from experience on a large cluster. 9521 + */ 9522 + #define MAX_QSFP_RETRIES 20 9523 + #define QSFP_RETRY_WAIT 500 /* msec */ 9524 + 9525 + /* 9526 + * Try a QSFP read. If it fails, schedule a retry for later. 9527 + * Called on first link activation after driver load. 9528 + */ 9529 + static void try_start_link(struct hfi1_pportdata *ppd) 9530 + { 9531 + if (test_qsfp_read(ppd)) { 9532 + /* read failed */ 9533 + if (ppd->qsfp_retry_count >= MAX_QSFP_RETRIES) { 9534 + dd_dev_err(ppd->dd, "QSFP not responding, giving up\n"); 9535 + return; 9536 + } 9537 + dd_dev_info(ppd->dd, 9538 + "QSFP not responding, waiting and retrying %d\n", 9539 + (int)ppd->qsfp_retry_count); 9540 + ppd->qsfp_retry_count++; 9541 + queue_delayed_work(ppd->hfi1_wq, &ppd->start_link_work, 9542 + msecs_to_jiffies(QSFP_RETRY_WAIT)); 9543 + return; 9544 + } 9545 + ppd->qsfp_retry_count = 0; 9546 + 9547 + /* 9548 + * Tune the SerDes to a ballpark setting for optimal signal and bit 9549 + * error rate. Needs to be done before starting the link. 9550 + */ 9551 + tune_serdes(ppd); 9552 + start_link(ppd); 9553 + } 9554 + 9555 + /* 9556 + * Workqueue function to start the link after a delay. 
9557 + */ 9558 + void handle_start_link(struct work_struct *work) 9559 + { 9560 + struct hfi1_pportdata *ppd = container_of(work, struct hfi1_pportdata, 9561 + start_link_work.work); 9562 + try_start_link(ppd); 9563 + } 9564 + 9493 9565 int bringup_serdes(struct hfi1_pportdata *ppd) 9494 9566 { 9495 9567 struct hfi1_devdata *dd = ppd->dd; ··· 9597 9525 set_qsfp_int_n(ppd, 1); 9598 9526 } 9599 9527 9600 - /* 9601 - * Tune the SerDes to a ballpark setting for 9602 - * optimal signal and bit error rate 9603 - * Needs to be done before starting the link 9604 - */ 9605 - tune_serdes(ppd); 9606 - 9607 - return start_link(ppd); 9528 + try_start_link(ppd); 9529 + return 0; 9608 9530 } 9609 9531 9610 9532 void hfi1_quiet_serdes(struct hfi1_pportdata *ppd) ··· 9614 9548 */ 9615 9549 ppd->driver_link_ready = 0; 9616 9550 ppd->link_enabled = 0; 9551 + 9552 + ppd->qsfp_retry_count = MAX_QSFP_RETRIES; /* prevent more retries */ 9553 + flush_delayed_work(&ppd->start_link_work); 9554 + cancel_delayed_work_sync(&ppd->start_link_work); 9617 9555 9618 9556 ppd->offline_disabled_reason = 9619 9557 HFI1_ODR_MASK(OPA_LINKDOWN_REASON_SMA_DISABLED); ··· 12935 12865 */ 12936 12866 static int set_up_context_variables(struct hfi1_devdata *dd) 12937 12867 { 12938 - int num_kernel_contexts; 12868 + unsigned long num_kernel_contexts; 12939 12869 int total_contexts; 12940 12870 int ret; 12941 12871 unsigned ngroups; ··· 12964 12894 */ 12965 12895 if (num_kernel_contexts > (dd->chip_send_contexts - num_vls - 1)) { 12966 12896 dd_dev_err(dd, 12967 - "Reducing # kernel rcv contexts to: %d, from %d\n", 12897 + "Reducing # kernel rcv contexts to: %d, from %lu\n", 12968 12898 (int)(dd->chip_send_contexts - num_vls - 1), 12969 - (int)num_kernel_contexts); 12899 + num_kernel_contexts); 12970 12900 num_kernel_contexts = dd->chip_send_contexts - num_vls - 1; 12971 12901 } 12972 12902 /*
+1
drivers/infiniband/hw/hfi1/chip.h
··· 706 706 void handle_link_down(struct work_struct *work); 707 707 void handle_link_downgrade(struct work_struct *work); 708 708 void handle_link_bounce(struct work_struct *work); 709 + void handle_start_link(struct work_struct *work); 709 710 void handle_sma_message(struct work_struct *work); 710 711 void reset_qsfp(struct hfi1_pportdata *ppd); 711 712 void qsfp_event(struct work_struct *work);
+52 -80
drivers/infiniband/hw/hfi1/debugfs.c
··· 59 59 60 60 static struct dentry *hfi1_dbg_root; 61 61 62 + /* wrappers to enforce srcu in seq file */ 63 + static ssize_t hfi1_seq_read( 64 + struct file *file, 65 + char __user *buf, 66 + size_t size, 67 + loff_t *ppos) 68 + { 69 + struct dentry *d = file->f_path.dentry; 70 + int srcu_idx; 71 + ssize_t r; 72 + 73 + r = debugfs_use_file_start(d, &srcu_idx); 74 + if (likely(!r)) 75 + r = seq_read(file, buf, size, ppos); 76 + debugfs_use_file_finish(srcu_idx); 77 + return r; 78 + } 79 + 80 + static loff_t hfi1_seq_lseek( 81 + struct file *file, 82 + loff_t offset, 83 + int whence) 84 + { 85 + struct dentry *d = file->f_path.dentry; 86 + int srcu_idx; 87 + loff_t r; 88 + 89 + r = debugfs_use_file_start(d, &srcu_idx); 90 + if (likely(!r)) 91 + r = seq_lseek(file, offset, whence); 92 + debugfs_use_file_finish(srcu_idx); 93 + return r; 94 + } 95 + 62 96 #define private2dd(file) (file_inode(file)->i_private) 63 97 #define private2ppd(file) (file_inode(file)->i_private) 64 98 ··· 121 87 static const struct file_operations _##name##_file_ops = { \ 122 88 .owner = THIS_MODULE, \ 123 89 .open = _##name##_open, \ 124 - .read = seq_read, \ 125 - .llseek = seq_lseek, \ 90 + .read = hfi1_seq_read, \ 91 + .llseek = hfi1_seq_lseek, \ 126 92 .release = seq_release \ 127 93 } 128 94 ··· 139 105 DEBUGFS_FILE_CREATE(#name, parent, data, &_##name##_file_ops, S_IRUGO) 140 106 141 107 static void *_opcode_stats_seq_start(struct seq_file *s, loff_t *pos) 142 - __acquires(RCU) 143 108 { 144 109 struct hfi1_opcode_stats_perctx *opstats; 145 110 146 - rcu_read_lock(); 147 111 if (*pos >= ARRAY_SIZE(opstats->stats)) 148 112 return NULL; 149 113 return pos; ··· 158 126 } 159 127 160 128 static void _opcode_stats_seq_stop(struct seq_file *s, void *v) 161 - __releases(RCU) 162 129 { 163 - rcu_read_unlock(); 164 130 } 165 131 166 132 static int _opcode_stats_seq_show(struct seq_file *s, void *v) ··· 315 285 DEBUGFS_FILE_OPS(qp_stats); 316 286 317 287 static void *_sdes_seq_start(struct seq_file *s, loff_t *pos) 318 - __acquires(RCU) 319 288 { 320 289 struct hfi1_ibdev *ibd; 321 290 struct hfi1_devdata *dd; 322 291 323 - rcu_read_lock(); 324 292 ibd = (struct hfi1_ibdev *)s->private; 325 293 dd = dd_from_dev(ibd); 326 294 if (!dd->per_sdma || *pos >= dd->num_sdma) ··· 338 310 } 339 311 340 312 static void _sdes_seq_stop(struct seq_file *s, void *v) 341 - __releases(RCU) 342 313 { 343 - rcu_read_unlock(); 344 314 } 345 315 346 316 static int _sdes_seq_show(struct seq_file *s, void *v) ··· 365 339 struct hfi1_devdata *dd; 366 340 ssize_t rval; 367 341 368 - rcu_read_lock(); 369 342 dd = private2dd(file); 370 343 avail = hfi1_read_cntrs(dd, NULL, &counters); 371 344 rval = simple_read_from_buffer(buf, count, ppos, counters, avail); 372 - rcu_read_unlock(); 373 345 return rval; 374 346 } 375 347 ··· 380 356 struct hfi1_devdata *dd; 381 357 ssize_t rval; 382 358 383 - rcu_read_lock(); 384 359 dd = private2dd(file); 385 360 avail = hfi1_read_cntrs(dd, &names, NULL); 386 361 rval = simple_read_from_buffer(buf, count, ppos, names, avail); 387 - rcu_read_unlock(); 388 362 return rval; 389 363 } 390 364 ··· 405 383 struct hfi1_devdata *dd; 406 384 ssize_t rval; 407 385 408 - rcu_read_lock(); 409 386 dd = private2dd(file); 410 387 avail = hfi1_read_portcntrs(dd->pport, &names, NULL); 411 388 rval = simple_read_from_buffer(buf, count, ppos, names, avail); 412 - rcu_read_unlock(); 413 389 return rval; 414 390 } 415 391 ··· 420 400 struct hfi1_pportdata *ppd; 421 401 ssize_t rval; 422 402 423 - rcu_read_lock(); 424 403 ppd = 
private2ppd(file); 425 404 avail = hfi1_read_portcntrs(ppd, NULL, &counters); 426 405 rval = simple_read_from_buffer(buf, count, ppos, counters, avail); 427 - rcu_read_unlock(); 428 406 return rval; 429 407 } 430 408 ··· 452 434 int used; 453 435 int i; 454 436 455 - rcu_read_lock(); 456 437 ppd = private2ppd(file); 457 438 dd = ppd->dd; 458 439 size = PAGE_SIZE; 459 440 used = 0; 460 441 tmp = kmalloc(size, GFP_KERNEL); 461 - if (!tmp) { 462 - rcu_read_unlock(); 442 + if (!tmp) 463 443 return -ENOMEM; 464 - } 465 444 466 445 scratch0 = read_csr(dd, ASIC_CFG_SCRATCH); 467 446 used += scnprintf(tmp + used, size - used, ··· 485 470 used += scnprintf(tmp + used, size - used, "Write bits to clear\n"); 486 471 487 472 ret = simple_read_from_buffer(buf, count, ppos, tmp, used); 488 - rcu_read_unlock(); 489 473 kfree(tmp); 490 474 return ret; 491 475 } ··· 500 486 u64 scratch0; 501 487 u64 clear; 502 488 503 - rcu_read_lock(); 504 489 ppd = private2ppd(file); 505 490 dd = ppd->dd; 506 491 507 492 buff = kmalloc(count + 1, GFP_KERNEL); 508 - if (!buff) { 509 - ret = -ENOMEM; 510 - goto do_return; 511 - } 493 + if (!buff) 494 + return -ENOMEM; 512 495 513 496 ret = copy_from_user(buff, buf, count); 514 497 if (ret > 0) { ··· 538 527 539 528 do_free: 540 529 kfree(buff); 541 - do_return: 542 - rcu_read_unlock(); 543 530 return ret; 544 531 } 545 532 ··· 551 542 char *tmp; 552 543 int ret; 553 544 554 - rcu_read_lock(); 555 545 ppd = private2ppd(file); 556 546 tmp = kmalloc(PAGE_SIZE, GFP_KERNEL); 557 - if (!tmp) { 558 - rcu_read_unlock(); 547 + if (!tmp) 559 548 return -ENOMEM; 560 - } 561 549 562 550 ret = qsfp_dump(ppd, tmp, PAGE_SIZE); 563 551 if (ret > 0) 564 552 ret = simple_read_from_buffer(buf, count, ppos, tmp, ret); 565 - rcu_read_unlock(); 566 553 kfree(tmp); 567 554 return ret; 568 555 } ··· 574 569 int offset; 575 570 int total_written; 576 571 577 - rcu_read_lock(); 578 572 ppd = private2ppd(file); 579 573 580 574 /* byte offset format: [offsetSize][i2cAddr][offsetHigh][offsetLow] */ ··· 581 577 offset = *ppos & 0xffff; 582 578 583 579 /* explicitly reject invalid address 0 to catch cp and cat */ 584 - if (i2c_addr == 0) { 585 - ret = -EINVAL; 586 - goto _return; 587 - } 580 + if (i2c_addr == 0) 581 + return -EINVAL; 588 582 589 583 buff = kmalloc(count, GFP_KERNEL); 590 - if (!buff) { 591 - ret = -ENOMEM; 592 - goto _return; 593 - } 584 + if (!buff) 585 + return -ENOMEM; 594 586 595 587 ret = copy_from_user(buff, buf, count); 596 588 if (ret > 0) { ··· 606 606 607 607 _free: 608 608 kfree(buff); 609 - _return: 610 - rcu_read_unlock(); 611 609 return ret; 612 610 } 613 611 ··· 634 636 int offset; 635 637 int total_read; 636 638 637 - rcu_read_lock(); 638 639 ppd = private2ppd(file); 639 640 640 641 /* byte offset format: [offsetSize][i2cAddr][offsetHigh][offsetLow] */ ··· 641 644 offset = *ppos & 0xffff; 642 645 643 646 /* explicitly reject invalid address 0 to catch cp and cat */ 644 - if (i2c_addr == 0) { 645 - ret = -EINVAL; 646 - goto _return; 647 - } 647 + if (i2c_addr == 0) 648 + return -EINVAL; 648 649 649 650 buff = kmalloc(count, GFP_KERNEL); 650 - if (!buff) { 651 - ret = -ENOMEM; 652 - goto _return; 653 - } 651 + if (!buff) 652 + return -ENOMEM; 654 653 655 654 total_read = i2c_read(ppd, target, i2c_addr, offset, buff, count); 656 655 if (total_read < 0) { ··· 666 673 667 674 _free: 668 675 kfree(buff); 669 - _return: 670 - rcu_read_unlock(); 671 676 return ret; 672 677 } 673 678 ··· 692 701 int ret; 693 702 int total_written; 694 703 695 - rcu_read_lock(); 696 - if (*ppos + 
count > QSFP_PAGESIZE * 4) { /* base page + page00-page03 */ 697 - ret = -EINVAL; 698 - goto _return; 699 - } 704 + if (*ppos + count > QSFP_PAGESIZE * 4) /* base page + page00-page03 */ 705 + return -EINVAL; 700 706 701 707 ppd = private2ppd(file); 702 708 703 709 buff = kmalloc(count, GFP_KERNEL); 704 - if (!buff) { 705 - ret = -ENOMEM; 706 - goto _return; 707 - } 710 + if (!buff) 711 + return -ENOMEM; 708 712 709 713 ret = copy_from_user(buff, buf, count); 710 714 if (ret > 0) { 711 715 ret = -EFAULT; 712 716 goto _free; 713 717 } 714 - 715 718 total_written = qsfp_write(ppd, target, *ppos, buff, count); 716 719 if (total_written < 0) { 717 720 ret = total_written; ··· 718 733 719 734 _free: 720 735 kfree(buff); 721 - _return: 722 - rcu_read_unlock(); 723 736 return ret; 724 737 } 725 738 ··· 744 761 int ret; 745 762 int total_read; 746 763 747 - rcu_read_lock(); 748 764 if (*ppos + count > QSFP_PAGESIZE * 4) { /* base page + page00-page03 */ 749 765 ret = -EINVAL; 750 766 goto _return; ··· 776 794 _free: 777 795 kfree(buff); 778 796 _return: 779 - rcu_read_unlock(); 780 797 return ret; 781 798 } 782 799 ··· 991 1010 debugfs_remove_recursive(ibd->hfi1_ibdev_dbg); 992 1011 out: 993 1012 ibd->hfi1_ibdev_dbg = NULL; 994 - synchronize_rcu(); 995 1013 } 996 1014 997 1015 /* ··· 1015 1035 }; 1016 1036 1017 1037 static void *_driver_stats_names_seq_start(struct seq_file *s, loff_t *pos) 1018 - __acquires(RCU) 1019 1038 { 1020 - rcu_read_lock(); 1021 1039 if (*pos >= ARRAY_SIZE(hfi1_statnames)) 1022 1040 return NULL; 1023 1041 return pos; ··· 1033 1055 } 1034 1056 1035 1057 static void _driver_stats_names_seq_stop(struct seq_file *s, void *v) 1036 - __releases(RCU) 1037 1058 { 1038 - rcu_read_unlock(); 1039 1059 } 1040 1060 1041 1061 static int _driver_stats_names_seq_show(struct seq_file *s, void *v) ··· 1049 1073 DEBUGFS_FILE_OPS(driver_stats_names); 1050 1074 1051 1075 static void *_driver_stats_seq_start(struct seq_file *s, loff_t *pos) 1052 - __acquires(RCU) 1053 1076 { 1054 - rcu_read_lock(); 1055 1077 if (*pos >= ARRAY_SIZE(hfi1_statnames)) 1056 1078 return NULL; 1057 1079 return pos; ··· 1064 1090 } 1065 1091 1066 1092 static void _driver_stats_seq_stop(struct seq_file *s, void *v) 1067 - __releases(RCU) 1068 1093 { 1069 - rcu_read_unlock(); 1070 1094 } 1071 1095 1072 1096 static u64 hfi1_sps_ints(void)
+3 -1
drivers/infiniband/hw/hfi1/hfi.h
··· 605 605 struct work_struct freeze_work; 606 606 struct work_struct link_downgrade_work; 607 607 struct work_struct link_bounce_work; 608 + struct delayed_work start_link_work; 608 609 /* host link state variables */ 609 610 struct mutex hls_lock; 610 611 u32 host_link_state; ··· 660 659 u8 linkinit_reason; 661 660 u8 local_tx_rate; /* rate given to 8051 firmware */ 662 661 u8 last_pstate; /* info only */ 662 + u8 qsfp_retry_count; 663 663 664 664 /* placeholders for IB MAD packet settings */ 665 665 u8 overrun_threshold; ··· 1806 1804 extern unsigned int hfi1_cu; 1807 1805 extern unsigned int user_credit_return_threshold; 1808 1806 extern int num_user_contexts; 1809 - extern unsigned n_krcvqs; 1807 + extern unsigned long n_krcvqs; 1810 1808 extern uint krcvqs[]; 1811 1809 extern int krcvqsset; 1812 1810 extern uint kdeth_qp;
+2 -1
drivers/infiniband/hw/hfi1/init.c
··· 94 94 MODULE_PARM_DESC(krcvqs, "Array of the number of non-control kernel receive queues by VL"); 95 95 96 96 /* computed based on above array */ 97 - unsigned n_krcvqs; 97 + unsigned long n_krcvqs; 98 98 99 99 static unsigned hfi1_rcvarr_split = 25; 100 100 module_param_named(rcvarr_split, hfi1_rcvarr_split, uint, S_IRUGO); ··· 500 500 INIT_WORK(&ppd->link_downgrade_work, handle_link_downgrade); 501 501 INIT_WORK(&ppd->sma_message_work, handle_sma_message); 502 502 INIT_WORK(&ppd->link_bounce_work, handle_link_bounce); 503 + INIT_DELAYED_WORK(&ppd->start_link_work, handle_start_link); 503 504 INIT_WORK(&ppd->linkstate_active_work, receive_interrupt_work); 504 505 INIT_WORK(&ppd->qsfp_info.qsfp_work, qsfp_event); 505 506
+6 -6
drivers/infiniband/hw/hfi1/mad.c
··· 2604 2604 u8 lq, num_vls; 2605 2605 u8 res_lli, res_ler; 2606 2606 u64 port_mask; 2607 - unsigned long port_num; 2607 + u8 port_num; 2608 2608 unsigned long vl; 2609 2609 u32 vl_select_mask; 2610 2610 int vfi; ··· 2638 2638 */ 2639 2639 port_mask = be64_to_cpu(req->port_select_mask[3]); 2640 2640 port_num = find_first_bit((unsigned long *)&port_mask, 2641 - sizeof(port_mask)); 2641 + sizeof(port_mask) * 8); 2642 2642 2643 - if ((u8)port_num != port) { 2643 + if (port_num != port) { 2644 2644 pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD; 2645 2645 return reply((struct ib_mad_hdr *)pmp); 2646 2646 } ··· 2842 2842 */ 2843 2843 port_mask = be64_to_cpu(req->port_select_mask[3]); 2844 2844 port_num = find_first_bit((unsigned long *)&port_mask, 2845 - sizeof(port_mask)); 2845 + sizeof(port_mask) * 8); 2846 2846 2847 2847 if (port_num != port) { 2848 2848 pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD; ··· 3015 3015 */ 3016 3016 port_mask = be64_to_cpu(req->port_select_mask[3]); 3017 3017 port_num = find_first_bit((unsigned long *)&port_mask, 3018 - sizeof(port_mask)); 3018 + sizeof(port_mask) * 8); 3019 3019 3020 3020 if (port_num != port) { 3021 3021 pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD; ··· 3252 3252 */ 3253 3253 port_mask = be64_to_cpu(req->port_select_mask[3]); 3254 3254 port_num = find_first_bit((unsigned long *)&port_mask, 3255 - sizeof(port_mask)); 3255 + sizeof(port_mask) * 8); 3256 3256 3257 3257 if (port_num != port) { 3258 3258 pmp->mad_hdr.status |= IB_SMP_INVALID_FIELD;
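All four mad.c hunks fix the same mistake: find_first_bit() takes its size argument in bits, so sizeof(port_mask) (8, since port_mask is a u64) limited the search to bits 0-7 of the port-select mask; the mlx5 mem.c change further down is the same class of bug, fixed there with BITS_PER_LONG. A tiny userspace illustration (first_bit() is a stand-in, not the kernel implementation, and the port number is an arbitrary example):

#include <stdio.h>
#include <limits.h>

/* Minimal first-set-bit search with the size given in bits, matching the
 * kernel's find_first_bit() contract (returns the size when no bit in the
 * searched range is set). */
static unsigned int first_bit(unsigned long word, unsigned int nbits)
{
        for (unsigned int i = 0; i < nbits; i++)
                if (word & (1UL << i))
                        return i;
        return nbits;
}

int main(void)
{
        unsigned long port_mask = 1UL << 13;    /* say the selected port is 13 */

        /* sizeof() is a byte count: only bits 0-7 are searched, the set bit
         * is never seen and "not found" (8) comes back. */
        printf("size in bytes: %u\n", first_bit(port_mask, sizeof(port_mask)));

        /* sizeof() * 8 (CHAR_BIT, or BITS_PER_LONG in the kernel) searches
         * the whole word and finds bit 13. */
        printf("size in bits:  %u\n",
               first_bit(port_mask, sizeof(port_mask) * CHAR_BIT));
        return 0;
}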
+12
drivers/infiniband/hw/hfi1/pio_copy.c
··· 771 771 read_extra_bytes(pbuf, from, to_fill); 772 772 from += to_fill; 773 773 nbytes -= to_fill; 774 + /* may not be enough valid bytes left to align */ 775 + if (extra > nbytes) 776 + extra = nbytes; 774 777 775 778 /* ...now write carry */ 776 779 dest = pbuf->start + (pbuf->qw_written * sizeof(u64)); ··· 801 798 read_low_bytes(pbuf, from, extra); 802 799 from += extra; 803 800 nbytes -= extra; 801 + /* 802 + * If no bytes are left, return early - we are done. 803 + * NOTE: This short-circuit is *required* because 804 + * "extra" may have been reduced in size and "from" 805 + * is not aligned, as required when leaving this 806 + * if block. 807 + */ 808 + if (nbytes == 0) 809 + return; 804 810 } 805 811 806 812 /* at this point, from is QW aligned */
+4 -1
drivers/infiniband/hw/hfi1/user_sdma.c
··· 114 114 #define KDETH_HCRC_LOWER_SHIFT 24 115 115 #define KDETH_HCRC_LOWER_MASK 0xff 116 116 117 + #define AHG_KDETH_INTR_SHIFT 12 118 + 117 119 #define PBC2LRH(x) ((((x) & 0xfff) << 2) - 4) 118 120 #define LRH2PBC(x) ((((x) >> 2) + 1) & 0xfff) 119 121 ··· 1482 1480 /* Clear KDETH.SH on last packet */ 1483 1481 if (unlikely(tx->flags & TXREQ_FLAGS_REQ_LAST_PKT)) { 1484 1482 val |= cpu_to_le16(KDETH_GET(hdr->kdeth.ver_tid_offset, 1485 - INTR) >> 16); 1483 + INTR) << 1484 + AHG_KDETH_INTR_SHIFT); 1486 1485 val &= cpu_to_le16(~(1U << 13)); 1487 1486 AHG_HEADER_SET(req->ahg, diff, 7, 16, 14, val); 1488 1487 } else {
+1
drivers/infiniband/hw/i40iw/i40iw_hw.c
··· 265 265 info.dont_send_fin = false; 266 266 if (iwqp->sc_qp.term_flags && (state == I40IW_QP_STATE_ERROR)) 267 267 info.reset_tcp_conn = true; 268 + iwqp->hw_iwarp_state = state; 268 269 i40iw_hw_modify_qp(iwqp->iwdev, iwqp, &info, 0); 269 270 } 270 271
+3 -5
drivers/infiniband/hw/i40iw/i40iw_main.c
··· 100 100 .notifier_call = i40iw_net_event 101 101 }; 102 102 103 - static int i40iw_notifiers_registered; 103 + static atomic_t i40iw_notifiers_registered; 104 104 105 105 /** 106 106 * i40iw_find_i40e_handler - find a handler given a client info ··· 1342 1342 */ 1343 1343 static void i40iw_register_notifiers(void) 1344 1344 { 1345 - if (!i40iw_notifiers_registered) { 1345 + if (atomic_inc_return(&i40iw_notifiers_registered) == 1) { 1346 1346 register_inetaddr_notifier(&i40iw_inetaddr_notifier); 1347 1347 register_inet6addr_notifier(&i40iw_inetaddr6_notifier); 1348 1348 register_netevent_notifier(&i40iw_net_notifier); 1349 1349 } 1350 - i40iw_notifiers_registered++; 1351 1350 } 1352 1351 1353 1352 /** ··· 1428 1429 i40iw_del_macip_entry(iwdev, (u8)iwdev->mac_ip_table_idx); 1429 1430 /* fallthrough */ 1430 1431 case INET_NOTIFIER: 1431 - if (i40iw_notifiers_registered > 0) { 1432 - i40iw_notifiers_registered--; 1432 + if (!atomic_dec_return(&i40iw_notifiers_registered)) { 1433 1433 unregister_netevent_notifier(&i40iw_net_notifier); 1434 1434 unregister_inetaddr_notifier(&i40iw_inetaddr_notifier); 1435 1435 unregister_inet6addr_notifier(&i40iw_inetaddr6_notifier);
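The i40iw change turns the notifier bookkeeping into an atomic counter so that only the transition 0 -> 1 registers the inetaddr/inet6addr/netevent notifiers and only the transition back to 0 unregisters them, presumably to keep concurrent setup/teardown of several instances from corrupting the plain int. The same first-in/last-out pattern in a self-contained form (C11 atomics standing in for the kernel's atomic_t, with the notifier calls stubbed as prints):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int registered;

static void register_notifiers(void)
{
        /* atomic_inc_return() == 1 in the driver: first user registers. */
        if (atomic_fetch_add(&registered, 1) == 0)
                puts("register inetaddr/inet6addr/netevent notifiers");
}

static void unregister_notifiers(void)
{
        /* !atomic_dec_return() in the driver: last user unregisters. */
        if (atomic_fetch_sub(&registered, 1) == 1)
                puts("unregister inetaddr/inet6addr/netevent notifiers");
}

int main(void)
{
        register_notifiers();   /* first instance: registers */
        register_notifiers();   /* second instance: no-op */
        unregister_notifiers(); /* still one user left: no-op */
        unregister_notifiers(); /* last user gone: unregisters */
        return 0;
}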
+2 -24
drivers/infiniband/hw/mlx4/cq.c
··· 687 687 is_error = (cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) == 688 688 MLX4_CQE_OPCODE_ERROR; 689 689 690 - if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) == MLX4_OPCODE_NOP && 691 - is_send)) { 692 - pr_warn("Completion for NOP opcode detected!\n"); 693 - return -EAGAIN; 694 - } 695 - 696 690 /* Resize CQ in progress */ 697 691 if (unlikely((cqe->owner_sr_opcode & MLX4_CQE_OPCODE_MASK) == MLX4_CQE_OPCODE_RESIZE)) { 698 692 if (cq->resize_buf) { ··· 712 718 */ 713 719 mqp = __mlx4_qp_lookup(to_mdev(cq->ibcq.device)->dev, 714 720 be32_to_cpu(cqe->vlan_my_qpn)); 715 - if (unlikely(!mqp)) { 716 - pr_warn("CQ %06x with entry for unknown QPN %06x\n", 717 - cq->mcq.cqn, be32_to_cpu(cqe->vlan_my_qpn) & MLX4_CQE_QPN_MASK); 718 - return -EAGAIN; 719 - } 720 - 721 721 *cur_qp = to_mibqp(mqp); 722 722 } 723 723 ··· 724 736 /* SRQ is also in the radix tree */ 725 737 msrq = mlx4_srq_lookup(to_mdev(cq->ibcq.device)->dev, 726 738 srq_num); 727 - if (unlikely(!msrq)) { 728 - pr_warn("CQ %06x with entry for unknown SRQN %06x\n", 729 - cq->mcq.cqn, srq_num); 730 - return -EAGAIN; 731 - } 732 739 } 733 740 734 741 if (is_send) { ··· 874 891 struct mlx4_ib_qp *cur_qp = NULL; 875 892 unsigned long flags; 876 893 int npolled; 877 - int err = 0; 878 894 struct mlx4_ib_dev *mdev = to_mdev(cq->ibcq.device); 879 895 880 896 spin_lock_irqsave(&cq->lock, flags); ··· 883 901 } 884 902 885 903 for (npolled = 0; npolled < num_entries; ++npolled) { 886 - err = mlx4_ib_poll_one(cq, &cur_qp, wc + npolled); 887 - if (err) 904 + if (mlx4_ib_poll_one(cq, &cur_qp, wc + npolled)) 888 905 break; 889 906 } 890 907 ··· 892 911 out: 893 912 spin_unlock_irqrestore(&cq->lock, flags); 894 913 895 - if (err == 0 || err == -EAGAIN) 896 - return npolled; 897 - else 898 - return err; 914 + return npolled; 899 915 } 900 916 901 917 int mlx4_ib_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
+2 -20
drivers/infiniband/hw/mlx5/cq.c
··· 553 553 * from the table. 554 554 */ 555 555 mqp = __mlx5_qp_lookup(dev->mdev, qpn); 556 - if (unlikely(!mqp)) { 557 - mlx5_ib_warn(dev, "CQE@CQ %06x for unknown QPN %6x\n", 558 - cq->mcq.cqn, qpn); 559 - return -EINVAL; 560 - } 561 - 562 556 *cur_qp = to_mibqp(mqp); 563 557 } 564 558 ··· 613 619 read_lock(&dev->mdev->priv.mkey_table.lock); 614 620 mmkey = __mlx5_mr_lookup(dev->mdev, 615 621 mlx5_base_mkey(be32_to_cpu(sig_err_cqe->mkey))); 616 - if (unlikely(!mmkey)) { 617 - read_unlock(&dev->mdev->priv.mkey_table.lock); 618 - mlx5_ib_warn(dev, "CQE@CQ %06x for unknown MR %6x\n", 619 - cq->mcq.cqn, be32_to_cpu(sig_err_cqe->mkey)); 620 - return -EINVAL; 621 - } 622 - 623 622 mr = to_mibmr(mmkey); 624 623 get_sig_err_item(sig_err_cqe, &mr->sig->err_item); 625 624 mr->sig->sig_err_exists = true; ··· 663 676 unsigned long flags; 664 677 int soft_polled = 0; 665 678 int npolled; 666 - int err = 0; 667 679 668 680 spin_lock_irqsave(&cq->lock, flags); 669 681 if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) { ··· 674 688 soft_polled = poll_soft_wc(cq, num_entries, wc); 675 689 676 690 for (npolled = 0; npolled < num_entries - soft_polled; npolled++) { 677 - err = mlx5_poll_one(cq, &cur_qp, wc + soft_polled + npolled); 678 - if (err) 691 + if (mlx5_poll_one(cq, &cur_qp, wc + soft_polled + npolled)) 679 692 break; 680 693 } 681 694 ··· 683 698 out: 684 699 spin_unlock_irqrestore(&cq->lock, flags); 685 700 686 - if (err == 0 || err == -EAGAIN) 687 - return soft_polled + npolled; 688 - else 689 - return err; 701 + return soft_polled + npolled; 690 702 } 691 703 692 704 int mlx5_ib_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
+5 -1
drivers/infiniband/hw/mlx5/main.c
··· 1849 1849 int domain) 1850 1850 { 1851 1851 struct mlx5_ib_dev *dev = to_mdev(qp->device); 1852 + struct mlx5_ib_qp *mqp = to_mqp(qp); 1852 1853 struct mlx5_ib_flow_handler *handler = NULL; 1853 1854 struct mlx5_flow_destination *dst = NULL; 1854 1855 struct mlx5_ib_flow_prio *ft_prio; ··· 1876 1875 } 1877 1876 1878 1877 dst->type = MLX5_FLOW_DESTINATION_TYPE_TIR; 1879 - dst->tir_num = to_mqp(qp)->raw_packet_qp.rq.tirn; 1878 + if (mqp->flags & MLX5_IB_QP_RSS) 1879 + dst->tir_num = mqp->rss_qp.tirn; 1880 + else 1881 + dst->tir_num = mqp->raw_packet_qp.rq.tirn; 1880 1882 1881 1883 if (flow_attr->type == IB_FLOW_ATTR_NORMAL) { 1882 1884 if (flow_attr->flags & IB_FLOW_ATTR_FLAGS_DONT_TRAP) {
+3 -3
drivers/infiniband/hw/mlx5/mem.c
··· 71 71 72 72 addr = addr >> page_shift; 73 73 tmp = (unsigned long)addr; 74 - m = find_first_bit(&tmp, sizeof(tmp)); 74 + m = find_first_bit(&tmp, BITS_PER_LONG); 75 75 skip = 1 << m; 76 76 mask = skip - 1; 77 77 i = 0; ··· 81 81 for (k = 0; k < len; k++) { 82 82 if (!(i & mask)) { 83 83 tmp = (unsigned long)pfn; 84 - m = min_t(unsigned long, m, find_first_bit(&tmp, sizeof(tmp))); 84 + m = min_t(unsigned long, m, find_first_bit(&tmp, BITS_PER_LONG)); 85 85 skip = 1 << m; 86 86 mask = skip - 1; 87 87 base = pfn; ··· 89 89 } else { 90 90 if (base + p != pfn) { 91 91 tmp = (unsigned long)p; 92 - m = find_first_bit(&tmp, sizeof(tmp)); 92 + m = find_first_bit(&tmp, BITS_PER_LONG); 93 93 skip = 1 << m; 94 94 mask = skip - 1; 95 95 base = pfn;
+1
drivers/infiniband/hw/mlx5/mlx5_ib.h
··· 402 402 /* QP uses 1 as its source QP number */ 403 403 MLX5_IB_QP_SQPN_QP1 = 1 << 6, 404 404 MLX5_IB_QP_CAP_SCATTER_FCS = 1 << 7, 405 + MLX5_IB_QP_RSS = 1 << 8, 405 406 }; 406 407 407 408 struct mlx5_umr_wr {
+5 -8
drivers/infiniband/hw/mlx5/qp.c
··· 1449 1449 kvfree(in); 1450 1450 /* qpn is reserved for that QP */ 1451 1451 qp->trans_qp.base.mqp.qpn = 0; 1452 + qp->flags |= MLX5_IB_QP_RSS; 1452 1453 return 0; 1453 1454 1454 1455 err: ··· 3659 3658 struct ib_send_wr *wr, unsigned *idx, 3660 3659 int *size, int nreq) 3661 3660 { 3662 - int err = 0; 3663 - 3664 - if (unlikely(mlx5_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq))) { 3665 - err = -ENOMEM; 3666 - return err; 3667 - } 3661 + if (unlikely(mlx5_wq_overflow(&qp->sq, nreq, qp->ibqp.send_cq))) 3662 + return -ENOMEM; 3668 3663 3669 3664 *idx = qp->sq.cur_post & (qp->sq.wqe_cnt - 1); 3670 3665 *seg = mlx5_get_send_wqe(qp, *idx); ··· 3676 3679 *seg += sizeof(**ctrl); 3677 3680 *size = sizeof(**ctrl) / 16; 3678 3681 3679 - return err; 3682 + return 0; 3680 3683 } 3681 3684 3682 3685 static void finish_wqe(struct mlx5_ib_qp *qp, ··· 3755 3758 num_sge = wr->num_sge; 3756 3759 if (unlikely(num_sge > qp->sq.max_gs)) { 3757 3760 mlx5_ib_warn(dev, "\n"); 3758 - err = -ENOMEM; 3761 + err = -EINVAL; 3759 3762 *bad_wr = wr; 3760 3763 goto out; 3761 3764 }
+1
drivers/infiniband/ulp/ipoib/ipoib.h
··· 478 478 struct ipoib_ah *address, u32 qpn); 479 479 void ipoib_reap_ah(struct work_struct *work); 480 480 481 + struct ipoib_path *__path_find(struct net_device *dev, void *gid); 481 482 void ipoib_mark_paths_invalid(struct net_device *dev); 482 483 void ipoib_flush_paths(struct net_device *dev); 483 484 int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv);
+16
drivers/infiniband/ulp/ipoib/ipoib_cm.c
··· 1318 1318 } 1319 1319 } 1320 1320 1321 + #define QPN_AND_OPTIONS_OFFSET 4 1322 + 1321 1323 static void ipoib_cm_tx_start(struct work_struct *work) 1322 1324 { 1323 1325 struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv, ··· 1328 1326 struct ipoib_neigh *neigh; 1329 1327 struct ipoib_cm_tx *p; 1330 1328 unsigned long flags; 1329 + struct ipoib_path *path; 1331 1330 int ret; 1332 1331 1333 1332 struct ib_sa_path_rec pathrec; ··· 1341 1338 p = list_entry(priv->cm.start_list.next, typeof(*p), list); 1342 1339 list_del_init(&p->list); 1343 1340 neigh = p->neigh; 1341 + 1344 1342 qpn = IPOIB_QPN(neigh->daddr); 1343 + /* 1344 + * As long as the search is with these 2 locks, 1345 + * path existence indicates its validity. 1346 + */ 1347 + path = __path_find(dev, neigh->daddr + QPN_AND_OPTIONS_OFFSET); 1348 + if (!path) { 1349 + pr_info("%s ignore not valid path %pI6\n", 1350 + __func__, 1351 + neigh->daddr + QPN_AND_OPTIONS_OFFSET); 1352 + goto free_neigh; 1353 + } 1345 1354 memcpy(&pathrec, &p->path->pathrec, sizeof pathrec); 1346 1355 1347 1356 spin_unlock_irqrestore(&priv->lock, flags); ··· 1365 1350 spin_lock_irqsave(&priv->lock, flags); 1366 1351 1367 1352 if (ret) { 1353 + free_neigh: 1368 1354 neigh = p->neigh; 1369 1355 if (neigh) { 1370 1356 neigh->cm = NULL;
+1 -1
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 485 485 return -EINVAL; 486 486 } 487 487 488 - static struct ipoib_path *__path_find(struct net_device *dev, void *gid) 488 + struct ipoib_path *__path_find(struct net_device *dev, void *gid) 489 489 { 490 490 struct ipoib_dev_priv *priv = netdev_priv(dev); 491 491 struct rb_node *n = priv->path_tree.rb_node;
+20 -3
drivers/infiniband/ulp/isert/ib_isert.c
··· 403 403 INIT_LIST_HEAD(&isert_conn->node); 404 404 init_completion(&isert_conn->login_comp); 405 405 init_completion(&isert_conn->login_req_comp); 406 + init_waitqueue_head(&isert_conn->rem_wait); 406 407 kref_init(&isert_conn->kref); 407 408 mutex_init(&isert_conn->mutex); 408 409 INIT_WORK(&isert_conn->release_work, isert_release_work); ··· 579 578 BUG_ON(!device); 580 579 581 580 isert_free_rx_descriptors(isert_conn); 582 - if (isert_conn->cm_id) 581 + if (isert_conn->cm_id && 582 + !isert_conn->dev_removed) 583 583 rdma_destroy_id(isert_conn->cm_id); 584 584 585 585 if (isert_conn->qp) { ··· 595 593 596 594 isert_device_put(device); 597 595 598 - kfree(isert_conn); 596 + if (isert_conn->dev_removed) 597 + wake_up_interruptible(&isert_conn->rem_wait); 598 + else 599 + kfree(isert_conn); 599 600 } 600 601 601 602 static void ··· 758 753 isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event) 759 754 { 760 755 struct isert_np *isert_np = cma_id->context; 756 + struct isert_conn *isert_conn; 761 757 int ret = 0; 762 758 763 759 isert_info("%s (%d): status %d id %p np %p\n", ··· 779 773 break; 780 774 case RDMA_CM_EVENT_ADDR_CHANGE: /* FALLTHRU */ 781 775 case RDMA_CM_EVENT_DISCONNECTED: /* FALLTHRU */ 782 - case RDMA_CM_EVENT_DEVICE_REMOVAL: /* FALLTHRU */ 783 776 case RDMA_CM_EVENT_TIMEWAIT_EXIT: /* FALLTHRU */ 784 777 ret = isert_disconnected_handler(cma_id, event->event); 785 778 break; 779 + case RDMA_CM_EVENT_DEVICE_REMOVAL: 780 + isert_conn = cma_id->qp->qp_context; 781 + isert_conn->dev_removed = true; 782 + isert_disconnected_handler(cma_id, event->event); 783 + wait_event_interruptible(isert_conn->rem_wait, 784 + isert_conn->state == ISER_CONN_DOWN); 785 + kfree(isert_conn); 786 + /* 787 + * return non-zero from the callback to destroy 788 + * the rdma cm id 789 + */ 790 + return 1; 786 791 case RDMA_CM_EVENT_REJECTED: /* FALLTHRU */ 787 792 case RDMA_CM_EVENT_UNREACHABLE: /* FALLTHRU */ 788 793 case RDMA_CM_EVENT_CONNECT_ERROR:
+2
drivers/infiniband/ulp/isert/ib_isert.h
··· 158 158 struct work_struct release_work; 159 159 bool logout_posted; 160 160 bool snd_w_inv; 161 + wait_queue_head_t rem_wait; 162 + bool dev_removed; 161 163 }; 162 164 163 165 #define ISERT_MAX_CQ 64
+1
drivers/mailbox/Kconfig
··· 127 127 config BCM_PDC_MBOX 128 128 tristate "Broadcom PDC Mailbox" 129 129 depends on ARM64 || COMPILE_TEST 130 + depends on HAS_DMA 130 131 default ARCH_BCM_IPROC 131 132 help 132 133 Mailbox implementation for the Broadcom PDC ring manager,
+6 -5
drivers/mailbox/bcm-pdc-mailbox.c
··· 469 469 * this directory for a SPU. 470 470 * @pdcs: PDC state structure 471 471 */ 472 - void pdc_setup_debugfs(struct pdc_state *pdcs) 472 + static void pdc_setup_debugfs(struct pdc_state *pdcs) 473 473 { 474 474 char spu_stats_name[16]; 475 475 ··· 485 485 &pdc_debugfs_stats); 486 486 } 487 487 488 - void pdc_free_debugfs(void) 488 + static void pdc_free_debugfs(void) 489 489 { 490 490 if (debugfs_dir && simple_empty(debugfs_dir)) { 491 491 debugfs_remove_recursive(debugfs_dir); ··· 1191 1191 { 1192 1192 struct pdc_state *pdcs = chan->con_priv; 1193 1193 1194 - if (pdcs) 1195 - dev_dbg(&pdcs->pdev->dev, 1196 - "Shutdown mailbox channel for PDC %u", pdcs->pdc_idx); 1194 + if (!pdcs) 1195 + return; 1197 1196 1197 + dev_dbg(&pdcs->pdev->dev, 1198 + "Shutdown mailbox channel for PDC %u", pdcs->pdc_idx); 1198 1199 pdc_ring_free(pdcs); 1199 1200 } 1200 1201
+7 -14
drivers/memory/omap-gpmc.c
··· 2185 2185 return 0; 2186 2186 } 2187 2187 2188 - static int gpmc_probe_dt_children(struct platform_device *pdev) 2188 + static void gpmc_probe_dt_children(struct platform_device *pdev) 2189 2189 { 2190 2190 int ret; 2191 2191 struct device_node *child; ··· 2200 2200 else 2201 2201 ret = gpmc_probe_generic_child(pdev, child); 2202 2202 2203 - if (ret) 2204 - return ret; 2203 + if (ret) { 2204 + dev_err(&pdev->dev, "failed to probe DT child '%s': %d\n", 2205 + child->name, ret); 2206 + } 2205 2207 } 2206 - 2207 - return 0; 2208 2208 } 2209 2209 #else 2210 2210 static int gpmc_probe_dt(struct platform_device *pdev) ··· 2212 2212 return 0; 2213 2213 } 2214 2214 2215 - static int gpmc_probe_dt_children(struct platform_device *pdev) 2215 + static void gpmc_probe_dt_children(struct platform_device *pdev) 2216 2216 { 2217 - return 0; 2218 2217 } 2219 2218 #endif /* CONFIG_OF */ 2220 2219 ··· 2368 2369 goto setup_irq_failed; 2369 2370 } 2370 2371 2371 - rc = gpmc_probe_dt_children(pdev); 2372 - if (rc < 0) { 2373 - dev_err(gpmc->dev, "failed to probe DT children\n"); 2374 - goto dt_children_failed; 2375 - } 2372 + gpmc_probe_dt_children(pdev); 2376 2373 2377 2374 return 0; 2378 2375 2379 - dt_children_failed: 2380 - gpmc_free_irq(gpmc); 2381 2376 setup_irq_failed: 2382 2377 gpmc_gpio_exit(gpmc); 2383 2378 gpio_init_failed:
+17 -8
drivers/misc/lkdtm_usercopy.c
··· 9 9 #include <linux/uaccess.h> 10 10 #include <asm/cacheflush.h> 11 11 12 - static size_t cache_size = 1024; 12 + /* 13 + * Many of the tests here end up using const sizes, but those would 14 + * normally be ignored by hardened usercopy, so force the compiler 15 + * into choosing the non-const path to make sure we trigger the 16 + * hardened usercopy checks by added "unconst" to all the const copies, 17 + * and making sure "cache_size" isn't optimized into a const. 18 + */ 19 + static volatile size_t unconst = 0; 20 + static volatile size_t cache_size = 1024; 13 21 static struct kmem_cache *bad_cache; 14 22 15 23 static const unsigned char test_text[] = "This is a test.\n"; ··· 75 67 if (to_user) { 76 68 pr_info("attempting good copy_to_user of local stack\n"); 77 69 if (copy_to_user((void __user *)user_addr, good_stack, 78 - sizeof(good_stack))) { 70 + unconst + sizeof(good_stack))) { 79 71 pr_warn("copy_to_user failed unexpectedly?!\n"); 80 72 goto free_user; 81 73 } 82 74 83 75 pr_info("attempting bad copy_to_user of distant stack\n"); 84 76 if (copy_to_user((void __user *)user_addr, bad_stack, 85 - sizeof(good_stack))) { 77 + unconst + sizeof(good_stack))) { 86 78 pr_warn("copy_to_user failed, but lacked Oops\n"); 87 79 goto free_user; 88 80 } ··· 96 88 97 89 pr_info("attempting good copy_from_user of local stack\n"); 98 90 if (copy_from_user(good_stack, (void __user *)user_addr, 99 - sizeof(good_stack))) { 91 + unconst + sizeof(good_stack))) { 100 92 pr_warn("copy_from_user failed unexpectedly?!\n"); 101 93 goto free_user; 102 94 } 103 95 104 96 pr_info("attempting bad copy_from_user of distant stack\n"); 105 97 if (copy_from_user(bad_stack, (void __user *)user_addr, 106 - sizeof(good_stack))) { 98 + unconst + sizeof(good_stack))) { 107 99 pr_warn("copy_from_user failed, but lacked Oops\n"); 108 100 goto free_user; 109 101 } ··· 117 109 { 118 110 unsigned long user_addr; 119 111 unsigned char *one, *two; 120 - const size_t size = 1024; 112 + size_t size = unconst + 1024; 121 113 122 114 one = kmalloc(size, GFP_KERNEL); 123 115 two = kmalloc(size, GFP_KERNEL); ··· 293 285 294 286 pr_info("attempting good copy_to_user from kernel rodata\n"); 295 287 if (copy_to_user((void __user *)user_addr, test_text, 296 - sizeof(test_text))) { 288 + unconst + sizeof(test_text))) { 297 289 pr_warn("copy_to_user failed unexpectedly?!\n"); 298 290 goto free_user; 299 291 } 300 292 301 293 pr_info("attempting bad copy_to_user from kernel text\n"); 302 - if (copy_to_user((void __user *)user_addr, vm_mmap, PAGE_SIZE)) { 294 + if (copy_to_user((void __user *)user_addr, vm_mmap, 295 + unconst + PAGE_SIZE)) { 303 296 pr_warn("copy_to_user failed, but lacked Oops\n"); 304 297 goto free_user; 305 298 }
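The new comment at the top of lkdtm_usercopy.c is the key to the whole change: the hardened usercopy runtime check is skipped when the copy length is a compile-time constant, so the test adds a volatile zero ("unconst") to every length to force the non-constant path. A quick userspace way to see that effect, using the same __builtin_constant_p() the kernel keys on (GCC/Clang builtin; nothing below is lkdtm code):

#include <stdio.h>
#include <stddef.h>

static volatile size_t unconst; /* stays 0, but the compiler cannot assume so */

int main(void)
{
        /* A bare sizeof() expression folds to a constant, so a check keyed
         * on __builtin_constant_p() would never take the runtime path. */
        printf("plain sizeof constant?     %d\n",
               __builtin_constant_p(sizeof(int) * 16));

        /* Adding the volatile zero yields a runtime value, which is what
         * keeps the hardened-usercopy runtime path reachable. */
        printf("unconst + sizeof constant? %d\n",
               __builtin_constant_p(unconst + sizeof(int) * 16));
        return 0;
}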
+5 -1
drivers/nvdimm/bus.c
··· 185 185 return -ENXIO; 186 186 187 187 nd_desc = nvdimm_bus->nd_desc; 188 + /* 189 + * if ndctl does not exist, it's PMEM_LEGACY and 190 + * we want to just pretend everything is handled. 191 + */ 188 192 if (!nd_desc->ndctl) 189 - return -ENXIO; 193 + return len; 190 194 191 195 memset(&ars_cap, 0, sizeof(ars_cap)); 192 196 ars_cap.address = phys;
+1 -1
drivers/nvme/host/Kconfig
··· 30 30 31 31 config NVME_RDMA 32 32 tristate "NVM Express over Fabrics RDMA host driver" 33 - depends on INFINIBAND 33 + depends on INFINIBAND && BLOCK 34 34 select NVME_CORE 35 35 select NVME_FABRICS 36 36 select SG_POOL
+2 -3
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1539 1539 offset += range->npins; 1540 1540 } 1541 1541 1542 - /* Mask and clear all interrupts */ 1543 - chv_writel(0, pctrl->regs + CHV_INTMASK); 1542 + /* Clear all interrupts */ 1544 1543 chv_writel(0xffff, pctrl->regs + CHV_INTSTAT); 1545 1544 1546 1545 ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, 0, 1547 - handle_simple_irq, IRQ_TYPE_NONE); 1546 + handle_bad_irq, IRQ_TYPE_NONE); 1548 1547 if (ret) { 1549 1548 dev_err(pctrl->dev, "failed to add IRQ chip\n"); 1550 1549 goto fail;
+6 -6
drivers/pinctrl/pinctrl-pistachio.c
··· 809 809 PADS_FUNCTION_SELECT2, 12, 0x3), 810 810 MFIO_MUX_PIN_GROUP(83, MIPS_PLL_LOCK, MIPS_TRACE_DATA, USB_DEBUG, 811 811 PADS_FUNCTION_SELECT2, 14, 0x3), 812 - MFIO_MUX_PIN_GROUP(84, SYS_PLL_LOCK, MIPS_TRACE_DATA, USB_DEBUG, 812 + MFIO_MUX_PIN_GROUP(84, AUDIO_PLL_LOCK, MIPS_TRACE_DATA, USB_DEBUG, 813 813 PADS_FUNCTION_SELECT2, 16, 0x3), 814 - MFIO_MUX_PIN_GROUP(85, WIFI_PLL_LOCK, MIPS_TRACE_DATA, SDHOST_DEBUG, 814 + MFIO_MUX_PIN_GROUP(85, RPU_V_PLL_LOCK, MIPS_TRACE_DATA, SDHOST_DEBUG, 815 815 PADS_FUNCTION_SELECT2, 18, 0x3), 816 - MFIO_MUX_PIN_GROUP(86, BT_PLL_LOCK, MIPS_TRACE_DATA, SDHOST_DEBUG, 816 + MFIO_MUX_PIN_GROUP(86, RPU_L_PLL_LOCK, MIPS_TRACE_DATA, SDHOST_DEBUG, 817 817 PADS_FUNCTION_SELECT2, 20, 0x3), 818 - MFIO_MUX_PIN_GROUP(87, RPU_V_PLL_LOCK, DREQ2, SOCIF_DEBUG, 818 + MFIO_MUX_PIN_GROUP(87, SYS_PLL_LOCK, DREQ2, SOCIF_DEBUG, 819 819 PADS_FUNCTION_SELECT2, 22, 0x3), 820 - MFIO_MUX_PIN_GROUP(88, RPU_L_PLL_LOCK, DREQ3, SOCIF_DEBUG, 820 + MFIO_MUX_PIN_GROUP(88, WIFI_PLL_LOCK, DREQ3, SOCIF_DEBUG, 821 821 PADS_FUNCTION_SELECT2, 24, 0x3), 822 - MFIO_MUX_PIN_GROUP(89, AUDIO_PLL_LOCK, DREQ4, DREQ5, 822 + MFIO_MUX_PIN_GROUP(89, BT_PLL_LOCK, DREQ4, DREQ5, 823 823 PADS_FUNCTION_SELECT2, 26, 0x3), 824 824 PIN_GROUP(TCK, "tck"), 825 825 PIN_GROUP(TRSTN, "trstn"),
+2 -2
drivers/pinctrl/sunxi/pinctrl-sun8i-a23.c
··· 485 485 SUNXI_PIN(SUNXI_PINCTRL_PIN(G, 8), 486 486 SUNXI_FUNCTION(0x0, "gpio_in"), 487 487 SUNXI_FUNCTION(0x1, "gpio_out"), 488 - SUNXI_FUNCTION(0x2, "uart2"), /* RTS */ 488 + SUNXI_FUNCTION(0x2, "uart1"), /* RTS */ 489 489 SUNXI_FUNCTION_IRQ_BANK(0x4, 2, 8)), /* PG_EINT8 */ 490 490 SUNXI_PIN(SUNXI_PINCTRL_PIN(G, 9), 491 491 SUNXI_FUNCTION(0x0, "gpio_in"), 492 492 SUNXI_FUNCTION(0x1, "gpio_out"), 493 - SUNXI_FUNCTION(0x2, "uart2"), /* CTS */ 493 + SUNXI_FUNCTION(0x2, "uart1"), /* CTS */ 494 494 SUNXI_FUNCTION_IRQ_BANK(0x4, 2, 9)), /* PG_EINT9 */ 495 495 SUNXI_PIN(SUNXI_PINCTRL_PIN(G, 10), 496 496 SUNXI_FUNCTION(0x0, "gpio_in"),
+2 -2
drivers/pinctrl/sunxi/pinctrl-sun8i-a33.c
··· 407 407 SUNXI_PIN(SUNXI_PINCTRL_PIN(G, 8), 408 408 SUNXI_FUNCTION(0x0, "gpio_in"), 409 409 SUNXI_FUNCTION(0x1, "gpio_out"), 410 - SUNXI_FUNCTION(0x2, "uart2"), /* RTS */ 410 + SUNXI_FUNCTION(0x2, "uart1"), /* RTS */ 411 411 SUNXI_FUNCTION_IRQ_BANK(0x4, 1, 8)), /* PG_EINT8 */ 412 412 SUNXI_PIN(SUNXI_PINCTRL_PIN(G, 9), 413 413 SUNXI_FUNCTION(0x0, "gpio_in"), 414 414 SUNXI_FUNCTION(0x1, "gpio_out"), 415 - SUNXI_FUNCTION(0x2, "uart2"), /* CTS */ 415 + SUNXI_FUNCTION(0x2, "uart1"), /* CTS */ 416 416 SUNXI_FUNCTION_IRQ_BANK(0x4, 1, 9)), /* PG_EINT9 */ 417 417 SUNXI_PIN(SUNXI_PINCTRL_PIN(G, 10), 418 418 SUNXI_FUNCTION(0x0, "gpio_in"),
+2 -2
drivers/regulator/max14577-regulator.c
··· 2 2 * max14577.c - Regulator driver for the Maxim 14577/77836 3 3 * 4 4 * Copyright (C) 2013,2014 Samsung Electronics 5 - * Krzysztof Kozlowski <k.kozlowski@samsung.com> 5 + * Krzysztof Kozlowski <krzk@kernel.org> 6 6 * 7 7 * This program is free software; you can redistribute it and/or modify 8 8 * it under the terms of the GNU General Public License as published by ··· 331 331 } 332 332 module_exit(max14577_regulator_exit); 333 333 334 - MODULE_AUTHOR("Krzysztof Kozlowski <k.kozlowski@samsung.com>"); 334 + MODULE_AUTHOR("Krzysztof Kozlowski <krzk@kernel.org>"); 335 335 MODULE_DESCRIPTION("Maxim 14577/77836 regulator driver"); 336 336 MODULE_LICENSE("GPL"); 337 337 MODULE_ALIAS("platform:max14577-regulator");
+2 -2
drivers/regulator/max77693-regulator.c
··· 3 3 * 4 4 * Copyright (C) 2013-2015 Samsung Electronics 5 5 * Jonghwa Lee <jonghwa3.lee@samsung.com> 6 - * Krzysztof Kozlowski <k.kozlowski.k@gmail.com> 6 + * Krzysztof Kozlowski <krzk@kernel.org> 7 7 * 8 8 * This program is free software; you can redistribute it and/or modify 9 9 * it under the terms of the GNU General Public License as published by ··· 314 314 315 315 MODULE_DESCRIPTION("MAXIM 77693/77843 regulator driver"); 316 316 MODULE_AUTHOR("Jonghwa Lee <jonghwa3.lee@samsung.com>"); 317 - MODULE_AUTHOR("Krzysztof Kozlowski <k.kozlowski.k@gmail.com>"); 317 + MODULE_AUTHOR("Krzysztof Kozlowski <krzk@kernel.org>"); 318 318 MODULE_LICENSE("GPL");
+16 -14
drivers/regulator/qcom_smd-regulator.c
··· 178 178 static const struct regulator_desc pma8084_ftsmps = { 179 179 .linear_ranges = (struct regulator_linear_range[]) { 180 180 REGULATOR_LINEAR_RANGE(350000, 0, 184, 5000), 181 - REGULATOR_LINEAR_RANGE(700000, 185, 339, 10000), 181 + REGULATOR_LINEAR_RANGE(1280000, 185, 261, 10000), 182 182 }, 183 183 .n_linear_ranges = 2, 184 - .n_voltages = 340, 184 + .n_voltages = 262, 185 185 .ops = &rpm_smps_ldo_ops, 186 186 }; 187 187 188 188 static const struct regulator_desc pma8084_pldo = { 189 189 .linear_ranges = (struct regulator_linear_range[]) { 190 - REGULATOR_LINEAR_RANGE(750000, 0, 30, 25000), 191 - REGULATOR_LINEAR_RANGE(1500000, 31, 99, 50000), 190 + REGULATOR_LINEAR_RANGE( 750000, 0, 63, 12500), 191 + REGULATOR_LINEAR_RANGE(1550000, 64, 126, 25000), 192 + REGULATOR_LINEAR_RANGE(3100000, 127, 163, 50000), 192 193 }, 193 - .n_linear_ranges = 2, 194 - .n_voltages = 100, 194 + .n_linear_ranges = 3, 195 + .n_voltages = 164, 195 196 .ops = &rpm_smps_ldo_ops, 196 197 }; 197 198 ··· 222 221 static const struct regulator_desc pm8841_ftsmps = { 223 222 .linear_ranges = (struct regulator_linear_range[]) { 224 223 REGULATOR_LINEAR_RANGE(350000, 0, 184, 5000), 225 - REGULATOR_LINEAR_RANGE(700000, 185, 339, 10000), 224 + REGULATOR_LINEAR_RANGE(1280000, 185, 261, 10000), 226 225 }, 227 226 .n_linear_ranges = 2, 228 - .n_voltages = 340, 227 + .n_voltages = 262, 229 228 .ops = &rpm_smps_ldo_ops, 230 229 }; 231 230 232 231 static const struct regulator_desc pm8941_boost = { 233 232 .linear_ranges = (struct regulator_linear_range[]) { 234 - REGULATOR_LINEAR_RANGE(4000000, 0, 15, 100000), 233 + REGULATOR_LINEAR_RANGE(4000000, 0, 30, 50000), 235 234 }, 236 235 .n_linear_ranges = 1, 237 - .n_voltages = 16, 236 + .n_voltages = 31, 238 237 .ops = &rpm_smps_ldo_ops, 239 238 }; 240 239 241 240 static const struct regulator_desc pm8941_pldo = { 242 241 .linear_ranges = (struct regulator_linear_range[]) { 243 - REGULATOR_LINEAR_RANGE( 750000, 0, 30, 25000), 244 - REGULATOR_LINEAR_RANGE(1500000, 31, 99, 50000), 242 + REGULATOR_LINEAR_RANGE( 750000, 0, 63, 12500), 243 + REGULATOR_LINEAR_RANGE(1550000, 64, 126, 25000), 244 + REGULATOR_LINEAR_RANGE(3100000, 127, 163, 50000), 245 245 }, 246 - .n_linear_ranges = 2, 247 - .n_voltages = 100, 246 + .n_linear_ranges = 3, 247 + .n_voltages = 164, 248 248 .ops = &rpm_smps_ldo_ops, 249 249 }; 250 250
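The new qcom_smd-regulator tables are internally consistent: each REGULATOR_LINEAR_RANGE(min_uV, min_sel, max_sel, step_uV) contributes max_sel - min_sel + 1 selectors, so the FTSMPS descriptions cover (184 - 0 + 1) + (261 - 185 + 1) = 185 + 77 = 262 voltages and the PLDO descriptions (63 - 0 + 1) + (126 - 64 + 1) + (163 - 127 + 1) = 64 + 63 + 37 = 164, exactly the new .n_voltages values. The ranges also join up: the PLDO middle range tops out at 1550000 + (126 - 64) * 25000 = 3100000 uV, which is where the third range starts.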
+3 -2
drivers/scsi/constants.c
··· 361 361 362 362 /* Get sense key string or NULL if not available */ 363 363 const char * 364 - scsi_sense_key_string(unsigned char key) { 365 - if (key <= 0xE) 364 + scsi_sense_key_string(unsigned char key) 365 + { 366 + if (key < ARRAY_SIZE(snstext)) 366 367 return snstext[key]; 367 368 return NULL; 368 369 }
+4
drivers/scsi/scsi_devinfo.c
··· 246 246 {"IBM", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 247 247 {"SUN", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 248 248 {"DELL", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 249 + {"STK", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 250 + {"NETAPP", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 251 + {"LSI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 252 + {"ENGENIO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH}, 249 253 {"SMSC", "USB 2 HS-CF", NULL, BLIST_SPARSELUN | BLIST_INQUIRY_36}, 250 254 {"SONY", "CD-ROM CDU-8001", NULL, BLIST_BORKEN}, 251 255 {"SONY", "TSL", NULL, BLIST_FORCELUN}, /* DDS3 & DDS4 autoloaders */
-16
drivers/scsi/scsi_transport_sas.c
··· 341 341 } 342 342 343 343 /** 344 - * is_sas_attached - check if device is SAS attached 345 - * @sdev: scsi device to check 346 - * 347 - * returns true if the device is SAS attached 348 - */ 349 - int is_sas_attached(struct scsi_device *sdev) 350 - { 351 - struct Scsi_Host *shost = sdev->host; 352 - 353 - return shost->transportt->host_attrs.ac.class == 354 - &sas_host_class.class; 355 - } 356 - EXPORT_SYMBOL(is_sas_attached); 357 - 358 - 359 - /** 360 344 * sas_remove_children - tear down a devices SAS data structures 361 345 * @dev: device belonging to the sas object 362 346 *
+1 -1
drivers/scsi/ses.c
··· 587 587 588 588 ses_enclosure_data_process(edev, to_scsi_device(edev->edev.parent), 0); 589 589 590 - if (is_sas_attached(sdev)) 590 + if (scsi_is_sas_rphy(&sdev->sdev_gendev)) 591 591 efd.addr = sas_get_address(sdev); 592 592 593 593 if (efd.addr) {
-2
drivers/spi/spi-img-spfi.c
··· 720 720 clk_disable_unprepare(spfi->sys_clk); 721 721 } 722 722 723 - spi_master_put(master); 724 - 725 723 return 0; 726 724 } 727 725
-1
drivers/spi/spi-mt65xx.c
··· 685 685 pm_runtime_disable(&pdev->dev); 686 686 687 687 mtk_spi_reset(mdata); 688 - spi_master_put(master); 689 688 690 689 return 0; 691 690 }
+1
drivers/spi/spi-pxa2xx-pci.c
··· 214 214 return PTR_ERR(ssp->clk); 215 215 216 216 memset(&pi, 0, sizeof(pi)); 217 + pi.fwnode = dev->dev.fwnode; 217 218 pi.parent = &dev->dev; 218 219 pi.name = "pxa2xx-spi"; 219 220 pi.id = ssp->port_id;
-1
drivers/spi/spi-qup.c
··· 1030 1030 1031 1031 pm_runtime_put_noidle(&pdev->dev); 1032 1032 pm_runtime_disable(&pdev->dev); 1033 - spi_master_put(master); 1034 1033 1035 1034 return 0; 1036 1035 }
+3
drivers/spi/spi-sh-msiof.c
··· 262 262 263 263 for (k = 0; k < ARRAY_SIZE(sh_msiof_spi_div_table); k++) { 264 264 brps = DIV_ROUND_UP(div, sh_msiof_spi_div_table[k].div); 265 + /* SCR_BRDV_DIV_1 is valid only if BRPS is x 1/1 or x 1/2 */ 266 + if (sh_msiof_spi_div_table[k].div == 1 && brps > 2) 267 + continue; 265 268 if (brps <= 32) /* max of brdv is 32 */ 266 269 break; 267 270 }
+8 -2
drivers/spi/spi.c
··· 960 960 struct spi_transfer *xfer; 961 961 bool keep_cs = false; 962 962 int ret = 0; 963 - unsigned long ms = 1; 963 + unsigned long long ms = 1; 964 964 struct spi_statistics *statm = &master->statistics; 965 965 struct spi_statistics *stats = &msg->spi->statistics; 966 966 ··· 991 991 992 992 if (ret > 0) { 993 993 ret = 0; 994 - ms = xfer->len * 8 * 1000 / xfer->speed_hz; 994 + ms = 8LL * 1000LL * xfer->len; 995 + do_div(ms, xfer->speed_hz); 995 996 ms += ms + 100; /* some tolerance */ 997 + 998 + if (ms > UINT_MAX) 999 + ms = UINT_MAX; 996 1000 997 1001 ms = wait_for_completion_timeout(&master->xfer_completion, 998 1002 msecs_to_jiffies(ms)); ··· 1163 1159 if (ret < 0) { 1164 1160 dev_err(&master->dev, "Failed to power device: %d\n", 1165 1161 ret); 1162 + mutex_unlock(&master->io_mutex); 1166 1163 return; 1167 1164 } 1168 1165 } ··· 1179 1174 1180 1175 if (master->auto_runtime_pm) 1181 1176 pm_runtime_put(master->dev.parent); 1177 + mutex_unlock(&master->io_mutex); 1182 1178 return; 1183 1179 } 1184 1180 }
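The spi.c hunk widens the transfer-timeout arithmetic to unsigned long long (the kernel uses do_div()) and clamps the result to UINT_MAX before it reaches msecs_to_jiffies(), so a long transfer at a low speed_hz can no longer wrap 32-bit math and yield a far-too-short timeout. A rough user-space sketch of the same calculation; the function name and test values are illustrative only:

    #include <stdio.h>
    #include <limits.h>

    /* Estimated time for len bytes at speed_hz bits/s, doubled plus some slack. */
    static unsigned int xfer_timeout_ms(unsigned int len, unsigned int speed_hz)
    {
        unsigned long long ms;

        ms = 8ULL * 1000ULL * len;   /* bits to transfer, scaled to ms */
        ms /= speed_hz;              /* the kernel uses do_div() here */
        ms += ms + 100;              /* 2x tolerance plus 100 ms */

        if (ms > UINT_MAX)           /* clamp before msecs_to_jiffies() */
            ms = UINT_MAX;

        return (unsigned int)ms;
    }

    int main(void)
    {
        /* 16 MiB at 1 kHz wraps a 32-bit multiply and undershoots by roughly 100x. */
        printf("%u ms\n", xfer_timeout_ms(16u << 20, 1000));
        return 0;
    }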
+1
drivers/thermal/rcar_thermal.c
··· 504 504 if (IS_ERR(priv->zone)) { 505 505 dev_err(dev, "can't register thermal zone\n"); 506 506 ret = PTR_ERR(priv->zone); 507 + priv->zone = NULL; 507 508 goto error_unregister; 508 509 } 509 510
+9
drivers/usb/chipidea/udc.c
··· 949 949 int retval; 950 950 struct ci_hw_ep *hwep; 951 951 952 + /* 953 + * Unexpected USB controller behavior, caused by bad signal integrity 954 + * or ground reference problems, can lead to isr_setup_status_phase 955 + * being called with ci->status equal to NULL. 956 + * If this situation occurs, you should review your USB hardware design. 957 + */ 958 + if (WARN_ON_ONCE(!ci->status)) 959 + return -EPIPE; 960 + 952 961 hwep = (ci->ep0_dir == TX) ? ci->ep0out : ci->ep0in; 953 962 ci->status->context = ci; 954 963 ci->status->complete = isr_setup_status_complete;
+3 -1
drivers/usb/dwc3/dwc3-pci.c
··· 249 249 250 250 return pm_runtime_get(&dwc3->dev); 251 251 } 252 + #endif /* CONFIG_PM */ 252 253 254 + #ifdef CONFIG_PM_SLEEP 253 255 static int dwc3_pci_pm_dummy(struct device *dev) 254 256 { 255 257 /* ··· 264 262 */ 265 263 return 0; 266 264 } 267 - #endif /* CONFIG_PM */ 265 + #endif /* CONFIG_PM_SLEEP */ 268 266 269 267 static struct dev_pm_ops dwc3_pci_dev_pm_ops = { 270 268 SET_SYSTEM_SLEEP_PM_OPS(dwc3_pci_pm_dummy, dwc3_pci_pm_dummy)
+4 -1
drivers/usb/dwc3/gadget.c
··· 884 884 return DWC3_TRB_NUM - 1; 885 885 } 886 886 887 - trbs_left = dep->trb_dequeue - dep->trb_enqueue - 1; 887 + trbs_left = dep->trb_dequeue - dep->trb_enqueue; 888 888 trbs_left &= (DWC3_TRB_NUM - 1); 889 + 890 + if (dep->trb_dequeue < dep->trb_enqueue) 891 + trbs_left--; 889 892 890 893 return trbs_left; 891 894 }
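The dwc3 gadget.c hunk corrects the free-slot computation for its TRB ring (the surrounding code also accounts for a link TRB, which is not shown here). The underlying idea is the classic power-of-two ring where one slot is kept unused so that "full" and "empty" can be told apart. A generic sketch of that scheme, not the dwc3 arithmetic verbatim:

    #include <stdio.h>

    #define RING_SIZE 256u   /* power of two, like DWC3_TRB_NUM */

    /* Free slots when one slot is reserved to distinguish full from empty. */
    static unsigned int ring_space(unsigned int enq, unsigned int deq)
    {
        return (deq - enq - 1) & (RING_SIZE - 1);
    }

    int main(void)
    {
        printf("%u\n", ring_space(0, 0));     /* empty: 255 usable slots */
        printf("%u\n", ring_space(10, 11));   /* full: producer one slot behind consumer */
        printf("%u\n", ring_space(200, 100)); /* wrapped case: 155 free */
        return 0;
    }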
+1 -1
drivers/usb/gadget/function/f_eem.c
··· 342 342 struct sk_buff *skb2 = NULL; 343 343 struct usb_ep *in = port->in_ep; 344 344 int headroom, tailroom, padlen = 0; 345 - u16 len = skb->len; 345 + u16 len; 346 346 347 347 if (!skb) 348 348 return NULL;
+2
drivers/usb/gadget/udc/renesas_usb3.c
··· 106 106 107 107 /* DRD_CON */ 108 108 #define DRD_CON_PERI_CON BIT(24) 109 + #define DRD_CON_VBOUT BIT(0) 109 110 110 111 /* USB_INT_ENA_1 and USB_INT_STA_1 */ 111 112 #define USB_INT_1_B3_PLLWKUP BIT(31) ··· 364 363 { 365 364 /* FIXME: How to change host / peripheral mode as well? */ 366 365 usb3_set_bit(usb3, DRD_CON_PERI_CON, USB3_DRD_CON); 366 + usb3_clear_bit(usb3, DRD_CON_VBOUT, USB3_DRD_CON); 367 367 368 368 usb3_write(usb3, ~0, USB3_USB_INT_STA_1); 369 369 usb3_enable_irq_1(usb3, USB_INT_1_VBUS_CNG);
+5 -1
drivers/usb/host/xhci-ring.c
··· 850 850 spin_lock_irqsave(&xhci->lock, flags); 851 851 852 852 ep->stop_cmds_pending--; 853 + if (xhci->xhc_state & XHCI_STATE_REMOVING) { 854 + spin_unlock_irqrestore(&xhci->lock, flags); 855 + return; 856 + } 853 857 if (xhci->xhc_state & XHCI_STATE_DYING) { 854 858 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, 855 859 "Stop EP timer ran, but another timer marked " ··· 907 903 spin_unlock_irqrestore(&xhci->lock, flags); 908 904 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, 909 905 "Calling usb_hc_died()"); 910 - usb_hc_died(xhci_to_hcd(xhci)->primary_hcd); 906 + usb_hc_died(xhci_to_hcd(xhci)); 911 907 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, 912 908 "xHCI host controller is dead."); 913 909 }
+6 -2
drivers/usb/phy/phy-generic.c
··· 144 144 int usb_gen_phy_init(struct usb_phy *phy) 145 145 { 146 146 struct usb_phy_generic *nop = dev_get_drvdata(phy->dev); 147 + int ret; 147 148 148 149 if (!IS_ERR(nop->vcc)) { 149 150 if (regulator_enable(nop->vcc)) 150 151 dev_err(phy->dev, "Failed to enable power\n"); 151 152 } 152 153 153 - if (!IS_ERR(nop->clk)) 154 - clk_prepare_enable(nop->clk); 154 + if (!IS_ERR(nop->clk)) { 155 + ret = clk_prepare_enable(nop->clk); 156 + if (ret) 157 + return ret; 158 + } 155 159 156 160 nop_reset(nop); 157 161
+9 -2
drivers/usb/renesas_usbhs/mod.c
··· 282 282 if (usbhs_mod_is_host(priv)) 283 283 usbhs_write(priv, INTSTS1, ~irq_state.intsts1 & INTSTS1_MAGIC); 284 284 285 - usbhs_write(priv, BRDYSTS, ~irq_state.brdysts); 285 + /* 286 + * The driver should not clear the xxxSTS after the line of 287 + * "call irq callback functions" because each "if" statement is 288 + * possible to call the callback function for avoiding any side effects. 289 + */ 290 + if (irq_state.intsts0 & BRDY) 291 + usbhs_write(priv, BRDYSTS, ~irq_state.brdysts); 286 292 usbhs_write(priv, NRDYSTS, ~irq_state.nrdysts); 287 - usbhs_write(priv, BEMPSTS, ~irq_state.bempsts); 293 + if (irq_state.intsts0 & BEMP) 294 + usbhs_write(priv, BEMPSTS, ~irq_state.bempsts); 288 295 289 296 /* 290 297 * call irq callback functions
+1 -1
drivers/virtio/virtio_ring.c
··· 167 167 * making all of the arch DMA ops work on the vring device itself 168 168 * is a mess. For now, we use the parent device for DMA ops. 169 169 */ 170 - struct device *vring_dma_dev(const struct vring_virtqueue *vq) 170 + static struct device *vring_dma_dev(const struct vring_virtqueue *vq) 171 171 { 172 172 return vq->vq.vdev->dev.parent; 173 173 }
+1
fs/btrfs/ctree.h
··· 427 427 struct list_head ro_bgs; 428 428 struct list_head priority_tickets; 429 429 struct list_head tickets; 430 + u64 tickets_id; 430 431 431 432 struct rw_semaphore groups_sem; 432 433 /* for block groups in our same type */
+15 -8
fs/btrfs/extent-tree.c
··· 4966 4966 */ 4967 4967 static void btrfs_async_reclaim_metadata_space(struct work_struct *work) 4968 4968 { 4969 - struct reserve_ticket *last_ticket = NULL; 4970 4969 struct btrfs_fs_info *fs_info; 4971 4970 struct btrfs_space_info *space_info; 4972 4971 u64 to_reclaim; 4973 4972 int flush_state; 4974 4973 int commit_cycles = 0; 4974 + u64 last_tickets_id; 4975 4975 4976 4976 fs_info = container_of(work, struct btrfs_fs_info, async_reclaim_work); 4977 4977 space_info = __find_space_info(fs_info, BTRFS_BLOCK_GROUP_METADATA); ··· 4984 4984 spin_unlock(&space_info->lock); 4985 4985 return; 4986 4986 } 4987 - last_ticket = list_first_entry(&space_info->tickets, 4988 - struct reserve_ticket, list); 4987 + last_tickets_id = space_info->tickets_id; 4989 4988 spin_unlock(&space_info->lock); 4990 4989 4991 4990 flush_state = FLUSH_DELAYED_ITEMS_NR; ··· 5004 5005 space_info); 5005 5006 ticket = list_first_entry(&space_info->tickets, 5006 5007 struct reserve_ticket, list); 5007 - if (last_ticket == ticket) { 5008 + if (last_tickets_id == space_info->tickets_id) { 5008 5009 flush_state++; 5009 5010 } else { 5010 - last_ticket = ticket; 5011 + last_tickets_id = space_info->tickets_id; 5011 5012 flush_state = FLUSH_DELAYED_ITEMS_NR; 5012 5013 if (commit_cycles) 5013 5014 commit_cycles--; ··· 5383 5384 list_del_init(&ticket->list); 5384 5385 num_bytes -= ticket->bytes; 5385 5386 ticket->bytes = 0; 5387 + space_info->tickets_id++; 5386 5388 wake_up(&ticket->wait); 5387 5389 } else { 5388 5390 ticket->bytes -= num_bytes; ··· 5426 5426 num_bytes -= ticket->bytes; 5427 5427 space_info->bytes_may_use += ticket->bytes; 5428 5428 ticket->bytes = 0; 5429 + space_info->tickets_id++; 5429 5430 wake_up(&ticket->wait); 5430 5431 } else { 5431 5432 trace_btrfs_space_reservation(fs_info, "space_info", ··· 8217 8216 { 8218 8217 int ret; 8219 8218 struct btrfs_block_group_cache *block_group; 8219 + struct btrfs_space_info *space_info; 8220 8220 8221 8221 /* 8222 8222 * Mixed block groups will exclude before processing the log so we only ··· 8233 8231 if (!block_group) 8234 8232 return -EINVAL; 8235 8233 8236 - ret = btrfs_add_reserved_bytes(block_group, ins->offset, 8237 - ins->offset, 0); 8238 - BUG_ON(ret); /* logic error */ 8234 + space_info = block_group->space_info; 8235 + spin_lock(&space_info->lock); 8236 + spin_lock(&block_group->lock); 8237 + space_info->bytes_reserved += ins->offset; 8238 + block_group->reserved += ins->offset; 8239 + spin_unlock(&block_group->lock); 8240 + spin_unlock(&space_info->lock); 8241 + 8239 8242 ret = alloc_reserved_file_extent(trans, root, 0, root_objectid, 8240 8243 0, owner, offset, ins, 1); 8241 8244 btrfs_put_block_group(block_group);
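The extent-tree.c hunk replaces a saved ticket pointer (which can be freed and reallocated, making the reclaim worker wrongly conclude that nothing changed) with a monotonically increasing space_info->tickets_id that is bumped every time a ticket is satisfied. A small sketch of that generation-counter idea, with made-up names:

    #include <stdio.h>

    /* Bump a generation counter whenever a queued request is satisfied. */
    struct space_pool {
        unsigned long long tickets_id;   /* generation, only ever increases */
        int pending;
    };

    static void satisfy_one(struct space_pool *p)
    {
        if (p->pending > 0) {
            p->pending--;
            p->tickets_id++;   /* progress stays visible even if memory is reused */
        }
    }

    int main(void)
    {
        struct space_pool pool = { .tickets_id = 0, .pending = 2 };
        unsigned long long seen = pool.tickets_id;

        satisfy_one(&pool);
        if (pool.tickets_id != seen)
            printf("progress since last check; restart the flush sequence\n");
        else
            printf("no progress; escalate to the next flush state\n");
        return 0;
    }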
+1
fs/btrfs/tree-log.c
··· 2867 2867 2868 2868 if (log_root_tree->log_transid_committed >= root_log_ctx.log_transid) { 2869 2869 blk_finish_plug(&plug); 2870 + list_del_init(&root_log_ctx.list); 2870 2871 mutex_unlock(&log_root_tree->log_mutex); 2871 2872 ret = root_log_ctx.log_ret; 2872 2873 goto out;
+1 -1
fs/ceph/dir.c
··· 597 597 if (is_hash_order(new_pos)) { 598 598 /* no need to reset last_name for a forward seek when 599 599 * dentries are sotred in hash order */ 600 - } else if (fi->frag |= fpos_frag(new_pos)) { 600 + } else if (fi->frag != fpos_frag(new_pos)) { 601 601 return true; 602 602 } 603 603 rinfo = fi->last_readdir ? &fi->last_readdir->r_reply_info : NULL;
+29 -12
fs/crypto/policy.c
··· 11 11 #include <linux/random.h> 12 12 #include <linux/string.h> 13 13 #include <linux/fscrypto.h> 14 + #include <linux/mount.h> 14 15 15 16 static int inode_has_encryption_context(struct inode *inode) 16 17 { ··· 93 92 return inode->i_sb->s_cop->set_context(inode, &ctx, sizeof(ctx), NULL); 94 93 } 95 94 96 - int fscrypt_process_policy(struct inode *inode, 95 + int fscrypt_process_policy(struct file *filp, 97 96 const struct fscrypt_policy *policy) 98 97 { 98 + struct inode *inode = file_inode(filp); 99 + int ret; 100 + 101 + if (!inode_owner_or_capable(inode)) 102 + return -EACCES; 103 + 99 104 if (policy->version != 0) 100 105 return -EINVAL; 101 106 107 + ret = mnt_want_write_file(filp); 108 + if (ret) 109 + return ret; 110 + 102 111 if (!inode_has_encryption_context(inode)) { 103 - if (!inode->i_sb->s_cop->empty_dir) 104 - return -EOPNOTSUPP; 105 - if (!inode->i_sb->s_cop->empty_dir(inode)) 106 - return -ENOTEMPTY; 107 - return create_encryption_context_from_policy(inode, policy); 112 + if (!S_ISDIR(inode->i_mode)) 113 + ret = -EINVAL; 114 + else if (!inode->i_sb->s_cop->empty_dir) 115 + ret = -EOPNOTSUPP; 116 + else if (!inode->i_sb->s_cop->empty_dir(inode)) 117 + ret = -ENOTEMPTY; 118 + else 119 + ret = create_encryption_context_from_policy(inode, 120 + policy); 121 + } else if (!is_encryption_context_consistent_with_policy(inode, 122 + policy)) { 123 + printk(KERN_WARNING 124 + "%s: Policy inconsistent with encryption context\n", 125 + __func__); 126 + ret = -EINVAL; 108 127 } 109 128 110 - if (is_encryption_context_consistent_with_policy(inode, policy)) 111 - return 0; 112 - 113 - printk(KERN_WARNING "%s: Policy inconsistent with encryption context\n", 114 - __func__); 115 - return -EINVAL; 129 + mnt_drop_write_file(filp); 130 + return ret; 116 131 } 117 132 EXPORT_SYMBOL(fscrypt_process_policy); 118 133
+1 -1
fs/ext4/ioctl.c
··· 776 776 (struct fscrypt_policy __user *)arg, 777 777 sizeof(policy))) 778 778 return -EFAULT; 779 - return fscrypt_process_policy(inode, &policy); 779 + return fscrypt_process_policy(filp, &policy); 780 780 #else 781 781 return -EOPNOTSUPP; 782 782 #endif
+1 -8
fs/f2fs/file.c
··· 1757 1757 { 1758 1758 struct fscrypt_policy policy; 1759 1759 struct inode *inode = file_inode(filp); 1760 - int ret; 1761 1760 1762 1761 if (copy_from_user(&policy, (struct fscrypt_policy __user *)arg, 1763 1762 sizeof(policy))) 1764 1763 return -EFAULT; 1765 1764 1766 - ret = mnt_want_write_file(filp); 1767 - if (ret) 1768 - return ret; 1769 - 1770 1765 f2fs_update_time(F2FS_I_SB(inode), REQ_TIME); 1771 - ret = fscrypt_process_policy(inode, &policy); 1772 1766 1773 - mnt_drop_write_file(filp); 1774 - return ret; 1767 + return fscrypt_process_policy(filp, &policy); 1775 1768 } 1776 1769 1777 1770 static int f2fs_ioc_get_encryption_policy(struct file *filp, unsigned long arg)
+4 -3
fs/fuse/file.c
··· 530 530 req->out.args[0].size = count; 531 531 } 532 532 533 - static void fuse_release_user_pages(struct fuse_req *req, int write) 533 + static void fuse_release_user_pages(struct fuse_req *req, bool should_dirty) 534 534 { 535 535 unsigned i; 536 536 537 537 for (i = 0; i < req->num_pages; i++) { 538 538 struct page *page = req->pages[i]; 539 - if (write) 539 + if (should_dirty) 540 540 set_page_dirty_lock(page); 541 541 put_page(page); 542 542 } ··· 1320 1320 loff_t *ppos, int flags) 1321 1321 { 1322 1322 int write = flags & FUSE_DIO_WRITE; 1323 + bool should_dirty = !write && iter_is_iovec(iter); 1323 1324 int cuse = flags & FUSE_DIO_CUSE; 1324 1325 struct file *file = io->file; 1325 1326 struct inode *inode = file->f_mapping->host; ··· 1364 1363 nres = fuse_send_read(req, io, pos, nbytes, owner); 1365 1364 1366 1365 if (!io->async) 1367 - fuse_release_user_pages(req, !write); 1366 + fuse_release_user_pages(req, should_dirty); 1368 1367 if (req->out.h.error) { 1369 1368 err = req->out.h.error; 1370 1369 break;
+2 -2
fs/overlayfs/super.c
··· 835 835 goto out_dput; 836 836 837 837 err = vfs_removexattr(work, XATTR_NAME_POSIX_ACL_DEFAULT); 838 - if (err && err != -ENODATA) 838 + if (err && err != -ENODATA && err != -EOPNOTSUPP) 839 839 goto out_dput; 840 840 841 841 err = vfs_removexattr(work, XATTR_NAME_POSIX_ACL_ACCESS); 842 - if (err && err != -ENODATA) 842 + if (err && err != -ENODATA && err != -EOPNOTSUPP) 843 843 goto out_dput; 844 844 845 845 /* Clear any inherited mode bits */
+2
fs/proc/task_mmu.c
··· 581 581 mss->anonymous_thp += HPAGE_PMD_SIZE; 582 582 else if (PageSwapBacked(page)) 583 583 mss->shmem_thp += HPAGE_PMD_SIZE; 584 + else if (is_zone_device_page(page)) 585 + /* pass */; 584 586 else 585 587 VM_BUG_ON_PAGE(1, page); 586 588 smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd));
+2 -3
include/linux/fscrypto.h
··· 274 274 extern int fscrypt_zeroout_range(struct inode *, pgoff_t, sector_t, 275 275 unsigned int); 276 276 /* policy.c */ 277 - extern int fscrypt_process_policy(struct inode *, 278 - const struct fscrypt_policy *); 277 + extern int fscrypt_process_policy(struct file *, const struct fscrypt_policy *); 279 278 extern int fscrypt_get_policy(struct inode *, struct fscrypt_policy *); 280 279 extern int fscrypt_has_permitted_context(struct inode *, struct inode *); 281 280 extern int fscrypt_inherit_context(struct inode *, struct inode *, ··· 344 345 } 345 346 346 347 /* policy.c */ 347 - static inline int fscrypt_notsupp_process_policy(struct inode *i, 348 + static inline int fscrypt_notsupp_process_policy(struct file *f, 348 349 const struct fscrypt_policy *p) 349 350 { 350 351 return -EOPNOTSUPP;
+4 -3
include/linux/thread_info.h
··· 118 118 extern void __check_object_size(const void *ptr, unsigned long n, 119 119 bool to_user); 120 120 121 - static inline void check_object_size(const void *ptr, unsigned long n, 122 - bool to_user) 121 + static __always_inline void check_object_size(const void *ptr, unsigned long n, 122 + bool to_user) 123 123 { 124 - __check_object_size(ptr, n, to_user); 124 + if (!__builtin_constant_p(n)) 125 + __check_object_size(ptr, n, to_user); 125 126 } 126 127 #else 127 128 static inline void check_object_size(const void *ptr, unsigned long n,
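The thread_info.h hunk makes check_object_size() __always_inline and skips the runtime hardened-usercopy check when the copy length is a compile-time constant, since constant-size copies are already validated statically. A tiny illustration of how __builtin_constant_p() behaves (gcc/clang, built with optimization so the inlined call sites fold); the helper names here are made up:

    #include <stdio.h>

    static long runtime_checks;

    static void expensive_check(unsigned long n)
    {
        (void)n;
        runtime_checks++;   /* stand-in for __check_object_size() */
    }

    static inline void maybe_check(unsigned long n)
    {
        /* Constant sizes fold away at -O2; only variable sizes pay the cost. */
        if (!__builtin_constant_p(n))
            expensive_check(n);
    }

    int main(int argc, char **argv)
    {
        maybe_check(64);                    /* constant: optimized out */
        maybe_check((unsigned long)argc);   /* variable: checked at run time */
        printf("runtime checks: %ld\n", runtime_checks);
        return 0;
    }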
+2 -3
include/scsi/scsi_transport_sas.h
··· 11 11 struct request; 12 12 13 13 #if !IS_ENABLED(CONFIG_SCSI_SAS_ATTRS) 14 - static inline int is_sas_attached(struct scsi_device *sdev) 14 + static inline int scsi_is_sas_rphy(const struct device *sdev) 15 15 { 16 16 return 0; 17 17 } 18 18 #else 19 - extern int is_sas_attached(struct scsi_device *sdev); 19 + extern int scsi_is_sas_rphy(const struct device *); 20 20 #endif 21 21 22 22 static inline int sas_protocol_ata(enum sas_protocol proto) ··· 202 202 extern void sas_rphy_remove(struct sas_rphy *); 203 203 extern void sas_rphy_delete(struct sas_rphy *); 204 204 extern void sas_rphy_unlink(struct sas_rphy *); 205 - extern int scsi_is_sas_rphy(const struct device *); 206 205 207 206 struct sas_port *sas_port_alloc(struct device *, int); 208 207 struct sas_port *sas_port_alloc_num(struct device *);
+9
kernel/memremap.c
··· 247 247 align_start = res->start & ~(SECTION_SIZE - 1); 248 248 align_size = ALIGN(resource_size(res), SECTION_SIZE); 249 249 arch_remove_memory(align_start, align_size); 250 + untrack_pfn(NULL, PHYS_PFN(align_start), align_size); 250 251 pgmap_radix_release(res); 251 252 dev_WARN_ONCE(dev, pgmap->altmap && pgmap->altmap->alloc, 252 253 "%s: failed to free all reserved pages\n", __func__); ··· 283 282 struct percpu_ref *ref, struct vmem_altmap *altmap) 284 283 { 285 284 resource_size_t key, align_start, align_size, align_end; 285 + pgprot_t pgprot = PAGE_KERNEL; 286 286 struct dev_pagemap *pgmap; 287 287 struct page_map *page_map; 288 288 int error, nid, is_ram; ··· 353 351 if (nid < 0) 354 352 nid = numa_mem_id(); 355 353 354 + error = track_pfn_remap(NULL, &pgprot, PHYS_PFN(align_start), 0, 355 + align_size); 356 + if (error) 357 + goto err_pfn_remap; 358 + 356 359 error = arch_add_memory(nid, align_start, align_size, true); 357 360 if (error) 358 361 goto err_add_memory; ··· 378 371 return __va(res->start); 379 372 380 373 err_add_memory: 374 + untrack_pfn(NULL, PHYS_PFN(align_start), align_size); 375 + err_pfn_remap: 381 376 err_radix: 382 377 pgmap_radix_release(res); 383 378 devres_free(page_map);
+10 -1
kernel/power/qos.c
··· 482 482 return; 483 483 } 484 484 485 - cancel_delayed_work_sync(&req->work); 485 + /* 486 + * This function may be called very early during boot, for example, 487 + * from of_clk_init(), where irq needs to stay disabled. 488 + * cancel_delayed_work_sync() assumes that irq is enabled on 489 + * invocation and re-enables it on return. Avoid calling it until 490 + * workqueue is initialized. 491 + */ 492 + if (keventd_up()) 493 + cancel_delayed_work_sync(&req->work); 494 + 486 495 __pm_qos_update_request(req, new_value); 487 496 } 488 497 EXPORT_SYMBOL_GPL(pm_qos_update_request);
+2 -2
mm/huge_memory.c
··· 1078 1078 goto out; 1079 1079 1080 1080 page = pmd_page(*pmd); 1081 - VM_BUG_ON_PAGE(!PageHead(page), page); 1081 + VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page); 1082 1082 if (flags & FOLL_TOUCH) 1083 1083 touch_pmd(vma, addr, pmd); 1084 1084 if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) { ··· 1116 1116 } 1117 1117 skip_mlock: 1118 1118 page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT; 1119 - VM_BUG_ON_PAGE(!PageCompound(page), page); 1119 + VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page); 1120 1120 if (flags & FOLL_GET) 1121 1121 get_page(page); 1122 1122
+35 -26
mm/usercopy.c
··· 134 134 return NULL; 135 135 } 136 136 137 - static inline const char *check_heap_object(const void *ptr, unsigned long n, 138 - bool to_user) 137 + /* Checks for allocs that are marked in some way as spanning multiple pages. */ 138 + static inline const char *check_page_span(const void *ptr, unsigned long n, 139 + struct page *page, bool to_user) 139 140 { 140 - struct page *page, *endpage; 141 + #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN 141 142 const void *end = ptr + n - 1; 143 + struct page *endpage; 142 144 bool is_reserved, is_cma; 143 - 144 - /* 145 - * Some architectures (arm64) return true for virt_addr_valid() on 146 - * vmalloced addresses. Work around this by checking for vmalloc 147 - * first. 148 - */ 149 - if (is_vmalloc_addr(ptr)) 150 - return NULL; 151 - 152 - if (!virt_addr_valid(ptr)) 153 - return NULL; 154 - 155 - page = virt_to_head_page(ptr); 156 - 157 - /* Check slab allocator for flags and size. */ 158 - if (PageSlab(page)) 159 - return __check_heap_object(ptr, n, page); 160 145 161 146 /* 162 147 * Sometimes the kernel data regions are not marked Reserved (see ··· 171 186 ((unsigned long)end & (unsigned long)PAGE_MASK))) 172 187 return NULL; 173 188 174 - /* Allow if start and end are inside the same compound page. */ 189 + /* Allow if fully inside the same compound (__GFP_COMP) page. */ 175 190 endpage = virt_to_head_page(end); 176 191 if (likely(endpage == page)) 177 192 return NULL; ··· 184 199 is_reserved = PageReserved(page); 185 200 is_cma = is_migrate_cma_page(page); 186 201 if (!is_reserved && !is_cma) 187 - goto reject; 202 + return "<spans multiple pages>"; 188 203 189 204 for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) { 190 205 page = virt_to_head_page(ptr); 191 206 if (is_reserved && !PageReserved(page)) 192 - goto reject; 207 + return "<spans Reserved and non-Reserved pages>"; 193 208 if (is_cma && !is_migrate_cma_page(page)) 194 - goto reject; 209 + return "<spans CMA and non-CMA pages>"; 195 210 } 211 + #endif 196 212 197 213 return NULL; 214 + } 198 215 199 - reject: 200 - return "<spans multiple pages>"; 216 + static inline const char *check_heap_object(const void *ptr, unsigned long n, 217 + bool to_user) 218 + { 219 + struct page *page; 220 + 221 + /* 222 + * Some architectures (arm64) return true for virt_addr_valid() on 223 + * vmalloced addresses. Work around this by checking for vmalloc 224 + * first. 225 + */ 226 + if (is_vmalloc_addr(ptr)) 227 + return NULL; 228 + 229 + if (!virt_addr_valid(ptr)) 230 + return NULL; 231 + 232 + page = virt_to_head_page(ptr); 233 + 234 + /* Check slab allocator for flags and size. */ 235 + if (PageSlab(page)) 236 + return __check_heap_object(ptr, n, page); 237 + 238 + /* Verify object does not incorrectly span multiple pages. */ 239 + return check_page_span(ptr, n, page, to_user); 201 240 } 202 241 203 242 /*
+3 -1
scripts/package/builddeb
··· 332 332 (cd $objtree; find tools/objtool -type f -executable) >> "$objtree/debian/hdrobjfiles" 333 333 fi 334 334 (cd $objtree; find arch/$SRCARCH/include Module.symvers include scripts -type f) >> "$objtree/debian/hdrobjfiles" 335 - (cd $objtree; find scripts/gcc-plugins -name \*.so -o -name gcc-common.h) >> "$objtree/debian/hdrobjfiles" 335 + if grep -q '^CONFIG_GCC_PLUGINS=y' $KCONFIG_CONFIG ; then 336 + (cd $objtree; find scripts/gcc-plugins -name \*.so -o -name gcc-common.h) >> "$objtree/debian/hdrobjfiles" 337 + fi 336 338 destdir=$kernel_headers_dir/usr/src/linux-headers-$version 337 339 mkdir -p "$destdir" 338 340 (cd $srctree; tar -c -f - -T -) < "$objtree/debian/hdrsrcfiles" | (cd $destdir; tar -xf -)
+11
security/Kconfig
··· 147 147 or are part of the kernel text. This kills entire classes 148 148 of heap overflow exploits and similar kernel memory exposures. 149 149 150 + config HARDENED_USERCOPY_PAGESPAN 151 + bool "Refuse to copy allocations that span multiple pages" 152 + depends on HARDENED_USERCOPY 153 + depends on EXPERT 154 + help 155 + When a multi-page allocation is done without __GFP_COMP, 156 + hardened usercopy will reject attempts to copy it. There are, 157 + however, several cases of this in the kernel that have not all 158 + been removed. This config is intended to be used only while 159 + trying to find such users. 160 + 150 161 source security/selinux/Kconfig 151 162 source security/smack/Kconfig 152 163 source security/tomoyo/Kconfig
+3 -1
sound/core/rawmidi.c
··· 1633 1633 return -EBUSY; 1634 1634 } 1635 1635 list_add_tail(&rmidi->list, &snd_rawmidi_devices); 1636 + mutex_unlock(&register_mutex); 1636 1637 err = snd_register_device(SNDRV_DEVICE_TYPE_RAWMIDI, 1637 1638 rmidi->card, rmidi->device, 1638 1639 &snd_rawmidi_f_ops, rmidi, &rmidi->dev); 1639 1640 if (err < 0) { 1640 1641 rmidi_err(rmidi, "unable to register\n"); 1642 + mutex_lock(&register_mutex); 1641 1643 list_del(&rmidi->list); 1642 1644 mutex_unlock(&register_mutex); 1643 1645 return err; ··· 1647 1645 if (rmidi->ops && rmidi->ops->dev_register && 1648 1646 (err = rmidi->ops->dev_register(rmidi)) < 0) { 1649 1647 snd_unregister_device(&rmidi->dev); 1648 + mutex_lock(&register_mutex); 1650 1649 list_del(&rmidi->list); 1651 1650 mutex_unlock(&register_mutex); 1652 1651 return err; ··· 1680 1677 } 1681 1678 } 1682 1679 #endif /* CONFIG_SND_OSSEMUL */ 1683 - mutex_unlock(&register_mutex); 1684 1680 sprintf(name, "midi%d", rmidi->device); 1685 1681 entry = snd_info_create_card_entry(rmidi->card, name, rmidi->card->proc_root); 1686 1682 if (entry) {
+32 -2
sound/core/timer.c
··· 35 35 #include <sound/initval.h> 36 36 #include <linux/kmod.h> 37 37 38 + /* internal flags */ 39 + #define SNDRV_TIMER_IFLG_PAUSED 0x00010000 40 + 38 41 #if IS_ENABLED(CONFIG_SND_HRTIMER) 39 42 #define DEFAULT_TIMER_LIMIT 4 40 43 #else ··· 297 294 get_device(&timer->card->card_dev); 298 295 timeri->slave_class = tid->dev_sclass; 299 296 timeri->slave_id = slave_id; 300 - if (list_empty(&timer->open_list_head) && timer->hw.open) 301 - timer->hw.open(timer); 297 + 298 + if (list_empty(&timer->open_list_head) && timer->hw.open) { 299 + int err = timer->hw.open(timer); 300 + if (err) { 301 + kfree(timeri->owner); 302 + kfree(timeri); 303 + 304 + if (timer->card) 305 + put_device(&timer->card->card_dev); 306 + module_put(timer->module); 307 + mutex_unlock(&register_mutex); 308 + return err; 309 + } 310 + } 311 + 302 312 list_add_tail(&timeri->open_list, &timer->open_list_head); 303 313 snd_timer_check_master(timeri); 304 314 mutex_unlock(&register_mutex); ··· 542 526 } 543 527 } 544 528 timeri->flags &= ~(SNDRV_TIMER_IFLG_RUNNING | SNDRV_TIMER_IFLG_START); 529 + if (stop) 530 + timeri->flags &= ~SNDRV_TIMER_IFLG_PAUSED; 531 + else 532 + timeri->flags |= SNDRV_TIMER_IFLG_PAUSED; 545 533 snd_timer_notify1(timeri, stop ? SNDRV_TIMER_EVENT_STOP : 546 534 SNDRV_TIMER_EVENT_CONTINUE); 547 535 unlock: ··· 607 587 */ 608 588 int snd_timer_continue(struct snd_timer_instance *timeri) 609 589 { 590 + /* timer can continue only after pause */ 591 + if (!(timeri->flags & SNDRV_TIMER_IFLG_PAUSED)) 592 + return -EINVAL; 593 + 610 594 if (timeri->flags & SNDRV_TIMER_IFLG_SLAVE) 611 595 return snd_timer_start_slave(timeri, false); 612 596 else ··· 837 813 timer->tmr_subdevice = tid->subdevice; 838 814 if (id) 839 815 strlcpy(timer->id, id, sizeof(timer->id)); 816 + timer->sticks = 1; 840 817 INIT_LIST_HEAD(&timer->device_list); 841 818 INIT_LIST_HEAD(&timer->open_list_head); 842 819 INIT_LIST_HEAD(&timer->active_list_head); ··· 1842 1817 tu = file->private_data; 1843 1818 if (!tu->timeri) 1844 1819 return -EBADFD; 1820 + /* start timer instead of continue if it's not used before */ 1821 + if (!(tu->timeri->flags & SNDRV_TIMER_IFLG_PAUSED)) 1822 + return snd_timer_user_start(file); 1845 1823 tu->timeri->lost = 0; 1846 1824 return (err = snd_timer_continue(tu->timeri)) < 0 ? err : 0; 1847 1825 } ··· 1986 1958 tu->qused--; 1987 1959 spin_unlock_irq(&tu->qlock); 1988 1960 1961 + mutex_lock(&tu->ioctl_lock); 1989 1962 if (tu->tread) { 1990 1963 if (copy_to_user(buffer, &tu->tqueue[qhead], 1991 1964 sizeof(struct snd_timer_tread))) ··· 1996 1967 sizeof(struct snd_timer_read))) 1997 1968 err = -EFAULT; 1998 1969 } 1970 + mutex_unlock(&tu->ioctl_lock); 1999 1971 2000 1972 spin_lock_irq(&tu->qlock); 2001 1973 if (err < 0)
-1
sound/firewire/fireworks/fireworks.h
··· 108 108 u8 *resp_buf; 109 109 u8 *pull_ptr; 110 110 u8 *push_ptr; 111 - unsigned int resp_queues; 112 111 }; 113 112 114 113 int snd_efw_transaction_cmd(struct fw_unit *unit,
+53 -20
sound/firewire/fireworks/fireworks_hwdep.c
··· 25 25 { 26 26 unsigned int length, till_end, type; 27 27 struct snd_efw_transaction *t; 28 + u8 *pull_ptr; 28 29 long count = 0; 29 30 30 31 if (remained < sizeof(type) + sizeof(struct snd_efw_transaction)) ··· 39 38 buf += sizeof(type); 40 39 41 40 /* write into buffer as many responses as possible */ 42 - while (efw->resp_queues > 0) { 43 - t = (struct snd_efw_transaction *)(efw->pull_ptr); 41 + spin_lock_irq(&efw->lock); 42 + 43 + /* 44 + * When another task reaches here during this task's access to user 45 + * space, it picks up current position in buffer and can read the same 46 + * series of responses. 47 + */ 48 + pull_ptr = efw->pull_ptr; 49 + 50 + while (efw->push_ptr != pull_ptr) { 51 + t = (struct snd_efw_transaction *)(pull_ptr); 44 52 length = be32_to_cpu(t->length) * sizeof(__be32); 45 53 46 54 /* confirm enough space for this response */ ··· 59 49 /* copy from ring buffer to user buffer */ 60 50 while (length > 0) { 61 51 till_end = snd_efw_resp_buf_size - 62 - (unsigned int)(efw->pull_ptr - efw->resp_buf); 52 + (unsigned int)(pull_ptr - efw->resp_buf); 63 53 till_end = min_t(unsigned int, length, till_end); 64 54 65 - if (copy_to_user(buf, efw->pull_ptr, till_end)) 55 + spin_unlock_irq(&efw->lock); 56 + 57 + if (copy_to_user(buf, pull_ptr, till_end)) 66 58 return -EFAULT; 67 59 68 - efw->pull_ptr += till_end; 69 - if (efw->pull_ptr >= efw->resp_buf + 70 - snd_efw_resp_buf_size) 71 - efw->pull_ptr -= snd_efw_resp_buf_size; 60 + spin_lock_irq(&efw->lock); 61 + 62 + pull_ptr += till_end; 63 + if (pull_ptr >= efw->resp_buf + snd_efw_resp_buf_size) 64 + pull_ptr -= snd_efw_resp_buf_size; 72 65 73 66 length -= till_end; 74 67 buf += till_end; 75 68 count += till_end; 76 69 remained -= till_end; 77 70 } 78 - 79 - efw->resp_queues--; 80 71 } 72 + 73 + /* 74 + * All of tasks can read from the buffer nearly simultaneously, but the 75 + * last position for each task is different depending on the length of 76 + * given buffer. Here, for simplicity, a position of buffer is set by 77 + * the latest task. It's better for a listening application to allow one 78 + * thread to read from the buffer. Unless, each task can read different 79 + * sequence of responses depending on variation of buffer length. 
80 + */ 81 + efw->pull_ptr = pull_ptr; 82 + 83 + spin_unlock_irq(&efw->lock); 81 84 82 85 return count; 83 86 } ··· 99 76 hwdep_read_locked(struct snd_efw *efw, char __user *buf, long count, 100 77 loff_t *offset) 101 78 { 102 - union snd_firewire_event event; 79 + union snd_firewire_event event = { 80 + .lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS, 81 + }; 103 82 104 - memset(&event, 0, sizeof(event)); 83 + spin_lock_irq(&efw->lock); 105 84 106 - event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS; 107 85 event.lock_status.status = (efw->dev_lock_count > 0); 108 86 efw->dev_lock_changed = false; 87 + 88 + spin_unlock_irq(&efw->lock); 109 89 110 90 count = min_t(long, count, sizeof(event.lock_status)); 111 91 ··· 124 98 { 125 99 struct snd_efw *efw = hwdep->private_data; 126 100 DEFINE_WAIT(wait); 101 + bool dev_lock_changed; 102 + bool queued; 127 103 128 104 spin_lock_irq(&efw->lock); 129 105 130 - while ((!efw->dev_lock_changed) && (efw->resp_queues == 0)) { 106 + dev_lock_changed = efw->dev_lock_changed; 107 + queued = efw->push_ptr != efw->pull_ptr; 108 + 109 + while (!dev_lock_changed && !queued) { 131 110 prepare_to_wait(&efw->hwdep_wait, &wait, TASK_INTERRUPTIBLE); 132 111 spin_unlock_irq(&efw->lock); 133 112 schedule(); ··· 140 109 if (signal_pending(current)) 141 110 return -ERESTARTSYS; 142 111 spin_lock_irq(&efw->lock); 112 + dev_lock_changed = efw->dev_lock_changed; 113 + queued = efw->push_ptr != efw->pull_ptr; 143 114 } 144 115 145 - if (efw->dev_lock_changed) 146 - count = hwdep_read_locked(efw, buf, count, offset); 147 - else if (efw->resp_queues > 0) 148 - count = hwdep_read_resp_buf(efw, buf, count, offset); 149 - 150 116 spin_unlock_irq(&efw->lock); 117 + 118 + if (dev_lock_changed) 119 + count = hwdep_read_locked(efw, buf, count, offset); 120 + else if (queued) 121 + count = hwdep_read_resp_buf(efw, buf, count, offset); 151 122 152 123 return count; 153 124 } ··· 193 160 poll_wait(file, &efw->hwdep_wait, wait); 194 161 195 162 spin_lock_irq(&efw->lock); 196 - if (efw->dev_lock_changed || (efw->resp_queues > 0)) 163 + if (efw->dev_lock_changed || efw->pull_ptr != efw->push_ptr) 197 164 events = POLLIN | POLLRDNORM; 198 165 else 199 166 events = 0;
+2 -2
sound/firewire/fireworks/fireworks_proc.c
··· 188 188 else 189 189 consumed = (unsigned int)(efw->push_ptr - efw->pull_ptr); 190 190 191 - snd_iprintf(buffer, "%d %d/%d\n", 192 - efw->resp_queues, consumed, snd_efw_resp_buf_size); 191 + snd_iprintf(buffer, "%d/%d\n", 192 + consumed, snd_efw_resp_buf_size); 193 193 } 194 194 195 195 static void
+2 -3
sound/firewire/fireworks/fireworks_transaction.c
··· 121 121 size_t capacity, till_end; 122 122 struct snd_efw_transaction *t; 123 123 124 - spin_lock_irq(&efw->lock); 125 - 126 124 t = (struct snd_efw_transaction *)data; 127 125 length = min_t(size_t, be32_to_cpu(t->length) * sizeof(u32), length); 126 + 127 + spin_lock_irq(&efw->lock); 128 128 129 129 if (efw->push_ptr < efw->pull_ptr) 130 130 capacity = (unsigned int)(efw->pull_ptr - efw->push_ptr); ··· 155 155 } 156 156 157 157 /* for hwdep */ 158 - efw->resp_queues++; 159 158 wake_up(&efw->hwdep_wait); 160 159 161 160 *rcode = RCODE_COMPLETE;
+11 -22
sound/firewire/tascam/tascam-hwdep.c
··· 16 16 17 17 #include "tascam.h" 18 18 19 - static long hwdep_read_locked(struct snd_tscm *tscm, char __user *buf, 20 - long count) 21 - { 22 - union snd_firewire_event event; 23 - 24 - memset(&event, 0, sizeof(event)); 25 - 26 - event.lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS; 27 - event.lock_status.status = (tscm->dev_lock_count > 0); 28 - tscm->dev_lock_changed = false; 29 - 30 - count = min_t(long, count, sizeof(event.lock_status)); 31 - 32 - if (copy_to_user(buf, &event, count)) 33 - return -EFAULT; 34 - 35 - return count; 36 - } 37 - 38 19 static long hwdep_read(struct snd_hwdep *hwdep, char __user *buf, long count, 39 20 loff_t *offset) 40 21 { 41 22 struct snd_tscm *tscm = hwdep->private_data; 42 23 DEFINE_WAIT(wait); 43 - union snd_firewire_event event; 24 + union snd_firewire_event event = { 25 + .lock_status.type = SNDRV_FIREWIRE_EVENT_LOCK_STATUS, 26 + }; 44 27 45 28 spin_lock_irq(&tscm->lock); 46 29 ··· 37 54 spin_lock_irq(&tscm->lock); 38 55 } 39 56 40 - memset(&event, 0, sizeof(event)); 41 - count = hwdep_read_locked(tscm, buf, count); 57 + event.lock_status.status = (tscm->dev_lock_count > 0); 58 + tscm->dev_lock_changed = false; 59 + 42 60 spin_unlock_irq(&tscm->lock); 61 + 62 + count = min_t(long, count, sizeof(event.lock_status)); 63 + 64 + if (copy_to_user(buf, &event, count)) 65 + return -EFAULT; 43 66 44 67 return count; 45 68 }
+15
sound/pci/hda/patch_realtek.c
··· 4855 4855 ALC221_FIXUP_HP_FRONT_MIC, 4856 4856 ALC292_FIXUP_TPT460, 4857 4857 ALC298_FIXUP_SPK_VOLUME, 4858 + ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER, 4858 4859 }; 4859 4860 4860 4861 static const struct hda_fixup alc269_fixups[] = { ··· 5517 5516 .chained = true, 5518 5517 .chain_id = ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, 5519 5518 }, 5519 + [ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER] = { 5520 + .type = HDA_FIXUP_PINS, 5521 + .v.pins = (const struct hda_pintbl[]) { 5522 + { 0x1b, 0x90170151 }, 5523 + { } 5524 + }, 5525 + .chained = true, 5526 + .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE 5527 + }, 5520 5528 }; 5521 5529 5522 5530 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 5570 5560 SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), 5571 5561 SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK), 5572 5562 SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE), 5563 + SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), 5573 5564 SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE), 5574 5565 SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE), 5575 5566 SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME), ··· 5904 5893 {0x21, 0x02211030}), 5905 5894 SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5906 5895 {0x12, 0x90a60170}, 5896 + {0x14, 0x90170120}, 5897 + {0x21, 0x02211030}), 5898 + SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell Inspiron 5468", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, 5899 + {0x12, 0x90a60180}, 5907 5900 {0x14, 0x90170120}, 5908 5901 {0x21, 0x02211030}), 5909 5902 SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+1
sound/usb/quirks.c
··· 1141 1141 case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */ 1142 1142 case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */ 1143 1143 case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */ 1144 + case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */ 1144 1145 case USB_ID(0x1de7, 0x0013): /* Phoenix Audio MT202exe */ 1145 1146 case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */ 1146 1147 case USB_ID(0x1de7, 0x0114): /* Phoenix Audio MT202pcs */
+1 -1
tools/iio/iio_generic_buffer.c
··· 456 456 457 457 if (notrigger) { 458 458 printf("trigger-less mode selected\n"); 459 - } if (trig_num >= 0) { 459 + } else if (trig_num >= 0) { 460 460 char *trig_dev_name; 461 461 ret = asprintf(&trig_dev_name, "%strigger%d", iio_dir, trig_num); 462 462 if (ret < 0) {