Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.17-rc8 into staging-next

We need the staging fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4996 -1874
+3
.mailmap
··· 187 187 Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com> 188 188 Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.cz> 189 189 Jiri Slaby <jirislaby@kernel.org> <xslaby@fi.muni.cz> 190 + Jisheng Zhang <jszhang@kernel.org> <jszhang@marvell.com> 191 + Jisheng Zhang <jszhang@kernel.org> <Jisheng.Zhang@synaptics.com> 190 192 Johan Hovold <johan@kernel.org> <jhovold@gmail.com> 191 193 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com> 192 194 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de> ··· 218 216 Krishna Manikandan <quic_mkrishn@quicinc.com> <mkrishn@codeaurora.org> 219 217 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com> 220 218 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com> 219 + Krzysztof Kozlowski <krzk@kernel.org> <krzysztof.kozlowski@canonical.com> 221 220 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> 222 221 Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org> 223 222 Leonardo Bras <leobras.c@gmail.com> <leonardo@linux.ibm.com>
+6
CREDITS
··· 895 895 S: Warrendale, Pennsylvania 15086 896 896 S: USA 897 897 898 + N: Ludovic Desroches 899 + E: ludovic.desroches@microchip.com 900 + D: Maintainer for ARM/Microchip (AT91) SoC support 901 + D: Author of ADC, pinctrl, XDMA and SDHCI drivers for this platform 902 + S: France 903 + 898 904 N: Martin Devera 899 905 E: devik@cdi.cz 900 906 W: http://luxik.cdi.cz/~devik/qos/
+33 -17
Documentation/admin-guide/hw-vuln/spectre.rst
···
 Spectre variant 1 attacks take advantage of speculative execution of
 conditional branches, while Spectre variant 2 attacks use speculative
 execution of indirect branches to leak privileged memory.
-See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
-:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
+:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.

 Spectre variant 1 (Bounds Check Bypass)
 ---------------------------------------
···
 steer its indirect branch speculations to gadget code, and measure the
 speculative execution's side effects left in level 1 cache to infer the
 victim's data.
+
+Yet another variant 2 attack vector is for the attacker to poison the
+Branch History Buffer (BHB) to speculatively steer an indirect branch
+to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
+associated with the source address of the indirect branch. Specifically,
+the BHB might be shared across privilege levels even in the presence of
+Enhanced IBRS.
+
+Currently the only known real-world BHB attack vector is via
+unprivileged eBPF. Therefore, it's highly recommended to not enable
+unprivileged eBPF, especially when eIBRS is used (without retpolines).
+For a full mitigation against BHB attacks, it's recommended to use
+retpolines (or eIBRS combined with retpolines).

 Attack scenarios
 ----------------
···
 - Kernel status:

-  ==================================== =================================
-  'Not affected'                       The processor is not vulnerable
-  'Vulnerable'                         Vulnerable, no mitigation
-  'Mitigation: Full generic retpoline' Software-focused mitigation
-  'Mitigation: Full AMD retpoline'     AMD-specific software mitigation
-  'Mitigation: Enhanced IBRS'          Hardware-focused mitigation
-  ==================================== =================================
+  ======================================== =================================
+  'Not affected'                           The processor is not vulnerable
+  'Mitigation: None'                       Vulnerable, no mitigation
+  'Mitigation: Retpolines'                 Use Retpoline thunks
+  'Mitigation: LFENCE'                     Use LFENCE instructions
+  'Mitigation: Enhanced IBRS'              Hardware-focused mitigation
+  'Mitigation: Enhanced IBRS + Retpolines' Hardware-focused + Retpolines
+  'Mitigation: Enhanced IBRS + LFENCE'     Hardware-focused + LFENCE
+  ======================================== =================================

 - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
   used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
···
         Specific mitigations can also be selected manually:

-                retpoline
-                        replace indirect branches
-                retpoline,generic
-                        google's original retpoline
-                retpoline,amd
-                        AMD-specific minimal thunk
+                retpoline               auto pick between generic,lfence
+                retpoline,generic       Retpolines
+                retpoline,lfence        LFENCE; indirect branch
+                retpoline,amd           alias for retpoline,lfence
+                eibrs                   enhanced IBRS
+                eibrs,retpoline         enhanced IBRS + Retpolines
+                eibrs,lfence            enhanced IBRS + LFENCE

         Not specifying this option is equivalent to
         spectre_v2=auto.
···
         spectre_v2=off. Spectre variant 1 mitigations
         cannot be disabled.

-For spectre_v2_user see :doc:`/admin-guide/kernel-parameters`.
+For spectre_v2_user see Documentation/admin-guide/kernel-parameters.txt

 Mitigation selection guide
 --------------------------
···
 .. _spec_ref6:

-[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
+[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.

 ARM white papers:
+6 -2
Documentation/admin-guide/kernel-parameters.txt
··· 5361 5361 Specific mitigations can also be selected manually: 5362 5362 5363 5363 retpoline - replace indirect branches 5364 - retpoline,generic - google's original retpoline 5365 - retpoline,amd - AMD-specific minimal thunk 5364 + retpoline,generic - Retpolines 5365 + retpoline,lfence - LFENCE; indirect branch 5366 + retpoline,amd - alias for retpoline,lfence 5367 + eibrs - enhanced IBRS 5368 + eibrs,retpoline - enhanced IBRS + Retpolines 5369 + eibrs,lfence - enhanced IBRS + LFENCE 5366 5370 5367 5371 Not specifying this option is equivalent to 5368 5372 spectre_v2=auto.
+1 -1
Documentation/admin-guide/mm/pagemap.rst
··· 23 23 * Bit 56 page exclusively mapped (since 4.2) 24 24 * Bit 57 pte is uffd-wp write-protected (since 5.13) (see 25 25 :ref:`Documentation/admin-guide/mm/userfaultfd.rst <userfaultfd>`) 26 - * Bits 57-60 zero 26 + * Bits 58-60 zero 27 27 * Bit 61 page is file-page or shared-anon (since 3.5) 28 28 * Bit 62 page swapped 29 29 * Bit 63 page present
-8
Documentation/core-api/dma-attributes.rst
··· 130 130 subsystem that the buffer is fully accessible at the elevated privilege 131 131 level (and ideally inaccessible or at least read-only at the 132 132 lesser-privileged levels). 133 - 134 - DMA_ATTR_OVERWRITE 135 - ------------------ 136 - 137 - This is a hint to the DMA-mapping subsystem that the device is expected to 138 - overwrite the entire mapped size, thus the caller does not require any of the 139 - previous buffer contents to be preserved. This allows bounce-buffering 140 - implementations to optimise DMA_FROM_DEVICE transfers.
+2 -1
Documentation/devicetree/bindings/arm/atmel-at91.yaml
··· 8 8 9 9 maintainers: 10 10 - Alexandre Belloni <alexandre.belloni@bootlin.com> 11 - - Ludovic Desroches <ludovic.desroches@microchip.com> 11 + - Claudiu Beznea <claudiu.beznea@microchip.com> 12 + - Nicolas Ferre <nicolas.ferre@microchip.com> 12 13 13 14 description: | 14 15 Boards with a SoC of the Atmel AT91 or SMART family shall have the following
+1 -1
Documentation/devicetree/bindings/arm/freescale/fsl,layerscape-dcfg.txt
··· 8 8 - compatible: Should contain a chip-specific compatible string, 9 9 Chip-specific strings are of the form "fsl,<chip>-dcfg", 10 10 The following <chip>s are known to be supported: 11 - ls1012a, ls1021a, ls1043a, ls1046a, ls2080a. 11 + ls1012a, ls1021a, ls1043a, ls1046a, ls2080a, lx2160a 12 12 13 13 - reg : should contain base address and length of DCFG memory-mapped registers 14 14
-6
Documentation/devicetree/bindings/arm/qcom.yaml
··· 48 48 sdx65 49 49 sm7225 50 50 sm8150 51 - sdx65 52 51 sm8250 53 52 sm8350 54 53 sm8450 ··· 221 222 - qcom,sdx55-telit-fn980-tlb 222 223 - qcom,sdx55-t55 223 224 - const: qcom,sdx55 224 - 225 - - items: 226 - - enum: 227 - - qcom,sdx65-mtp 228 - - const: qcom,sdx65 229 225 230 226 - items: 231 227 - enum:
+1
Documentation/devicetree/bindings/clock/qoriq-clock.txt
··· 44 44 * "fsl,ls1046a-clockgen" 45 45 * "fsl,ls1088a-clockgen" 46 46 * "fsl,ls2080a-clockgen" 47 + * "fsl,lx2160a-clockgen" 47 48 Chassis-version clock strings include: 48 49 * "fsl,qoriq-clockgen-1.0": for chassis 1.0 clocks 49 50 * "fsl,qoriq-clockgen-2.0": for chassis 2.0 clocks
+1 -18
Documentation/devicetree/bindings/display/bridge/analogix,anx7625.yaml
··· 91 91 $ref: /schemas/graph.yaml#/$defs/port-base 92 92 unevaluatedProperties: false 93 93 description: 94 - MIPI DSI/DPI input. 95 - 96 - properties: 97 - endpoint: 98 - $ref: /schemas/media/video-interfaces.yaml# 99 - type: object 100 - additionalProperties: false 101 - 102 - properties: 103 - remote-endpoint: true 104 - 105 - bus-type: 106 - enum: [1, 5] 107 - default: 1 108 - 109 - data-lanes: true 94 + Video port for MIPI DSI input. 110 95 111 96 port@1: 112 97 $ref: /schemas/graph.yaml#/properties/port ··· 140 155 reg = <0>; 141 156 anx7625_in: endpoint { 142 157 remote-endpoint = <&mipi_dsi>; 143 - bus-type = <5>; 144 - data-lanes = <0 1 2 3>; 145 158 }; 146 159 }; 147 160
+2 -2
Documentation/devicetree/bindings/mfd/brcm,cru.yaml
··· 39 39 '^phy@[a-f0-9]+$': 40 40 $ref: ../phy/bcm-ns-usb2-phy.yaml 41 41 42 - '^pin-controller@[a-f0-9]+$': 42 + '^pinctrl@[a-f0-9]+$': 43 43 $ref: ../pinctrl/brcm,ns-pinmux.yaml 44 44 45 45 '^syscon@[a-f0-9]+$': ··· 94 94 reg = <0x180 0x4>; 95 95 }; 96 96 97 - pin-controller@1c0 { 97 + pinctrl@1c0 { 98 98 compatible = "brcm,bcm4708-pinmux"; 99 99 reg = <0x1c0 0x24>; 100 100 reg-names = "cru_gpio_control";
+3 -3
Documentation/devicetree/bindings/mfd/cirrus,lochnagar.yaml
··· 126 126 clock-frequency: 127 127 const: 12288000 128 128 129 - lochnagar-pinctrl: 129 + pinctrl: 130 130 type: object 131 131 $ref: /schemas/pinctrl/cirrus,lochnagar.yaml# 132 132 ··· 255 255 - reg 256 256 - reset-gpios 257 257 - lochnagar-clk 258 - - lochnagar-pinctrl 258 + - pinctrl 259 259 260 260 additionalProperties: false 261 261 ··· 293 293 clock-frequency = <32768>; 294 294 }; 295 295 296 - lochnagar-pinctrl { 296 + pinctrl { 297 297 compatible = "cirrus,lochnagar-pinctrl"; 298 298 299 299 gpio-controller;
+7
Documentation/devicetree/bindings/phy/ti,tcan104x-can.yaml
··· 37 37 max bit rate supported in bps 38 38 minimum: 1 39 39 40 + mux-states: 41 + description: 42 + mux controller node to route the signals from controller to 43 + transceiver. 44 + maxItems: 1 45 + 40 46 required: 41 47 - compatible 42 48 - '#phy-cells' ··· 59 53 max-bitrate = <5000000>; 60 54 standby-gpios = <&wakeup_gpio1 16 GPIO_ACTIVE_LOW>; 61 55 enable-gpios = <&main_gpio1 67 GPIO_ACTIVE_HIGH>; 56 + mux-states = <&mux0 1>; 62 57 };
-3
Documentation/devicetree/bindings/pinctrl/cirrus,madera.yaml
··· 107 107 108 108 additionalProperties: false 109 109 110 - allOf: 111 - - $ref: "pinctrl.yaml#" 112 - 113 110 required: 114 111 - pinctrl-0 115 112 - pinctrl-names
+1
Documentation/devicetree/bindings/usb/dwc2.yaml
··· 53 53 - const: st,stm32mp15-hsotg 54 54 - const: snps,dwc2 55 55 - const: samsung,s3c6400-hsotg 56 + - const: intel,socfpga-agilex-hsotg 56 57 57 58 reg: 58 59 maxItems: 1
+25 -26
MAINTAINERS
···
 ARM/Microchip (AT91) SoC support
 M: Nicolas Ferre <nicolas.ferre@microchip.com>
 M: Alexandre Belloni <alexandre.belloni@bootlin.com>
-M: Ludovic Desroches <ludovic.desroches@microchip.com>
+M: Claudiu Beznea <claudiu.beznea@microchip.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Supported
 W: http://www.linux4sam.org
···
 N: rockchip

 ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 R: Alim Akhtar <alim.akhtar@samsung.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-samsung-soc@vger.kernel.org
···
 N: stm

 ARM/Synaptics SoC support
-M: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
+M: Jisheng Zhang <jszhang@kernel.org>
 M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
···
 M: bcm-kernel-feedback-list@broadcom.com
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
-T: git git://github.com/broadcom/cygnus-linux.git
+T: git git://github.com/broadcom/stblinux.git
 F: arch/arm64/boot/dts/broadcom/northstar2/*
 F: arch/arm64/boot/dts/broadcom/stingray/*
 F: drivers/clk/bcm/clk-ns*
···
 L: linuxppc-dev@lists.ozlabs.org
 S: Maintained
 F: drivers/soc/fsl/qe/
-F: include/soc/fsl/*qe*.h
-F: include/soc/fsl/*ucc*.h
+F: include/soc/fsl/qe/

 FREESCALE QUICC ENGINE UCC ETHERNET DRIVER
 M: Li Yang <leoyang.li@nxp.com>
···
 F: Documentation/devicetree/bindings/soc/fsl/
 F: drivers/soc/fsl/
 F: include/linux/fsl/
+F: include/soc/fsl/

 FREESCALE SOC FS_ENET DRIVER
 M: Pantelis Antoniou <pantelis.antoniou@gmail.com>
···
 MAXIM MAX17040 FAMILY FUEL GAUGE DRIVERS
 R: Iskren Chernev <iskren.chernev@gmail.com>
-R: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+R: Krzysztof Kozlowski <krzk@kernel.org>
 R: Marek Szyprowski <m.szyprowski@samsung.com>
 R: Matheus Castello <matheus@castello.eng.br>
 L: linux-pm@vger.kernel.org
···
 MAXIM MAX17042 FAMILY FUEL GAUGE DRIVERS
 R: Hans de Goede <hdegoede@redhat.com>
-R: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+R: Krzysztof Kozlowski <krzk@kernel.org>
 R: Marek Szyprowski <m.szyprowski@samsung.com>
 R: Sebastian Krzyszkowiak <sebastian.krzyszkowiak@puri.sm>
 R: Purism Kernel Team <kernel@puri.sm>
···
 F: drivers/power/supply/max77976_charger.c

 MAXIM MUIC CHARGER DRIVERS FOR EXYNOS BASED BOARDS
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
 L: linux-pm@vger.kernel.org
 S: Supported
···
 MAXIM PMIC AND MUIC DRIVERS FOR EXYNOS BASED BOARDS
 M: Chanwoo Choi <cw00.choi@samsung.com>
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
 L: linux-kernel@vger.kernel.org
 S: Supported
···
 F: mm/memblock.c

 MEMORY CONTROLLER DRIVERS
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 L: linux-kernel@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-mem-ctrl.git
···
 F: net/ipv4/nexthop.c

 NFC SUBSYSTEM
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 L: linux-nfc@lists.01.org (subscribers-only)
 L: netdev@vger.kernel.org
 S: Maintained
···
 NTB AMD DRIVER
 M: Sanjay R Mehta <sanju.mehta@amd.com>
 M: Shyam Sundar S K <Shyam-sundar.S-k@amd.com>
-L: linux-ntb@googlegroups.com
+L: ntb@lists.linux.dev
 S: Supported
 F: drivers/ntb/hw/amd/
···
 M: Jon Mason <jdmason@kudzu.us>
 M: Dave Jiang <dave.jiang@intel.com>
 M: Allen Hubbe <allenbh@gmail.com>
-L: linux-ntb@googlegroups.com
+L: ntb@lists.linux.dev
 S: Supported
 W: https://github.com/jonmason/ntb/wiki
 T: git git://github.com/jonmason/ntb.git
···
 NTB IDT DRIVER
 M: Serge Semin <fancer.lancer@gmail.com>
-L: linux-ntb@googlegroups.com
+L: ntb@lists.linux.dev
 S: Supported
 F: drivers/ntb/hw/idt/

 NTB INTEL DRIVER
 M: Dave Jiang <dave.jiang@intel.com>
-L: linux-ntb@googlegroups.com
+L: ntb@lists.linux.dev
 S: Supported
 W: https://github.com/davejiang/linux/wiki
 T: git https://github.com/davejiang/linux.git
···
 F: drivers/regulator/pf8x00-regulator.c

 NXP PTN5150A CC LOGIC AND EXTCON DRIVER
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 L: linux-kernel@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/extcon/extcon-ptn5150.yaml
···
 PIN CONTROLLER - SAMSUNG
 M: Tomasz Figa <tomasz.figa@gmail.com>
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Sylwester Nawrocki <s.nawrocki@samsung.com>
 R: Alim Akhtar <alim.akhtar@samsung.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F: drivers/s390/scsi/zfcp_*

 S3C ADC BATTERY DRIVER
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 L: linux-samsung-soc@vger.kernel.org
 S: Odd Fixes
 F: drivers/power/supply/s3c_adc_battery.c
···
 F: security/safesetid/

 SAMSUNG AUDIO (ASoC) DRIVERS
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Sylwester Nawrocki <s.nawrocki@samsung.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
 S: Supported
···
 F: sound/soc/samsung/

 SAMSUNG EXYNOS PSEUDO RANDOM NUMBER GENERATOR (RNG) DRIVER
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 L: linux-crypto@vger.kernel.org
 L: linux-samsung-soc@vger.kernel.org
 S: Maintained
···
 F: drivers/platform/x86/samsung-laptop.c

 SAMSUNG MULTIFUNCTION PMIC DEVICE DRIVERS
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
 L: linux-kernel@vger.kernel.org
 L: linux-samsung-soc@vger.kernel.org
···
 F: include/media/drv-intf/s3c_camif.h

 SAMSUNG S3FWRN5 NFC DRIVER
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Krzysztof Opasiak <k.opasiak@samsung.com>
 L: linux-nfc@lists.01.org (subscribers-only)
 S: Maintained
···
 F: drivers/media/i2c/s5k5baf.c

 SAMSUNG S5P Security SubSystem (SSS) DRIVER
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Vladimir Zapolskiy <vz@mleia.com>
 L: linux-crypto@vger.kernel.org
 L: linux-samsung-soc@vger.kernel.org
···
 F: include/linux/platform_data/clk-s3c2410.h

 SAMSUNG SPI DRIVERS
-M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
+M: Krzysztof Kozlowski <krzk@kernel.org>
 M: Andi Shyti <andi@etezian.org>
 L: linux-spi@vger.kernel.org
 L: linux-samsung-soc@vger.kernel.org
···
 M: Linus Torvalds <torvalds@linux-foundation.org>
 L: linux-kernel@vger.kernel.org
 S: Buried alive in reporters
-Q: http://patchwork.kernel.org/project/LKML/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 F: *
 F: */
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 17 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc8 6 6 NAME = Superb Owl 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arm/boot/dts/aspeed-g6-pinctrl.dtsi
··· 118 118 }; 119 119 120 120 pinctrl_fwqspid_default: fwqspid_default { 121 - function = "FWQSPID"; 121 + function = "FWSPID"; 122 122 groups = "FWQSPID"; 123 123 }; 124 124
+1
arch/arm/boot/dts/bcm2711.dtsi
··· 290 290 291 291 hvs: hvs@7e400000 { 292 292 compatible = "brcm,bcm2711-hvs"; 293 + reg = <0x7e400000 0x8000>; 293 294 interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>; 294 295 }; 295 296
+18
arch/arm/boot/dts/omap3-devkit8000-common.dtsi
··· 158 158 status = "disabled"; 159 159 }; 160 160 161 + /* Unusable as clockevent because if unreliable oscillator, allow to idle */ 162 + &timer1_target { 163 + /delete-property/ti,no-reset-on-init; 164 + /delete-property/ti,no-idle; 165 + timer@0 { 166 + /delete-property/ti,timer-alwon; 167 + }; 168 + }; 169 + 170 + /* Preferred timer for clockevent */ 171 + &timer12_target { 172 + ti,no-reset-on-init; 173 + ti,no-idle; 174 + timer@0 { 175 + /* Always clocked by secure_32k_fck */ 176 + }; 177 + }; 178 + 161 179 &twl_gpio { 162 180 ti,use-leds; 163 181 /*
-33
arch/arm/boot/dts/omap3-devkit8000.dts
··· 14 14 display2 = &tv0; 15 15 }; 16 16 }; 17 - 18 - /* Unusable as clocksource because of unreliable oscillator */ 19 - &counter32k { 20 - status = "disabled"; 21 - }; 22 - 23 - /* Unusable as clockevent because if unreliable oscillator, allow to idle */ 24 - &timer1_target { 25 - /delete-property/ti,no-reset-on-init; 26 - /delete-property/ti,no-idle; 27 - timer@0 { 28 - /delete-property/ti,timer-alwon; 29 - }; 30 - }; 31 - 32 - /* Preferred always-on timer for clocksource */ 33 - &timer12_target { 34 - ti,no-reset-on-init; 35 - ti,no-idle; 36 - timer@0 { 37 - /* Always clocked by secure_32k_fck */ 38 - }; 39 - }; 40 - 41 - /* Preferred timer for clockevent */ 42 - &timer2_target { 43 - ti,no-reset-on-init; 44 - ti,no-idle; 45 - timer@0 { 46 - assigned-clocks = <&gpt2_fck>; 47 - assigned-clock-parents = <&sys_ck>; 48 - }; 49 - };
+2 -2
arch/arm/boot/dts/rk322x.dtsi
··· 718 718 interrupts = <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>; 719 719 assigned-clocks = <&cru SCLK_HDMI_PHY>; 720 720 assigned-clock-parents = <&hdmi_phy>; 721 - clocks = <&cru SCLK_HDMI_HDCP>, <&cru PCLK_HDMI_CTRL>, <&cru SCLK_HDMI_CEC>; 722 - clock-names = "isfr", "iahb", "cec"; 721 + clocks = <&cru PCLK_HDMI_CTRL>, <&cru SCLK_HDMI_HDCP>, <&cru SCLK_HDMI_CEC>; 722 + clock-names = "iahb", "isfr", "cec"; 723 723 pinctrl-names = "default"; 724 724 pinctrl-0 = <&hdmii2c_xfer &hdmi_hpd &hdmi_cec>; 725 725 resets = <&cru SRST_HDMI_P>;
+1 -1
arch/arm/boot/dts/rk3288.dtsi
··· 971 971 status = "disabled"; 972 972 }; 973 973 974 - crypto: cypto-controller@ff8a0000 { 974 + crypto: crypto@ff8a0000 { 975 975 compatible = "rockchip,rk3288-crypto"; 976 976 reg = <0x0 0xff8a0000 0x0 0x4000>; 977 977 interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+8 -2
arch/arm/boot/dts/tegra124-nyan-big-fhd.dts
··· 5 5 6 6 / { 7 7 /* Version of Nyan Big with 1080p panel */ 8 - panel { 9 - compatible = "auo,b133htn01"; 8 + host1x@50000000 { 9 + dpaux@545c0000 { 10 + aux-bus { 11 + panel: panel { 12 + compatible = "auo,b133htn01"; 13 + }; 14 + }; 15 + }; 10 16 }; 11 17 };
+9 -6
arch/arm/boot/dts/tegra124-nyan-big.dts
··· 13 13 "google,nyan-big-rev1", "google,nyan-big-rev0", 14 14 "google,nyan-big", "google,nyan", "nvidia,tegra124"; 15 15 16 - panel: panel { 17 - compatible = "auo,b133xtn01"; 18 - 19 - power-supply = <&vdd_3v3_panel>; 20 - backlight = <&backlight>; 21 - ddc-i2c-bus = <&dpaux>; 16 + host1x@50000000 { 17 + dpaux@545c0000 { 18 + aux-bus { 19 + panel: panel { 20 + compatible = "auo,b133xtn01"; 21 + backlight = <&backlight>; 22 + }; 23 + }; 24 + }; 22 25 }; 23 26 24 27 mmc@700b0400 { /* SD Card on this bus */
+9 -6
arch/arm/boot/dts/tegra124-nyan-blaze.dts
··· 15 15 "google,nyan-blaze-rev0", "google,nyan-blaze", 16 16 "google,nyan", "nvidia,tegra124"; 17 17 18 - panel: panel { 19 - compatible = "samsung,ltn140at29-301"; 20 - 21 - power-supply = <&vdd_3v3_panel>; 22 - backlight = <&backlight>; 23 - ddc-i2c-bus = <&dpaux>; 18 + host1x@50000000 { 19 + dpaux@545c0000 { 20 + aux-bus { 21 + panel: panel { 22 + compatible = "samsung,ltn140at29-301"; 23 + backlight = <&backlight>; 24 + }; 25 + }; 26 + }; 24 27 }; 25 28 26 29 sound {
+7 -7
arch/arm/boot/dts/tegra124-venice2.dts
··· 48 48 dpaux@545c0000 { 49 49 vdd-supply = <&vdd_3v3_panel>; 50 50 status = "okay"; 51 + 52 + aux-bus { 53 + panel: panel { 54 + compatible = "lg,lp129qe"; 55 + backlight = <&backlight>; 56 + }; 57 + }; 51 58 }; 52 59 }; 53 60 ··· 1085 1078 debounce-interval = <10>; 1086 1079 wakeup-source; 1087 1080 }; 1088 - }; 1089 - 1090 - panel: panel { 1091 - compatible = "lg,lp129qe"; 1092 - power-supply = <&vdd_3v3_panel>; 1093 - backlight = <&backlight>; 1094 - ddc-i2c-bus = <&dpaux>; 1095 1081 }; 1096 1082 1097 1083 vdd_mux: regulator-mux {
+10
arch/arm/include/asm/assembler.h
··· 107 107 .endm 108 108 #endif 109 109 110 + #if __LINUX_ARM_ARCH__ < 7 111 + .macro dsb, args 112 + mcr p15, 0, r0, c7, c10, 4 113 + .endm 114 + 115 + .macro isb, args 116 + mcr p15, 0, r0, c7, c5, 4 117 + .endm 118 + #endif 119 + 110 120 .macro asm_trace_hardirqs_off, save=1 111 121 #if defined(CONFIG_TRACE_IRQFLAGS) 112 122 .if \save
+38
arch/arm/include/asm/spectre.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __ASM_SPECTRE_H 4 + #define __ASM_SPECTRE_H 5 + 6 + enum { 7 + SPECTRE_UNAFFECTED, 8 + SPECTRE_MITIGATED, 9 + SPECTRE_VULNERABLE, 10 + }; 11 + 12 + enum { 13 + __SPECTRE_V2_METHOD_BPIALL, 14 + __SPECTRE_V2_METHOD_ICIALLU, 15 + __SPECTRE_V2_METHOD_SMC, 16 + __SPECTRE_V2_METHOD_HVC, 17 + __SPECTRE_V2_METHOD_LOOP8, 18 + }; 19 + 20 + enum { 21 + SPECTRE_V2_METHOD_BPIALL = BIT(__SPECTRE_V2_METHOD_BPIALL), 22 + SPECTRE_V2_METHOD_ICIALLU = BIT(__SPECTRE_V2_METHOD_ICIALLU), 23 + SPECTRE_V2_METHOD_SMC = BIT(__SPECTRE_V2_METHOD_SMC), 24 + SPECTRE_V2_METHOD_HVC = BIT(__SPECTRE_V2_METHOD_HVC), 25 + SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8), 26 + }; 27 + 28 + #ifdef CONFIG_GENERIC_CPU_VULNERABILITIES 29 + void spectre_v2_update_state(unsigned int state, unsigned int methods); 30 + #else 31 + static inline void spectre_v2_update_state(unsigned int state, 32 + unsigned int methods) 33 + {} 34 + #endif 35 + 36 + int spectre_bhb_update_vectors(unsigned int method); 37 + 38 + #endif
+34 -9
arch/arm/include/asm/vmlinux.lds.h
··· 26 26 #define ARM_MMU_DISCARD(x) x 27 27 #endif 28 28 29 + /* 30 + * ld.lld does not support NOCROSSREFS: 31 + * https://github.com/ClangBuiltLinux/linux/issues/1609 32 + */ 33 + #ifdef CONFIG_LD_IS_LLD 34 + #define NOCROSSREFS 35 + #endif 36 + 37 + /* Set start/end symbol names to the LMA for the section */ 38 + #define ARM_LMA(sym, section) \ 39 + sym##_start = LOADADDR(section); \ 40 + sym##_end = LOADADDR(section) + SIZEOF(section) 41 + 29 42 #define PROC_INFO \ 30 43 . = ALIGN(4); \ 31 44 __proc_info_begin = .; \ ··· 123 110 * only thing that matters is their relative offsets 124 111 */ 125 112 #define ARM_VECTORS \ 126 - __vectors_start = .; \ 127 - .vectors 0xffff0000 : AT(__vectors_start) { \ 128 - *(.vectors) \ 113 + __vectors_lma = .; \ 114 + OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) { \ 115 + .vectors { \ 116 + *(.vectors) \ 117 + } \ 118 + .vectors.bhb.loop8 { \ 119 + *(.vectors.bhb.loop8) \ 120 + } \ 121 + .vectors.bhb.bpiall { \ 122 + *(.vectors.bhb.bpiall) \ 123 + } \ 129 124 } \ 130 - . = __vectors_start + SIZEOF(.vectors); \ 131 - __vectors_end = .; \ 125 + ARM_LMA(__vectors, .vectors); \ 126 + ARM_LMA(__vectors_bhb_loop8, .vectors.bhb.loop8); \ 127 + ARM_LMA(__vectors_bhb_bpiall, .vectors.bhb.bpiall); \ 128 + . = __vectors_lma + SIZEOF(.vectors) + \ 129 + SIZEOF(.vectors.bhb.loop8) + \ 130 + SIZEOF(.vectors.bhb.bpiall); \ 132 131 \ 133 - __stubs_start = .; \ 134 - .stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) { \ 132 + __stubs_lma = .; \ 133 + .stubs ADDR(.vectors) + 0x1000 : AT(__stubs_lma) { \ 135 134 *(.stubs) \ 136 135 } \ 137 - . = __stubs_start + SIZEOF(.stubs); \ 138 - __stubs_end = .; \ 136 + ARM_LMA(__stubs, .stubs); \ 137 + . = __stubs_lma + SIZEOF(.stubs); \ 139 138 \ 140 139 PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors)); 141 140
+2
arch/arm/kernel/Makefile
··· 106 106 107 107 obj-$(CONFIG_HAVE_ARM_SMCCC) += smccc-call.o 108 108 109 + obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += spectre.o 110 + 109 111 extra-y := $(head-y) vmlinux.lds
+73 -6
arch/arm/kernel/entry-armv.S
···
 	sub	lr, lr, #\correction
 	.endif

-	@
-	@ Save r0, lr_<exception> (parent PC) and spsr_<exception>
-	@ (parent CPSR)
-	@
+	@ Save r0, lr_<exception> (parent PC)
 	stmia	sp, {r0, lr}		@ save r0, lr
-	mrs	lr, spsr
+
+	@ Save spsr_<exception> (parent CPSR)
+2:	mrs	lr, spsr
 	str	lr, [sp, #8]		@ save spsr
···
 	movs	pc, lr			@ branch to handler in SVC mode
 ENDPROC(vector_\name)

+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+	.subsection 1
+	.align 5
+vector_bhb_loop8_\name:
+	.if \correction
+	sub	lr, lr, #\correction
+	.endif
+
+	@ Save r0, lr_<exception> (parent PC)
+	stmia	sp, {r0, lr}
+
+	@ bhb workaround
+	mov	r0, #8
+3:	b	. + 4
+	subs	r0, r0, #1
+	bne	3b
+	dsb
+	isb
+	b	2b
+ENDPROC(vector_bhb_loop8_\name)
+
+vector_bhb_bpiall_\name:
+	.if \correction
+	sub	lr, lr, #\correction
+	.endif
+
+	@ Save r0, lr_<exception> (parent PC)
+	stmia	sp, {r0, lr}
+
+	@ bhb workaround
+	mcr	p15, 0, r0, c7, c5, 6	@ BPIALL
+	@ isb not needed due to "movs pc, lr" in the vector stub
+	@ which gives a "context synchronisation".
+	b	2b
+ENDPROC(vector_bhb_bpiall_\name)
+	.previous
+#endif
+
 	.align	2
 	@ handler addresses follow this label
 1:
···
 	.section .stubs, "ax", %progbits
 	@ This must be the first word
 	.word vector_swi
+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+	.word vector_bhb_loop8_swi
+	.word vector_bhb_bpiall_swi
+#endif

 vector_rst:
  ARM(	swi	SYS_ERROR0	)
···
 * FIQ "NMI" handler
 *-----------------------------------------------------------------------------
 * Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86
- * systems.
+ * systems. This must be the last vector stub, so lets place it in its own
+ * subsection.
 */
+	.subsection 2
 	vector_stub	fiq, FIQ_MODE, 4

 	.long	__fiq_usr			@  0  (USR_26 / USR_32)
···
 	W(b)	vector_addrexcptn
 	W(b)	vector_irq
 	W(b)	vector_fiq
+
+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+	.section .vectors.bhb.loop8, "ax", %progbits
+.L__vectors_bhb_loop8_start:
+	W(b)	vector_rst
+	W(b)	vector_bhb_loop8_und
+	W(ldr)	pc, .L__vectors_bhb_loop8_start + 0x1004
+	W(b)	vector_bhb_loop8_pabt
+	W(b)	vector_bhb_loop8_dabt
+	W(b)	vector_addrexcptn
+	W(b)	vector_bhb_loop8_irq
+	W(b)	vector_bhb_loop8_fiq
+
+	.section .vectors.bhb.bpiall, "ax", %progbits
+.L__vectors_bhb_bpiall_start:
+	W(b)	vector_rst
+	W(b)	vector_bhb_bpiall_und
+	W(ldr)	pc, .L__vectors_bhb_bpiall_start + 0x1008
+	W(b)	vector_bhb_bpiall_pabt
+	W(b)	vector_bhb_bpiall_dabt
+	W(b)	vector_addrexcptn
+	W(b)	vector_bhb_bpiall_irq
+	W(b)	vector_bhb_bpiall_fiq
+#endif

 	.data
 	.align	2
+24
arch/arm/kernel/entry-common.S
··· 154 154 */ 155 155 156 156 .align 5 157 + #ifdef CONFIG_HARDEN_BRANCH_HISTORY 158 + ENTRY(vector_bhb_loop8_swi) 159 + sub sp, sp, #PT_REGS_SIZE 160 + stmia sp, {r0 - r12} 161 + mov r8, #8 162 + 1: b 2f 163 + 2: subs r8, r8, #1 164 + bne 1b 165 + dsb 166 + isb 167 + b 3f 168 + ENDPROC(vector_bhb_loop8_swi) 169 + 170 + .align 5 171 + ENTRY(vector_bhb_bpiall_swi) 172 + sub sp, sp, #PT_REGS_SIZE 173 + stmia sp, {r0 - r12} 174 + mcr p15, 0, r8, c7, c5, 6 @ BPIALL 175 + isb 176 + b 3f 177 + ENDPROC(vector_bhb_bpiall_swi) 178 + #endif 179 + .align 5 157 180 ENTRY(vector_swi) 158 181 #ifdef CONFIG_CPU_V7M 159 182 v7m_exception_entry 160 183 #else 161 184 sub sp, sp, #PT_REGS_SIZE 162 185 stmia sp, {r0 - r12} @ Calling r0 - r12 186 + 3: 163 187 ARM( add r8, sp, #S_PC ) 164 188 ARM( stmdb r8, {sp, lr}^ ) @ Calling sp, lr 165 189 THUMB( mov r8, sp )
+28 -8
arch/arm/kernel/kgdb.c
··· 154 154 return 0; 155 155 } 156 156 157 - static struct undef_hook kgdb_brkpt_hook = { 157 + static struct undef_hook kgdb_brkpt_arm_hook = { 158 158 .instr_mask = 0xffffffff, 159 159 .instr_val = KGDB_BREAKINST, 160 - .cpsr_mask = MODE_MASK, 160 + .cpsr_mask = PSR_T_BIT | MODE_MASK, 161 161 .cpsr_val = SVC_MODE, 162 162 .fn = kgdb_brk_fn 163 163 }; 164 164 165 - static struct undef_hook kgdb_compiled_brkpt_hook = { 165 + static struct undef_hook kgdb_brkpt_thumb_hook = { 166 + .instr_mask = 0xffff, 167 + .instr_val = KGDB_BREAKINST & 0xffff, 168 + .cpsr_mask = PSR_T_BIT | MODE_MASK, 169 + .cpsr_val = PSR_T_BIT | SVC_MODE, 170 + .fn = kgdb_brk_fn 171 + }; 172 + 173 + static struct undef_hook kgdb_compiled_brkpt_arm_hook = { 166 174 .instr_mask = 0xffffffff, 167 175 .instr_val = KGDB_COMPILED_BREAK, 168 - .cpsr_mask = MODE_MASK, 176 + .cpsr_mask = PSR_T_BIT | MODE_MASK, 169 177 .cpsr_val = SVC_MODE, 178 + .fn = kgdb_compiled_brk_fn 179 + }; 180 + 181 + static struct undef_hook kgdb_compiled_brkpt_thumb_hook = { 182 + .instr_mask = 0xffff, 183 + .instr_val = KGDB_COMPILED_BREAK & 0xffff, 184 + .cpsr_mask = PSR_T_BIT | MODE_MASK, 185 + .cpsr_val = PSR_T_BIT | SVC_MODE, 170 186 .fn = kgdb_compiled_brk_fn 171 187 }; 172 188 ··· 226 210 if (ret != 0) 227 211 return ret; 228 212 229 - register_undef_hook(&kgdb_brkpt_hook); 230 - register_undef_hook(&kgdb_compiled_brkpt_hook); 213 + register_undef_hook(&kgdb_brkpt_arm_hook); 214 + register_undef_hook(&kgdb_brkpt_thumb_hook); 215 + register_undef_hook(&kgdb_compiled_brkpt_arm_hook); 216 + register_undef_hook(&kgdb_compiled_brkpt_thumb_hook); 231 217 232 218 return 0; 233 219 } ··· 242 224 */ 243 225 void kgdb_arch_exit(void) 244 226 { 245 - unregister_undef_hook(&kgdb_brkpt_hook); 246 - unregister_undef_hook(&kgdb_compiled_brkpt_hook); 227 + unregister_undef_hook(&kgdb_brkpt_arm_hook); 228 + unregister_undef_hook(&kgdb_brkpt_thumb_hook); 229 + unregister_undef_hook(&kgdb_compiled_brkpt_arm_hook); 230 + unregister_undef_hook(&kgdb_compiled_brkpt_thumb_hook); 247 231 unregister_die_notifier(&kgdb_notifier); 248 232 } 249 233
+71
arch/arm/kernel/spectre.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/bpf.h> 3 + #include <linux/cpu.h> 4 + #include <linux/device.h> 5 + 6 + #include <asm/spectre.h> 7 + 8 + static bool _unprivileged_ebpf_enabled(void) 9 + { 10 + #ifdef CONFIG_BPF_SYSCALL 11 + return !sysctl_unprivileged_bpf_disabled; 12 + #else 13 + return false; 14 + #endif 15 + } 16 + 17 + ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, 18 + char *buf) 19 + { 20 + return sprintf(buf, "Mitigation: __user pointer sanitization\n"); 21 + } 22 + 23 + static unsigned int spectre_v2_state; 24 + static unsigned int spectre_v2_methods; 25 + 26 + void spectre_v2_update_state(unsigned int state, unsigned int method) 27 + { 28 + if (state > spectre_v2_state) 29 + spectre_v2_state = state; 30 + spectre_v2_methods |= method; 31 + } 32 + 33 + ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, 34 + char *buf) 35 + { 36 + const char *method; 37 + 38 + if (spectre_v2_state == SPECTRE_UNAFFECTED) 39 + return sprintf(buf, "%s\n", "Not affected"); 40 + 41 + if (spectre_v2_state != SPECTRE_MITIGATED) 42 + return sprintf(buf, "%s\n", "Vulnerable"); 43 + 44 + if (_unprivileged_ebpf_enabled()) 45 + return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n"); 46 + 47 + switch (spectre_v2_methods) { 48 + case SPECTRE_V2_METHOD_BPIALL: 49 + method = "Branch predictor hardening"; 50 + break; 51 + 52 + case SPECTRE_V2_METHOD_ICIALLU: 53 + method = "I-cache invalidation"; 54 + break; 55 + 56 + case SPECTRE_V2_METHOD_SMC: 57 + case SPECTRE_V2_METHOD_HVC: 58 + method = "Firmware call"; 59 + break; 60 + 61 + case SPECTRE_V2_METHOD_LOOP8: 62 + method = "History overwrite"; 63 + break; 64 + 65 + default: 66 + method = "Multiple mitigations"; 67 + break; 68 + } 69 + 70 + return sprintf(buf, "Mitigation: %s\n", method); 71 + }
+59 -6
arch/arm/kernel/traps.c
··· 30 30 #include <linux/atomic.h> 31 31 #include <asm/cacheflush.h> 32 32 #include <asm/exception.h> 33 + #include <asm/spectre.h> 33 34 #include <asm/unistd.h> 34 35 #include <asm/traps.h> 35 36 #include <asm/ptrace.h> ··· 790 789 } 791 790 #endif 792 791 793 + #ifndef CONFIG_CPU_V7M 794 + static void copy_from_lma(void *vma, void *lma_start, void *lma_end) 795 + { 796 + memcpy(vma, lma_start, lma_end - lma_start); 797 + } 798 + 799 + static void flush_vectors(void *vma, size_t offset, size_t size) 800 + { 801 + unsigned long start = (unsigned long)vma + offset; 802 + unsigned long end = start + size; 803 + 804 + flush_icache_range(start, end); 805 + } 806 + 807 + #ifdef CONFIG_HARDEN_BRANCH_HISTORY 808 + int spectre_bhb_update_vectors(unsigned int method) 809 + { 810 + extern char __vectors_bhb_bpiall_start[], __vectors_bhb_bpiall_end[]; 811 + extern char __vectors_bhb_loop8_start[], __vectors_bhb_loop8_end[]; 812 + void *vec_start, *vec_end; 813 + 814 + if (system_state >= SYSTEM_FREEING_INITMEM) { 815 + pr_err("CPU%u: Spectre BHB workaround too late - system vulnerable\n", 816 + smp_processor_id()); 817 + return SPECTRE_VULNERABLE; 818 + } 819 + 820 + switch (method) { 821 + case SPECTRE_V2_METHOD_LOOP8: 822 + vec_start = __vectors_bhb_loop8_start; 823 + vec_end = __vectors_bhb_loop8_end; 824 + break; 825 + 826 + case SPECTRE_V2_METHOD_BPIALL: 827 + vec_start = __vectors_bhb_bpiall_start; 828 + vec_end = __vectors_bhb_bpiall_end; 829 + break; 830 + 831 + default: 832 + pr_err("CPU%u: unknown Spectre BHB state %d\n", 833 + smp_processor_id(), method); 834 + return SPECTRE_VULNERABLE; 835 + } 836 + 837 + copy_from_lma(vectors_page, vec_start, vec_end); 838 + flush_vectors(vectors_page, 0, vec_end - vec_start); 839 + 840 + return SPECTRE_MITIGATED; 841 + } 842 + #endif 843 + 793 843 void __init early_trap_init(void *vectors_base) 794 844 { 795 - #ifndef CONFIG_CPU_V7M 796 - unsigned long vectors = (unsigned long)vectors_base; 797 845 extern char __stubs_start[], __stubs_end[]; 798 846 extern char __vectors_start[], __vectors_end[]; 799 847 unsigned i; ··· 863 813 * into the vector page, mapped at 0xffff0000, and ensure these 864 814 * are visible to the instruction stream. 865 815 */ 866 - memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start); 867 - memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start); 816 + copy_from_lma(vectors_base, __vectors_start, __vectors_end); 817 + copy_from_lma(vectors_base + 0x1000, __stubs_start, __stubs_end); 868 818 869 819 kuser_init(vectors_base); 870 820 871 - flush_icache_range(vectors, vectors + PAGE_SIZE * 2); 821 + flush_vectors(vectors_base, 0, PAGE_SIZE * 2); 822 + } 872 823 #else /* ifndef CONFIG_CPU_V7M */ 824 + void __init early_trap_init(void *vectors_base) 825 + { 873 826 /* 874 827 * on V7-M there is no need to copy the vector table to a dedicated 875 828 * memory area. The address is configurable and so a table in the kernel 876 829 * image can be used. 877 830 */ 878 - #endif 879 831 } 832 + #endif
+1
arch/arm/mach-mstar/Kconfig
··· 3 3 depends on ARCH_MULTI_V7 4 4 select ARM_GIC 5 5 select ARM_HEAVY_MB 6 + select HAVE_ARM_ARCH_TIMER 6 7 select MST_IRQ 7 8 select MSTAR_MSC313_MPLL 8 9 help
+11
arch/arm/mm/Kconfig
··· 830 830 831 831 config CPU_SPECTRE 832 832 bool 833 + select GENERIC_CPU_VULNERABILITIES 833 834 834 835 config HARDEN_BRANCH_PREDICTOR 835 836 bool "Harden the branch predictor against aliasing attacks" if EXPERT ··· 850 849 the system firmware. 851 850 852 851 If unsure, say Y. 852 + 853 + config HARDEN_BRANCH_HISTORY 854 + bool "Harden Spectre style attacks against branch history" if EXPERT 855 + depends on CPU_SPECTRE 856 + default y 857 + help 858 + Speculation attacks against some high-performance processors can 859 + make use of branch history to influence future speculation. When 860 + taking an exception, a sequence of branches overwrites the branch 861 + history, or branch history is invalidated. 853 862 854 863 config TLS_REG_EMUL 855 864 bool
+2
arch/arm/mm/mmu.c
··· 212 212 static int __init early_cachepolicy(char *p) 213 213 { 214 214 pr_warn("cachepolicy kernel parameter not supported without cp15\n"); 215 + return 0; 215 216 } 216 217 early_param("cachepolicy", early_cachepolicy); 217 218 218 219 static int __init noalign_setup(char *__unused) 219 220 { 220 221 pr_warn("noalign kernel parameter not supported without cp15\n"); 222 + return 1; 221 223 } 222 224 __setup("noalign", noalign_setup); 223 225
+175 -35
arch/arm/mm/proc-v7-bugs.c
··· 6 6 #include <asm/cp15.h> 7 7 #include <asm/cputype.h> 8 8 #include <asm/proc-fns.h> 9 + #include <asm/spectre.h> 9 10 #include <asm/system_misc.h> 11 + 12 + #ifdef CONFIG_ARM_PSCI 13 + static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void) 14 + { 15 + struct arm_smccc_res res; 16 + 17 + arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, 18 + ARM_SMCCC_ARCH_WORKAROUND_1, &res); 19 + 20 + switch ((int)res.a0) { 21 + case SMCCC_RET_SUCCESS: 22 + return SPECTRE_MITIGATED; 23 + 24 + case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED: 25 + return SPECTRE_UNAFFECTED; 26 + 27 + default: 28 + return SPECTRE_VULNERABLE; 29 + } 30 + } 31 + #else 32 + static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void) 33 + { 34 + return SPECTRE_VULNERABLE; 35 + } 36 + #endif 10 37 11 38 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR 12 39 DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn); ··· 63 36 arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL); 64 37 } 65 38 66 - static void cpu_v7_spectre_init(void) 39 + static unsigned int spectre_v2_install_workaround(unsigned int method) 67 40 { 68 41 const char *spectre_v2_method = NULL; 69 42 int cpu = smp_processor_id(); 70 43 71 44 if (per_cpu(harden_branch_predictor_fn, cpu)) 72 - return; 45 + return SPECTRE_MITIGATED; 46 + 47 + switch (method) { 48 + case SPECTRE_V2_METHOD_BPIALL: 49 + per_cpu(harden_branch_predictor_fn, cpu) = 50 + harden_branch_predictor_bpiall; 51 + spectre_v2_method = "BPIALL"; 52 + break; 53 + 54 + case SPECTRE_V2_METHOD_ICIALLU: 55 + per_cpu(harden_branch_predictor_fn, cpu) = 56 + harden_branch_predictor_iciallu; 57 + spectre_v2_method = "ICIALLU"; 58 + break; 59 + 60 + case SPECTRE_V2_METHOD_HVC: 61 + per_cpu(harden_branch_predictor_fn, cpu) = 62 + call_hvc_arch_workaround_1; 63 + cpu_do_switch_mm = cpu_v7_hvc_switch_mm; 64 + spectre_v2_method = "hypervisor"; 65 + break; 66 + 67 + case SPECTRE_V2_METHOD_SMC: 68 + per_cpu(harden_branch_predictor_fn, cpu) = 69 + call_smc_arch_workaround_1; 70 + cpu_do_switch_mm = cpu_v7_smc_switch_mm; 71 + spectre_v2_method = "firmware"; 72 + break; 73 + } 74 + 75 + if (spectre_v2_method) 76 + pr_info("CPU%u: Spectre v2: using %s workaround\n", 77 + smp_processor_id(), spectre_v2_method); 78 + 79 + return SPECTRE_MITIGATED; 80 + } 81 + #else 82 + static unsigned int spectre_v2_install_workaround(unsigned int method) 83 + { 84 + pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n", 85 + smp_processor_id()); 86 + 87 + return SPECTRE_VULNERABLE; 88 + } 89 + #endif 90 + 91 + static void cpu_v7_spectre_v2_init(void) 92 + { 93 + unsigned int state, method = 0; 73 94 74 95 switch (read_cpuid_part()) { 75 96 case ARM_CPU_PART_CORTEX_A8: ··· 126 51 case ARM_CPU_PART_CORTEX_A17: 127 52 case ARM_CPU_PART_CORTEX_A73: 128 53 case ARM_CPU_PART_CORTEX_A75: 129 - per_cpu(harden_branch_predictor_fn, cpu) = 130 - harden_branch_predictor_bpiall; 131 - spectre_v2_method = "BPIALL"; 54 + state = SPECTRE_MITIGATED; 55 + method = SPECTRE_V2_METHOD_BPIALL; 132 56 break; 133 57 134 58 case ARM_CPU_PART_CORTEX_A15: 135 59 case ARM_CPU_PART_BRAHMA_B15: 136 - per_cpu(harden_branch_predictor_fn, cpu) = 137 - harden_branch_predictor_iciallu; 138 - spectre_v2_method = "ICIALLU"; 60 + state = SPECTRE_MITIGATED; 61 + method = SPECTRE_V2_METHOD_ICIALLU; 139 62 break; 140 63 141 - #ifdef CONFIG_ARM_PSCI 142 64 case ARM_CPU_PART_BRAHMA_B53: 143 65 /* Requires no workaround */ 66 + state = SPECTRE_UNAFFECTED; 144 67 break; 68 + 145 69 default: 146 70 /* Other ARM CPUs require no workaround */ 147 - if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) 71 + if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) { 72 + state = SPECTRE_UNAFFECTED; 148 73 break; 149 - fallthrough; 150 - /* Cortex A57/A72 require firmware workaround */ 151 - case ARM_CPU_PART_CORTEX_A57: 152 - case ARM_CPU_PART_CORTEX_A72: { 153 - struct arm_smccc_res res; 74 + } 154 75 155 - arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, 156 - ARM_SMCCC_ARCH_WORKAROUND_1, &res); 157 - if ((int)res.a0 != 0) 158 - return; 76 + fallthrough; 77 + 78 + /* Cortex A57/A72 require firmware workaround */ 79 + case ARM_CPU_PART_CORTEX_A57: 80 + case ARM_CPU_PART_CORTEX_A72: 81 + state = spectre_v2_get_cpu_fw_mitigation_state(); 82 + if (state != SPECTRE_MITIGATED) 83 + break; 159 84 160 85 switch (arm_smccc_1_1_get_conduit()) { 161 86 case SMCCC_CONDUIT_HVC: 162 - per_cpu(harden_branch_predictor_fn, cpu) = 163 - call_hvc_arch_workaround_1; 164 - cpu_do_switch_mm = cpu_v7_hvc_switch_mm; 165 - spectre_v2_method = "hypervisor"; 87 + method = SPECTRE_V2_METHOD_HVC; 166 88 break; 167 89 168 90 case SMCCC_CONDUIT_SMC: 169 - per_cpu(harden_branch_predictor_fn, cpu) = 170 - call_smc_arch_workaround_1; 171 - cpu_do_switch_mm = cpu_v7_smc_switch_mm; 172 - spectre_v2_method = "firmware"; 91 + method = SPECTRE_V2_METHOD_SMC; 173 92 break; 174 93 175 94 default: 95 + state = SPECTRE_VULNERABLE; 176 96 break; 177 97 } 178 98 } 179 - #endif 99 + 100 + if (state == SPECTRE_MITIGATED) 101 + state = spectre_v2_install_workaround(method); 102 + 103 + spectre_v2_update_state(state, method); 104 + } 105 + 106 + #ifdef CONFIG_HARDEN_BRANCH_HISTORY 107 + static int spectre_bhb_method; 108 + 109 + static const char *spectre_bhb_method_name(int method) 110 + { 111 + switch (method) { 112 + case SPECTRE_V2_METHOD_LOOP8: 113 + return "loop"; 114 + 115 + case SPECTRE_V2_METHOD_BPIALL: 116 + return "BPIALL"; 117 + 118 + default: 119 + return "unknown"; 120 + } 121 + } 122 + 123 + static int spectre_bhb_install_workaround(int method) 124 + { 125 + if (spectre_bhb_method != method) { 126 + if (spectre_bhb_method) { 127 + pr_err("CPU%u: Spectre BHB: method disagreement, system vulnerable\n", 128 + smp_processor_id()); 129 + 130 + return SPECTRE_VULNERABLE; 131 + } 132 + 133 + if (spectre_bhb_update_vectors(method) == SPECTRE_VULNERABLE) 134 + return SPECTRE_VULNERABLE; 135 + 136 + spectre_bhb_method = method; 180 137 } 181 138 182 - if (spectre_v2_method) 183 - pr_info("CPU%u: Spectre v2: using %s workaround\n", 184 - smp_processor_id(), spectre_v2_method); 139 + pr_info("CPU%u: Spectre BHB: using %s workaround\n", 140 + smp_processor_id(), spectre_bhb_method_name(method)); 141 + 142 + return SPECTRE_MITIGATED; 185 143 } 186 144 #else 187 - static void cpu_v7_spectre_init(void) 145 + static int spectre_bhb_install_workaround(int method) 188 146 { 147 + return SPECTRE_VULNERABLE; 189 148 } 190 149 #endif 150 + 151 + static void cpu_v7_spectre_bhb_init(void) 152 + { 153 + unsigned int state, method = 0; 154 + 155 + switch (read_cpuid_part()) { 156 + case ARM_CPU_PART_CORTEX_A15: 157 + case ARM_CPU_PART_BRAHMA_B15: 158 + case ARM_CPU_PART_CORTEX_A57: 159 + case ARM_CPU_PART_CORTEX_A72: 160 + state = SPECTRE_MITIGATED; 161 + method = SPECTRE_V2_METHOD_LOOP8; 162 + break; 163 + 164 + case ARM_CPU_PART_CORTEX_A73: 165 + case ARM_CPU_PART_CORTEX_A75: 166 + state = SPECTRE_MITIGATED; 167 + method = SPECTRE_V2_METHOD_BPIALL; 168 + break; 169 + 170 + default: 171 + state = SPECTRE_UNAFFECTED; 172 + break; 173 + } 174 + 175 + if (state == SPECTRE_MITIGATED) 176 + state = spectre_bhb_install_workaround(method); 177 + 178 + spectre_v2_update_state(state, method); 179 + } 191 180 192 181 static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned, 193 182 u32 mask, const char *msg) ··· 281 142 void cpu_v7_ca8_ibe(void) 282 143 { 283 144 if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6))) 284 - cpu_v7_spectre_init(); 145 + cpu_v7_spectre_v2_init(); 285 146 } 286 147 287 148 void cpu_v7_ca15_ibe(void) 288 149 { 289 150 if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0))) 290 - cpu_v7_spectre_init(); 151 + cpu_v7_spectre_v2_init(); 291 152 } 292 153 293 154 void cpu_v7_bugs_init(void) 294 155 { 295 - cpu_v7_spectre_init(); 156 + cpu_v7_spectre_v2_init(); 157 + cpu_v7_spectre_bhb_init(); 296 158 }
+9 -3
arch/arm64/Kconfig
··· 1252 1252 def_bool y 1253 1253 depends on ARM_PMU 1254 1254 1255 - config ARCH_HAS_FILTER_PGPROT 1256 - def_bool y 1257 - 1258 1255 # Supported by clang >= 7.0 1259 1256 config CC_HAVE_SHADOW_CALL_STACK 1260 1257 def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18) ··· 1379 1382 via a trampoline page in the vector table. 1380 1383 1381 1384 If unsure, say Y. 1385 + 1386 + config MITIGATE_SPECTRE_BRANCH_HISTORY 1387 + bool "Mitigate Spectre style attacks against branch history" if EXPERT 1388 + default y 1389 + help 1390 + Speculation attacks against some high-performance processors can 1391 + make use of branch history to influence future speculation. 1392 + When taking an exception from user-space, a sequence of branches 1393 + or a firmware call overwrites the branch history. 1382 1394 1383 1395 config RODATA_FULL_DEFAULT_ENABLED 1384 1396 bool "Apply r/o permissions of VM areas also to their linear aliases"
+1 -2
arch/arm64/boot/dts/arm/juno-base.dtsi
··· 543 543 <0x02000000 0x00 0x50000000 0x00 0x50000000 0x0 0x08000000>, 544 544 <0x42000000 0x40 0x00000000 0x40 0x00000000 0x1 0x00000000>; 545 545 /* Standard AXI Translation entries as programmed by EDK2 */ 546 - dma-ranges = <0x02000000 0x0 0x2c1c0000 0x0 0x2c1c0000 0x0 0x00040000>, 547 - <0x02000000 0x0 0x80000000 0x0 0x80000000 0x0 0x80000000>, 546 + dma-ranges = <0x02000000 0x0 0x80000000 0x0 0x80000000 0x0 0x80000000>, 548 547 <0x43000000 0x8 0x00000000 0x8 0x00000000 0x2 0x00000000>; 549 548 #interrupt-cells = <1>; 550 549 interrupt-map-mask = <0 0 0 7>;
-1
arch/arm64/boot/dts/freescale/imx8mm.dtsi
··· 707 707 clocks = <&clk IMX8MM_CLK_VPU_DEC_ROOT>; 708 708 assigned-clocks = <&clk IMX8MM_CLK_VPU_BUS>; 709 709 assigned-clock-parents = <&clk IMX8MM_SYS_PLL1_800M>; 710 - resets = <&src IMX8MQ_RESET_VPU_RESET>; 711 710 }; 712 711 713 712 pgc_vpu_g1: power-domain@7 {
+1 -1
arch/arm64/boot/dts/freescale/imx8ulp.dtsi
··· 132 132 133 133 scmi_sensor: protocol@15 { 134 134 reg = <0x15>; 135 - #thermal-sensor-cells = <0>; 135 + #thermal-sensor-cells = <1>; 136 136 }; 137 137 }; 138 138 };
+2 -2
arch/arm64/boot/dts/intel/socfpga_agilex.dtsi
··· 502 502 }; 503 503 504 504 usb0: usb@ffb00000 { 505 - compatible = "snps,dwc2"; 505 + compatible = "intel,socfpga-agilex-hsotg", "snps,dwc2"; 506 506 reg = <0xffb00000 0x40000>; 507 507 interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>; 508 508 phys = <&usbphy0>; ··· 515 515 }; 516 516 517 517 usb1: usb@ffb40000 { 518 - compatible = "snps,dwc2"; 518 + compatible = "intel,socfpga-agilex-hsotg", "snps,dwc2"; 519 519 reg = <0xffb40000 0x40000>; 520 520 interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>; 521 521 phys = <&usbphy0>;
+7 -1
arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
··· 18 18 19 19 aliases { 20 20 spi0 = &spi0; 21 + ethernet0 = &eth0; 21 22 ethernet1 = &eth1; 22 23 mmc0 = &sdhci0; 23 24 mmc1 = &sdhci1; ··· 139 138 /* 140 139 * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property 141 140 * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and 142 - * 2 size cells and also expects that the second range starts at 16 MB offset. If these 141 + * 2 size cells and also expects that the second range starts at 16 MB offset. Also it 142 + * expects that first range uses same address for PCI (child) and CPU (parent) cells (so 143 + * no remapping) and that this address is the lowest from all specified ranges. If these 143 144 * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address 144 145 * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window 145 146 * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB. ··· 150 147 * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7 151 148 * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf 152 149 * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33 150 + * Bug related to requirement of same child and parent addresses for first range is fixed 151 + * in U-Boot version 2022.04 by following commit: 152 + * https://source.denx.de/u-boot/u-boot/-/commit/1fd54253bca7d43d046bba4853fe5fafd034bc17 153 153 */ 154 154 #address-cells = <3>; 155 155 #size-cells = <2>;
+1 -1
arch/arm64/boot/dts/marvell/armada-37xx.dtsi
··· 499 499 * (totaling 127 MiB) for MEM. 500 500 */ 501 501 ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x07f00000 /* Port 0 MEM */ 502 - 0x81000000 0 0xefff0000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */ 502 + 0x81000000 0 0x00000000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */ 503 503 interrupt-map-mask = <0 0 0 7>; 504 504 interrupt-map = <0 0 0 1 &pcie_intc 0>, 505 505 <0 0 0 2 &pcie_intc 1>,
+1 -1
arch/arm64/boot/dts/nvidia/tegra194.dtsi
··· 1584 1584 #iommu-cells = <1>; 1585 1585 1586 1586 nvidia,memory-controller = <&mc>; 1587 - status = "okay"; 1587 + status = "disabled"; 1588 1588 }; 1589 1589 1590 1590 smmu: iommu@12000000 {
+5
arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
··· 807 807 808 808 qcom,snoc-host-cap-8bit-quirk; 809 809 }; 810 + 811 + &crypto { 812 + /* FIXME: qce_start triggers an SError */ 813 + status = "disabled"; 814 + };
+23 -5
arch/arm64/boot/dts/qcom/sm8350.dtsi
··· 35 35 clock-frequency = <32000>; 36 36 #clock-cells = <0>; 37 37 }; 38 + 39 + ufs_phy_rx_symbol_0_clk: ufs-phy-rx-symbol-0 { 40 + compatible = "fixed-clock"; 41 + clock-frequency = <1000>; 42 + #clock-cells = <0>; 43 + }; 44 + 45 + ufs_phy_rx_symbol_1_clk: ufs-phy-rx-symbol-1 { 46 + compatible = "fixed-clock"; 47 + clock-frequency = <1000>; 48 + #clock-cells = <0>; 49 + }; 50 + 51 + ufs_phy_tx_symbol_0_clk: ufs-phy-tx-symbol-0 { 52 + compatible = "fixed-clock"; 53 + clock-frequency = <1000>; 54 + #clock-cells = <0>; 55 + }; 38 56 }; 39 57 40 58 cpus { ··· 621 603 <0>, 622 604 <0>, 623 605 <0>, 624 - <0>, 625 - <0>, 626 - <0>, 606 + <&ufs_phy_rx_symbol_0_clk>, 607 + <&ufs_phy_rx_symbol_1_clk>, 608 + <&ufs_phy_tx_symbol_0_clk>, 627 609 <0>, 628 610 <0>; 629 611 }; ··· 1941 1923 <75000000 300000000>, 1942 1924 <0 0>, 1943 1925 <0 0>, 1944 - <75000000 300000000>, 1945 - <75000000 300000000>; 1926 + <0 0>, 1927 + <0 0>; 1946 1928 status = "disabled"; 1947 1929 }; 1948 1930
+5 -3
arch/arm64/boot/dts/qcom/sm8450.dtsi
··· 726 726 compatible = "qcom,sm8450-smmu-500", "arm,mmu-500"; 727 727 reg = <0 0x15000000 0 0x100000>; 728 728 #iommu-cells = <2>; 729 - #global-interrupts = <2>; 729 + #global-interrupts = <1>; 730 730 interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>, 731 731 <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>, 732 732 <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, ··· 813 813 <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>, 814 814 <GIC_SPI 421 IRQ_TYPE_LEVEL_HIGH>, 815 815 <GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>, 816 + <GIC_SPI 423 IRQ_TYPE_LEVEL_HIGH>, 816 817 <GIC_SPI 424 IRQ_TYPE_LEVEL_HIGH>, 817 818 <GIC_SPI 425 IRQ_TYPE_LEVEL_HIGH>, 818 819 <GIC_SPI 690 IRQ_TYPE_LEVEL_HIGH>, ··· 1073 1072 <&gcc GCC_USB30_PRIM_MASTER_CLK>, 1074 1073 <&gcc GCC_AGGRE_USB3_PRIM_AXI_CLK>, 1075 1074 <&gcc GCC_USB30_PRIM_MOCK_UTMI_CLK>, 1076 - <&gcc GCC_USB30_PRIM_SLEEP_CLK>; 1075 + <&gcc GCC_USB30_PRIM_SLEEP_CLK>, 1076 + <&gcc GCC_USB3_0_CLKREF_EN>; 1077 1077 clock-names = "cfg_noc", "core", "iface", "mock_utmi", 1078 - "sleep"; 1078 + "sleep", "xo"; 1079 1079 1080 1080 assigned-clocks = <&gcc GCC_USB30_PRIM_MOCK_UTMI_CLK>, 1081 1081 <&gcc GCC_USB30_PRIM_MASTER_CLK>;
+1 -1
arch/arm64/boot/dts/rockchip/px30.dtsi
··· 711 711 clock-names = "pclk", "timer"; 712 712 }; 713 713 714 - dmac: dmac@ff240000 { 714 + dmac: dma-controller@ff240000 { 715 715 compatible = "arm,pl330", "arm,primecell"; 716 716 reg = <0x0 0xff240000 0x0 0x4000>; 717 717 interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
+1 -1
arch/arm64/boot/dts/rockchip/rk3328.dtsi
··· 489 489 status = "disabled"; 490 490 }; 491 491 492 - dmac: dmac@ff1f0000 { 492 + dmac: dma-controller@ff1f0000 { 493 493 compatible = "arm,pl330", "arm,primecell"; 494 494 reg = <0x0 0xff1f0000 0x0 0x4000>; 495 495 interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
+12 -5
arch/arm64/boot/dts/rockchip/rk3399-gru.dtsi
··· 286 286 287 287 sound: sound { 288 288 compatible = "rockchip,rk3399-gru-sound"; 289 - rockchip,cpu = <&i2s0 &i2s2>; 289 + rockchip,cpu = <&i2s0 &spdif>; 290 290 }; 291 291 }; 292 292 ··· 437 437 status = "okay"; 438 438 }; 439 439 440 - &i2s2 { 441 - status = "okay"; 442 - }; 443 - 444 440 &io_domains { 445 441 status = "okay"; 446 442 ··· 531 535 sd-uhs-sdr104; 532 536 vmmc-supply = <&pp3000_sd_slot>; 533 537 vqmmc-supply = <&ppvar_sd_card_io>; 538 + }; 539 + 540 + &spdif { 541 + status = "okay"; 542 + 543 + /* 544 + * SPDIF is routed internally to DP; we either don't use these pins, or 545 + * mux them to something else. 546 + */ 547 + /delete-property/ pinctrl-0; 548 + /delete-property/ pinctrl-names; 534 549 }; 535 550 536 551 &spi1 {
+1
arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
··· 232 232 233 233 &usbdrd_dwc3_0 { 234 234 dr_mode = "otg"; 235 + extcon = <&extcon_usb3>; 235 236 status = "okay"; 236 237 }; 237 238
+20
arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
··· 25 25 }; 26 26 }; 27 27 28 + extcon_usb3: extcon-usb3 { 29 + compatible = "linux,extcon-usb-gpio"; 30 + id-gpio = <&gpio1 RK_PC2 GPIO_ACTIVE_HIGH>; 31 + pinctrl-names = "default"; 32 + pinctrl-0 = <&usb3_id>; 33 + }; 34 + 28 35 clkin_gmac: external-gmac-clock { 29 36 compatible = "fixed-clock"; 30 37 clock-frequency = <125000000>; ··· 429 422 <4 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>; 430 423 }; 431 424 }; 425 + 426 + usb3 { 427 + usb3_id: usb3-id { 428 + rockchip,pins = 429 + <1 RK_PC2 RK_FUNC_GPIO &pcfg_pull_none>; 430 + }; 431 + }; 432 432 }; 433 433 434 434 &sdhci { 435 + /* 436 + * Signal integrity isn't great at 200MHz but 100MHz has proven stable 437 + * enough. 438 + */ 439 + max-frequency = <100000000>; 440 + 435 441 bus-width = <8>; 436 442 mmc-hs400-1_8v; 437 443 mmc-hs400-enhanced-strobe;
+3 -3
arch/arm64/boot/dts/rockchip/rk3399.dtsi
··· 1881 1881 interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH 0>; 1882 1882 clocks = <&cru PCLK_HDMI_CTRL>, 1883 1883 <&cru SCLK_HDMI_SFR>, 1884 - <&cru PLL_VPLL>, 1884 + <&cru SCLK_HDMI_CEC>, 1885 1885 <&cru PCLK_VIO_GRF>, 1886 - <&cru SCLK_HDMI_CEC>; 1887 - clock-names = "iahb", "isfr", "vpll", "grf", "cec"; 1886 + <&cru PLL_VPLL>; 1887 + clock-names = "iahb", "isfr", "cec", "grf", "vpll"; 1888 1888 power-domains = <&power RK3399_PD_HDCP>; 1889 1889 reg-io-width = <4>; 1890 1890 rockchip,grf = <&grf>;
-2
arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
··· 285 285 vcc_ddr: DCDC_REG3 { 286 286 regulator-always-on; 287 287 regulator-boot-on; 288 - regulator-min-microvolt = <1100000>; 289 - regulator-max-microvolt = <1100000>; 290 288 regulator-initial-mode = <0x2>; 291 289 regulator-name = "vcc_ddr"; 292 290 regulator-state-mem {
+2 -4
arch/arm64/boot/dts/rockchip/rk3568.dtsi
··· 32 32 clocks = <&cru SCLK_GMAC0>, <&cru SCLK_GMAC0_RX_TX>, 33 33 <&cru SCLK_GMAC0_RX_TX>, <&cru CLK_MAC0_REFOUT>, 34 34 <&cru ACLK_GMAC0>, <&cru PCLK_GMAC0>, 35 - <&cru SCLK_GMAC0_RX_TX>, <&cru CLK_GMAC0_PTP_REF>, 36 - <&cru PCLK_XPCS>; 35 + <&cru SCLK_GMAC0_RX_TX>, <&cru CLK_GMAC0_PTP_REF>; 37 36 clock-names = "stmmaceth", "mac_clk_rx", 38 37 "mac_clk_tx", "clk_mac_refout", 39 38 "aclk_mac", "pclk_mac", 40 - "clk_mac_speed", "ptp_ref", 41 - "pclk_xpcs"; 39 + "clk_mac_speed", "ptp_ref"; 42 40 resets = <&cru SRST_A_GMAC0>; 43 41 reset-names = "stmmaceth"; 44 42 rockchip,grf = <&grf>;
+2 -2
arch/arm64/boot/dts/rockchip/rk356x.dtsi
··· 651 651 status = "disabled"; 652 652 }; 653 653 654 - dmac0: dmac@fe530000 { 654 + dmac0: dma-controller@fe530000 { 655 655 compatible = "arm,pl330", "arm,primecell"; 656 656 reg = <0x0 0xfe530000 0x0 0x4000>; 657 657 interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>, ··· 662 662 #dma-cells = <1>; 663 663 }; 664 664 665 - dmac1: dmac@fe550000 { 665 + dmac1: dma-controller@fe550000 { 666 666 compatible = "arm,pl330", "arm,primecell"; 667 667 reg = <0x0 0xfe550000 0x0 0x4000>; 668 668 interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
+53
arch/arm64/include/asm/assembler.h
··· 109 109 .endm 110 110 111 111 /* 112 + * Clear Branch History instruction 113 + */ 114 + .macro clearbhb 115 + hint #22 116 + .endm 117 + 118 + /* 112 119 * Speculation barrier 113 120 */ 114 121 .macro sb ··· 857 850 858 851 #endif /* GNU_PROPERTY_AARCH64_FEATURE_1_DEFAULT */ 859 852 853 + .macro __mitigate_spectre_bhb_loop tmp 854 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 855 + alternative_cb spectre_bhb_patch_loop_iter 856 + mov \tmp, #32 // Patched to correct the immediate 857 + alternative_cb_end 858 + .Lspectre_bhb_loop\@: 859 + b . + 4 860 + subs \tmp, \tmp, #1 861 + b.ne .Lspectre_bhb_loop\@ 862 + sb 863 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 864 + .endm 865 + 866 + .macro mitigate_spectre_bhb_loop tmp 867 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 868 + alternative_cb spectre_bhb_patch_loop_mitigation_enable 869 + b .L_spectre_bhb_loop_done\@ // Patched to NOP 870 + alternative_cb_end 871 + __mitigate_spectre_bhb_loop \tmp 872 + .L_spectre_bhb_loop_done\@: 873 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 874 + .endm 875 + 876 + /* Save/restores x0-x3 to the stack */ 877 + .macro __mitigate_spectre_bhb_fw 878 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 879 + stp x0, x1, [sp, #-16]! 880 + stp x2, x3, [sp, #-16]! 881 + mov w0, #ARM_SMCCC_ARCH_WORKAROUND_3 882 + alternative_cb smccc_patch_fw_mitigation_conduit 883 + nop // Patched to SMC/HVC #0 884 + alternative_cb_end 885 + ldp x2, x3, [sp], #16 886 + ldp x0, x1, [sp], #16 887 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 888 + .endm 889 + 890 + .macro mitigate_spectre_bhb_clear_insn 891 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 892 + alternative_cb spectre_bhb_patch_clearbhb 893 + /* Patched to NOP when not supported */ 894 + clearbhb 895 + isb 896 + alternative_cb_end 897 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 898 + .endm 860 899 #endif /* __ASM_ASSEMBLER_H */
+29
arch/arm64/include/asm/cpufeature.h
··· 637 637 return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1)); 638 638 } 639 639 640 + 641 + static inline bool supports_csv2p3(int scope) 642 + { 643 + u64 pfr0; 644 + u8 csv2_val; 645 + 646 + if (scope == SCOPE_LOCAL_CPU) 647 + pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1); 648 + else 649 + pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 650 + 651 + csv2_val = cpuid_feature_extract_unsigned_field(pfr0, 652 + ID_AA64PFR0_CSV2_SHIFT); 653 + return csv2_val == 3; 654 + } 655 + 656 + static inline bool supports_clearbhb(int scope) 657 + { 658 + u64 isar2; 659 + 660 + if (scope == SCOPE_LOCAL_CPU) 661 + isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1); 662 + else 663 + isar2 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR2_EL1); 664 + 665 + return cpuid_feature_extract_unsigned_field(isar2, 666 + ID_AA64ISAR2_CLEARBHB_SHIFT); 667 + } 668 + 640 669 const struct cpumask *system_32bit_el0_cpumask(void); 641 670 DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0); 642 671
+8
arch/arm64/include/asm/cputype.h
··· 73 73 #define ARM_CPU_PART_CORTEX_A76 0xD0B 74 74 #define ARM_CPU_PART_NEOVERSE_N1 0xD0C 75 75 #define ARM_CPU_PART_CORTEX_A77 0xD0D 76 + #define ARM_CPU_PART_NEOVERSE_V1 0xD40 77 + #define ARM_CPU_PART_CORTEX_A78 0xD41 78 + #define ARM_CPU_PART_CORTEX_X1 0xD44 76 79 #define ARM_CPU_PART_CORTEX_A510 0xD46 77 80 #define ARM_CPU_PART_CORTEX_A710 0xD47 78 81 #define ARM_CPU_PART_CORTEX_X2 0xD48 79 82 #define ARM_CPU_PART_NEOVERSE_N2 0xD49 83 + #define ARM_CPU_PART_CORTEX_A78C 0xD4B 80 84 81 85 #define APM_CPU_PART_POTENZA 0x000 82 86 ··· 121 117 #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76) 122 118 #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1) 123 119 #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77) 120 + #define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1) 121 + #define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78) 122 + #define MIDR_CORTEX_X1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X1) 124 123 #define MIDR_CORTEX_A510 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A510) 125 124 #define MIDR_CORTEX_A710 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710) 126 125 #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2) 127 126 #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) 127 + #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C) 128 128 #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) 129 129 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) 130 130 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+4 -2
arch/arm64/include/asm/fixmap.h
··· 62 62 #endif /* CONFIG_ACPI_APEI_GHES */ 63 63 64 64 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 65 + FIX_ENTRY_TRAMP_TEXT3, 66 + FIX_ENTRY_TRAMP_TEXT2, 67 + FIX_ENTRY_TRAMP_TEXT1, 65 68 FIX_ENTRY_TRAMP_DATA, 66 - FIX_ENTRY_TRAMP_TEXT, 67 - #define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT)) 69 + #define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT1)) 68 70 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ 69 71 __end_of_permanent_fixed_addresses, 70 72
+1
arch/arm64/include/asm/insn.h
··· 65 65 AARCH64_INSN_HINT_PSB = 0x11 << 5, 66 66 AARCH64_INSN_HINT_TSB = 0x12 << 5, 67 67 AARCH64_INSN_HINT_CSDB = 0x14 << 5, 68 + AARCH64_INSN_HINT_CLEARBHB = 0x16 << 5, 68 69 69 70 AARCH64_INSN_HINT_BTI = 0x20 << 5, 70 71 AARCH64_INSN_HINT_BTIC = 0x22 << 5,
+5
arch/arm64/include/asm/kvm_host.h
··· 714 714 ctxt_sys_reg(cpu_ctxt, MPIDR_EL1) = read_cpuid_mpidr(); 715 715 } 716 716 717 + static inline bool kvm_system_needs_idmapped_vectors(void) 718 + { 719 + return cpus_have_const_cap(ARM64_SPECTRE_V3A); 720 + } 721 + 717 722 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu); 718 723 719 724 static inline void kvm_arch_hardware_unsetup(void) {}
+1
arch/arm64/include/asm/mte-kasan.h
··· 5 5 #ifndef __ASM_MTE_KASAN_H 6 6 #define __ASM_MTE_KASAN_H 7 7 8 + #include <asm/compiler.h> 8 9 #include <asm/mte-def.h> 9 10 10 11 #ifndef __ASSEMBLY__
+2 -2
arch/arm64/include/asm/pgtable-prot.h
··· 92 92 #define __P001 PAGE_READONLY 93 93 #define __P010 PAGE_READONLY 94 94 #define __P011 PAGE_READONLY 95 - #define __P100 PAGE_EXECONLY 95 + #define __P100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */ 96 96 #define __P101 PAGE_READONLY_EXEC 97 97 #define __P110 PAGE_READONLY_EXEC 98 98 #define __P111 PAGE_READONLY_EXEC ··· 101 101 #define __S001 PAGE_READONLY 102 102 #define __S010 PAGE_SHARED 103 103 #define __S011 PAGE_SHARED 104 - #define __S100 PAGE_EXECONLY 104 + #define __S100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */ 105 105 #define __S101 PAGE_READONLY_EXEC 106 106 #define __S110 PAGE_SHARED_EXEC 107 107 #define __S111 PAGE_SHARED_EXEC
-11
arch/arm64/include/asm/pgtable.h
··· 1017 1017 } 1018 1018 #define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte 1019 1019 1020 - static inline pgprot_t arch_filter_pgprot(pgprot_t prot) 1021 - { 1022 - if (cpus_have_const_cap(ARM64_HAS_EPAN)) 1023 - return prot; 1024 - 1025 - if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY)) 1026 - return prot; 1027 - 1028 - return PAGE_READONLY_EXEC; 1029 - } 1030 - 1031 1020 static inline bool pud_sect_supported(void) 1032 1021 { 1033 1022 return PAGE_SIZE == SZ_4K;
+2 -2
arch/arm64/include/asm/rwonce.h
··· 5 5 #ifndef __ASM_RWONCE_H 6 6 #define __ASM_RWONCE_H 7 7 8 - #ifdef CONFIG_LTO 8 + #if defined(CONFIG_LTO) && !defined(__ASSEMBLY__) 9 9 10 10 #include <linux/compiler_types.h> 11 11 #include <asm/alternative-macros.h> ··· 66 66 }) 67 67 68 68 #endif /* !BUILD_VDSO */ 69 - #endif /* CONFIG_LTO */ 69 + #endif /* CONFIG_LTO && !__ASSEMBLY__ */ 70 70 71 71 #include <asm-generic/rwonce.h> 72 72
+5
arch/arm64/include/asm/sections.h
··· 23 23 extern char __entry_tramp_text_start[], __entry_tramp_text_end[]; 24 24 extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[]; 25 25 26 + static inline size_t entry_tramp_text_size(void) 27 + { 28 + return __entry_tramp_text_end - __entry_tramp_text_start; 29 + } 30 + 26 31 #endif /* __ASM_SECTIONS_H */
+4
arch/arm64/include/asm/spectre.h
··· 93 93 94 94 enum mitigation_state arm64_get_meltdown_state(void); 95 95 96 + enum mitigation_state arm64_get_spectre_bhb_state(void); 97 + bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope); 98 + u8 spectre_bhb_loop_affected(int scope); 99 + void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused); 96 100 #endif /* __ASSEMBLY__ */ 97 101 #endif /* __ASM_SPECTRE_H */
+2
arch/arm64/include/asm/sysreg.h
··· 773 773 #define ID_AA64ISAR1_GPI_IMP_DEF 0x1 774 774 775 775 /* id_aa64isar2 */ 776 + #define ID_AA64ISAR2_CLEARBHB_SHIFT 28 776 777 #define ID_AA64ISAR2_RPRES_SHIFT 4 777 778 #define ID_AA64ISAR2_WFXT_SHIFT 0 778 779 ··· 905 904 #endif 906 905 907 906 /* id_aa64mmfr1 */ 907 + #define ID_AA64MMFR1_ECBHB_SHIFT 60 908 908 #define ID_AA64MMFR1_AFP_SHIFT 44 909 909 #define ID_AA64MMFR1_ETS_SHIFT 36 910 910 #define ID_AA64MMFR1_TWED_SHIFT 32
+73
arch/arm64/include/asm/vectors.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (C) 2022 ARM Ltd. 4 + */ 5 + #ifndef __ASM_VECTORS_H 6 + #define __ASM_VECTORS_H 7 + 8 + #include <linux/bug.h> 9 + #include <linux/percpu.h> 10 + 11 + #include <asm/fixmap.h> 12 + 13 + extern char vectors[]; 14 + extern char tramp_vectors[]; 15 + extern char __bp_harden_el1_vectors[]; 16 + 17 + /* 18 + * Note: the order of this enum corresponds to two arrays in entry.S: 19 + * tramp_vecs and __bp_harden_el1_vectors. By default the canonical 20 + * 'full fat' vectors are used directly. 21 + */ 22 + enum arm64_bp_harden_el1_vectors { 23 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 24 + /* 25 + * Perform the BHB loop mitigation, before branching to the canonical 26 + * vectors. 27 + */ 28 + EL1_VECTOR_BHB_LOOP, 29 + 30 + /* 31 + * Make the SMC call for firmware mitigation, before branching to the 32 + * canonical vectors. 33 + */ 34 + EL1_VECTOR_BHB_FW, 35 + 36 + /* 37 + * Use the ClearBHB instruction, before branching to the canonical 38 + * vectors. 39 + */ 40 + EL1_VECTOR_BHB_CLEAR_INSN, 41 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 42 + 43 + /* 44 + * Remap the kernel before branching to the canonical vectors. 45 + */ 46 + EL1_VECTOR_KPTI, 47 + }; 48 + 49 + #ifndef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 50 + #define EL1_VECTOR_BHB_LOOP -1 51 + #define EL1_VECTOR_BHB_FW -1 52 + #define EL1_VECTOR_BHB_CLEAR_INSN -1 53 + #endif /* !CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 54 +
55 + /* The vectors to use on return from EL0. e.g. to remap the kernel */ 56 + DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector); 57 + 58 + #ifndef CONFIG_UNMAP_KERNEL_AT_EL0 59 + #define TRAMP_VALIAS 0 60 + #endif 61 + 62 + static inline const char * 63 + arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot) 64 + { 65 + if (arm64_kernel_unmapped_at_el0()) 66 + return (char *)TRAMP_VALIAS + SZ_2K * slot; 67 + 68 + WARN_ON_ONCE(slot == EL1_VECTOR_KPTI); 69 + 70 + return __bp_harden_el1_vectors + SZ_2K * slot; 71 + } 72 + 73 + #endif /* __ASM_VECTORS_H */
+5
arch/arm64/include/uapi/asm/kvm.h
··· 281 281 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED 3 282 282 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED (1U << 4) 283 283 284 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3 KVM_REG_ARM_FW_REG(3) 285 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL 0 286 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL 1 287 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED 2 288 + 284 289 /* SVE registers */ 285 290 #define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) 286 291
+7
arch/arm64/kernel/cpu_errata.c
··· 502 502 .matches = has_spectre_v4, 503 503 .cpu_enable = spectre_v4_enable_mitigation, 504 504 }, 505 + { 506 + .desc = "Spectre-BHB", 507 + .capability = ARM64_SPECTRE_BHB, 508 + .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, 509 + .matches = is_spectre_bhb_affected, 510 + .cpu_enable = spectre_bhb_enable_mitigation, 511 + }, 505 512 #ifdef CONFIG_ARM64_ERRATUM_1418040 506 513 { 507 514 .desc = "ARM erratum 1418040",
+12
arch/arm64/kernel/cpufeature.c
··· 73 73 #include <linux/mm.h> 74 74 #include <linux/cpu.h> 75 75 #include <linux/kasan.h> 76 + #include <linux/percpu.h> 77 + 76 78 #include <asm/cpu.h> 77 79 #include <asm/cpufeature.h> 78 80 #include <asm/cpu_ops.h> ··· 87 85 #include <asm/smp.h> 88 86 #include <asm/sysreg.h> 89 87 #include <asm/traps.h> 88 + #include <asm/vectors.h> 90 89 #include <asm/virt.h> 91 90 92 91 /* Kernel representation of AT_HWCAP and AT_HWCAP2 */ ··· 112 109 113 110 bool arm64_use_ng_mappings = false; 114 111 EXPORT_SYMBOL(arm64_use_ng_mappings); 112 + 113 + DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors; 115 114 116 115 /* 117 116 * Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs ··· 231 226 }; 232 227 233 228 static const struct arm64_ftr_bits ftr_id_aa64isar2[] = { 229 + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_HIGHER_SAFE, ID_AA64ISAR2_CLEARBHB_SHIFT, 4, 0), 234 230 ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_RPRES_SHIFT, 4, 0), 235 231 ARM64_FTR_END, 236 232 }; ··· 1595 1589 kpti_remap_fn *remap_fn; 1596 1590 1597 1591 int cpu = smp_processor_id(); 1592 + 1593 + if (__this_cpu_read(this_cpu_vector) == vectors) { 1594 + const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI); 1595 + 1596 + __this_cpu_write(this_cpu_vector, v); 1597 + } 1598 1598 1599 1599 /* 1600 1600 * We don't need to rewrite the page-tables if either we've done
+157 -57
arch/arm64/kernel/entry.S
··· 37 37 38 38 .macro kernel_ventry, el:req, ht:req, regsize:req, label:req 39 39 .align 7 40 - #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 40 + .Lventry_start\@: 41 41 .if \el == 0 42 - alternative_if ARM64_UNMAP_KERNEL_AT_EL0 42 + /* 43 + * This must be the first instruction of the EL0 vector entries. It is 44 + * skipped by the trampoline vectors, to trigger the cleanup. 45 + */ 46 + b .Lskip_tramp_vectors_cleanup\@ 43 47 .if \regsize == 64 44 48 mrs x30, tpidrro_el0 45 49 msr tpidrro_el0, xzr 46 50 .else 47 51 mov x30, xzr 48 52 .endif 49 - alternative_else_nop_endif 53 + .Lskip_tramp_vectors_cleanup\@: 50 54 .endif 51 - #endif 52 55 53 56 sub sp, sp, #PT_REGS_SIZE 54 57 #ifdef CONFIG_VMAP_STACK ··· 98 95 mrs x0, tpidrro_el0 99 96 #endif 100 97 b el\el\ht\()_\regsize\()_\label 98 + .org .Lventry_start\@ + 128 // Did we overflow the ventry slot? 101 99 .endm 102 100 103 - .macro tramp_alias, dst, sym 101 + .macro tramp_alias, dst, sym, tmp 104 102 mov_q \dst, TRAMP_VALIAS 105 - add \dst, \dst, #(\sym - .entry.tramp.text) 103 + adr_l \tmp, \sym 104 + add \dst, \dst, \tmp 105 + adr_l \tmp, .entry.tramp.text 106 + sub \dst, \dst, \tmp 106 107 .endm 107 108 108 109 /* ··· 123 116 tbnz \tmp2, #TIF_SSBD, .L__asm_ssbd_skip\@ 124 117 mov w0, #ARM_SMCCC_ARCH_WORKAROUND_2 125 118 mov w1, #\state 126 - alternative_cb spectre_v4_patch_fw_mitigation_conduit 119 + alternative_cb smccc_patch_fw_mitigation_conduit 127 120 nop // Patched to SMC/HVC #0 128 121 alternative_cb_end 129 122 .L__asm_ssbd_skip\@: ··· 420 413 ldp x24, x25, [sp, #16 * 12] 421 414 ldp x26, x27, [sp, #16 * 13] 422 415 ldp x28, x29, [sp, #16 * 14] 423 - ldr lr, [sp, #S_LR] 424 - add sp, sp, #PT_REGS_SIZE // restore sp 425 416 426 417 .if \el == 0 427 - alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 418 + alternative_if_not ARM64_UNMAP_KERNEL_AT_EL0 419 + ldr lr, [sp, #S_LR] 420 + add sp, sp, #PT_REGS_SIZE // restore sp 421 + eret 422 + alternative_else_nop_endif 428 423 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 429 
424 bne 4f 430 - msr far_el1, x30 431 - tramp_alias x30, tramp_exit_native 425 + msr far_el1, x29 426 + tramp_alias x30, tramp_exit_native, x29 432 427 br x30 433 428 4: 434 - tramp_alias x30, tramp_exit_compat 429 + tramp_alias x30, tramp_exit_compat, x29 435 430 br x30 436 431 #endif 437 432 .else 433 + ldr lr, [sp, #S_LR] 434 + add sp, sp, #PT_REGS_SIZE // restore sp 435 + 438 436 /* Ensure any device/NC reads complete */ 439 437 alternative_insn nop, "dmb sy", ARM64_WORKAROUND_1508412 440 438 ··· 606 594 607 595 .popsection // .entry.text 608 596 609 - #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 610 - /* 611 - * Exception vectors trampoline. 612 - */ 613 - .pushsection ".entry.tramp.text", "ax" 614 - 615 597 // Move from tramp_pg_dir to swapper_pg_dir 616 598 .macro tramp_map_kernel, tmp 617 599 mrs \tmp, ttbr1_el1 ··· 639 633 */ 640 634 .endm 641 635 642 - .macro tramp_ventry, regsize = 64 636 + .macro tramp_data_page dst 637 + adr_l \dst, .entry.tramp.text 638 + sub \dst, \dst, PAGE_SIZE 639 + .endm 640 + 641 + .macro tramp_data_read_var dst, var 642 + #ifdef CONFIG_RANDOMIZE_BASE 643 + tramp_data_page \dst 644 + add \dst, \dst, #:lo12:__entry_tramp_data_\var 645 + ldr \dst, [\dst] 646 + #else 647 + ldr \dst, =\var 648 + #endif 649 + .endm 650 + 651 + #define BHB_MITIGATION_NONE 0 652 + #define BHB_MITIGATION_LOOP 1 653 + #define BHB_MITIGATION_FW 2 654 + #define BHB_MITIGATION_INSN 3 655 + 656 + .macro tramp_ventry, vector_start, regsize, kpti, bhb 643 657 .align 7 644 658 1: 645 659 .if \regsize == 64 646 660 msr tpidrro_el0, x30 // Restored in kernel_ventry 647 661 .endif 662 + 663 + .if \bhb == BHB_MITIGATION_LOOP 664 + /* 665 + * This sequence must appear before the first indirect branch. i.e. the 666 + * ret out of tramp_ventry. It appears here because x30 is free. 
667 + */ 668 + __mitigate_spectre_bhb_loop x30 669 + .endif // \bhb == BHB_MITIGATION_LOOP 670 + 671 + .if \bhb == BHB_MITIGATION_INSN 672 + clearbhb 673 + isb 674 + .endif // \bhb == BHB_MITIGATION_INSN 675 + 676 + .if \kpti == 1 648 677 /* 649 678 * Defend against branch aliasing attacks by pushing a dummy 650 679 * entry onto the return stack and using a RET instruction to ··· 689 648 b . 690 649 2: 691 650 tramp_map_kernel x30 692 - #ifdef CONFIG_RANDOMIZE_BASE 693 - adr x30, tramp_vectors + PAGE_SIZE 694 651 alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003 695 - ldr x30, [x30] 696 - #else 697 - ldr x30, =vectors 698 - #endif 652 + tramp_data_read_var x30, vectors 699 653 alternative_if_not ARM64_WORKAROUND_CAVIUM_TX2_219_PRFM 700 - prfm plil1strm, [x30, #(1b - tramp_vectors)] 654 + prfm plil1strm, [x30, #(1b - \vector_start)] 701 655 alternative_else_nop_endif 656 + 702 657 msr vbar_el1, x30 703 - add x30, x30, #(1b - tramp_vectors) 704 658 isb 659 + .else 660 + ldr x30, =vectors 661 + .endif // \kpti == 1 662 + 663 + .if \bhb == BHB_MITIGATION_FW 664 + /* 665 + * The firmware sequence must appear before the first indirect branch. 666 + * i.e. the ret out of tramp_ventry. But it also needs the stack to be 667 + * mapped to save/restore the registers the SMC clobbers. 668 + */ 669 + __mitigate_spectre_bhb_fw 670 + .endif // \bhb == BHB_MITIGATION_FW 671 + 672 + add x30, x30, #(1b - \vector_start + 4) 705 673 ret 674 + .org 1b + 128 // Did we overflow the ventry slot? 
706 675 .endm 707 676 708 677 .macro tramp_exit, regsize = 64 709 - adr x30, tramp_vectors 678 + tramp_data_read_var x30, this_cpu_vector 679 + get_this_cpu_offset x29 680 + ldr x30, [x30, x29] 681 + 710 682 msr vbar_el1, x30 711 - tramp_unmap_kernel x30 683 + ldr lr, [sp, #S_LR] 684 + tramp_unmap_kernel x29 712 685 .if \regsize == 64 713 - mrs x30, far_el1 686 + mrs x29, far_el1 714 687 .endif 688 + add sp, sp, #PT_REGS_SIZE // restore sp 715 689 eret 716 690 sb 717 691 .endm 718 692 719 - .align 11 720 - SYM_CODE_START_NOALIGN(tramp_vectors) 693 + .macro generate_tramp_vector, kpti, bhb 694 + .Lvector_start\@: 721 695 .space 0x400 722 696 723 - tramp_ventry 724 - tramp_ventry 725 - tramp_ventry 726 - tramp_ventry 697 + .rept 4 698 + tramp_ventry .Lvector_start\@, 64, \kpti, \bhb 699 + .endr 700 + .rept 4 701 + tramp_ventry .Lvector_start\@, 32, \kpti, \bhb 702 + .endr 703 + .endm 727 704 728 - tramp_ventry 32 729 - tramp_ventry 32 730 - tramp_ventry 32 731 - tramp_ventry 32 705 + #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 706 + /* 707 + * Exception vectors trampoline. 708 + * The order must match __bp_harden_el1_vectors and the 709 + * arm64_bp_harden_el1_vectors enum. 
710 + */ 711 + .pushsection ".entry.tramp.text", "ax" 712 + .align 11 713 + SYM_CODE_START_NOALIGN(tramp_vectors) 714 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 715 + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_LOOP 716 + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_FW 717 + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_INSN 718 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 719 + generate_tramp_vector kpti=1, bhb=BHB_MITIGATION_NONE 732 720 SYM_CODE_END(tramp_vectors) 733 721 734 722 SYM_CODE_START(tramp_exit_native) ··· 774 704 .pushsection ".rodata", "a" 775 705 .align PAGE_SHIFT 776 706 SYM_DATA_START(__entry_tramp_data_start) 707 + __entry_tramp_data_vectors: 777 708 .quad vectors 709 + #ifdef CONFIG_ARM_SDE_INTERFACE 710 + __entry_tramp_data___sdei_asm_handler: 711 + .quad __sdei_asm_handler 712 + #endif /* CONFIG_ARM_SDE_INTERFACE */ 713 + __entry_tramp_data_this_cpu_vector: 714 + .quad this_cpu_vector 778 715 SYM_DATA_END(__entry_tramp_data_start) 779 716 .popsection // .rodata 780 717 #endif /* CONFIG_RANDOMIZE_BASE */ 781 718 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ 719 + 720 + /* 721 + * Exception vectors for spectre mitigations on entry from EL1 when 722 + * kpti is not in use. 723 + */ 724 + .macro generate_el1_vector, bhb 725 + .Lvector_start\@: 726 + kernel_ventry 1, t, 64, sync // Synchronous EL1t 727 + kernel_ventry 1, t, 64, irq // IRQ EL1t 728 + kernel_ventry 1, t, 64, fiq // FIQ EL1h 729 + kernel_ventry 1, t, 64, error // Error EL1t 730 + 731 + kernel_ventry 1, h, 64, sync // Synchronous EL1h 732 + kernel_ventry 1, h, 64, irq // IRQ EL1h 733 + kernel_ventry 1, h, 64, fiq // FIQ EL1h 734 + kernel_ventry 1, h, 64, error // Error EL1h 735 + 736 + .rept 4 737 + tramp_ventry .Lvector_start\@, 64, 0, \bhb 738 + .endr 739 + .rept 4 740 + tramp_ventry .Lvector_start\@, 32, 0, \bhb 741 + .endr 742 + .endm 743 +
744 + /* The order must match tramp_vecs and the arm64_bp_harden_el1_vectors enum. */ 745 + .pushsection ".entry.text", "ax" 746 + .align 11 747 + SYM_CODE_START(__bp_harden_el1_vectors) 748 + #ifdef CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY 749 + generate_el1_vector bhb=BHB_MITIGATION_LOOP 750 + generate_el1_vector bhb=BHB_MITIGATION_FW 751 + generate_el1_vector bhb=BHB_MITIGATION_INSN 752 + #endif /* CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY */ 753 + SYM_CODE_END(__bp_harden_el1_vectors) 754 + .popsection 755 + 756 + 782 756 783 757 /* 784 758 * Register switch for AArch64. The callee-saved registers need to be saved ··· 949 835 * Remember whether to unmap the kernel on exit. 950 836 */ 951 837 1: str x4, [x1, #(SDEI_EVENT_INTREGS + S_SDEI_TTBR1)] 952 - 953 - #ifdef CONFIG_RANDOMIZE_BASE 954 - adr x4, tramp_vectors + PAGE_SIZE 955 - add x4, x4, #:lo12:__sdei_asm_trampoline_next_handler 956 - ldr x4, [x4] 957 - #else 958 - ldr x4, =__sdei_asm_handler 959 - #endif 838 + tramp_data_read_var x4, __sdei_asm_handler 960 839 br x4 961 840 SYM_CODE_END(__sdei_asm_entry_trampoline) 962 841 NOKPROBE(__sdei_asm_entry_trampoline) ··· 972 865 NOKPROBE(__sdei_asm_exit_trampoline) 973 866 .ltorg 974 867 .popsection // .entry.tramp.text 975 - #ifdef CONFIG_RANDOMIZE_BASE 976 - .pushsection ".rodata", "a" 977 - SYM_DATA_START(__sdei_asm_trampoline_next_handler) 978 - .quad __sdei_asm_handler 979 - SYM_DATA_END(__sdei_asm_trampoline_next_handler) 980 - .popsection // .rodata 981 - #endif /* CONFIG_RANDOMIZE_BASE */ 982 868 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */ 983 869 984 870 /* ··· 1081 981 alternative_else_nop_endif 1082 982 1083 983 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 1084 - tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline 984 + tramp_alias dst=x5, sym=__sdei_asm_exit_trampoline, tmp=x3 1085 985 br x5 1086 986 #endif 1087 987 SYM_CODE_END(__sdei_asm_handler)
+4
arch/arm64/kernel/image-vars.h
··· 66 66 KVM_NVHE_ALIAS(kvm_update_va_mask); 67 67 KVM_NVHE_ALIAS(kvm_get_kimage_voffset); 68 68 KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0); 69 + KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter); 70 + KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable); 71 + KVM_NVHE_ALIAS(spectre_bhb_patch_wa3); 72 + KVM_NVHE_ALIAS(spectre_bhb_patch_clearbhb); 69 73 70 74 /* Global kernel state accessed by nVHE hyp code. */ 71 75 KVM_NVHE_ALIAS(kvm_vgic_global_state);
+386 -5
arch/arm64/kernel/proton-pack.c
··· 18 18 */ 19 19 20 20 #include <linux/arm-smccc.h> 21 + #include <linux/bpf.h> 21 22 #include <linux/cpu.h> 22 23 #include <linux/device.h> 23 24 #include <linux/nospec.h> 24 25 #include <linux/prctl.h> 25 26 #include <linux/sched/task_stack.h> 26 27 28 + #include <asm/debug-monitors.h> 27 29 #include <asm/insn.h> 28 30 #include <asm/spectre.h> 29 31 #include <asm/traps.h> 32 + #include <asm/vectors.h> 30 33 #include <asm/virt.h> 31 34 32 35 /* ··· 99 96 return ret; 100 97 } 101 98 99 + static const char *get_bhb_affected_string(enum mitigation_state bhb_state) 100 + { 101 + switch (bhb_state) { 102 + case SPECTRE_UNAFFECTED: 103 + return ""; 104 + default: 105 + case SPECTRE_VULNERABLE: 106 + return ", but not BHB"; 107 + case SPECTRE_MITIGATED: 108 + return ", BHB"; 109 + } 110 + } 111 + 112 + static bool _unprivileged_ebpf_enabled(void) 113 + { 114 + #ifdef CONFIG_BPF_SYSCALL 115 + return !sysctl_unprivileged_bpf_disabled; 116 + #else 117 + return false; 118 + #endif 119 + } 120 + 102 121 ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, 103 122 char *buf) 104 123 { 124 + enum mitigation_state bhb_state = arm64_get_spectre_bhb_state(); 125 + const char *bhb_str = get_bhb_affected_string(bhb_state); 126 + const char *v2_str = "Branch predictor hardening"; 127 + 105 128 switch (spectre_v2_state) { 106 129 case SPECTRE_UNAFFECTED: 107 - return sprintf(buf, "Not affected\n"); 130 + if (bhb_state == SPECTRE_UNAFFECTED) 131 + return sprintf(buf, "Not affected\n"); 132 + 133 + /* 134 + * Platforms affected by Spectre-BHB can't report 135 + * "Not affected" for Spectre-v2. 
136 + */ 137 + v2_str = "CSV2"; 138 + fallthrough; 108 139 case SPECTRE_MITIGATED: 109 - return sprintf(buf, "Mitigation: Branch predictor hardening\n"); 140 + if (bhb_state == SPECTRE_MITIGATED && _unprivileged_ebpf_enabled()) 141 + return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n"); 142 + 143 + return sprintf(buf, "Mitigation: %s%s\n", v2_str, bhb_str); 110 144 case SPECTRE_VULNERABLE: 111 145 fallthrough; 112 146 default: ··· 594 554 * Patch a NOP in the Spectre-v4 mitigation code with an SMC/HVC instruction 595 555 * to call into firmware to adjust the mitigation state. 596 556 */ 597 - void __init spectre_v4_patch_fw_mitigation_conduit(struct alt_instr *alt, 598 - __le32 *origptr, 599 - __le32 *updptr, int nr_inst) 557 + void __init smccc_patch_fw_mitigation_conduit(struct alt_instr *alt, 558 + __le32 *origptr, 559 + __le32 *updptr, int nr_inst) 600 560 { 601 561 u32 insn; 602 562 ··· 810 770 return -ENODEV; 811 771 } 812 772 } 773 + 774 + /* 775 + * Spectre BHB. 776 + * 777 + * A CPU is either: 778 + * - Mitigated by a branchy loop a CPU specific number of times, and listed 779 + * in our "loop mitigated list". 780 + * - Mitigated in software by the firmware Spectre v2 call. 781 + * - Has the ClearBHB instruction to perform the mitigation. 782 + * - Has the 'Exception Clears Branch History Buffer' (ECBHB) feature, so no 783 + * software mitigation in the vectors is needed. 784 + * - Has CSV2.3, so is unaffected. 785 + */ 786 + static enum mitigation_state spectre_bhb_state; 787 + 788 + enum mitigation_state arm64_get_spectre_bhb_state(void) 789 + { 790 + return spectre_bhb_state; 791 + } 792 + 793 + enum bhb_mitigation_bits { 794 + BHB_LOOP, 795 + BHB_FW, 796 + BHB_HW, 797 + BHB_INSN, 798 + }; 799 + static unsigned long system_bhb_mitigations; 800 + 801 + /* 802 + * This must be called with SCOPE_LOCAL_CPU for each type of CPU, before any 803 + * SCOPE_SYSTEM call will give the right answer. 
804 + */ 805 + u8 spectre_bhb_loop_affected(int scope) 806 + { 807 + u8 k = 0; 808 + static u8 max_bhb_k; 809 + 810 + if (scope == SCOPE_LOCAL_CPU) { 811 + static const struct midr_range spectre_bhb_k32_list[] = { 812 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A78), 813 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C), 814 + MIDR_ALL_VERSIONS(MIDR_CORTEX_X1), 815 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A710), 816 + MIDR_ALL_VERSIONS(MIDR_CORTEX_X2), 817 + MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2), 818 + MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1), 819 + {}, 820 + }; 821 + static const struct midr_range spectre_bhb_k24_list[] = { 822 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A76), 823 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A77), 824 + MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1), 825 + {}, 826 + }; 827 + static const struct midr_range spectre_bhb_k8_list[] = { 828 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A72), 829 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A57), 830 + {}, 831 + }; 832 + 833 + if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list)) 834 + k = 32; 835 + else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list)) 836 + k = 24; 837 + else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list)) 838 + k = 8; 839 + 840 + max_bhb_k = max(max_bhb_k, k); 841 + } else { 842 + k = max_bhb_k; 843 + } 844 + 845 + return k; 846 + } 847 + 848 + static enum mitigation_state spectre_bhb_get_cpu_fw_mitigation_state(void) 849 + { 850 + int ret; 851 + struct arm_smccc_res res; 852 + 853 + arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, 854 + ARM_SMCCC_ARCH_WORKAROUND_3, &res); 855 + 856 + ret = res.a0; 857 + switch (ret) { 858 + case SMCCC_RET_SUCCESS: 859 + return SPECTRE_MITIGATED; 860 + case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED: 861 + return SPECTRE_UNAFFECTED; 862 + default: 863 + fallthrough; 864 + case SMCCC_RET_NOT_SUPPORTED: 865 + return SPECTRE_VULNERABLE; 866 + } 867 + } 868 + 869 + static bool is_spectre_bhb_fw_affected(int scope) 870 + { 871 + static bool system_affected; 872 + enum mitigation_state fw_state; 
873 + bool has_smccc = arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_NONE; 874 + static const struct midr_range spectre_bhb_firmware_mitigated_list[] = { 875 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A73), 876 + MIDR_ALL_VERSIONS(MIDR_CORTEX_A75), 877 + {}, 878 + }; 879 + bool cpu_in_list = is_midr_in_range_list(read_cpuid_id(), 880 + spectre_bhb_firmware_mitigated_list); 881 + 882 + if (scope != SCOPE_LOCAL_CPU) 883 + return system_affected; 884 + 885 + fw_state = spectre_bhb_get_cpu_fw_mitigation_state(); 886 + if (cpu_in_list || (has_smccc && fw_state == SPECTRE_MITIGATED)) { 887 + system_affected = true; 888 + return true; 889 + } 890 + 891 + return false; 892 + } 893 + 894 + static bool supports_ecbhb(int scope) 895 + { 896 + u64 mmfr1; 897 + 898 + if (scope == SCOPE_LOCAL_CPU) 899 + mmfr1 = read_sysreg_s(SYS_ID_AA64MMFR1_EL1); 900 + else 901 + mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 902 + 903 + return cpuid_feature_extract_unsigned_field(mmfr1, 904 + ID_AA64MMFR1_ECBHB_SHIFT); 905 + } 906 + 907 + bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, 908 + int scope) 909 + { 910 + WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible()); 911 + 912 + if (supports_csv2p3(scope)) 913 + return false; 914 + 915 + if (supports_clearbhb(scope)) 916 + return true; 917 + 918 + if (spectre_bhb_loop_affected(scope)) 919 + return true; 920 + 921 + if (is_spectre_bhb_fw_affected(scope)) 922 + return true; 923 + 924 + return false; 925 + } 926 + 927 + static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot) 928 + { 929 + const char *v = arm64_get_bp_hardening_vector(slot); 930 + 931 + if (slot < 0) 932 + return; 933 + 934 + __this_cpu_write(this_cpu_vector, v); 935 + 936 + /* 937 + * When KPTI is in use, the vectors are switched when exiting to 938 + * user-space. 
939 + */ 940 + if (arm64_kernel_unmapped_at_el0()) 941 + return; 942 + 943 + write_sysreg(v, vbar_el1); 944 + isb(); 945 + } 946 + 947 + void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry) 948 + { 949 + bp_hardening_cb_t cpu_cb; 950 + enum mitigation_state fw_state, state = SPECTRE_VULNERABLE; 951 + struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data); 952 + 953 + if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU)) 954 + return; 955 + 956 + if (arm64_get_spectre_v2_state() == SPECTRE_VULNERABLE) { 957 + /* No point mitigating Spectre-BHB alone. */ 958 + } else if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY)) { 959 + pr_info_once("spectre-bhb mitigation disabled by compile time option\n"); 960 + } else if (cpu_mitigations_off()) { 961 + pr_info_once("spectre-bhb mitigation disabled by command line option\n"); 962 + } else if (supports_ecbhb(SCOPE_LOCAL_CPU)) { 963 + state = SPECTRE_MITIGATED; 964 + set_bit(BHB_HW, &system_bhb_mitigations); 965 + } else if (supports_clearbhb(SCOPE_LOCAL_CPU)) { 966 + /* 967 + * Ensure KVM uses the indirect vector which will have ClearBHB 968 + * added. 969 + */ 970 + if (!data->slot) 971 + data->slot = HYP_VECTOR_INDIRECT; 972 + 973 + this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN); 974 + state = SPECTRE_MITIGATED; 975 + set_bit(BHB_INSN, &system_bhb_mitigations); 976 + } else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) { 977 + /* 978 + * Ensure KVM uses the indirect vector which will have the 979 + * branchy-loop added. A57/A72-r0 will already have selected 980 + * the spectre-indirect vector, which is sufficient for BHB 981 + * too. 
982 + */ 983 + if (!data->slot) 984 + data->slot = HYP_VECTOR_INDIRECT; 985 + 986 + this_cpu_set_vectors(EL1_VECTOR_BHB_LOOP); 987 + state = SPECTRE_MITIGATED; 988 + set_bit(BHB_LOOP, &system_bhb_mitigations); 989 + } else if (is_spectre_bhb_fw_affected(SCOPE_LOCAL_CPU)) { 990 + fw_state = spectre_bhb_get_cpu_fw_mitigation_state(); 991 + if (fw_state == SPECTRE_MITIGATED) { 992 + /* 993 + * Ensure KVM uses one of the spectre bp_hardening 994 + * vectors. The indirect vector doesn't include the EL3 995 + * call, so needs upgrading to 996 + * HYP_VECTOR_SPECTRE_INDIRECT. 997 + */ 998 + if (!data->slot || data->slot == HYP_VECTOR_INDIRECT) 999 + data->slot += 1; 1000 + 1001 + this_cpu_set_vectors(EL1_VECTOR_BHB_FW); 1002 + 1003 + /* 1004 + * The WA3 call in the vectors supersedes the WA1 call 1005 + * made during context-switch. Uninstall any firmware 1006 + * bp_hardening callback. 1007 + */ 1008 + cpu_cb = spectre_v2_get_sw_mitigation_cb(); 1009 + if (__this_cpu_read(bp_hardening_data.fn) != cpu_cb) 1010 + __this_cpu_write(bp_hardening_data.fn, NULL); 1011 + 1012 + state = SPECTRE_MITIGATED; 1013 + set_bit(BHB_FW, &system_bhb_mitigations); 1014 + } 1015 + } 1016 + 1017 + update_mitigation_state(&spectre_bhb_state, state); 1018 + } 1019 + 1020 + /* Patched to NOP when enabled */ 1021 + void noinstr spectre_bhb_patch_loop_mitigation_enable(struct alt_instr *alt, 1022 + __le32 *origptr, 1023 + __le32 *updptr, int nr_inst) 1024 + { 1025 + BUG_ON(nr_inst != 1); 1026 + 1027 + if (test_bit(BHB_LOOP, &system_bhb_mitigations)) 1028 + *updptr++ = cpu_to_le32(aarch64_insn_gen_nop()); 1029 + } 1030 + 1031 + /* Patched to NOP when enabled */ 1032 + void noinstr spectre_bhb_patch_fw_mitigation_enabled(struct alt_instr *alt, 1033 + __le32 *origptr, 1034 + __le32 *updptr, int nr_inst) 1035 + { 1036 + BUG_ON(nr_inst != 1); 1037 + 1038 + if (test_bit(BHB_FW, &system_bhb_mitigations)) 1039 + *updptr++ = cpu_to_le32(aarch64_insn_gen_nop()); 1040 + } 1041 +
1042 + /* Patched to correct the immediate */ 1043 + void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt, 1044 + __le32 *origptr, __le32 *updptr, int nr_inst) 1045 + { 1046 + u8 rd; 1047 + u32 insn; 1048 + u16 loop_count = spectre_bhb_loop_affected(SCOPE_SYSTEM); 1049 + 1050 + BUG_ON(nr_inst != 1); /* MOV -> MOV */ 1051 + 1052 + if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY)) 1053 + return; 1054 + 1055 + insn = le32_to_cpu(*origptr); 1056 + rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn); 1057 + insn = aarch64_insn_gen_movewide(rd, loop_count, 0, 1058 + AARCH64_INSN_VARIANT_64BIT, 1059 + AARCH64_INSN_MOVEWIDE_ZERO); 1060 + *updptr++ = cpu_to_le32(insn); 1061 + } 1062 + 1063 + /* Patched to mov WA3 when supported */ 1064 + void noinstr spectre_bhb_patch_wa3(struct alt_instr *alt, 1065 + __le32 *origptr, __le32 *updptr, int nr_inst) 1066 + { 1067 + u8 rd; 1068 + u32 insn; 1069 + 1070 + BUG_ON(nr_inst != 1); /* MOV -> MOV */ 1071 + 1072 + if (!IS_ENABLED(CONFIG_MITIGATE_SPECTRE_BRANCH_HISTORY) || 1073 + !test_bit(BHB_FW, &system_bhb_mitigations)) 1074 + return; 1075 + 1076 + insn = le32_to_cpu(*origptr); 1077 + rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn); 1078 + 1079 + insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_ORR, 1080 + AARCH64_INSN_VARIANT_32BIT, 1081 + AARCH64_INSN_REG_ZR, rd, 1082 + ARM_SMCCC_ARCH_WORKAROUND_3); 1083 + if (WARN_ON_ONCE(insn == AARCH64_BREAK_FAULT)) 1084 + return; 1085 + 1086 + *updptr++ = cpu_to_le32(insn); 1087 + } 1088 + 1089 + /* Patched to NOP when not supported */ 1090 + void __init spectre_bhb_patch_clearbhb(struct alt_instr *alt, 1091 + __le32 *origptr, __le32 *updptr, int nr_inst) 1092 + { 1093 + BUG_ON(nr_inst != 2); 1094 + 1095 + if (test_bit(BHB_INSN, &system_bhb_mitigations)) 1096 + return; 1097 + 1098 + *updptr++ = cpu_to_le32(aarch64_insn_gen_nop()); 1099 + *updptr++ = cpu_to_le32(aarch64_insn_gen_nop()); 1100 + } 1101 + 1102 + #ifdef CONFIG_BPF_SYSCALL
1103 + #define EBPF_WARN "Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!\n" 1104 + void unpriv_ebpf_notify(int new_state) 1105 + { 1106 + if (spectre_v2_state == SPECTRE_VULNERABLE || 1107 + spectre_bhb_state != SPECTRE_MITIGATED) 1108 + return; 1109 + 1110 + if (!new_state) 1111 + pr_err("WARNING: %s", EBPF_WARN); 1112 + } 1113 + #endif
+1 -1
arch/arm64/kernel/vmlinux.lds.S
··· 341 341 <= SZ_4K, "Hibernate exit text too big or misaligned") 342 342 #endif 343 343 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 344 - ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE, 344 + ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) <= 3*PAGE_SIZE, 345 345 "Entry trampoline text too big") 346 346 #endif 347 347 #ifdef CONFIG_KVM
+1 -4
arch/arm64/kvm/arm.c
··· 1491 1491 base = kern_hyp_va(kvm_ksym_ref(__bp_harden_hyp_vecs)); 1492 1492 kvm_init_vector_slot(base, HYP_VECTOR_SPECTRE_DIRECT); 1493 1493 1494 - if (!cpus_have_const_cap(ARM64_SPECTRE_V3A)) 1495 - return 0; 1496 - 1497 - if (!has_vhe()) { 1494 + if (kvm_system_needs_idmapped_vectors() && !has_vhe()) { 1498 1495 err = create_hyp_exec_mappings(__pa_symbol(__bp_harden_hyp_vecs), 1499 1496 __BP_HARDEN_HYP_VECS_SZ, &base); 1500 1497 if (err)
+9
arch/arm64/kvm/hyp/hyp-entry.S
··· 62 62 /* ARM_SMCCC_ARCH_WORKAROUND_2 handling */ 63 63 eor w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_1 ^ \ 64 64 ARM_SMCCC_ARCH_WORKAROUND_2) 65 + cbz w1, wa_epilogue 66 + 67 + eor w1, w1, #(ARM_SMCCC_ARCH_WORKAROUND_2 ^ \ 68 + ARM_SMCCC_ARCH_WORKAROUND_3) 65 69 cbnz w1, el1_trap 66 70 67 71 wa_epilogue: ··· 196 192 sub sp, sp, #(8 * 4) 197 193 stp x2, x3, [sp, #(8 * 0)] 198 194 stp x0, x1, [sp, #(8 * 2)] 195 + alternative_cb spectre_bhb_patch_wa3 196 + /* Patched to mov WA3 when supported */ 199 197 mov w0, #ARM_SMCCC_ARCH_WORKAROUND_1 198 + alternative_cb_end 200 199 smc #0 201 200 ldp x2, x3, [sp, #(8 * 0)] 202 201 add sp, sp, #(8 * 2) ··· 212 205 spectrev2_smccc_wa1_smc 213 206 .else 214 207 stp x0, x1, [sp, #-16]! 208 + mitigate_spectre_bhb_loop x0 209 + mitigate_spectre_bhb_clear_insn 215 210 .endif 216 211 .if \indirect != 0 217 212 alternative_cb kvm_patch_vector_branch
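The `eor`/`cbz` chain added to hyp-entry.S above compares the incoming SMCCC function ID against successive workaround constants without reloading the register: each `eor` folds in the XOR-delta between the previous constant and the next one, so the register reads zero exactly when the ID matches the constant just tested. A rough C sketch of the idea (the `WA*` values here are placeholders, not the real `ARM_SMCCC_ARCH_WORKAROUND_*` IDs):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Placeholder IDs; the real ARM_SMCCC_ARCH_WORKAROUND_* values live in
 * include/linux/arm-smccc.h. */
#define WA1 0x80008000u
#define WA2 0x80007fffu
#define WA3 0x80003fffu

/* Mirror of the eor/cbz chain: w1 starts as (id ^ WA1); XOR-ing in
 * (WA1 ^ WA2) turns it into (id ^ WA2), so a zero value after each
 * step means the ID matched that constant. */
static const char *classify(uint32_t id)
{
	uint32_t w1 = id ^ WA1;		/* first comparison */

	if (w1 == 0)
		return "WA1";
	w1 ^= WA1 ^ WA2;		/* now zero iff id == WA2 */
	if (w1 == 0)
		return "WA2";
	w1 ^= WA2 ^ WA3;		/* now zero iff id == WA3 */
	if (w1 == 0)
		return "WA3";
	return "el1_trap";		/* the cbnz el1_trap path */
}
```

The payoff in the assembly is that only one scratch register is needed for the whole cascade, which matters in the tightly constrained vector entry code.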
+3 -1
arch/arm64/kvm/hyp/nvhe/mm.c
··· 148 148 phys_addr_t phys; 149 149 void *bp_base; 150 150 151 - if (!cpus_have_const_cap(ARM64_SPECTRE_V3A)) 151 + if (!kvm_system_needs_idmapped_vectors()) { 152 + __hyp_bp_vect_base = __bp_harden_hyp_vecs; 152 153 return 0; 154 + } 153 155 154 156 phys = __hyp_pa(__bp_harden_hyp_vecs); 155 157 bp_base = (void *)__pkvm_create_private_mapping(phys,
+8 -2
arch/arm64/kvm/hyp/vhe/switch.c
··· 10 10 #include <linux/kvm_host.h> 11 11 #include <linux/types.h> 12 12 #include <linux/jump_label.h> 13 + #include <linux/percpu.h> 13 14 #include <uapi/linux/psci.h> 14 15 15 16 #include <kvm/arm_psci.h> ··· 25 24 #include <asm/fpsimd.h> 26 25 #include <asm/debug-monitors.h> 27 26 #include <asm/processor.h> 27 + #include <asm/thread_info.h> 28 + #include <asm/vectors.h> 28 29 29 30 /* VHE specific context */ 30 31 DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data); ··· 70 67 71 68 static void __deactivate_traps(struct kvm_vcpu *vcpu) 72 69 { 73 - extern char vectors[]; /* kernel exception vectors */ 70 + const char *host_vectors = vectors; 74 71 75 72 ___deactivate_traps(vcpu); 76 73 ··· 84 81 asm(ALTERNATIVE("nop", "isb", ARM64_WORKAROUND_SPECULATIVE_AT)); 85 82 86 83 write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1); 87 - write_sysreg(vectors, vbar_el1); 84 + 85 + if (!arm64_kernel_unmapped_at_el0()) 86 + host_vectors = __this_cpu_read(this_cpu_vector); 87 + write_sysreg(host_vectors, vbar_el1); 88 88 } 89 89 NOKPROBE_SYMBOL(__deactivate_traps); 90 90
+12
arch/arm64/kvm/hypercalls.c
··· 107 107 break; 108 108 } 109 109 break; 110 + case ARM_SMCCC_ARCH_WORKAROUND_3: 111 + switch (arm64_get_spectre_bhb_state()) { 112 + case SPECTRE_VULNERABLE: 113 + break; 114 + case SPECTRE_MITIGATED: 115 + val[0] = SMCCC_RET_SUCCESS; 116 + break; 117 + case SPECTRE_UNAFFECTED: 118 + val[0] = SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED; 119 + break; 120 + } 121 + break; 110 122 case ARM_SMCCC_HV_PV_TIME_FEATURES: 111 123 val[0] = SMCCC_RET_SUCCESS; 112 124 break;
+18 -3
arch/arm64/kvm/psci.c
··· 46 46 * specification (ARM DEN 0022A). This means all suspend states 47 47 * for KVM will preserve the register state. 48 48 */ 49 - kvm_vcpu_halt(vcpu); 50 - kvm_clear_request(KVM_REQ_UNHALT, vcpu); 49 + kvm_vcpu_wfi(vcpu); 51 50 52 51 return PSCI_RET_SUCCESS; 53 52 } ··· 405 406 406 407 int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu) 407 408 { 408 - return 3; /* PSCI version and two workaround registers */ 409 + return 4; /* PSCI version and three workaround registers */ 409 410 } 410 411 411 412 int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices) ··· 417 418 return -EFAULT; 418 419 419 420 if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++)) 421 + return -EFAULT; 422 + 423 + if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3, uindices++)) 420 424 return -EFAULT; 421 425 422 426 return 0; ··· 461 459 case SPECTRE_VULNERABLE: 462 460 return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL; 463 461 } 462 + break; 463 + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3: 464 + switch (arm64_get_spectre_bhb_state()) { 465 + case SPECTRE_VULNERABLE: 466 + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL; 467 + case SPECTRE_MITIGATED: 468 + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL; 469 + case SPECTRE_UNAFFECTED: 470 + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED; 471 + } 472 + return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL; 464 473 } 465 474 466 475 return -EINVAL; ··· 488 475 break; 489 476 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: 490 477 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2: 478 + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3: 491 479 val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK; 492 480 break; 493 481 default: ··· 534 520 } 535 521 536 522 case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1: 523 + case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3: 537 524 if (val & ~KVM_REG_FEATURE_LEVEL_MASK) 538 525 return -EINVAL; 539 526
+17
arch/arm64/mm/mmap.c
··· 7 7 8 8 #include <linux/io.h> 9 9 #include <linux/memblock.h> 10 + #include <linux/mm.h> 10 11 #include <linux/types.h> 11 12 13 + #include <asm/cpufeature.h> 12 14 #include <asm/page.h> 13 15 14 16 /* ··· 40 38 { 41 39 return !(((pfn << PAGE_SHIFT) + size) & ~PHYS_MASK); 42 40 } 41 + 42 + static int __init adjust_protection_map(void) 43 + { 44 + /* 45 + * With Enhanced PAN we can honour the execute-only permissions as 46 + * there is no PAN override with such mappings. 47 + */ 48 + if (cpus_have_const_cap(ARM64_HAS_EPAN)) { 49 + protection_map[VM_EXEC] = PAGE_EXECONLY; 50 + protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY; 51 + } 52 + 53 + return 0; 54 + } 55 + arch_initcall(adjust_protection_map);
+9 -3
arch/arm64/mm/mmu.c
··· 617 617 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0 618 618 static int __init map_entry_trampoline(void) 619 619 { 620 + int i; 621 + 620 622 pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC; 621 623 phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start); 622 624 ··· 627 625 628 626 /* Map only the text into the trampoline page table */ 629 627 memset(tramp_pg_dir, 0, PGD_SIZE); 630 - __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE, 631 - prot, __pgd_pgtable_alloc, 0); 628 + __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, 629 + entry_tramp_text_size(), prot, 630 + __pgd_pgtable_alloc, NO_BLOCK_MAPPINGS); 632 631 633 632 /* Map both the text and data into the kernel page table */ 634 - __set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot); 633 + for (i = 0; i < DIV_ROUND_UP(entry_tramp_text_size(), PAGE_SIZE); i++) 634 + __set_fixmap(FIX_ENTRY_TRAMP_TEXT1 - i, 635 + pa_start + i * PAGE_SIZE, prot); 636 + 635 637 if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) { 636 638 extern char __entry_tramp_data_start[]; 637 639
+1
arch/arm64/tools/cpucaps
··· 44 44 SPECTRE_V2 45 45 SPECTRE_V3A 46 46 SPECTRE_V4 47 + SPECTRE_BHB 47 48 SSBS 48 49 SVE 49 50 UNMAP_KERNEL_AT_EL0
+1 -1
arch/mips/kernel/setup.c
··· 803 803 804 804 static int __init setnocoherentio(char *str) 805 805 { 806 - dma_default_coherent = true; 806 + dma_default_coherent = false; 807 807 pr_info("Software DMA cache coherency (command line)\n"); 808 808 return 0; 809 809 }
+3 -3
arch/mips/kernel/smp.c
··· 351 351 cpu = smp_processor_id(); 352 352 cpu_data[cpu].udelay_val = loops_per_jiffy; 353 353 354 + set_cpu_sibling_map(cpu); 355 + set_cpu_core_map(cpu); 356 + 354 357 cpumask_set_cpu(cpu, &cpu_coherent_mask); 355 358 notify_cpu_starting(cpu); 356 359 ··· 364 361 365 362 /* The CPU is running and counters synchronised, now mark it online */ 366 363 set_cpu_online(cpu, true); 367 - 368 - set_cpu_sibling_map(cpu); 369 - set_cpu_core_map(cpu); 370 364 371 365 calculate_cpu_foreign_map(); 372 366
+23 -13
arch/mips/ralink/mt7621.c
··· 22 22 23 23 #include "common.h" 24 24 25 - static void *detect_magic __initdata = detect_memory_region; 25 + #define MT7621_MEM_TEST_PATTERN 0xaa5555aa 26 + 27 + static u32 detect_magic __initdata; 26 28 27 29 int pcibios_root_bridge_prepare(struct pci_host_bridge *bridge) 28 30 { ··· 60 58 panic("Cannot detect cpc address"); 61 59 } 62 60 61 + static bool __init mt7621_addr_wraparound_test(phys_addr_t size) 62 + { 63 + void *dm = (void *)KSEG1ADDR(&detect_magic); 64 + 65 + if (CPHYSADDR(dm + size) >= MT7621_LOWMEM_MAX_SIZE) 66 + return true; 67 + __raw_writel(MT7621_MEM_TEST_PATTERN, dm); 68 + if (__raw_readl(dm) != __raw_readl(dm + size)) 69 + return false; 70 + __raw_writel(~MT7621_MEM_TEST_PATTERN, dm); 71 + return __raw_readl(dm) == __raw_readl(dm + size); 72 + } 73 + 63 74 static void __init mt7621_memory_detect(void) 64 75 { 65 - void *dm = &detect_magic; 66 76 phys_addr_t size; 67 77 68 - for (size = 32 * SZ_1M; size < 256 * SZ_1M; size <<= 1) { 69 - if (!__builtin_memcmp(dm, dm + size, sizeof(detect_magic))) 70 - break; 78 + for (size = 32 * SZ_1M; size <= 256 * SZ_1M; size <<= 1) { 79 + if (mt7621_addr_wraparound_test(size)) { 80 + memblock_add(MT7621_LOWMEM_BASE, size); 81 + return; 82 + } 71 83 } 72 84 73 - if ((size == 256 * SZ_1M) && 74 - (CPHYSADDR(dm + size) < MT7621_LOWMEM_MAX_SIZE) && 75 - __builtin_memcmp(dm, dm + size, sizeof(detect_magic))) { 76 - memblock_add(MT7621_LOWMEM_BASE, MT7621_LOWMEM_MAX_SIZE); 77 - memblock_add(MT7621_HIGHMEM_BASE, MT7621_HIGHMEM_SIZE); 78 - } else { 79 - memblock_add(MT7621_LOWMEM_BASE, size); 80 - } 85 + memblock_add(MT7621_LOWMEM_BASE, MT7621_LOWMEM_MAX_SIZE); 86 + memblock_add(MT7621_HIGHMEM_BASE, MT7621_HIGHMEM_SIZE); 81 87 } 82 88 83 89 void __init ralink_of_remap(void)
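The reworked `mt7621_memory_detect()` above probes candidate sizes by writing a pattern at a base address and checking whether it reappears `size` bytes higher, where the MT7621's address space wraps around above real DRAM; it then re-checks with the complemented pattern so a stale match can't fool it. A self-contained toy model of that double-probe, simulating wraparound with a modulo instead of real hardware aliasing (all names and sizes here are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TEST_PATTERN	0xaa5555aau
#define PHYS_WORDS	(64u * 1024)	/* simulated backing store */

/* Toy model: "physical" memory that aliases every `ram` words, standing
 * in for the SoC's address wraparound above the end of real DRAM. */
static uint32_t phys[PHYS_WORDS];

static uint32_t rd(size_t addr, size_t ram) { return phys[addr % ram]; }
static void wr(size_t addr, size_t ram, uint32_t v) { phys[addr % ram] = v; }

/* Same double write/readback as the kernel helper: a candidate size is
 * only accepted when base and base+size alias for both the pattern and
 * its complement. */
static int wraps_at(size_t size, size_t ram)
{
	wr(0, ram, TEST_PATTERN);
	if (rd(0, ram) != rd(size, ram))
		return 0;
	wr(0, ram, ~TEST_PATTERN);
	return rd(0, ram) == rd(size, ram);
}

static size_t detect_ram(size_t ram)
{
	size_t size;

	for (size = 1024; size <= PHYS_WORDS; size <<= 1)
		if (wraps_at(size, ram))
			return size;
	return PHYS_WORDS;	/* no wrap seen: assume the maximum */
}
```

This also shows why the loop's bound changed from `<` to `<=` in the patch: the largest supported size must itself be probed, with the fall-through case claiming highmem as well.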
+1 -1
arch/powerpc/include/asm/book3s/64/mmu.h
··· 202 202 /* 203 203 * The current system page and segment sizes 204 204 */ 205 - extern int mmu_linear_psize; 206 205 extern int mmu_virtual_psize; 207 206 extern int mmu_vmalloc_psize; 208 207 extern int mmu_io_psize; ··· 212 213 #define mmu_virtual_psize MMU_PAGE_4K 213 214 #endif 214 215 #endif 216 + extern int mmu_linear_psize; 215 217 extern int mmu_vmemmap_psize; 216 218 217 219 /* MMU initialization */
+1 -1
arch/powerpc/include/asm/kexec_ranges.h
··· 9 9 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size); 10 10 int add_tce_mem_ranges(struct crash_mem **mem_ranges); 11 11 int add_initrd_mem_range(struct crash_mem **mem_ranges); 12 - #ifdef CONFIG_PPC_BOOK3S_64 12 + #ifdef CONFIG_PPC_64S_HASH_MMU 13 13 int add_htab_mem_range(struct crash_mem **mem_ranges); 14 14 #else 15 15 static inline int add_htab_mem_range(struct crash_mem **mem_ranges)
+1 -1
arch/powerpc/include/asm/nmi.h
··· 9 9 static inline void arch_touch_nmi_watchdog(void) {} 10 10 #endif 11 11 12 - #if defined(CONFIG_NMI_IPI) && defined(CONFIG_STACKTRACE) 12 + #ifdef CONFIG_NMI_IPI 13 13 extern void arch_trigger_cpumask_backtrace(const cpumask_t *mask, 14 14 bool exclude_self); 15 15 #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+1
arch/riscv/Kconfig.erratas
··· 2 2 3 3 config RISCV_ERRATA_ALTERNATIVE 4 4 bool "RISC-V alternative scheme" 5 + depends on !XIP_KERNEL 5 6 default y 6 7 help 7 8 This Kconfig allows the kernel to automatically patch the
+2 -2
arch/riscv/Kconfig.socs
··· 14 14 select CLK_SIFIVE 15 15 select CLK_SIFIVE_PRCI 16 16 select SIFIVE_PLIC 17 - select RISCV_ERRATA_ALTERNATIVE 18 - select ERRATA_SIFIVE 17 + select RISCV_ERRATA_ALTERNATIVE if !XIP_KERNEL 18 + select ERRATA_SIFIVE if !XIP_KERNEL 19 19 help 20 20 This enables support for SiFive SoC platform hardware. 21 21
+2 -1
arch/riscv/boot/dts/canaan/k210.dtsi
··· 113 113 compatible = "canaan,k210-plic", "sifive,plic-1.0.0"; 114 114 reg = <0xC000000 0x4000000>; 115 115 interrupt-controller; 116 - interrupts-extended = <&cpu0_intc 11>, <&cpu1_intc 11>; 116 + interrupts-extended = <&cpu0_intc 11>, <&cpu0_intc 9>, 117 + <&cpu1_intc 11>, <&cpu1_intc 9>; 117 118 riscv,ndev = <65>; 118 119 }; 119 120
+1 -1
arch/riscv/include/asm/page.h
··· 119 119 ((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size)) 120 120 121 121 #define is_linear_mapping(x) \ 122 - ((x) >= PAGE_OFFSET && (!IS_ENABLED(CONFIG_64BIT) || (x) < kernel_map.virt_addr)) 122 + ((x) >= PAGE_OFFSET && (!IS_ENABLED(CONFIG_64BIT) || (x) < PAGE_OFFSET + KERN_VIRT_SIZE)) 123 123 124 124 #define linear_mapping_pa_to_va(x) ((void *)((unsigned long)(x) + kernel_map.va_pa_offset)) 125 125 #define kernel_mapping_pa_to_va(y) ({ \
+1
arch/riscv/include/asm/pgtable.h
··· 13 13 14 14 #ifndef CONFIG_MMU 15 15 #define KERNEL_LINK_ADDR PAGE_OFFSET 16 + #define KERN_VIRT_SIZE (UL(-1)) 16 17 #else 17 18 18 19 #define ADDRESS_SPACE_END (UL(-1))
+16 -5
arch/riscv/kernel/module.c
··· 13 13 #include <linux/pgtable.h> 14 14 #include <asm/sections.h> 15 15 16 + /* 17 + * The auipc+jalr instruction pair can reach any PC-relative offset 18 + * in the range [-2^31 - 2^11, 2^31 - 2^11) 19 + */ 20 + static bool riscv_insn_valid_32bit_offset(ptrdiff_t val) 21 + { 22 + #ifdef CONFIG_32BIT 23 + return true; 24 + #else 25 + return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11)); 26 + #endif 27 + } 28 + 16 29 static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v) 17 30 { 18 31 if (v != (u32)v) { ··· 108 95 ptrdiff_t offset = (void *)v - (void *)location; 109 96 s32 hi20; 110 97 111 - if (offset != (s32)offset) { 98 + if (!riscv_insn_valid_32bit_offset(offset)) { 112 99 pr_err( 113 100 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 114 101 me->name, (long long)v, location); ··· 210 197 Elf_Addr v) 211 198 { 212 199 ptrdiff_t offset = (void *)v - (void *)location; 213 - s32 fill_v = offset; 214 200 u32 hi20, lo12; 215 201 216 - if (offset != fill_v) { 202 + if (!riscv_insn_valid_32bit_offset(offset)) { 217 203 /* Only emit the plt entry if offset over 32-bit range */ 218 204 if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) { 219 205 offset = module_emit_plt_entry(me, v); ··· 236 224 Elf_Addr v) 237 225 { 238 226 ptrdiff_t offset = (void *)v - (void *)location; 239 - s32 fill_v = offset; 240 227 u32 hi20, lo12; 241 228 242 - if (offset != fill_v) { 229 + if (!riscv_insn_valid_32bit_offset(offset)) { 243 230 pr_err( 244 231 "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n", 245 232 me->name, (long long)v, location);
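The new `riscv_insn_valid_32bit_offset()` helper above encodes a subtlety of the `auipc`+`jalr` pair: because `jalr`'s low 12 bits are sign-extended, the reachable PC-relative window is shifted down by 2^11 from a plain signed 32-bit range. A standalone sketch of the same check, with boundary values spelled out:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* auipc supplies a signed upper-20 immediate and jalr a signed lower-12
 * immediate; since the low part is sign-extended, the combined reach is
 * [-2^31 - 2^11, 2^31 - 2^11), not [-2^31, 2^31). */
static bool valid_32bit_offset(int64_t val)
{
	return (-(1LL << 31) - (1LL << 11)) <= val &&
	       val < ((1LL << 31) - (1LL << 11));
}
```

The patch routes all three PC-relative relocation handlers through this one predicate, replacing the earlier `offset != (s32)offset` test that wrongly accepted offsets in `[2^31 - 2^11, 2^31)`.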
+3
arch/riscv/mm/Makefile
··· 24 24 ifdef CONFIG_KASAN 25 25 KASAN_SANITIZE_kasan_init.o := n 26 26 KASAN_SANITIZE_init.o := n 27 + ifdef CONFIG_DEBUG_VIRTUAL 28 + KASAN_SANITIZE_physaddr.o := n 29 + endif 27 30 endif 28 31 29 32 obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
+1 -1
arch/riscv/mm/init.c
··· 125 125 else 126 126 swiotlb_force = SWIOTLB_NO_FORCE; 127 127 #endif 128 - high_memory = (void *)(__va(PFN_PHYS(max_low_pfn))); 129 128 memblock_free_all(); 130 129 131 130 print_vm_layout(); ··· 194 195 195 196 min_low_pfn = PFN_UP(phys_ram_base); 196 197 max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end); 198 + high_memory = (void *)(__va(PFN_PHYS(max_low_pfn))); 197 199 198 200 dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn)); 199 201 set_max_mapnr(max_low_pfn - ARCH_PFN_OFFSET);
+5 -3
arch/riscv/mm/kasan_init.c
··· 113 113 base_pud = pt_ops.get_pud_virt(pfn_to_phys(_pgd_pfn(*pgd))); 114 114 } else { 115 115 base_pud = (pud_t *)pgd_page_vaddr(*pgd); 116 - if (base_pud == lm_alias(kasan_early_shadow_pud)) 116 + if (base_pud == lm_alias(kasan_early_shadow_pud)) { 117 117 base_pud = memblock_alloc(PTRS_PER_PUD * sizeof(pud_t), PAGE_SIZE); 118 + memcpy(base_pud, (void *)kasan_early_shadow_pud, 119 + sizeof(pud_t) * PTRS_PER_PUD); 120 + } 118 121 } 119 122 120 123 pudp = base_pud + pud_index(vaddr); ··· 205 202 206 203 for (i = 0; i < PTRS_PER_PTE; ++i) 207 204 set_pte(kasan_early_shadow_pte + i, 208 - mk_pte(virt_to_page(kasan_early_shadow_page), 209 - PAGE_KERNEL)); 205 + pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL)); 210 206 211 207 for (i = 0; i < PTRS_PER_PMD; ++i) 212 208 set_pmd(kasan_early_shadow_pmd + i,
+1 -3
arch/riscv/mm/physaddr.c
··· 8 8 9 9 phys_addr_t __virt_to_phys(unsigned long x) 10 10 { 11 - phys_addr_t y = x - PAGE_OFFSET; 12 - 13 11 /* 14 12 * Boundary checking aginst the kernel linear mapping space. 15 13 */ 16 - WARN(y >= KERN_VIRT_SIZE, 14 + WARN(!is_linear_mapping(x) && !is_kernel_mapping(x), 17 15 "virt_to_phys used for non-linear address: %pK (%pS)\n", 18 16 (void *)x, (void *)x); 19 17
+7 -2
arch/s390/include/asm/extable.h
··· 69 69 { 70 70 a->fixup = b->fixup + delta; 71 71 b->fixup = tmp.fixup - delta; 72 - a->handler = b->handler + delta; 73 - b->handler = tmp.handler - delta; 72 + a->handler = b->handler; 73 + if (a->handler) 74 + a->handler += delta; 75 + b->handler = tmp.handler; 76 + if (b->handler) 77 + b->handler -= delta; 74 78 } 79 + #define swap_ex_entry_fixup swap_ex_entry_fixup 75 80 76 81 #endif
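The s390 `swap_ex_entry_fixup()` fix above guards the handler rebase: the fields are self-relative offsets, so moving an entry by `delta` bytes must adjust each encoded value by `delta`, except that a zero handler is the "no handler" sentinel and must stay zero. A minimal sketch of that invariant (simplified signature; the kernel version also takes the saved `tmp` entry as a parameter):

```c
#include <assert.h>

struct ex_entry {
	int fixup;
	int handler;	/* 0 means "no handler", never rebased */
};

/* Swap two self-relative extable entries sitting `delta` bytes apart.
 * A relative field encodes target - field_address, so relocating an
 * entry by delta shifts the encoded value by delta, but the zero
 * sentinel must be preserved as-is. */
static void swap_ex(struct ex_entry *a, struct ex_entry *b, int delta)
{
	struct ex_entry tmp = *a;

	a->fixup = b->fixup + delta;
	b->fixup = tmp.fixup - delta;
	a->handler = b->handler;
	if (a->handler)
		a->handler += delta;
	b->handler = tmp.handler;
	if (b->handler)
		b->handler -= delta;
}
```

Without the zero check, sorting the exception table would turn a "no handler" entry into a bogus relative pointer, which is exactly the bug the unconditional `+ delta` in the old code introduced.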
+6 -4
arch/s390/include/asm/ftrace.h
··· 47 47 48 48 static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs) 49 49 { 50 - return &fregs->regs; 50 + struct pt_regs *regs = &fregs->regs; 51 + 52 + if (test_pt_regs_flag(regs, PIF_FTRACE_FULL_REGS)) 53 + return regs; 54 + return NULL; 51 55 } 52 56 53 57 static __always_inline void ftrace_instruction_pointer_set(struct ftrace_regs *fregs, 54 58 unsigned long ip) 55 59 { 56 - struct pt_regs *regs = arch_ftrace_get_regs(fregs); 57 - 58 - regs->psw.addr = ip; 60 + fregs->regs.psw.addr = ip; 59 61 } 60 62 61 63 /*
+2
arch/s390/include/asm/ptrace.h
··· 15 15 #define PIF_EXECVE_PGSTE_RESTART 1 /* restart execve for PGSTE binaries */ 16 16 #define PIF_SYSCALL_RET_SET 2 /* return value was set via ptrace */ 17 17 #define PIF_GUEST_FAULT 3 /* indicates program check in sie64a */ 18 + #define PIF_FTRACE_FULL_REGS 4 /* all register contents valid (ftrace) */ 18 19 19 20 #define _PIF_SYSCALL BIT(PIF_SYSCALL) 20 21 #define _PIF_EXECVE_PGSTE_RESTART BIT(PIF_EXECVE_PGSTE_RESTART) 21 22 #define _PIF_SYSCALL_RET_SET BIT(PIF_SYSCALL_RET_SET) 22 23 #define _PIF_GUEST_FAULT BIT(PIF_GUEST_FAULT) 24 + #define _PIF_FTRACE_FULL_REGS BIT(PIF_FTRACE_FULL_REGS) 23 25 24 26 #ifndef __ASSEMBLY__ 25 27
+36 -1
arch/s390/kernel/ftrace.c
··· 159 159 return 0; 160 160 } 161 161 162 + static struct ftrace_hotpatch_trampoline *ftrace_get_trampoline(struct dyn_ftrace *rec) 163 + { 164 + struct ftrace_hotpatch_trampoline *trampoline; 165 + struct ftrace_insn insn; 166 + s64 disp; 167 + u16 opc; 168 + 169 + if (copy_from_kernel_nofault(&insn, (void *)rec->ip, sizeof(insn))) 170 + return ERR_PTR(-EFAULT); 171 + disp = (s64)insn.disp * 2; 172 + trampoline = (void *)(rec->ip + disp); 173 + if (get_kernel_nofault(opc, &trampoline->brasl_opc)) 174 + return ERR_PTR(-EFAULT); 175 + if (opc != 0xc015) 176 + return ERR_PTR(-EINVAL); 177 + return trampoline; 178 + } 179 + 162 180 int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 163 181 unsigned long addr) 164 182 { 183 + struct ftrace_hotpatch_trampoline *trampoline; 184 + u64 old; 185 + 186 + trampoline = ftrace_get_trampoline(rec); 187 + if (IS_ERR(trampoline)) 188 + return PTR_ERR(trampoline); 189 + if (get_kernel_nofault(old, &trampoline->interceptor)) 190 + return -EFAULT; 191 + if (old != old_addr) 192 + return -EINVAL; 193 + s390_kernel_write(&trampoline->interceptor, &addr, sizeof(addr)); 165 194 return 0; 166 195 } 167 196 ··· 217 188 218 189 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 219 190 { 191 + struct ftrace_hotpatch_trampoline *trampoline; 192 + 193 + trampoline = ftrace_get_trampoline(rec); 194 + if (IS_ERR(trampoline)) 195 + return PTR_ERR(trampoline); 196 + s390_kernel_write(&trampoline->interceptor, &addr, sizeof(addr)); 220 197 brcl_enable((void *)rec->ip); 221 198 return 0; 222 199 } ··· 326 291 327 292 regs = ftrace_get_regs(fregs); 328 293 p = get_kprobe((kprobe_opcode_t *)ip); 329 - if (unlikely(!p) || kprobe_disabled(p)) 294 + if (!regs || unlikely(!p) || kprobe_disabled(p)) 330 295 goto out; 331 296 332 297 if (kprobe_running()) {
+9
arch/s390/kernel/mcount.S
··· 27 27 #define STACK_PTREGS_GPRS (STACK_PTREGS + __PT_GPRS) 28 28 #define STACK_PTREGS_PSW (STACK_PTREGS + __PT_PSW) 29 29 #define STACK_PTREGS_ORIG_GPR2 (STACK_PTREGS + __PT_ORIG_GPR2) 30 + #define STACK_PTREGS_FLAGS (STACK_PTREGS + __PT_FLAGS) 30 31 #ifdef __PACK_STACK 31 32 /* allocate just enough for r14, r15 and backchain */ 32 33 #define TRACED_FUNC_FRAME_SIZE 24 ··· 58 57 .if \allregs == 1 59 58 stg %r14,(STACK_PTREGS_PSW)(%r15) 60 59 stosm (STACK_PTREGS_PSW)(%r15),0 60 + #ifdef CONFIG_HAVE_MARCH_Z10_FEATURES 61 + mvghi STACK_PTREGS_FLAGS(%r15),_PIF_FTRACE_FULL_REGS 62 + #else 63 + lghi %r14,_PIF_FTRACE_FULL_REGS 64 + stg %r14,STACK_PTREGS_FLAGS(%r15) 65 + #endif 66 + .else 67 + xc STACK_PTREGS_FLAGS(8,%r15),STACK_PTREGS_FLAGS(%r15) 61 68 .endif 62 69 63 70 lg %r14,(__SF_GPRS+8*8)(%r1) # restore original return address
+2
arch/s390/kernel/setup.c
··· 800 800 static void __init reserve_kernel(void) 801 801 { 802 802 memblock_reserve(0, STARTUP_NORMAL_OFFSET); 803 + memblock_reserve(OLDMEM_BASE, sizeof(unsigned long)); 804 + memblock_reserve(OLDMEM_SIZE, sizeof(unsigned long)); 803 805 memblock_reserve(__amode31_base, __eamode31 - __samode31); 804 806 memblock_reserve(__pa(sclp_early_sccb), EXT_SCCB_READ_SCP); 805 807 memblock_reserve(__pa(_stext), _end - _stext);
+1 -1
arch/x86/include/asm/cpufeatures.h
··· 204 204 /* FREE! ( 7*32+10) */ 205 205 #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ 206 206 #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ 207 - #define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */ 207 + #define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */ 208 208 #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ 209 209 #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ 210 210 #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+9 -7
arch/x86/include/asm/nospec-branch.h
··· 84 84 #ifdef CONFIG_RETPOLINE 85 85 ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ 86 86 __stringify(jmp __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \ 87 - __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_AMD 87 + __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), X86_FEATURE_RETPOLINE_LFENCE 88 88 #else 89 89 jmp *%\reg 90 90 #endif ··· 94 94 #ifdef CONFIG_RETPOLINE 95 95 ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *%\reg), \ 96 96 __stringify(call __x86_indirect_thunk_\reg), X86_FEATURE_RETPOLINE, \ 97 - __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_AMD 97 + __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *%\reg), X86_FEATURE_RETPOLINE_LFENCE 98 98 #else 99 99 call *%\reg 100 100 #endif ··· 146 146 "lfence;\n" \ 147 147 ANNOTATE_RETPOLINE_SAFE \ 148 148 "call *%[thunk_target]\n", \ 149 - X86_FEATURE_RETPOLINE_AMD) 149 + X86_FEATURE_RETPOLINE_LFENCE) 150 150 151 151 # define THUNK_TARGET(addr) [thunk_target] "r" (addr) 152 152 ··· 176 176 "lfence;\n" \ 177 177 ANNOTATE_RETPOLINE_SAFE \ 178 178 "call *%[thunk_target]\n", \ 179 - X86_FEATURE_RETPOLINE_AMD) 179 + X86_FEATURE_RETPOLINE_LFENCE) 180 180 181 181 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) 182 182 #endif ··· 188 188 /* The Spectre V2 mitigation variants */ 189 189 enum spectre_v2_mitigation { 190 190 SPECTRE_V2_NONE, 191 - SPECTRE_V2_RETPOLINE_GENERIC, 192 - SPECTRE_V2_RETPOLINE_AMD, 193 - SPECTRE_V2_IBRS_ENHANCED, 191 + SPECTRE_V2_RETPOLINE, 192 + SPECTRE_V2_LFENCE, 193 + SPECTRE_V2_EIBRS, 194 + SPECTRE_V2_EIBRS_RETPOLINE, 195 + SPECTRE_V2_EIBRS_LFENCE, 194 196 }; 195 197 196 198 /* The indirect branch speculation control variants */
+4 -4
arch/x86/kernel/alternative.c
··· 389 389 * 390 390 * CALL *%\reg 391 391 * 392 - * It also tries to inline spectre_v2=retpoline,amd when size permits. 392 + * It also tries to inline spectre_v2=retpoline,lfence when size permits. 393 393 */ 394 394 static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes) 395 395 { ··· 407 407 BUG_ON(reg == 4); 408 408 409 409 if (cpu_feature_enabled(X86_FEATURE_RETPOLINE) && 410 - !cpu_feature_enabled(X86_FEATURE_RETPOLINE_AMD)) 410 + !cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) 411 411 return -1; 412 412 413 413 op = insn->opcode.bytes[0]; ··· 438 438 } 439 439 440 440 /* 441 - * For RETPOLINE_AMD: prepend the indirect CALL/JMP with an LFENCE. 441 + * For RETPOLINE_LFENCE: prepend the indirect CALL/JMP with an LFENCE. 442 442 */ 443 - if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_AMD)) { 443 + if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) { 444 444 bytes[i++] = 0x0f; 445 445 bytes[i++] = 0xae; 446 446 bytes[i++] = 0xe8; /* LFENCE */
+157 -49
arch/x86/kernel/cpu/bugs.c
··· 16 16 #include <linux/prctl.h> 17 17 #include <linux/sched/smt.h> 18 18 #include <linux/pgtable.h> 19 + #include <linux/bpf.h> 19 20 20 21 #include <asm/spec-ctrl.h> 21 22 #include <asm/cmdline.h> ··· 651 650 static inline const char *spectre_v2_module_string(void) { return ""; } 652 651 #endif 653 652 653 + #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n" 654 + #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n" 655 + #define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n" 656 + 657 + #ifdef CONFIG_BPF_SYSCALL 658 + void unpriv_ebpf_notify(int new_state) 659 + { 660 + if (new_state) 661 + return; 662 + 663 + /* Unprivileged eBPF is enabled */ 664 + 665 + switch (spectre_v2_enabled) { 666 + case SPECTRE_V2_EIBRS: 667 + pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); 668 + break; 669 + case SPECTRE_V2_EIBRS_LFENCE: 670 + if (sched_smt_active()) 671 + pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); 672 + break; 673 + default: 674 + break; 675 + } 676 + } 677 + #endif 678 + 654 679 static inline bool match_option(const char *arg, int arglen, const char *opt) 655 680 { 656 681 int len = strlen(opt); ··· 691 664 SPECTRE_V2_CMD_FORCE, 692 665 SPECTRE_V2_CMD_RETPOLINE, 693 666 SPECTRE_V2_CMD_RETPOLINE_GENERIC, 694 - SPECTRE_V2_CMD_RETPOLINE_AMD, 667 + SPECTRE_V2_CMD_RETPOLINE_LFENCE, 668 + SPECTRE_V2_CMD_EIBRS, 669 + SPECTRE_V2_CMD_EIBRS_RETPOLINE, 670 + SPECTRE_V2_CMD_EIBRS_LFENCE, 695 671 }; 696 672 697 673 enum spectre_v2_user_cmd { ··· 767 737 return SPECTRE_V2_USER_CMD_AUTO; 768 738 } 769 739 740 + static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode) 741 + { 742 + return (mode == SPECTRE_V2_EIBRS || 743 + mode == SPECTRE_V2_EIBRS_RETPOLINE || 744 + mode == SPECTRE_V2_EIBRS_LFENCE); 745 + } 
746 + 770 747 static void __init 771 748 spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) 772 749 { ··· 841 804 */ 842 805 if (!boot_cpu_has(X86_FEATURE_STIBP) || 843 806 !smt_possible || 844 - spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 807 + spectre_v2_in_eibrs_mode(spectre_v2_enabled)) 845 808 return; 846 809 847 810 /* ··· 861 824 862 825 static const char * const spectre_v2_strings[] = { 863 826 [SPECTRE_V2_NONE] = "Vulnerable", 864 - [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", 865 - [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", 866 - [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", 827 + [SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines", 828 + [SPECTRE_V2_LFENCE] = "Mitigation: LFENCE", 829 + [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS", 830 + [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE", 831 + [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines", 867 832 }; 868 833 869 834 static const struct { ··· 876 837 { "off", SPECTRE_V2_CMD_NONE, false }, 877 838 { "on", SPECTRE_V2_CMD_FORCE, true }, 878 839 { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, 879 - { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, 840 + { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false }, 841 + { "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false }, 880 842 { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, 843 + { "eibrs", SPECTRE_V2_CMD_EIBRS, false }, 844 + { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false }, 845 + { "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false }, 881 846 { "auto", SPECTRE_V2_CMD_AUTO, false }, 882 847 }; 883 848 ··· 918 875 } 919 876 920 877 if ((cmd == SPECTRE_V2_CMD_RETPOLINE || 921 - cmd == SPECTRE_V2_CMD_RETPOLINE_AMD || 922 - cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) && 878 + cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE || 879 + cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC || 880 + cmd == 
SPECTRE_V2_CMD_EIBRS_LFENCE || 881 + cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) && 923 882 !IS_ENABLED(CONFIG_RETPOLINE)) { 924 - pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option); 883 + pr_err("%s selected but not compiled in. Switching to AUTO select\n", 884 + mitigation_options[i].option); 885 + return SPECTRE_V2_CMD_AUTO; 886 + } 887 + 888 + if ((cmd == SPECTRE_V2_CMD_EIBRS || 889 + cmd == SPECTRE_V2_CMD_EIBRS_LFENCE || 890 + cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) && 891 + !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { 892 + pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n", 893 + mitigation_options[i].option); 894 + return SPECTRE_V2_CMD_AUTO; 895 + } 896 + 897 + if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE || 898 + cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) && 899 + !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { 900 + pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n", 901 + mitigation_options[i].option); 925 902 return SPECTRE_V2_CMD_AUTO; 926 903 } 927 904 928 905 spec_v2_print_cond(mitigation_options[i].option, 929 906 mitigation_options[i].secure); 930 907 return cmd; 908 + } 909 + 910 + static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void) 911 + { 912 + if (!IS_ENABLED(CONFIG_RETPOLINE)) { 913 + pr_err("Kernel not compiled with retpoline; no mitigation available!"); 914 + return SPECTRE_V2_NONE; 915 + } 916 + 917 + return SPECTRE_V2_RETPOLINE; 931 918 } 932 919 933 920 static void __init spectre_v2_select_mitigation(void) ··· 980 907 case SPECTRE_V2_CMD_FORCE: 981 908 case SPECTRE_V2_CMD_AUTO: 982 909 if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { 983 - mode = SPECTRE_V2_IBRS_ENHANCED; 984 - /* Force it so VMEXIT will restore correctly */ 985 - x86_spec_ctrl_base |= SPEC_CTRL_IBRS; 986 - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); 987 - goto specv2_set_mode; 910 + mode = SPECTRE_V2_EIBRS; 911 + break; 988 912 } 989 - if 
(IS_ENABLED(CONFIG_RETPOLINE)) 990 - goto retpoline_auto; 913 + 914 + mode = spectre_v2_select_retpoline(); 991 915 break; 992 - case SPECTRE_V2_CMD_RETPOLINE_AMD: 993 - if (IS_ENABLED(CONFIG_RETPOLINE)) 994 - goto retpoline_amd; 916 + 917 + case SPECTRE_V2_CMD_RETPOLINE_LFENCE: 918 + pr_err(SPECTRE_V2_LFENCE_MSG); 919 + mode = SPECTRE_V2_LFENCE; 995 920 break; 921 + 996 922 case SPECTRE_V2_CMD_RETPOLINE_GENERIC: 997 - if (IS_ENABLED(CONFIG_RETPOLINE)) 998 - goto retpoline_generic; 923 + mode = SPECTRE_V2_RETPOLINE; 999 924 break; 925 + 1000 926 case SPECTRE_V2_CMD_RETPOLINE: 1001 - if (IS_ENABLED(CONFIG_RETPOLINE)) 1002 - goto retpoline_auto; 927 + mode = spectre_v2_select_retpoline(); 928 + break; 929 + 930 + case SPECTRE_V2_CMD_EIBRS: 931 + mode = SPECTRE_V2_EIBRS; 932 + break; 933 + 934 + case SPECTRE_V2_CMD_EIBRS_LFENCE: 935 + mode = SPECTRE_V2_EIBRS_LFENCE; 936 + break; 937 + 938 + case SPECTRE_V2_CMD_EIBRS_RETPOLINE: 939 + mode = SPECTRE_V2_EIBRS_RETPOLINE; 1003 940 break; 1004 941 } 1005 - pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!"); 1006 - return; 1007 942 1008 - retpoline_auto: 1009 - if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD || 1010 - boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) { 1011 - retpoline_amd: 1012 - if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { 1013 - pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); 1014 - goto retpoline_generic; 1015 - } 1016 - mode = SPECTRE_V2_RETPOLINE_AMD; 1017 - setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD); 1018 - setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 1019 - } else { 1020 - retpoline_generic: 1021 - mode = SPECTRE_V2_RETPOLINE_GENERIC; 1022 - setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 943 + if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) 944 + pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); 945 + 946 + if (spectre_v2_in_eibrs_mode(mode)) { 947 + /* Force it so VMEXIT will restore correctly */ 948 + x86_spec_ctrl_base |= 
SPEC_CTRL_IBRS; 949 + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); 1023 950 } 1024 951 1025 - specv2_set_mode: 952 + switch (mode) { 953 + case SPECTRE_V2_NONE: 954 + case SPECTRE_V2_EIBRS: 955 + break; 956 + 957 + case SPECTRE_V2_LFENCE: 958 + case SPECTRE_V2_EIBRS_LFENCE: 959 + setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE); 960 + fallthrough; 961 + 962 + case SPECTRE_V2_RETPOLINE: 963 + case SPECTRE_V2_EIBRS_RETPOLINE: 964 + setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 965 + break; 966 + } 967 + 1026 968 spectre_v2_enabled = mode; 1027 969 pr_info("%s\n", spectre_v2_strings[mode]); 1028 970 ··· 1063 975 * the CPU supports Enhanced IBRS, kernel might un-intentionally not 1064 976 * enable IBRS around firmware calls. 1065 977 */ 1066 - if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) { 978 + if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) { 1067 979 setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); 1068 980 pr_info("Enabling Restricted Speculation for firmware calls\n"); 1069 981 } ··· 1132 1044 void cpu_bugs_smt_update(void) 1133 1045 { 1134 1046 mutex_lock(&spec_ctrl_mutex); 1047 + 1048 + if (sched_smt_active() && unprivileged_ebpf_enabled() && 1049 + spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) 1050 + pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); 1135 1051 1136 1052 switch (spectre_v2_user_stibp) { 1137 1053 case SPECTRE_V2_USER_NONE: ··· 1776 1684 1777 1685 static char *stibp_state(void) 1778 1686 { 1779 - if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 1687 + if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) 1780 1688 return ""; 1781 1689 1782 1690 switch (spectre_v2_user_stibp) { ··· 1806 1714 return ""; 1807 1715 } 1808 1716 1717 + static ssize_t spectre_v2_show_state(char *buf) 1718 + { 1719 + if (spectre_v2_enabled == SPECTRE_V2_LFENCE) 1720 + return sprintf(buf, "Vulnerable: LFENCE\n"); 1721 + 1722 + if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) 1723 + return 
sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n"); 1724 + 1725 + if (sched_smt_active() && unprivileged_ebpf_enabled() && 1726 + spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) 1727 + return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n"); 1728 + 1729 + return sprintf(buf, "%s%s%s%s%s%s\n", 1730 + spectre_v2_strings[spectre_v2_enabled], 1731 + ibpb_state(), 1732 + boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", 1733 + stibp_state(), 1734 + boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "", 1735 + spectre_v2_module_string()); 1736 + } 1737 + 1809 1738 static ssize_t srbds_show_state(char *buf) 1810 1739 { 1811 1740 return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]); ··· 1852 1739 return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]); 1853 1740 1854 1741 case X86_BUG_SPECTRE_V2: 1855 - return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], 1856 - ibpb_state(), 1857 - boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", 1858 - stibp_state(), 1859 - boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "", 1860 - spectre_v2_module_string()); 1742 + return spectre_v2_show_state(buf); 1861 1743 1862 1744 case X86_BUG_SPEC_STORE_BYPASS: 1863 1745 return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
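The renamed modes make the selection order in `spectre_v2_select_mitigation()` easier to follow: the auto path prefers eIBRS when the CPU reports it, and otherwise falls back to `spectre_v2_select_retpoline()`. A rough user-space sketch of that order (not kernel code; the boolean flags are illustrative stand-ins for `boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)` and `IS_ENABLED(CONFIG_RETPOLINE)`):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative subset of the new SPECTRE_V2_* mode names. */
enum v2_mode { V2_NONE, V2_RETPOLINE, V2_EIBRS };

/* Auto selection: eIBRS when the CPU has it, else retpolines if they
 * were compiled in, else no mitigation (reported as "Vulnerable"). */
static enum v2_mode select_mode(bool cpu_has_eibrs, bool retpoline_compiled)
{
    if (cpu_has_eibrs)
        return V2_EIBRS;
    if (retpoline_compiled)
        return V2_RETPOLINE;
    return V2_NONE;
}
```

The combined `eibrs,lfence` / `eibrs,retpoline` modes layer on top of this once the command-line checks (eIBRS present, serializing LFENCE present) have passed.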
+48 -9
arch/x86/kernel/cpu/sgx/encl.c
··· 13 13 #include "sgx.h" 14 14 15 15 /* 16 + * Calculate byte offset of a PCMD struct associated with an enclave page. PCMD's 17 + * follow right after the EPC data in the backing storage. In addition to the 18 + * visible enclave pages, there's one extra page slot for SECS, before PCMD 19 + * structs. 20 + */ 21 + static inline pgoff_t sgx_encl_get_backing_page_pcmd_offset(struct sgx_encl *encl, 22 + unsigned long page_index) 23 + { 24 + pgoff_t epc_end_off = encl->size + sizeof(struct sgx_secs); 25 + 26 + return epc_end_off + page_index * sizeof(struct sgx_pcmd); 27 + } 28 + 29 + /* 30 + * Free a page from the backing storage in the given page index. 31 + */ 32 + static inline void sgx_encl_truncate_backing_page(struct sgx_encl *encl, unsigned long page_index) 33 + { 34 + struct inode *inode = file_inode(encl->backing); 35 + 36 + shmem_truncate_range(inode, PFN_PHYS(page_index), PFN_PHYS(page_index) + PAGE_SIZE - 1); 37 + } 38 + 39 + /* 16 40 * ELDU: Load an EPC page as unblocked. For more info, see "OS Management of EPC 17 41 * Pages" in the SDM. 
18 42 */ ··· 46 22 { 47 23 unsigned long va_offset = encl_page->desc & SGX_ENCL_PAGE_VA_OFFSET_MASK; 48 24 struct sgx_encl *encl = encl_page->encl; 25 + pgoff_t page_index, page_pcmd_off; 49 26 struct sgx_pageinfo pginfo; 50 27 struct sgx_backing b; 51 - pgoff_t page_index; 28 + bool pcmd_page_empty; 29 + u8 *pcmd_page; 52 30 int ret; 53 31 54 32 if (secs_page) ··· 58 32 else 59 33 page_index = PFN_DOWN(encl->size); 60 34 35 + page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index); 36 + 61 37 ret = sgx_encl_get_backing(encl, page_index, &b); 62 38 if (ret) 63 39 return ret; 64 40 65 41 pginfo.addr = encl_page->desc & PAGE_MASK; 66 42 pginfo.contents = (unsigned long)kmap_atomic(b.contents); 67 - pginfo.metadata = (unsigned long)kmap_atomic(b.pcmd) + 68 - b.pcmd_offset; 43 + pcmd_page = kmap_atomic(b.pcmd); 44 + pginfo.metadata = (unsigned long)pcmd_page + b.pcmd_offset; 69 45 70 46 if (secs_page) 71 47 pginfo.secs = (u64)sgx_get_epc_virt_addr(secs_page); ··· 83 55 ret = -EFAULT; 84 56 } 85 57 86 - kunmap_atomic((void *)(unsigned long)(pginfo.metadata - b.pcmd_offset)); 58 + memset(pcmd_page + b.pcmd_offset, 0, sizeof(struct sgx_pcmd)); 59 + 60 + /* 61 + * The area for the PCMD in the page was zeroed above. 
Check if the 62 + * whole page is now empty meaning that all PCMD's have been zeroed: 63 + */ 64 + pcmd_page_empty = !memchr_inv(pcmd_page, 0, PAGE_SIZE); 65 + 66 + kunmap_atomic(pcmd_page); 87 67 kunmap_atomic((void *)(unsigned long)pginfo.contents); 88 68 89 69 sgx_encl_put_backing(&b, false); 70 + 71 + sgx_encl_truncate_backing_page(encl, page_index); 72 + 73 + if (pcmd_page_empty) 74 + sgx_encl_truncate_backing_page(encl, PFN_DOWN(page_pcmd_off)); 90 75 91 76 return ret; 92 77 } ··· 620 579 int sgx_encl_get_backing(struct sgx_encl *encl, unsigned long page_index, 621 580 struct sgx_backing *backing) 622 581 { 623 - pgoff_t pcmd_index = PFN_DOWN(encl->size) + 1 + (page_index >> 5); 582 + pgoff_t page_pcmd_off = sgx_encl_get_backing_page_pcmd_offset(encl, page_index); 624 583 struct page *contents; 625 584 struct page *pcmd; 626 585 ··· 628 587 if (IS_ERR(contents)) 629 588 return PTR_ERR(contents); 630 589 631 - pcmd = sgx_encl_get_backing_page(encl, pcmd_index); 590 + pcmd = sgx_encl_get_backing_page(encl, PFN_DOWN(page_pcmd_off)); 632 591 if (IS_ERR(pcmd)) { 633 592 put_page(contents); 634 593 return PTR_ERR(pcmd); ··· 637 596 backing->page_index = page_index; 638 597 backing->contents = contents; 639 598 backing->pcmd = pcmd; 640 - backing->pcmd_offset = 641 - (page_index & (PAGE_SIZE / sizeof(struct sgx_pcmd) - 1)) * 642 - sizeof(struct sgx_pcmd); 599 + backing->pcmd_offset = page_pcmd_off & (PAGE_SIZE - 1); 643 600 644 601 return 0; 645 602 }
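The new offset arithmetic is easy to check in isolation. A minimal sketch of `sgx_encl_get_backing_page_pcmd_offset()`, with sizes hard-coded on the assumption that `struct sgx_pcmd` is 128 bytes and `struct sgx_secs` occupies one 4K page:

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_PCMD_SIZE  128ULL   /* assumed sizeof(struct sgx_pcmd) */
#define SKETCH_SECS_SIZE 4096ULL   /* assumed sizeof(struct sgx_secs) */

/* PCMD structs sit right after the enclave data plus one SECS slot. */
static uint64_t pcmd_offset(uint64_t encl_size, uint64_t page_index)
{
    uint64_t epc_end_off = encl_size + SKETCH_SECS_SIZE;

    return epc_end_off + page_index * SKETCH_PCMD_SIZE;
}
```

Since the enclave size is page-aligned and 32 PCMDs fit per 4K page, the new `backing->pcmd_offset = page_pcmd_off & (PAGE_SIZE - 1)` reproduces the value the old `(page_index & (PAGE_SIZE / sizeof(struct sgx_pcmd) - 1)) * sizeof(struct sgx_pcmd)` expression computed.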
+30 -11
arch/x86/kernel/e820.c
··· 995 995 */ 996 996 void __init e820__reserve_setup_data(void) 997 997 { 998 + struct setup_indirect *indirect; 998 999 struct setup_data *data; 999 - u64 pa_data; 1000 + u64 pa_data, pa_next; 1001 + u32 len; 1000 1002 1001 1003 pa_data = boot_params.hdr.setup_data; 1002 1004 if (!pa_data) ··· 1006 1004 1007 1005 while (pa_data) { 1008 1006 data = early_memremap(pa_data, sizeof(*data)); 1007 + if (!data) { 1008 + pr_warn("e820: failed to memremap setup_data entry\n"); 1009 + return; 1010 + } 1011 + 1012 + len = sizeof(*data); 1013 + pa_next = data->next; 1014 + 1009 1015 e820__range_update(pa_data, sizeof(*data)+data->len, E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1010 1016 1011 1017 /* ··· 1025 1015 sizeof(*data) + data->len, 1026 1016 E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1027 1017 1028 - if (data->type == SETUP_INDIRECT && 1029 - ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) { 1030 - e820__range_update(((struct setup_indirect *)data->data)->addr, 1031 - ((struct setup_indirect *)data->data)->len, 1032 - E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1033 - e820__range_update_kexec(((struct setup_indirect *)data->data)->addr, 1034 - ((struct setup_indirect *)data->data)->len, 1035 - E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1018 + if (data->type == SETUP_INDIRECT) { 1019 + len += data->len; 1020 + early_memunmap(data, sizeof(*data)); 1021 + data = early_memremap(pa_data, len); 1022 + if (!data) { 1023 + pr_warn("e820: failed to memremap indirect setup_data\n"); 1024 + return; 1025 + } 1026 + 1027 + indirect = (struct setup_indirect *)data->data; 1028 + 1029 + if (indirect->type != SETUP_INDIRECT) { 1030 + e820__range_update(indirect->addr, indirect->len, 1031 + E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1032 + e820__range_update_kexec(indirect->addr, indirect->len, 1033 + E820_TYPE_RAM, E820_TYPE_RESERVED_KERN); 1034 + } 1036 1035 } 1037 1036 1038 - pa_data = data->next; 1039 - early_memunmap(data, sizeof(*data)); 1037 + pa_data = pa_next; 1038 + 
early_memunmap(data, len); 1040 1039 } 1041 1040 1042 1041 e820__update_table(e820_table);
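The same two-step mapping pattern recurs in the kdebugfs.c, ksysfs.c, setup.c and ioremap.c hunks below: map only the header, learn `data->len`, then remap wide enough before dereferencing `data->data`. A user-space model of the pattern, where `map_range()` is a bounds-checking stand-in for `early_memremap()` and the struct layouts mirror the boot protocol:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SETUP_INDIRECT (1U << 31)   /* boot protocol type flag */

struct sd_hdr {                     /* models struct setup_data */
    uint64_t next;
    uint32_t type;
    uint32_t len;
    uint8_t  data[];
};

struct sd_indirect {                /* models struct setup_indirect */
    uint32_t type;
    uint32_t reserved;
    uint64_t len;
    uint64_t addr;
};

static uint8_t phys[256];           /* fake physical memory */

/* early_memremap() stand-in: fails when the range is out of bounds. */
static void *map_range(uint64_t pa, uint32_t len)
{
    return (pa + len <= sizeof(phys)) ? &phys[pa] : NULL;
}

/* Walk one entry: widen the mapping before touching the payload. */
static uint64_t indirect_addr(uint64_t pa)
{
    struct sd_hdr *data = map_range(pa, sizeof(*data));
    uint32_t len = sizeof(*data);

    if (!data || data->type != SETUP_INDIRECT)
        return 0;
    len += data->len;               /* header + payload */
    data = map_range(pa, len);      /* remap before reading data->data */
    if (!data)
        return 0;
    return ((struct sd_indirect *)data->data)->addr;
}

/* Builds one indirect entry at address 0 and walks it. */
static uint64_t demo(void)
{
    struct sd_hdr hdr = { .next = 0, .type = SETUP_INDIRECT,
                          .len = sizeof(struct sd_indirect) };
    struct sd_indirect ind = { .type = 1, .len = 64, .addr = 0x1000 };

    memcpy(phys, &hdr, sizeof(hdr));
    memcpy(phys + sizeof(hdr), &ind, sizeof(ind));
    return indirect_addr(0);
}
```

The bug being fixed is exactly the missing remap: the old code read the `setup_indirect` fields while only `sizeof(struct setup_data)` bytes were guaranteed mapped.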
+27 -8
arch/x86/kernel/kdebugfs.c
··· 88 88 89 89 static int __init create_setup_data_nodes(struct dentry *parent) 90 90 { 91 + struct setup_indirect *indirect; 91 92 struct setup_data_node *node; 92 93 struct setup_data *data; 93 - int error; 94 + u64 pa_data, pa_next; 94 95 struct dentry *d; 95 - u64 pa_data; 96 + int error; 97 + u32 len; 96 98 int no = 0; 97 99 98 100 d = debugfs_create_dir("setup_data", parent); ··· 114 112 error = -ENOMEM; 115 113 goto err_dir; 116 114 } 115 + pa_next = data->next; 117 116 118 - if (data->type == SETUP_INDIRECT && 119 - ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) { 120 - node->paddr = ((struct setup_indirect *)data->data)->addr; 121 - node->type = ((struct setup_indirect *)data->data)->type; 122 - node->len = ((struct setup_indirect *)data->data)->len; 117 + if (data->type == SETUP_INDIRECT) { 118 + len = sizeof(*data) + data->len; 119 + memunmap(data); 120 + data = memremap(pa_data, len, MEMREMAP_WB); 121 + if (!data) { 122 + kfree(node); 123 + error = -ENOMEM; 124 + goto err_dir; 125 + } 126 + 127 + indirect = (struct setup_indirect *)data->data; 128 + 129 + if (indirect->type != SETUP_INDIRECT) { 130 + node->paddr = indirect->addr; 131 + node->type = indirect->type; 132 + node->len = indirect->len; 133 + } else { 134 + node->paddr = pa_data; 135 + node->type = data->type; 136 + node->len = data->len; 137 + } 123 138 } else { 124 139 node->paddr = pa_data; 125 140 node->type = data->type; ··· 144 125 } 145 126 146 127 create_setup_data_node(d, no, node); 147 - pa_data = data->next; 128 + pa_data = pa_next; 148 129 149 130 memunmap(data); 150 131 no++;
+61 -16
arch/x86/kernel/ksysfs.c
··· 91 91 92 92 static int __init get_setup_data_size(int nr, size_t *size) 93 93 { 94 - int i = 0; 94 + u64 pa_data = boot_params.hdr.setup_data, pa_next; 95 + struct setup_indirect *indirect; 95 96 struct setup_data *data; 96 - u64 pa_data = boot_params.hdr.setup_data; 97 + int i = 0; 98 + u32 len; 97 99 98 100 while (pa_data) { 99 101 data = memremap(pa_data, sizeof(*data), MEMREMAP_WB); 100 102 if (!data) 101 103 return -ENOMEM; 104 + pa_next = data->next; 105 + 102 106 if (nr == i) { 103 - if (data->type == SETUP_INDIRECT && 104 - ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) 105 - *size = ((struct setup_indirect *)data->data)->len; 106 - else 107 + if (data->type == SETUP_INDIRECT) { 108 + len = sizeof(*data) + data->len; 109 + memunmap(data); 110 + data = memremap(pa_data, len, MEMREMAP_WB); 111 + if (!data) 112 + return -ENOMEM; 113 + 114 + indirect = (struct setup_indirect *)data->data; 115 + 116 + if (indirect->type != SETUP_INDIRECT) 117 + *size = indirect->len; 118 + else 119 + *size = data->len; 120 + } else { 107 121 *size = data->len; 122 + } 108 123 109 124 memunmap(data); 110 125 return 0; 111 126 } 112 127 113 - pa_data = data->next; 128 + pa_data = pa_next; 114 129 memunmap(data); 115 130 i++; 116 131 } ··· 135 120 static ssize_t type_show(struct kobject *kobj, 136 121 struct kobj_attribute *attr, char *buf) 137 122 { 123 + struct setup_indirect *indirect; 124 + struct setup_data *data; 138 125 int nr, ret; 139 126 u64 paddr; 140 - struct setup_data *data; 127 + u32 len; 141 128 142 129 ret = kobj_to_setup_data_nr(kobj, &nr); 143 130 if (ret) ··· 152 135 if (!data) 153 136 return -ENOMEM; 154 137 155 - if (data->type == SETUP_INDIRECT) 156 - ret = sprintf(buf, "0x%x\n", ((struct setup_indirect *)data->data)->type); 157 - else 138 + if (data->type == SETUP_INDIRECT) { 139 + len = sizeof(*data) + data->len; 140 + memunmap(data); 141 + data = memremap(paddr, len, MEMREMAP_WB); 142 + if (!data) 143 + return -ENOMEM; 144 + 145 + 
indirect = (struct setup_indirect *)data->data; 146 + 147 + ret = sprintf(buf, "0x%x\n", indirect->type); 148 + } else { 158 149 ret = sprintf(buf, "0x%x\n", data->type); 150 + } 151 + 159 152 memunmap(data); 160 153 return ret; 161 154 } ··· 176 149 char *buf, 177 150 loff_t off, size_t count) 178 151 { 152 + struct setup_indirect *indirect; 153 + struct setup_data *data; 179 154 int nr, ret = 0; 180 155 u64 paddr, len; 181 - struct setup_data *data; 182 156 void *p; 183 157 184 158 ret = kobj_to_setup_data_nr(kobj, &nr); ··· 193 165 if (!data) 194 166 return -ENOMEM; 195 167 196 - if (data->type == SETUP_INDIRECT && 197 - ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) { 198 - paddr = ((struct setup_indirect *)data->data)->addr; 199 - len = ((struct setup_indirect *)data->data)->len; 168 + if (data->type == SETUP_INDIRECT) { 169 + len = sizeof(*data) + data->len; 170 + memunmap(data); 171 + data = memremap(paddr, len, MEMREMAP_WB); 172 + if (!data) 173 + return -ENOMEM; 174 + 175 + indirect = (struct setup_indirect *)data->data; 176 + 177 + if (indirect->type != SETUP_INDIRECT) { 178 + paddr = indirect->addr; 179 + len = indirect->len; 180 + } else { 181 + /* 182 + * Even though this is technically undefined, return 183 + * the data as though it is a normal setup_data struct. 184 + * This will at least allow it to be inspected. 185 + */ 186 + paddr += sizeof(*data); 187 + len = data->len; 188 + } 200 189 } else { 201 190 paddr += sizeof(*data); 202 191 len = data->len;
+3 -1
arch/x86/kernel/kvm.c
··· 463 463 return (kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH) && 464 464 !kvm_para_has_hint(KVM_HINTS_REALTIME) && 465 465 kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) && 466 + !boot_cpu_has(X86_FEATURE_MWAIT) && 466 467 (num_possible_cpus() != 1)); 467 468 } 468 469 ··· 478 477 return (kvm_para_has_feature(KVM_FEATURE_PV_SCHED_YIELD) && 479 478 !kvm_para_has_hint(KVM_HINTS_REALTIME) && 480 479 kvm_para_has_feature(KVM_FEATURE_STEAL_TIME) && 480 + !boot_cpu_has(X86_FEATURE_MWAIT) && 481 481 (num_possible_cpus() != 1)); 482 482 } 483 483 ··· 624 622 625 623 /* Make sure other vCPUs get a chance to run if they need to. */ 626 624 for_each_cpu(cpu, mask) { 627 - if (vcpu_is_preempted(cpu)) { 625 + if (!idle_cpu(cpu) && vcpu_is_preempted(cpu)) { 628 626 kvm_hypercall1(KVM_HC_SCHED_YIELD, per_cpu(x86_cpu_to_apicid, cpu)); 629 627 break; 630 628 }
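Both hunks above add the same `!boot_cpu_has(X86_FEATURE_MWAIT)` term: a guest that can MWAIT prefers native idle, so the PV TLB-flush and sched-yield optimizations are now skipped there. A sketch of the tightened predicate (the flag names are stand-ins for the `kvm_para_has_*()` checks):

```c
#include <assert.h>
#include <stdbool.h>

/* True when the PV optimization is worth enabling. */
static bool pv_feature_wanted(bool has_feature, bool realtime_hint,
                              bool has_steal_time, bool has_mwait,
                              int nr_cpus)
{
    return has_feature && !realtime_hint && has_steal_time &&
           !has_mwait &&               /* new: prefer native MWAIT idle */
           nr_cpus != 1;
}
```

The `kvm_flush_tlb_multi()` hunk is a separate refinement with the same flavor: don't bother yielding to a preempted vCPU that is idle anyway.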
+3
arch/x86/kernel/kvmclock.c
··· 239 239 240 240 static int __init kvm_setup_vsyscall_timeinfo(void) 241 241 { 242 + if (!kvm_para_available() || !kvmclock) 243 + return 0; 244 + 242 245 kvmclock_init_mem(); 243 246 244 247 #ifdef CONFIG_X86_64
+8 -5
arch/x86/kernel/module.c
··· 273 273 retpolines = s; 274 274 } 275 275 276 + /* 277 + * See alternative_instructions() for the ordering rules between the 278 + * various patching types. 279 + */ 280 + if (para) { 281 + void *pseg = (void *)para->sh_addr; 282 + apply_paravirt(pseg, pseg + para->sh_size); 283 + } 276 284 if (retpolines) { 277 285 void *rseg = (void *)retpolines->sh_addr; 278 286 apply_retpolines(rseg, rseg + retpolines->sh_size); ··· 296 288 alternatives_smp_module_add(me, me->name, 297 289 lseg, lseg + locks->sh_size, 298 290 tseg, tseg + text->sh_size); 299 - } 300 - 301 - if (para) { 302 - void *pseg = (void *)para->sh_addr; 303 - apply_paravirt(pseg, pseg + para->sh_size); 304 291 } 305 292 306 293 /* make jump label nops */
+27 -7
arch/x86/kernel/setup.c
··· 369 369 370 370 static void __init memblock_x86_reserve_range_setup_data(void) 371 371 { 372 + struct setup_indirect *indirect; 372 373 struct setup_data *data; 373 - u64 pa_data; 374 + u64 pa_data, pa_next; 375 + u32 len; 374 376 375 377 pa_data = boot_params.hdr.setup_data; 376 378 while (pa_data) { 377 379 data = early_memremap(pa_data, sizeof(*data)); 380 + if (!data) { 381 + pr_warn("setup: failed to memremap setup_data entry\n"); 382 + return; 383 + } 384 + 385 + len = sizeof(*data); 386 + pa_next = data->next; 387 + 378 388 memblock_reserve(pa_data, sizeof(*data) + data->len); 379 389 380 - if (data->type == SETUP_INDIRECT && 381 - ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) 382 - memblock_reserve(((struct setup_indirect *)data->data)->addr, 383 - ((struct setup_indirect *)data->data)->len); 390 + if (data->type == SETUP_INDIRECT) { 391 + len += data->len; 392 + early_memunmap(data, sizeof(*data)); 393 + data = early_memremap(pa_data, len); 394 + if (!data) { 395 + pr_warn("setup: failed to memremap indirect setup_data\n"); 396 + return; 397 + } 384 398 385 - pa_data = data->next; 386 - early_memunmap(data, sizeof(*data)); 399 + indirect = (struct setup_indirect *)data->data; 400 + 401 + if (indirect->type != SETUP_INDIRECT) 402 + memblock_reserve(indirect->addr, indirect->len); 403 + } 404 + 405 + pa_data = pa_next; 406 + early_memunmap(data, len); 387 407 } 388 408 } 389 409
+1
arch/x86/kernel/traps.c
··· 659 659 660 660 return res == NOTIFY_STOP; 661 661 } 662 + NOKPROBE_SYMBOL(do_int3); 662 663 663 664 static void do_int3_user(struct pt_regs *regs) 664 665 {
+1 -1
arch/x86/kvm/mmu/mmu.c
··· 3565 3565 out_unlock: 3566 3566 write_unlock(&vcpu->kvm->mmu_lock); 3567 3567 3568 - return 0; 3568 + return r; 3569 3569 } 3570 3570 3571 3571 static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
+8 -3
arch/x86/kvm/vmx/nested.c
··· 246 246 src = &prev->host_state; 247 247 dest = &vmx->loaded_vmcs->host_state; 248 248 249 - vmx_set_vmcs_host_state(dest, src->cr3, src->fs_sel, src->gs_sel, 250 - src->fs_base, src->gs_base); 249 + vmx_set_host_fs_gs(dest, src->fs_sel, src->gs_sel, src->fs_base, src->gs_base); 251 250 dest->ldt_sel = src->ldt_sel; 252 251 #ifdef CONFIG_X86_64 253 252 dest->ds_sel = src->ds_sel; ··· 3055 3056 static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) 3056 3057 { 3057 3058 struct vcpu_vmx *vmx = to_vmx(vcpu); 3058 - unsigned long cr4; 3059 + unsigned long cr3, cr4; 3059 3060 bool vm_fail; 3060 3061 3061 3062 if (!nested_early_check) ··· 3077 3078 * there is no need to preserve other bits or save/restore the field. 3078 3079 */ 3079 3080 vmcs_writel(GUEST_RFLAGS, 0); 3081 + 3082 + cr3 = __get_current_cr3_fast(); 3083 + if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) { 3084 + vmcs_writel(HOST_CR3, cr3); 3085 + vmx->loaded_vmcs->host_state.cr3 = cr3; 3086 + } 3080 3087 3081 3088 cr4 = cr4_read_shadow(); 3082 3089 if (unlikely(cr4 != vmx->loaded_vmcs->host_state.cr4)) {
+17 -11
arch/x86/kvm/vmx/vmx.c
··· 1080 1080 wrmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host.ctl); 1081 1081 } 1082 1082 1083 - void vmx_set_vmcs_host_state(struct vmcs_host_state *host, unsigned long cr3, 1084 - u16 fs_sel, u16 gs_sel, 1085 - unsigned long fs_base, unsigned long gs_base) 1083 + void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel, 1084 + unsigned long fs_base, unsigned long gs_base) 1086 1085 { 1087 - if (unlikely(cr3 != host->cr3)) { 1088 - vmcs_writel(HOST_CR3, cr3); 1089 - host->cr3 = cr3; 1090 - } 1091 1086 if (unlikely(fs_sel != host->fs_sel)) { 1092 1087 if (!(fs_sel & 7)) 1093 1088 vmcs_write16(HOST_FS_SELECTOR, fs_sel); ··· 1177 1182 gs_base = segment_base(gs_sel); 1178 1183 #endif 1179 1184 1180 - vmx_set_vmcs_host_state(host_state, __get_current_cr3_fast(), 1181 - fs_sel, gs_sel, fs_base, gs_base); 1182 - 1185 + vmx_set_host_fs_gs(host_state, fs_sel, gs_sel, fs_base, gs_base); 1183 1186 vmx->guest_state_loaded = true; 1184 1187 } 1185 1188 ··· 6784 6791 static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu) 6785 6792 { 6786 6793 struct vcpu_vmx *vmx = to_vmx(vcpu); 6787 - unsigned long cr4; 6794 + unsigned long cr3, cr4; 6788 6795 6789 6796 /* Record the guest's net vcpu time for enforced NMI injections. */ 6790 6797 if (unlikely(!enable_vnmi && ··· 6826 6833 if (kvm_register_is_dirty(vcpu, VCPU_REGS_RIP)) 6827 6834 vmcs_writel(GUEST_RIP, vcpu->arch.regs[VCPU_REGS_RIP]); 6828 6835 vcpu->arch.regs_dirty = 0; 6836 + 6837 + /* 6838 + * Refresh vmcs.HOST_CR3 if necessary. This must be done immediately 6839 + * prior to VM-Enter, as the kernel may load a new ASID (PCID) any time 6840 + * it switches back to the current->mm, which can occur in KVM context 6841 + * when switching to a temporary mm to patch kernel code, e.g. if KVM 6842 + * toggles a static key while handling a VM-Exit. 
6843 + */ 6844 + cr3 = __get_current_cr3_fast(); 6845 + if (unlikely(cr3 != vmx->loaded_vmcs->host_state.cr3)) { 6846 + vmcs_writel(HOST_CR3, cr3); 6847 + vmx->loaded_vmcs->host_state.cr3 = cr3; 6848 + } 6829 6849 6830 6850 cr4 = cr4_read_shadow(); 6831 6851 if (unlikely(cr4 != vmx->loaded_vmcs->host_state.cr4)) {
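HOST_CR3 now gets the same compare-before-VMWRITE treatment that HOST_CR4 already had, moved from `vmx_prepare_switch_to_guest()` to just before VM-Enter. The caching pattern in isolation (a user-space sketch; `vmwrites` counts the `vmcs_writel()` calls that would be issued):

```c
#include <assert.h>
#include <stdint.h>

struct host_state { uint64_t cr3; };

static int vmwrites;                 /* stand-in for vmcs_writel(HOST_CR3, ...) */

/* Write the VMCS field only when the cached value is stale. */
static void refresh_host_cr3(struct host_state *hs, uint64_t cur_cr3)
{
    if (cur_cr3 != hs->cr3) {
        vmwrites++;                  /* vmcs_writel(HOST_CR3, cur_cr3) */
        hs->cr3 = cur_cr3;
    }
}

/* Three entries with one mm switch in between: only two VMWRITEs. */
static int demo(void)
{
    struct host_state hs = { .cr3 = 0 };

    vmwrites = 0;
    refresh_host_cr3(&hs, 0x1000);   /* first entry: write */
    refresh_host_cr3(&hs, 0x1000);   /* unchanged: skipped */
    refresh_host_cr3(&hs, 0x2000);   /* kernel switched mm: write */
    return vmwrites;
}
```

Doing the check immediately before VM-Enter is what closes the race described in the new comment: a PCID/ASID change (e.g. from a temporary patching mm) can no longer slip in between the cache check and the entry.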
+2 -3
arch/x86/kvm/vmx/vmx.h
··· 374 374 void free_vpid(int vpid); 375 375 void vmx_set_constant_host_state(struct vcpu_vmx *vmx); 376 376 void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu); 377 - void vmx_set_vmcs_host_state(struct vmcs_host_state *host, unsigned long cr3, 378 - u16 fs_sel, u16 gs_sel, 379 - unsigned long fs_base, unsigned long gs_base); 377 + void vmx_set_host_fs_gs(struct vmcs_host_state *host, u16 fs_sel, u16 gs_sel, 378 + unsigned long fs_base, unsigned long gs_base); 380 379 int vmx_get_cpl(struct kvm_vcpu *vcpu); 381 380 bool vmx_emulation_required(struct kvm_vcpu *vcpu); 382 381 unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu);
+13 -12
arch/x86/kvm/x86.c
··· 9180 9180 likely(!pic_in_kernel(vcpu->kvm)); 9181 9181 } 9182 9182 9183 + /* Called within kvm->srcu read side. */ 9183 9184 static void post_kvm_run_save(struct kvm_vcpu *vcpu) 9184 9185 { 9185 9186 struct kvm_run *kvm_run = vcpu->run; ··· 9189 9188 kvm_run->cr8 = kvm_get_cr8(vcpu); 9190 9189 kvm_run->apic_base = kvm_get_apic_base(vcpu); 9191 9190 9192 - /* 9193 - * The call to kvm_ready_for_interrupt_injection() may end up in 9194 - * kvm_xen_has_interrupt() which may require the srcu lock to be 9195 - * held, to protect against changes in the vcpu_info address. 9196 - */ 9197 - vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); 9198 9191 kvm_run->ready_for_interrupt_injection = 9199 9192 pic_in_kernel(vcpu->kvm) || 9200 9193 kvm_vcpu_ready_for_interrupt_injection(vcpu); 9201 - srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx); 9202 9194 9203 9195 if (is_smm(vcpu)) 9204 9196 kvm_run->flags |= KVM_RUN_X86_SMM; ··· 9809 9815 EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit); 9810 9816 9811 9817 /* 9818 + * Called within kvm->srcu read side. 9812 9819 * Returns 1 to let vcpu_run() continue the guest execution loop without 9813 9820 * exiting to the userspace. Otherwise, the value will be returned to the 9814 9821 * userspace. ··· 10188 10193 return r; 10189 10194 } 10190 10195 10196 + /* Called within kvm->srcu read side. */ 10191 10197 static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu) 10192 10198 { 10193 10199 bool hv_timer; ··· 10248 10252 !vcpu->arch.apf.halted); 10249 10253 } 10250 10254 10255 + /* Called within kvm->srcu read side. 
*/ 10251 10256 static int vcpu_run(struct kvm_vcpu *vcpu) 10252 10257 { 10253 10258 int r; 10254 10259 struct kvm *kvm = vcpu->kvm; 10255 10260 10256 - vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); 10257 10261 vcpu->arch.l1tf_flush_l1d = true; 10258 10262 10259 10263 for (;;) { ··· 10281 10285 if (__xfer_to_guest_mode_work_pending()) { 10282 10286 srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); 10283 10287 r = xfer_to_guest_mode_handle_work(vcpu); 10288 + vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); 10284 10289 if (r) 10285 10290 return r; 10286 - vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); 10287 10291 } 10288 10292 } 10289 - 10290 - srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); 10291 10293 10292 10294 return r; 10293 10295 } ··· 10392 10398 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) 10393 10399 { 10394 10400 struct kvm_run *kvm_run = vcpu->run; 10401 + struct kvm *kvm = vcpu->kvm; 10395 10402 int r; 10396 10403 10397 10404 vcpu_load(vcpu); ··· 10400 10405 kvm_run->flags = 0; 10401 10406 kvm_load_guest_fpu(vcpu); 10402 10407 10408 + vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu); 10403 10409 if (unlikely(vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED)) { 10404 10410 if (kvm_run->immediate_exit) { 10405 10411 r = -EINTR; ··· 10411 10415 * use before KVM has ever run the vCPU. 10412 10416 */ 10413 10417 WARN_ON_ONCE(kvm_lapic_hv_timer_in_use(vcpu)); 10418 + 10419 + srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); 10414 10420 kvm_vcpu_block(vcpu); 10421 + vcpu->srcu_idx = srcu_read_lock(&kvm->srcu); 10422 + 10415 10423 if (kvm_apic_accept_events(vcpu) < 0) { 10416 10424 r = 0; 10417 10425 goto out; ··· 10475 10475 if (kvm_run->kvm_valid_regs) 10476 10476 store_regs(vcpu); 10477 10477 post_kvm_run_save(vcpu); 10478 - kvm_sigset_deactivate(vcpu); 10478 + srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx); 10479 10479 10480 + kvm_sigset_deactivate(vcpu); 10480 10481 vcpu_put(vcpu); 10481 10482 return r; 10482 10483 }
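The net effect of the srcu hunks is that `kvm_arch_vcpu_ioctl_run()` takes the read lock once, and every blocking point drops and reacquires it, so all exit paths leave it balanced. A toy model of that invariant, with a depth counter in place of real SRCU:

```c
#include <assert.h>

static int srcu_depth;               /* models srcu_read_lock() nesting */

static void srcu_lock(void)   { srcu_depth++; }
static void srcu_unlock(void) { srcu_depth--; }

/* Models the ioctl: lock once, drop only around blocking work. */
static int run_model(int need_work)
{
    srcu_lock();                     /* kvm_arch_vcpu_ioctl_run() entry */
    for (int i = 0; i < 3; i++) {
        if (need_work) {
            srcu_unlock();           /* drop across the blocking call */
            /* xfer_to_guest_mode_handle_work() would run here */
            srcu_lock();             /* reacquire before looping */
        }
    }
    srcu_unlock();                   /* released before vcpu_put() */
    return srcu_depth;
}
```

This also lets `post_kvm_run_save()` shed its local lock/unlock pair, since its caller now always holds the read side.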
+1 -1
arch/x86/lib/retpoline.S
··· 34 34 35 35 ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *%\reg), \ 36 36 __stringify(RETPOLINE \reg), X86_FEATURE_RETPOLINE, \ 37 - __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_AMD 37 + __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *%\reg; int3), X86_FEATURE_RETPOLINE_LFENCE 38 38 39 39 .endm 40 40
+51 -6
arch/x86/mm/ioremap.c
··· 615 615 static bool memremap_is_setup_data(resource_size_t phys_addr, 616 616 unsigned long size) 617 617 { 618 + struct setup_indirect *indirect; 618 619 struct setup_data *data; 619 620 u64 paddr, paddr_next; 620 621 ··· 628 627 629 628 data = memremap(paddr, sizeof(*data), 630 629 MEMREMAP_WB | MEMREMAP_DEC); 630 + if (!data) { 631 + pr_warn("failed to memremap setup_data entry\n"); 632 + return false; 633 + } 631 634 632 635 paddr_next = data->next; 633 636 len = data->len; ··· 641 636 return true; 642 637 } 643 638 644 - if (data->type == SETUP_INDIRECT && 645 - ((struct setup_indirect *)data->data)->type != SETUP_INDIRECT) { 646 - paddr = ((struct setup_indirect *)data->data)->addr; 647 - len = ((struct setup_indirect *)data->data)->len; 639 + if (data->type == SETUP_INDIRECT) { 640 + memunmap(data); 641 + data = memremap(paddr, sizeof(*data) + len, 642 + MEMREMAP_WB | MEMREMAP_DEC); 643 + if (!data) { 644 + pr_warn("failed to memremap indirect setup_data\n"); 645 + return false; 646 + } 647 + 648 + indirect = (struct setup_indirect *)data->data; 649 + 650 + if (indirect->type != SETUP_INDIRECT) { 651 + paddr = indirect->addr; 652 + len = indirect->len; 653 + } 648 654 } 649 655 650 656 memunmap(data); ··· 676 660 static bool __init early_memremap_is_setup_data(resource_size_t phys_addr, 677 661 unsigned long size) 678 662 { 663 + struct setup_indirect *indirect; 679 664 struct setup_data *data; 680 665 u64 paddr, paddr_next; 681 666 682 667 paddr = boot_params.hdr.setup_data; 683 668 while (paddr) { 684 - unsigned int len; 669 + unsigned int len, size; 685 670 686 671 if (phys_addr == paddr) 687 672 return true; 688 673 689 674 data = early_memremap_decrypted(paddr, sizeof(*data)); 675 + if (!data) { 676 + pr_warn("failed to early memremap setup_data entry\n"); 677 + return false; 678 + } 679 + 680 + size = sizeof(*data); 690 681 691 682 paddr_next = data->next; 692 683 len = data->len; 693 684 694 - early_memunmap(data, sizeof(*data)); 685 + if 
((phys_addr > paddr) && (phys_addr < (paddr + len))) { 686 + early_memunmap(data, sizeof(*data)); 687 + return true; 688 + } 689 + 690 + if (data->type == SETUP_INDIRECT) { 691 + size += len; 692 + early_memunmap(data, sizeof(*data)); 693 + data = early_memremap_decrypted(paddr, size); 694 + if (!data) { 695 + pr_warn("failed to early memremap indirect setup_data\n"); 696 + return false; 697 + } 698 + 699 + indirect = (struct setup_indirect *)data->data; 700 + 701 + if (indirect->type != SETUP_INDIRECT) { 702 + paddr = indirect->addr; 703 + len = indirect->len; 704 + } 705 + } 706 + 707 + early_memunmap(data, size); 695 708 696 709 if ((phys_addr > paddr) && (phys_addr < (paddr + len))) 697 710 return true;
+1 -1
arch/x86/net/bpf_jit_comp.c
··· 394 394 u8 *prog = *pprog; 395 395 396 396 #ifdef CONFIG_RETPOLINE 397 - if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_AMD)) { 397 + if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) { 398 398 EMIT_LFENCE(); 399 399 EMIT2(0xFF, 0xE0 + reg); 400 400 } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+23 -12
block/blk-mq.c
··· 2718 2718 2719 2719 static struct request *blk_mq_get_new_requests(struct request_queue *q, 2720 2720 struct blk_plug *plug, 2721 - struct bio *bio) 2721 + struct bio *bio, 2722 + unsigned int nsegs) 2722 2723 { 2723 2724 struct blk_mq_alloc_data data = { 2724 2725 .q = q, ··· 2730 2729 2731 2730 if (unlikely(bio_queue_enter(bio))) 2732 2731 return NULL; 2732 + 2733 + if (blk_mq_attempt_bio_merge(q, bio, nsegs)) 2734 + goto queue_exit; 2735 + 2736 + rq_qos_throttle(q, bio); 2733 2737 2734 2738 if (plug) { 2735 2739 data.nr_tags = plug->nr_ios; ··· 2748 2742 rq_qos_cleanup(q, bio); 2749 2743 if (bio->bi_opf & REQ_NOWAIT) 2750 2744 bio_wouldblock_error(bio); 2745 + queue_exit: 2751 2746 blk_queue_exit(q); 2752 2747 return NULL; 2753 2748 } 2754 2749 2755 2750 static inline struct request *blk_mq_get_cached_request(struct request_queue *q, 2756 - struct blk_plug *plug, struct bio *bio) 2751 + struct blk_plug *plug, struct bio **bio, unsigned int nsegs) 2757 2752 { 2758 2753 struct request *rq; 2759 2754 ··· 2764 2757 if (!rq || rq->q != q) 2765 2758 return NULL; 2766 2759 2767 - if (blk_mq_get_hctx_type(bio->bi_opf) != rq->mq_hctx->type) 2760 + if (blk_mq_attempt_bio_merge(q, *bio, nsegs)) { 2761 + *bio = NULL; 2768 2762 return NULL; 2769 - if (op_is_flush(rq->cmd_flags) != op_is_flush(bio->bi_opf)) 2763 + } 2764 + 2765 + rq_qos_throttle(q, *bio); 2766 + 2767 + if (blk_mq_get_hctx_type((*bio)->bi_opf) != rq->mq_hctx->type) 2768 + return NULL; 2769 + if (op_is_flush(rq->cmd_flags) != op_is_flush((*bio)->bi_opf)) 2770 2770 return NULL; 2771 2771 2772 - rq->cmd_flags = bio->bi_opf; 2772 + rq->cmd_flags = (*bio)->bi_opf; 2773 2773 plug->cached_rq = rq_list_next(rq); 2774 2774 INIT_LIST_HEAD(&rq->queuelist); 2775 2775 return rq; ··· 2814 2800 if (!bio_integrity_prep(bio)) 2815 2801 return; 2816 2802 2817 - if (blk_mq_attempt_bio_merge(q, bio, nr_segs)) 2818 - return; 2819 - 2820 - rq_qos_throttle(q, bio); 2821 - 2822 - rq = blk_mq_get_cached_request(q, plug, bio); 2803 
+ rq = blk_mq_get_cached_request(q, plug, &bio, nr_segs); 2823 2804 if (!rq) { 2824 - rq = blk_mq_get_new_requests(q, plug, bio); 2805 + if (!bio) 2806 + return; 2807 + rq = blk_mq_get_new_requests(q, plug, bio, nr_segs); 2825 2808 if (unlikely(!rq)) 2826 2809 return; 2827 2810 }
+2
drivers/atm/firestream.c
··· 1676 1676 dev->hw_base = pci_resource_start(pci_dev, 0); 1677 1677 1678 1678 dev->base = ioremap(dev->hw_base, 0x1000); 1679 + if (!dev->base) 1680 + return 1; 1679 1681 1680 1682 reset_chip (dev); 1681 1683
+10 -14
drivers/auxdisplay/lcd2s.c
··· 238 238 if (buf[1] > 7) 239 239 return 1; 240 240 241 - i = 0; 241 + i = 2; 242 242 shift = 0; 243 243 value = 0; 244 244 while (*esc && i < LCD2S_CHARACTER_SIZE + 2) { ··· 298 298 I2C_FUNC_SMBUS_WRITE_BLOCK_DATA)) 299 299 return -EIO; 300 300 301 + lcd2s = devm_kzalloc(&i2c->dev, sizeof(*lcd2s), GFP_KERNEL); 302 + if (!lcd2s) 303 + return -ENOMEM; 304 + 301 305 /* Test, if the display is responding */ 302 306 err = lcd2s_i2c_smbus_write_byte(i2c, LCD2S_CMD_DISPLAY_OFF); 303 307 if (err < 0) ··· 311 307 if (!lcd) 312 308 return -ENOMEM; 313 309 314 - lcd2s = kzalloc(sizeof(struct lcd2s_data), GFP_KERNEL); 315 - if (!lcd2s) { 316 - err = -ENOMEM; 317 - goto fail1; 318 - } 319 - 320 310 lcd->drvdata = lcd2s; 321 311 lcd2s->i2c = i2c; 322 312 lcd2s->charlcd = lcd; ··· 319 321 err = device_property_read_u32(&i2c->dev, "display-height-chars", 320 322 &lcd->height); 321 323 if (err) 322 - goto fail2; 324 + goto fail1; 323 325 324 326 err = device_property_read_u32(&i2c->dev, "display-width-chars", 325 327 &lcd->width); 326 328 if (err) 327 - goto fail2; 329 + goto fail1; 328 330 329 331 lcd->ops = &lcd2s_ops; 330 332 331 333 err = charlcd_register(lcd2s->charlcd); 332 334 if (err) 333 - goto fail2; 335 + goto fail1; 334 336 335 337 i2c_set_clientdata(i2c, lcd2s); 336 338 return 0; 337 339 338 - fail2: 339 - kfree(lcd2s); 340 340 fail1: 341 - kfree(lcd); 341 + charlcd_free(lcd2s->charlcd); 342 342 return err; 343 343 } 344 344 ··· 345 349 struct lcd2s_data *lcd2s = i2c_get_clientdata(i2c); 346 350 347 351 charlcd_unregister(lcd2s->charlcd); 348 - kfree(lcd2s->charlcd); 352 + charlcd_free(lcd2s->charlcd); 349 353 return 0; 350 354 } 351 355
+9 -11
drivers/block/virtio_blk.c
··· 76 76 */ 77 77 refcount_t refs; 78 78 79 - /* What host tells us, plus 2 for header & tailer. */ 80 - unsigned int sg_elems; 81 - 82 79 /* Ida index - used to track minor number allocations. */ 83 80 int index; 84 81 ··· 318 321 bool notify = false; 319 322 blk_status_t status; 320 323 int err; 321 - 322 - BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems); 323 324 324 325 status = virtblk_setup_cmd(vblk->vdev, req, vbr); 325 326 if (unlikely(status)) ··· 778 783 /* Prevent integer overflows and honor max vq size */ 779 784 sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2); 780 785 781 - /* We need extra sg elements at head and tail. */ 782 - sg_elems += 2; 783 786 vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL); 784 787 if (!vblk) { 785 788 err = -ENOMEM; ··· 789 796 mutex_init(&vblk->vdev_mutex); 790 797 791 798 vblk->vdev = vdev; 792 - vblk->sg_elems = sg_elems; 793 799 794 800 INIT_WORK(&vblk->config_work, virtblk_config_changed_work); 795 801 ··· 845 853 set_disk_ro(vblk->disk, 1); 846 854 847 855 /* We can handle whatever the host told us to handle. */ 848 - blk_queue_max_segments(q, vblk->sg_elems-2); 856 + blk_queue_max_segments(q, sg_elems); 849 857 850 858 /* No real sector limit. */ 851 859 blk_queue_max_hw_sectors(q, -1U); ··· 917 925 918 926 virtio_cread(vdev, struct virtio_blk_config, max_discard_seg, 919 927 &v); 928 + 929 + /* 930 + * max_discard_seg == 0 is out of spec but we always 931 + * handled it. 932 + */ 933 + if (!v) 934 + v = sg_elems; 920 935 blk_queue_max_discard_segments(q, 921 - min_not_zero(v, 922 - MAX_DISCARD_SEGMENTS)); 936 + min(v, MAX_DISCARD_SEGMENTS)); 923 937 924 938 blk_queue_flag_set(QUEUE_FLAG_DISCARD, q); 925 939 }
+37 -26
drivers/block/xen-blkfront.c
··· 1288 1288 rinfo->ring_ref[i] = GRANT_INVALID_REF; 1289 1289 } 1290 1290 } 1291 - free_pages((unsigned long)rinfo->ring.sring, get_order(info->nr_ring_pages * XEN_PAGE_SIZE)); 1291 + free_pages_exact(rinfo->ring.sring, 1292 + info->nr_ring_pages * XEN_PAGE_SIZE); 1292 1293 rinfo->ring.sring = NULL; 1293 1294 1294 1295 if (rinfo->irq) ··· 1373 1372 return BLKIF_RSP_OKAY; 1374 1373 } 1375 1374 1376 - static bool blkif_completion(unsigned long *id, 1377 - struct blkfront_ring_info *rinfo, 1378 - struct blkif_response *bret) 1375 + /* 1376 + * Return values: 1377 + * 1 response processed. 1378 + * 0 missing further responses. 1379 + * -1 error while processing. 1380 + */ 1381 + static int blkif_completion(unsigned long *id, 1382 + struct blkfront_ring_info *rinfo, 1383 + struct blkif_response *bret) 1379 1384 { 1380 1385 int i = 0; 1381 1386 struct scatterlist *sg; ··· 1404 1397 1405 1398 /* Wait the second response if not yet here. */ 1406 1399 if (s2->status < REQ_DONE) 1407 - return false; 1400 + return 0; 1408 1401 1409 1402 bret->status = blkif_get_final_status(s->status, 1410 1403 s2->status); ··· 1455 1448 } 1456 1449 /* Add the persistent grant into the list of free grants */ 1457 1450 for (i = 0; i < num_grant; i++) { 1458 - if (gnttab_query_foreign_access(s->grants_used[i]->gref)) { 1451 + if (!gnttab_try_end_foreign_access(s->grants_used[i]->gref)) { 1459 1452 /* 1460 1453 * If the grant is still mapped by the backend (the 1461 1454 * backend has chosen to make this grant persistent) 1462 1455 * we add it at the head of the list, so it will be 1463 1456 * reused first. 
1464 1457 */ 1465 - if (!info->feature_persistent) 1466 - pr_alert_ratelimited("backed has not unmapped grant: %u\n", 1467 - s->grants_used[i]->gref); 1458 + if (!info->feature_persistent) { 1459 + pr_alert("backed has not unmapped grant: %u\n", 1460 + s->grants_used[i]->gref); 1461 + return -1; 1462 + } 1468 1463 list_add(&s->grants_used[i]->node, &rinfo->grants); 1469 1464 rinfo->persistent_gnts_c++; 1470 1465 } else { 1471 1466 /* 1472 - * If the grant is not mapped by the backend we end the 1473 - * foreign access and add it to the tail of the list, 1474 - * so it will not be picked again unless we run out of 1475 - * persistent grants. 1467 + * If the grant is not mapped by the backend we add it 1468 + * to the tail of the list, so it will not be picked 1469 + * again unless we run out of persistent grants. 1476 1470 */ 1477 - gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL); 1478 1471 s->grants_used[i]->gref = GRANT_INVALID_REF; 1479 1472 list_add_tail(&s->grants_used[i]->node, &rinfo->grants); 1480 1473 } 1481 1474 } 1482 1475 if (s->req.operation == BLKIF_OP_INDIRECT) { 1483 1476 for (i = 0; i < INDIRECT_GREFS(num_grant); i++) { 1484 - if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) { 1485 - if (!info->feature_persistent) 1486 - pr_alert_ratelimited("backed has not unmapped grant: %u\n", 1487 - s->indirect_grants[i]->gref); 1477 + if (!gnttab_try_end_foreign_access(s->indirect_grants[i]->gref)) { 1478 + if (!info->feature_persistent) { 1479 + pr_alert("backed has not unmapped grant: %u\n", 1480 + s->indirect_grants[i]->gref); 1481 + return -1; 1482 + } 1488 1483 list_add(&s->indirect_grants[i]->node, &rinfo->grants); 1489 1484 rinfo->persistent_gnts_c++; 1490 1485 } else { 1491 1486 struct page *indirect_page; 1492 1487 1493 - gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL); 1494 1488 /* 1495 1489 * Add the used indirect page back to the list of 1496 1490 * available pages for indirect grefs. 
··· 1506 1498 } 1507 1499 } 1508 1500 1509 - return true; 1501 + return 1; 1510 1502 } 1511 1503 1512 1504 static irqreturn_t blkif_interrupt(int irq, void *dev_id) ··· 1572 1564 } 1573 1565 1574 1566 if (bret.operation != BLKIF_OP_DISCARD) { 1567 + int ret; 1568 + 1575 1569 /* 1576 1570 * We may need to wait for an extra response if the 1577 1571 * I/O request is split in 2 1578 1572 */ 1579 - if (!blkif_completion(&id, rinfo, &bret)) 1573 + ret = blkif_completion(&id, rinfo, &bret); 1574 + if (!ret) 1580 1575 continue; 1576 + if (unlikely(ret < 0)) 1577 + goto err; 1581 1578 } 1582 1579 1583 1580 if (add_id_to_freelist(rinfo, id)) { ··· 1689 1676 for (i = 0; i < info->nr_ring_pages; i++) 1690 1677 rinfo->ring_ref[i] = GRANT_INVALID_REF; 1691 1678 1692 - sring = (struct blkif_sring *)__get_free_pages(GFP_NOIO | __GFP_HIGH, 1693 - get_order(ring_size)); 1679 + sring = alloc_pages_exact(ring_size, GFP_NOIO); 1694 1680 if (!sring) { 1695 1681 xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring"); 1696 1682 return -ENOMEM; ··· 1699 1687 1700 1688 err = xenbus_grant_ring(dev, rinfo->ring.sring, info->nr_ring_pages, gref); 1701 1689 if (err < 0) { 1702 - free_pages((unsigned long)sring, get_order(ring_size)); 1690 + free_pages_exact(sring, ring_size); 1703 1691 rinfo->ring.sring = NULL; 1704 1692 goto fail; 1705 1693 } ··· 2544 2532 list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants, 2545 2533 node) { 2546 2534 if (gnt_list_entry->gref == GRANT_INVALID_REF || 2547 - gnttab_query_foreign_access(gnt_list_entry->gref)) 2535 + !gnttab_try_end_foreign_access(gnt_list_entry->gref)) 2548 2536 continue; 2549 2537 2550 2538 list_del(&gnt_list_entry->node); 2551 - gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL); 2552 2539 rinfo->persistent_gnts_c--; 2553 2540 gnt_list_entry->gref = GRANT_INVALID_REF; 2554 2541 list_add_tail(&gnt_list_entry->node, &rinfo->grants);
+7
drivers/char/virtio_console.c
··· 1957 1957 list_del(&portdev->list); 1958 1958 spin_unlock_irq(&pdrvdata_lock); 1959 1959 1960 + /* Device is going away, exit any polling for buffers */ 1961 + virtio_break_device(vdev); 1962 + if (use_multiport(portdev)) 1963 + flush_work(&portdev->control_work); 1964 + else 1965 + flush_work(&portdev->config_work); 1966 + 1960 1967 /* Disable interrupts for vqs */ 1961 1968 virtio_reset_device(vdev); 1962 1969 /* Finish up work that's lined up */
+2
drivers/clk/Kconfig
··· 231 231 232 232 config COMMON_CLK_LAN966X 233 233 bool "Generic Clock Controller driver for LAN966X SoC" 234 + depends on HAS_IOMEM 235 + depends on OF 234 236 help 235 237 This driver provides support for Generic Clock Controller(GCK) on 236 238 LAN966X SoC. GCK generates and supplies clock to various peripherals
+4 -1
drivers/clk/qcom/dispcc-sc7180.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2019, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2019, 2022, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/clk-provider.h> ··· 625 625 626 626 static struct gdsc mdss_gdsc = { 627 627 .gdscr = 0x3000, 628 + .en_rest_wait_val = 0x2, 629 + .en_few_wait_val = 0x2, 630 + .clk_dis_wait_val = 0xf, 628 631 .pd = { 629 632 .name = "mdss_gdsc", 630 633 },
+4 -1
drivers/clk/qcom/dispcc-sc7280.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2021, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2021-2022, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/clk-provider.h> ··· 787 787 788 788 static struct gdsc disp_cc_mdss_core_gdsc = { 789 789 .gdscr = 0x1004, 790 + .en_rest_wait_val = 0x2, 791 + .en_few_wait_val = 0x2, 792 + .clk_dis_wait_val = 0xf, 790 793 .pd = { 791 794 .name = "disp_cc_mdss_core_gdsc", 792 795 },
+4 -1
drivers/clk/qcom/dispcc-sm8250.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2018-2020, 2022, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/clk-provider.h> ··· 1126 1126 1127 1127 static struct gdsc mdss_gdsc = { 1128 1128 .gdscr = 0x3000, 1129 + .en_rest_wait_val = 0x2, 1130 + .en_few_wait_val = 0x2, 1131 + .clk_dis_wait_val = 0xf, 1129 1132 .pd = { 1130 1133 .name = "mdss_gdsc", 1131 1134 },
+21 -5
drivers/clk/qcom/gdsc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/bitops.h> ··· 35 35 #define CFG_GDSCR_OFFSET 0x4 36 36 37 37 /* Wait 2^n CXO cycles between all states. Here, n=2 (4 cycles). */ 38 - #define EN_REST_WAIT_VAL (0x2 << 20) 39 - #define EN_FEW_WAIT_VAL (0x8 << 16) 40 - #define CLK_DIS_WAIT_VAL (0x2 << 12) 38 + #define EN_REST_WAIT_VAL 0x2 39 + #define EN_FEW_WAIT_VAL 0x8 40 + #define CLK_DIS_WAIT_VAL 0x2 41 + 42 + /* Transition delay shifts */ 43 + #define EN_REST_WAIT_SHIFT 20 44 + #define EN_FEW_WAIT_SHIFT 16 45 + #define CLK_DIS_WAIT_SHIFT 12 41 46 42 47 #define RETAIN_MEM BIT(14) 43 48 #define RETAIN_PERIPH BIT(13) ··· 385 380 */ 386 381 mask = HW_CONTROL_MASK | SW_OVERRIDE_MASK | 387 382 EN_REST_WAIT_MASK | EN_FEW_WAIT_MASK | CLK_DIS_WAIT_MASK; 388 - val = EN_REST_WAIT_VAL | EN_FEW_WAIT_VAL | CLK_DIS_WAIT_VAL; 383 + 384 + if (!sc->en_rest_wait_val) 385 + sc->en_rest_wait_val = EN_REST_WAIT_VAL; 386 + if (!sc->en_few_wait_val) 387 + sc->en_few_wait_val = EN_FEW_WAIT_VAL; 388 + if (!sc->clk_dis_wait_val) 389 + sc->clk_dis_wait_val = CLK_DIS_WAIT_VAL; 390 + 391 + val = sc->en_rest_wait_val << EN_REST_WAIT_SHIFT | 392 + sc->en_few_wait_val << EN_FEW_WAIT_SHIFT | 393 + sc->clk_dis_wait_val << CLK_DIS_WAIT_SHIFT; 394 + 389 395 ret = regmap_update_bits(sc->regmap, sc->gdscr, mask, val); 390 396 if (ret) 391 397 return ret;
+7 -1
drivers/clk/qcom/gdsc.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * Copyright (c) 2015, 2017-2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2015, 2017-2018, 2022, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #ifndef __QCOM_GDSC_H__ ··· 22 22 * @cxcs: offsets of branch registers to toggle mem/periph bits in 23 23 * @cxc_count: number of @cxcs 24 24 * @pwrsts: Possible powerdomain power states 25 + * @en_rest_wait_val: transition delay value for receiving enr ack signal 26 + * @en_few_wait_val: transition delay value for receiving enf ack signal 27 + * @clk_dis_wait_val: transition delay value for halting clock 25 28 * @resets: ids of resets associated with this gdsc 26 29 * @reset_count: number of @resets 27 30 * @rcdev: reset controller ··· 39 36 unsigned int clamp_io_ctrl; 40 37 unsigned int *cxcs; 41 38 unsigned int cxc_count; 39 + unsigned int en_rest_wait_val; 40 + unsigned int en_few_wait_val; 41 + unsigned int clk_dis_wait_val; 42 42 const u8 pwrsts; 43 43 /* Powerdomain allowable state bitfields */ 44 44 #define PWRSTS_OFF BIT(0)
+1 -2
drivers/clocksource/timer-ti-dm-systimer.c
··· 241 241 bool quirk_unreliable_oscillator = false; 242 242 243 243 /* Quirk unreliable 32 KiHz oscillator with incomplete dts */ 244 - if (of_machine_is_compatible("ti,omap3-beagle-ab4") || 245 - of_machine_is_compatible("timll,omap3-devkit8000")) { 244 + if (of_machine_is_compatible("ti,omap3-beagle-ab4")) { 246 245 quirk_unreliable_oscillator = true; 247 246 counter_32k = -ENODEV; 248 247 }
+1 -1
drivers/firmware/arm_scmi/driver.c
··· 2112 2112 } 2113 2113 module_exit(scmi_driver_exit); 2114 2114 2115 - MODULE_ALIAS("platform: arm-scmi"); 2115 + MODULE_ALIAS("platform:arm-scmi"); 2116 2116 MODULE_AUTHOR("Sudeep Holla <sudeep.holla@arm.com>"); 2117 2117 MODULE_DESCRIPTION("ARM SCMI protocol driver"); 2118 2118 MODULE_LICENSE("GPL v2");
+10 -7
drivers/firmware/efi/libstub/riscv-stub.c
··· 25 25 26 26 static u32 hartid; 27 27 28 - static u32 get_boot_hartid_from_fdt(void) 28 + static int get_boot_hartid_from_fdt(void) 29 29 { 30 30 const void *fdt; 31 31 int chosen_node, len; ··· 33 33 34 34 fdt = get_efi_config_table(DEVICE_TREE_GUID); 35 35 if (!fdt) 36 - return U32_MAX; 36 + return -EINVAL; 37 37 38 38 chosen_node = fdt_path_offset(fdt, "/chosen"); 39 39 if (chosen_node < 0) 40 - return U32_MAX; 40 + return -EINVAL; 41 41 42 42 prop = fdt_getprop((void *)fdt, chosen_node, "boot-hartid", &len); 43 43 if (!prop || len != sizeof(u32)) 44 - return U32_MAX; 44 + return -EINVAL; 45 45 46 - return fdt32_to_cpu(*prop); 46 + hartid = fdt32_to_cpu(*prop); 47 + return 0; 47 48 } 48 49 49 50 efi_status_t check_platform_features(void) 50 51 { 51 - hartid = get_boot_hartid_from_fdt(); 52 - if (hartid == U32_MAX) { 52 + int ret; 53 + 54 + ret = get_boot_hartid_from_fdt(); 55 + if (ret) { 53 56 efi_err("/chosen/boot-hartid missing or invalid!\n"); 54 57 return EFI_UNSUPPORTED; 55 58 }
+4 -1
drivers/firmware/efi/vars.c
··· 742 742 { 743 743 const struct efivar_operations *ops; 744 744 efi_status_t status; 745 + unsigned long varsize; 745 746 746 747 if (!__efivars) 747 748 return -EINVAL; ··· 765 764 return efivar_entry_set_nonblocking(name, vendor, attributes, 766 765 size, data); 767 766 767 + varsize = size + ucs2_strsize(name, 1024); 768 768 if (!block) { 769 769 if (down_trylock(&efivars_lock)) 770 770 return -EBUSY; 771 + status = check_var_size_nonblocking(attributes, varsize); 771 772 } else { 772 773 if (down_interruptible(&efivars_lock)) 773 774 return -EINTR; 775 + status = check_var_size(attributes, varsize); 774 776 } 775 777 776 - status = check_var_size(attributes, size + ucs2_strsize(name, 1024)); 777 778 if (status != EFI_SUCCESS) { 778 779 up(&efivars_lock); 779 780 return -ENOSPC;
+2 -2
drivers/gpio/gpio-sim.c
··· 547 547 * 548 548 * So we need to store the pointer to the parent struct here. We can 549 549 * dereference it anywhere we need with no checks and no locking as 550 - * it's guaranteed to survive the childred and protected by configfs 550 + * it's guaranteed to survive the children and protected by configfs 551 551 * locks. 552 552 * 553 553 * Same for other structures. ··· 1322 1322 kfree(hog); 1323 1323 } 1324 1324 1325 - struct configfs_item_operations gpio_sim_hog_config_item_ops = { 1325 + static struct configfs_item_operations gpio_sim_hog_config_item_ops = { 1326 1326 .release = gpio_sim_hog_config_item_release, 1327 1327 }; 1328 1328
+2
drivers/gpio/gpio-tegra186.c
··· 1075 1075 .ports = tegra241_main_ports, 1076 1076 .name = "tegra241-gpio", 1077 1077 .instance = 0, 1078 + .num_irqs_per_bank = 8, 1078 1079 }; 1079 1080 1080 1081 #define TEGRA241_AON_GPIO_PORT(_name, _bank, _port, _pins) \ ··· 1096 1095 .ports = tegra241_aon_ports, 1097 1096 .name = "tegra241-gpio-aon", 1098 1097 .instance = 1, 1098 + .num_irqs_per_bank = 8, 1099 1099 }; 1100 1100 1101 1101 static const struct of_device_id tegra186_gpio_of_match[] = {
+19 -5
drivers/gpio/gpio-ts4900.c
··· 1 1 /* 2 2 * Digital I/O driver for Technologic Systems I2C FPGA Core 3 3 * 4 - * Copyright (C) 2015 Technologic Systems 4 + * Copyright (C) 2015, 2018 Technologic Systems 5 5 * Copyright (C) 2016 Savoir-Faire Linux 6 6 * 7 7 * This program is free software; you can redistribute it and/or ··· 55 55 { 56 56 struct ts4900_gpio_priv *priv = gpiochip_get_data(chip); 57 57 58 - /* 59 - * This will clear the output enable bit, the other bits are 60 - * dontcare when this is cleared 58 + /* Only clear the OE bit here, requires a RMW. Prevents potential issue 59 + * with OE and data getting to the physical pin at different times. 61 60 */ 62 - return regmap_write(priv->regmap, offset, 0); 61 + return regmap_update_bits(priv->regmap, offset, TS4900_GPIO_OE, 0); 63 62 } 64 63 65 64 static int ts4900_gpio_direction_output(struct gpio_chip *chip, 66 65 unsigned int offset, int value) 67 66 { 68 67 struct ts4900_gpio_priv *priv = gpiochip_get_data(chip); 68 + unsigned int reg; 69 69 int ret; 70 + 71 + /* If changing from an input to an output, we need to first set the 72 + * proper data bit to what is requested and then set OE bit. This 73 + * prevents a glitch that can occur on the IO line 74 + */ 75 + regmap_read(priv->regmap, offset, &reg); 76 + if (!(reg & TS4900_GPIO_OE)) { 77 + if (value) 78 + reg = TS4900_GPIO_OUT; 79 + else 80 + reg &= ~TS4900_GPIO_OUT; 81 + 82 + regmap_write(priv->regmap, offset, reg); 83 + } 70 84 71 85 if (value) 72 86 ret = regmap_write(priv->regmap, offset, TS4900_GPIO_OE |
+4 -2
drivers/gpio/gpiolib-acpi.c
··· 307 307 if (IS_ERR(desc)) 308 308 return desc; 309 309 310 - ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout); 310 + /* ACPI uses hundredths of milliseconds units */ 311 + ret = gpio_set_debounce_timeout(desc, agpio->debounce_timeout * 10); 311 312 if (ret) 312 313 dev_warn(chip->parent, 313 314 "Failed to set debounce-timeout for pin 0x%04X, err %d\n", ··· 1036 1035 if (ret < 0) 1037 1036 return ret; 1038 1037 1039 - ret = gpio_set_debounce_timeout(desc, info.debounce); 1038 + /* ACPI uses hundredths of milliseconds units */ 1039 + ret = gpio_set_debounce_timeout(desc, info.debounce * 10); 1040 1040 if (ret) 1041 1041 return ret; 1042 1042
+10 -10
drivers/gpio/gpiolib.c
··· 1701 1701 */ 1702 1702 int gpiochip_generic_request(struct gpio_chip *gc, unsigned int offset) 1703 1703 { 1704 - #ifdef CONFIG_PINCTRL 1705 - if (list_empty(&gc->gpiodev->pin_ranges)) 1706 - return 0; 1707 - #endif 1708 - 1709 1704 return pinctrl_gpio_request(gc->gpiodev->base + offset); 1710 1705 } 1711 1706 EXPORT_SYMBOL_GPL(gpiochip_generic_request); ··· 1712 1717 */ 1713 1718 void gpiochip_generic_free(struct gpio_chip *gc, unsigned int offset) 1714 1719 { 1715 - #ifdef CONFIG_PINCTRL 1716 - if (list_empty(&gc->gpiodev->pin_ranges)) 1717 - return; 1718 - #endif 1719 - 1720 1720 pinctrl_gpio_free(gc->gpiodev->base + offset); 1721 1721 } 1722 1722 EXPORT_SYMBOL_GPL(gpiochip_generic_free); ··· 2217 2227 return gpio_set_config_with_argument_optional(desc, bias, arg); 2218 2228 } 2219 2229 2230 + /** 2231 + * gpio_set_debounce_timeout() - Set debounce timeout 2232 + * @desc: GPIO descriptor to set the debounce timeout 2233 + * @debounce: Debounce timeout in microseconds 2234 + * 2235 + * The function calls the certain GPIO driver to set debounce timeout 2236 + * in the hardware. 2237 + * 2238 + * Returns 0 on success, or negative error code otherwise. 2239 + */ 2220 2240 int gpio_set_debounce_timeout(struct gpio_desc *desc, unsigned int debounce) 2221 2241 { 2222 2242 return gpio_set_config_with_argument_optional(desc,
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 777 777 amdgpu_vm_eviction_lock(vm); 778 778 ret = !vm->evicting; 779 779 amdgpu_vm_eviction_unlock(vm); 780 - return ret; 780 + 781 + return ret && list_empty(&vm->evicted); 781 782 } 782 783 783 784 /**
+1
drivers/gpu/drm/arm/Kconfig
··· 6 6 depends on DRM && OF && (ARM || ARM64 || COMPILE_TEST) 7 7 depends on COMMON_CLK 8 8 select DRM_KMS_HELPER 9 + select DRM_GEM_CMA_HELPER 9 10 help 10 11 Choose this option if you have an ARM High Definition Colour LCD 11 12 controller.
+3 -2
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 1802 1802 1803 1803 static void ti_sn65dsi86_runtime_disable(void *data) 1804 1804 { 1805 + pm_runtime_dont_use_autosuspend(data); 1805 1806 pm_runtime_disable(data); 1806 1807 } 1807 1808 ··· 1862 1861 "failed to get reference clock\n"); 1863 1862 1864 1863 pm_runtime_enable(dev); 1864 + pm_runtime_set_autosuspend_delay(pdata->dev, 500); 1865 + pm_runtime_use_autosuspend(pdata->dev); 1865 1866 ret = devm_add_action_or_reset(dev, ti_sn65dsi86_runtime_disable, dev); 1866 1867 if (ret) 1867 1868 return ret; 1868 - pm_runtime_set_autosuspend_delay(pdata->dev, 500); 1869 - pm_runtime_use_autosuspend(pdata->dev); 1870 1869 1871 1870 ti_sn65dsi86_debugfs_init(pdata); 1872 1871
+3
drivers/gpu/drm/drm_connector.c
··· 2330 2330 void drm_connector_set_vrr_capable_property( 2331 2331 struct drm_connector *connector, bool capable) 2332 2332 { 2333 + if (!connector->vrr_capable_property) 2334 + return; 2335 + 2333 2336 drm_object_property_set_value(&connector->base, 2334 2337 connector->vrr_capable_property, 2335 2338 capable);
+3 -9
drivers/gpu/drm/exynos/exynos7_drm_decon.c
··· 678 678 struct device *dev = &pdev->dev; 679 679 struct decon_context *ctx; 680 680 struct device_node *i80_if_timings; 681 - struct resource *res; 682 681 int ret; 683 682 684 683 if (!dev->of_node) ··· 727 728 goto err_iounmap; 728 729 } 729 730 730 - res = platform_get_resource_byname(pdev, IORESOURCE_IRQ, 731 - ctx->i80_if ? "lcd_sys" : "vsync"); 732 - if (!res) { 733 - dev_err(dev, "irq request failed.\n"); 734 - ret = -ENXIO; 731 + ret = platform_get_irq_byname(pdev, ctx->i80_if ? "lcd_sys" : "vsync"); 732 + if (ret < 0) 735 733 goto err_iounmap; 736 - } 737 734 738 - ret = devm_request_irq(dev, res->start, decon_irq_handler, 739 - 0, "drm_decon", ctx); 735 + ret = devm_request_irq(dev, ret, decon_irq_handler, 0, "drm_decon", ctx); 740 736 if (ret) { 741 737 dev_err(dev, "irq request failed.\n"); 742 738 goto err_iounmap;
+4 -2
drivers/gpu/drm/exynos/exynos_drm_dsi.c
··· 1334 1334 int ret; 1335 1335 int te_gpio_irq; 1336 1336 1337 - dsi->te_gpio = devm_gpiod_get_optional(dsi->dev, "te", GPIOD_IN); 1338 - if (IS_ERR(dsi->te_gpio)) { 1337 + dsi->te_gpio = gpiod_get_optional(panel, "te", GPIOD_IN); 1338 + if (!dsi->te_gpio) { 1339 + return 0; 1340 + } else if (IS_ERR(dsi->te_gpio)) { 1339 1341 dev_err(dsi->dev, "gpio request failed with %ld\n", 1340 1342 PTR_ERR(dsi->te_gpio)); 1341 1343 return PTR_ERR(dsi->te_gpio);
+5 -8
drivers/gpu/drm/exynos/exynos_drm_fimc.c
··· 1267 1267 struct exynos_drm_ipp_formats *formats; 1268 1268 struct device *dev = &pdev->dev; 1269 1269 struct fimc_context *ctx; 1270 - struct resource *res; 1271 1270 int ret; 1272 1271 int i, j, num_limits, num_formats; 1273 1272 ··· 1329 1330 return PTR_ERR(ctx->regs); 1330 1331 1331 1332 /* resource irq */ 1332 - res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 1333 - if (!res) { 1334 - dev_err(dev, "failed to request irq resource.\n"); 1335 - return -ENOENT; 1336 - } 1333 + ret = platform_get_irq(pdev, 0); 1334 + if (ret < 0) 1335 + return ret; 1337 1336 1338 - ret = devm_request_irq(dev, res->start, fimc_irq_handler, 1339 - 0, dev_name(dev), ctx); 1337 + ret = devm_request_irq(dev, ret, fimc_irq_handler, 1338 + 0, dev_name(dev), ctx); 1340 1339 if (ret < 0) { 1341 1340 dev_err(dev, "failed to request irq.\n"); 1342 1341 return ret;
+4 -9
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 1133 1133 struct device *dev = &pdev->dev; 1134 1134 struct fimd_context *ctx; 1135 1135 struct device_node *i80_if_timings; 1136 - struct resource *res; 1137 1136 int ret; 1138 1137 1139 1138 if (!dev->of_node) ··· 1205 1206 if (IS_ERR(ctx->regs)) 1206 1207 return PTR_ERR(ctx->regs); 1207 1208 1208 - res = platform_get_resource_byname(pdev, IORESOURCE_IRQ, 1209 - ctx->i80_if ? "lcd_sys" : "vsync"); 1210 - if (!res) { 1211 - dev_err(dev, "irq request failed.\n"); 1212 - return -ENXIO; 1213 - } 1209 + ret = platform_get_irq_byname(pdev, ctx->i80_if ? "lcd_sys" : "vsync"); 1210 + if (ret < 0) 1211 + return ret; 1214 1212 1215 - ret = devm_request_irq(dev, res->start, fimd_irq_handler, 1216 - 0, "drm_fimd", ctx); 1213 + ret = devm_request_irq(dev, ret, fimd_irq_handler, 0, "drm_fimd", ctx); 1217 1214 if (ret) { 1218 1215 dev_err(dev, "irq request failed.\n"); 1219 1216 return ret;
+3 -7
drivers/gpu/drm/exynos/exynos_drm_gsc.c
··· 1220 1220 struct gsc_driverdata *driver_data; 1221 1221 struct exynos_drm_ipp_formats *formats; 1222 1222 struct gsc_context *ctx; 1223 - struct resource *res; 1224 1223 int num_formats, ret, i, j; 1225 1224 1226 1225 ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); ··· 1274 1275 return PTR_ERR(ctx->regs); 1275 1276 1276 1277 /* resource irq */ 1277 - res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 1278 - if (!res) { 1279 - dev_err(dev, "failed to request irq resource.\n"); 1280 - return -ENOENT; 1281 - } 1278 + ctx->irq = platform_get_irq(pdev, 0); 1279 + if (ctx->irq < 0) 1280 + return ctx->irq; 1282 1281 1283 - ctx->irq = res->start; 1284 1282 ret = devm_request_irq(dev, ctx->irq, gsc_irq_handler, 0, 1285 1283 dev_name(dev), ctx); 1286 1284 if (ret < 0) {
+6 -8
drivers/gpu/drm/exynos/exynos_mixer.c
··· 809 809 return -ENXIO; 810 810 } 811 811 812 - res = platform_get_resource(mixer_ctx->pdev, IORESOURCE_IRQ, 0); 813 - if (res == NULL) { 814 - dev_err(dev, "get interrupt resource failed.\n"); 815 - return -ENXIO; 816 - } 812 + ret = platform_get_irq(mixer_ctx->pdev, 0); 813 + if (ret < 0) 814 + return ret; 815 + mixer_ctx->irq = ret; 817 816 818 - ret = devm_request_irq(dev, res->start, mixer_irq_handler, 819 - 0, "drm_mixer", mixer_ctx); 817 + ret = devm_request_irq(dev, mixer_ctx->irq, mixer_irq_handler, 818 + 0, "drm_mixer", mixer_ctx); 820 819 if (ret) { 821 820 dev_err(dev, "request interrupt failed.\n"); 822 821 return ret; 823 822 } 824 - mixer_ctx->irq = res->start; 825 823 826 824 return 0; 827 825 }
+14 -2
drivers/gpu/drm/i915/display/intel_psr.c
··· 1406 1406 PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME; 1407 1407 } 1408 1408 1409 + static inline u32 man_trk_ctl_partial_frame_bit_get(struct drm_i915_private *dev_priv) 1410 + { 1411 + return IS_ALDERLAKE_P(dev_priv) ? 1412 + ADLP_PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE : 1413 + PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE; 1414 + } 1415 + 1409 1416 static void psr_force_hw_tracking_exit(struct intel_dp *intel_dp) 1410 1417 { 1411 1418 struct drm_i915_private *dev_priv = dp_to_i915(intel_dp); ··· 1517 1510 { 1518 1511 struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 1519 1512 struct drm_i915_private *dev_priv = to_i915(crtc->base.dev); 1520 - u32 val = PSR2_MAN_TRK_CTL_ENABLE; 1513 + u32 val = 0; 1514 + 1515 + if (!IS_ALDERLAKE_P(dev_priv)) 1516 + val = PSR2_MAN_TRK_CTL_ENABLE; 1517 + 1518 + /* SF partial frame enable has to be set even on full update */ 1519 + val |= man_trk_ctl_partial_frame_bit_get(dev_priv); 1521 1520 1522 1521 if (full_update) { 1523 1522 /* ··· 1543 1530 } else { 1544 1531 drm_WARN_ON(crtc_state->uapi.crtc->dev, clip->y1 % 4 || clip->y2 % 4); 1545 1532 1546 - val |= PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE; 1547 1533 val |= PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(clip->y1 / 4 + 1); 1548 1534 val |= PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(clip->y2 / 4 + 1); 1549 1535 }
+1 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
··· 110 110 { 111 111 u32 request[] = { 112 112 GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST, 113 - SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 2), 113 + SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 1), 114 114 id, 115 115 }; 116 116
+1
drivers/gpu/drm/i915/i915_reg.h
··· 4829 4829 #define ADLP_PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR(val) REG_FIELD_PREP(ADLP_PSR2_MAN_TRK_CTL_SU_REGION_START_ADDR_MASK, val) 4830 4830 #define ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR_MASK REG_GENMASK(12, 0) 4831 4831 #define ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR(val) REG_FIELD_PREP(ADLP_PSR2_MAN_TRK_CTL_SU_REGION_END_ADDR_MASK, val) 4832 + #define ADLP_PSR2_MAN_TRK_CTL_SF_PARTIAL_FRAME_UPDATE REG_BIT(31) 4832 4833 #define ADLP_PSR2_MAN_TRK_CTL_SF_SINGLE_FULL_FRAME REG_BIT(14) 4833 4834 #define ADLP_PSR2_MAN_TRK_CTL_SF_CONTINUOS_FULL_FRAME REG_BIT(13) 4834 4835
+1 -1
drivers/gpu/drm/i915/intel_pch.c
··· 108 108 /* Comet Lake V PCH is based on KBP, which is SPT compatible */ 109 109 return PCH_SPT; 110 110 case INTEL_PCH_ICP_DEVICE_ID_TYPE: 111 + case INTEL_PCH_ICP2_DEVICE_ID_TYPE: 111 112 drm_dbg_kms(&dev_priv->drm, "Found Ice Lake PCH\n"); 112 113 drm_WARN_ON(&dev_priv->drm, !IS_ICELAKE(dev_priv)); 113 114 return PCH_ICP; ··· 124 123 !IS_GEN9_BC(dev_priv)); 125 124 return PCH_TGP; 126 125 case INTEL_PCH_JSP_DEVICE_ID_TYPE: 127 - case INTEL_PCH_JSP2_DEVICE_ID_TYPE: 128 126 drm_dbg_kms(&dev_priv->drm, "Found Jasper Lake PCH\n"); 129 127 drm_WARN_ON(&dev_priv->drm, !IS_JSL_EHL(dev_priv)); 130 128 return PCH_JSP;
+1 -1
drivers/gpu/drm/i915/intel_pch.h
··· 50 50 #define INTEL_PCH_CMP2_DEVICE_ID_TYPE 0x0680 51 51 #define INTEL_PCH_CMP_V_DEVICE_ID_TYPE 0xA380 52 52 #define INTEL_PCH_ICP_DEVICE_ID_TYPE 0x3480 53 + #define INTEL_PCH_ICP2_DEVICE_ID_TYPE 0x3880 53 54 #define INTEL_PCH_MCC_DEVICE_ID_TYPE 0x4B00 54 55 #define INTEL_PCH_TGP_DEVICE_ID_TYPE 0xA080 55 56 #define INTEL_PCH_TGP2_DEVICE_ID_TYPE 0x4380 56 57 #define INTEL_PCH_JSP_DEVICE_ID_TYPE 0x4D80 57 - #define INTEL_PCH_JSP2_DEVICE_ID_TYPE 0x3880 58 58 #define INTEL_PCH_ADP_DEVICE_ID_TYPE 0x7A80 59 59 #define INTEL_PCH_ADP2_DEVICE_ID_TYPE 0x5180 60 60 #define INTEL_PCH_ADP3_DEVICE_ID_TYPE 0x7A00
+1
drivers/gpu/drm/panel/Kconfig
··· 106 106 depends on PM 107 107 select VIDEOMODE_HELPERS 108 108 select DRM_DP_AUX_BUS 109 + select DRM_DP_HELPER 109 110 help 110 111 DRM panel driver for dumb eDP panels that need at most a regulator and 111 112 a GPIO to be powered up. Optionally a backlight can be attached so
+4 -4
drivers/gpu/drm/sun4i/sun8i_mixer.h
··· 111 111 /* format 13 is semi-planar YUV411 VUVU */ 112 112 #define SUN8I_MIXER_FBFMT_YUV411 14 113 113 /* format 15 doesn't exist */ 114 - /* format 16 is P010 YVU */ 115 - #define SUN8I_MIXER_FBFMT_P010_YUV 17 116 - /* format 18 is P210 YVU */ 117 - #define SUN8I_MIXER_FBFMT_P210_YUV 19 114 + #define SUN8I_MIXER_FBFMT_P010_YUV 16 115 + /* format 17 is P010 YVU */ 116 + #define SUN8I_MIXER_FBFMT_P210_YUV 18 117 + /* format 19 is P210 YVU */ 118 118 /* format 20 is packed YVU444 10-bit */ 119 119 /* format 21 is packed YUV444 10-bit */ 120 120
+4 -1
drivers/hid/hid-debug.c
··· 860 860 [KEY_F22] = "F22", [KEY_F23] = "F23", 861 861 [KEY_F24] = "F24", [KEY_PLAYCD] = "PlayCD", 862 862 [KEY_PAUSECD] = "PauseCD", [KEY_PROG3] = "Prog3", 863 - [KEY_PROG4] = "Prog4", [KEY_SUSPEND] = "Suspend", 863 + [KEY_PROG4] = "Prog4", 864 + [KEY_ALL_APPLICATIONS] = "AllApplications", 865 + [KEY_SUSPEND] = "Suspend", 864 866 [KEY_CLOSE] = "Close", [KEY_PLAY] = "Play", 865 867 [KEY_FASTFORWARD] = "FastForward", [KEY_BASSBOOST] = "BassBoost", 866 868 [KEY_PRINT] = "Print", [KEY_HP] = "HP", ··· 971 969 [KEY_ASSISTANT] = "Assistant", 972 970 [KEY_KBD_LAYOUT_NEXT] = "KbdLayoutNext", 973 971 [KEY_EMOJI_PICKER] = "EmojiPicker", 972 + [KEY_DICTATE] = "Dictate", 974 973 [KEY_BRIGHTNESS_MIN] = "BrightnessMin", 975 974 [KEY_BRIGHTNESS_MAX] = "BrightnessMax", 976 975 [KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+1 -6
drivers/hid/hid-elo.c
··· 228 228 { 229 229 struct elo_priv *priv; 230 230 int ret; 231 - struct usb_device *udev; 232 231 233 232 if (!hid_is_usb(hdev)) 234 233 return -EINVAL; ··· 237 238 return -ENOMEM; 238 239 239 240 INIT_DELAYED_WORK(&priv->work, elo_work); 240 - udev = interface_to_usbdev(to_usb_interface(hdev->dev.parent)); 241 - priv->usbdev = usb_get_dev(udev); 241 + priv->usbdev = interface_to_usbdev(to_usb_interface(hdev->dev.parent)); 242 242 243 243 hid_set_drvdata(hdev, priv); 244 244 ··· 260 262 261 263 return 0; 262 264 err_free: 263 - usb_put_dev(udev); 264 265 kfree(priv); 265 266 return ret; 266 267 } ··· 267 270 static void elo_remove(struct hid_device *hdev) 268 271 { 269 272 struct elo_priv *priv = hid_get_drvdata(hdev); 270 - 271 - usb_put_dev(priv->usbdev); 272 273 273 274 hid_hw_stop(hdev); 274 275 cancel_delayed_work_sync(&priv->work);
+3
drivers/hid/hid-input.c
··· 992 992 case 0x0cd: map_key_clear(KEY_PLAYPAUSE); break; 993 993 case 0x0cf: map_key_clear(KEY_VOICECOMMAND); break; 994 994 995 + case 0x0d8: map_key_clear(KEY_DICTATE); break; 995 996 case 0x0d9: map_key_clear(KEY_EMOJI_PICKER); break; 996 997 997 998 case 0x0e0: map_abs_clear(ABS_VOLUME); break; ··· 1083 1082 case 0x28c: map_key_clear(KEY_SEND); break; 1084 1083 1085 1084 case 0x29d: map_key_clear(KEY_KBD_LAYOUT_NEXT); break; 1085 + 1086 + case 0x2a2: map_key_clear(KEY_ALL_APPLICATIONS); break; 1086 1087 1087 1088 case 0x2c7: map_key_clear(KEY_KBDINPUTASSIST_PREV); break; 1088 1089 case 0x2c8: map_key_clear(KEY_KBDINPUTASSIST_NEXT); break;
+1
drivers/hid/hid-logitech-dj.c
··· 1068 1068 workitem.reports_supported |= STD_KEYBOARD; 1069 1069 break; 1070 1070 case 0x0f: 1071 + case 0x11: 1071 1072 device_type = "eQUAD Lightspeed 1.2"; 1072 1073 logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); 1073 1074 workitem.reports_supported |= STD_KEYBOARD;
+4
drivers/hid/hid-nintendo.c
··· 2128 2128 spin_lock_init(&ctlr->lock); 2129 2129 ctlr->rumble_queue = alloc_workqueue("hid-nintendo-rumble_wq", 2130 2130 WQ_FREEZABLE | WQ_MEM_RECLAIM, 0); 2131 + if (!ctlr->rumble_queue) { 2132 + ret = -ENOMEM; 2133 + goto err; 2134 + } 2131 2135 INIT_WORK(&ctlr->rumble_worker, joycon_rumble_worker); 2132 2136 2133 2137 ret = hid_parse(hdev);
+8
drivers/hid/hid-thrustmaster.c
··· 64 64 */ 65 65 static const struct tm_wheel_info tm_wheels_infos[] = { 66 66 {0x0306, 0x0006, "Thrustmaster T150RS"}, 67 + {0x0200, 0x0005, "Thrustmaster T300RS (Missing Attachment)"}, 67 68 {0x0206, 0x0005, "Thrustmaster T300RS"}, 69 + {0x0209, 0x0005, "Thrustmaster T300RS (Open Wheel Attachment)"}, 68 70 {0x0204, 0x0005, "Thrustmaster T300 Ferrari Alcantara Edition"}, 69 71 {0x0002, 0x0002, "Thrustmaster T500RS"} 70 72 //{0x0407, 0x0001, "Thrustmaster TMX"} ··· 157 155 158 156 if (!send_buf) { 159 157 hid_err(hdev, "failed allocating send buffer\n"); 158 + return; 159 + } 160 + 161 + if (usbif->cur_altsetting->desc.bNumEndpoints < 2) { 162 + kfree(send_buf); 163 + hid_err(hdev, "Wrong number of endpoints?\n"); 160 164 return; 161 165 } 162 166
+1 -1
drivers/hid/hid-vivaldi.c
··· 144 144 static int vivaldi_input_configured(struct hid_device *hdev, 145 145 struct hid_input *hidinput) 146 146 { 147 - return sysfs_create_group(&hdev->dev.kobj, &input_attribute_group); 147 + return devm_device_add_group(&hdev->dev, &input_attribute_group); 148 148 } 149 149 150 150 static const struct hid_device_id vivaldi_table[] = {
+1 -1
drivers/input/keyboard/Kconfig
··· 556 556 557 557 config KEYBOARD_SAMSUNG 558 558 tristate "Samsung keypad support" 559 - depends on HAVE_CLK 559 + depends on HAS_IOMEM && HAVE_CLK 560 560 select INPUT_MATRIXKMAP 561 561 help 562 562 Say Y here if you want to use the keypad on your Samsung mobile
+23 -41
drivers/input/mouse/elan_i2c_core.c
··· 186 186 	return 0;
 187 187 }
 188 188 
 189 - static int elan_enable_power(struct elan_tp_data *data)
 189 + static int elan_set_power(struct elan_tp_data *data, bool on)
 190 190 {
 191 191 	int repeat = ETP_RETRY_COUNT;
 192 192 	int error;
 193 193 
 194 - 	error = regulator_enable(data->vcc);
 195 - 	if (error) {
 196 - 		dev_err(&data->client->dev,
 197 - 			"failed to enable regulator: %d\n", error);
 198 - 		return error;
 199 - 	}
 200 - 
 201 194 	do {
 202 - 		error = data->ops->power_control(data->client, true);
 195 + 		error = data->ops->power_control(data->client, on);
 203 196 		if (error >= 0)
 204 197 			return 0;
 205 198 
 206 199 		msleep(30);
 207 200 	} while (--repeat > 0);
 208 201 
 209 - 	dev_err(&data->client->dev, "failed to enable power: %d\n", error);
 210 - 	return error;
 211 - }
 212 - 
 213 - static int elan_disable_power(struct elan_tp_data *data)
 214 - {
 215 - 	int repeat = ETP_RETRY_COUNT;
 216 - 	int error;
 217 - 
 218 - 	do {
 219 - 		error = data->ops->power_control(data->client, false);
 220 - 		if (!error) {
 221 - 			error = regulator_disable(data->vcc);
 222 - 			if (error) {
 223 - 				dev_err(&data->client->dev,
 224 - 					"failed to disable regulator: %d\n",
 225 - 					error);
 226 - 				/* Attempt to power the chip back up */
 227 - 				data->ops->power_control(data->client, true);
 228 - 				break;
 229 - 			}
 230 - 
 231 - 			return 0;
 232 - 		}
 233 - 
 234 - 		msleep(30);
 235 - 	} while (--repeat > 0);
 236 - 
 237 - 	dev_err(&data->client->dev, "failed to disable power: %d\n", error);
 202 + 	dev_err(&data->client->dev, "failed to set power %s: %d\n",
 203 + 		on ? "on" : "off", error);
 238 204 	return error;
 239 205 }
 240 206 
··· 1365 1399 		/* Enable wake from IRQ */
 1366 1400 		data->irq_wake = (enable_irq_wake(client->irq) == 0);
 1367 1401 	} else {
 1368 - 		ret = elan_disable_power(data);
 1402 + 		ret = elan_set_power(data, false);
 1403 + 		if (ret)
 1404 + 			goto err;
 1405 + 
 1406 + 		ret = regulator_disable(data->vcc);
 1407 + 		if (ret) {
 1408 + 			dev_err(dev, "error %d disabling regulator\n", ret);
 1409 + 			/* Attempt to power the chip back up */
 1410 + 			elan_set_power(data, true);
 1411 + 		}
 1369 1412 	}
 1370 1413 
 1414 + err:
 1371 1415 	mutex_unlock(&data->sysfs_mutex);
 1372 1416 	return ret;
 1373 1417 }
··· 1388 1412 	struct elan_tp_data *data = i2c_get_clientdata(client);
 1389 1413 	int error;
 1390 1414 
 1391 - 	if (device_may_wakeup(dev) && data->irq_wake) {
 1415 + 	if (!device_may_wakeup(dev)) {
 1416 + 		error = regulator_enable(data->vcc);
 1417 + 		if (error) {
 1418 + 			dev_err(dev, "error %d enabling regulator\n", error);
 1419 + 			goto err;
 1420 + 		}
 1421 + 	} else if (data->irq_wake) {
 1392 1422 		disable_irq_wake(client->irq);
 1393 1423 		data->irq_wake = false;
 1394 1424 	}
 1395 1425 
 1396 - 	error = elan_enable_power(data);
 1426 + 	error = elan_set_power(data, true);
 1397 1427 	if (error) {
 1398 1428 		dev_err(dev, "power up when resuming failed: %d\n", error);
 1399 1429 		goto err;
+17 -17
drivers/input/touchscreen/goodix.c
··· 18 18 #include <linux/delay.h>
 19 19 #include <linux/irq.h>
 20 20 #include <linux/interrupt.h>
 21 + #include <linux/platform_data/x86/soc.h>
 21 22 #include <linux/slab.h>
 22 23 #include <linux/acpi.h>
 23 24 #include <linux/of.h>
··· 806 805 }
 807 806 
 808 807 #ifdef ACPI_GPIO_SUPPORT
 809 - #include <asm/cpu_device_id.h>
 810 - #include <asm/intel-family.h>
 811 - 
 812 - static const struct x86_cpu_id baytrail_cpu_ids[] = {
 813 - 	{ X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_SILVERMONT, X86_FEATURE_ANY, },
 814 - 	{}
 815 - };
 816 - 
 817 - static inline bool is_byt(void)
 818 - {
 819 - 	const struct x86_cpu_id *id = x86_match_cpu(baytrail_cpu_ids);
 820 - 
 821 - 	return !!id;
 822 - }
 823 - 
 824 808 static const struct acpi_gpio_params first_gpio = { 0, 0, false };
 825 809 static const struct acpi_gpio_params second_gpio = { 1, 0, false };
 826 810 
··· 864 878 	const struct acpi_gpio_mapping *gpio_mapping = NULL;
 865 879 	struct device *dev = &ts->client->dev;
 866 880 	LIST_HEAD(resources);
 867 - 	int ret;
 881 + 	int irq, ret;
 868 882 
 869 883 	ts->gpio_count = 0;
 870 884 	ts->gpio_int_idx = -1;
··· 876 890 	}
 877 891 
 878 892 	acpi_dev_free_resource_list(&resources);
 893 + 
 894 + 	/*
 895 + 	 * CHT devices should have a GpioInt + a regular GPIO ACPI resource.
 896 + 	 * Some CHT devices have a bug (where there also is a bogus Interrupt
 897 + 	 * resource copied from a previous BYT based generation). i2c-core-acpi
 898 + 	 * will use the non-working Interrupt resource, fix this up.
899 + */ 900 + if (soc_intel_is_cht() && ts->gpio_count == 2 && ts->gpio_int_idx != -1) { 901 + irq = acpi_dev_gpio_irq_get(ACPI_COMPANION(dev), 0); 902 + if (irq > 0 && irq != ts->client->irq) { 903 + dev_warn(dev, "Overriding IRQ %d -> %d\n", ts->client->irq, irq); 904 + ts->client->irq = irq; 905 + } 906 + } 879 907 880 908 if (ts->gpio_count == 2 && ts->gpio_int_idx == 0) { 881 909 ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO; ··· 903 903 dev_info(dev, "Using ACPI INTI and INTO methods for IRQ pin access\n"); 904 904 ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_METHOD; 905 905 gpio_mapping = acpi_goodix_reset_only_gpios; 906 - } else if (is_byt() && ts->gpio_count == 2 && ts->gpio_int_idx == -1) { 906 + } else if (soc_intel_is_byt() && ts->gpio_count == 2 && ts->gpio_int_idx == -1) { 907 907 dev_info(dev, "No ACPI GpioInt resource, assuming that the GPIO order is reset, int\n"); 908 908 ts->irq_pin_access_method = IRQ_PIN_ACCESS_ACPI_GPIO; 909 909 gpio_mapping = acpi_goodix_int_last_gpios;
+1
drivers/iommu/amd/amd_iommu.h
··· 14 14 extern irqreturn_t amd_iommu_int_thread(int irq, void *data); 15 15 extern irqreturn_t amd_iommu_int_handler(int irq, void *data); 16 16 extern void amd_iommu_apply_erratum_63(u16 devid); 17 + extern void amd_iommu_restart_event_logging(struct amd_iommu *iommu); 17 18 extern void amd_iommu_reset_cmd_buffer(struct amd_iommu *iommu); 18 19 extern int amd_iommu_init_devices(void); 19 20 extern void amd_iommu_uninit_devices(void);
+1
drivers/iommu/amd/amd_iommu_types.h
··· 110 110 #define PASID_MASK 0x0000ffff 111 111 112 112 /* MMIO status bits */ 113 + #define MMIO_STATUS_EVT_OVERFLOW_INT_MASK (1 << 0) 113 114 #define MMIO_STATUS_EVT_INT_MASK (1 << 1) 114 115 #define MMIO_STATUS_COM_WAIT_INT_MASK (1 << 2) 115 116 #define MMIO_STATUS_PPR_INT_MASK (1 << 6)
+10
drivers/iommu/amd/init.c
··· 658 658 } 659 659 660 660 /* 661 + * This function restarts event logging in case the IOMMU experienced 662 + * an event log buffer overflow. 663 + */ 664 + void amd_iommu_restart_event_logging(struct amd_iommu *iommu) 665 + { 666 + iommu_feature_disable(iommu, CONTROL_EVT_LOG_EN); 667 + iommu_feature_enable(iommu, CONTROL_EVT_LOG_EN); 668 + } 669 + 670 + /* 661 671 * This function resets the command buffer if the IOMMU stopped fetching 662 672 * commands from it. 663 673 */
+6 -6
drivers/iommu/amd/io_pgtable.c
··· 492 492 493 493 dom = container_of(pgtable, struct protection_domain, iop); 494 494 495 - /* Update data structure */ 496 - amd_iommu_domain_clr_pt_root(dom); 497 - 498 - /* Make changes visible to IOMMUs */ 499 - amd_iommu_domain_update(dom); 500 - 501 495 /* Page-table is not visible to IOMMU anymore, so free it */ 502 496 BUG_ON(pgtable->mode < PAGE_MODE_NONE || 503 497 pgtable->mode > PAGE_MODE_6_LEVEL); 504 498 505 499 free_sub_pt(pgtable->root, pgtable->mode, &freelist); 500 + 501 + /* Update data structure */ 502 + amd_iommu_domain_clr_pt_root(dom); 503 + 504 + /* Make changes visible to IOMMUs */ 505 + amd_iommu_domain_update(dom); 506 506 507 507 put_pages_list(&freelist); 508 508 }
+8 -2
drivers/iommu/amd/iommu.c
··· 764 764 #endif /* !CONFIG_IRQ_REMAP */ 765 765 766 766 #define AMD_IOMMU_INT_MASK \ 767 - (MMIO_STATUS_EVT_INT_MASK | \ 767 + (MMIO_STATUS_EVT_OVERFLOW_INT_MASK | \ 768 + MMIO_STATUS_EVT_INT_MASK | \ 768 769 MMIO_STATUS_PPR_INT_MASK | \ 769 770 MMIO_STATUS_GALOG_INT_MASK) 770 771 ··· 775 774 u32 status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET); 776 775 777 776 while (status & AMD_IOMMU_INT_MASK) { 778 - /* Enable EVT and PPR and GA interrupts again */ 777 + /* Enable interrupt sources again */ 779 778 writel(AMD_IOMMU_INT_MASK, 780 779 iommu->mmio_base + MMIO_STATUS_OFFSET); 781 780 ··· 795 794 iommu_poll_ga_log(iommu); 796 795 } 797 796 #endif 797 + 798 + if (status & MMIO_STATUS_EVT_OVERFLOW_INT_MASK) { 799 + pr_info_ratelimited("IOMMU event log overflow\n"); 800 + amd_iommu_restart_event_logging(iommu); 801 + } 798 802 799 803 /* 800 804 * Hardware bug: ERBT1312
+1 -1
drivers/iommu/intel/iommu.c
··· 2738 2738 spin_unlock_irqrestore(&device_domain_lock, flags); 2739 2739 2740 2740 /* PASID table is mandatory for a PCI device in scalable mode. */ 2741 - if (dev && dev_is_pci(dev) && sm_supported(iommu)) { 2741 + if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) { 2742 2742 ret = intel_pasid_alloc_table(dev); 2743 2743 if (ret) { 2744 2744 dev_err(dev, "PASID table allocation failed\n");
+3 -1
drivers/iommu/tegra-smmu.c
··· 808 808 return NULL; 809 809 810 810 mc = platform_get_drvdata(pdev); 811 - if (!mc) 811 + if (!mc) { 812 + put_device(&pdev->dev); 812 813 return NULL; 814 + } 813 815 814 816 return mc->smmu; 815 817 }
+5 -1
drivers/isdn/hardware/mISDN/hfcpci.c
··· 2005 2005 } 2006 2006 /* Allocate memory for FIFOS */ 2007 2007 /* the memory needs to be on a 32k boundary within the first 4G */ 2008 - dma_set_mask(&hc->pdev->dev, 0xFFFF8000); 2008 + if (dma_set_mask(&hc->pdev->dev, 0xFFFF8000)) { 2009 + printk(KERN_WARNING 2010 + "HFC-PCI: No usable DMA configuration!\n"); 2011 + return -EIO; 2012 + } 2009 2013 buffer = dma_alloc_coherent(&hc->pdev->dev, 0x8000, &hc->hw.dmahandle, 2010 2014 GFP_KERNEL); 2011 2015 /* We silently assume the address is okay if nonzero */
+3 -3
drivers/isdn/mISDN/dsp_pipeline.c
··· 192 192 int dsp_pipeline_build(struct dsp_pipeline *pipeline, const char *cfg) 193 193 { 194 194 int found = 0; 195 - char *dup, *tok, *name, *args; 195 + char *dup, *next, *tok, *name, *args; 196 196 struct dsp_element_entry *entry, *n; 197 197 struct dsp_pipeline_entry *pipeline_entry; 198 198 struct mISDN_dsp_element *elem; ··· 203 203 if (!list_empty(&pipeline->list)) 204 204 _dsp_pipeline_destroy(pipeline); 205 205 206 - dup = kstrdup(cfg, GFP_ATOMIC); 206 + dup = next = kstrdup(cfg, GFP_ATOMIC); 207 207 if (!dup) 208 208 return 0; 209 - while ((tok = strsep(&dup, "|"))) { 209 + while ((tok = strsep(&next, "|"))) { 210 210 if (!strlen(tok)) 211 211 continue; 212 212 name = strsep(&tok, "(");
+1 -1
drivers/mmc/core/block.c
··· 1908 1908 1909 1909 cb_data.card = card; 1910 1910 cb_data.status = 0; 1911 - err = __mmc_poll_for_busy(card->host, MMC_BLK_TIMEOUT_MS, 1911 + err = __mmc_poll_for_busy(card->host, 0, MMC_BLK_TIMEOUT_MS, 1912 1912 &mmc_blk_busy_cb, &cb_data); 1913 1913 1914 1914 /*
+1 -1
drivers/mmc/core/mmc.c
··· 1962 1962 goto out_release; 1963 1963 } 1964 1964 1965 - err = __mmc_poll_for_busy(host, timeout_ms, &mmc_sleep_busy_cb, host); 1965 + err = __mmc_poll_for_busy(host, 0, timeout_ms, &mmc_sleep_busy_cb, host); 1966 1966 1967 1967 out_release: 1968 1968 mmc_retune_release(host);
+9 -4
drivers/mmc/core/mmc_ops.c
··· 21 21 22 22 #define MMC_BKOPS_TIMEOUT_MS (120 * 1000) /* 120s */ 23 23 #define MMC_SANITIZE_TIMEOUT_MS (240 * 1000) /* 240s */ 24 + #define MMC_OP_COND_PERIOD_US (1 * 1000) /* 1ms */ 25 + #define MMC_OP_COND_TIMEOUT_MS 1000 /* 1s */ 24 26 25 27 static const u8 tuning_blk_pattern_4bit[] = { 26 28 0xff, 0x0f, 0xff, 0x00, 0xff, 0xcc, 0xc3, 0xcc, ··· 234 232 cmd.arg = mmc_host_is_spi(host) ? 0 : ocr; 235 233 cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R3 | MMC_CMD_BCR; 236 234 237 - err = __mmc_poll_for_busy(host, 1000, &__mmc_send_op_cond_cb, &cb_data); 235 + err = __mmc_poll_for_busy(host, MMC_OP_COND_PERIOD_US, 236 + MMC_OP_COND_TIMEOUT_MS, 237 + &__mmc_send_op_cond_cb, &cb_data); 238 238 if (err) 239 239 return err; 240 240 ··· 499 495 return 0; 500 496 } 501 497 502 - int __mmc_poll_for_busy(struct mmc_host *host, unsigned int timeout_ms, 498 + int __mmc_poll_for_busy(struct mmc_host *host, unsigned int period_us, 499 + unsigned int timeout_ms, 503 500 int (*busy_cb)(void *cb_data, bool *busy), 504 501 void *cb_data) 505 502 { 506 503 int err; 507 504 unsigned long timeout; 508 - unsigned int udelay = 32, udelay_max = 32768; 505 + unsigned int udelay = period_us ? period_us : 32, udelay_max = 32768; 509 506 bool expired = false; 510 507 bool busy = false; 511 508 ··· 551 546 cb_data.retry_crc_err = retry_crc_err; 552 547 cb_data.busy_cmd = busy_cmd; 553 548 554 - return __mmc_poll_for_busy(host, timeout_ms, &mmc_busy_cb, &cb_data); 549 + return __mmc_poll_for_busy(host, 0, timeout_ms, &mmc_busy_cb, &cb_data); 555 550 } 556 551 EXPORT_SYMBOL_GPL(mmc_poll_for_busy); 557 552
+2 -1
drivers/mmc/core/mmc_ops.h
··· 41 41 int mmc_switch_status(struct mmc_card *card, bool crc_err_fatal); 42 42 bool mmc_prepare_busy_cmd(struct mmc_host *host, struct mmc_command *cmd, 43 43 unsigned int timeout_ms); 44 - int __mmc_poll_for_busy(struct mmc_host *host, unsigned int timeout_ms, 44 + int __mmc_poll_for_busy(struct mmc_host *host, unsigned int period_us, 45 + unsigned int timeout_ms, 45 46 int (*busy_cb)(void *cb_data, bool *busy), 46 47 void *cb_data); 47 48 int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
+1 -1
drivers/mmc/core/sd.c
··· 1672 1672 1673 1673 cb_data.card = card; 1674 1674 cb_data.reg_buf = reg_buf; 1675 - err = __mmc_poll_for_busy(card->host, SD_POWEROFF_NOTIFY_TIMEOUT_MS, 1675 + err = __mmc_poll_for_busy(card->host, 0, SD_POWEROFF_NOTIFY_TIMEOUT_MS, 1676 1676 &sd_busy_poweroff_notify_cb, &cb_data); 1677 1677 1678 1678 out:
+8 -7
drivers/mmc/host/meson-gx-mmc.c
··· 173 173 int irq; 174 174 175 175 bool vqmmc_enabled; 176 + bool needs_pre_post_req; 177 + 176 178 }; 177 179 178 180 #define CMD_CFG_LENGTH_MASK GENMASK(8, 0) ··· 665 663 struct meson_host *host = mmc_priv(mmc); 666 664 667 665 host->cmd = NULL; 666 + if (host->needs_pre_post_req) 667 + meson_mmc_post_req(mmc, mrq, 0); 668 668 mmc_request_done(host->mmc, mrq); 669 669 } 670 670 ··· 884 880 static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq) 885 881 { 886 882 struct meson_host *host = mmc_priv(mmc); 887 - bool needs_pre_post_req = mrq->data && 883 + host->needs_pre_post_req = mrq->data && 888 884 !(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE); 889 885 890 886 /* ··· 900 896 } 901 897 } 902 898 903 - if (needs_pre_post_req) { 899 + if (host->needs_pre_post_req) { 904 900 meson_mmc_get_transfer_mode(mmc, mrq); 905 901 if (!meson_mmc_desc_chain_mode(mrq->data)) 906 - needs_pre_post_req = false; 902 + host->needs_pre_post_req = false; 907 903 } 908 904 909 - if (needs_pre_post_req) 905 + if (host->needs_pre_post_req) 910 906 meson_mmc_pre_req(mmc, mrq); 911 907 912 908 /* Stop execution */ 913 909 writel(0, host->regs + SD_EMMC_START); 914 910 915 911 meson_mmc_start_cmd(mmc, mrq->sbc ?: mrq->cmd); 916 - 917 - if (needs_pre_post_req) 918 - meson_mmc_post_req(mmc, mrq, 0); 919 912 } 920 913 921 914 static void meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)
+1 -2
drivers/mtd/nand/raw/Kconfig
··· 42 42 tristate "OMAP2, OMAP3, OMAP4 and Keystone NAND controller" 43 43 depends on ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST 44 44 depends on HAS_IOMEM 45 - select MEMORY 46 - select OMAP_GPMC 45 + depends on OMAP_GPMC 47 46 help 48 47 Support for NAND flash on Texas Instruments OMAP2, OMAP3, OMAP4 49 48 and Keystone platforms.
+3
drivers/net/arcnet/com20020-pci.c
··· 138 138 return -ENOMEM; 139 139 140 140 ci = (struct com20020_pci_card_info *)id->driver_data; 141 + if (!ci) 142 + return -EINVAL; 143 + 141 144 priv->ci = ci; 142 145 mm = &ci->misc_map; 143 146
+3 -3
drivers/net/can/rcar/rcar_canfd.c
··· 1715 1715 1716 1716 netif_napi_add(ndev, &priv->napi, rcar_canfd_rx_poll, 1717 1717 RCANFD_NAPI_WEIGHT); 1718 + spin_lock_init(&priv->tx_lock); 1719 + devm_can_led_init(ndev); 1720 + gpriv->ch[priv->channel] = priv; 1718 1721 err = register_candev(ndev); 1719 1722 if (err) { 1720 1723 dev_err(&pdev->dev, 1721 1724 "register_candev() failed, error %d\n", err); 1722 1725 goto fail_candev; 1723 1726 } 1724 - spin_lock_init(&priv->tx_lock); 1725 - devm_can_led_init(ndev); 1726 - gpriv->ch[priv->channel] = priv; 1727 1727 dev_info(&pdev->dev, "device registered (channel %u)\n", priv->channel); 1728 1728 return 0; 1729 1729
+5 -4
drivers/net/can/usb/etas_es58x/es58x_core.c
··· 1787 1787 struct es58x_device *es58x_dev = es58x_priv(netdev)->es58x_dev; 1788 1788 int ret; 1789 1789 1790 - if (atomic_inc_return(&es58x_dev->opened_channel_cnt) == 1) { 1790 + if (!es58x_dev->opened_channel_cnt) { 1791 1791 ret = es58x_alloc_rx_urbs(es58x_dev); 1792 1792 if (ret) 1793 1793 return ret; ··· 1805 1805 if (ret) 1806 1806 goto free_urbs; 1807 1807 1808 + es58x_dev->opened_channel_cnt++; 1808 1809 netif_start_queue(netdev); 1809 1810 1810 1811 return ret; 1811 1812 1812 1813 free_urbs: 1813 - if (atomic_dec_and_test(&es58x_dev->opened_channel_cnt)) 1814 + if (!es58x_dev->opened_channel_cnt) 1814 1815 es58x_free_urbs(es58x_dev); 1815 1816 netdev_err(netdev, "%s: Could not open the network device: %pe\n", 1816 1817 __func__, ERR_PTR(ret)); ··· 1846 1845 1847 1846 es58x_flush_pending_tx_msg(netdev); 1848 1847 1849 - if (atomic_dec_and_test(&es58x_dev->opened_channel_cnt)) 1848 + es58x_dev->opened_channel_cnt--; 1849 + if (!es58x_dev->opened_channel_cnt) 1850 1850 es58x_free_urbs(es58x_dev); 1851 1851 1852 1852 return 0; ··· 2217 2215 init_usb_anchor(&es58x_dev->tx_urbs_idle); 2218 2216 init_usb_anchor(&es58x_dev->tx_urbs_busy); 2219 2217 atomic_set(&es58x_dev->tx_urbs_idle_cnt, 0); 2220 - atomic_set(&es58x_dev->opened_channel_cnt, 0); 2221 2218 usb_set_intfdata(intf, es58x_dev); 2222 2219 2223 2220 es58x_dev->rx_pipe = usb_rcvbulkpipe(es58x_dev->udev,
+5 -3
drivers/net/can/usb/etas_es58x/es58x_core.h
··· 373 373 	* queue wake/stop logic should prevent this URB from getting
 374 374 	* empty. Please refer to es58x_get_tx_urb() for more details.
 375 375 	* @tx_urbs_idle_cnt: number of urbs in @tx_urbs_idle.
 376 - 	* @opened_channel_cnt: number of channels opened (c.f. es58x_open()
 377 - 	*	and es58x_stop()).
 378 376 	* @ktime_req_ns: kernel timestamp when es58x_set_realtime_diff_ns()
 379 377 	*	was called.
 380 378 	* @realtime_diff_ns: difference in nanoseconds between the clocks of
··· 382 384 	*	in RX branches.
 383 385 	* @rx_max_packet_size: Maximum length of bulk-in URB.
 384 386 	* @num_can_ch: Number of CAN channels (i.e. number of elements of @netdev).
 387 + 	* @opened_channel_cnt: number of channels opened. Free of race
 388 + 	*	conditions because its two users (net_device_ops:ndo_open()
 389 + 	*	and net_device_ops:ndo_close()) guarantee that the network
 390 + 	*	stack big kernel lock (a.k.a. rtnl_mutex) is being held.
 385 391 	* @rx_cmd_buf_len: Length of @rx_cmd_buf.
 386 392 	* @rx_cmd_buf: The device might split the URB commands in an
 387 393 	*	arbitrary amount of pieces. This buffer is used to concatenate
··· 408 406 	struct usb_anchor tx_urbs_busy;
 409 407 	struct usb_anchor tx_urbs_idle;
 410 408 	atomic_t tx_urbs_idle_cnt;
 411 - 	atomic_t opened_channel_cnt;
 412 409 
 413 410 	u64 ktime_req_ns;
 414 411 	s64 realtime_diff_ns;
··· 416 415 
 417 416 	u16 rx_max_packet_size;
 418 417 	u8 num_can_ch;
 418 + 	u8 opened_channel_cnt;
 419 419 
 420 420 	u16 rx_cmd_buf_len;
 421 421 	union es58x_urb_cmd rx_cmd_buf;
+5 -5
drivers/net/can/usb/gs_usb.c
··· 191 191 struct gs_usb { 192 192 struct gs_can *canch[GS_MAX_INTF]; 193 193 struct usb_anchor rx_submitted; 194 - atomic_t active_channels; 195 194 struct usb_device *udev; 195 + u8 active_channels; 196 196 }; 197 197 198 198 /* 'allocate' a tx context. ··· 589 589 if (rc) 590 590 return rc; 591 591 592 - if (atomic_add_return(1, &parent->active_channels) == 1) { 592 + if (!parent->active_channels) { 593 593 for (i = 0; i < GS_MAX_RX_URBS; i++) { 594 594 struct urb *urb; 595 595 u8 *buf; ··· 690 690 691 691 dev->can.state = CAN_STATE_ERROR_ACTIVE; 692 692 693 + parent->active_channels++; 693 694 if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)) 694 695 netif_start_queue(netdev); 695 696 ··· 706 705 netif_stop_queue(netdev); 707 706 708 707 /* Stop polling */ 709 - if (atomic_dec_and_test(&parent->active_channels)) 708 + parent->active_channels--; 709 + if (!parent->active_channels) 710 710 usb_kill_anchored_urbs(&parent->rx_submitted); 711 711 712 712 /* Stop sending URBs */ ··· 985 983 } 986 984 987 985 init_usb_anchor(&dev->rx_submitted); 988 - 989 - atomic_set(&dev->active_channels, 0); 990 986 991 987 usb_set_intfdata(intf, dev); 992 988 dev->udev = interface_to_usbdev(intf);
+1 -1
drivers/net/dsa/mt7530.c
··· 2936 2936 2937 2937 phylink_set_port_modes(mask); 2938 2938 2939 - if (state->interface != PHY_INTERFACE_MODE_TRGMII || 2939 + if (state->interface != PHY_INTERFACE_MODE_TRGMII && 2940 2940 !phy_interface_mode_is_8023z(state->interface)) { 2941 2941 phylink_set(mask, 10baseT_Half); 2942 2942 phylink_set(mask, 10baseT_Full);
+5 -5
drivers/net/ethernet/8390/mcf8390.c
··· 405 405 static int mcf8390_probe(struct platform_device *pdev) 406 406 { 407 407 struct net_device *dev; 408 - struct resource *mem, *irq; 408 + struct resource *mem; 409 409 resource_size_t msize; 410 - int ret; 410 + int ret, irq; 411 411 412 - irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 413 - if (irq == NULL) { 412 + irq = platform_get_irq(pdev, 0); 413 + if (irq < 0) { 414 414 dev_err(&pdev->dev, "no IRQ specified?\n"); 415 415 return -ENXIO; 416 416 } ··· 433 433 SET_NETDEV_DEV(dev, &pdev->dev); 434 434 platform_set_drvdata(pdev, dev); 435 435 436 - dev->irq = irq->start; 436 + dev->irq = irq; 437 437 dev->base_addr = mem->start; 438 438 439 439 ret = mcf8390_init(dev);
+3 -2
drivers/net/ethernet/arc/emac_mdio.c
··· 132 132 { 133 133 struct arc_emac_mdio_bus_data *data = &priv->bus_data; 134 134 struct device_node *np = priv->dev->of_node; 135 + const char *name = "Synopsys MII Bus"; 135 136 struct mii_bus *bus; 136 137 int error; 137 138 ··· 143 142 priv->bus = bus; 144 143 bus->priv = priv; 145 144 bus->parent = priv->dev; 146 - bus->name = "Synopsys MII Bus"; 145 + bus->name = name; 147 146 bus->read = &arc_mdio_read; 148 147 bus->write = &arc_mdio_write; 149 148 bus->reset = &arc_mdio_reset; ··· 168 167 if (error) { 169 168 mdiobus_free(bus); 170 169 return dev_err_probe(priv->dev, error, 171 - "cannot register MDIO bus %s\n", bus->name); 170 + "cannot register MDIO bus %s\n", name); 172 171 } 173 172 174 173 return 0;
+1 -1
drivers/net/ethernet/broadcom/bnx2.c
··· 8216 8216 rc = dma_set_coherent_mask(&pdev->dev, persist_dma_mask); 8217 8217 if (rc) { 8218 8218 dev_err(&pdev->dev, 8219 - "pci_set_consistent_dma_mask failed, aborting\n"); 8219 + "dma_set_coherent_mask failed, aborting\n"); 8220 8220 goto err_out_unmap; 8221 8221 } 8222 8222 } else if ((rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) != 0) {
+7
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 40 40 void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol) 41 41 { 42 42 struct bcmgenet_priv *priv = netdev_priv(dev); 43 + struct device *kdev = &priv->pdev->dev; 44 + 45 + if (!device_can_wakeup(kdev)) { 46 + wol->supported = 0; 47 + wol->wolopts = 0; 48 + return; 49 + } 43 50 44 51 wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER; 45 52 wol->wolopts = priv->wolopts;
+24 -1
drivers/net/ethernet/cadence/macb_main.c
··· 1573 1573 if (work_done < budget) { 1574 1574 napi_complete_done(napi, work_done); 1575 1575 1576 - /* Packets received while interrupts were disabled */ 1576 + /* RSR bits only seem to propagate to raise interrupts when 1577 + * interrupts are enabled at the time, so if bits are already 1578 + * set due to packets received while interrupts were disabled, 1579 + * they will not cause another interrupt to be generated when 1580 + * interrupts are re-enabled. 1581 + * Check for this case here. This has been seen to happen 1582 + * around 30% of the time under heavy network load. 1583 + */ 1577 1584 status = macb_readl(bp, RSR); 1578 1585 if (status) { 1579 1586 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) ··· 1588 1581 napi_reschedule(napi); 1589 1582 } else { 1590 1583 queue_writel(queue, IER, bp->rx_intr_mask); 1584 + 1585 + /* In rare cases, packets could have been received in 1586 + * the window between the check above and re-enabling 1587 + * interrupts. Therefore, a double-check is required 1588 + * to avoid losing a wakeup. This can potentially race 1589 + * with the interrupt handler doing the same actions 1590 + * if an interrupt is raised just after enabling them, 1591 + * but this should be harmless. 1592 + */ 1593 + status = macb_readl(bp, RSR); 1594 + if (unlikely(status)) { 1595 + queue_writel(queue, IDR, bp->rx_intr_mask); 1596 + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 1597 + queue_writel(queue, ISR, MACB_BIT(RCOMP)); 1598 + napi_schedule(napi); 1599 + } 1591 1600 } 1592 1601 } 1593 1602
+2
drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
··· 3613 3613 MAC_STATS_ACCUM_SECS : (MAC_STATS_ACCUM_SECS * 10); 3614 3614 adapter->params.pci.vpd_cap_addr = 3615 3615 pci_find_capability(adapter->pdev, PCI_CAP_ID_VPD); 3616 + if (!adapter->params.pci.vpd_cap_addr) 3617 + return -ENODEV; 3616 3618 ret = get_vpd_params(adapter, &adapter->params.vpd); 3617 3619 if (ret < 0) 3618 3620 return ret;
+1
drivers/net/ethernet/freescale/gianfar_ethtool.c
··· 1464 1464 ptp_node = of_find_compatible_node(NULL, NULL, "fsl,etsec-ptp"); 1465 1465 if (ptp_node) { 1466 1466 ptp_dev = of_find_device_by_node(ptp_node); 1467 + of_node_put(ptp_node); 1467 1468 if (ptp_dev) 1468 1469 ptp = platform_get_drvdata(ptp_dev); 1469 1470 }
+155 -28
drivers/net/ethernet/ibm/ibmvnic.c
··· 2213 2213 } 2214 2214 2215 2215 /* 2216 + * Initialize the init_done completion and return code values. We 2217 + * can get a transport event just after registering the CRQ and the 2218 + * tasklet will use this to communicate the transport event. To ensure 2219 + * we don't miss the notification/error, initialize these _before_ 2220 + * regisering the CRQ. 2221 + */ 2222 + static inline void reinit_init_done(struct ibmvnic_adapter *adapter) 2223 + { 2224 + reinit_completion(&adapter->init_done); 2225 + adapter->init_done_rc = 0; 2226 + } 2227 + 2228 + /* 2216 2229 * do_reset returns zero if we are able to keep processing reset events, or 2217 2230 * non-zero if we hit a fatal error and must halt. 2218 2231 */ ··· 2330 2317 * we are coming from the probed state. 2331 2318 */ 2332 2319 adapter->state = VNIC_PROBED; 2320 + 2321 + reinit_init_done(adapter); 2333 2322 2334 2323 if (adapter->reset_reason == VNIC_RESET_CHANGE_PARAM) { 2335 2324 rc = init_crq_queue(adapter); ··· 2476 2461 */ 2477 2462 adapter->state = VNIC_PROBED; 2478 2463 2479 - reinit_completion(&adapter->init_done); 2464 + reinit_init_done(adapter); 2465 + 2480 2466 rc = init_crq_queue(adapter); 2481 2467 if (rc) { 2482 2468 netdev_err(adapter->netdev, ··· 2618 2602 static void __ibmvnic_reset(struct work_struct *work) 2619 2603 { 2620 2604 struct ibmvnic_adapter *adapter; 2621 - bool saved_state = false; 2605 + unsigned int timeout = 5000; 2622 2606 struct ibmvnic_rwi *tmprwi; 2607 + bool saved_state = false; 2623 2608 struct ibmvnic_rwi *rwi; 2624 2609 unsigned long flags; 2625 - u32 reset_state; 2610 + struct device *dev; 2611 + bool need_reset; 2626 2612 int num_fails = 0; 2613 + u32 reset_state; 2627 2614 int rc = 0; 2628 2615 2629 2616 adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset); 2617 + dev = &adapter->vdev->dev; 2630 2618 2631 - if (test_and_set_bit_lock(0, &adapter->resetting)) { 2619 + /* Wait for ibmvnic_probe() to complete. 
If probe is taking too long 2620 + * or if another reset is in progress, defer work for now. If probe 2621 + * eventually fails it will flush and terminate our work. 2622 + * 2623 + * Three possibilities here: 2624 + * 1. Adapter being removed - just return 2625 + * 2. Timed out on probe or another reset in progress - delay the work 2626 + * 3. Completed probe - perform any resets in queue 2627 + */ 2628 + if (adapter->state == VNIC_PROBING && 2629 + !wait_for_completion_timeout(&adapter->probe_done, timeout)) { 2630 + dev_err(dev, "Reset thread timed out on probe"); 2632 2631 queue_delayed_work(system_long_wq, 2633 2632 &adapter->ibmvnic_delayed_reset, 2634 2633 IBMVNIC_RESET_DELAY); 2635 2634 return; 2636 2635 } 2636 + 2637 + /* adapter is done with probe (i.e state is never VNIC_PROBING now) */ 2638 + if (adapter->state == VNIC_REMOVING) 2639 + return; 2640 + 2641 + /* ->rwi_list is stable now (no one else is removing entries) */ 2642 + 2643 + /* ibmvnic_probe() may have purged the reset queue after we were 2644 + * scheduled to process a reset so there may be no resets to process. 2645 + * Before setting the ->resetting bit though, we have to make sure 2646 + * that there is in fact a reset to process. Otherwise we may race 2647 + * with ibmvnic_open() and end up leaving the vnic down: 2648 + * 2649 + * __ibmvnic_reset() ibmvnic_open() 2650 + * ----------------- -------------- 2651 + * 2652 + * set ->resetting bit 2653 + * find ->resetting bit is set 2654 + * set ->state to IBMVNIC_OPEN (i.e 2655 + * assume reset will open device) 2656 + * return 2657 + * find reset queue empty 2658 + * return 2659 + * 2660 + * Neither performed vnic login/open and vnic stays down 2661 + * 2662 + * If we hold the lock and conditionally set the bit, either we 2663 + * or ibmvnic_open() will complete the open.
2664 + */ 2665 + need_reset = false; 2666 + spin_lock(&adapter->rwi_lock); 2667 + if (!list_empty(&adapter->rwi_list)) { 2668 + if (test_and_set_bit_lock(0, &adapter->resetting)) { 2669 + queue_delayed_work(system_long_wq, 2670 + &adapter->ibmvnic_delayed_reset, 2671 + IBMVNIC_RESET_DELAY); 2672 + } else { 2673 + need_reset = true; 2674 + } 2675 + } 2676 + spin_unlock(&adapter->rwi_lock); 2677 + 2678 + if (!need_reset) 2679 + return; 2637 2680 2638 2681 rwi = get_next_rwi(adapter); 2639 2682 while (rwi) { ··· 2810 2735 __ibmvnic_reset(&adapter->ibmvnic_reset); 2811 2736 } 2812 2737 2738 + static void flush_reset_queue(struct ibmvnic_adapter *adapter) 2739 + { 2740 + struct list_head *entry, *tmp_entry; 2741 + 2742 + if (!list_empty(&adapter->rwi_list)) { 2743 + list_for_each_safe(entry, tmp_entry, &adapter->rwi_list) { 2744 + list_del(entry); 2745 + kfree(list_entry(entry, struct ibmvnic_rwi, list)); 2746 + } 2747 + } 2748 + } 2749 + 2813 2750 static int ibmvnic_reset(struct ibmvnic_adapter *adapter, 2814 2751 enum ibmvnic_reset_reason reason) 2815 2752 { 2816 - struct list_head *entry, *tmp_entry; 2817 - struct ibmvnic_rwi *rwi, *tmp; 2818 2753 struct net_device *netdev = adapter->netdev; 2754 + struct ibmvnic_rwi *rwi, *tmp; 2819 2755 unsigned long flags; 2820 2756 int ret; 2821 2757 ··· 2842 2756 (adapter->failover_pending && reason != VNIC_RESET_FAILOVER)) { 2843 2757 ret = EBUSY; 2844 2758 netdev_dbg(netdev, "Adapter removing or pending failover, skipping reset\n"); 2845 - goto err; 2846 - } 2847 - 2848 - if (adapter->state == VNIC_PROBING) { 2849 - netdev_warn(netdev, "Adapter reset during probe\n"); 2850 - adapter->init_done_rc = -EAGAIN; 2851 - ret = EAGAIN; 2852 2759 goto err; 2853 2760 } 2854 2761 ··· 2862 2783 /* if we just received a transport event, 2863 2784 * flush reset queue and process this reset 2864 2785 */ 2865 - if (adapter->force_reset_recovery && !list_empty(&adapter->rwi_list)) { 2866 - list_for_each_safe(entry, tmp_entry, 
&adapter->rwi_list) 2867 - list_del(entry); 2868 - } 2786 + if (adapter->force_reset_recovery) 2787 + flush_reset_queue(adapter); 2788 + 2869 2789 rwi->reset_reason = reason; 2870 2790 list_add_tail(&rwi->list, &adapter->rwi_list); 2871 2791 netdev_dbg(adapter->netdev, "Scheduling reset (reason %s)\n", ··· 5399 5321 } 5400 5322 5401 5323 if (!completion_done(&adapter->init_done)) { 5402 - complete(&adapter->init_done); 5403 5324 if (!adapter->init_done_rc) 5404 5325 adapter->init_done_rc = -EAGAIN; 5326 + complete(&adapter->init_done); 5405 5327 } 5406 5328 5407 5329 break; ··· 5424 5346 adapter->fw_done_rc = -EIO; 5425 5347 complete(&adapter->fw_done); 5426 5348 } 5349 + 5350 + /* if we got here during crq-init, retry crq-init */ 5351 + if (!completion_done(&adapter->init_done)) { 5352 + adapter->init_done_rc = -EAGAIN; 5353 + complete(&adapter->init_done); 5354 + } 5355 + 5427 5356 if (!completion_done(&adapter->stats_done)) 5428 5357 complete(&adapter->stats_done); 5429 5358 if (test_bit(0, &adapter->resetting)) ··· 5747 5662 5748 5663 adapter->from_passive_init = false; 5749 5664 5750 - if (reset) 5751 - reinit_completion(&adapter->init_done); 5752 - 5753 - adapter->init_done_rc = 0; 5754 5665 rc = ibmvnic_send_crq_init(adapter); 5755 5666 if (rc) { 5756 5667 dev_err(dev, "Send crq init failed with error %d\n", rc); ··· 5760 5679 5761 5680 if (adapter->init_done_rc) { 5762 5681 release_crq_queue(adapter); 5682 + dev_err(dev, "CRQ-init failed, %d\n", adapter->init_done_rc); 5763 5683 return adapter->init_done_rc; 5764 5684 } 5765 5685 5766 5686 if (adapter->from_passive_init) { 5767 5687 adapter->state = VNIC_OPEN; 5768 5688 adapter->from_passive_init = false; 5689 + dev_err(dev, "CRQ-init failed, passive-init\n"); 5769 5690 return -EINVAL; 5770 5691 } 5771 5692 ··· 5807 5724 struct ibmvnic_adapter *adapter; 5808 5725 struct net_device *netdev; 5809 5726 unsigned char *mac_addr_p; 5727 + unsigned long flags; 5810 5728 bool init_success; 5811 5729 int rc; 5812 
5730 ··· 5852 5768 spin_lock_init(&adapter->rwi_lock); 5853 5769 spin_lock_init(&adapter->state_lock); 5854 5770 mutex_init(&adapter->fw_lock); 5771 + init_completion(&adapter->probe_done); 5855 5772 init_completion(&adapter->init_done); 5856 5773 init_completion(&adapter->fw_done); 5857 5774 init_completion(&adapter->reset_done); ··· 5863 5778 5864 5779 init_success = false; 5865 5780 do { 5781 + reinit_init_done(adapter); 5782 + 5783 + /* clear any failovers we got in the previous pass 5784 + * since we are reinitializing the CRQ 5785 + */ 5786 + adapter->failover_pending = false; 5787 + 5788 + /* If we had already initialized CRQ, we may have one or 5789 + * more resets queued already. Discard those and release 5790 + * the CRQ before initializing the CRQ again. 5791 + */ 5792 + release_crq_queue(adapter); 5793 + 5794 + /* Since we are still in PROBING state, __ibmvnic_reset() 5795 + * will not access the ->rwi_list and since we released CRQ, 5796 + * we won't get _new_ transport events. But there may be an 5797 + * ongoing ibmvnic_reset() call. So serialize access to 5798 + * rwi_list. If we win the race, ibmvnic_reset() could add 5799 + * a reset after we purged but that's ok - we just may end 5800 + * up with an extra reset (i.e similar to having two or more 5801 + * resets in the queue at once). 5802 + * CHECK. 5803 + */ 5804 + spin_lock_irqsave(&adapter->rwi_lock, flags); 5805 + flush_reset_queue(adapter); 5806 + spin_unlock_irqrestore(&adapter->rwi_lock, flags); 5807 + 5866 5808 rc = init_crq_queue(adapter); 5867 5809 if (rc) { 5868 5810 dev_err(&dev->dev, "Couldn't initialize crq.
rc=%d\n", ··· 5921 5809 goto ibmvnic_dev_file_err; 5922 5810 5923 5811 netif_carrier_off(netdev); 5924 - rc = register_netdev(netdev); 5925 - if (rc) { 5926 - dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc); 5927 - goto ibmvnic_register_fail; 5928 - } 5929 - dev_info(&dev->dev, "ibmvnic registered\n"); 5930 5812 5931 5813 if (init_success) { 5932 5814 adapter->state = VNIC_PROBED; ··· 5933 5827 5934 5828 adapter->wait_for_reset = false; 5935 5829 adapter->last_reset_time = jiffies; 5830 + 5831 + rc = register_netdev(netdev); 5832 + if (rc) { 5833 + dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc); 5834 + goto ibmvnic_register_fail; 5835 + } 5836 + dev_info(&dev->dev, "ibmvnic registered\n"); 5837 + 5838 + complete(&adapter->probe_done); 5839 + 5936 5840 return 0; 5937 5841 5938 5842 ibmvnic_register_fail: ··· 5957 5841 ibmvnic_init_fail: 5958 5842 release_sub_crqs(adapter, 1); 5959 5843 release_crq_queue(adapter); 5844 + 5845 + /* cleanup worker thread after releasing CRQ so we don't get 5846 + * transport events (i.e new work items for the worker thread). 5847 + */ 5848 + adapter->state = VNIC_REMOVING; 5849 + complete(&adapter->probe_done); 5850 + flush_work(&adapter->ibmvnic_reset); 5851 + flush_delayed_work(&adapter->ibmvnic_delayed_reset); 5852 + 5853 + flush_reset_queue(adapter); 5854 + 5960 5855 mutex_destroy(&adapter->fw_lock); 5961 5856 free_netdev(netdev); 5962 5857
+1
drivers/net/ethernet/ibm/ibmvnic.h
··· 930 930 931 931 struct ibmvnic_tx_pool *tx_pool; 932 932 struct ibmvnic_tx_pool *tso_pool; 933 + struct completion probe_done; 933 934 struct completion init_done; 934 935 int init_done_rc; 935 936
+1
drivers/net/ethernet/intel/e1000e/hw.h
··· 630 630 bool disable_polarity_correction; 631 631 bool is_mdix; 632 632 bool polarity_correction; 633 + bool reset_disable; 633 634 bool speed_downgraded; 634 635 bool autoneg_wait_to_complete; 635 636 };
+6 -2
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 2050 2050 bool blocked = false; 2051 2051 int i = 0; 2052 2052 2053 + /* Check the PHY (LCD) reset flag */ 2054 + if (hw->phy.reset_disable) 2055 + return true; 2056 + 2053 2057 while ((blocked = !(er32(FWSM) & E1000_ICH_FWSM_RSPCIPHY)) && 2054 2058 (i++ < 30)) 2055 2059 usleep_range(10000, 11000); ··· 4140 4136 return ret_val; 4141 4137 4142 4138 if (!(data & valid_csum_mask)) { 4143 - e_dbg("NVM Checksum Invalid\n"); 4139 + e_dbg("NVM Checksum valid bit not set\n"); 4144 4140 4145 - if (hw->mac.type < e1000_pch_cnp) { 4141 + if (hw->mac.type < e1000_pch_tgp) { 4146 4142 data |= valid_csum_mask; 4147 4143 ret_val = e1000_write_nvm(hw, word, 1, &data); 4148 4144 if (ret_val)
+1
drivers/net/ethernet/intel/e1000e/ich8lan.h
··· 271 271 #define I217_CGFREG_ENABLE_MTA_RESET 0x0002 272 272 #define I217_MEMPWR PHY_REG(772, 26) 273 273 #define I217_MEMPWR_DISABLE_SMB_RELEASE 0x0010 274 + #define I217_MEMPWR_MOEM 0x1000 274 275 275 276 /* Receive Address Initial CRC Calculation */ 276 277 #define E1000_PCH_RAICC(_n) (0x05F50 + ((_n) * 4))
+26
drivers/net/ethernet/intel/e1000e/netdev.c
··· 6987 6987 struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); 6988 6988 struct e1000_adapter *adapter = netdev_priv(netdev); 6989 6989 struct pci_dev *pdev = to_pci_dev(dev); 6990 + struct e1000_hw *hw = &adapter->hw; 6991 + u16 phy_data; 6990 6992 int rc; 6993 + 6994 + if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID && 6995 + hw->mac.type >= e1000_pch_adp) { 6996 + /* Mask OEM Bits / Gig Disable / Restart AN (772_26[12] = 1) */ 6997 + e1e_rphy(hw, I217_MEMPWR, &phy_data); 6998 + phy_data |= I217_MEMPWR_MOEM; 6999 + e1e_wphy(hw, I217_MEMPWR, phy_data); 7000 + 7001 + /* Disable LCD reset */ 7002 + hw->phy.reset_disable = true; 7003 + } 6991 7004 6992 7005 e1000e_flush_lpic(pdev); 6993 7006 ··· 7023 7010 struct net_device *netdev = pci_get_drvdata(to_pci_dev(dev)); 7024 7011 struct e1000_adapter *adapter = netdev_priv(netdev); 7025 7012 struct pci_dev *pdev = to_pci_dev(dev); 7013 + struct e1000_hw *hw = &adapter->hw; 7014 + u16 phy_data; 7026 7015 int rc; 7027 7016 7028 7017 /* Introduce S0ix implementation */ ··· 7034 7019 rc = __e1000_resume(pdev); 7035 7020 if (rc) 7036 7021 return rc; 7022 + 7023 + if (er32(FWSM) & E1000_ICH_FWSM_FW_VALID && 7024 + hw->mac.type >= e1000_pch_adp) { 7025 + /* Unmask OEM Bits / Gig Disable / Restart AN 772_26[12] = 0 */ 7026 + e1e_rphy(hw, I217_MEMPWR, &phy_data); 7027 + phy_data &= ~I217_MEMPWR_MOEM; 7028 + e1e_wphy(hw, I217_MEMPWR, phy_data); 7029 + 7030 + /* Enable LCD reset */ 7031 + hw->phy.reset_disable = false; 7032 + } 7037 7033 7038 7034 return e1000e_pm_thaw(dev); 7039 7035 }
+2 -4
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 742 742 vsi = pf->vsi[vf->lan_vsi_idx]; 743 743 dev_info(&pf->pdev->dev, "vf %2d: VSI id=%d, seid=%d, qps=%d\n", 744 744 vf_id, vf->lan_vsi_id, vsi->seid, vf->num_queue_pairs); 745 - dev_info(&pf->pdev->dev, " num MDD=%lld, invalid msg=%lld, valid msg=%lld\n", 746 - vf->num_mdd_events, 747 - vf->num_invalid_msgs, 748 - vf->num_valid_msgs); 745 + dev_info(&pf->pdev->dev, " num MDD=%lld\n", 746 + vf->num_mdd_events); 749 747 } else { 750 748 dev_info(&pf->pdev->dev, "invalid VF id %d\n", vf_id); 751 749 }
+7 -50
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 1917 1917 /***********************virtual channel routines******************/ 1918 1918 1919 1919 /** 1920 - * i40e_vc_send_msg_to_vf_ex 1920 + * i40e_vc_send_msg_to_vf 1921 1921 * @vf: pointer to the VF info 1922 1922 * @v_opcode: virtual channel opcode 1923 1923 * @v_retval: virtual channel return value 1924 1924 * @msg: pointer to the msg buffer 1925 1925 * @msglen: msg length 1926 - * @is_quiet: true for not printing unsuccessful return values, false otherwise 1927 1926 * 1928 1927 * send msg to VF 1929 1928 **/ 1930 - static int i40e_vc_send_msg_to_vf_ex(struct i40e_vf *vf, u32 v_opcode, 1931 - u32 v_retval, u8 *msg, u16 msglen, 1932 - bool is_quiet) 1929 + static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode, 1930 + u32 v_retval, u8 *msg, u16 msglen) 1933 1931 { 1934 1932 struct i40e_pf *pf; 1935 1933 struct i40e_hw *hw; ··· 1942 1944 hw = &pf->hw; 1943 1945 abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id; 1944 1946 1945 - /* single place to detect unsuccessful return values */ 1946 - if (v_retval && !is_quiet) { 1947 - vf->num_invalid_msgs++; 1948 - dev_info(&pf->pdev->dev, "VF %d failed opcode %d, retval: %d\n", 1949 - vf->vf_id, v_opcode, v_retval); 1950 - if (vf->num_invalid_msgs > 1951 - I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED) { 1952 - dev_err(&pf->pdev->dev, 1953 - "Number of invalid messages exceeded for VF %d\n", 1954 - vf->vf_id); 1955 - dev_err(&pf->pdev->dev, "Use PF Control I/F to enable the VF\n"); 1956 - set_bit(I40E_VF_STATE_DISABLED, &vf->vf_states); 1957 - } 1958 - } else { 1959 - vf->num_valid_msgs++; 1960 - /* reset the invalid counter, if a valid message is received. 
*/ 1961 - vf->num_invalid_msgs = 0; 1962 - } 1963 - 1964 1947 aq_ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id, v_opcode, v_retval, 1965 1948 msg, msglen, NULL); 1966 1949 if (aq_ret) { ··· 1952 1973 } 1953 1974 1954 1975 return 0; 1955 - } 1956 - 1957 - /** 1958 - * i40e_vc_send_msg_to_vf 1959 - * @vf: pointer to the VF info 1960 - * @v_opcode: virtual channel opcode 1961 - * @v_retval: virtual channel return value 1962 - * @msg: pointer to the msg buffer 1963 - * @msglen: msg length 1964 - * 1965 - * send msg to VF 1966 - **/ 1967 - static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode, 1968 - u32 v_retval, u8 *msg, u16 msglen) 1969 - { 1970 - return i40e_vc_send_msg_to_vf_ex(vf, v_opcode, v_retval, 1971 - msg, msglen, false); 1972 1976 } 1973 1977 1974 1978 /** ··· 2784 2822 * i40e_check_vf_permission 2785 2823 * @vf: pointer to the VF info 2786 2824 * @al: MAC address list from virtchnl 2787 - * @is_quiet: set true for printing msg without opcode info, false otherwise 2788 2825 * 2789 2826 * Check that the given list of MAC addresses is allowed. Will return -EPERM 2790 2827 * if any address in the list is not valid. Checks the following conditions: ··· 2798 2837 * addresses might not be accurate. 
2799 2838 **/ 2800 2839 static inline int i40e_check_vf_permission(struct i40e_vf *vf, 2801 - struct virtchnl_ether_addr_list *al, 2802 - bool *is_quiet) 2840 + struct virtchnl_ether_addr_list *al) 2803 2841 { 2804 2842 struct i40e_pf *pf = vf->pf; 2805 2843 struct i40e_vsi *vsi = pf->vsi[vf->lan_vsi_idx]; ··· 2806 2846 int mac2add_cnt = 0; 2807 2847 int i; 2808 2848 2809 - *is_quiet = false; 2810 2849 for (i = 0; i < al->num_elements; i++) { 2811 2850 struct i40e_mac_filter *f; 2812 2851 u8 *addr = al->list[i].addr; ··· 2829 2870 !ether_addr_equal(addr, vf->default_lan_addr.addr)) { 2830 2871 dev_err(&pf->pdev->dev, 2831 2872 "VF attempting to override administratively set MAC address, bring down and up the VF interface to resume normal operation\n"); 2832 - *is_quiet = true; 2833 2873 return -EPERM; 2834 2874 } 2835 2875 ··· 2879 2921 (struct virtchnl_ether_addr_list *)msg; 2880 2922 struct i40e_pf *pf = vf->pf; 2881 2923 struct i40e_vsi *vsi = NULL; 2882 - bool is_quiet = false; 2883 2924 i40e_status ret = 0; 2884 2925 int i; 2885 2926 ··· 2895 2938 */ 2896 2939 spin_lock_bh(&vsi->mac_filter_hash_lock); 2897 2940 2898 - ret = i40e_check_vf_permission(vf, al, &is_quiet); 2941 + ret = i40e_check_vf_permission(vf, al); 2899 2942 if (ret) { 2900 2943 spin_unlock_bh(&vsi->mac_filter_hash_lock); 2901 2944 goto error_param; ··· 2933 2976 2934 2977 error_param: 2935 2978 /* send the response to the VF */ 2936 - return i40e_vc_send_msg_to_vf_ex(vf, VIRTCHNL_OP_ADD_ETH_ADDR, 2937 - ret, NULL, 0, is_quiet); 2979 + return i40e_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_ETH_ADDR, 2980 + ret, NULL, 0); 2938 2981 } 2939 2982 2940 2983 /**
-5
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
··· 10 10 11 11 #define I40E_VIRTCHNL_SUPPORTED_QTYPES 2 12 12 13 - #define I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED 10 14 - 15 13 #define I40E_VLAN_PRIORITY_SHIFT 13 16 14 #define I40E_VLAN_MASK 0xFFF 17 15 #define I40E_PRIORITY_MASK 0xE000 ··· 90 92 u8 num_queue_pairs; /* num of qps assigned to VF vsis */ 91 93 u8 num_req_queues; /* num of requested qps */ 92 94 u64 num_mdd_events; /* num of mdd events detected */ 93 - /* num of continuous malformed or invalid msgs detected */ 94 - u64 num_invalid_msgs; 95 - u64 num_valid_msgs; /* num of valid msgs detected */ 96 95 97 96 unsigned long vf_caps; /* vf's adv. capabilities */ 98 97 unsigned long vf_states; /* vf's runtime states */
+6 -1
drivers/net/ethernet/intel/iavf/iavf.h
··· 201 201 __IAVF_RUNNING, /* opened, working */ 202 202 }; 203 203 204 + enum iavf_critical_section_t { 205 + __IAVF_IN_REMOVE_TASK, /* device being removed */ 206 + }; 207 + 204 208 #define IAVF_CLOUD_FIELD_OMAC 0x01 205 209 #define IAVF_CLOUD_FIELD_IMAC 0x02 206 210 #define IAVF_CLOUD_FIELD_IVLAN 0x04 ··· 250 246 struct list_head mac_filter_list; 251 247 struct mutex crit_lock; 252 248 struct mutex client_lock; 253 - struct mutex remove_lock; 254 249 /* Lock to protect accesses to MAC and VLAN lists */ 255 250 spinlock_t mac_vlan_list_lock; 256 251 char misc_vector_name[IFNAMSIZ + 9]; ··· 287 284 #define IAVF_FLAG_LEGACY_RX BIT(15) 288 285 #define IAVF_FLAG_REINIT_ITR_NEEDED BIT(16) 289 286 #define IAVF_FLAG_QUEUES_DISABLED BIT(17) 287 + #define IAVF_FLAG_SETUP_NETDEV_FEATURES BIT(18) 288 + #define IAVF_FLAG_REINIT_MSIX_NEEDED BIT(20) 290 289 /* duplicates for common code */ 291 290 #define IAVF_FLAG_DCB_ENABLED 0 292 291 /* flags for admin queue service task */
+117 -55
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 302 302 rd32(hw, IAVF_VFINT_ICR01); 303 303 rd32(hw, IAVF_VFINT_ICR0_ENA1); 304 304 305 - /* schedule work on the private workqueue */ 306 - queue_work(iavf_wq, &adapter->adminq_task); 305 + if (adapter->state != __IAVF_REMOVE) 306 + /* schedule work on the private workqueue */ 307 + queue_work(iavf_wq, &adapter->adminq_task); 307 308 308 309 return IRQ_HANDLED; 309 310 } ··· 1137 1136 rss->state = IAVF_ADV_RSS_DEL_REQUEST; 1138 1137 spin_unlock_bh(&adapter->adv_rss_lock); 1139 1138 1140 - if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) && 1141 - adapter->state != __IAVF_RESETTING) { 1139 + if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) { 1142 1140 /* cancel any current operation */ 1143 1141 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 1144 1142 /* Schedule operations to close down the HW. Don't wait ··· 2120 2120 "Requested %d queues, but PF only gave us %d.\n", 2121 2121 num_req_queues, 2122 2122 adapter->vsi_res->num_queue_pairs); 2123 - adapter->flags |= IAVF_FLAG_REINIT_ITR_NEEDED; 2123 + adapter->flags |= IAVF_FLAG_REINIT_MSIX_NEEDED; 2124 2124 adapter->num_req_queues = adapter->vsi_res->num_queue_pairs; 2125 2125 iavf_schedule_reset(adapter); 2126 2126 ··· 2374 2374 struct iavf_hw *hw = &adapter->hw; 2375 2375 u32 reg_val; 2376 2376 2377 - if (!mutex_trylock(&adapter->crit_lock)) 2377 + if (!mutex_trylock(&adapter->crit_lock)) { 2378 + if (adapter->state == __IAVF_REMOVE) 2379 + return; 2380 + 2378 2381 goto restart_watchdog; 2382 + } 2379 2383 2380 2384 if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) 2381 2385 iavf_change_state(adapter, __IAVF_COMM_FAILED); 2382 2386 2383 - if (adapter->flags & IAVF_FLAG_RESET_NEEDED && 2384 - adapter->state != __IAVF_RESETTING) { 2385 - iavf_change_state(adapter, __IAVF_RESETTING); 2387 + if (adapter->flags & IAVF_FLAG_RESET_NEEDED) { 2386 2388 adapter->aq_required = 0; 2387 2389 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2390 + mutex_unlock(&adapter->crit_lock); 2391 + queue_work(iavf_wq, &adapter->reset_task); 
2392 + return; 2388 2393 } 2389 2394 2390 2395 switch (adapter->state) { ··· 2424 2419 msecs_to_jiffies(1)); 2425 2420 return; 2426 2421 case __IAVF_INIT_FAILED: 2422 + if (test_bit(__IAVF_IN_REMOVE_TASK, 2423 + &adapter->crit_section)) { 2424 + /* Do not update the state and do not reschedule 2425 + * watchdog task, iavf_remove should handle this state 2426 + * as it can loop forever 2427 + */ 2428 + mutex_unlock(&adapter->crit_lock); 2429 + return; 2430 + } 2427 2431 if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) { 2428 2432 dev_err(&adapter->pdev->dev, 2429 2433 "Failed to communicate with PF; waiting before retry\n"); ··· 2449 2435 queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ); 2450 2436 return; 2451 2437 case __IAVF_COMM_FAILED: 2438 + if (test_bit(__IAVF_IN_REMOVE_TASK, 2439 + &adapter->crit_section)) { 2440 + /* Set state to __IAVF_INIT_FAILED and perform remove 2441 + * steps. Remove IAVF_FLAG_PF_COMMS_FAILED so the task 2442 + * doesn't bring the state back to __IAVF_COMM_FAILED. 2443 + */ 2444 + iavf_change_state(adapter, __IAVF_INIT_FAILED); 2445 + adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED; 2446 + mutex_unlock(&adapter->crit_lock); 2447 + return; 2448 + } 2452 2449 reg_val = rd32(hw, IAVF_VFGEN_RSTAT) & 2453 2450 IAVF_VFGEN_RSTAT_VFR_STATE_MASK; 2454 2451 if (reg_val == VIRTCHNL_VFR_VFACTIVE || ··· 2532 2507 schedule_delayed_work(&adapter->client_task, msecs_to_jiffies(5)); 2533 2508 mutex_unlock(&adapter->crit_lock); 2534 2509 restart_watchdog: 2535 - queue_work(iavf_wq, &adapter->adminq_task); 2510 + if (adapter->state >= __IAVF_DOWN) 2511 + queue_work(iavf_wq, &adapter->adminq_task); 2536 2512 if (adapter->aq_required) 2537 2513 queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2538 2514 msecs_to_jiffies(20)); ··· 2627 2601 /* When device is being removed it doesn't make sense to run the reset 2628 2602 * task, just return in such a case. 
2629 2603 */ 2630 - if (mutex_is_locked(&adapter->remove_lock)) 2631 - return; 2604 + if (!mutex_trylock(&adapter->crit_lock)) { 2605 + if (adapter->state != __IAVF_REMOVE) 2606 + queue_work(iavf_wq, &adapter->reset_task); 2632 2607 2633 - if (iavf_lock_timeout(&adapter->crit_lock, 200)) { 2634 - schedule_work(&adapter->reset_task); 2635 2608 return; 2636 2609 } 2610 + 2637 2611 while (!mutex_trylock(&adapter->client_lock)) 2638 2612 usleep_range(500, 1000); 2639 2613 if (CLIENT_ENABLED(adapter)) { ··· 2688 2662 reg_val); 2689 2663 iavf_disable_vf(adapter); 2690 2664 mutex_unlock(&adapter->client_lock); 2665 + mutex_unlock(&adapter->crit_lock); 2691 2666 return; /* Do not attempt to reinit. It's dead, Jim. */ 2692 2667 } 2693 2668 ··· 2697 2670 * ndo_open() returning, so we can't assume it means all our open 2698 2671 * tasks have finished, since we're not holding the rtnl_lock here. 2699 2672 */ 2700 - running = ((adapter->state == __IAVF_RUNNING) || 2701 - (adapter->state == __IAVF_RESETTING)); 2673 + running = adapter->state == __IAVF_RUNNING; 2702 2674 2703 2675 if (running) { 2704 2676 netdev->flags &= ~IFF_UP; ··· 2727 2701 err); 2728 2702 adapter->aq_required = 0; 2729 2703 2730 - if (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED) { 2704 + if ((adapter->flags & IAVF_FLAG_REINIT_MSIX_NEEDED) || 2705 + (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED)) { 2731 2706 err = iavf_reinit_interrupt_scheme(adapter); 2732 2707 if (err) 2733 2708 goto reset_err; ··· 2800 2773 if (err) 2801 2774 goto reset_err; 2802 2775 2803 - if (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED) { 2776 + if ((adapter->flags & IAVF_FLAG_REINIT_MSIX_NEEDED) || 2777 + (adapter->flags & IAVF_FLAG_REINIT_ITR_NEEDED)) { 2804 2778 err = iavf_request_traffic_irqs(adapter, netdev->name); 2805 2779 if (err) 2806 2780 goto reset_err; 2807 2781 2808 - adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; 2782 + adapter->flags &= ~IAVF_FLAG_REINIT_MSIX_NEEDED; 2809 2783 } 2810 2784 2811 2785 
iavf_configure(adapter); ··· 2821 2793 iavf_change_state(adapter, __IAVF_DOWN); 2822 2794 wake_up(&adapter->down_waitqueue); 2823 2795 } 2796 + 2797 + adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; 2798 + 2824 2799 mutex_unlock(&adapter->client_lock); 2825 2800 mutex_unlock(&adapter->crit_lock); 2826 2801 ··· 2857 2826 if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) 2858 2827 goto out; 2859 2828 2829 + if (!mutex_trylock(&adapter->crit_lock)) { 2830 + if (adapter->state == __IAVF_REMOVE) 2831 + return; 2832 + 2833 + queue_work(iavf_wq, &adapter->adminq_task); 2834 + goto out; 2835 + } 2836 + 2860 2837 event.buf_len = IAVF_MAX_AQ_BUF_SIZE; 2861 2838 event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL); 2862 2839 if (!event.msg_buf) 2863 2840 goto out; 2864 2841 2865 - if (iavf_lock_timeout(&adapter->crit_lock, 200)) 2866 - goto freedom; 2867 2842 do { 2868 2843 ret = iavf_clean_arq_element(hw, &event, &pending); 2869 2844 v_op = (enum virtchnl_ops)le32_to_cpu(event.desc.cookie_high); ··· 2885 2848 } while (pending); 2886 2849 mutex_unlock(&adapter->crit_lock); 2887 2850 2851 + if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES)) { 2852 + if (adapter->netdev_registered || 2853 + !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) { 2854 + struct net_device *netdev = adapter->netdev; 2855 + 2856 + rtnl_lock(); 2857 + netdev_update_features(netdev); 2858 + rtnl_unlock(); 2859 + /* Request VLAN offload settings */ 2860 + if (VLAN_V2_ALLOWED(adapter)) 2861 + iavf_set_vlan_offload_features 2862 + (adapter, 0, netdev->features); 2863 + 2864 + iavf_set_queue_vlan_tag_loc(adapter); 2865 + } 2866 + 2867 + adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES; 2868 + } 2888 2869 if ((adapter->flags & 2889 2870 (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) || 2890 2871 adapter->state == __IAVF_RESETTING) ··· 3855 3800 struct iavf_adapter *adapter = netdev_priv(netdev); 3856 3801 int status; 3857 3802 3858 - if (adapter->state <= __IAVF_DOWN_PENDING) 3859 - return 0; 
3803 + mutex_lock(&adapter->crit_lock); 3860 3804 3861 - while (!mutex_trylock(&adapter->crit_lock)) 3862 - usleep_range(500, 1000); 3805 + if (adapter->state <= __IAVF_DOWN_PENDING) { 3806 + mutex_unlock(&adapter->crit_lock); 3807 + return 0; 3808 + } 3863 3809 3864 3810 set_bit(__IAVF_VSI_DOWN, adapter->vsi.state); 3865 3811 if (CLIENT_ENABLED(adapter)) ··· 3909 3853 iavf_notify_client_l2_params(&adapter->vsi); 3910 3854 adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED; 3911 3855 } 3912 - adapter->flags |= IAVF_FLAG_RESET_NEEDED; 3913 - queue_work(iavf_wq, &adapter->reset_task); 3856 + 3857 + if (netif_running(netdev)) { 3858 + adapter->flags |= IAVF_FLAG_RESET_NEEDED; 3859 + queue_work(iavf_wq, &adapter->reset_task); 3860 + } 3914 3861 3915 3862 return 0; 3916 3863 } ··· 4490 4431 */ 4491 4432 mutex_init(&adapter->crit_lock); 4492 4433 mutex_init(&adapter->client_lock); 4493 - mutex_init(&adapter->remove_lock); 4494 4434 mutex_init(&hw->aq.asq_mutex); 4495 4435 mutex_init(&hw->aq.arq_mutex); 4496 4436 ··· 4605 4547 static void iavf_remove(struct pci_dev *pdev) 4606 4548 { 4607 4549 struct iavf_adapter *adapter = iavf_pdev_to_adapter(pdev); 4608 - enum iavf_state_t prev_state = adapter->last_state; 4609 4550 struct net_device *netdev = adapter->netdev; 4610 4551 struct iavf_fdir_fltr *fdir, *fdirtmp; 4611 4552 struct iavf_vlan_filter *vlf, *vlftmp; ··· 4613 4556 struct iavf_cloud_filter *cf, *cftmp; 4614 4557 struct iavf_hw *hw = &adapter->hw; 4615 4558 int err; 4616 - /* Indicate we are in remove and not to run reset_task */ 4617 - mutex_lock(&adapter->remove_lock); 4618 - cancel_work_sync(&adapter->reset_task); 4559 + 4560 + set_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section); 4561 + /* Wait until port initialization is complete. 4562 + * There are flows where register/unregister netdev may race. 
4563 + */ 4564 + while (1) { 4565 + mutex_lock(&adapter->crit_lock); 4566 + if (adapter->state == __IAVF_RUNNING || 4567 + adapter->state == __IAVF_DOWN || 4568 + adapter->state == __IAVF_INIT_FAILED) { 4569 + mutex_unlock(&adapter->crit_lock); 4570 + break; 4571 + } 4572 + 4573 + mutex_unlock(&adapter->crit_lock); 4574 + usleep_range(500, 1000); 4575 + } 4619 4576 cancel_delayed_work_sync(&adapter->watchdog_task); 4620 - cancel_delayed_work_sync(&adapter->client_task); 4577 + 4621 4578 if (adapter->netdev_registered) { 4622 - unregister_netdev(netdev); 4579 + rtnl_lock(); 4580 + unregister_netdevice(netdev); 4623 4581 adapter->netdev_registered = false; 4582 + rtnl_unlock(); 4624 4583 } 4625 4584 if (CLIENT_ALLOWED(adapter)) { 4626 4585 err = iavf_lan_del_device(adapter); ··· 4645 4572 err); 4646 4573 } 4647 4574 4575 + mutex_lock(&adapter->crit_lock); 4576 + dev_info(&adapter->pdev->dev, "Remove device\n"); 4577 + iavf_change_state(adapter, __IAVF_REMOVE); 4578 + 4648 4579 iavf_request_reset(adapter); 4649 4580 msleep(50); 4650 4581 /* If the FW isn't responding, kick it once, but only once. 
*/ ··· 4656 4579 iavf_request_reset(adapter); 4657 4580 msleep(50); 4658 4581 } 4659 - if (iavf_lock_timeout(&adapter->crit_lock, 5000)) 4660 - dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", __FUNCTION__); 4661 4582 4662 - dev_info(&adapter->pdev->dev, "Removing device\n"); 4583 + iavf_misc_irq_disable(adapter); 4663 4584 /* Shut down all the garbage mashers on the detention level */ 4664 - iavf_change_state(adapter, __IAVF_REMOVE); 4585 + cancel_work_sync(&adapter->reset_task); 4586 + cancel_delayed_work_sync(&adapter->watchdog_task); 4587 + cancel_work_sync(&adapter->adminq_task); 4588 + cancel_delayed_work_sync(&adapter->client_task); 4589 + 4665 4590 adapter->aq_required = 0; 4666 4591 adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; 4667 4592 4668 4593 iavf_free_all_tx_resources(adapter); 4669 4594 iavf_free_all_rx_resources(adapter); 4670 - iavf_misc_irq_disable(adapter); 4671 4595 iavf_free_misc_irq(adapter); 4672 - 4673 - /* In case we enter iavf_remove from erroneous state, free traffic irqs 4674 - * here, so as to not cause a kernel crash, when calling 4675 - * iavf_reset_interrupt_capability. 4676 - */ 4677 - if ((adapter->last_state == __IAVF_RESETTING && 4678 - prev_state != __IAVF_DOWN) || 4679 - (adapter->last_state == __IAVF_RUNNING && 4680 - !(netdev->flags & IFF_UP))) 4681 - iavf_free_traffic_irqs(adapter); 4682 4596 4683 4597 iavf_reset_interrupt_capability(adapter); 4684 4598 iavf_free_q_vectors(adapter); 4685 - 4686 - cancel_delayed_work_sync(&adapter->watchdog_task); 4687 - 4688 - cancel_work_sync(&adapter->adminq_task); 4689 4599 4690 4600 iavf_free_rss(adapter); 4691 4601 ··· 4685 4621 mutex_destroy(&adapter->client_lock); 4686 4622 mutex_unlock(&adapter->crit_lock); 4687 4623 mutex_destroy(&adapter->crit_lock); 4688 - mutex_unlock(&adapter->remove_lock); 4689 - mutex_destroy(&adapter->remove_lock); 4690 4624 4691 4625 iounmap(hw->hw_addr); 4692 4626 pci_release_regions(pdev);
+41 -23
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 1835 1835 } 1836 1836 1837 1837 /** 1838 + * iavf_netdev_features_vlan_strip_set - update vlan strip status 1839 + * @netdev: ptr to netdev being adjusted 1840 + * @enable: enable or disable vlan strip 1841 + * 1842 + * Helper function to change vlan strip status in netdev->features. 1843 + */ 1844 + static void iavf_netdev_features_vlan_strip_set(struct net_device *netdev, 1845 + const bool enable) 1846 + { 1847 + if (enable) 1848 + netdev->features |= NETIF_F_HW_VLAN_CTAG_RX; 1849 + else 1850 + netdev->features &= ~NETIF_F_HW_VLAN_CTAG_RX; 1851 + } 1852 + 1853 + /** 1838 1854 * iavf_virtchnl_completion 1839 1855 * @adapter: adapter structure 1840 1856 * @v_opcode: opcode sent by PF ··· 2073 2057 } 2074 2058 break; 2075 2059 case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING: 2060 + dev_warn(&adapter->pdev->dev, "Changing VLAN Stripping is not allowed when Port VLAN is configured\n"); 2061 + /* Vlan stripping could not be enabled by ethtool. 2062 + * Disable it in netdev->features. 2063 + */ 2064 + iavf_netdev_features_vlan_strip_set(netdev, false); 2065 + break; 2076 2066 case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING: 2077 2067 dev_warn(&adapter->pdev->dev, "Changing VLAN Stripping is not allowed when Port VLAN is configured\n"); 2068 + /* Vlan stripping could not be disabled by ethtool. 2069 + * Enable it in netdev->features. 
2070 + */ 2071 + iavf_netdev_features_vlan_strip_set(netdev, true); 2078 2072 break; 2079 2073 default: 2080 2074 dev_err(&adapter->pdev->dev, "PF returned error %d (%s) to our request %d\n", ··· 2172 2146 sizeof(adapter->vlan_v2_caps))); 2173 2147 2174 2148 iavf_process_config(adapter); 2175 - 2176 - /* unlock crit_lock before acquiring rtnl_lock as other 2177 - * processes holding rtnl_lock could be waiting for the same 2178 - * crit_lock 2179 - */ 2180 - mutex_unlock(&adapter->crit_lock); 2181 - /* VLAN capabilities can change during VFR, so make sure to 2182 - * update the netdev features with the new capabilities 2183 - */ 2184 - rtnl_lock(); 2185 - netdev_update_features(netdev); 2186 - rtnl_unlock(); 2187 - if (iavf_lock_timeout(&adapter->crit_lock, 10000)) 2188 - dev_warn(&adapter->pdev->dev, "failed to acquire crit_lock in %s\n", 2189 - __FUNCTION__); 2190 - 2191 - /* Request VLAN offload settings */ 2192 - if (VLAN_V2_ALLOWED(adapter)) 2193 - iavf_set_vlan_offload_features(adapter, 0, 2194 - netdev->features); 2195 - 2196 - iavf_set_queue_vlan_tag_loc(adapter); 2197 - 2149 + adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES; 2198 2150 } 2199 2151 break; 2200 2152 case VIRTCHNL_OP_ENABLE_QUEUES: ··· 2337 2333 } 2338 2334 spin_unlock_bh(&adapter->adv_rss_lock); 2339 2335 } 2336 + break; 2337 + case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING: 2338 + /* PF enabled vlan strip on this VF. 2339 + * Update netdev->features if needed to be in sync with ethtool. 2340 + */ 2341 + if (!v_retval) 2342 + iavf_netdev_features_vlan_strip_set(netdev, true); 2343 + break; 2344 + case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING: 2345 + /* PF disabled vlan strip on this VF. 2346 + * Update netdev->features if needed to be in sync with ethtool. 2347 + */ 2348 + if (!v_retval) 2349 + iavf_netdev_features_vlan_strip_set(netdev, false); 2340 2350 break; 2341 2351 default: 2342 2352 if (adapter->current_op && (v_opcode != adapter->current_op))
+11 -1
drivers/net/ethernet/intel/ice/ice.h
··· 483 483 ICE_FLAG_MDD_AUTO_RESET_VF, 484 484 ICE_FLAG_LINK_LENIENT_MODE_ENA, 485 485 ICE_FLAG_PLUG_AUX_DEV, 486 + ICE_FLAG_MTU_CHANGED, 486 487 ICE_PF_FLAGS_NBITS /* must be last */ 487 488 }; 488 489 ··· 898 897 */ 899 898 static inline void ice_clear_rdma_cap(struct ice_pf *pf) 900 899 { 901 - ice_unplug_aux_dev(pf); 900 + /* We can directly unplug aux device here only if the flag bit 901 + * ICE_FLAG_PLUG_AUX_DEV is not set because ice_unplug_aux_dev() 902 + * could race with ice_plug_aux_dev() called from 903 + * ice_service_task(). In this case we only clear that bit now and 904 + * aux device will be unplugged later once ice_plug_aux_dev() 905 + * called from ice_service_task() finishes (see ice_service_task()). 906 + */ 907 + if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) 908 + ice_unplug_aux_dev(pf); 909 + 902 910 clear_bit(ICE_FLAG_RDMA_ENA, pf->flags); 903 911 clear_bit(ICE_FLAG_AUX_ENA, pf->flags); 904 912 }
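The ice_clear_rdma_cap() hunk above resolves an unplug/plug race by treating the ICE_FLAG_PLUG_AUX_DEV bit as a cancellable request: atomically clearing it either cancels a plug that is still pending, or, if none was pending, allows a direct unplug. Below is a minimal userspace analogue of that handshake, using C11 `atomic_exchange()` in place of the kernel's `test_and_clear_bit()`; all names here are illustrative sketches, not driver API.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative analogue of the ICE_FLAG_PLUG_AUX_DEV handshake;
 * atomic_exchange(..., false) plays the role of test_and_clear_bit(). */
static atomic_bool plug_requested;
static bool device_plugged;

static void request_plug(void)
{
	atomic_store(&plug_requested, true);
}

/* Service-task side: plug, then re-check the flag. If it was cleared
 * while we were plugging, an unplug was requested - honor it. */
static void service_task(void)
{
	if (atomic_load(&plug_requested)) {
		device_plugged = true;
		if (!atomic_exchange(&plug_requested, false))
			device_plugged = false;
	}
}

/* Capability-clearing side: unplug directly only when no plug request
 * is pending; otherwise just cancel the request and let the service
 * task finish the job. */
static void clear_rdma_cap(void)
{
	if (!atomic_exchange(&plug_requested, false) && device_plugged)
		device_plugged = false;
}
```

Whichever side runs last sees the flag already cleared, so exactly one of them ends up responsible for the unplug.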
+1 -1
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 2298 2298 if (err) 2299 2299 goto done; 2300 2300 2301 - curr_link_speed = pi->phy.link_info.link_speed; 2301 + curr_link_speed = pi->phy.curr_user_speed_req; 2302 2302 adv_link_speed = ice_ksettings_find_adv_link_speed(ks); 2303 2303 2304 2304 /* If speed didn't get set, set it to what it currently is.
+26 -17
drivers/net/ethernet/intel/ice/ice_main.c
··· 2255 2255 return; 2256 2256 } 2257 2257 2258 - if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) 2258 + if (test_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) { 2259 + /* Plug aux device per request */ 2259 2260 ice_plug_aux_dev(pf); 2261 + 2262 + /* Mark plugging as done but check whether unplug was 2263 + * requested during ice_plug_aux_dev() call 2264 + * (e.g. from ice_clear_rdma_cap()) and if so then 2265 + * unplug aux device. 2266 + */ 2267 + if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) 2268 + ice_unplug_aux_dev(pf); 2269 + } 2270 + 2271 + if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) { 2272 + struct iidc_event *event; 2273 + 2274 + event = kzalloc(sizeof(*event), GFP_KERNEL); 2275 + if (event) { 2276 + set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type); 2277 + ice_send_event_to_aux(pf, event); 2278 + kfree(event); 2279 + } 2280 + } 2260 2281 2261 2282 ice_clean_adminq_subtask(pf); 2262 2283 ice_check_media_subtask(pf); ··· 3044 3023 struct iidc_event *event; 3045 3024 3046 3025 ena_mask &= ~ICE_AUX_CRIT_ERR; 3047 - event = kzalloc(sizeof(*event), GFP_KERNEL); 3026 + event = kzalloc(sizeof(*event), GFP_ATOMIC); 3048 3027 if (event) { 3049 3028 set_bit(IIDC_EVENT_CRIT_ERR, event->type); 3050 3029 /* report the entire OICR value to AUX driver */ ··· 6843 6822 struct ice_netdev_priv *np = netdev_priv(netdev); 6844 6823 struct ice_vsi *vsi = np->vsi; 6845 6824 struct ice_pf *pf = vsi->back; 6846 - struct iidc_event *event; 6847 6825 u8 count = 0; 6848 6826 int err = 0; 6849 6827 ··· 6877 6857 return -EBUSY; 6878 6858 } 6879 6859 6880 - event = kzalloc(sizeof(*event), GFP_KERNEL); 6881 - if (!event) 6882 - return -ENOMEM; 6883 - 6884 - set_bit(IIDC_EVENT_BEFORE_MTU_CHANGE, event->type); 6885 - ice_send_event_to_aux(pf, event); 6886 - clear_bit(IIDC_EVENT_BEFORE_MTU_CHANGE, event->type); 6887 - 6888 6860 netdev->mtu = (unsigned int)new_mtu; 6889 6861 6890 6862 /* if VSI is up, bring it down and then back up */ ··· 6884 6872 err = 
ice_down(vsi); 6885 6873 if (err) { 6886 6874 netdev_err(netdev, "change MTU if_down err %d\n", err); 6887 - goto event_after; 6875 + return err; 6888 6876 } 6889 6877 6890 6878 err = ice_up(vsi); 6891 6879 if (err) { 6892 6880 netdev_err(netdev, "change MTU if_up err %d\n", err); 6893 - goto event_after; 6881 + return err; 6894 6882 } 6895 6883 } 6896 6884 6897 6885 netdev_dbg(netdev, "changed MTU to %d\n", new_mtu); 6898 - event_after: 6899 - set_bit(IIDC_EVENT_AFTER_MTU_CHANGE, event->type); 6900 - ice_send_event_to_aux(pf, event); 6901 - kfree(event); 6886 + set_bit(ICE_FLAG_MTU_CHANGED, pf->flags); 6902 6887 6903 6888 return err; 6904 6889 }
-18
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
··· 2182 2182 2183 2183 dev = ice_pf_to_dev(pf); 2184 2184 2185 - /* single place to detect unsuccessful return values */ 2186 - if (v_retval) { 2187 - vf->num_inval_msgs++; 2188 - dev_info(dev, "VF %d failed opcode %d, retval: %d\n", vf->vf_id, 2189 - v_opcode, v_retval); 2190 - if (vf->num_inval_msgs > ICE_DFLT_NUM_INVAL_MSGS_ALLOWED) { 2191 - dev_err(dev, "Number of invalid messages exceeded for VF %d\n", 2192 - vf->vf_id); 2193 - dev_err(dev, "Use PF Control I/F to enable the VF\n"); 2194 - set_bit(ICE_VF_STATE_DIS, vf->vf_states); 2195 - return -EIO; 2196 - } 2197 - } else { 2198 - vf->num_valid_msgs++; 2199 - /* reset the invalid counter, if a valid message is received. */ 2200 - vf->num_inval_msgs = 0; 2201 - } 2202 - 2203 2185 aq_ret = ice_aq_send_msg_to_vf(&pf->hw, vf->vf_id, v_opcode, v_retval, 2204 2186 msg, msglen, NULL); 2205 2187 if (aq_ret && pf->hw.mailboxq.sq_last_status != ICE_AQ_RC_ENOSYS) {
-3
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
··· 14 14 #define ICE_MAX_MACADDR_PER_VF 18 15 15 16 16 /* Malicious Driver Detection */ 17 - #define ICE_DFLT_NUM_INVAL_MSGS_ALLOWED 10 18 17 #define ICE_MDD_EVENTS_THRESHOLD 30 19 18 20 19 /* Static VF transaction/status register def */ ··· 133 134 unsigned int max_tx_rate; /* Maximum Tx bandwidth limit in Mbps */ 134 135 DECLARE_BITMAP(vf_states, ICE_VF_STATES_NBITS); /* VF runtime states */ 135 136 136 - u64 num_inval_msgs; /* number of continuous invalid msgs */ 137 - u64 num_valid_msgs; /* number of valid msgs detected */ 138 137 unsigned long vf_caps; /* VF's adv. capabilities */ 139 138 u8 num_req_qs; /* num of queue pairs requested by VF */ 140 139 u16 num_mac;
-4
drivers/net/ethernet/intel/igc/igc_phy.c
··· 746 746 if (ret_val) 747 747 return ret_val; 748 748 ret_val = igc_write_phy_reg_mdic(hw, offset, data); 749 - if (ret_val) 750 - return ret_val; 751 749 hw->phy.ops.release(hw); 752 750 } else { 753 751 ret_val = igc_write_xmdio_reg(hw, (u16)offset, dev_addr, ··· 777 779 if (ret_val) 778 780 return ret_val; 779 781 ret_val = igc_read_phy_reg_mdic(hw, offset, data); 780 - if (ret_val) 781 - return ret_val; 782 782 hw->phy.ops.release(hw); 783 783 } else { 784 784 ret_val = igc_read_xmdio_reg(hw, (u16)offset, dev_addr,
+4 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
··· 390 390 u32 cmd_type; 391 391 392 392 while (budget-- > 0) { 393 - if (unlikely(!ixgbe_desc_unused(xdp_ring)) || 394 - !netif_carrier_ok(xdp_ring->netdev)) { 393 + if (unlikely(!ixgbe_desc_unused(xdp_ring))) { 395 394 work_done = false; 396 395 break; 397 396 } 397 + 398 + if (!netif_carrier_ok(xdp_ring->netdev)) 399 + break; 398 400 399 401 if (!xsk_tx_peek_desc(pool, &desc)) 400 402 break;
+1 -1
drivers/net/ethernet/lantiq_xrx200.c
··· 260 260 261 261 if (ctl & LTQ_DMA_EOP) { 262 262 ch->skb_head->protocol = eth_type_trans(ch->skb_head, net_dev); 263 - netif_receive_skb(ch->skb_head); 264 263 net_dev->stats.rx_packets++; 265 264 net_dev->stats.rx_bytes += ch->skb_head->len; 265 + netif_receive_skb(ch->skb_head); 266 266 ch->skb_head = NULL; 267 267 ch->skb_tail = NULL; 268 268 ret = XRX200_DMA_PACKET_COMPLETE;
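The lantiq_xrx200 hunk above is an ordering fix: once the skb is handed to `netif_receive_skb()` the driver no longer owns it, so `skb_head->len` has to be read for the byte counter before the handoff. The same ownership rule in a small standalone sketch; `pkt`, `deliver()` and `receive()` are stand-ins invented for illustration, not kernel API.

```c
#include <assert.h>
#include <stdlib.h>

struct pkt {
	size_t len;
};

static size_t rx_bytes;

/* Stand-in for netif_receive_skb(): takes ownership and may free the
 * buffer immediately, so the caller must not touch it afterwards. */
static void deliver(struct pkt *p)
{
	free(p);
}

static void receive(struct pkt *p)
{
	rx_bytes += p->len;	/* account first, while we still own p */
	deliver(p);		/* then hand off; p is dead from here on */
}
```

Swapping the two statements in `receive()` would be exactly the use-after-free the patch removes.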
+1
drivers/net/ethernet/marvell/prestera/prestera_main.c
··· 554 554 dev_info(prestera_dev(sw), "using random base mac address\n"); 555 555 } 556 556 of_node_put(base_mac_np); 557 + of_node_put(np); 557 558 558 559 return prestera_hw_switch_mac_set(sw, sw->base_mac); 559 560 }
+8 -7
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 131 131 132 132 static void cmd_free_index(struct mlx5_cmd *cmd, int idx) 133 133 { 134 - unsigned long flags; 135 - 136 - spin_lock_irqsave(&cmd->alloc_lock, flags); 134 + lockdep_assert_held(&cmd->alloc_lock); 137 135 set_bit(idx, &cmd->bitmask); 138 - spin_unlock_irqrestore(&cmd->alloc_lock, flags); 139 136 } 140 137 141 138 static void cmd_ent_get(struct mlx5_cmd_work_ent *ent) ··· 142 145 143 146 static void cmd_ent_put(struct mlx5_cmd_work_ent *ent) 144 147 { 148 + struct mlx5_cmd *cmd = ent->cmd; 149 + unsigned long flags; 150 + 151 + spin_lock_irqsave(&cmd->alloc_lock, flags); 145 152 if (!refcount_dec_and_test(&ent->refcnt)) 146 - return; 153 + goto out; 147 154 148 155 if (ent->idx >= 0) { 149 - struct mlx5_cmd *cmd = ent->cmd; 150 - 151 156 cmd_free_index(cmd, ent->idx); 152 157 up(ent->page_queue ? &cmd->pages_sem : &cmd->sem); 153 158 } 154 159 155 160 cmd_free_ent(ent); 161 + out: 162 + spin_unlock_irqrestore(&cmd->alloc_lock, flags); 156 163 } 157 164 158 165 static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx)
-3
drivers/net/ethernet/mellanox/mlx5/core/en/tir.c
··· 88 88 (MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ - rough_max_l2_l3_hdr_sz) >> 8); 89 89 MLX5_SET(tirc, tirc, lro_timeout_period_usecs, pkt_merge_param->timeout); 90 90 break; 91 - case MLX5E_PACKET_MERGE_SHAMPO: 92 - MLX5_SET(tirc, tirc, packet_merge_mask, MLX5_TIRC_PACKET_MERGE_MASK_SHAMPO); 93 - break; 94 91 default: 95 92 break; 96 93 }
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 3616 3616 goto out; 3617 3617 } 3618 3618 3619 - err = mlx5e_safe_switch_params(priv, &new_params, 3620 - mlx5e_modify_tirs_packet_merge_ctx, NULL, reset); 3619 + err = mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, reset); 3621 3620 out: 3622 3621 mutex_unlock(&priv->state_lock); 3623 3622 return err;
+8 -3
drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
··· 126 126 return; 127 127 } 128 128 129 + /* Handle multipath entry with lower priority value */ 130 + if (mp->mfi && mp->mfi != fi && fi->fib_priority >= mp->mfi->fib_priority) 131 + return; 132 + 129 133 /* Handle add/replace event */ 130 134 nhs = fib_info_num_path(fi); 131 135 if (nhs == 1) { ··· 139 135 int i = mlx5_lag_dev_get_netdev_idx(ldev, nh_dev); 140 136 141 137 if (i < 0) 142 - i = MLX5_LAG_NORMAL_AFFINITY; 143 - else 144 - ++i; 138 + return; 145 139 140 + i++; 146 141 mlx5_lag_set_port_affinity(ldev, i); 147 142 } 143 + 144 + mp->mfi = fi; 148 145 return; 149 146 } 150 147
-3
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
··· 121 121 122 122 u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains) 123 123 { 124 - if (!mlx5_chains_prios_supported(chains)) 125 - return 1; 126 - 127 124 if (mlx5_chains_ignore_flow_level_supported(chains)) 128 125 return UINT_MAX; 129 126
+2
drivers/net/ethernet/microchip/sparx5/sparx5_main.h
··· 16 16 #include <linux/phylink.h> 17 17 #include <linux/hrtimer.h> 18 18 19 + #include "sparx5_main_regs.h" 20 + 19 21 /* Target chip type */ 20 22 enum spx5_target_chiptype { 21 23 SPX5_TARGET_CT_7546 = 0x7546, /* SparX-5-64 Enterprise */
+10 -10
drivers/net/ethernet/microchip/sparx5/sparx5_vlan.c
··· 58 58 struct sparx5 *sparx5 = port->sparx5; 59 59 int ret; 60 60 61 - /* Make the port a member of the VLAN */ 62 - set_bit(port->portno, sparx5->vlan_mask[vid]); 63 - ret = sparx5_vlant_set_mask(sparx5, vid); 64 - if (ret) 65 - return ret; 66 - 67 - /* Default ingress vlan classification */ 68 - if (pvid) 69 - port->pvid = vid; 70 - 71 61 /* Untagged egress vlan classification */ 72 62 if (untagged && port->vid != vid) { 73 63 if (port->vid) { ··· 68 78 } 69 79 port->vid = vid; 70 80 } 81 + 82 + /* Make the port a member of the VLAN */ 83 + set_bit(port->portno, sparx5->vlan_mask[vid]); 84 + ret = sparx5_vlant_set_mask(sparx5, vid); 85 + if (ret) 86 + return ret; 87 + 88 + /* Default ingress vlan classification */ 89 + if (pvid) 90 + port->pvid = vid; 71 91 72 92 sparx5_vlan_port_apply(sparx5, port); 73 93
+4 -1
drivers/net/ethernet/nxp/lpc_eth.c
··· 1471 1471 { 1472 1472 struct net_device *ndev = platform_get_drvdata(pdev); 1473 1473 struct netdata_local *pldat; 1474 + int ret; 1474 1475 1475 1476 if (device_may_wakeup(&pdev->dev)) 1476 1477 disable_irq_wake(ndev->irq); ··· 1481 1480 pldat = netdev_priv(ndev); 1482 1481 1483 1482 /* Enable interface clock */ 1484 - clk_enable(pldat->clk); 1483 + ret = clk_enable(pldat->clk); 1484 + if (ret) 1485 + return ret; 1485 1486 1486 1487 /* Reset and initialize */ 1487 1488 __lpc_eth_reset(pldat);
+11 -7
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 3806 3806 return found; 3807 3807 } 3808 3808 3809 - static void qed_iov_get_link(struct qed_hwfn *p_hwfn, 3810 - u16 vfid, 3811 - struct qed_mcp_link_params *p_params, 3812 - struct qed_mcp_link_state *p_link, 3813 - struct qed_mcp_link_capabilities *p_caps) 3809 + static int qed_iov_get_link(struct qed_hwfn *p_hwfn, 3810 + u16 vfid, 3811 + struct qed_mcp_link_params *p_params, 3812 + struct qed_mcp_link_state *p_link, 3813 + struct qed_mcp_link_capabilities *p_caps) 3814 3814 { 3815 3815 struct qed_vf_info *p_vf = qed_iov_get_vf_info(p_hwfn, 3816 3816 vfid, ··· 3818 3818 struct qed_bulletin_content *p_bulletin; 3819 3819 3820 3820 if (!p_vf) 3821 - return; 3821 + return -EINVAL; 3822 3822 3823 3823 p_bulletin = p_vf->bulletin.p_virt; 3824 3824 ··· 3828 3828 __qed_vf_get_link_state(p_hwfn, p_link, p_bulletin); 3829 3829 if (p_caps) 3830 3830 __qed_vf_get_link_caps(p_hwfn, p_caps, p_bulletin); 3831 + return 0; 3831 3832 } 3832 3833 3833 3834 static int ··· 4687 4686 struct qed_public_vf_info *vf_info; 4688 4687 struct qed_mcp_link_state link; 4689 4688 u32 tx_rate; 4689 + int ret; 4690 4690 4691 4691 /* Sanitize request */ 4692 4692 if (IS_VF(cdev)) ··· 4701 4699 4702 4700 vf_info = qed_iov_get_public_vf_info(hwfn, vf_id, true); 4703 4701 4704 - qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL); 4702 + ret = qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL); 4703 + if (ret) 4704 + return ret; 4705 4705 4706 4706 /* Fill information about VF */ 4707 4707 ivi->vf = vf_id;
+7
drivers/net/ethernet/qlogic/qed/qed_vf.c
··· 513 513 p_iov->bulletin.size, 514 514 &p_iov->bulletin.phys, 515 515 GFP_KERNEL); 516 + if (!p_iov->bulletin.p_virt) 517 + goto free_pf2vf_reply; 518 + 516 519 DP_VERBOSE(p_hwfn, QED_MSG_IOV, 517 520 "VF's bulletin Board [%p virt 0x%llx phys 0x%08x bytes]\n", 518 521 p_iov->bulletin.p_virt, ··· 555 552 556 553 return rc; 557 554 555 + free_pf2vf_reply: 556 + dma_free_coherent(&p_hwfn->cdev->pdev->dev, 557 + sizeof(union pfvf_tlvs), 558 + p_iov->pf2vf_reply, p_iov->pf2vf_reply_phys); 558 559 free_vf2pf_request: 559 560 dma_free_coherent(&p_hwfn->cdev->pdev->dev, 560 561 sizeof(union vfpf_tlvs),
+3 -3
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
··· 2285 2285 char *opt; 2286 2286 2287 2287 if (!str || !*str) 2288 - return -EINVAL; 2288 + return 1; 2289 2289 while ((opt = strsep(&str, ",")) != NULL) { 2290 2290 if (!strncmp(opt, "eee_timer:", 10)) { 2291 2291 if (kstrtoint(opt + 10, 0, &eee_timer)) 2292 2292 goto err; 2293 2293 } 2294 2294 } 2295 - return 0; 2295 + return 1; 2296 2296 2297 2297 err: 2298 2298 pr_err("%s: ERROR broken module parameter conversion\n", __func__); 2299 - return -EINVAL; 2299 + return 1; 2300 2300 } 2301 2301 2302 2302 __setup("sxgbeeth=", sxgbe_cmdline_opt);
+1 -1
drivers/net/ethernet/sfc/mcdi.c
··· 163 163 /* Serialise with efx_mcdi_ev_cpl() and efx_mcdi_ev_death() */ 164 164 spin_lock_bh(&mcdi->iface_lock); 165 165 ++mcdi->seqno; 166 + seqno = mcdi->seqno & SEQ_MASK; 166 167 spin_unlock_bh(&mcdi->iface_lock); 167 168 168 - seqno = mcdi->seqno & SEQ_MASK; 169 169 xflags = 0; 170 170 if (mcdi->mode == MCDI_MODE_EVENTS) 171 171 xflags |= MCDI_HEADER_XFLAGS_EVREQ;
+29 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2262 2262 stmmac_stop_tx(priv, priv->ioaddr, chan); 2263 2263 } 2264 2264 2265 + static void stmmac_enable_all_dma_irq(struct stmmac_priv *priv) 2266 + { 2267 + u32 rx_channels_count = priv->plat->rx_queues_to_use; 2268 + u32 tx_channels_count = priv->plat->tx_queues_to_use; 2269 + u32 dma_csr_ch = max(rx_channels_count, tx_channels_count); 2270 + u32 chan; 2271 + 2272 + for (chan = 0; chan < dma_csr_ch; chan++) { 2273 + struct stmmac_channel *ch = &priv->channel[chan]; 2274 + unsigned long flags; 2275 + 2276 + spin_lock_irqsave(&ch->lock, flags); 2277 + stmmac_enable_dma_irq(priv, priv->ioaddr, chan, 1, 1); 2278 + spin_unlock_irqrestore(&ch->lock, flags); 2279 + } 2280 + } 2281 + 2265 2282 /** 2266 2283 * stmmac_start_all_dma - start all RX and TX DMA channels 2267 2284 * @priv: driver private structure ··· 2921 2904 stmmac_axi(priv, priv->ioaddr, priv->plat->axi); 2922 2905 2923 2906 /* DMA CSR Channel configuration */ 2924 - for (chan = 0; chan < dma_csr_ch; chan++) 2907 + for (chan = 0; chan < dma_csr_ch; chan++) { 2925 2908 stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan); 2909 + stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 1); 2910 + } 2926 2911 2927 2912 /* DMA RX Channel Configuration */ 2928 2913 for (chan = 0; chan < rx_channels_count; chan++) { ··· 3780 3761 3781 3762 stmmac_enable_all_queues(priv); 3782 3763 netif_tx_start_all_queues(priv->dev); 3764 + stmmac_enable_all_dma_irq(priv); 3783 3765 3784 3766 return 0; 3785 3767 ··· 6530 6510 } 6531 6511 6532 6512 /* DMA CSR Channel configuration */ 6533 - for (chan = 0; chan < dma_csr_ch; chan++) 6513 + for (chan = 0; chan < dma_csr_ch; chan++) { 6534 6514 stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan); 6515 + stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 1); 6516 + } 6535 6517 6536 6518 /* Adjust Split header */ 6537 6519 sph_en = (priv->hw->rx_csum > 0) && priv->sph; ··· 6594 6572 stmmac_enable_all_queues(priv); 6595 6573 netif_carrier_on(dev); 6596 6574 
netif_tx_start_all_queues(dev); 6575 + stmmac_enable_all_dma_irq(priv); 6597 6576 6598 6577 return 0; 6599 6578 ··· 7474 7451 stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw); 7475 7452 7476 7453 stmmac_enable_all_queues(priv); 7454 + stmmac_enable_all_dma_irq(priv); 7477 7455 7478 7456 mutex_unlock(&priv->lock); 7479 7457 rtnl_unlock(); ··· 7491 7467 char *opt; 7492 7468 7493 7469 if (!str || !*str) 7494 - return -EINVAL; 7470 + return 1; 7495 7471 while ((opt = strsep(&str, ",")) != NULL) { 7496 7472 if (!strncmp(opt, "debug:", 6)) { 7497 7473 if (kstrtoint(opt + 6, 0, &debug)) ··· 7522 7498 goto err; 7523 7499 } 7524 7500 } 7525 - return 0; 7501 + return 1; 7526 7502 7527 7503 err: 7528 7504 pr_err("%s: ERROR broken module parameter conversion", __func__); 7529 - return -EINVAL; 7505 + return 1; 7530 7506 } 7531 7507 7532 7508 __setup("stmmaceth=", stmmac_cmdline_opt);
+5 -1
drivers/net/ethernet/sun/sunhme.c
··· 3146 3146 if (err) { 3147 3147 printk(KERN_ERR "happymeal(PCI): Cannot register net device, " 3148 3148 "aborting.\n"); 3149 - goto err_out_iounmap; 3149 + goto err_out_free_coherent; 3150 3150 } 3151 3151 3152 3152 pci_set_drvdata(pdev, hp); ··· 3178 3178 printk("%pM\n", dev->dev_addr); 3179 3179 3180 3180 return 0; 3181 + 3182 + err_out_free_coherent: 3183 + dma_free_coherent(hp->dma_dev, PAGE_SIZE, 3184 + hp->happy_block, hp->hblock_dvma); 3181 3185 3182 3186 err_out_iounmap: 3183 3187 iounmap(hp->gregs);
+3 -1
drivers/net/ethernet/ti/cpts.c
··· 568 568 for (i = 0; i < CPTS_MAX_EVENTS; i++) 569 569 list_add(&cpts->pool_data[i].list, &cpts->pool); 570 570 571 - clk_enable(cpts->refclk); 571 + err = clk_enable(cpts->refclk); 572 + if (err) 573 + return err; 572 574 573 575 cpts_write32(cpts, CPTS_EN, control); 574 576 cpts_write32(cpts, TS_PEND_EN, int_enable);
+3 -1
drivers/net/ethernet/xilinx/xilinx_emaclite.c
··· 1183 1183 if (rc) { 1184 1184 dev_err(dev, 1185 1185 "Cannot register network device, aborting\n"); 1186 - goto error; 1186 + goto put_node; 1187 1187 } 1188 1188 1189 1189 dev_info(dev, ··· 1191 1191 (unsigned long __force)ndev->mem_start, lp->base_addr, ndev->irq); 1192 1192 return 0; 1193 1193 1194 + put_node: 1195 + of_node_put(lp->phy_node); 1194 1196 error: 1195 1197 free_netdev(ndev); 1196 1198 return rc;
+2
drivers/net/ipa/Kconfig
··· 2 2 tristate "Qualcomm IPA support" 3 3 depends on NET && QCOM_SMEM 4 4 depends on ARCH_QCOM || COMPILE_TEST 5 + depends on INTERCONNECT 5 6 depends on QCOM_RPROC_COMMON || (QCOM_RPROC_COMMON=n && COMPILE_TEST) 7 + depends on QCOM_AOSS_QMP || QCOM_AOSS_QMP=n 6 8 select QCOM_MDT_LOADER if ARCH_QCOM 7 9 select QCOM_SCM 8 10 select QCOM_QMI_HELPERS
+1 -1
drivers/net/phy/dp83822.c
··· 274 274 if (err < 0) 275 275 return err; 276 276 277 - err = phy_write(phydev, MII_DP83822_MISR1, 0); 277 + err = phy_write(phydev, MII_DP83822_MISR2, 0); 278 278 if (err < 0) 279 279 return err; 280 280
+20 -11
drivers/net/phy/meson-gxl.c
··· 30 30 #define INTSRC_LINK_DOWN BIT(4) 31 31 #define INTSRC_REMOTE_FAULT BIT(5) 32 32 #define INTSRC_ANEG_COMPLETE BIT(6) 33 + #define INTSRC_ENERGY_DETECT BIT(7) 33 34 #define INTSRC_MASK 30 35 + 36 + #define INT_SOURCES (INTSRC_LINK_DOWN | INTSRC_ANEG_COMPLETE | \ 37 + INTSRC_ENERGY_DETECT) 34 38 35 39 #define BANK_ANALOG_DSP 0 36 40 #define BANK_WOL 1 ··· 204 200 205 201 static int meson_gxl_config_intr(struct phy_device *phydev) 206 202 { 207 - u16 val; 208 203 int ret; 209 204 210 205 if (phydev->interrupts == PHY_INTERRUPT_ENABLED) { ··· 212 209 if (ret) 213 210 return ret; 214 211 215 - val = INTSRC_ANEG_PR 216 - | INTSRC_PARALLEL_FAULT 217 - | INTSRC_ANEG_LP_ACK 218 - | INTSRC_LINK_DOWN 219 - | INTSRC_REMOTE_FAULT 220 - | INTSRC_ANEG_COMPLETE; 221 - ret = phy_write(phydev, INTSRC_MASK, val); 212 + ret = phy_write(phydev, INTSRC_MASK, INT_SOURCES); 222 213 } else { 223 - val = 0; 224 - ret = phy_write(phydev, INTSRC_MASK, val); 214 + ret = phy_write(phydev, INTSRC_MASK, 0); 225 215 226 216 /* Ack any pending IRQ */ 227 217 ret = meson_gxl_ack_interrupt(phydev); ··· 233 237 return IRQ_NONE; 234 238 } 235 239 240 + irq_status &= INT_SOURCES; 241 + 236 242 if (irq_status == 0) 237 243 return IRQ_NONE; 238 244 239 - phy_trigger_machine(phydev); 245 + /* Aneg-complete interrupt is used for link-up detection */ 246 + if (phydev->autoneg == AUTONEG_ENABLE && 247 + irq_status == INTSRC_ENERGY_DETECT) 248 + return IRQ_HANDLED; 249 + 250 + /* Give PHY some time before MAC starts sending data. This works 251 + * around an issue where network doesn't come up properly. 252 + */ 253 + if (!(irq_status & INTSRC_LINK_DOWN)) 254 + phy_queue_state_machine(phydev, msecs_to_jiffies(100)); 255 + else 256 + phy_trigger_machine(phydev); 240 257 241 258 return IRQ_HANDLED; 242 259 }
+20 -8
drivers/net/usb/smsc95xx.c
··· 84 84 ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN 85 85 | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 86 86 0, index, &buf, 4); 87 - if (unlikely(ret < 0)) { 88 - netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n", 89 - index, ret); 87 + if (ret < 0) { 88 + if (ret != -ENODEV) 89 + netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n", 90 + index, ret); 90 91 return ret; 91 92 } 92 93 ··· 117 116 ret = fn(dev, USB_VENDOR_REQUEST_WRITE_REGISTER, USB_DIR_OUT 118 117 | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 119 118 0, index, &buf, 4); 120 - if (unlikely(ret < 0)) 119 + if (ret < 0 && ret != -ENODEV) 121 120 netdev_warn(dev->net, "Failed to write reg index 0x%08x: %d\n", 122 121 index, ret); 123 122 ··· 160 159 do { 161 160 ret = __smsc95xx_read_reg(dev, MII_ADDR, &val, in_pm); 162 161 if (ret < 0) { 162 + /* Ignore -ENODEV error during disconnect() */ 163 + if (ret == -ENODEV) 164 + return 0; 163 165 netdev_warn(dev->net, "Error reading MII_ACCESS\n"); 164 166 return ret; 165 167 } ··· 198 194 addr = mii_address_cmd(phy_id, idx, MII_READ_ | MII_BUSY_); 199 195 ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm); 200 196 if (ret < 0) { 201 - netdev_warn(dev->net, "Error writing MII_ADDR\n"); 197 + if (ret != -ENODEV) 198 + netdev_warn(dev->net, "Error writing MII_ADDR\n"); 202 199 goto done; 203 200 } 204 201 ··· 211 206 212 207 ret = __smsc95xx_read_reg(dev, MII_DATA, &val, in_pm); 213 208 if (ret < 0) { 214 - netdev_warn(dev->net, "Error reading MII_DATA\n"); 209 + if (ret != -ENODEV) 210 + netdev_warn(dev->net, "Error reading MII_DATA\n"); 215 211 goto done; 216 212 } 217 213 ··· 220 214 221 215 done: 222 216 mutex_unlock(&dev->phy_mutex); 217 + 218 + /* Ignore -ENODEV error during disconnect() */ 219 + if (ret == -ENODEV) 220 + return 0; 223 221 return ret; 224 222 } 225 223 ··· 245 235 val = regval; 246 236 ret = __smsc95xx_write_reg(dev, MII_DATA, val, in_pm); 247 237 if (ret < 0) { 248 - netdev_warn(dev->net, "Error writing 
MII_DATA\n"); 238 + if (ret != -ENODEV) 239 + netdev_warn(dev->net, "Error writing MII_DATA\n"); 249 240 goto done; 250 241 } 251 242 ··· 254 243 addr = mii_address_cmd(phy_id, idx, MII_WRITE_ | MII_BUSY_); 255 244 ret = __smsc95xx_write_reg(dev, MII_ADDR, addr, in_pm); 256 245 if (ret < 0) { 257 - netdev_warn(dev->net, "Error writing MII_ADDR\n"); 246 + if (ret != -ENODEV) 247 + netdev_warn(dev->net, "Error writing MII_ADDR\n"); 258 248 goto done; 259 249 } 260 250
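The smsc95xx hunks above all implement one policy: a register access failing with -ENODEV during disconnect is neither logged nor propagated as an error, since the device is simply gone. A standalone sketch of that policy; `read_reg()` and `mdio_read()` are stand-ins for the real vendor-request and MDIO helpers, with the error injected via a parameter purely for illustration.

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Stand-in for __smsc95xx_read_reg(): returns 0 or a negative errno
 * chosen by the caller, so the policy below can be exercised. */
static int read_reg(int simulated_errno)
{
	return simulated_errno ? -simulated_errno : 0;
}

static int mdio_read(int simulated_errno)
{
	int ret = read_reg(simulated_errno);

	if (ret < 0) {
		/* Only warn about real failures, not a vanished device. */
		if (ret != -ENODEV)
			fprintf(stderr, "register read failed: %d\n", ret);

		/* Ignore -ENODEV during disconnect() */
		if (ret == -ENODEV)
			return 0;
	}
	return ret;
}
```

Treating -ENODEV as success keeps the MDIO layer from spamming warnings while the USB core is tearing the interface down.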
+1
drivers/net/wireless/intel/Makefile
··· 5 5 obj-$(CONFIG_IWLEGACY) += iwlegacy/ 6 6 7 7 obj-$(CONFIG_IWLWIFI) += iwlwifi/ 8 + obj-$(CONFIG_IWLMEI) += iwlwifi/
+1 -2
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 553 553 .has_he = true, 554 554 .he_cap_elem = { 555 555 .mac_cap_info[0] = 556 - IEEE80211_HE_MAC_CAP0_HTC_HE | 557 - IEEE80211_HE_MAC_CAP0_TWT_REQ, 556 + IEEE80211_HE_MAC_CAP0_HTC_HE, 558 557 .mac_cap_info[1] = 559 558 IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US | 560 559 IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8,
+8 -3
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
··· 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ 7 7 #include <linux/vmalloc.h> 8 + #include <linux/err.h> 8 9 #include <linux/ieee80211.h> 9 10 #include <linux/netdevice.h> 10 11 ··· 1858 1857 void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm) 1859 1858 { 1860 1859 struct dentry *bcast_dir __maybe_unused; 1861 - char buf[100]; 1862 1860 1863 1861 spin_lock_init(&mvm->drv_stats_lock); 1864 1862 ··· 1939 1939 * Create a symlink with mac80211. It will be removed when mac80211 1940 1940 * exists (before the opmode exists which removes the target.) 1941 1941 */ 1942 - snprintf(buf, 100, "../../%pd2", mvm->debugfs_dir->d_parent); 1943 - debugfs_create_symlink("iwlwifi", mvm->hw->wiphy->debugfsdir, buf); 1942 + if (!IS_ERR(mvm->debugfs_dir)) { 1943 + char buf[100]; 1944 + 1945 + snprintf(buf, 100, "../../%pd2", mvm->debugfs_dir->d_parent); 1946 + debugfs_create_symlink("iwlwifi", mvm->hw->wiphy->debugfsdir, 1947 + buf); 1948 + } 1944 1949 }
-1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 226 226 [0] = WLAN_EXT_CAPA1_EXT_CHANNEL_SWITCHING, 227 227 [2] = WLAN_EXT_CAPA3_MULTI_BSSID_SUPPORT, 228 228 [7] = WLAN_EXT_CAPA8_OPMODE_NOTIF, 229 - [9] = WLAN_EXT_CAPA10_TWT_REQUESTER_SUPPORT, 230 229 }; 231 230 232 231 static const struct wiphy_iftype_ext_capab he_iftypes_ext_capa[] = {
+3 -2
drivers/net/wireless/intel/iwlwifi/mvm/vendor-cmd.c
··· 71 71 { 72 72 struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy); 73 73 struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); 74 + int ret; 74 75 75 76 mutex_lock(&mvm->mutex); 76 - iwl_mvm_mei_get_ownership(mvm); 77 + ret = iwl_mvm_mei_get_ownership(mvm); 77 78 mutex_unlock(&mvm->mutex); 78 79 79 - return 0; 80 + return ret; 80 81 } 81 82 82 83 static const struct wiphy_vendor_command iwl_mvm_vendor_commands[] = {
+56 -37
drivers/net/xen-netfront.c
··· 424 424 queue->tx_link[id] = TX_LINK_NONE; 425 425 skb = queue->tx_skbs[id]; 426 426 queue->tx_skbs[id] = NULL; 427 - if (unlikely(gnttab_query_foreign_access( 428 - queue->grant_tx_ref[id]) != 0)) { 427 + if (unlikely(!gnttab_end_foreign_access_ref( 428 + queue->grant_tx_ref[id], GNTMAP_readonly))) { 429 429 dev_alert(dev, 430 430 "Grant still in use by backend domain\n"); 431 431 goto err; 432 432 } 433 - gnttab_end_foreign_access_ref( 434 - queue->grant_tx_ref[id], GNTMAP_readonly); 435 433 gnttab_release_grant_reference( 436 434 &queue->gref_tx_head, queue->grant_tx_ref[id]); 437 435 queue->grant_tx_ref[id] = GRANT_INVALID_REF; ··· 840 842 return 0; 841 843 } 842 844 845 + static void xennet_destroy_queues(struct netfront_info *info) 846 + { 847 + unsigned int i; 848 + 849 + for (i = 0; i < info->netdev->real_num_tx_queues; i++) { 850 + struct netfront_queue *queue = &info->queues[i]; 851 + 852 + if (netif_running(info->netdev)) 853 + napi_disable(&queue->napi); 854 + netif_napi_del(&queue->napi); 855 + } 856 + 857 + kfree(info->queues); 858 + info->queues = NULL; 859 + } 860 + 861 + static void xennet_uninit(struct net_device *dev) 862 + { 863 + struct netfront_info *np = netdev_priv(dev); 864 + xennet_destroy_queues(np); 865 + } 866 + 843 867 static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val) 844 868 { 845 869 unsigned long flags; ··· 988 968 struct device *dev = &queue->info->netdev->dev; 989 969 struct bpf_prog *xdp_prog; 990 970 struct xdp_buff xdp; 991 - unsigned long ret; 992 971 int slots = 1; 993 972 int err = 0; 994 973 u32 verdict; ··· 1029 1010 goto next; 1030 1011 } 1031 1012 1032 - ret = gnttab_end_foreign_access_ref(ref, 0); 1033 - BUG_ON(!ret); 1013 + if (!gnttab_end_foreign_access_ref(ref, 0)) { 1014 + dev_alert(dev, 1015 + "Grant still in use by backend domain\n"); 1016 + queue->info->broken = true; 1017 + dev_alert(dev, "Disabled for further use\n"); 1018 + return -EINVAL; 1019 + } 1034 1020 1035 1021 
gnttab_release_grant_reference(&queue->gref_rx_head, ref); 1036 1022 ··· 1256 1232 &need_xdp_flush); 1257 1233 1258 1234 if (unlikely(err)) { 1235 + if (queue->info->broken) { 1236 + spin_unlock(&queue->rx_lock); 1237 + return 0; 1238 + } 1259 1239 err: 1260 1240 while ((skb = __skb_dequeue(&tmpq))) 1261 1241 __skb_queue_tail(&errq, skb); ··· 1639 1611 } 1640 1612 1641 1613 static const struct net_device_ops xennet_netdev_ops = { 1614 + .ndo_uninit = xennet_uninit, 1642 1615 .ndo_open = xennet_open, 1643 1616 .ndo_stop = xennet_close, 1644 1617 .ndo_start_xmit = xennet_start_xmit, ··· 1924 1895 struct netfront_queue *queue, unsigned int feature_split_evtchn) 1925 1896 { 1926 1897 struct xen_netif_tx_sring *txs; 1927 - struct xen_netif_rx_sring *rxs; 1898 + struct xen_netif_rx_sring *rxs = NULL; 1928 1899 grant_ref_t gref; 1929 1900 int err; 1930 1901 ··· 1944 1915 1945 1916 err = xenbus_grant_ring(dev, txs, 1, &gref); 1946 1917 if (err < 0) 1947 - goto grant_tx_ring_fail; 1918 + goto fail; 1948 1919 queue->tx_ring_ref = gref; 1949 1920 1950 1921 rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH); 1951 1922 if (!rxs) { 1952 1923 err = -ENOMEM; 1953 1924 xenbus_dev_fatal(dev, err, "allocating rx ring page"); 1954 - goto alloc_rx_ring_fail; 1925 + goto fail; 1955 1926 } 1956 1927 SHARED_RING_INIT(rxs); 1957 1928 FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE); 1958 1929 1959 1930 err = xenbus_grant_ring(dev, rxs, 1, &gref); 1960 1931 if (err < 0) 1961 - goto grant_rx_ring_fail; 1932 + goto fail; 1962 1933 queue->rx_ring_ref = gref; 1963 1934 1964 1935 if (feature_split_evtchn) ··· 1971 1942 err = setup_netfront_single(queue); 1972 1943 1973 1944 if (err) 1974 - goto alloc_evtchn_fail; 1945 + goto fail; 1975 1946 1976 1947 return 0; 1977 1948 1978 1949 /* If we fail to setup netfront, it is safe to just revoke access to 1979 1950 * granted pages because backend is not accessing it at this point. 
1980 1951 */ 1981 - alloc_evtchn_fail: 1982 - gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0); 1983 - grant_rx_ring_fail: 1984 - free_page((unsigned long)rxs); 1985 - alloc_rx_ring_fail: 1986 - gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0); 1987 - grant_tx_ring_fail: 1988 - free_page((unsigned long)txs); 1989 - fail: 1952 + fail: 1953 + if (queue->rx_ring_ref != GRANT_INVALID_REF) { 1954 + gnttab_end_foreign_access(queue->rx_ring_ref, 0, 1955 + (unsigned long)rxs); 1956 + queue->rx_ring_ref = GRANT_INVALID_REF; 1957 + } else { 1958 + free_page((unsigned long)rxs); 1959 + } 1960 + if (queue->tx_ring_ref != GRANT_INVALID_REF) { 1961 + gnttab_end_foreign_access(queue->tx_ring_ref, 0, 1962 + (unsigned long)txs); 1963 + queue->tx_ring_ref = GRANT_INVALID_REF; 1964 + } else { 1965 + free_page((unsigned long)txs); 1966 + } 1990 1967 return err; 1991 1968 } 1992 1969 ··· 2136 2101 kfree(path); 2137 2102 xenbus_dev_fatal(dev, err, "%s", message); 2138 2103 return err; 2139 - } 2140 - 2141 - static void xennet_destroy_queues(struct netfront_info *info) 2142 - { 2143 - unsigned int i; 2144 - 2145 - for (i = 0; i < info->netdev->real_num_tx_queues; i++) { 2146 - struct netfront_queue *queue = &info->queues[i]; 2147 - 2148 - if (netif_running(info->netdev)) 2149 - napi_disable(&queue->napi); 2150 - netif_napi_del(&queue->napi); 2151 - } 2152 - 2153 - kfree(info->queues); 2154 - info->queues = NULL; 2155 2104 } 2156 2105 2157 2106
+2
drivers/nfc/port100.c
··· 1612 1612 nfc_digital_free_device(dev->nfc_digital_dev); 1613 1613 1614 1614 error: 1615 + usb_kill_urb(dev->in_urb); 1615 1616 usb_free_urb(dev->in_urb); 1617 + usb_kill_urb(dev->out_urb); 1616 1618 usb_free_urb(dev->out_urb); 1617 1619 usb_put_dev(dev->udev); 1618 1620
+16 -1
drivers/ntb/hw/intel/ntb_hw_gen4.c
··· 168 168 return NTB_TOPO_NONE; 169 169 } 170 170 171 + static enum ntb_topo spr_ppd_topo(struct intel_ntb_dev *ndev, u32 ppd) 172 + { 173 + switch (ppd & SPR_PPD_TOPO_MASK) { 174 + case SPR_PPD_TOPO_B2B_USD: 175 + return NTB_TOPO_B2B_USD; 176 + case SPR_PPD_TOPO_B2B_DSD: 177 + return NTB_TOPO_B2B_DSD; 178 + } 179 + 180 + return NTB_TOPO_NONE; 181 + } 182 + 171 183 int gen4_init_dev(struct intel_ntb_dev *ndev) 172 184 { 173 185 struct pci_dev *pdev = ndev->ntb.pdev; ··· 195 183 } 196 184 197 185 ppd1 = ioread32(ndev->self_mmio + GEN4_PPD1_OFFSET); 198 - ndev->ntb.topo = gen4_ppd_topo(ndev, ppd1); 186 + if (pdev_is_ICX(pdev)) 187 + ndev->ntb.topo = gen4_ppd_topo(ndev, ppd1); 188 + else if (pdev_is_SPR(pdev)) 189 + ndev->ntb.topo = spr_ppd_topo(ndev, ppd1); 199 190 dev_dbg(&pdev->dev, "ppd %#x topo %s\n", ppd1, 200 191 ntb_topo_string(ndev->ntb.topo)); 201 192 if (ndev->ntb.topo == NTB_TOPO_NONE)
+16
drivers/ntb/hw/intel/ntb_hw_gen4.h
··· 49 49 #define GEN4_PPD_CLEAR_TRN 0x0001 50 50 #define GEN4_PPD_LINKTRN 0x0008 51 51 #define GEN4_PPD_CONN_MASK 0x0300 52 + #define SPR_PPD_CONN_MASK 0x0700 52 53 #define GEN4_PPD_CONN_B2B 0x0200 53 54 #define GEN4_PPD_DEV_MASK 0x1000 54 55 #define GEN4_PPD_DEV_DSD 0x1000 55 56 #define GEN4_PPD_DEV_USD 0x0000 57 + #define SPR_PPD_DEV_MASK 0x4000 58 + #define SPR_PPD_DEV_DSD 0x4000 59 + #define SPR_PPD_DEV_USD 0x0000 56 60 #define GEN4_LINK_CTRL_LINK_DISABLE 0x0010 57 61 58 62 #define GEN4_SLOTSTS 0xb05a ··· 65 61 #define GEN4_PPD_TOPO_MASK (GEN4_PPD_CONN_MASK | GEN4_PPD_DEV_MASK) 66 62 #define GEN4_PPD_TOPO_B2B_USD (GEN4_PPD_CONN_B2B | GEN4_PPD_DEV_USD) 67 63 #define GEN4_PPD_TOPO_B2B_DSD (GEN4_PPD_CONN_B2B | GEN4_PPD_DEV_DSD) 64 + 65 + #define SPR_PPD_TOPO_MASK (SPR_PPD_CONN_MASK | SPR_PPD_DEV_MASK) 66 + #define SPR_PPD_TOPO_B2B_USD (GEN4_PPD_CONN_B2B | SPR_PPD_DEV_USD) 67 + #define SPR_PPD_TOPO_B2B_DSD (GEN4_PPD_CONN_B2B | SPR_PPD_DEV_DSD) 68 68 69 69 #define GEN4_DB_COUNT 32 70 70 #define GEN4_DB_LINK 32 ··· 116 108 if (pdev_is_gen4(pdev) && 117 109 pdev->revision >= PCI_DEVICE_REVISION_ICX_MIN && 118 110 pdev->revision <= PCI_DEVICE_REVISION_ICX_MAX) 111 + return 1; 112 + return 0; 113 + } 114 + 115 + static inline int pdev_is_SPR(struct pci_dev *pdev) 116 + { 117 + if (pdev_is_gen4(pdev) && 118 + pdev->revision > PCI_DEVICE_REVISION_ICX_MAX) 119 119 return 1; 120 120 return 0; 121 121 }
+2 -4
drivers/ntb/msi.c
··· 33 33 { 34 34 phys_addr_t mw_phys_addr; 35 35 resource_size_t mw_size; 36 - size_t struct_size; 37 36 int peer_widx; 38 37 int peers; 39 38 int ret; ··· 42 43 if (peers <= 0) 43 44 return -EINVAL; 44 45 45 - struct_size = sizeof(*ntb->msi) + sizeof(*ntb->msi->peer_mws) * peers; 46 - 47 - ntb->msi = devm_kzalloc(&ntb->dev, struct_size, GFP_KERNEL); 46 + ntb->msi = devm_kzalloc(&ntb->dev, struct_size(ntb->msi, peer_mws, peers), 47 + GFP_KERNEL); 48 48 if (!ntb->msi) 49 49 return -ENOMEM; 50 50
+13 -2
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 36 36 #include "../core.h" 37 37 #include "pinctrl-sunxi.h" 38 38 39 + /* 40 + * These lock classes tell lockdep that GPIO IRQs are in a different 41 + * category than their parents, so it won't report false recursion. 42 + */ 43 + static struct lock_class_key sunxi_pinctrl_irq_lock_class; 44 + static struct lock_class_key sunxi_pinctrl_irq_request_class; 45 + 39 46 static struct irq_chip sunxi_pinctrl_edge_irq_chip; 40 47 static struct irq_chip sunxi_pinctrl_level_irq_chip; 41 48 ··· 844 837 { 845 838 struct sunxi_pinctrl *pctl = gpiochip_get_data(chip); 846 839 847 - return sunxi_pmx_gpio_set_direction(pctl->pctl_dev, NULL, offset, true); 840 + return sunxi_pmx_gpio_set_direction(pctl->pctl_dev, NULL, 841 + chip->base + offset, true); 848 842 } 849 843 850 844 static int sunxi_pinctrl_gpio_get(struct gpio_chip *chip, unsigned offset) ··· 898 890 struct sunxi_pinctrl *pctl = gpiochip_get_data(chip); 899 891 900 892 sunxi_pinctrl_gpio_set(chip, offset, value); 901 - return sunxi_pmx_gpio_set_direction(pctl->pctl_dev, NULL, offset, false); 893 + return sunxi_pmx_gpio_set_direction(pctl->pctl_dev, NULL, 894 + chip->base + offset, false); 902 895 } 903 896 904 897 static int sunxi_pinctrl_gpio_of_xlate(struct gpio_chip *gc, ··· 1564 1555 for (i = 0; i < (pctl->desc->irq_banks * IRQ_PER_BANK); i++) { 1565 1556 int irqno = irq_create_mapping(pctl->domain, i); 1566 1557 1558 + irq_set_lockdep_class(irqno, &sunxi_pinctrl_irq_lock_class, 1559 + &sunxi_pinctrl_irq_request_class); 1567 1560 irq_set_chip_and_handler(irqno, &sunxi_pinctrl_edge_irq_chip, 1568 1561 handle_edge_irq); 1569 1562 irq_set_chip_data(irqno, pctl);
+23 -2
drivers/ptp/ptp_ocp.c
··· 607 607 } 608 608 609 609 static void 610 - __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u64 adj_val) 610 + __ptp_ocp_adjtime_locked(struct ptp_ocp *bp, u32 adj_val) 611 611 { 612 612 u32 select, ctrl; 613 613 ··· 615 615 iowrite32(OCP_SELECT_CLK_REG, &bp->reg->select); 616 616 617 617 iowrite32(adj_val, &bp->reg->offset_ns); 618 - iowrite32(adj_val & 0x7f, &bp->reg->offset_window_ns); 618 + iowrite32(NSEC_PER_SEC, &bp->reg->offset_window_ns); 619 619 620 620 ctrl = OCP_CTRL_ADJUST_OFFSET | OCP_CTRL_ENABLE; 621 621 iowrite32(ctrl, &bp->reg->ctrl); ··· 624 624 iowrite32(select >> 16, &bp->reg->select); 625 625 } 626 626 627 + static void 628 + ptp_ocp_adjtime_coarse(struct ptp_ocp *bp, u64 delta_ns) 629 + { 630 + struct timespec64 ts; 631 + unsigned long flags; 632 + int err; 633 + 634 + spin_lock_irqsave(&bp->lock, flags); 635 + err = __ptp_ocp_gettime_locked(bp, &ts, NULL); 636 + if (likely(!err)) { 637 + timespec64_add_ns(&ts, delta_ns); 638 + __ptp_ocp_settime_locked(bp, &ts); 639 + } 640 + spin_unlock_irqrestore(&bp->lock, flags); 641 + } 642 + 627 643 static int 628 644 ptp_ocp_adjtime(struct ptp_clock_info *ptp_info, s64 delta_ns) 629 645 { 630 646 struct ptp_ocp *bp = container_of(ptp_info, struct ptp_ocp, ptp_info); 631 647 unsigned long flags; 632 648 u32 adj_ns, sign; 649 + 650 + if (delta_ns > NSEC_PER_SEC || -delta_ns > NSEC_PER_SEC) { 651 + ptp_ocp_adjtime_coarse(bp, delta_ns); 652 + return 0; 653 + } 633 654 634 655 sign = delta_ns < 0 ? BIT(31) : 0; 635 656 adj_ns = sign ? -delta_ns : delta_ns;
+1 -2
drivers/scsi/xen-scsifront.c
··· 233 233 return; 234 234 235 235 for (i = 0; i < shadow->nr_grants; i++) { 236 - if (unlikely(gnttab_query_foreign_access(shadow->gref[i]))) { 236 + if (unlikely(!gnttab_try_end_foreign_access(shadow->gref[i]))) { 237 237 shost_printk(KERN_ALERT, info->host, KBUILD_MODNAME 238 238 "grant still in use by backend\n"); 239 239 BUG(); 240 240 } 241 - gnttab_end_foreign_access(shadow->gref[i], 0, 0UL); 242 241 } 243 242 244 243 kfree(shadow->sg);
+9 -5
drivers/soc/fsl/guts.c
··· 28 28 static struct guts *guts; 29 29 static struct soc_device_attribute soc_dev_attr; 30 30 static struct soc_device *soc_dev; 31 - static struct device_node *root; 32 31 33 32 34 33 /* SoC die attribute definition for QorIQ platform */ ··· 137 138 138 139 static int fsl_guts_probe(struct platform_device *pdev) 139 140 { 140 - struct device_node *np = pdev->dev.of_node; 141 + struct device_node *root, *np = pdev->dev.of_node; 141 142 struct device *dev = &pdev->dev; 142 143 const struct fsl_soc_die_attr *soc_die; 143 144 const char *machine; ··· 158 159 root = of_find_node_by_path("/"); 159 160 if (of_property_read_string(root, "model", &machine)) 160 161 of_property_read_string_index(root, "compatible", 0, &machine); 161 - if (machine) 162 - soc_dev_attr.machine = machine; 162 + if (machine) { 163 + soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL); 164 + if (!soc_dev_attr.machine) { 165 + of_node_put(root); 166 + return -ENOMEM; 167 + } 168 + } 169 + of_node_put(root); 163 170 164 171 svr = fsl_guts_get_svr(); 165 172 soc_die = fsl_soc_die_match(svr, fsl_soc_die); ··· 200 195 static int fsl_guts_remove(struct platform_device *dev) 201 196 { 202 197 soc_device_unregister(soc_dev); 203 - of_node_put(root); 204 198 return 0; 205 199 } 206 200
+2 -2
drivers/soc/fsl/qe/qe.c
··· 147 147 * memory mapped space. 148 148 * The BRG clock is the QE clock divided by 2. 149 149 * It was set up long ago during the initial boot phase and is 150 - * is given to us. 150 + * given to us. 151 151 * Baud rate clocks are zero-based in the driver code (as that maps 152 152 * to port numbers). Documentation uses 1-based numbering. 153 153 */ ··· 421 421 422 422 for (i = 0; i < be32_to_cpu(ucode->count); i++) 423 423 iowrite32be(be32_to_cpu(code[i]), &qe_immr->iram.idata); 424 - 424 + 425 425 /* Set I-RAM Ready Register */ 426 426 iowrite32be(QE_IRAM_READY, &qe_immr->iram.iready); 427 427 }
+2
drivers/soc/fsl/qe/qe_io.c
··· 35 35 if (ret) 36 36 return ret; 37 37 par_io = ioremap(res.start, resource_size(&res)); 38 + if (!par_io) 39 + return -ENOMEM; 38 40 39 41 if (!of_property_read_u32(np, "num-ports", &num_ports)) 40 42 num_par_io_ports = num_ports;
+2 -1
drivers/soc/imx/gpcv2.c
··· 382 382 return 0; 383 383 384 384 out_clk_disable: 385 - clk_bulk_disable_unprepare(domain->num_clks, domain->clks); 385 + if (!domain->keep_clocks) 386 + clk_bulk_disable_unprepare(domain->num_clks, domain->clks); 386 387 387 388 return ret; 388 389 }
+2 -1
drivers/soc/mediatek/mt8192-mmsys.h
··· 53 53 MT8192_AAL0_SEL_IN_CCORR0 54 54 }, { 55 55 DDP_COMPONENT_DITHER, DDP_COMPONENT_DSI0, 56 - MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0 56 + MT8192_DISP_DSI0_SEL_IN, MT8192_DSI0_SEL_IN_DITHER0, 57 + MT8192_DSI0_SEL_IN_DITHER0 57 58 }, { 58 59 DDP_COMPONENT_RDMA0, DDP_COMPONENT_COLOR0, 59 60 MT8192_DISP_RDMA0_SOUT_SEL, MT8192_RDMA0_SOUT_COLOR0,
+1 -1
drivers/soc/samsung/exynos-chipid.c
··· 204 204 205 205 MODULE_DESCRIPTION("Samsung Exynos ChipID controller and ASV driver"); 206 206 MODULE_AUTHOR("Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>"); 207 - MODULE_AUTHOR("Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>"); 207 + MODULE_AUTHOR("Krzysztof Kozlowski <krzk@kernel.org>"); 208 208 MODULE_AUTHOR("Pankaj Dubey <pankaj.dubey@samsung.com>"); 209 209 MODULE_AUTHOR("Sylwester Nawrocki <s.nawrocki@samsung.com>"); 210 210 MODULE_LICENSE("GPL");
+2 -2
drivers/spi/spi.c
··· 1019 1019 int i, ret; 1020 1020 1021 1021 if (vmalloced_buf || kmap_buf) { 1022 - desc_len = min_t(int, max_seg_size, PAGE_SIZE); 1022 + desc_len = min_t(unsigned int, max_seg_size, PAGE_SIZE); 1023 1023 sgs = DIV_ROUND_UP(len + offset_in_page(buf), desc_len); 1024 1024 } else if (virt_addr_valid(buf)) { 1025 - desc_len = min_t(int, max_seg_size, ctlr->max_dma_len); 1025 + desc_len = min_t(unsigned int, max_seg_size, ctlr->max_dma_len); 1026 1026 sgs = DIV_ROUND_UP(len, desc_len); 1027 1027 } else { 1028 1028 return -EINVAL;
+3 -2
drivers/staging/gdm724x/gdm_lte.c
··· 76 76 77 77 static int gdm_lte_rx(struct sk_buff *skb, struct nic *nic, int nic_type) 78 78 { 79 - int ret; 79 + int ret, len; 80 80 81 + len = skb->len + ETH_HLEN; 81 82 ret = netif_rx_ni(skb); 82 83 if (ret == NET_RX_DROP) { 83 84 nic->stats.rx_dropped++; 84 85 } else { 85 86 nic->stats.rx_packets++; 86 - nic->stats.rx_bytes += skb->len + ETH_HLEN; 87 + nic->stats.rx_bytes += len; 87 88 } 88 89 89 90 return 0;
+5 -2
drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
··· 5907 5907 struct sta_info *psta_bmc; 5908 5908 struct list_head *xmitframe_plist, *xmitframe_phead, *tmp; 5909 5909 struct xmit_frame *pxmitframe = NULL; 5910 + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; 5910 5911 struct sta_priv *pstapriv = &padapter->stapriv; 5911 5912 5912 5913 /* for BC/MC Frames */ ··· 5918 5917 if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) { 5919 5918 msleep(10);/* 10ms, ATIM(HIQ) Windows */ 5920 5919 5921 - spin_lock_bh(&psta_bmc->sleep_q.lock); 5920 + /* spin_lock_bh(&psta_bmc->sleep_q.lock); */ 5921 + spin_lock_bh(&pxmitpriv->lock); 5922 5922 5923 5923 xmitframe_phead = get_list_head(&psta_bmc->sleep_q); 5924 5924 list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) { ··· 5942 5940 rtw_hal_xmitframe_enqueue(padapter, pxmitframe); 5943 5941 } 5944 5942 5945 - spin_unlock_bh(&psta_bmc->sleep_q.lock); 5943 + /* spin_unlock_bh(&psta_bmc->sleep_q.lock); */ 5944 + spin_unlock_bh(&pxmitpriv->lock); 5946 5945 5947 5946 /* check hi queue and bmc_sleepq */ 5948 5947 rtw_chk_hi_queue_cmd(padapter);
+7 -3
drivers/staging/rtl8723bs/core/rtw_recv.c
··· 957 957 if ((psta->state&WIFI_SLEEP_STATE) && (pstapriv->sta_dz_bitmap&BIT(psta->aid))) { 958 958 struct list_head *xmitframe_plist, *xmitframe_phead; 959 959 struct xmit_frame *pxmitframe = NULL; 960 + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; 960 961 961 - spin_lock_bh(&psta->sleep_q.lock); 962 + /* spin_lock_bh(&psta->sleep_q.lock); */ 963 + spin_lock_bh(&pxmitpriv->lock); 962 964 963 965 xmitframe_phead = get_list_head(&psta->sleep_q); 964 966 xmitframe_plist = get_next(xmitframe_phead); ··· 991 989 update_beacon(padapter, WLAN_EID_TIM, NULL, true); 992 990 } 993 991 994 - spin_unlock_bh(&psta->sleep_q.lock); 992 + /* spin_unlock_bh(&psta->sleep_q.lock); */ 993 + spin_unlock_bh(&pxmitpriv->lock); 995 994 996 995 } else { 997 - spin_unlock_bh(&psta->sleep_q.lock); 996 + /* spin_unlock_bh(&psta->sleep_q.lock); */ 997 + spin_unlock_bh(&pxmitpriv->lock); 998 998 999 999 if (pstapriv->tim_bitmap&BIT(psta->aid)) { 1000 1000 if (psta->sleepq_len == 0) {
+11 -13
drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
··· 293 293 294 294 /* list_del_init(&psta->wakeup_list); */ 295 295 296 - spin_lock_bh(&psta->sleep_q.lock); 297 - rtw_free_xmitframe_queue(pxmitpriv, &psta->sleep_q); 298 - psta->sleepq_len = 0; 299 - spin_unlock_bh(&psta->sleep_q.lock); 300 - 301 296 spin_lock_bh(&pxmitpriv->lock); 302 297 298 + rtw_free_xmitframe_queue(pxmitpriv, &psta->sleep_q); 299 + psta->sleepq_len = 0; 300 + 303 301 /* vo */ 304 - spin_lock_bh(&pstaxmitpriv->vo_q.sta_pending.lock); 302 + /* spin_lock_bh(&(pxmitpriv->vo_pending.lock)); */ 305 303 rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vo_q.sta_pending); 306 304 list_del_init(&(pstaxmitpriv->vo_q.tx_pending)); 307 305 phwxmit = pxmitpriv->hwxmits; 308 306 phwxmit->accnt -= pstaxmitpriv->vo_q.qcnt; 309 307 pstaxmitpriv->vo_q.qcnt = 0; 310 - spin_unlock_bh(&pstaxmitpriv->vo_q.sta_pending.lock); 308 + /* spin_unlock_bh(&(pxmitpriv->vo_pending.lock)); */ 311 309 312 310 /* vi */ 313 - spin_lock_bh(&pstaxmitpriv->vi_q.sta_pending.lock); 311 + /* spin_lock_bh(&(pxmitpriv->vi_pending.lock)); */ 314 312 rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vi_q.sta_pending); 315 313 list_del_init(&(pstaxmitpriv->vi_q.tx_pending)); 316 314 phwxmit = pxmitpriv->hwxmits+1; 317 315 phwxmit->accnt -= pstaxmitpriv->vi_q.qcnt; 318 316 pstaxmitpriv->vi_q.qcnt = 0; 319 - spin_unlock_bh(&pstaxmitpriv->vi_q.sta_pending.lock); 317 + /* spin_unlock_bh(&(pxmitpriv->vi_pending.lock)); */ 320 318 321 319 /* be */ 322 - spin_lock_bh(&pstaxmitpriv->be_q.sta_pending.lock); 320 + /* spin_lock_bh(&(pxmitpriv->be_pending.lock)); */ 323 321 rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->be_q.sta_pending); 324 322 list_del_init(&(pstaxmitpriv->be_q.tx_pending)); 325 323 phwxmit = pxmitpriv->hwxmits+2; 326 324 phwxmit->accnt -= pstaxmitpriv->be_q.qcnt; 327 325 pstaxmitpriv->be_q.qcnt = 0; 328 - spin_unlock_bh(&pstaxmitpriv->be_q.sta_pending.lock); 326 + /* spin_unlock_bh(&(pxmitpriv->be_pending.lock)); */ 329 327 330 328 /* bk */ 331 - 
spin_lock_bh(&pstaxmitpriv->bk_q.sta_pending.lock); 329 + /* spin_lock_bh(&(pxmitpriv->bk_pending.lock)); */ 332 330 rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->bk_q.sta_pending); 333 331 list_del_init(&(pstaxmitpriv->bk_q.tx_pending)); 334 332 phwxmit = pxmitpriv->hwxmits+3; 335 333 phwxmit->accnt -= pstaxmitpriv->bk_q.qcnt; 336 334 pstaxmitpriv->bk_q.qcnt = 0; 337 - spin_unlock_bh(&pstaxmitpriv->bk_q.sta_pending.lock); 335 + /* spin_unlock_bh(&(pxmitpriv->bk_pending.lock)); */ 338 336 339 337 spin_unlock_bh(&pxmitpriv->lock); 340 338
+9 -7
drivers/staging/rtl8723bs/core/rtw_xmit.c
··· 1734 1734 struct list_head *plist, *phead, *tmp; 1735 1735 struct xmit_frame *pxmitframe; 1736 1736 1737 + spin_lock_bh(&pframequeue->lock); 1738 + 1737 1739 phead = get_list_head(pframequeue); 1738 1740 list_for_each_safe(plist, tmp, phead) { 1739 1741 pxmitframe = list_entry(plist, struct xmit_frame, list); 1740 1742 1741 1743 rtw_free_xmitframe(pxmitpriv, pxmitframe); 1742 1744 } 1745 + spin_unlock_bh(&pframequeue->lock); 1743 1746 } 1744 1747 1745 1748 s32 rtw_xmitframe_enqueue(struct adapter *padapter, struct xmit_frame *pxmitframe) ··· 1797 1794 struct sta_info *psta; 1798 1795 struct tx_servq *ptxservq; 1799 1796 struct pkt_attrib *pattrib = &pxmitframe->attrib; 1800 - struct xmit_priv *xmit_priv = &padapter->xmitpriv; 1801 1797 struct hw_xmit *phwxmits = padapter->xmitpriv.hwxmits; 1802 1798 signed int res = _SUCCESS; 1803 1799 ··· 1814 1812 1815 1813 ptxservq = rtw_get_sta_pending(padapter, psta, pattrib->priority, (u8 *)(&ac_index)); 1816 1814 1817 - spin_lock_bh(&xmit_priv->lock); 1818 1815 if (list_empty(&ptxservq->tx_pending)) 1819 1816 list_add_tail(&ptxservq->tx_pending, get_list_head(phwxmits[ac_index].sta_queue)); 1820 1817 1821 1818 list_add_tail(&pxmitframe->list, get_list_head(&ptxservq->sta_pending)); 1822 1819 ptxservq->qcnt++; 1823 1820 phwxmits[ac_index].accnt++; 1824 - spin_unlock_bh(&xmit_priv->lock); 1825 1821 1826 1822 exit: 1827 1823 ··· 2202 2202 struct list_head *xmitframe_plist, *xmitframe_phead, *tmp; 2203 2203 struct xmit_frame *pxmitframe = NULL; 2204 2204 struct sta_priv *pstapriv = &padapter->stapriv; 2205 + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; 2205 2206 2206 2207 psta_bmc = rtw_get_bcmc_stainfo(padapter); 2207 2208 2208 - spin_lock_bh(&psta->sleep_q.lock); 2209 + spin_lock_bh(&pxmitpriv->lock); 2209 2210 2210 2211 xmitframe_phead = get_list_head(&psta->sleep_q); 2211 2212 list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) { ··· 2307 2306 2308 2307 _exit: 2309 2308 2310 - 
spin_unlock_bh(&psta->sleep_q.lock); 2309 + spin_unlock_bh(&pxmitpriv->lock); 2311 2310 2312 2311 if (update_mask) 2313 2312 update_beacon(padapter, WLAN_EID_TIM, NULL, true); ··· 2319 2318 struct list_head *xmitframe_plist, *xmitframe_phead, *tmp; 2320 2319 struct xmit_frame *pxmitframe = NULL; 2321 2320 struct sta_priv *pstapriv = &padapter->stapriv; 2321 + struct xmit_priv *pxmitpriv = &padapter->xmitpriv; 2322 2322 2323 - spin_lock_bh(&psta->sleep_q.lock); 2323 + spin_lock_bh(&pxmitpriv->lock); 2324 2324 2325 2325 xmitframe_phead = get_list_head(&psta->sleep_q); 2326 2326 list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) { ··· 2374 2372 } 2375 2373 } 2376 2374 2377 - spin_unlock_bh(&psta->sleep_q.lock); 2375 + spin_unlock_bh(&pxmitpriv->lock); 2378 2376 } 2379 2377 2380 2378 void enqueue_pending_xmitbuf(struct xmit_priv *pxmitpriv, struct xmit_buf *pxmitbuf)
+2
drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
··· 502 502 rtw_issue_addbareq_cmd(padapter, pxmitframe); 503 503 } 504 504 505 + spin_lock_bh(&pxmitpriv->lock); 505 506 err = rtw_xmitframe_enqueue(padapter, pxmitframe); 507 + spin_unlock_bh(&pxmitpriv->lock); 506 508 if (err != _SUCCESS) { 507 509 rtw_free_xmitframe(pxmitpriv, pxmitframe); 508 510
+6 -2
drivers/staging/rtl8723bs/include/rtw_mlme.h
··· 102 102 since mlme_priv is a shared resource between many threads, 103 103 like ISR/Call-Back functions, the OID handlers, and even timer functions. 104 104 105 - 106 105 Each struct __queue has its own locks, already. 107 - Other items are protected by mlme_priv.lock. 106 + Other items in mlme_priv are protected by mlme_priv.lock, while items in 107 + xmit_priv are protected by xmit_priv.lock. 108 108 109 109 To avoid possible dead lock, any thread trying to modifiying mlme_priv 110 110 SHALL not lock up more than one locks at a time! 111 111 112 + The only exception is that queue functions which take the __queue.lock 113 + may be called with the xmit_priv.lock held. In this case the order 114 + MUST always be first lock xmit_priv.lock and then call any queue functions 115 + which take __queue.lock. 112 116 */ 113 117 114 118
+3 -1
drivers/tee/optee/ffa_abi.c
··· 869 869 optee_supp_init(&optee->supp); 870 870 ffa_dev_set_drvdata(ffa_dev, optee); 871 871 ctx = teedev_open(optee->teedev); 872 - if (IS_ERR(ctx)) 872 + if (IS_ERR(ctx)) { 873 + rc = PTR_ERR(ctx); 873 874 goto err_rhashtable_free; 875 + } 874 876 optee->ctx = ctx; 875 877 rc = optee_notif_init(optee, OPTEE_DEFAULT_MAX_NOTIF_VALUE); 876 878 if (rc)
+3 -1
drivers/tee/optee/smc_abi.c
··· 1417 1417 1418 1418 platform_set_drvdata(pdev, optee); 1419 1419 ctx = teedev_open(optee->teedev); 1420 - if (IS_ERR(ctx)) 1420 + if (IS_ERR(ctx)) { 1421 + rc = PTR_ERR(ctx); 1421 1422 goto err_supp_uninit; 1423 + } 1422 1424 optee->ctx = ctx; 1423 1425 rc = optee_notif_init(optee, max_notif_value); 1424 1426 if (rc)
+3 -2
drivers/thermal/thermal_netlink.c
··· 419 419 for (i = 0; i < tz->trips; i++) { 420 420 421 421 enum thermal_trip_type type; 422 - int temp, hyst; 422 + int temp, hyst = 0; 423 423 424 424 tz->ops->get_trip_type(tz, i, &type); 425 425 tz->ops->get_trip_temp(tz, i, &temp); 426 - tz->ops->get_trip_hyst(tz, i, &hyst); 426 + if (tz->ops->get_trip_hyst) 427 + tz->ops->get_trip_hyst(tz, i, &hyst); 427 428 428 429 if (nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_TRIP_ID, i) || 429 430 nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_TRIP_TYPE, type) ||
+18 -8
drivers/usb/host/xen-hcd.c
··· 716 716 return 0; 717 717 } 718 718 719 - static void xenhcd_gnttab_done(struct usb_shadow *shadow) 719 + static void xenhcd_gnttab_done(struct xenhcd_info *info, unsigned int id) 720 720 { 721 + struct usb_shadow *shadow = info->shadow + id; 721 722 int nr_segs = 0; 722 723 int i; 723 724 ··· 727 726 if (xenusb_pipeisoc(shadow->req.pipe)) 728 727 nr_segs += shadow->req.u.isoc.nr_frame_desc_segs; 729 728 730 - for (i = 0; i < nr_segs; i++) 731 - gnttab_end_foreign_access(shadow->req.seg[i].gref, 0, 0UL); 729 + for (i = 0; i < nr_segs; i++) { 730 + if (!gnttab_try_end_foreign_access(shadow->req.seg[i].gref)) 731 + xenhcd_set_error(info, "backend didn't release grant"); 732 + } 732 733 733 734 shadow->req.nr_buffer_segs = 0; 734 735 shadow->req.u.isoc.nr_frame_desc_segs = 0; ··· 844 841 list_for_each_entry_safe(urbp, tmp, &info->in_progress_list, list) { 845 842 req_id = urbp->req_id; 846 843 if (!urbp->unlinked) { 847 - xenhcd_gnttab_done(&info->shadow[req_id]); 844 + xenhcd_gnttab_done(info, req_id); 845 + if (info->error) 846 + return; 848 847 if (urbp->urb->status == -EINPROGRESS) 849 848 /* not dequeued */ 850 849 xenhcd_giveback_urb(info, urbp->urb, ··· 947 942 rp = info->urb_ring.sring->rsp_prod; 948 943 if (RING_RESPONSE_PROD_OVERFLOW(&info->urb_ring, rp)) { 949 944 xenhcd_set_error(info, "Illegal index on urb-ring"); 950 - spin_unlock_irqrestore(&info->lock, flags); 951 - return 0; 945 + goto err; 952 946 } 953 947 rmb(); /* ensure we see queued responses up to "rp" */ 954 948 ··· 956 952 id = res.id; 957 953 if (id >= XENUSB_URB_RING_SIZE) { 958 954 xenhcd_set_error(info, "Illegal data on urb-ring"); 959 - continue; 955 + goto err; 960 956 } 961 957 962 958 if (likely(xenusb_pipesubmit(info->shadow[id].req.pipe))) { 963 - xenhcd_gnttab_done(&info->shadow[id]); 959 + xenhcd_gnttab_done(info, id); 960 + if (info->error) 961 + goto err; 964 962 urb = info->shadow[id].urb; 965 963 if (likely(urb)) { 966 964 urb->actual_length = res.actual_length; ··· 984 
978 spin_unlock_irqrestore(&info->lock, flags); 985 979 986 980 return more_to_do; 981 + 982 + err: 983 + spin_unlock_irqrestore(&info->lock, flags); 984 + return 0; 987 985 } 988 986 989 987 static int xenhcd_conn_notify(struct xenhcd_info *info)
+32 -2
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 1563 1563 1564 1564 switch (cmd) { 1565 1565 case VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET: 1566 + /* This mq feature check aligns with pre-existing userspace 1567 + * implementation. 1568 + * 1569 + * Without it, an untrusted driver could fake a multiqueue config 1570 + * request down to a non-mq device that may cause kernel to 1571 + * panic due to uninitialized resources for extra vqs. Even with 1572 + * a well behaving guest driver, it is not expected to allow 1573 + * changing the number of vqs on a non-mq device. 1574 + */ 1575 + if (!MLX5_FEATURE(mvdev, VIRTIO_NET_F_MQ)) 1576 + break; 1577 + 1566 1578 read = vringh_iov_pull_iotlb(&cvq->vring, &cvq->riov, (void *)&mq, sizeof(mq)); 1567 1579 if (read != sizeof(mq)) 1568 1580 break; 1569 1581 1570 1582 newqps = mlx5vdpa16_to_cpu(mvdev, mq.virtqueue_pairs); 1583 + if (newqps < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN || 1584 + newqps > mlx5_vdpa_max_qps(mvdev->max_vqs)) 1585 + break; 1586 + 1571 1587 if (ndev->cur_num_vqs == 2 * newqps) { 1572 1588 status = VIRTIO_NET_OK; 1573 1589 break; ··· 1913 1897 return ndev->mvdev.mlx_features; 1914 1898 } 1915 1899 1916 - static int verify_min_features(struct mlx5_vdpa_dev *mvdev, u64 features) 1900 + static int verify_driver_features(struct mlx5_vdpa_dev *mvdev, u64 features) 1917 1901 { 1902 + /* Minimum features to expect */ 1918 1903 if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM))) 1919 1904 return -EOPNOTSUPP; 1905 + 1906 + /* Double check features combination sent down by the driver. 1907 + * Fail invalid features due to absence of the depended feature. 1908 + * 1909 + * Per VIRTIO v1.1 specification, section 5.1.3.1 Feature bit 1910 + * requirements: "VIRTIO_NET_F_MQ Requires VIRTIO_NET_F_CTRL_VQ". 1911 + * By failing the invalid features sent down by untrusted drivers, 1912 + * we're assured the assumption made upon is_index_valid() and 1913 + * is_ctrl_vq_idx() will not be compromised. 
1914 + */ 1915 + if ((features & (BIT_ULL(VIRTIO_NET_F_MQ) | BIT_ULL(VIRTIO_NET_F_CTRL_VQ))) == 1916 + BIT_ULL(VIRTIO_NET_F_MQ)) 1917 + return -EINVAL; 1920 1918 1921 1919 return 0; 1922 1920 } ··· 2007 1977 2008 1978 print_features(mvdev, features, true); 2009 1979 2010 - err = verify_min_features(mvdev, features); 1980 + err = verify_driver_features(mvdev, features); 2011 1981 if (err) 2012 1982 return err; 2013 1983
+1 -1
drivers/vdpa/vdpa.c
··· 393 393 * If it does happen we assume a legacy guest. 394 394 */ 395 395 if (!vdev->features_valid) 396 - vdpa_set_features(vdev, 0, true); 396 + vdpa_set_features_unlocked(vdev, 0); 397 397 ops->get_config(vdev, offset, buf, len); 398 398 } 399 399
+1 -1
drivers/vdpa/vdpa_user/iova_domain.c
··· 294 294 295 295 iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true); 296 296 297 - return iova_pfn << shift; 297 + return (dma_addr_t)iova_pfn << shift; 298 298 } 299 299 300 300 static void vduse_domain_free_iova(struct iova_domain *iovad,
+1 -1
drivers/vdpa/virtio_pci/vp_vdpa.c
··· 533 533 { 534 534 struct vp_vdpa *vp_vdpa = pci_get_drvdata(pdev); 535 535 536 - vdpa_unregister_device(&vp_vdpa->vdpa); 537 536 vp_modern_remove(&vp_vdpa->mdev); 537 + vdpa_unregister_device(&vp_vdpa->vdpa); 538 538 } 539 539 540 540 static struct pci_driver vp_vdpa_driver = {
+11
drivers/vhost/iotlb.c
··· 57 57 if (last < start) 58 58 return -EFAULT; 59 59 60 + /* If the range being mapped is [0, ULONG_MAX], split it into two entries 61 + * otherwise its size would overflow u64. 62 + */ 63 + if (start == 0 && last == ULONG_MAX) { 64 + u64 mid = last / 2; 65 + 66 + vhost_iotlb_add_range_ctx(iotlb, start, mid, addr, perm, opaque); 67 + addr += mid + 1; 68 + start = mid + 1; 69 + } 70 + 60 71 if (iotlb->limit && 61 72 iotlb->nmaps == iotlb->limit && 62 73 iotlb->flags & VHOST_IOTLB_FLAG_RETIRE) {
+1 -1
drivers/vhost/vdpa.c
··· 286 286 if (copy_from_user(&features, featurep, sizeof(features))) 287 287 return -EFAULT; 288 288 289 - if (vdpa_set_features(vdpa, features, false)) 289 + if (vdpa_set_features(vdpa, features)) 290 290 return -EINVAL; 291 291 292 292 return 0;
+7 -2
drivers/vhost/vhost.c
··· 1170 1170 goto done; 1171 1171 } 1172 1172 1173 + if (msg.size == 0) { 1174 + ret = -EINVAL; 1175 + goto done; 1176 + } 1177 + 1173 1178 if (dev->msg_handler) 1174 1179 ret = dev->msg_handler(dev, &msg); 1175 1180 else ··· 1986 1981 return 0; 1987 1982 } 1988 1983 1989 - static int vhost_update_avail_event(struct vhost_virtqueue *vq, u16 avail_event) 1984 + static int vhost_update_avail_event(struct vhost_virtqueue *vq) 1990 1985 { 1991 1986 if (vhost_put_avail_event(vq)) 1992 1987 return -EFAULT; ··· 2532 2527 return false; 2533 2528 } 2534 2529 } else { 2535 - r = vhost_update_avail_event(vq, vq->avail_idx); 2530 + r = vhost_update_avail_event(vq); 2536 2531 if (r) { 2537 2532 vq_err(vq, "Failed to update avail event index at %p: %d\n", 2538 2533 vhost_avail_event(vq), r);
-1
drivers/virtio/Kconfig
··· 105 105 106 106 config VIRTIO_MEM 107 107 tristate "Virtio mem driver" 108 - default m 109 108 depends on X86_64 110 109 depends on VIRTIO 111 110 depends on MEMORY_HOTPLUG
+38 -18
drivers/virtio/virtio.c
··· 166 166 } 167 167 EXPORT_SYMBOL_GPL(virtio_add_status); 168 168 169 - int virtio_finalize_features(struct virtio_device *dev) 169 + /* Do some validation, then set FEATURES_OK */ 170 + static int virtio_features_ok(struct virtio_device *dev) 170 171 { 171 - int ret = dev->config->finalize_features(dev); 172 172 unsigned status; 173 + int ret; 173 174 174 175 might_sleep(); 175 - if (ret) 176 - return ret; 177 176 178 177 ret = arch_has_restricted_virtio_memory_access(); 179 178 if (ret) { ··· 201 202 } 202 203 return 0; 203 204 } 204 - EXPORT_SYMBOL_GPL(virtio_finalize_features); 205 205 206 + /** 207 + * virtio_reset_device - quiesce device for removal 208 + * @dev: the device to reset 209 + * 210 + * Prevents device from sending interrupts and accessing memory. 211 + * 212 + * Generally used for cleanup during driver / device removal. 213 + * 214 + * Once this has been invoked, caller must ensure that 215 + * virtqueue_notify / virtqueue_kick are not in progress. 216 + * 217 + * Note: this guarantees that vq callbacks are not in progress, however caller 218 + * is responsible for preventing access from other contexts, such as a system 219 + * call/workqueue/bh. Invoking virtio_break_device then flushing any such 220 + * contexts is one way to handle that. 221 + * */ 206 222 void virtio_reset_device(struct virtio_device *dev) 207 223 { 208 224 dev->config->reset(dev); ··· 259 245 driver_features_legacy = driver_features; 260 246 } 261 247 262 - /* 263 - * Some devices detect legacy solely via F_VERSION_1. Write 264 - * F_VERSION_1 to force LE config space accesses before FEATURES_OK for 265 - * these when needed. 
266 - */ 267 - if (drv->validate && !virtio_legacy_is_little_endian() 268 - && device_features & BIT_ULL(VIRTIO_F_VERSION_1)) { 269 - dev->features = BIT_ULL(VIRTIO_F_VERSION_1); 270 - dev->config->finalize_features(dev); 271 - } 272 - 273 248 if (device_features & (1ULL << VIRTIO_F_VERSION_1)) 274 249 dev->features = driver_features & device_features; 275 250 else ··· 269 266 if (device_features & (1ULL << i)) 270 267 __virtio_set_bit(dev, i); 271 268 269 + err = dev->config->finalize_features(dev); 270 + if (err) 271 + goto err; 272 + 272 273 if (drv->validate) { 274 + u64 features = dev->features; 275 + 273 276 err = drv->validate(dev); 274 277 if (err) 275 278 goto err; 279 + 280 + /* Did validation change any features? Then write them again. */ 281 + if (features != dev->features) { 282 + err = dev->config->finalize_features(dev); 283 + if (err) 284 + goto err; 285 + } 276 286 } 277 287 278 - err = virtio_finalize_features(dev); 288 + err = virtio_features_ok(dev); 279 289 if (err) 280 290 goto err; 281 291 ··· 512 496 /* We have a driver! */ 513 497 virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER); 514 498 515 - ret = virtio_finalize_features(dev); 499 + ret = dev->config->finalize_features(dev); 500 + if (ret) 501 + goto err; 502 + 503 + ret = virtio_features_ok(dev); 516 504 if (ret) 517 505 goto err; 518 506
+1 -1
drivers/virtio/virtio_vdpa.c
··· 317 317 /* Give virtio_ring a chance to accept features. */ 318 318 vring_transport_features(vdev); 319 319 320 - return vdpa_set_features(vdpa, vdev->features, false); 320 + return vdpa_set_features(vdpa, vdev->features); 321 321 } 322 322 323 323 static const char *virtio_vdpa_bus_name(struct virtio_device *vdev)
+7 -18
drivers/xen/gntalloc.c
··· 169 169 __del_gref(gref); 170 170 } 171 171 172 - /* It's possible for the target domain to map the just-allocated grant 173 - * references by blindly guessing their IDs; if this is done, then 174 - * __del_gref will leave them in the queue_gref list. They need to be 175 - * added to the global list so that we can free them when they are no 176 - * longer referenced. 177 - */ 178 - if (unlikely(!list_empty(&queue_gref))) 179 - list_splice_tail(&queue_gref, &gref_list); 180 172 mutex_unlock(&gref_mutex); 181 173 return rc; 182 174 } 183 175 184 176 static void __del_gref(struct gntalloc_gref *gref) 185 177 { 178 + unsigned long addr; 179 + 186 180 if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { 187 181 uint8_t *tmp = kmap(gref->page); 188 182 tmp[gref->notify.pgoff] = 0; ··· 190 196 gref->notify.flags = 0; 191 197 192 198 if (gref->gref_id) { 193 - if (gnttab_query_foreign_access(gref->gref_id)) 194 - return; 195 - 196 - if (!gnttab_end_foreign_access_ref(gref->gref_id, 0)) 197 - return; 198 - 199 - gnttab_free_grant_reference(gref->gref_id); 199 + if (gref->page) { 200 + addr = (unsigned long)page_to_virt(gref->page); 201 + gnttab_end_foreign_access(gref->gref_id, 0, addr); 202 + } else 203 + gnttab_free_grant_reference(gref->gref_id); 200 204 } 201 205 202 206 gref_size--; 203 207 list_del(&gref->next_gref); 204 - 205 - if (gref->page) 206 - __free_page(gref->page); 207 208 208 209 kfree(gref); 209 210 }
+39 -32
drivers/xen/grant-table.c
··· 134 134 */ 135 135 unsigned long (*end_foreign_transfer_ref)(grant_ref_t ref); 136 136 /* 137 - * Query the status of a grant entry. Ref parameter is reference of 138 - * queried grant entry, return value is the status of queried entry. 139 - * Detailed status(writing/reading) can be gotten from the return value 140 - * by bit operations. 137 + * Read the frame number related to a given grant reference. 141 138 */ 142 - int (*query_foreign_access)(grant_ref_t ref); 139 + unsigned long (*read_frame)(grant_ref_t ref); 143 140 }; 144 141 145 142 struct unmap_refs_callback_data { ··· 281 284 } 282 285 EXPORT_SYMBOL_GPL(gnttab_grant_foreign_access); 283 286 284 - static int gnttab_query_foreign_access_v1(grant_ref_t ref) 285 - { 286 - return gnttab_shared.v1[ref].flags & (GTF_reading|GTF_writing); 287 - } 288 - 289 - static int gnttab_query_foreign_access_v2(grant_ref_t ref) 290 - { 291 - return grstatus[ref] & (GTF_reading|GTF_writing); 292 - } 293 - 294 - int gnttab_query_foreign_access(grant_ref_t ref) 295 - { 296 - return gnttab_interface->query_foreign_access(ref); 297 - } 298 - EXPORT_SYMBOL_GPL(gnttab_query_foreign_access); 299 - 300 287 static int gnttab_end_foreign_access_ref_v1(grant_ref_t ref, int readonly) 301 288 { 302 289 u16 flags, nflags; ··· 334 353 } 335 354 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access_ref); 336 355 356 + static unsigned long gnttab_read_frame_v1(grant_ref_t ref) 357 + { 358 + return gnttab_shared.v1[ref].frame; 359 + } 360 + 361 + static unsigned long gnttab_read_frame_v2(grant_ref_t ref) 362 + { 363 + return gnttab_shared.v2[ref].full_page.frame; 364 + } 365 + 337 366 struct deferred_entry { 338 367 struct list_head list; 339 368 grant_ref_t ref; ··· 373 382 spin_unlock_irqrestore(&gnttab_list_lock, flags); 374 383 if (_gnttab_end_foreign_access_ref(entry->ref, entry->ro)) { 375 384 put_free_entry(entry->ref); 376 - if (entry->page) { 377 - pr_debug("freeing g.e. 
%#x (pfn %#lx)\n", 378 - entry->ref, page_to_pfn(entry->page)); 379 - put_page(entry->page); 380 - } else 381 - pr_info("freeing g.e. %#x\n", entry->ref); 385 + pr_debug("freeing g.e. %#x (pfn %#lx)\n", 386 + entry->ref, page_to_pfn(entry->page)); 387 + put_page(entry->page); 382 388 kfree(entry); 383 389 entry = NULL; 384 390 } else { ··· 400 412 static void gnttab_add_deferred(grant_ref_t ref, bool readonly, 401 413 struct page *page) 402 414 { 403 - struct deferred_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC); 415 + struct deferred_entry *entry; 416 + gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL; 404 417 const char *what = KERN_WARNING "leaking"; 418 + 419 + entry = kmalloc(sizeof(*entry), gfp); 420 + if (!page) { 421 + unsigned long gfn = gnttab_interface->read_frame(ref); 422 + 423 + page = pfn_to_page(gfn_to_pfn(gfn)); 424 + get_page(page); 425 + } 405 426 406 427 if (entry) { 407 428 unsigned long flags; ··· 432 435 what, ref, page ? page_to_pfn(page) : -1); 433 436 } 434 437 438 + int gnttab_try_end_foreign_access(grant_ref_t ref) 439 + { 440 + int ret = _gnttab_end_foreign_access_ref(ref, 0); 441 + 442 + if (ret) 443 + put_free_entry(ref); 444 + 445 + return ret; 446 + } 447 + EXPORT_SYMBOL_GPL(gnttab_try_end_foreign_access); 448 + 435 449 void gnttab_end_foreign_access(grant_ref_t ref, int readonly, 436 450 unsigned long page) 437 451 { 438 - if (gnttab_end_foreign_access_ref(ref, readonly)) { 439 - put_free_entry(ref); 452 + if (gnttab_try_end_foreign_access(ref)) { 440 453 if (page != 0) 441 454 put_page(virt_to_page(page)); 442 455 } else ··· 1424 1417 .update_entry = gnttab_update_entry_v1, 1425 1418 .end_foreign_access_ref = gnttab_end_foreign_access_ref_v1, 1426 1419 .end_foreign_transfer_ref = gnttab_end_foreign_transfer_ref_v1, 1427 - .query_foreign_access = gnttab_query_foreign_access_v1, 1420 + .read_frame = gnttab_read_frame_v1, 1428 1421 }; 1429 1422 1430 1423 static const struct gnttab_ops gnttab_v2_ops = { ··· 
1436 1429 .update_entry = gnttab_update_entry_v2, 1437 1430 .end_foreign_access_ref = gnttab_end_foreign_access_ref_v2, 1438 1431 .end_foreign_transfer_ref = gnttab_end_foreign_transfer_ref_v2, 1439 - .query_foreign_access = gnttab_query_foreign_access_v2, 1432 + .read_frame = gnttab_read_frame_v2, 1440 1433 }; 1441 1434 1442 1435 static bool gnttab_need_v2(void)
+4 -4
drivers/xen/pvcalls-front.c
··· 337 337 if (!map->active.ring) 338 338 return; 339 339 340 - free_pages((unsigned long)map->active.data.in, 341 - map->active.ring->ring_order); 340 + free_pages_exact(map->active.data.in, 341 + PAGE_SIZE << map->active.ring->ring_order); 342 342 free_page((unsigned long)map->active.ring); 343 343 } 344 344 ··· 352 352 goto out; 353 353 354 354 map->active.ring->ring_order = PVCALLS_RING_ORDER; 355 - bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 356 - PVCALLS_RING_ORDER); 355 + bytes = alloc_pages_exact(PAGE_SIZE << PVCALLS_RING_ORDER, 356 + GFP_KERNEL | __GFP_ZERO); 357 357 if (!bytes) 358 358 goto out; 359 359
+11 -13
drivers/xen/xenbus/xenbus_client.c
··· 379 379 unsigned int nr_pages, grant_ref_t *grefs) 380 380 { 381 381 int err; 382 - int i, j; 382 + unsigned int i; 383 + grant_ref_t gref_head; 384 + 385 + err = gnttab_alloc_grant_references(nr_pages, &gref_head); 386 + if (err) { 387 + xenbus_dev_fatal(dev, err, "granting access to ring page"); 388 + return err; 389 + } 383 390 384 391 for (i = 0; i < nr_pages; i++) { 385 392 unsigned long gfn; ··· 396 389 else 397 390 gfn = virt_to_gfn(vaddr); 398 391 399 - err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0); 400 - if (err < 0) { 401 - xenbus_dev_fatal(dev, err, 402 - "granting access to ring page"); 403 - goto fail; 404 - } 405 - grefs[i] = err; 392 + grefs[i] = gnttab_claim_grant_reference(&gref_head); 393 + gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id, 394 + gfn, 0); 406 395 407 396 vaddr = vaddr + XEN_PAGE_SIZE; 408 397 } 409 398 410 399 return 0; 411 - 412 - fail: 413 - for (j = 0; j < i; j++) 414 - gnttab_end_foreign_access_ref(grefs[j], 0); 415 - return err; 416 400 } 417 401 EXPORT_SYMBOL_GPL(xenbus_grant_ring); 418 402
+8 -1
fs/afs/write.c
··· 703 703 struct folio *folio; 704 704 struct page *head_page; 705 705 ssize_t ret; 706 - int n; 706 + int n, skips = 0; 707 707 708 708 _enter("%llx,%llx,", start, end); 709 709 ··· 754 754 #ifdef CONFIG_AFS_FSCACHE 755 755 folio_wait_fscache(folio); 756 756 #endif 757 + } else { 758 + start += folio_size(folio); 757 759 } 758 760 folio_put(folio); 761 + if (wbc->sync_mode == WB_SYNC_NONE) { 762 + if (skips >= 5 || need_resched()) 763 + break; 764 + skips++; 765 + } 759 766 continue; 760 767 } 761 768
+18 -7
fs/binfmt_elf.c
··· 1135 1135 * is then page aligned. 1136 1136 */ 1137 1137 load_bias = ELF_PAGESTART(load_bias - vaddr); 1138 - } 1139 1138 1140 - /* 1141 - * Calculate the entire size of the ELF mapping (total_size). 1142 - * (Note that load_addr_set is set to true later once the 1143 - * initial mapping is performed.) 1144 - */ 1145 - if (!load_addr_set) { 1139 + /* 1140 + * Calculate the entire size of the ELF mapping 1141 + * (total_size), used for the initial mapping, 1142 + * due to load_addr_set which is set to true later 1143 + * once the initial mapping is performed. 1144 + * 1145 + * Note that this is only sensible when the LOAD 1146 + * segments are contiguous (or overlapping). If 1147 + * used for LOADs that are far apart, this would 1148 + * cause the holes between LOADs to be mapped, 1149 + * running the risk of having the mapping fail, 1150 + * as it would be larger than the ELF file itself. 1151 + * 1152 + * As a result, only ET_DYN does this, since 1153 + * some ET_EXEC (e.g. ia64) may have large virtual 1154 + * memory holes between LOADs. 1155 + * 1156 + */ 1146 1157 total_size = total_mapping_size(elf_phdata, 1147 1158 elf_ex->e_phnum); 1148 1159 if (!total_size) {
+10
fs/btrfs/ctree.h
··· 602 602 /* Indicate that we want the transaction kthread to commit right now. */ 603 603 BTRFS_FS_COMMIT_TRANS, 604 604 605 + /* Indicate we have half completed snapshot deletions pending. */ 606 + BTRFS_FS_UNFINISHED_DROPS, 607 + 605 608 #if BITS_PER_LONG == 32 606 609 /* Indicate if we have error/warn message printed on 32bit systems */ 607 610 BTRFS_FS_32BIT_ERROR, ··· 1109 1106 BTRFS_ROOT_QGROUP_FLUSHING, 1110 1107 /* We started the orphan cleanup for this root. */ 1111 1108 BTRFS_ROOT_ORPHAN_CLEANUP, 1109 + /* This root has a drop operation that was started previously. */ 1110 + BTRFS_ROOT_UNFINISHED_DROP, 1112 1111 }; 1112 + 1113 + static inline void btrfs_wake_unfinished_drop(struct btrfs_fs_info *fs_info) 1114 + { 1115 + clear_and_wake_up_bit(BTRFS_FS_UNFINISHED_DROPS, &fs_info->flags); 1116 + } 1113 1117 1114 1118 /* 1115 1119 * Record swapped tree blocks of a subvolume tree for delayed subtree trace
+10
fs/btrfs/disk-io.c
··· 3813 3813 3814 3814 set_bit(BTRFS_FS_OPEN, &fs_info->flags); 3815 3815 3816 + /* Kick the cleaner thread so it'll start deleting snapshots. */ 3817 + if (test_bit(BTRFS_FS_UNFINISHED_DROPS, &fs_info->flags)) 3818 + wake_up_process(fs_info->cleaner_kthread); 3819 + 3816 3820 clear_oneshot: 3817 3821 btrfs_clear_oneshot_options(fs_info); 3818 3822 return 0; ··· 4541 4537 * still try to wake up the cleaner. 4542 4538 */ 4543 4539 kthread_park(fs_info->cleaner_kthread); 4540 + 4541 + /* 4542 + * If we had UNFINISHED_DROPS we could still be processing them, so 4543 + * clear that bit and wake up relocation so it can stop. 4544 + */ 4545 + btrfs_wake_unfinished_drop(fs_info); 4544 4546 4545 4547 /* wait for the qgroup rescan worker to stop */ 4546 4548 btrfs_qgroup_wait_for_completion(fs_info, false);
+10
fs/btrfs/extent-tree.c
··· 5622 5622 int ret; 5623 5623 int level; 5624 5624 bool root_dropped = false; 5625 + bool unfinished_drop = false; 5625 5626 5626 5627 btrfs_debug(fs_info, "Drop subvolume %llu", root->root_key.objectid); 5627 5628 ··· 5665 5664 * already dropped. 5666 5665 */ 5667 5666 set_bit(BTRFS_ROOT_DELETING, &root->state); 5667 + unfinished_drop = test_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state); 5668 + 5668 5669 if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) { 5669 5670 level = btrfs_header_level(root->node); 5670 5671 path->nodes[level] = btrfs_lock_root_node(root); ··· 5841 5838 kfree(wc); 5842 5839 btrfs_free_path(path); 5843 5840 out: 5841 + /* 5842 + * We were an unfinished drop root, check to see if there are any 5843 + * pending, and if not clear and wake up any waiters. 5844 + */ 5845 + if (!err && unfinished_drop) 5846 + btrfs_maybe_wake_unfinished_drop(fs_info); 5847 + 5844 5848 /* 5845 5849 * So if we need to stop dropping the snapshot for whatever reason we 5846 5850 * need to make sure to add it back to the dead root list so that we
+13 -3
fs/btrfs/extent_io.c
··· 6841 6841 { 6842 6842 struct btrfs_fs_info *fs_info = eb->fs_info; 6843 6843 6844 + /* 6845 + * If we are using the commit root we could potentially clear a page 6846 + * Uptodate while we're using the extent buffer that we've previously 6847 + * looked up. We don't want to complain in this case, as the page was 6848 + * valid before, we just didn't write it out. Instead we want to catch 6849 + * the case where we didn't actually read the block properly, which 6850 + * would have !PageUptodate && !PageError, as we clear PageError before 6851 + * reading. 6852 + */ 6844 6853 if (fs_info->sectorsize < PAGE_SIZE) { 6845 - bool uptodate; 6854 + bool uptodate, error; 6846 6855 6847 6856 uptodate = btrfs_subpage_test_uptodate(fs_info, page, 6848 6857 eb->start, eb->len); 6849 - WARN_ON(!uptodate); 6858 + error = btrfs_subpage_test_error(fs_info, page, eb->start, eb->len); 6859 + WARN_ON(!uptodate && !error); 6850 6860 } else { 6851 - WARN_ON(!PageUptodate(page)); 6861 + WARN_ON(!PageUptodate(page) && !PageError(page)); 6852 6862 } 6853 6863 } 6854 6864
+28
fs/btrfs/inode.c
··· 7600 7600 } 7601 7601 7602 7602 len = min(len, em->len - (start - em->start)); 7603 + 7604 + /* 7605 + * If we have a NOWAIT request and the range contains multiple extents 7606 + * (or a mix of extents and holes), then we return -EAGAIN to make the 7607 + * caller fallback to a context where it can do a blocking (without 7608 + * NOWAIT) request. This way we avoid doing partial IO and returning 7609 + * success to the caller, which is not optimal for writes and for reads 7610 + * it can result in unexpected behaviour for an application. 7611 + * 7612 + * When doing a read, because we use IOMAP_DIO_PARTIAL when calling 7613 + * iomap_dio_rw(), we can end up returning less data then what the caller 7614 + * asked for, resulting in an unexpected, and incorrect, short read. 7615 + * That is, the caller asked to read N bytes and we return less than that, 7616 + * which is wrong unless we are crossing EOF. This happens if we get a 7617 + * page fault error when trying to fault in pages for the buffer that is 7618 + * associated to the struct iov_iter passed to iomap_dio_rw(), and we 7619 + * have previously submitted bios for other extents in the range, in 7620 + * which case iomap_dio_rw() may return us EIOCBQUEUED if not all of 7621 + * those bios have completed by the time we get the page fault error, 7622 + * which we return back to our caller - we should only return EIOCBQUEUED 7623 + * after we have submitted bios for all the extents in the range. 7624 + */ 7625 + if ((flags & IOMAP_NOWAIT) && len < length) { 7626 + free_extent_map(em); 7627 + ret = -EAGAIN; 7628 + goto unlock_err; 7629 + } 7630 + 7603 7631 if (write) { 7604 7632 ret = btrfs_get_blocks_direct_write(&em, inode, dio_data, 7605 7633 start, len);
+8 -1
fs/btrfs/qgroup.c
··· 1197 1197 goto out; 1198 1198 1199 1199 /* 1200 + * Unlock the qgroup_ioctl_lock mutex before waiting for the rescan worker to 1201 + * complete. Otherwise we can deadlock because btrfs_remove_qgroup() needs 1202 + * to lock that mutex while holding a transaction handle and the rescan 1203 + * worker needs to commit a transaction. 1204 + */ 1205 + mutex_unlock(&fs_info->qgroup_ioctl_lock); 1206 + 1207 + /* 1200 1208 * Request qgroup rescan worker to complete and wait for it. This wait 1201 1209 * must be done before transaction start for quota disable since it may 1202 1210 * deadlock with transaction by the qgroup rescan worker. 1203 1211 */ 1204 1212 clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags); 1205 1213 btrfs_qgroup_wait_for_completion(fs_info, false); 1206 - mutex_unlock(&fs_info->qgroup_ioctl_lock); 1207 1214 1208 1215 /* 1209 1216 * 1 For the root item
+13
fs/btrfs/relocation.c
··· 3960 3960 int rw = 0; 3961 3961 int err = 0; 3962 3962 3963 + /* 3964 + * This only gets set if we had a half-deleted snapshot on mount. We 3965 + * cannot allow relocation to start while we're still trying to clean up 3966 + * these pending deletions. 3967 + */ 3968 + ret = wait_on_bit(&fs_info->flags, BTRFS_FS_UNFINISHED_DROPS, TASK_INTERRUPTIBLE); 3969 + if (ret) 3970 + return ret; 3971 + 3972 + /* We may have been woken up by close_ctree, so bail if we're closing. */ 3973 + if (btrfs_fs_closing(fs_info)) 3974 + return -EINTR; 3975 + 3963 3976 bg = btrfs_lookup_block_group(fs_info, group_start); 3964 3977 if (!bg) 3965 3978 return -ENOENT;
+15
fs/btrfs/root-tree.c
··· 278 278 279 279 WARN_ON(!test_bit(BTRFS_ROOT_ORPHAN_ITEM_INSERTED, &root->state)); 280 280 if (btrfs_root_refs(&root->root_item) == 0) { 281 + struct btrfs_key drop_key; 282 + 283 + btrfs_disk_key_to_cpu(&drop_key, &root->root_item.drop_progress); 284 + /* 285 + * If we have a non-zero drop_progress then we know we 286 + * made it partly through deleting this snapshot, and 287 + * thus we need to make sure we block any balance from 288 + * happening until this snapshot is completely dropped. 289 + */ 290 + if (drop_key.objectid != 0 || drop_key.type != 0 || 291 + drop_key.offset != 0) { 292 + set_bit(BTRFS_FS_UNFINISHED_DROPS, &fs_info->flags); 293 + set_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state); 294 + } 295 + 281 296 set_bit(BTRFS_ROOT_DEAD_TREE, &root->state); 282 297 btrfs_add_dead_root(root); 283 298 }
+1 -1
fs/btrfs/subpage.c
··· 736 736 * Since we own the page lock, no one else could touch subpage::writers 737 737 * and we are safe to do several atomic operations without spinlock. 738 738 */ 739 - if (atomic_read(&subpage->writers)) 739 + if (atomic_read(&subpage->writers) == 0) 740 740 /* No writers, locked by plain lock_page() */ 741 741 return unlock_page(page); 742 742
+63 -2
fs/btrfs/transaction.c
··· 854 854 static noinline void wait_for_commit(struct btrfs_transaction *commit, 855 855 const enum btrfs_trans_state min_state) 856 856 { 857 - wait_event(commit->commit_wait, commit->state >= min_state); 857 + struct btrfs_fs_info *fs_info = commit->fs_info; 858 + u64 transid = commit->transid; 859 + bool put = false; 860 + 861 + while (1) { 862 + wait_event(commit->commit_wait, commit->state >= min_state); 863 + if (put) 864 + btrfs_put_transaction(commit); 865 + 866 + if (min_state < TRANS_STATE_COMPLETED) 867 + break; 868 + 869 + /* 870 + * A transaction isn't really completed until all of the 871 + * previous transactions are completed, but with fsync we can 872 + * end up with SUPER_COMMITTED transactions before a COMPLETED 873 + * transaction. Wait for those. 874 + */ 875 + 876 + spin_lock(&fs_info->trans_lock); 877 + commit = list_first_entry_or_null(&fs_info->trans_list, 878 + struct btrfs_transaction, 879 + list); 880 + if (!commit || commit->transid > transid) { 881 + spin_unlock(&fs_info->trans_lock); 882 + break; 883 + } 884 + refcount_inc(&commit->use_count); 885 + put = true; 886 + spin_unlock(&fs_info->trans_lock); 887 + } 858 888 } 859 889 860 890 int btrfs_wait_for_commit(struct btrfs_fs_info *fs_info, u64 transid) ··· 1350 1320 } 1351 1321 1352 1322 /* 1323 + * If we had a pending drop we need to see if there are any others left in our 1324 + * dead roots list, and if not clear our bit and wake any waiters. 1325 + */ 1326 + void btrfs_maybe_wake_unfinished_drop(struct btrfs_fs_info *fs_info) 1327 + { 1328 + /* 1329 + * We put the drop in progress roots at the front of the list, so if the 1330 + * first entry doesn't have UNFINISHED_DROP set we can wake everybody 1331 + * up. 
1332 + */ 1333 + spin_lock(&fs_info->trans_lock); 1334 + if (!list_empty(&fs_info->dead_roots)) { 1335 + struct btrfs_root *root = list_first_entry(&fs_info->dead_roots, 1336 + struct btrfs_root, 1337 + root_list); 1338 + if (test_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state)) { 1339 + spin_unlock(&fs_info->trans_lock); 1340 + return; 1341 + } 1342 + } 1343 + spin_unlock(&fs_info->trans_lock); 1344 + 1345 + btrfs_wake_unfinished_drop(fs_info); 1346 + } 1347 + 1348 + /* 1353 1349 * dead roots are old snapshots that need to be deleted. This allocates 1354 1350 * a dirty root struct and adds it into the list of dead roots that need to 1355 1351 * be deleted ··· 1387 1331 spin_lock(&fs_info->trans_lock); 1388 1332 if (list_empty(&root->root_list)) { 1389 1333 btrfs_grab_root(root); 1390 - list_add_tail(&root->root_list, &fs_info->dead_roots); 1334 + 1335 + /* We want to process the partially complete drops first. */ 1336 + if (test_bit(BTRFS_ROOT_UNFINISHED_DROP, &root->state)) 1337 + list_add(&root->root_list, &fs_info->dead_roots); 1338 + else 1339 + list_add_tail(&root->root_list, &fs_info->dead_roots); 1391 1340 } 1392 1341 spin_unlock(&fs_info->trans_lock); 1393 1342 }
+1
fs/btrfs/transaction.h
··· 216 216 217 217 void btrfs_add_dead_root(struct btrfs_root *root); 218 218 int btrfs_defrag_root(struct btrfs_root *root); 219 + void btrfs_maybe_wake_unfinished_drop(struct btrfs_fs_info *fs_info); 219 220 int btrfs_clean_one_deleted_snapshot(struct btrfs_root *root); 220 221 int btrfs_commit_transaction(struct btrfs_trans_handle *trans); 221 222 void btrfs_commit_transaction_async(struct btrfs_trans_handle *trans);
+9 -9
fs/btrfs/tree-checker.c
··· 1682 1682 */ 1683 1683 for (slot = 0; slot < nritems; slot++) { 1684 1684 u32 item_end_expected; 1685 + u64 item_data_end; 1685 1686 int ret; 1686 1687 1687 1688 btrfs_item_key_to_cpu(leaf, &key, slot); ··· 1697 1696 return -EUCLEAN; 1698 1697 } 1699 1698 1699 + item_data_end = (u64)btrfs_item_offset(leaf, slot) + 1700 + btrfs_item_size(leaf, slot); 1700 1701 /* 1701 1702 * Make sure the offset and ends are right, remember that the 1702 1703 * item data starts at the end of the leaf and grows towards the ··· 1709 1706 else 1710 1707 item_end_expected = btrfs_item_offset(leaf, 1711 1708 slot - 1); 1712 - if (unlikely(btrfs_item_data_end(leaf, slot) != item_end_expected)) { 1709 + if (unlikely(item_data_end != item_end_expected)) { 1713 1710 generic_err(leaf, slot, 1714 - "unexpected item end, have %u expect %u", 1715 - btrfs_item_data_end(leaf, slot), 1716 - item_end_expected); 1711 + "unexpected item end, have %llu expect %u", 1712 + item_data_end, item_end_expected); 1717 1713 return -EUCLEAN; 1718 1714 } 1719 1715 ··· 1721 1719 * just in case all the items are consistent to each other, but 1722 1720 * all point outside of the leaf. 1723 1721 */ 1724 - if (unlikely(btrfs_item_data_end(leaf, slot) > 1725 - BTRFS_LEAF_DATA_SIZE(fs_info))) { 1722 + if (unlikely(item_data_end > BTRFS_LEAF_DATA_SIZE(fs_info))) { 1726 1723 generic_err(leaf, slot, 1727 - "slot end outside of leaf, have %u expect range [0, %u]", 1728 - btrfs_item_data_end(leaf, slot), 1729 - BTRFS_LEAF_DATA_SIZE(fs_info)); 1724 + "slot end outside of leaf, have %llu expect range [0, %u]", 1725 + item_data_end, BTRFS_LEAF_DATA_SIZE(fs_info)); 1730 1726 return -EUCLEAN; 1731 1727 } 1732 1728
+49 -12
fs/btrfs/tree-log.c
··· 1362 1362 inode, name, namelen); 1363 1363 kfree(name); 1364 1364 iput(dir); 1365 + /* 1366 + * Whenever we need to check if a name exists or not, we 1367 + * check the subvolume tree. So after an unlink we must 1368 + * run delayed items, so that future checks for a name 1369 + * during log replay see that the name does not exists 1370 + * anymore. 1371 + */ 1372 + if (!ret) 1373 + ret = btrfs_run_delayed_items(trans); 1365 1374 if (ret) 1366 1375 goto out; 1367 1376 goto again; ··· 1623 1614 */ 1624 1615 if (!ret && inode->i_nlink == 0) 1625 1616 inc_nlink(inode); 1617 + /* 1618 + * Whenever we need to check if a name exists or 1619 + * not, we check the subvolume tree. So after an 1620 + * unlink we must run delayed items, so that future 1621 + * checks for a name during log replay see that the 1622 + * name does not exists anymore. 1623 + */ 1624 + if (!ret) 1625 + ret = btrfs_run_delayed_items(trans); 1626 1626 } 1627 1627 if (ret < 0) 1628 1628 goto out; ··· 4653 4635 4654 4636 /* 4655 4637 * Log all prealloc extents beyond the inode's i_size to make sure we do not 4656 - * lose them after doing a fast fsync and replaying the log. We scan the 4638 + * lose them after doing a full/fast fsync and replaying the log. We scan the 4657 4639 * subvolume's root instead of iterating the inode's extent map tree because 4658 4640 * otherwise we can log incorrect extent items based on extent map conversion. 
4659 4641 * That can happen due to the fact that extent maps are merged when they ··· 5432 5414 struct btrfs_log_ctx *ctx, 5433 5415 bool *need_log_inode_item) 5434 5416 { 5417 + const u64 i_size = i_size_read(&inode->vfs_inode); 5435 5418 struct btrfs_root *root = inode->root; 5436 5419 int ins_start_slot = 0; 5437 5420 int ins_nr = 0; ··· 5453 5434 if (min_key->type > max_key->type) 5454 5435 break; 5455 5436 5456 - if (min_key->type == BTRFS_INODE_ITEM_KEY) 5437 + if (min_key->type == BTRFS_INODE_ITEM_KEY) { 5457 5438 *need_log_inode_item = false; 5458 - 5459 - if ((min_key->type == BTRFS_INODE_REF_KEY || 5460 - min_key->type == BTRFS_INODE_EXTREF_KEY) && 5461 - inode->generation == trans->transid && 5462 - !recursive_logging) { 5439 + } else if (min_key->type == BTRFS_EXTENT_DATA_KEY && 5440 + min_key->offset >= i_size) { 5441 + /* 5442 + * Extents at and beyond eof are logged with 5443 + * btrfs_log_prealloc_extents(). 5444 + * Only regular files have BTRFS_EXTENT_DATA_KEY keys, 5445 + * and no keys greater than that, so bail out. 
5446 + */ 5447 + break; 5448 + } else if ((min_key->type == BTRFS_INODE_REF_KEY || 5449 + min_key->type == BTRFS_INODE_EXTREF_KEY) && 5450 + inode->generation == trans->transid && 5451 + !recursive_logging) { 5463 5452 u64 other_ino = 0; 5464 5453 u64 other_parent = 0; 5465 5454 ··· 5498 5471 btrfs_release_path(path); 5499 5472 goto next_key; 5500 5473 } 5501 - } 5502 - 5503 - /* Skip xattrs, we log them later with btrfs_log_all_xattrs() */ 5504 - if (min_key->type == BTRFS_XATTR_ITEM_KEY) { 5474 + } else if (min_key->type == BTRFS_XATTR_ITEM_KEY) { 5475 + /* Skip xattrs, logged later with btrfs_log_all_xattrs() */ 5505 5476 if (ins_nr == 0) 5506 5477 goto next_slot; 5507 5478 ret = copy_items(trans, inode, dst_path, path, ··· 5552 5527 break; 5553 5528 } 5554 5529 } 5555 - if (ins_nr) 5530 + if (ins_nr) { 5556 5531 ret = copy_items(trans, inode, dst_path, path, ins_start_slot, 5557 5532 ins_nr, inode_only, logged_isize); 5533 + if (ret) 5534 + return ret; 5535 + } 5536 + 5537 + if (inode_only == LOG_INODE_ALL && S_ISREG(inode->vfs_inode.i_mode)) { 5538 + /* 5539 + * Release the path because otherwise we might attempt to double 5540 + * lock the same leaf with btrfs_log_prealloc_extents() below. 5541 + */ 5542 + btrfs_release_path(path); 5543 + ret = btrfs_log_prealloc_extents(trans, inode, dst_path); 5544 + } 5558 5545 5559 5546 return ret; 5560 5547 }
+1 -1
fs/cachefiles/interface.c
··· 254 254 ret = cachefiles_inject_write_error(); 255 255 if (ret == 0) 256 256 ret = vfs_fallocate(file, FALLOC_FL_ZERO_RANGE, 257 - new_size, dio_size); 257 + new_size, dio_size - new_size); 258 258 if (ret < 0) { 259 259 trace_cachefiles_io_error(object, file_inode(file), ret, 260 260 cachefiles_trace_fallocate_error);
+20 -3
fs/cachefiles/xattr.c
··· 28 28 static const char cachefiles_xattr_cache[] = 29 29 XATTR_USER_PREFIX "CacheFiles.cache"; 30 30 31 + struct cachefiles_vol_xattr { 32 + __be32 reserved; /* Reserved, should be 0 */ 33 + __u8 data[]; /* netfs volume coherency data */ 34 + } __packed; 35 + 31 36 /* 32 37 * set the state xattr on a cache file 33 38 */ ··· 190 185 */ 191 186 bool cachefiles_set_volume_xattr(struct cachefiles_volume *volume) 192 187 { 188 + struct cachefiles_vol_xattr *buf; 193 189 unsigned int len = volume->vcookie->coherency_len; 194 190 const void *p = volume->vcookie->coherency; 195 191 struct dentry *dentry = volume->dentry; ··· 198 192 199 193 _enter("%x,#%d", volume->vcookie->debug_id, len); 200 194 195 + len += sizeof(*buf); 196 + buf = kmalloc(len, GFP_KERNEL); 197 + if (!buf) 198 + return false; 199 + buf->reserved = cpu_to_be32(0); 200 + memcpy(buf->data, p, len); 201 + 201 202 ret = cachefiles_inject_write_error(); 202 203 if (ret == 0) 203 204 ret = vfs_setxattr(&init_user_ns, dentry, cachefiles_xattr_cache, 204 - p, len, 0); 205 + buf, len, 0); 205 206 if (ret < 0) { 206 207 trace_cachefiles_vfs_error(NULL, d_inode(dentry), ret, 207 208 cachefiles_trace_setxattr_error); ··· 222 209 cachefiles_coherency_vol_set_ok); 223 210 } 224 211 212 + kfree(buf); 225 213 _leave(" = %d", ret); 226 214 return ret == 0; 227 215 } ··· 232 218 */ 233 219 int cachefiles_check_volume_xattr(struct cachefiles_volume *volume) 234 220 { 235 - struct cachefiles_xattr *buf; 221 + struct cachefiles_vol_xattr *buf; 236 222 struct dentry *dentry = volume->dentry; 237 223 unsigned int len = volume->vcookie->coherency_len; 238 224 const void *p = volume->vcookie->coherency; ··· 242 228 243 229 _enter(""); 244 230 231 + len += sizeof(*buf); 245 232 buf = kmalloc(len, GFP_KERNEL); 246 233 if (!buf) 247 234 return -ENOMEM; ··· 260 245 "Failed to read xattr with error %zd", xlen); 261 246 } 262 247 why = cachefiles_coherency_vol_check_xattr; 263 - } else if (memcmp(buf->data, p, len) != 0) { 248 + 
} else if (buf->reserved != cpu_to_be32(0)) { 249 + why = cachefiles_coherency_vol_check_resv; 250 + } else if (memcmp(buf->data, p, len - sizeof(*buf)) != 0) { 264 251 why = cachefiles_coherency_vol_check_cmp; 265 252 } else { 266 253 why = cachefiles_coherency_vol_check_ok;
+1 -1
fs/erofs/internal.h
··· 325 325 unsigned char z_algorithmtype[2]; 326 326 unsigned char z_logical_clusterbits; 327 327 unsigned long z_tailextent_headlcn; 328 - unsigned int z_idataoff; 328 + erofs_off_t z_idataoff; 329 329 unsigned short z_idata_size; 330 330 }; 331 331 #endif /* CONFIG_EROFS_FS_ZIP */
+11 -1
fs/fuse/dev.c
··· 941 941 942 942 while (count) { 943 943 if (cs->write && cs->pipebufs && page) { 944 - return fuse_ref_page(cs, page, offset, count); 944 + /* 945 + * Can't control lifetime of pipe buffers, so always 946 + * copy user pages. 947 + */ 948 + if (cs->req->args->user_pages) { 949 + err = fuse_copy_fill(cs); 950 + if (err) 951 + return err; 952 + } else { 953 + return fuse_ref_page(cs, page, offset, count); 954 + } 945 955 } else if (!cs->len) { 946 956 if (cs->move_pages && page && 947 957 offset == 0 && count == PAGE_SIZE) {
+1
fs/fuse/file.c
··· 1413 1413 (PAGE_SIZE - ret) & (PAGE_SIZE - 1); 1414 1414 } 1415 1415 1416 + ap->args.user_pages = true; 1416 1417 if (write) 1417 1418 ap->args.in_pages = true; 1418 1419 else
+1
fs/fuse/fuse_i.h
··· 256 256 bool nocreds:1; 257 257 bool in_pages:1; 258 258 bool out_pages:1; 259 + bool user_pages:1; 259 260 bool out_argvar:1; 260 261 bool page_zeroing:1; 261 262 bool page_replace:1;
+1 -2
fs/fuse/inode.c
··· 23 23 #include <linux/exportfs.h> 24 24 #include <linux/posix_acl.h> 25 25 #include <linux/pid_namespace.h> 26 + #include <uapi/linux/magic.h> 26 27 27 28 MODULE_AUTHOR("Miklos Szeredi <miklos@szeredi.hu>"); 28 29 MODULE_DESCRIPTION("Filesystem in Userspace"); ··· 50 49 MODULE_PARM_DESC(max_user_congthresh, 51 50 "Global limit for the maximum congestion threshold an " 52 51 "unprivileged user can set"); 53 - 54 - #define FUSE_SUPER_MAGIC 0x65735546 55 52 56 53 #define FUSE_DEFAULT_BLKSIZE 512 57 54
+6 -3
fs/fuse/ioctl.c
··· 394 394 args.out_args[1].value = ptr; 395 395 396 396 err = fuse_simple_request(fm, &args); 397 - if (!err && outarg.flags & FUSE_IOCTL_RETRY) 398 - err = -EIO; 399 - 397 + if (!err) { 398 + if (outarg.result < 0) 399 + err = outarg.result; 400 + else if (outarg.flags & FUSE_IOCTL_RETRY) 401 + err = -EIO; 402 + } 400 403 return err; 401 404 } 402 405
+7 -4
fs/pipe.c
··· 253 253 */ 254 254 was_full = pipe_full(pipe->head, pipe->tail, pipe->max_usage); 255 255 for (;;) { 256 - unsigned int head = pipe->head; 256 + /* Read ->head with a barrier vs post_one_notification() */ 257 + unsigned int head = smp_load_acquire(&pipe->head); 257 258 unsigned int tail = pipe->tail; 258 259 unsigned int mask = pipe->ring_size - 1; 259 260 ··· 832 831 int i; 833 832 834 833 #ifdef CONFIG_WATCH_QUEUE 835 - if (pipe->watch_queue) { 834 + if (pipe->watch_queue) 836 835 watch_queue_clear(pipe->watch_queue); 837 - put_watch_queue(pipe->watch_queue); 838 - } 839 836 #endif 840 837 841 838 (void) account_pipe_buffers(pipe->user, pipe->nr_accounted, 0); ··· 843 844 if (buf->ops) 844 845 pipe_buf_release(pipe, buf); 845 846 } 847 + #ifdef CONFIG_WATCH_QUEUE 848 + if (pipe->watch_queue) 849 + put_watch_queue(pipe->watch_queue); 850 + #endif 846 851 if (pipe->tmp_page) 847 852 __free_page(pipe->tmp_page); 848 853 kfree(pipe->bufs);
+5 -4
fs/proc/task_mmu.c
··· 309 309 310 310 name = arch_vma_name(vma); 311 311 if (!name) { 312 - const char *anon_name; 312 + struct anon_vma_name *anon_name; 313 313 314 314 if (!mm) { 315 315 name = "[vdso]"; ··· 327 327 goto done; 328 328 } 329 329 330 - anon_name = vma_anon_name(vma); 330 + anon_name = anon_vma_name(vma); 331 331 if (anon_name) { 332 332 seq_pad(m, ' '); 333 - seq_printf(m, "[anon:%s]", anon_name); 333 + seq_printf(m, "[anon:%s]", anon_name->name); 334 334 } 335 335 } 336 336 ··· 1597 1597 * Bits 5-54 swap offset if swapped 1598 1598 * Bit 55 pte is soft-dirty (see Documentation/admin-guide/mm/soft-dirty.rst) 1599 1599 * Bit 56 page exclusively mapped 1600 - * Bits 57-60 zero 1600 + * Bit 57 pte is uffd-wp write-protected 1601 + * Bits 58-60 zero 1601 1602 * Bit 61 page is file-page or shared-anon 1602 1603 * Bit 62 page swapped 1603 1604 * Bit 63 page present
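The task_mmu.c hunk above updates the pagemap comment so that bit 57 now reports the uffd-wp write-protect state. As a rough illustration of reading that layout from userspace, here is a decoder for one 64-bit `/proc/<pid>/pagemap` entry; the struct and function names are our own for this sketch, not kernel API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Bit layout per the task_mmu.c comment: bits 0-54 PFN (when present,
 * not swapped), 55 soft-dirty, 56 exclusively mapped, 57 uffd-wp,
 * 61 file-page or shared-anon, 62 swapped, 63 present. */
struct pagemap_entry {
	uint64_t pfn;		/* valid only when present and not swapped */
	bool soft_dirty;	/* bit 55 */
	bool exclusive;		/* bit 56 */
	bool uffd_wp;		/* bit 57 */
	bool file_or_shared;	/* bit 61 */
	bool swapped;		/* bit 62 */
	bool present;		/* bit 63 */
};

static struct pagemap_entry pagemap_decode(uint64_t raw)
{
	struct pagemap_entry e = {
		.pfn		= raw & ((1ULL << 55) - 1),
		.soft_dirty	= raw & (1ULL << 55),
		.exclusive	= raw & (1ULL << 56),
		.uffd_wp	= raw & (1ULL << 57),
		.file_or_shared	= raw & (1ULL << 61),
		.swapped	= raw & (1ULL << 62),
		.present	= raw & (1ULL << 63),
	};
	return e;
}
```

A real reader would `pread()` 8 bytes per page from `/proc/<pid>/pagemap` at offset `(vaddr / page_size) * 8` and feed each word through a decoder like this.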
+3 -3
fs/userfaultfd.c
··· 878 878 new_flags, vma->anon_vma, 879 879 vma->vm_file, vma->vm_pgoff, 880 880 vma_policy(vma), 881 - NULL_VM_UFFD_CTX, vma_anon_name(vma)); 881 + NULL_VM_UFFD_CTX, anon_vma_name(vma)); 882 882 if (prev) 883 883 vma = prev; 884 884 else ··· 1438 1438 vma->anon_vma, vma->vm_file, vma->vm_pgoff, 1439 1439 vma_policy(vma), 1440 1440 ((struct vm_userfaultfd_ctx){ ctx }), 1441 - vma_anon_name(vma)); 1441 + anon_vma_name(vma)); 1442 1442 if (prev) { 1443 1443 vma = prev; 1444 1444 goto next; ··· 1615 1615 prev = vma_merge(mm, prev, start, vma_end, new_flags, 1616 1616 vma->anon_vma, vma->vm_file, vma->vm_pgoff, 1617 1617 vma_policy(vma), 1618 - NULL_VM_UFFD_CTX, vma_anon_name(vma)); 1618 + NULL_VM_UFFD_CTX, anon_vma_name(vma)); 1619 1619 if (prev) { 1620 1620 vma = prev; 1621 1621 goto next;
+5
include/linux/arm-smccc.h
··· 92 92 ARM_SMCCC_SMC_32, \ 93 93 0, 0x7fff) 94 94 95 + #define ARM_SMCCC_ARCH_WORKAROUND_3 \ 96 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 97 + ARM_SMCCC_SMC_32, \ 98 + 0, 0x3fff) 99 + 95 100 #define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID \ 96 101 ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ 97 102 ARM_SMCCC_SMC_32, \
+11
include/linux/bpf.h
··· 1792 1792 int bpf_core_apply(struct bpf_core_ctx *ctx, const struct bpf_core_relo *relo, 1793 1793 int relo_idx, void *insn); 1794 1794 1795 + static inline bool unprivileged_ebpf_enabled(void) 1796 + { 1797 + return !sysctl_unprivileged_bpf_disabled; 1798 + } 1799 + 1795 1800 #else /* !CONFIG_BPF_SYSCALL */ 1796 1801 static inline struct bpf_prog *bpf_prog_get(u32 ufd) 1797 1802 { ··· 2016 2011 { 2017 2012 return NULL; 2018 2013 } 2014 + 2015 + static inline bool unprivileged_ebpf_enabled(void) 2016 + { 2017 + return false; 2018 + } 2019 + 2019 2020 #endif /* CONFIG_BPF_SYSCALL */ 2020 2021 2021 2022 void __bpf_free_used_btfs(struct bpf_prog_aux *aux,
-8
include/linux/dma-mapping.h
··· 62 62 #define DMA_ATTR_PRIVILEGED (1UL << 9) 63 63 64 64 /* 65 - * This is a hint to the DMA-mapping subsystem that the device is expected 66 - * to overwrite the entire mapped size, thus the caller does not require any 67 - * of the previous buffer contents to be preserved. This allows 68 - * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers. 69 - */ 70 - #define DMA_ATTR_OVERWRITE (1UL << 10) 71 - 72 - /* 73 65 * A dma_addr_t can hold any valid DMA or bus address for the platform. It can 74 66 * be given to a device to use as a DMA source or target. It is specific to a 75 67 * given device and there may be a translation between the CPU physical address
+2 -3
include/linux/mlx5/mlx5_ifc.h
··· 3434 3434 enum { 3435 3435 MLX5_TIRC_PACKET_MERGE_MASK_IPV4_LRO = BIT(0), 3436 3436 MLX5_TIRC_PACKET_MERGE_MASK_IPV6_LRO = BIT(1), 3437 - MLX5_TIRC_PACKET_MERGE_MASK_SHAMPO = BIT(2), 3438 3437 }; 3439 3438 3440 3439 enum { ··· 9899 9900 u8 reserved_at_0[0x6]; 9900 9901 u8 lossy[0x1]; 9901 9902 u8 epsb[0x1]; 9902 - u8 reserved_at_8[0xc]; 9903 - u8 size[0xc]; 9903 + u8 reserved_at_8[0x8]; 9904 + u8 size[0x10]; 9904 9905 9905 9906 u8 xoff_threshold[0x10]; 9906 9907 u8 xon_threshold[0x10];
+4 -3
include/linux/mm.h
··· 2626 2626 extern struct vm_area_struct *vma_merge(struct mm_struct *, 2627 2627 struct vm_area_struct *prev, unsigned long addr, unsigned long end, 2628 2628 unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t, 2629 - struct mempolicy *, struct vm_userfaultfd_ctx, const char *); 2629 + struct mempolicy *, struct vm_userfaultfd_ctx, struct anon_vma_name *); 2630 2630 extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *); 2631 2631 extern int __split_vma(struct mm_struct *, struct vm_area_struct *, 2632 2632 unsigned long addr, int new_below); ··· 3372 3372 3373 3373 #ifdef CONFIG_ANON_VMA_NAME 3374 3374 int madvise_set_anon_name(struct mm_struct *mm, unsigned long start, 3375 - unsigned long len_in, const char *name); 3375 + unsigned long len_in, 3376 + struct anon_vma_name *anon_name); 3376 3377 #else 3377 3378 static inline int 3378 3379 madvise_set_anon_name(struct mm_struct *mm, unsigned long start, 3379 - unsigned long len_in, const char *name) { 3380 + unsigned long len_in, struct anon_vma_name *anon_name) { 3380 3381 return 0; 3381 3382 } 3382 3383 #endif
+70 -29
include/linux/mm_inline.h
··· 140 140 141 141 #ifdef CONFIG_ANON_VMA_NAME 142 142 /* 143 - * mmap_lock should be read-locked when calling vma_anon_name() and while using 144 - * the returned pointer. 143 + * mmap_lock should be read-locked when calling anon_vma_name(). Caller should 144 + * either keep holding the lock while using the returned pointer or it should 145 + * raise anon_vma_name refcount before releasing the lock. 145 146 */ 146 - extern const char *vma_anon_name(struct vm_area_struct *vma); 147 - 148 - /* 149 - * mmap_lock should be read-locked for orig_vma->vm_mm. 150 - * mmap_lock should be write-locked for new_vma->vm_mm or new_vma should be 151 - * isolated. 152 - */ 153 - extern void dup_vma_anon_name(struct vm_area_struct *orig_vma, 154 - struct vm_area_struct *new_vma); 155 - 156 - /* 157 - * mmap_lock should be write-locked or vma should have been isolated under 158 - * write-locked mmap_lock protection. 159 - */ 160 - extern void free_vma_anon_name(struct vm_area_struct *vma); 147 + extern struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma); 148 + extern struct anon_vma_name *anon_vma_name_alloc(const char *name); 149 + extern void anon_vma_name_free(struct kref *kref); 161 150 162 151 /* mmap_lock should be read-locked */ 163 - static inline bool is_same_vma_anon_name(struct vm_area_struct *vma, 164 - const char *name) 152 + static inline void anon_vma_name_get(struct anon_vma_name *anon_name) 165 153 { 166 - const char *vma_name = vma_anon_name(vma); 154 + if (anon_name) 155 + kref_get(&anon_name->kref); 156 + } 167 157 168 - /* either both NULL, or pointers to same string */ 169 - if (vma_name == name) 158 + static inline void anon_vma_name_put(struct anon_vma_name *anon_name) 159 + { 160 + if (anon_name) 161 + kref_put(&anon_name->kref, anon_vma_name_free); 162 + } 163 + 164 + static inline 165 + struct anon_vma_name *anon_vma_name_reuse(struct anon_vma_name *anon_name) 166 + { 167 + /* Prevent anon_name refcount saturation early on */ 168 + if (kref_read(&anon_name->kref) < REFCOUNT_MAX) { 169 + anon_vma_name_get(anon_name); 170 + return anon_name; 171 + 172 + } 173 + return anon_vma_name_alloc(anon_name->name); 174 + } 175 + 176 + static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma, 177 + struct vm_area_struct *new_vma) 178 + { 179 + struct anon_vma_name *anon_name = anon_vma_name(orig_vma); 180 + 181 + if (anon_name) 182 + new_vma->anon_name = anon_vma_name_reuse(anon_name); 183 + } 184 + 185 + static inline void free_anon_vma_name(struct vm_area_struct *vma) 186 + { 187 + /* 188 + * Not using anon_vma_name because it generates a warning if mmap_lock 189 + * is not held, which might be the case here. 190 + */ 191 + if (!vma->vm_file) 192 + anon_vma_name_put(vma->anon_name); 193 + } 194 + 195 + static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1, 196 + struct anon_vma_name *anon_name2) 197 + { 198 + if (anon_name1 == anon_name2) 170 199 return true; 171 200 172 - return name && vma_name && !strcmp(name, vma_name); 201 + return anon_name1 && anon_name2 && 202 + !strcmp(anon_name1->name, anon_name2->name); 173 203 } 204 + 174 205 #else /* CONFIG_ANON_VMA_NAME */ 175 206 static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma) 176 207 { 177 208 return NULL; 178 209 } 179 - static inline void dup_vma_anon_name(struct vm_area_struct *orig_vma, 180 - struct vm_area_struct *new_vma) {} 181 - static inline void free_vma_anon_name(struct vm_area_struct *vma) {} 182 - static inline bool is_same_vma_anon_name(struct vm_area_struct *vma, 183 - const char *name) 210 + 211 + static inline struct anon_vma_name *anon_vma_name_alloc(const char *name) 212 + { 213 + return NULL; 214 + } 215 + 216 + static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {} 217 + static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {} 218 + static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma, 219 + struct vm_area_struct *new_vma) {} 220 + static inline void free_anon_vma_name(struct vm_area_struct *vma) {} 221 + 222 + static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1, 223 + struct anon_vma_name *anon_name2) 184 224 { 185 225 return true; 186 226 } 227 + 187 228 #endif /* CONFIG_ANON_VMA_NAME */ 188 229 189 230 static inline void init_tlb_flush_pending(struct mm_struct *mm)
+4 -1
include/linux/mm_types.h
··· 416 416 struct rb_node rb; 417 417 unsigned long rb_subtree_last; 418 418 } shared; 419 - /* Serialized by mmap_sem. */ 419 + /* 420 + * Serialized by mmap_sem. Never use directly because it is 421 + * valid only when vm_file is NULL. Use anon_vma_name instead. 422 + */ 420 423 struct anon_vma_name *anon_name; 421 424 }; 422 425
+2
include/linux/netdevice.h
··· 4602 4602 4603 4603 struct sk_buff *__skb_gso_segment(struct sk_buff *skb, 4604 4604 netdev_features_t features, bool tx_path); 4605 + struct sk_buff *skb_eth_gso_segment(struct sk_buff *skb, 4606 + netdev_features_t features, __be16 type); 4605 4607 struct sk_buff *skb_mac_gso_segment(struct sk_buff *skb, 4606 4608 netdev_features_t features); 4607 4609
+4
include/linux/netfilter_netdev.h
··· 101 101 nf_hook_state_init(&state, NF_NETDEV_EGRESS, 102 102 NFPROTO_NETDEV, dev, NULL, NULL, 103 103 dev_net(dev), NULL); 104 + 105 + /* nf assumes rcu_read_lock, not just read_lock_bh */ 106 + rcu_read_lock(); 104 107 ret = nf_hook_slow(skb, &state, e, 0); 108 + rcu_read_unlock(); 105 109 106 110 if (ret == 1) { 107 111 return skb;
+2 -2
include/linux/phy.h
··· 87 87 * 88 88 * @PHY_INTERFACE_MODE_NA: Not Applicable - don't touch 89 89 * @PHY_INTERFACE_MODE_INTERNAL: No interface, MAC and PHY combined 90 - * @PHY_INTERFACE_MODE_MII: Median-independent interface 91 - * @PHY_INTERFACE_MODE_GMII: Gigabit median-independent interface 90 + * @PHY_INTERFACE_MODE_MII: Media-independent interface 91 + * @PHY_INTERFACE_MODE_GMII: Gigabit media-independent interface 92 92 * @PHY_INTERFACE_MODE_SGMII: Serial gigabit media-independent interface 93 93 * @PHY_INTERFACE_MODE_TBI: Ten Bit Interface 94 94 * @PHY_INTERFACE_MODE_REVMII: Reverse Media Independent Interface
+5
include/linux/rfkill.h
··· 308 308 return false; 309 309 } 310 310 311 + static inline bool rfkill_soft_blocked(struct rfkill *rfkill) 312 + { 313 + return false; 314 + } 315 + 311 316 static inline enum rfkill_type rfkill_find_type(const char *name) 312 317 { 313 318 return RFKILL_TYPE_ALL;
+12 -6
include/linux/vdpa.h
··· 401 401 return ret; 402 402 } 403 403 404 - static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features, bool locked) 404 + static inline int vdpa_set_features_unlocked(struct vdpa_device *vdev, u64 features) 405 405 { 406 406 const struct vdpa_config_ops *ops = vdev->config; 407 407 int ret; 408 408 409 - if (!locked) 410 - mutex_lock(&vdev->cf_mutex); 411 - 412 409 vdev->features_valid = true; 413 410 ret = ops->set_driver_features(vdev, features); 414 - if (!locked) 415 - mutex_unlock(&vdev->cf_mutex); 411 + 412 + return ret; 413 + } 414 + 415 + static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features) 416 + { 417 + int ret; 418 + 419 + mutex_lock(&vdev->cf_mutex); 420 + ret = vdpa_set_features_unlocked(vdev, features); 421 + mutex_unlock(&vdev->cf_mutex); 416 422 417 423 return ret; 418 424 }
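The vdpa.h hunk above replaces a `bool locked` flag argument with the common kernel convention of a `_unlocked` variant: the plain wrapper takes `cf_mutex` itself and delegates, while callers that already hold the lock call the bare variant. A minimal self-contained sketch of the same split (all names here are illustrative; a toy mutex stands in for `vdev->cf_mutex` so the invariant is checkable):

```c
#include <assert.h>

/* Toy mutex standing in for vdev->cf_mutex; the asserts catch
 * double-lock and unlock-without-lock. */
struct toy_mutex { int held; };

static void toy_lock(struct toy_mutex *m)
{
	assert(!m->held);
	m->held = 1;
}

static void toy_unlock(struct toy_mutex *m)
{
	assert(m->held);
	m->held = 0;
}

struct toy_dev {
	struct toy_mutex cf_mutex;
	unsigned long long features;
	int features_valid;
};

/* Bare variant: the caller must already hold cf_mutex. */
static int toy_set_features_unlocked(struct toy_dev *d, unsigned long long f)
{
	assert(d->cf_mutex.held);
	d->features_valid = 1;
	d->features = f;
	return 0;
}

/* Wrapper: takes the lock, then delegates to the bare variant. */
static int toy_set_features(struct toy_dev *d, unsigned long long f)
{
	int ret;

	toy_lock(&d->cf_mutex);
	ret = toy_set_features_unlocked(d, f);
	toy_unlock(&d->cf_mutex);
	return ret;
}
```

Compared with a `bool locked` parameter, this naming makes the locking contract visible at every call site and lets the compiler (or an assert) catch a caller that picks the wrong variant.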
-1
include/linux/virtio.h
··· 133 133 void virtio_break_device(struct virtio_device *dev); 134 134 135 135 void virtio_config_changed(struct virtio_device *dev); 136 - int virtio_finalize_features(struct virtio_device *dev); 137 136 #ifdef CONFIG_PM_SLEEP 138 137 int virtio_device_freeze(struct virtio_device *dev); 139 138 int virtio_device_restore(struct virtio_device *dev);
+2 -1
include/linux/virtio_config.h
··· 64 64 * Returns the first 64 feature bits (all we currently need). 65 65 * @finalize_features: confirm what device features we'll be using. 66 66 * vdev: the virtio_device 67 - * This gives the final feature bits for the device: it can change 67 + * This sends the driver feature bits to the device: it can change 68 68 * the dev->feature bits if it wants. 69 + * Note: despite the name this can be called any number of times. 69 70 * Returns 0 on success or error status 70 71 * @bus_name: return the bus name associated with the device (optional) 71 72 * vdev: the virtio_device
+2 -1
include/linux/watch_queue.h
··· 28 28 struct watch_filter { 29 29 union { 30 30 struct rcu_head rcu; 31 - unsigned long type_filter[2]; /* Bitmask of accepted types */ 31 + /* Bitmask of accepted types */ 32 + DECLARE_BITMAP(type_filter, WATCH_TYPE__NR); 32 33 }; 33 34 u32 nr_filters; /* Number of filters */ 34 35 struct watch_type_filter filters[];
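The watch_queue.h hunk above replaces a hard-coded `unsigned long type_filter[2]` with `DECLARE_BITMAP(type_filter, WATCH_TYPE__NR)`, tying the array size to the number of watch types instead of a magic constant. Roughly, `DECLARE_BITMAP(name, bits)` expands to `unsigned long name[BITS_TO_LONGS(bits)]`; the macro bodies below mirror the kernel definitions as a self-contained sketch (the uapi currently defines two watch types, so `WATCH_TYPE__NR` is 2):

```c
#include <limits.h>

/* Mirrors the kernel's bitmap sizing helpers: round a bit count up to
 * a whole number of unsigned longs. */
#define BITS_PER_LONG (CHAR_BIT * sizeof(long))
#define BITS_TO_LONGS(nr) (((nr) + BITS_PER_LONG - 1) / BITS_PER_LONG)
#define DECLARE_BITMAP(name, bits) unsigned long name[BITS_TO_LONGS(bits)]

/* uapi/linux/watch_queue.h: WATCH_TYPE_META, WATCH_TYPE_KEY_NOTIFY */
enum { WATCH_TYPE__NR = 2 };

/* Sketch of the fixed field: sized from WATCH_TYPE__NR, so it stays
 * correct if watch types are ever added. */
struct watch_filter_sketch {
	DECLARE_BITMAP(type_filter, WATCH_TYPE__NR);
};
```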
+1 -2
include/net/bluetooth/bluetooth.h
··· 506 506 507 507 tmp = bt_skb_sendmsg(sk, msg, len, mtu, headroom, tailroom); 508 508 if (IS_ERR(tmp)) { 509 - kfree_skb(skb); 510 - return tmp; 509 + return skb; 511 510 } 512 511 513 512 len -= tmp->len;
+8
include/net/bluetooth/hci_core.h
··· 1489 1489 /* Extended advertising support */ 1490 1490 #define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV)) 1491 1491 1492 + /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 1789: 1493 + * 1494 + * C24: Mandatory if the LE Controller supports Connection State and either 1495 + * LE Feature (LL Privacy) or LE Feature (Extended Advertising) is supported 1496 + */ 1497 + #define use_enhanced_conn_complete(dev) (ll_privacy_capable(dev) || \ 1498 + ext_adv_capable(dev)) 1499 + 1492 1500 /* ----- HCI protocols ----- */ 1493 1501 #define HCI_PROTO_DEFER 0x01 1494 1502
+2
include/net/esp.h
··· 4 4 5 5 #include <linux/skbuff.h> 6 6 7 + #define ESP_SKB_FRAG_MAXSIZE (PAGE_SIZE << SKB_FRAG_PAGE_ORDER) 8 + 7 9 struct ip_esp_hdr; 8 10 9 11 static inline struct ip_esp_hdr *ip_esp_hdr(const struct sk_buff *skb)
+2 -2
include/net/ndisc.h
··· 475 475 void igmp6_cleanup(void); 476 476 void igmp6_late_cleanup(void); 477 477 478 - int igmp6_event_query(struct sk_buff *skb); 478 + void igmp6_event_query(struct sk_buff *skb); 479 479 480 - int igmp6_event_report(struct sk_buff *skb); 480 + void igmp6_event_report(struct sk_buff *skb); 481 481 482 482 483 483 #ifdef CONFIG_SYSCTL
+5 -1
include/net/netfilter/nf_flow_table.h
··· 96 96 FLOW_OFFLOAD_XMIT_NEIGH, 97 97 FLOW_OFFLOAD_XMIT_XFRM, 98 98 FLOW_OFFLOAD_XMIT_DIRECT, 99 + FLOW_OFFLOAD_XMIT_TC, 99 100 }; 100 101 101 102 #define NF_FLOW_TABLE_ENCAP_MAX 2 ··· 128 127 struct { } __hash; 129 128 130 129 u8 dir:2, 131 - xmit_type:2, 130 + xmit_type:3, 132 131 encap_num:2, 133 132 in_vlan_ingress:2; 134 133 u16 mtu; ··· 143 142 u8 h_source[ETH_ALEN]; 144 143 u8 h_dest[ETH_ALEN]; 145 144 } out; 145 + struct { 146 + u32 iifidx; 147 + } tc; 146 148 }; 147 149 }; 148 150
+1 -1
include/net/netfilter/nf_queue.h
··· 37 37 void nf_unregister_queue_handler(void); 38 38 void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict); 39 39 40 - void nf_queue_entry_get_refs(struct nf_queue_entry *entry); 40 + bool nf_queue_entry_get_refs(struct nf_queue_entry *entry); 41 41 void nf_queue_entry_free(struct nf_queue_entry *entry); 42 42 43 43 static inline void init_hashrandom(u32 *jhash_initval)
+3 -3
include/net/xfrm.h
··· 1568 1568 void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si); 1569 1569 u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq); 1570 1570 int xfrm_init_replay(struct xfrm_state *x); 1571 - u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu); 1572 1571 u32 xfrm_state_mtu(struct xfrm_state *x, int mtu); 1573 1572 int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload); 1574 1573 int xfrm_init_state(struct xfrm_state *x); ··· 1680 1681 const struct xfrm_migrate *m, int num_bundles, 1681 1682 const struct xfrm_kmaddress *k, 1682 1683 const struct xfrm_encap_tmpl *encap); 1683 - struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net); 1684 + struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net, 1685 + u32 if_id); 1684 1686 struct xfrm_state *xfrm_state_migrate(struct xfrm_state *x, 1685 1687 struct xfrm_migrate *m, 1686 1688 struct xfrm_encap_tmpl *encap); 1687 1689 int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type, 1688 1690 struct xfrm_migrate *m, int num_bundles, 1689 1691 struct xfrm_kmaddress *k, struct net *net, 1690 - struct xfrm_encap_tmpl *encap); 1692 + struct xfrm_encap_tmpl *encap, u32 if_id); 1691 1693 #endif 1692 1694 1693 1695 int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, __be16 sport);
+2 -1
include/soc/fsl/dpaa2-fd.h
··· 7 7 #ifndef __FSL_DPAA2_FD_H 8 8 #define __FSL_DPAA2_FD_H 9 9 10 - #include <linux/kernel.h> 10 + #include <linux/byteorder/generic.h> 11 + #include <linux/types.h> 11 12 12 13 /** 13 14 * DOC: DPAA2 FD - Frame Descriptor APIs for DPAA2
+2 -1
include/soc/fsl/qe/immap_qe.h
··· 13 13 #define _ASM_POWERPC_IMMAP_QE_H 14 14 #ifdef __KERNEL__ 15 15 16 - #include <linux/kernel.h> 16 + #include <linux/types.h> 17 + 17 18 #include <asm/io.h> 18 19 19 20 #define QE_IMMAP_SIZE (1024 * 1024) /* 1MB from 1MB+IMMR */
+3 -1
include/soc/fsl/qe/qe_tdm.h
··· 10 10 #ifndef _QE_TDM_H_ 11 11 #define _QE_TDM_H_ 12 12 13 - #include <linux/kernel.h> 14 13 #include <linux/list.h> 14 + #include <linux/types.h> 15 15 16 16 #include <soc/fsl/qe/immap_qe.h> 17 17 #include <soc/fsl/qe/qe.h> 18 18 19 19 #include <soc/fsl/qe/ucc.h> 20 20 #include <soc/fsl/qe/ucc_fast.h> 21 + 22 + struct device_node; 21 23 22 24 /* SI RAM entries */ 23 25 #define SIR_LAST 0x0001
+1 -1
include/soc/fsl/qe/ucc_fast.h
··· 10 10 #ifndef __UCC_FAST_H__ 11 11 #define __UCC_FAST_H__ 12 12 13 - #include <linux/kernel.h> 13 + #include <linux/types.h> 14 14 15 15 #include <soc/fsl/qe/immap_qe.h> 16 16 #include <soc/fsl/qe/qe.h>
+1 -1
include/soc/fsl/qe/ucc_slow.h
··· 11 11 #ifndef __UCC_SLOW_H__ 12 12 #define __UCC_SLOW_H__ 13 13 14 - #include <linux/kernel.h> 14 + #include <linux/types.h> 15 15 16 16 #include <soc/fsl/qe/immap_qe.h> 17 17 #include <soc/fsl/qe/qe.h>
+2
include/trace/events/cachefiles.h
··· 56 56 cachefiles_coherency_set_ok, 57 57 cachefiles_coherency_vol_check_cmp, 58 58 cachefiles_coherency_vol_check_ok, 59 + cachefiles_coherency_vol_check_resv, 59 60 cachefiles_coherency_vol_check_xattr, 60 61 cachefiles_coherency_vol_set_fail, 61 62 cachefiles_coherency_vol_set_ok, ··· 140 139 EM(cachefiles_coherency_set_ok, "SET ok ") \ 141 140 EM(cachefiles_coherency_vol_check_cmp, "VOL BAD cmp ") \ 142 141 EM(cachefiles_coherency_vol_check_ok, "VOL OK ") \ 142 + EM(cachefiles_coherency_vol_check_resv, "VOL BAD resv") \ 143 143 EM(cachefiles_coherency_vol_check_xattr,"VOL BAD xatt") \ 144 144 EM(cachefiles_coherency_vol_set_fail, "VOL SET fail") \ 145 145 E_(cachefiles_coherency_vol_set_ok, "VOL SET ok ")
+3 -1
include/uapi/linux/input-event-codes.h
··· 278 278 #define KEY_PAUSECD 201 279 279 #define KEY_PROG3 202 280 280 #define KEY_PROG4 203 281 - #define KEY_DASHBOARD 204 /* AL Dashboard */ 281 + #define KEY_ALL_APPLICATIONS 204 /* AC Desktop Show All Applications */ 282 + #define KEY_DASHBOARD KEY_ALL_APPLICATIONS 282 283 #define KEY_SUSPEND 205 283 284 #define KEY_CLOSE 206 /* AC Close */ 284 285 #define KEY_PLAY 207 ··· 613 612 #define KEY_ASSISTANT 0x247 /* AL Context-aware desktop assistant */ 614 613 #define KEY_KBD_LAYOUT_NEXT 0x248 /* AC Next Keyboard Layout Select */ 615 614 #define KEY_EMOJI_PICKER 0x249 /* Show/hide emoji picker (HUTRR101) */ 615 + #define KEY_DICTATE 0x24a /* Start or Stop Voice Dictation Session (HUTRR99) */ 616 616 617 617 #define KEY_BRIGHTNESS_MIN 0x250 /* Set Brightness to Minimum */ 618 618 #define KEY_BRIGHTNESS_MAX 0x251 /* Set Brightness to Maximum */
+1
include/uapi/linux/magic.h
··· 36 36 #define EFIVARFS_MAGIC 0xde5e81e4 37 37 #define HOSTFS_SUPER_MAGIC 0x00c0ffee 38 38 #define OVERLAYFS_SUPER_MAGIC 0x794c7630 39 + #define FUSE_SUPER_MAGIC 0x65735546 39 40 40 41 #define MINIX_SUPER_MAGIC 0x137F /* minix v1 fs, 14 char names */ 41 42 #define MINIX_SUPER_MAGIC2 0x138F /* minix v1 fs, 30 char names */
+6
include/uapi/linux/xfrm.h
··· 511 511 int ifindex; 512 512 __u8 flags; 513 513 }; 514 + /* This flag was exposed without any kernel code supporting it. 515 + * Unfortunately, strongswan has code that sets this flag, 516 + * which makes it impossible to reuse this bit. 517 + * 518 + * So leave it here to make sure that it won't be reused by mistake. 519 + */ 514 520 #define XFRM_OFFLOAD_IPV6 1 515 521 #define XFRM_OFFLOAD_INBOUND 2 516 522
+17 -2
include/xen/grant_table.h
··· 104 104 * access has been ended, free the given page too. Access will be ended 105 105 * immediately iff the grant entry is not in use, otherwise it will happen 106 106 * some time later. page may be 0, in which case no freeing will occur. 107 + * Note that the granted page might still be accessed (read or write) by the 108 + * other side after gnttab_end_foreign_access() returns, so even if page was 109 + * specified as 0 it is not allowed to just reuse the page for other 110 + * purposes immediately. gnttab_end_foreign_access() will take an additional 111 + * reference to the granted page in this case, which is dropped only after 112 + * the grant is no longer in use. 113 + * This requires that multi page allocations for areas subject to 114 + * gnttab_end_foreign_access() are done via alloc_pages_exact() (and freeing 115 + * via free_pages_exact()) in order to avoid high order pages. 107 116 */ 108 117 void gnttab_end_foreign_access(grant_ref_t ref, int readonly, 109 118 unsigned long page); 119 + 120 + /* 121 + * End access through the given grant reference, iff the grant entry is 122 + * no longer in use. In case of success ending foreign access, the 123 + * grant reference is deallocated. 124 + * Return 1 if the grant entry was freed, 0 if it is still in use. 125 + */ 126 + int gnttab_try_end_foreign_access(grant_ref_t ref); 110 127 111 128 int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn); 112 129 113 130 unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref); 114 131 unsigned long gnttab_end_foreign_transfer(grant_ref_t ref); 115 - 116 - int gnttab_query_foreign_access(grant_ref_t ref); 117 132 118 133 /* 119 134 * operations on reserved batches of grant references
+1 -1
kernel/configs/debug.config
··· 16 16 # 17 17 # Compile-time checks and compiler options 18 18 # 19 - CONFIG_DEBUG_INFO=y 19 + CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y 20 20 CONFIG_DEBUG_SECTION_MISMATCH=y 21 21 CONFIG_FRAME_WARN=2048 22 22 CONFIG_SECTION_MISMATCH_WARN_ONLY=y
+15 -8
kernel/dma/swiotlb.c
··· 627 627 for (i = 0; i < nr_slots(alloc_size + offset); i++) 628 628 mem->slots[index + i].orig_addr = slot_addr(orig_addr, i); 629 629 tlb_addr = slot_addr(mem->start, index) + offset; 630 - if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && 631 - (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE || 632 - dir == DMA_BIDIRECTIONAL)) 633 - swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE); 630 + /* 631 + * When dir == DMA_FROM_DEVICE we could omit the copy from the orig 632 + * to the tlb buffer, if we knew for sure the device will 633 + * overwrite the entire current content. But we don't. Thus 634 + * unconditional bounce may prevent leaking swiotlb content (i.e. 635 + * kernel memory) to user-space. 636 + */ 637 + swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE); 634 638 return tlb_addr; 635 639 } 636 640 ··· 701 697 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr, 702 698 size_t size, enum dma_data_direction dir) 703 699 { 704 - if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) 705 - swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE); 706 - else 707 - BUG_ON(dir != DMA_FROM_DEVICE); 700 + /* 701 + * Unconditional bounce is necessary to avoid corruption on 702 + * sync_*_for_cpu or dma_unmap_* when the device didn't overwrite 703 + * the whole length of the bounce buffer. 704 + */ 705 + swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE); 706 + BUG_ON(!valid_dma_direction(dir)); 708 707 } 709 708 710 709 void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+2 -2
kernel/fork.c
··· 366 366 *new = data_race(*orig); 367 367 INIT_LIST_HEAD(&new->anon_vma_chain); 368 368 new->vm_next = new->vm_prev = NULL; 369 - dup_vma_anon_name(orig, new); 369 + dup_anon_vma_name(orig, new); 370 370 } 371 371 return new; 372 372 } 373 373 374 374 void vm_area_free(struct vm_area_struct *vma) 375 375 { 376 - free_vma_anon_name(vma); 376 + free_anon_vma_name(vma); 377 377 kmem_cache_free(vm_area_cachep, vma); 378 378 } 379 379
+12 -7
kernel/sys.c
··· 7 7 8 8 #include <linux/export.h> 9 9 #include <linux/mm.h> 10 + #include <linux/mm_inline.h> 10 11 #include <linux/utsname.h> 11 12 #include <linux/mman.h> 12 13 #include <linux/reboot.h> ··· 2287 2286 { 2288 2287 struct mm_struct *mm = current->mm; 2289 2288 const char __user *uname; 2290 - char *name, *pch; 2289 + struct anon_vma_name *anon_name = NULL; 2291 2290 int error; 2292 2291 2293 2292 switch (opt) { 2294 2293 case PR_SET_VMA_ANON_NAME: 2295 2294 uname = (const char __user *)arg; 2296 2295 if (uname) { 2297 - name = strndup_user(uname, ANON_VMA_NAME_MAX_LEN); 2296 + char *name, *pch; 2298 2297 2298 + name = strndup_user(uname, ANON_VMA_NAME_MAX_LEN); 2299 2299 if (IS_ERR(name)) 2300 2300 return PTR_ERR(name); 2301 2301 ··· 2306 2304 return -EINVAL; 2307 2305 } 2308 2306 } 2309 - } else { 2310 - /* Reset the name */ 2311 - name = NULL; 2307 + /* anon_vma has its own copy */ 2308 + anon_name = anon_vma_name_alloc(name); 2309 + kfree(name); 2310 + if (!anon_name) 2311 + return -ENOMEM; 2312 + 2312 2313 } 2313 2314 2314 2315 mmap_write_lock(mm); 2315 - error = madvise_set_anon_name(mm, addr, size, name); 2316 + error = madvise_set_anon_name(mm, addr, size, anon_name); 2316 2317 mmap_write_unlock(mm); 2317 - kfree(name); 2318 + anon_vma_name_put(anon_name); 2318 2319 break; 2319 2320 default: 2320 2321 error = -EINVAL;
+7
kernel/sysctl.c
··· 180 180 return ret; 181 181 } 182 182 183 + void __weak unpriv_ebpf_notify(int new_state) 184 + { 185 + } 186 + 183 187 static int bpf_unpriv_handler(struct ctl_table *table, int write, 184 188 void *buffer, size_t *lenp, loff_t *ppos) 185 189 { ··· 201 197 return -EPERM; 202 198 *(int *)table->data = unpriv_enable; 203 199 } 200 + 201 + unpriv_ebpf_notify(unpriv_enable); 202 + 204 203 return ret; 205 204 } 206 205 #endif /* CONFIG_BPF_SYSCALL && CONFIG_SYSCTL */
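The sysctl.c hunk above hooks `unpriv_ebpf_notify()` into `bpf_unpriv_handler`, the handler behind `kernel.unprivileged_bpf_disabled`. Per the kernel's sysctl documentation that knob is one-way: 0 enables unprivileged eBPF, 2 disables it but can still be changed, and 1 disables it until reboot, after which only writes of 1 are accepted. A sketch of that state machine (function name and -1 error code are ours; the kernel returns -EPERM/-EINVAL):

```c
/*
 * kernel.unprivileged_bpf_disabled semantics (sketch):
 *   0 - unprivileged eBPF enabled
 *   1 - disabled and locked until reboot
 *   2 - disabled, but may still be switched to 0 or 1
 */
static int bpf_unpriv_sysctl_write(int *state, int new_state)
{
	if (new_state < 0 || new_state > 2)
		return -1;	/* out of range: -EINVAL in the kernel */
	if (*state == 1 && new_state != 1)
		return -1;	/* locked: -EPERM in the kernel */
	*state = new_state;
	return 0;
}
```

The notify hook exists so the Spectre v2 mitigation code (see the spectre.rst change in this merge) can react when unprivileged eBPF is toggled at runtime.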
+18 -8
kernel/trace/blktrace.c
··· 310 310 local_irq_restore(flags); 311 311 } 312 312 313 - static void blk_trace_free(struct blk_trace *bt) 313 + static void blk_trace_free(struct request_queue *q, struct blk_trace *bt) 314 314 { 315 315 relay_close(bt->rchan); 316 - debugfs_remove(bt->dir); 316 + 317 + /* 318 + * If 'bt->dir' is not set, then both 'dropped' and 'msg' are created 319 + * under 'q->debugfs_dir', thus lookup and remove them. 320 + */ 321 + if (!bt->dir) { 322 + debugfs_remove(debugfs_lookup("dropped", q->debugfs_dir)); 323 + debugfs_remove(debugfs_lookup("msg", q->debugfs_dir)); 324 + } else { 325 + debugfs_remove(bt->dir); 326 + } 317 327 free_percpu(bt->sequence); 318 328 free_percpu(bt->msg_data); 319 329 kfree(bt); ··· 345 335 mutex_unlock(&blk_probe_mutex); 346 336 } 347 337 348 - static void blk_trace_cleanup(struct blk_trace *bt) 338 + static void blk_trace_cleanup(struct request_queue *q, struct blk_trace *bt) 349 339 { 350 340 synchronize_rcu(); 351 - blk_trace_free(bt); 341 + blk_trace_free(q, bt); 352 342 put_probe_ref(); 353 343 } 354 344 ··· 362 352 return -EINVAL; 363 353 364 354 if (bt->trace_state != Blktrace_running) 365 - blk_trace_cleanup(bt); 355 + blk_trace_cleanup(q, bt); 366 356 367 357 return 0; 368 358 } ··· 582 572 ret = 0; 583 573 err: 584 574 if (ret) 585 - blk_trace_free(bt); 575 + blk_trace_free(q, bt); 586 576 return ret; 587 577 } 588 578 ··· 1626 1616 1627 1617 put_probe_ref(); 1628 1618 synchronize_rcu(); 1629 - blk_trace_free(bt); 1619 + blk_trace_free(q, bt); 1630 1620 return 0; 1631 1621 } 1632 1622 ··· 1657 1647 return 0; 1658 1648 1659 1649 free_bt: 1660 - blk_trace_free(bt); 1650 + blk_trace_free(q, bt); 1661 1651 return ret; 1662 1652 } 1663 1653
+2 -2
kernel/trace/ftrace.c
··· 7790 7790 7791 7791 /** 7792 7792 * register_ftrace_function - register a function for profiling 7793 - * @ops - ops structure that holds the function for profiling. 7793 + * @ops: ops structure that holds the function for profiling. 7794 7794 * 7795 7795 * Register a function to be called by all functions in the 7796 7796 * kernel. ··· 7817 7817 7818 7818 /** 7819 7819 * unregister_ftrace_function - unregister a function for profiling. 7820 - * @ops - ops structure that holds the function to unregister 7820 + * @ops: ops structure that holds the function to unregister 7821 7821 * 7822 7822 * Unregister a function that was added to be called by ftrace profiling. 7823 7823 */
+2 -2
kernel/trace/trace.c
··· 235 235 static int __init set_trace_boot_options(char *str) 236 236 { 237 237 strlcpy(trace_boot_options_buf, str, MAX_TRACER_SIZE); 238 - return 0; 238 + return 1; 239 239 } 240 240 __setup("trace_options=", set_trace_boot_options); 241 241 ··· 246 246 { 247 247 strlcpy(trace_boot_clock_buf, str, MAX_TRACER_SIZE); 248 248 trace_boot_clock = trace_boot_clock_buf; 249 - return 0; 249 + return 1; 250 250 } 251 251 __setup("trace_clock=", set_trace_boot_clock); 252 252
+3 -3
kernel/trace/trace_events_hist.c
··· 2289 2289 /* 2290 2290 * For backward compatibility, if field_name 2291 2291 * was "cpu", then we treat this the same as 2292 - * common_cpu. 2292 + * common_cpu. This also works for "CPU". 2293 2293 */ 2294 - if (strcmp(field_name, "cpu") == 0) { 2294 + if (field && field->filter_type == FILTER_CPU) { 2295 2295 *flags |= HIST_FIELD_FL_CPU; 2296 2296 } else { 2297 2297 hist_err(tr, HIST_ERR_FIELD_NOT_FOUND, ··· 4832 4832 4833 4833 if (hist_field->flags & HIST_FIELD_FL_STACKTRACE) 4834 4834 cmp_fn = tracing_map_cmp_none; 4835 - else if (!field) 4835 + else if (!field || hist_field->flags & HIST_FIELD_FL_CPU) 4836 4836 cmp_fn = tracing_map_cmp_num(hist_field->size, 4837 4837 hist_field->is_signed); 4838 4838 else if (is_string_field(field))
+1 -1
kernel/trace/trace_kprobe.c
··· 32 32 strlcpy(kprobe_boot_events_buf, str, COMMAND_LINE_SIZE); 33 33 disable_tracing_selftest("running kprobe events"); 34 34 35 - return 0; 35 + return 1; 36 36 } 37 37 __setup("kprobe_event=", set_kprobe_boot_events); 38 38
+31
kernel/trace/trace_osnoise.c
··· 1387 1387 } 1388 1388 1389 1389 /* 1390 + * In some cases, notably when running on a nohz_full CPU with 1391 + * a stopped tick PREEMPT_RCU has no way to account for QSs. 1392 + * This will eventually cause unwarranted noise as PREEMPT_RCU 1393 + * will force preemption as the means of ending the current 1394 + * grace period. We avoid this problem by calling 1395 + * rcu_momentary_dyntick_idle(), which performs a zero duration 1396 + * EQS allowing PREEMPT_RCU to end the current grace period. 1397 + * This call shouldn't be wrapped inside an RCU critical 1398 + * section. 1399 + * 1400 + * Note that in non PREEMPT_RCU kernels QSs are handled through 1401 + * cond_resched() 1402 + */ 1403 + if (IS_ENABLED(CONFIG_PREEMPT_RCU)) { 1404 + local_irq_disable(); 1405 + rcu_momentary_dyntick_idle(); 1406 + local_irq_enable(); 1407 + } 1408 + 1409 + /* 1390 1410 * For the non-preemptive kernel config: let threads runs, if 1391 1411 * they so wish. 1392 1412 */ ··· 2218 2198 * the last instance, and the workload can stop. 2219 2199 */ 2220 2200 if (osnoise_has_registered_instances()) 2201 + return; 2202 + 2203 + /* 2204 + * If callbacks were already disabled in a previous stop 2205 + * call, there is no need to disable then again. 2206 + * 2207 + * For instance, this happens when tracing is stopped via: 2208 + * echo 0 > tracing_on 2209 + * echo nop > current_tracer. 2210 + */ 2211 + if (!trace_osnoise_callback_enabled) 2221 2212 return; 2222 2213 2223 2214 trace_osnoise_callback_enabled = false;
+13 -1
kernel/user_namespace.c
··· 58 58 cred->user_ns = user_ns; 59 59 } 60 60 61 + static unsigned long enforced_nproc_rlimit(void) 62 + { 63 + unsigned long limit = RLIM_INFINITY; 64 + 65 + /* Is RLIMIT_NPROC currently enforced? */ 66 + if (!uid_eq(current_uid(), GLOBAL_ROOT_UID) || 67 + (current_user_ns() != &init_user_ns)) 68 + limit = rlimit(RLIMIT_NPROC); 69 + 70 + return limit; 71 + } 72 + 61 73 /* 62 74 * Create a new user namespace, deriving the creator from the user in the 63 75 * passed credentials, and replacing that user with the new root user for the ··· 134 122 for (i = 0; i < MAX_PER_NAMESPACE_UCOUNTS; i++) { 135 123 ns->ucount_max[i] = INT_MAX; 136 124 } 137 - set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_NPROC, rlimit(RLIMIT_NPROC)); 125 + set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_NPROC, enforced_nproc_rlimit()); 138 126 set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_MSGQUEUE, rlimit(RLIMIT_MSGQUEUE)); 139 127 set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_SIGPENDING, rlimit(RLIMIT_SIGPENDING)); 140 128 set_rlimit_ucount_max(ns, UCOUNT_RLIMIT_MEMLOCK, rlimit(RLIMIT_MEMLOCK));
+11 -11
kernel/watch_queue.c
··· 54 54 bit += page->index; 55 55 56 56 set_bit(bit, wqueue->notes_bitmap); 57 + generic_pipe_buf_release(pipe, buf); 57 58 } 58 59 59 60 // No try_steal function => no stealing ··· 113 112 buf->offset = offset; 114 113 buf->len = len; 115 114 buf->flags = PIPE_BUF_FLAG_WHOLE; 116 - pipe->head = head + 1; 115 + smp_store_release(&pipe->head, head + 1); /* vs pipe_read() */ 117 116 118 117 if (!test_and_clear_bit(note, wqueue->notes_bitmap)) { 119 118 spin_unlock_irq(&pipe->rd_wait.lock); ··· 220 219 struct page **pages; 221 220 unsigned long *bitmap; 222 221 unsigned long user_bufs; 223 - unsigned int bmsize; 224 222 int ret, i, nr_pages; 225 223 226 224 if (!wqueue) ··· 243 243 goto error; 244 244 } 245 245 246 - ret = pipe_resize_ring(pipe, nr_notes); 246 + nr_notes = nr_pages * WATCH_QUEUE_NOTES_PER_PAGE; 247 + ret = pipe_resize_ring(pipe, roundup_pow_of_two(nr_notes)); 247 248 if (ret < 0) 248 249 goto error; 249 250 ··· 259 258 pages[i]->index = i * WATCH_QUEUE_NOTES_PER_PAGE; 260 259 } 261 260 262 - bmsize = (nr_notes + BITS_PER_LONG - 1) / BITS_PER_LONG; 263 - bmsize *= sizeof(unsigned long); 264 - bitmap = kmalloc(bmsize, GFP_KERNEL); 261 + bitmap = bitmap_alloc(nr_notes, GFP_KERNEL); 265 262 if (!bitmap) 266 263 goto error_p; 267 264 268 - memset(bitmap, 0xff, bmsize); 265 + bitmap_fill(bitmap, nr_notes); 269 266 wqueue->notes = pages; 270 267 wqueue->notes_bitmap = bitmap; 271 268 wqueue->nr_pages = nr_pages; 272 - wqueue->nr_notes = nr_pages * WATCH_QUEUE_NOTES_PER_PAGE; 269 + wqueue->nr_notes = nr_notes; 273 270 return 0; 274 271 275 272 error_p: ··· 319 320 tf[i].info_mask & WATCH_INFO_LENGTH) 320 321 goto err_filter; 321 322 /* Ignore any unknown types */ 322 - if (tf[i].type >= sizeof(wfilter->type_filter) * 8) 323 + if (tf[i].type >= WATCH_TYPE__NR) 323 324 continue; 324 325 nr_filter++; 325 326 } ··· 335 336 336 337 q = wfilter->filters; 337 338 for (i = 0; i < filter.nr_filters; i++) { 338 - if (tf[i].type >= sizeof(wfilter->type_filter) * 
BITS_PER_LONG) 339 + if (tf[i].type >= WATCH_TYPE__NR) 339 340 continue; 340 341 341 342 q->type = tf[i].type; ··· 370 371 371 372 for (i = 0; i < wqueue->nr_pages; i++) 372 373 __free_page(wqueue->notes[i]); 374 + bitmap_free(wqueue->notes_bitmap); 373 375 374 376 wfilter = rcu_access_pointer(wqueue->filter); 375 377 if (wfilter) ··· 566 566 rcu_read_lock(); 567 567 spin_lock_bh(&wqueue->lock); 568 568 569 - /* Prevent new additions and prevent notifications from happening */ 569 + /* Prevent new notifications from being stored. */ 570 570 wqueue->defunct = true; 571 571 572 572 while (!hlist_empty(&wqueue->watches)) {
-1
lib/Kconfig
··· 45 45 config HAVE_ARCH_BITREVERSE 46 46 bool 47 47 default n 48 - depends on BITREVERSE 49 48 help 50 49 This option enables the use of hardware bit-reversal instructions on 51 50 architectures which support such operations.
+22 -41
mm/gup.c
··· 1729 1729 * @uaddr: start of address range 1730 1730 * @size: length of address range 1731 1731 * 1732 - * Faults in an address range using get_user_pages, i.e., without triggering 1733 - * hardware page faults. This is primarily useful when we already know that 1734 - * some or all of the pages in the address range aren't in memory. 1732 + * Faults in an address range for writing. This is primarily useful when we 1733 + * already know that some or all of the pages in the address range aren't in 1734 + * memory. 1735 1735 * 1736 - * Other than fault_in_writeable(), this function is non-destructive. 1736 + * Unlike fault_in_writeable(), this function is non-destructive. 1737 1737 * 1738 1738 * Note that we don't pin or otherwise hold the pages referenced that we fault 1739 1739 * in. There's no guarantee that they'll stay in memory for any duration of ··· 1744 1744 */ 1745 1745 size_t fault_in_safe_writeable(const char __user *uaddr, size_t size) 1746 1746 { 1747 - unsigned long start = (unsigned long)untagged_addr(uaddr); 1748 - unsigned long end, nstart, nend; 1747 + unsigned long start = (unsigned long)uaddr, end; 1749 1748 struct mm_struct *mm = current->mm; 1750 - struct vm_area_struct *vma = NULL; 1751 - int locked = 0; 1749 + bool unlocked = false; 1752 1750 1753 - nstart = start & PAGE_MASK; 1754 - end = PAGE_ALIGN(start + size); 1755 - if (end < nstart) 1756 - end = 0; 1757 - for (; nstart != end; nstart = nend) { 1758 - unsigned long nr_pages; 1759 - long ret; 1760 - 1761 - if (!locked) { 1762 - locked = 1; 1763 - mmap_read_lock(mm); 1764 - vma = find_vma(mm, nstart); 1765 - } else if (nstart >= vma->vm_end) 1766 - vma = vma->vm_next; 1767 - if (!vma || vma->vm_start >= end) 1768 - break; 1769 - nend = end ? 
min(end, vma->vm_end) : vma->vm_end; 1770 - if (vma->vm_flags & (VM_IO | VM_PFNMAP)) 1771 - continue; 1772 - if (nstart < vma->vm_start) 1773 - nstart = vma->vm_start; 1774 - nr_pages = (nend - nstart) / PAGE_SIZE; 1775 - ret = __get_user_pages_locked(mm, nstart, nr_pages, 1776 - NULL, NULL, &locked, 1777 - FOLL_TOUCH | FOLL_WRITE); 1778 - if (ret <= 0) 1779 - break; 1780 - nend = nstart + ret * PAGE_SIZE; 1781 - } 1782 - if (locked) 1783 - mmap_read_unlock(mm); 1784 - if (nstart == end) 1751 + if (unlikely(size == 0)) 1785 1752 return 0; 1786 - return size - min_t(size_t, nstart - start, size); 1753 + end = PAGE_ALIGN(start + size); 1754 + if (end < start) 1755 + end = 0; 1756 + 1757 + mmap_read_lock(mm); 1758 + do { 1759 + if (fixup_user_fault(mm, start, FAULT_FLAG_WRITE, &unlocked)) 1760 + break; 1761 + start = (start + PAGE_SIZE) & PAGE_MASK; 1762 + } while (start != end); 1763 + mmap_read_unlock(mm); 1764 + 1765 + if (size > (unsigned long)uaddr - start) 1766 + return size - ((unsigned long)uaddr - start); 1767 + return 0; 1787 1768 } 1788 1769 EXPORT_SYMBOL(fault_in_safe_writeable); 1789 1770
+34 -58
mm/madvise.c
··· 65 65 } 66 66 67 67 #ifdef CONFIG_ANON_VMA_NAME 68 - static struct anon_vma_name *anon_vma_name_alloc(const char *name) 68 + struct anon_vma_name *anon_vma_name_alloc(const char *name) 69 69 { 70 70 struct anon_vma_name *anon_name; 71 71 size_t count; ··· 81 81 return anon_name; 82 82 } 83 83 84 - static void vma_anon_name_free(struct kref *kref) 84 + void anon_vma_name_free(struct kref *kref) 85 85 { 86 86 struct anon_vma_name *anon_name = 87 87 container_of(kref, struct anon_vma_name, kref); 88 88 kfree(anon_name); 89 89 } 90 90 91 - static inline bool has_vma_anon_name(struct vm_area_struct *vma) 91 + struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma) 92 92 { 93 - return !vma->vm_file && vma->anon_name; 94 - } 95 - 96 - const char *vma_anon_name(struct vm_area_struct *vma) 97 - { 98 - if (!has_vma_anon_name(vma)) 99 - return NULL; 100 - 101 93 mmap_assert_locked(vma->vm_mm); 102 94 103 - return vma->anon_name->name; 104 - } 95 + if (vma->vm_file) 96 + return NULL; 105 97 106 - void dup_vma_anon_name(struct vm_area_struct *orig_vma, 107 - struct vm_area_struct *new_vma) 108 - { 109 - if (!has_vma_anon_name(orig_vma)) 110 - return; 111 - 112 - kref_get(&orig_vma->anon_name->kref); 113 - new_vma->anon_name = orig_vma->anon_name; 114 - } 115 - 116 - void free_vma_anon_name(struct vm_area_struct *vma) 117 - { 118 - struct anon_vma_name *anon_name; 119 - 120 - if (!has_vma_anon_name(vma)) 121 - return; 122 - 123 - anon_name = vma->anon_name; 124 - vma->anon_name = NULL; 125 - kref_put(&anon_name->kref, vma_anon_name_free); 98 + return vma->anon_name; 126 99 } 127 100 128 101 /* mmap_lock should be write-locked */ 129 - static int replace_vma_anon_name(struct vm_area_struct *vma, const char *name) 102 + static int replace_anon_vma_name(struct vm_area_struct *vma, 103 + struct anon_vma_name *anon_name) 130 104 { 131 - const char *anon_name; 105 + struct anon_vma_name *orig_name = anon_vma_name(vma); 132 106 133 - if (!name) { 134 - 
free_vma_anon_name(vma); 107 + if (!anon_name) { 108 + vma->anon_name = NULL; 109 + anon_vma_name_put(orig_name); 135 110 return 0; 136 111 } 137 112 138 - anon_name = vma_anon_name(vma); 139 - if (anon_name) { 140 - /* Same name, nothing to do here */ 141 - if (!strcmp(name, anon_name)) 142 - return 0; 113 + if (anon_vma_name_eq(orig_name, anon_name)) 114 + return 0; 143 115 144 - free_vma_anon_name(vma); 145 - } 146 - vma->anon_name = anon_vma_name_alloc(name); 147 - if (!vma->anon_name) 148 - return -ENOMEM; 116 + vma->anon_name = anon_vma_name_reuse(anon_name); 117 + anon_vma_name_put(orig_name); 149 118 150 119 return 0; 151 120 } 152 121 #else /* CONFIG_ANON_VMA_NAME */ 153 - static int replace_vma_anon_name(struct vm_area_struct *vma, const char *name) 122 + static int replace_anon_vma_name(struct vm_area_struct *vma, 123 + struct anon_vma_name *anon_name) 154 124 { 155 - if (name) 125 + if (anon_name) 156 126 return -EINVAL; 157 127 158 128 return 0; ··· 131 161 /* 132 162 * Update the vm_flags on region of a vma, splitting it or merging it as 133 163 * necessary. Must be called with mmap_sem held for writing; 164 + * Caller should ensure anon_name stability by raising its refcount even when 165 + * anon_name belongs to a valid vma because this function might free that vma. 
134 166 */ 135 167 static int madvise_update_vma(struct vm_area_struct *vma, 136 168 struct vm_area_struct **prev, unsigned long start, 137 169 unsigned long end, unsigned long new_flags, 138 - const char *name) 170 + struct anon_vma_name *anon_name) 139 171 { 140 172 struct mm_struct *mm = vma->vm_mm; 141 173 int error; 142 174 pgoff_t pgoff; 143 175 144 - if (new_flags == vma->vm_flags && is_same_vma_anon_name(vma, name)) { 176 + if (new_flags == vma->vm_flags && anon_vma_name_eq(anon_vma_name(vma), anon_name)) { 145 177 *prev = vma; 146 178 return 0; 147 179 } ··· 151 179 pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 152 180 *prev = vma_merge(mm, *prev, start, end, new_flags, vma->anon_vma, 153 181 vma->vm_file, pgoff, vma_policy(vma), 154 - vma->vm_userfaultfd_ctx, name); 182 + vma->vm_userfaultfd_ctx, anon_name); 155 183 if (*prev) { 156 184 vma = *prev; 157 185 goto success; ··· 181 209 */ 182 210 vma->vm_flags = new_flags; 183 211 if (!vma->vm_file) { 184 - error = replace_vma_anon_name(vma, name); 212 + error = replace_anon_vma_name(vma, anon_name); 185 213 if (error) 186 214 return error; 187 215 } ··· 947 975 unsigned long behavior) 948 976 { 949 977 int error; 978 + struct anon_vma_name *anon_name; 950 979 unsigned long new_flags = vma->vm_flags; 951 980 952 981 switch (behavior) { ··· 1013 1040 break; 1014 1041 } 1015 1042 1043 + anon_name = anon_vma_name(vma); 1044 + anon_vma_name_get(anon_name); 1016 1045 error = madvise_update_vma(vma, prev, start, end, new_flags, 1017 - vma_anon_name(vma)); 1046 + anon_name); 1047 + anon_vma_name_put(anon_name); 1018 1048 1019 1049 out: 1020 1050 /* ··· 1201 1225 static int madvise_vma_anon_name(struct vm_area_struct *vma, 1202 1226 struct vm_area_struct **prev, 1203 1227 unsigned long start, unsigned long end, 1204 - unsigned long name) 1228 + unsigned long anon_name) 1205 1229 { 1206 1230 int error; 1207 1231 ··· 1210 1234 return -EBADF; 1211 1235 1212 1236 error = madvise_update_vma(vma, prev, 
start, end, vma->vm_flags, 1213 - (const char *)name); 1237 + (struct anon_vma_name *)anon_name); 1214 1238 1215 1239 /* 1216 1240 * madvise() returns EAGAIN if kernel resources, such as ··· 1222 1246 } 1223 1247 1224 1248 int madvise_set_anon_name(struct mm_struct *mm, unsigned long start, 1225 - unsigned long len_in, const char *name) 1249 + unsigned long len_in, struct anon_vma_name *anon_name) 1226 1250 { 1227 1251 unsigned long end; 1228 1252 unsigned long len; ··· 1242 1266 if (end == start) 1243 1267 return 0; 1244 1268 1245 - return madvise_walk_vmas(mm, start, end, (unsigned long)name, 1269 + return madvise_walk_vmas(mm, start, end, (unsigned long)anon_name, 1246 1270 madvise_vma_anon_name); 1247 1271 } 1248 1272 #endif /* CONFIG_ANON_VMA_NAME */
+29 -13
mm/memfd.c
··· 31 31 static void memfd_tag_pins(struct xa_state *xas) 32 32 { 33 33 struct page *page; 34 - unsigned int tagged = 0; 34 + int latency = 0; 35 + int cache_count; 35 36 36 37 lru_add_drain(); 37 38 38 39 xas_lock_irq(xas); 39 40 xas_for_each(xas, page, ULONG_MAX) { 40 - if (xa_is_value(page)) 41 - continue; 42 - page = find_subpage(page, xas->xa_index); 43 - if (page_count(page) - page_mapcount(page) > 1) 44 - xas_set_mark(xas, MEMFD_TAG_PINNED); 41 + cache_count = 1; 42 + if (!xa_is_value(page) && 43 + PageTransHuge(page) && !PageHuge(page)) 44 + cache_count = HPAGE_PMD_NR; 45 45 46 - if (++tagged % XA_CHECK_SCHED) 46 + if (!xa_is_value(page) && 47 + page_count(page) - total_mapcount(page) != cache_count) 48 + xas_set_mark(xas, MEMFD_TAG_PINNED); 49 + if (cache_count != 1) 50 + xas_set(xas, page->index + cache_count); 51 + 52 + latency += cache_count; 53 + if (latency < XA_CHECK_SCHED) 47 54 continue; 55 + latency = 0; 48 56 49 57 xas_pause(xas); 50 58 xas_unlock_irq(xas); ··· 81 73 82 74 error = 0; 83 75 for (scan = 0; scan <= LAST_SCAN; scan++) { 84 - unsigned int tagged = 0; 76 + int latency = 0; 77 + int cache_count; 85 78 86 79 if (!xas_marked(&xas, MEMFD_TAG_PINNED)) 87 80 break; ··· 96 87 xas_lock_irq(&xas); 97 88 xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) { 98 89 bool clear = true; 99 - if (xa_is_value(page)) 100 - continue; 101 - page = find_subpage(page, xas.xa_index); 102 - if (page_count(page) - page_mapcount(page) != 1) { 90 + 91 + cache_count = 1; 92 + if (!xa_is_value(page) && 93 + PageTransHuge(page) && !PageHuge(page)) 94 + cache_count = HPAGE_PMD_NR; 95 + 96 + if (!xa_is_value(page) && cache_count != 97 + page_count(page) - total_mapcount(page)) { 103 98 /* 104 99 * On the last scan, we clean up all those tags 105 100 * we inserted; but make a note that we still ··· 116 103 } 117 104 if (clear) 118 105 xas_clear_mark(&xas, MEMFD_TAG_PINNED); 119 - if (++tagged % XA_CHECK_SCHED) 106 + 107 + latency += cache_count; 108 + if 
(latency < XA_CHECK_SCHED) 120 109 continue; 110 + latency = 0; 121 111 122 112 xas_pause(&xas); 123 113 xas_unlock_irq(&xas);
+1 -1
mm/mempolicy.c
··· 814 814 prev = vma_merge(mm, prev, vmstart, vmend, vma->vm_flags, 815 815 vma->anon_vma, vma->vm_file, pgoff, 816 816 new_pol, vma->vm_userfaultfd_ctx, 817 - vma_anon_name(vma)); 817 + anon_vma_name(vma)); 818 818 if (prev) { 819 819 vma = prev; 820 820 next = vma->vm_next;
+1 -1
mm/mlock.c
··· 512 512 pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 513 513 *prev = vma_merge(mm, *prev, start, end, newflags, vma->anon_vma, 514 514 vma->vm_file, pgoff, vma_policy(vma), 515 - vma->vm_userfaultfd_ctx, vma_anon_name(vma)); 515 + vma->vm_userfaultfd_ctx, anon_vma_name(vma)); 516 516 if (*prev) { 517 517 vma = *prev; 518 518 goto success;
+6 -6
mm/mmap.c
··· 1031 1031 static inline int is_mergeable_vma(struct vm_area_struct *vma, 1032 1032 struct file *file, unsigned long vm_flags, 1033 1033 struct vm_userfaultfd_ctx vm_userfaultfd_ctx, 1034 - const char *anon_name) 1034 + struct anon_vma_name *anon_name) 1035 1035 { 1036 1036 /* 1037 1037 * VM_SOFTDIRTY should not prevent from VMA merging, if we ··· 1049 1049 return 0; 1050 1050 if (!is_mergeable_vm_userfaultfd_ctx(vma, vm_userfaultfd_ctx)) 1051 1051 return 0; 1052 - if (!is_same_vma_anon_name(vma, anon_name)) 1052 + if (!anon_vma_name_eq(anon_vma_name(vma), anon_name)) 1053 1053 return 0; 1054 1054 return 1; 1055 1055 } ··· 1084 1084 struct anon_vma *anon_vma, struct file *file, 1085 1085 pgoff_t vm_pgoff, 1086 1086 struct vm_userfaultfd_ctx vm_userfaultfd_ctx, 1087 - const char *anon_name) 1087 + struct anon_vma_name *anon_name) 1088 1088 { 1089 1089 if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx, anon_name) && 1090 1090 is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) { ··· 1106 1106 struct anon_vma *anon_vma, struct file *file, 1107 1107 pgoff_t vm_pgoff, 1108 1108 struct vm_userfaultfd_ctx vm_userfaultfd_ctx, 1109 - const char *anon_name) 1109 + struct anon_vma_name *anon_name) 1110 1110 { 1111 1111 if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx, anon_name) && 1112 1112 is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) { ··· 1167 1167 struct anon_vma *anon_vma, struct file *file, 1168 1168 pgoff_t pgoff, struct mempolicy *policy, 1169 1169 struct vm_userfaultfd_ctx vm_userfaultfd_ctx, 1170 - const char *anon_name) 1170 + struct anon_vma_name *anon_name) 1171 1171 { 1172 1172 pgoff_t pglen = (end - addr) >> PAGE_SHIFT; 1173 1173 struct vm_area_struct *area, *next; ··· 3256 3256 return NULL; /* should never get here */ 3257 3257 new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags, 3258 3258 vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma), 3259 - vma->vm_userfaultfd_ctx, vma_anon_name(vma)); 3259 + 
vma->vm_userfaultfd_ctx, anon_vma_name(vma)); 3260 3260 if (new_vma) { 3261 3261 /* 3262 3262 * Source vma may have been merged into new_vma
+1 -1
mm/mprotect.c
··· 464 464 pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); 465 465 *pprev = vma_merge(mm, *pprev, start, end, newflags, 466 466 vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma), 467 - vma->vm_userfaultfd_ctx, vma_anon_name(vma)); 467 + vma->vm_userfaultfd_ctx, anon_vma_name(vma)); 468 468 if (*pprev) { 469 469 vma = *pprev; 470 470 VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
+3 -1
mm/util.c
··· 587 587 return ret; 588 588 589 589 /* Don't even allow crazy sizes */ 590 - if (WARN_ON_ONCE(size > INT_MAX)) 590 + if (unlikely(size > INT_MAX)) { 591 + WARN_ON_ONCE(!(flags & __GFP_NOWARN)); 591 592 return NULL; 593 + } 592 594 593 595 return __vmalloc_node(size, 1, flags, node, 594 596 __builtin_return_address(0));
+6 -8
net/9p/trans_xen.c
··· 281 281 ref = priv->rings[i].intf->ref[j]; 282 282 gnttab_end_foreign_access(ref, 0, 0); 283 283 } 284 - free_pages((unsigned long)priv->rings[i].data.in, 285 - priv->rings[i].intf->ring_order - 286 - (PAGE_SHIFT - XEN_PAGE_SHIFT)); 284 + free_pages_exact(priv->rings[i].data.in, 285 + 1UL << (priv->rings[i].intf->ring_order + 286 + XEN_PAGE_SHIFT)); 287 287 } 288 288 gnttab_end_foreign_access(priv->rings[i].ref, 0, 0); 289 289 free_page((unsigned long)priv->rings[i].intf); ··· 322 322 if (ret < 0) 323 323 goto out; 324 324 ring->ref = ret; 325 - bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 326 - order - (PAGE_SHIFT - XEN_PAGE_SHIFT)); 325 + bytes = alloc_pages_exact(1UL << (order + XEN_PAGE_SHIFT), 326 + GFP_KERNEL | __GFP_ZERO); 327 327 if (!bytes) { 328 328 ret = -ENOMEM; 329 329 goto out; ··· 354 354 if (bytes) { 355 355 for (i--; i >= 0; i--) 356 356 gnttab_end_foreign_access(ring->intf->ref[i], 0, 0); 357 - free_pages((unsigned long)bytes, 358 - ring->intf->ring_order - 359 - (PAGE_SHIFT - XEN_PAGE_SHIFT)); 357 + free_pages_exact(bytes, 1UL << (order + XEN_PAGE_SHIFT)); 360 358 } 361 359 gnttab_end_foreign_access(ring->ref, 0, 0); 362 360 free_page((unsigned long)ring->intf);
+7
net/ax25/af_ax25.c
··· 87 87 ax25_for_each(s, &ax25_list) { 88 88 if (s->ax25_dev == ax25_dev) { 89 89 sk = s->sk; 90 + if (!sk) { 91 + spin_unlock_bh(&ax25_list_lock); 92 + s->ax25_dev = NULL; 93 + ax25_disconnect(s, ENETUNREACH); 94 + spin_lock_bh(&ax25_list_lock); 95 + goto again; 96 + } 90 97 sock_hold(sk); 91 98 spin_unlock_bh(&ax25_list_lock); 92 99 lock_sock(sk);
+20 -9
net/batman-adv/hard-interface.c
··· 149 149 struct net *net = dev_net(net_dev); 150 150 struct net_device *parent_dev; 151 151 struct net *parent_net; 152 + int iflink; 152 153 bool ret; 153 154 154 155 /* check if this is a batman-adv mesh interface */ 155 156 if (batadv_softif_is_valid(net_dev)) 156 157 return true; 157 158 158 - /* no more parents..stop recursion */ 159 - if (dev_get_iflink(net_dev) == 0 || 160 - dev_get_iflink(net_dev) == net_dev->ifindex) 159 + iflink = dev_get_iflink(net_dev); 160 + if (iflink == 0) 161 161 return false; 162 162 163 163 parent_net = batadv_getlink_net(net_dev, net); 164 164 165 + /* iflink to itself, most likely physical device */ 166 + if (net == parent_net && iflink == net_dev->ifindex) 167 + return false; 168 + 165 169 /* recurse over the parent device */ 166 - parent_dev = __dev_get_by_index((struct net *)parent_net, 167 - dev_get_iflink(net_dev)); 170 + parent_dev = __dev_get_by_index((struct net *)parent_net, iflink); 168 171 /* if we got a NULL parent_dev there is something broken.. 
*/ 169 172 if (!parent_dev) { 170 173 pr_err("Cannot find parent device\n"); ··· 217 214 struct net_device *real_netdev = NULL; 218 215 struct net *real_net; 219 216 struct net *net; 220 - int ifindex; 217 + int iflink; 221 218 222 219 ASSERT_RTNL(); 223 220 224 221 if (!netdev) 225 222 return NULL; 226 223 227 - if (netdev->ifindex == dev_get_iflink(netdev)) { 224 + iflink = dev_get_iflink(netdev); 225 + if (iflink == 0) { 228 226 dev_hold(netdev); 229 227 return netdev; 230 228 } ··· 235 231 goto out; 236 232 237 233 net = dev_net(hard_iface->soft_iface); 238 - ifindex = dev_get_iflink(netdev); 239 234 real_net = batadv_getlink_net(netdev, net); 240 - real_netdev = dev_get_by_index(real_net, ifindex); 235 + 236 + /* iflink to itself, most likely physical device */ 237 + if (net == real_net && netdev->ifindex == iflink) { 238 + real_netdev = netdev; 239 + dev_hold(real_netdev); 240 + goto out; 241 + } 242 + 243 + real_netdev = dev_get_by_index(real_net, iflink); 241 244 242 245 out: 243 246 batadv_hardif_put(hard_iface);
+1
net/bluetooth/hci_core.c
··· 2738 2738 hci_dev_unlock(hdev); 2739 2739 2740 2740 ida_simple_remove(&hci_index_ida, hdev->id); 2741 + kfree_skb(hdev->sent_cmd); 2741 2742 kfree(hdev); 2742 2743 } 2743 2744 EXPORT_SYMBOL(hci_release_dev);
+48 -39
net/bluetooth/hci_sync.c
··· 276 276 static void hci_cmd_sync_work(struct work_struct *work) 277 277 { 278 278 struct hci_dev *hdev = container_of(work, struct hci_dev, cmd_sync_work); 279 - struct hci_cmd_sync_work_entry *entry; 280 - hci_cmd_sync_work_func_t func; 281 - hci_cmd_sync_work_destroy_t destroy; 282 - void *data; 283 279 284 280 bt_dev_dbg(hdev, ""); 285 281 286 - mutex_lock(&hdev->cmd_sync_work_lock); 287 - entry = list_first_entry(&hdev->cmd_sync_work_list, 288 - struct hci_cmd_sync_work_entry, list); 289 - if (entry) { 290 - list_del(&entry->list); 291 - func = entry->func; 292 - data = entry->data; 293 - destroy = entry->destroy; 282 + /* Dequeue all entries and run them */ 283 + while (1) { 284 + struct hci_cmd_sync_work_entry *entry; 285 + 286 + mutex_lock(&hdev->cmd_sync_work_lock); 287 + entry = list_first_entry_or_null(&hdev->cmd_sync_work_list, 288 + struct hci_cmd_sync_work_entry, 289 + list); 290 + if (entry) 291 + list_del(&entry->list); 292 + mutex_unlock(&hdev->cmd_sync_work_lock); 293 + 294 + if (!entry) 295 + break; 296 + 297 + bt_dev_dbg(hdev, "entry %p", entry); 298 + 299 + if (entry->func) { 300 + int err; 301 + 302 + hci_req_sync_lock(hdev); 303 + err = entry->func(hdev, entry->data); 304 + if (entry->destroy) 305 + entry->destroy(hdev, entry->data, err); 306 + hci_req_sync_unlock(hdev); 307 + } 308 + 294 309 kfree(entry); 295 - } else { 296 - func = NULL; 297 - data = NULL; 298 - destroy = NULL; 299 - } 300 - mutex_unlock(&hdev->cmd_sync_work_lock); 301 - 302 - if (func) { 303 - int err; 304 - 305 - hci_req_sync_lock(hdev); 306 - 307 - err = func(hdev, data); 308 - 309 - if (destroy) 310 - destroy(hdev, data, err); 311 - 312 - hci_req_sync_unlock(hdev); 313 310 } 314 311 } 315 312 ··· 1838 1841 struct bdaddr_list *b, *t; 1839 1842 u8 num_entries = 0; 1840 1843 bool pend_conn, pend_report; 1844 + u8 filter_policy; 1841 1845 int err; 1842 1846 1843 1847 /* Pause advertising if resolving list can be used as controllers are ··· 1925 1927 err = -EINVAL; 1926 
1928 1927 1929 done: 1930 + filter_policy = err ? 0x00 : 0x01; 1931 + 1928 1932 /* Enable address resolution when LL Privacy is enabled. */ 1929 1933 err = hci_le_set_addr_resolution_enable_sync(hdev, 0x01); 1930 1934 if (err) ··· 1937 1937 hci_resume_advertising_sync(hdev); 1938 1938 1939 1939 /* Select filter policy to use accept list */ 1940 - return err ? 0x00 : 0x01; 1940 + return filter_policy; 1941 1941 } 1942 1942 1943 1943 /* Returns true if an le connection is in the scanning state */ ··· 3262 3262 if (hdev->le_features[0] & HCI_LE_DATA_LEN_EXT) 3263 3263 events[0] |= 0x40; /* LE Data Length Change */ 3264 3264 3265 - /* If the controller supports LL Privacy feature, enable 3266 - * the corresponding event. 3265 + /* If the controller supports LL Privacy feature or LE Extended Adv, 3266 + * enable the corresponding event. 3267 3267 */ 3268 - if (hdev->le_features[0] & HCI_LE_LL_PRIVACY) 3268 + if (use_enhanced_conn_complete(hdev)) 3269 3269 events[1] |= 0x02; /* LE Enhanced Connection Complete */ 3270 3270 3271 3271 /* If the controller supports Extended Scanner Filter ··· 4106 4106 hci_inquiry_cache_flush(hdev); 4107 4107 hci_pend_le_actions_clear(hdev); 4108 4108 hci_conn_hash_flush(hdev); 4109 - hci_dev_unlock(hdev); 4110 - 4109 + /* Prevent data races on hdev->smp_data or hdev->smp_bredr_data */ 4111 4110 smp_unregister(hdev); 4111 + hci_dev_unlock(hdev); 4112 4112 4113 4113 hci_sock_dev_event(hdev, HCI_DEV_DOWN); 4114 4114 ··· 5185 5185 return __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_EXT_CREATE_CONN, 5186 5186 plen, data, 5187 5187 HCI_EV_LE_ENHANCED_CONN_COMPLETE, 5188 - HCI_CMD_TIMEOUT, NULL); 5188 + conn->conn_timeout, NULL); 5189 5189 } 5190 5190 5191 5191 int hci_le_create_conn_sync(struct hci_dev *hdev, struct hci_conn *conn) ··· 5270 5270 cp.min_ce_len = cpu_to_le16(0x0000); 5271 5271 cp.max_ce_len = cpu_to_le16(0x0000); 5272 5272 5273 + /* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E page 2261: 5274 + * 5275 + * If this event is 
unmasked and the HCI_LE_Connection_Complete event 5276 + * is unmasked, only the HCI_LE_Enhanced_Connection_Complete event is 5277 + * sent when a new connection has been created. 5278 + */ 5273 5279 err = __hci_cmd_sync_status_sk(hdev, HCI_OP_LE_CREATE_CONN, 5274 - sizeof(cp), &cp, HCI_EV_LE_CONN_COMPLETE, 5275 - HCI_CMD_TIMEOUT, NULL); 5280 + sizeof(cp), &cp, 5281 + use_enhanced_conn_complete(hdev) ? 5282 + HCI_EV_LE_ENHANCED_CONN_COMPLETE : 5283 + HCI_EV_LE_CONN_COMPLETE, 5284 + conn->conn_timeout, NULL); 5276 5285 5277 5286 done: 5278 5287 /* Re-enable advertising after the connection attempt is finished. */
+64 -37
net/bluetooth/mgmt.c
··· 1218 1218 static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err) 1219 1219 { 1220 1220 struct mgmt_pending_cmd *cmd = data; 1221 - struct mgmt_mode *cp = cmd->param; 1221 + struct mgmt_mode *cp; 1222 + 1223 + /* Make sure cmd still outstanding. */ 1224 + if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev)) 1225 + return; 1226 + 1227 + cp = cmd->param; 1222 1228 1223 1229 bt_dev_dbg(hdev, "err %d", err); 1224 1230 ··· 1248 1242 mgmt_status(err)); 1249 1243 } 1250 1244 1251 - mgmt_pending_free(cmd); 1245 + mgmt_pending_remove(cmd); 1252 1246 } 1253 1247 1254 1248 static int set_powered_sync(struct hci_dev *hdev, void *data) ··· 1287 1281 goto failed; 1288 1282 } 1289 1283 1290 - cmd = mgmt_pending_new(sk, MGMT_OP_SET_POWERED, hdev, data, len); 1284 + cmd = mgmt_pending_add(sk, MGMT_OP_SET_POWERED, hdev, data, len); 1291 1285 if (!cmd) { 1292 1286 err = -ENOMEM; 1293 1287 goto failed; ··· 1295 1289 1296 1290 err = hci_cmd_sync_queue(hdev, set_powered_sync, cmd, 1297 1291 mgmt_set_powered_complete); 1292 + 1293 + if (err < 0) 1294 + mgmt_pending_remove(cmd); 1298 1295 1299 1296 failed: 1300 1297 hci_dev_unlock(hdev); ··· 1392 1383 1393 1384 bt_dev_dbg(hdev, "err %d", err); 1394 1385 1386 + /* Make sure cmd still outstanding. 
*/ 1387 + if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev)) 1388 + return; 1389 + 1395 1390 hci_dev_lock(hdev); 1396 1391 1397 1392 if (err) { ··· 1415 1402 new_settings(hdev, cmd->sk); 1416 1403 1417 1404 done: 1418 - mgmt_pending_free(cmd); 1405 + mgmt_pending_remove(cmd); 1419 1406 hci_dev_unlock(hdev); 1420 1407 } 1421 1408 ··· 1524 1511 goto failed; 1525 1512 } 1526 1513 1527 - cmd = mgmt_pending_new(sk, MGMT_OP_SET_DISCOVERABLE, hdev, data, len); 1514 + cmd = mgmt_pending_add(sk, MGMT_OP_SET_DISCOVERABLE, hdev, data, len); 1528 1515 if (!cmd) { 1529 1516 err = -ENOMEM; 1530 1517 goto failed; ··· 1551 1538 err = hci_cmd_sync_queue(hdev, set_discoverable_sync, cmd, 1552 1539 mgmt_set_discoverable_complete); 1553 1540 1541 + if (err < 0) 1542 + mgmt_pending_remove(cmd); 1543 + 1554 1544 failed: 1555 1545 hci_dev_unlock(hdev); 1556 1546 return err; ··· 1565 1549 struct mgmt_pending_cmd *cmd = data; 1566 1550 1567 1551 bt_dev_dbg(hdev, "err %d", err); 1552 + 1553 + /* Make sure cmd still outstanding. 
*/ 1554 + if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev)) 1555 + return; 1568 1556 1569 1557 hci_dev_lock(hdev); 1570 1558 ··· 1582 1562 new_settings(hdev, cmd->sk); 1583 1563 1584 1564 done: 1585 - mgmt_pending_free(cmd); 1565 + if (cmd) 1566 + mgmt_pending_remove(cmd); 1567 + 1586 1568 hci_dev_unlock(hdev); 1587 1569 } 1588 1570 ··· 1656 1634 goto failed; 1657 1635 } 1658 1636 1659 - cmd = mgmt_pending_new(sk, MGMT_OP_SET_CONNECTABLE, hdev, data, len); 1637 + cmd = mgmt_pending_add(sk, MGMT_OP_SET_CONNECTABLE, hdev, data, len); 1660 1638 if (!cmd) { 1661 1639 err = -ENOMEM; 1662 1640 goto failed; ··· 1675 1653 1676 1654 err = hci_cmd_sync_queue(hdev, set_connectable_sync, cmd, 1677 1655 mgmt_set_connectable_complete); 1656 + 1657 + if (err < 0) 1658 + mgmt_pending_remove(cmd); 1678 1659 1679 1660 failed: 1680 1661 hci_dev_unlock(hdev); ··· 1798 1773 struct mgmt_mode *cp = cmd->param; 1799 1774 u8 enable = cp->val; 1800 1775 bool changed; 1776 + 1777 + /* Make sure cmd still outstanding. 
*/ 1778 + if (cmd != pending_find(MGMT_OP_SET_SSP, hdev)) 1779 + return; 1801 1780 1802 1781 if (err) { 1803 1782 u8 mgmt_err = mgmt_status(err); ··· 3350 3321 3351 3322 bt_dev_dbg(hdev, "err %d", err); 3352 3323 3324 + if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev)) 3325 + return; 3326 + 3353 3327 if (status) { 3354 3328 mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_LOCAL_NAME, 3355 3329 status); ··· 3524 3492 struct mgmt_pending_cmd *cmd = data; 3525 3493 struct sk_buff *skb = cmd->skb; 3526 3494 u8 status = mgmt_status(err); 3495 + 3496 + if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev)) 3497 + return; 3527 3498 3528 3499 if (!status) { 3529 3500 if (!skb) ··· 3793 3758 MGMT_STATUS_INVALID_PARAMS); 3794 3759 3795 3760 hci_dev_lock(hdev); 3796 - 3797 - if (pending_find(MGMT_OP_SET_WIDEBAND_SPEECH, hdev)) { 3798 - err = mgmt_cmd_status(sk, hdev->id, 3799 - MGMT_OP_SET_WIDEBAND_SPEECH, 3800 - MGMT_STATUS_BUSY); 3801 - goto unlock; 3802 - } 3803 3761 3804 3762 if (hdev_is_powered(hdev) && 3805 3763 !!cp->val != hci_dev_test_flag(hdev, ··· 4541 4513 } 4542 4514 } 4543 4515 4544 - done: 4545 4516 hci_dev_unlock(hdev); 4546 4517 4518 + done: 4547 4519 if (status == MGMT_STATUS_SUCCESS) 4548 4520 device_flags_changed(sk, hdev, &cp->addr.bdaddr, cp->addr.type, 4549 4521 supported_flags, current_flags); ··· 5064 5036 goto unlock; 5065 5037 } 5066 5038 5067 - if (pending_find(MGMT_OP_READ_LOCAL_OOB_DATA, hdev)) { 5068 - err = mgmt_cmd_status(sk, hdev->id, MGMT_OP_READ_LOCAL_OOB_DATA, 5069 - MGMT_STATUS_BUSY); 5070 - goto unlock; 5071 - } 5072 - 5073 5039 cmd = mgmt_pending_new(sk, MGMT_OP_READ_LOCAL_OOB_DATA, hdev, NULL, 0); 5074 5040 if (!cmd) 5075 5041 err = -ENOMEM; ··· 5283 5261 { 5284 5262 struct mgmt_pending_cmd *cmd = data; 5285 5263 5264 + if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) && 5265 + cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) && 5266 + cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev)) 5267 + return; 
5268 + 5286 5269 bt_dev_dbg(hdev, "err %d", err); 5287 5270 5288 5271 mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err), 5289 5272 cmd->param, 1); 5290 - mgmt_pending_free(cmd); 5273 + mgmt_pending_remove(cmd); 5291 5274 5292 5275 hci_discovery_set_state(hdev, err ? DISCOVERY_STOPPED: 5293 5276 DISCOVERY_FINDING); ··· 5354 5327 else 5355 5328 hdev->discovery.limited = false; 5356 5329 5357 - cmd = mgmt_pending_new(sk, op, hdev, data, len); 5330 + cmd = mgmt_pending_add(sk, op, hdev, data, len); 5358 5331 if (!cmd) { 5359 5332 err = -ENOMEM; 5360 5333 goto failed; ··· 5363 5336 err = hci_cmd_sync_queue(hdev, start_discovery_sync, cmd, 5364 5337 start_discovery_complete); 5365 5338 if (err < 0) { 5366 - mgmt_pending_free(cmd); 5339 + mgmt_pending_remove(cmd); 5367 5340 goto failed; 5368 5341 } 5369 5342 ··· 5457 5430 goto failed; 5458 5431 } 5459 5432 5460 - cmd = mgmt_pending_new(sk, MGMT_OP_START_SERVICE_DISCOVERY, 5433 + cmd = mgmt_pending_add(sk, MGMT_OP_START_SERVICE_DISCOVERY, 5461 5434 hdev, data, len); 5462 5435 if (!cmd) { 5463 5436 err = -ENOMEM; ··· 5490 5463 err = hci_cmd_sync_queue(hdev, start_discovery_sync, cmd, 5491 5464 start_discovery_complete); 5492 5465 if (err < 0) { 5493 - mgmt_pending_free(cmd); 5466 + mgmt_pending_remove(cmd); 5494 5467 goto failed; 5495 5468 } 5496 5469 ··· 5522 5495 { 5523 5496 struct mgmt_pending_cmd *cmd = data; 5524 5497 5498 + if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev)) 5499 + return; 5500 + 5525 5501 bt_dev_dbg(hdev, "err %d", err); 5526 5502 5527 5503 mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err), 5528 5504 cmd->param, 1); 5529 - mgmt_pending_free(cmd); 5505 + mgmt_pending_remove(cmd); 5530 5506 5531 5507 if (!err) 5532 5508 hci_discovery_set_state(hdev, DISCOVERY_STOPPED); ··· 5565 5535 goto unlock; 5566 5536 } 5567 5537 5568 - cmd = mgmt_pending_new(sk, MGMT_OP_STOP_DISCOVERY, hdev, data, len); 5538 + cmd = mgmt_pending_add(sk, MGMT_OP_STOP_DISCOVERY, hdev, data, 
len); 5569 5539 if (!cmd) { 5570 5540 err = -ENOMEM; 5571 5541 goto unlock; ··· 5574 5544 err = hci_cmd_sync_queue(hdev, stop_discovery_sync, cmd, 5575 5545 stop_discovery_complete); 5576 5546 if (err < 0) { 5577 - mgmt_pending_free(cmd); 5547 + mgmt_pending_remove(cmd); 5578 5548 goto unlock; 5579 5549 } 5580 5550 ··· 7504 7474 u8 status = mgmt_status(err); 7505 7475 u16 eir_len; 7506 7476 7477 + if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev)) 7478 + return; 7479 + 7507 7480 if (!status) { 7508 7481 if (!skb) 7509 7482 status = MGMT_STATUS_FAILED; ··· 8002 7969 8003 7970 static bool adv_busy(struct hci_dev *hdev) 8004 7971 { 8005 - return (pending_find(MGMT_OP_ADD_ADVERTISING, hdev) || 8006 - pending_find(MGMT_OP_REMOVE_ADVERTISING, hdev) || 8007 - pending_find(MGMT_OP_SET_LE, hdev) || 8008 - pending_find(MGMT_OP_ADD_EXT_ADV_PARAMS, hdev) || 8009 - pending_find(MGMT_OP_ADD_EXT_ADV_DATA, hdev)); 7972 + return pending_find(MGMT_OP_SET_LE, hdev); 8010 7973 } 8011 7974 8012 7975 static void add_adv_complete(struct hci_dev *hdev, struct sock *sk, u8 instance, ··· 8592 8563 goto unlock; 8593 8564 } 8594 8565 8595 - if (pending_find(MGMT_OP_ADD_ADVERTISING, hdev) || 8596 - pending_find(MGMT_OP_REMOVE_ADVERTISING, hdev) || 8597 - pending_find(MGMT_OP_SET_LE, hdev)) { 8566 + if (pending_find(MGMT_OP_SET_LE, hdev)) { 8598 8567 err = mgmt_cmd_status(sk, hdev->id, MGMT_OP_REMOVE_ADVERTISING, 8599 8568 MGMT_STATUS_BUSY); 8600 8569 goto unlock;
+2 -1
net/bluetooth/mgmt_util.c
··· 77 77 { 78 78 struct hci_dev *hdev; 79 79 struct mgmt_hdr *hdr; 80 - int len = skb->len; 80 + int len; 81 81 82 82 if (!skb) 83 83 return -EINVAL; 84 84 85 + len = skb->len; 85 86 hdev = bt_cb(skb)->mgmt.hdev; 86 87 87 88 /* Time stamp */
+25
net/core/gro.c
··· 93 93 EXPORT_SYMBOL(dev_remove_offload); 94 94 95 95 /** 96 + * skb_eth_gso_segment - segmentation handler for ethernet protocols. 97 + * @skb: buffer to segment 98 + * @features: features for the output path (see dev->features) 99 + * @type: Ethernet Protocol ID 100 + */ 101 + struct sk_buff *skb_eth_gso_segment(struct sk_buff *skb, 102 + netdev_features_t features, __be16 type) 103 + { 104 + struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT); 105 + struct packet_offload *ptype; 106 + 107 + rcu_read_lock(); 108 + list_for_each_entry_rcu(ptype, &offload_base, list) { 109 + if (ptype->type == type && ptype->callbacks.gso_segment) { 110 + segs = ptype->callbacks.gso_segment(skb, features); 111 + break; 112 + } 113 + } 114 + rcu_read_unlock(); 115 + 116 + return segs; 117 + } 118 + EXPORT_SYMBOL(skb_eth_gso_segment); 119 + 120 + /** 96 121 * skb_mac_gso_segment - mac layer segmentation handler. 97 122 * @skb: buffer to segment 98 123 * @features: features for the output path (see dev->features)
+1 -1
net/core/skbuff.c
··· 3876 3876 list_skb = list_skb->next; 3877 3877 3878 3878 err = 0; 3879 + delta_truesize += nskb->truesize; 3879 3880 if (skb_shared(nskb)) { 3880 3881 tmp = skb_clone(nskb, GFP_ATOMIC); 3881 3882 if (tmp) { ··· 3901 3900 tail = nskb; 3902 3901 3903 3902 delta_len += nskb->len; 3904 - delta_truesize += nskb->truesize; 3905 3903 3906 3904 skb_push(nskb, -skb_network_offset(nskb) + offset); 3907 3905
+1 -1
net/core/skmsg.c
··· 1153 1153 struct sk_psock *psock; 1154 1154 struct bpf_prog *prog; 1155 1155 int ret = __SK_DROP; 1156 - int len = skb->len; 1156 + int len = orig_len; 1157 1157 1158 1158 /* clone here so sk_eat_skb() in tcp_read_sock does not drop our data */ 1159 1159 skb = skb_clone(skb, GFP_ATOMIC);
+2 -1
net/core/xdp.c
··· 357 357 if (IS_ERR(xdp_alloc)) 358 358 return PTR_ERR(xdp_alloc); 359 359 360 - trace_mem_connect(xdp_alloc, xdp_rxq); 360 + if (trace_mem_connect_enabled() && xdp_alloc) 361 + trace_mem_connect(xdp_alloc, xdp_rxq); 361 362 return 0; 362 363 } 363 364
+44
net/dcb/dcbnl.c
··· 2073 2073 } 2074 2074 EXPORT_SYMBOL(dcb_ieee_getapp_default_prio_mask); 2075 2075 2076 + static void dcbnl_flush_dev(struct net_device *dev) 2077 + { 2078 + struct dcb_app_type *itr, *tmp; 2079 + 2080 + spin_lock_bh(&dcb_lock); 2081 + 2082 + list_for_each_entry_safe(itr, tmp, &dcb_app_list, list) { 2083 + if (itr->ifindex == dev->ifindex) { 2084 + list_del(&itr->list); 2085 + kfree(itr); 2086 + } 2087 + } 2088 + 2089 + spin_unlock_bh(&dcb_lock); 2090 + } 2091 + 2092 + static int dcbnl_netdevice_event(struct notifier_block *nb, 2093 + unsigned long event, void *ptr) 2094 + { 2095 + struct net_device *dev = netdev_notifier_info_to_dev(ptr); 2096 + 2097 + switch (event) { 2098 + case NETDEV_UNREGISTER: 2099 + if (!dev->dcbnl_ops) 2100 + return NOTIFY_DONE; 2101 + 2102 + dcbnl_flush_dev(dev); 2103 + 2104 + return NOTIFY_OK; 2105 + default: 2106 + return NOTIFY_DONE; 2107 + } 2108 + } 2109 + 2110 + static struct notifier_block dcbnl_nb __read_mostly = { 2111 + .notifier_call = dcbnl_netdevice_event, 2112 + }; 2113 + 2076 2114 static int __init dcbnl_init(void) 2077 2115 { 2116 + int err; 2117 + 2118 + err = register_netdevice_notifier(&dcbnl_nb); 2119 + if (err) 2120 + return err; 2121 + 2078 2122 rtnl_register(PF_UNSPEC, RTM_GETDCB, dcb_doit, NULL, 0); 2079 2123 rtnl_register(PF_UNSPEC, RTM_SETDCB, dcb_doit, NULL, 0); 2080 2124
+4 -4
net/dsa/dsa2.c
··· 1058 1058 static int dsa_tree_setup_master(struct dsa_switch_tree *dst) 1059 1059 { 1060 1060 struct dsa_port *dp; 1061 - int err; 1061 + int err = 0; 1062 1062 1063 1063 rtnl_lock(); 1064 1064 ··· 1066 1066 if (dsa_port_is_cpu(dp)) { 1067 1067 err = dsa_master_setup(dp->master, dp); 1068 1068 if (err) 1069 - return err; 1069 + break; 1070 1070 } 1071 1071 } 1072 1072 1073 1073 rtnl_unlock(); 1074 1074 1075 - return 0; 1075 + return err; 1076 1076 } 1077 1077 1078 1078 static void dsa_tree_teardown_master(struct dsa_switch_tree *dst) ··· 1261 1261 info.tag_ops = tag_ops; 1262 1262 err = dsa_tree_notify(dst, DSA_NOTIFIER_TAG_PROTO, &info); 1263 1263 if (err) 1264 - return err; 1264 + goto out_unwind_tagger; 1265 1265 1266 1266 err = dsa_tree_bind_tag_proto(dst, tag_ops); 1267 1267 if (err)
+6 -1
net/ipv4/esp4.c
··· 446 446 struct page *page; 447 447 struct sk_buff *trailer; 448 448 int tailen = esp->tailen; 449 + unsigned int allocsz; 449 450 450 451 /* this is non-NULL only with TCP/UDP Encapsulation */ 451 452 if (x->encap) { ··· 455 454 if (err < 0) 456 455 return err; 457 456 } 457 + 458 + allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES); 459 + if (allocsz > ESP_SKB_FRAG_MAXSIZE) 460 + goto cow; 458 461 459 462 if (!skb_cloned(skb)) { 460 463 if (tailen <= skb_tailroom(skb)) { ··· 676 671 struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb); 677 672 u32 padto; 678 673 679 - padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached)); 674 + padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached)); 680 675 if (skb->len < padto) 681 676 esp.tfclen = padto - skb->len; 682 677 }
+4 -2
net/ipv4/esp4_offload.c
··· 110 110 struct sk_buff *skb, 111 111 netdev_features_t features) 112 112 { 113 - __skb_push(skb, skb->mac_len); 114 - return skb_mac_gso_segment(skb, features); 113 + return skb_eth_gso_segment(skb, features, htons(ETH_P_IP)); 115 114 } 116 115 117 116 static struct sk_buff *xfrm4_transport_gso_segment(struct xfrm_state *x, ··· 158 159 if (proto == IPPROTO_TCP) 159 160 skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4; 160 161 } 162 + 163 + if (proto == IPPROTO_IPV6) 164 + skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP4; 161 165 162 166 __skb_pull(skb, skb_transport_offset(skb)); 163 167 ops = rcu_dereference(inet_offloads[proto]);
+6 -4
net/ipv4/tcp.c
··· 1684 1684 if (!copied) 1685 1685 copied = used; 1686 1686 break; 1687 - } else if (used <= len) { 1688 - seq += used; 1689 - copied += used; 1690 - offset += used; 1691 1687 } 1688 + if (WARN_ON_ONCE(used > len)) 1689 + used = len; 1690 + seq += used; 1691 + copied += used; 1692 + offset += used; 1693 + 1692 1694 /* If recv_actor drops the lock (e.g. TCP splice 1693 1695 * receive) the skb pointer might be invalid when 1694 1696 * getting here: tcp_collapse might have deleted it
+6 -2
net/ipv6/addrconf.c
··· 3732 3732 struct inet6_dev *idev; 3733 3733 struct inet6_ifaddr *ifa, *tmp; 3734 3734 bool keep_addr = false; 3735 + bool was_ready; 3735 3736 int state, i; 3736 3737 3737 3738 ASSERT_RTNL(); ··· 3798 3797 3799 3798 addrconf_del_rs_timer(idev); 3800 3799 3801 - /* Step 2: clear flags for stateless addrconf */ 3800 + /* Step 2: clear flags for stateless addrconf, repeated down 3801 + * detection 3802 + */ 3803 + was_ready = idev->if_flags & IF_READY; 3802 3804 if (!unregister) 3803 3805 idev->if_flags &= ~(IF_RS_SENT|IF_RA_RCVD|IF_READY); 3804 3806 ··· 3875 3871 if (unregister) { 3876 3872 ipv6_ac_destroy_dev(idev); 3877 3873 ipv6_mc_destroy_dev(idev); 3878 - } else { 3874 + } else if (was_ready) { 3879 3875 ipv6_mc_down(idev); 3880 3876 } 3881 3877
+6 -1
net/ipv6/esp6.c
··· 482 482 struct page *page; 483 483 struct sk_buff *trailer; 484 484 int tailen = esp->tailen; 485 + unsigned int allocsz; 485 486 486 487 if (x->encap) { 487 488 int err = esp6_output_encap(x, skb, esp); ··· 490 489 if (err < 0) 491 490 return err; 492 491 } 492 + 493 + allocsz = ALIGN(skb->data_len + tailen, L1_CACHE_BYTES); 494 + if (allocsz > ESP_SKB_FRAG_MAXSIZE) 495 + goto cow; 493 496 494 497 if (!skb_cloned(skb)) { 495 498 if (tailen <= skb_tailroom(skb)) { ··· 712 707 struct xfrm_dst *dst = (struct xfrm_dst *)skb_dst(skb); 713 708 u32 padto; 714 709 715 - padto = min(x->tfcpad, __xfrm_state_mtu(x, dst->child_mtu_cached)); 710 + padto = min(x->tfcpad, xfrm_state_mtu(x, dst->child_mtu_cached)); 716 711 if (skb->len < padto) 717 712 esp.tfclen = padto - skb->len; 718 713 }
+4 -2
net/ipv6/esp6_offload.c
··· 145 145 struct sk_buff *skb, 146 146 netdev_features_t features) 147 147 { 148 - __skb_push(skb, skb->mac_len); 149 - return skb_mac_gso_segment(skb, features); 148 + return skb_eth_gso_segment(skb, features, htons(ETH_P_IPV6)); 150 149 } 151 150 152 151 static struct sk_buff *xfrm6_transport_gso_segment(struct xfrm_state *x, ··· 197 198 skb->transport_header += 198 199 ipv6_skip_exthdr(skb, 0, &proto, &frag); 199 200 } 201 + 202 + if (proto == IPPROTO_IPIP) 203 + skb_shinfo(skb)->gso_type |= SKB_GSO_IPXIP6; 200 204 201 205 __skb_pull(skb, skb_transport_offset(skb)); 202 206 ops = rcu_dereference(inet6_offloads[proto]);
+7 -4
net/ipv6/ip6_output.c
··· 1408 1408 if (np->frag_size) 1409 1409 mtu = np->frag_size; 1410 1410 } 1411 - if (mtu < IPV6_MIN_MTU) 1412 - return -EINVAL; 1413 1411 cork->base.fragsize = mtu; 1414 1412 cork->base.gso_size = ipc6->gso_size; 1415 1413 cork->base.tx_flags = 0; ··· 1469 1471 1470 1472 fragheaderlen = sizeof(struct ipv6hdr) + rt->rt6i_nfheader_len + 1471 1473 (opt ? opt->opt_nflen : 0); 1472 - maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen - 1473 - sizeof(struct frag_hdr); 1474 1474 1475 1475 headersize = sizeof(struct ipv6hdr) + 1476 1476 (opt ? opt->opt_flen + opt->opt_nflen : 0) + 1477 1477 (dst_allfrag(&rt->dst) ? 1478 1478 sizeof(struct frag_hdr) : 0) + 1479 1479 rt->rt6i_nfheader_len; 1480 + 1481 + if (mtu < fragheaderlen || 1482 + ((mtu - fragheaderlen) & ~7) + fragheaderlen < sizeof(struct frag_hdr)) 1483 + goto emsgsize; 1484 + 1485 + maxfraglen = ((mtu - fragheaderlen) & ~7) + fragheaderlen - 1486 + sizeof(struct frag_hdr); 1480 1487 1481 1488 /* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit 1482 1489 * the first fragment
+12 -20
net/ipv6/mcast.c
··· 1371 1371 } 1372 1372 1373 1373 /* called with rcu_read_lock() */ 1374 - int igmp6_event_query(struct sk_buff *skb) 1374 + void igmp6_event_query(struct sk_buff *skb) 1375 1375 { 1376 1376 struct inet6_dev *idev = __in6_dev_get(skb->dev); 1377 1377 1378 - if (!idev) 1379 - return -EINVAL; 1380 - 1381 - if (idev->dead) { 1382 - kfree_skb(skb); 1383 - return -ENODEV; 1384 - } 1378 + if (!idev || idev->dead) 1379 + goto out; 1385 1380 1386 1381 spin_lock_bh(&idev->mc_query_lock); 1387 1382 if (skb_queue_len(&idev->mc_query_queue) < MLD_MAX_SKBS) { 1388 1383 __skb_queue_tail(&idev->mc_query_queue, skb); 1389 1384 if (!mod_delayed_work(mld_wq, &idev->mc_query_work, 0)) 1390 1385 in6_dev_hold(idev); 1386 + skb = NULL; 1391 1387 } 1392 1388 spin_unlock_bh(&idev->mc_query_lock); 1393 - 1394 - return 0; 1389 + out: 1390 + kfree_skb(skb); 1395 1391 } 1396 1392 1397 1393 static void __mld_query_work(struct sk_buff *skb) ··· 1538 1542 } 1539 1543 1540 1544 /* called with rcu_read_lock() */ 1541 - int igmp6_event_report(struct sk_buff *skb) 1545 + void igmp6_event_report(struct sk_buff *skb) 1542 1546 { 1543 1547 struct inet6_dev *idev = __in6_dev_get(skb->dev); 1544 1548 1545 - if (!idev) 1546 - return -EINVAL; 1547 - 1548 - if (idev->dead) { 1549 - kfree_skb(skb); 1550 - return -ENODEV; 1551 - } 1549 + if (!idev || idev->dead) 1550 + goto out; 1552 1551 1553 1552 spin_lock_bh(&idev->mc_report_lock); 1554 1553 if (skb_queue_len(&idev->mc_report_queue) < MLD_MAX_SKBS) { 1555 1554 __skb_queue_tail(&idev->mc_report_queue, skb); 1556 1555 if (!mod_delayed_work(mld_wq, &idev->mc_report_work, 0)) 1557 1556 in6_dev_hold(idev); 1557 + skb = NULL; 1558 1558 } 1559 1559 spin_unlock_bh(&idev->mc_report_lock); 1560 - 1561 - return 0; 1560 + out: 1561 + kfree_skb(skb); 1562 1562 } 1563 1563 1564 1564 static void __mld_report_work(struct sk_buff *skb)
+16
net/ipv6/xfrm6_output.c
··· 45 45 return xfrm_output(sk, skb); 46 46 } 47 47 48 + static int xfrm6_noneed_fragment(struct sk_buff *skb) 49 + { 50 + struct frag_hdr *fh; 51 + u8 prevhdr = ipv6_hdr(skb)->nexthdr; 52 + 53 + if (prevhdr != NEXTHDR_FRAGMENT) 54 + return 0; 55 + fh = (struct frag_hdr *)(skb->data + sizeof(struct ipv6hdr)); 56 + if (fh->nexthdr == NEXTHDR_ESP || fh->nexthdr == NEXTHDR_AUTH) 57 + return 1; 58 + return 0; 59 + } 60 + 48 61 static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb) 49 62 { 50 63 struct dst_entry *dst = skb_dst(skb); ··· 86 73 xfrm6_local_rxpmtu(skb, mtu); 87 74 kfree_skb(skb); 88 75 return -EMSGSIZE; 76 + } else if (toobig && xfrm6_noneed_fragment(skb)) { 77 + skb->ignore_df = 1; 78 + goto skip_frag; 89 79 } else if (!skb->ignore_df && toobig && skb->sk) { 90 80 xfrm_local_error(skb, mtu); 91 81 kfree_skb(skb);
+1 -1
net/key/af_key.c
··· 2623 2623 } 2624 2624 2625 2625 return xfrm_migrate(&sel, dir, XFRM_POLICY_TYPE_MAIN, m, i, 2626 - kma ? &k : NULL, net, NULL); 2626 + kma ? &k : NULL, net, NULL, 0); 2627 2627 2628 2628 out: 2629 2629 return err;
+9 -1
net/mac80211/agg-tx.c
··· 9 9 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 10 10 * Copyright 2007-2010, Intel Corporation 11 11 * Copyright(c) 2015-2017 Intel Deutschland GmbH 12 - * Copyright (C) 2018 - 2021 Intel Corporation 12 + * Copyright (C) 2018 - 2022 Intel Corporation 13 13 */ 14 14 15 15 #include <linux/ieee80211.h> ··· 622 622 if (test_sta_flag(sta, WLAN_STA_BLOCK_BA)) { 623 623 ht_dbg(sdata, 624 624 "BA sessions blocked - Denying BA session request %pM tid %d\n", 625 + sta->sta.addr, tid); 626 + return -EINVAL; 627 + } 628 + 629 + if (test_sta_flag(sta, WLAN_STA_MFP) && 630 + !test_sta_flag(sta, WLAN_STA_AUTHORIZED)) { 631 + ht_dbg(sdata, 632 + "MFP STA not authorized - deny BA session request %pM tid %d\n", 625 633 sta->sta.addr, tid); 626 634 return -EINVAL; 627 635 }
+1 -1
net/mac80211/ieee80211_i.h
··· 376 376 377 377 u8 key[WLAN_KEY_LEN_WEP104]; 378 378 u8 key_len, key_idx; 379 - bool done; 379 + bool done, waiting; 380 380 bool peer_confirmed; 381 381 bool timeout_started; 382 382
+12 -4
net/mac80211/mlme.c
··· 37 37 #define IEEE80211_AUTH_TIMEOUT_SAE (HZ * 2) 38 38 #define IEEE80211_AUTH_MAX_TRIES 3 39 39 #define IEEE80211_AUTH_WAIT_ASSOC (HZ * 5) 40 + #define IEEE80211_AUTH_WAIT_SAE_RETRY (HZ * 2) 40 41 #define IEEE80211_ASSOC_TIMEOUT (HZ / 5) 41 42 #define IEEE80211_ASSOC_TIMEOUT_LONG (HZ / 2) 42 43 #define IEEE80211_ASSOC_TIMEOUT_SHORT (HZ / 10) ··· 3012 3011 (status_code == WLAN_STATUS_ANTI_CLOG_REQUIRED || 3013 3012 (auth_transaction == 1 && 3014 3013 (status_code == WLAN_STATUS_SAE_HASH_TO_ELEMENT || 3015 - status_code == WLAN_STATUS_SAE_PK)))) 3014 + status_code == WLAN_STATUS_SAE_PK)))) { 3015 + /* waiting for userspace now */ 3016 + ifmgd->auth_data->waiting = true; 3017 + ifmgd->auth_data->timeout = 3018 + jiffies + IEEE80211_AUTH_WAIT_SAE_RETRY; 3019 + ifmgd->auth_data->timeout_started = true; 3020 + run_again(sdata, ifmgd->auth_data->timeout); 3016 3021 goto notify_driver; 3022 + } 3017 3023 3018 3024 sdata_info(sdata, "%pM denied authentication (status %d)\n", 3019 3025 mgmt->sa, status_code); ··· 4611 4603 4612 4604 if (ifmgd->auth_data && ifmgd->auth_data->timeout_started && 4613 4605 time_after(jiffies, ifmgd->auth_data->timeout)) { 4614 - if (ifmgd->auth_data->done) { 4606 + if (ifmgd->auth_data->done || ifmgd->auth_data->waiting) { 4615 4607 /* 4616 - * ok ... we waited for assoc but userspace didn't, 4617 - * so let's just kill the auth data 4608 + * ok ... we waited for assoc or continuation but 4609 + * userspace didn't do it, so kill the auth data 4618 4610 */ 4619 4611 ieee80211_destroy_auth_data(sdata, false); 4620 4612 } else if (ieee80211_auth(sdata)) {
+5 -9
net/mac80211/rx.c
··· 2607 2607 * address, so that the authenticator (e.g. hostapd) will see 2608 2608 * the frame, but bridge won't forward it anywhere else. Note 2609 2609 * that due to earlier filtering, the only other address can 2610 - * be the PAE group address. 2610 + * be the PAE group address, unless the hardware allowed them 2611 + * through in 802.3 offloaded mode. 2611 2612 */ 2612 2613 if (unlikely(skb->protocol == sdata->control_port_protocol && 2613 2614 !ether_addr_equal(ehdr->h_dest, sdata->vif.addr))) ··· 2923 2922 ether_addr_equal(sdata->vif.addr, hdr->addr3)) 2924 2923 return RX_CONTINUE; 2925 2924 2926 - ac = ieee80211_select_queue_80211(sdata, skb, hdr); 2925 + ac = ieee802_1d_to_ac[skb->priority]; 2927 2926 q = sdata->vif.hw_queue[ac]; 2928 2927 if (ieee80211_queue_stopped(&local->hw, q)) { 2929 2928 IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_congestion); 2930 2929 return RX_DROP_MONITOR; 2931 2930 } 2932 - skb_set_queue_mapping(skb, q); 2931 + skb_set_queue_mapping(skb, ac); 2933 2932 2934 2933 if (!--mesh_hdr->ttl) { 2935 2934 if (!is_multicast_ether_addr(hdr->addr1)) ··· 4515 4514 4516 4515 /* deliver to local stack */ 4517 4516 skb->protocol = eth_type_trans(skb, fast_rx->dev); 4518 - memset(skb->cb, 0, sizeof(skb->cb)); 4519 - if (rx->list) 4520 - list_add_tail(&skb->list, rx->list); 4521 - else 4522 - netif_receive_skb(skb); 4523 - 4517 + ieee80211_deliver_skb_to_local_stack(skb, rx); 4524 4518 } 4525 4519 4526 4520 static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,
+16 -2
net/mptcp/protocol.c
··· 466 466 static void mptcp_set_datafin_timeout(const struct sock *sk) 467 467 { 468 468 struct inet_connection_sock *icsk = inet_csk(sk); 469 + u32 retransmits; 469 470 470 - mptcp_sk(sk)->timer_ival = min(TCP_RTO_MAX, 471 - TCP_RTO_MIN << icsk->icsk_retransmits); 471 + retransmits = min_t(u32, icsk->icsk_retransmits, 472 + ilog2(TCP_RTO_MAX / TCP_RTO_MIN)); 473 + 474 + mptcp_sk(sk)->timer_ival = TCP_RTO_MIN << retransmits; 472 475 } 473 476 474 477 static void __mptcp_set_timeout(struct sock *sk, long tout) ··· 3297 3294 return 0; 3298 3295 3299 3296 delta = msk->write_seq - v; 3297 + if (__mptcp_check_fallback(msk) && msk->first) { 3298 + struct tcp_sock *tp = tcp_sk(msk->first); 3299 + 3300 + /* the first subflow is disconnected after close - see 3301 + * __mptcp_close_ssk(). tcp_disconnect() moves the write_seq 3302 + * so ignore that status, too. 3303 + */ 3304 + if (!((1 << msk->first->sk_state) & 3305 + (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE))) 3306 + delta += READ_ONCE(tp->write_seq) - tp->snd_una; 3307 + } 3300 3308 if (delta > INT_MAX) 3301 3309 delta = INT_MAX; 3302 3310
+3 -2
net/netfilter/core.c
··· 428 428 p = nf_entry_dereference(*pp); 429 429 new_hooks = nf_hook_entries_grow(p, reg); 430 430 431 - if (!IS_ERR(new_hooks)) 431 + if (!IS_ERR(new_hooks)) { 432 + hooks_validate(new_hooks); 432 433 rcu_assign_pointer(*pp, new_hooks); 434 + } 433 435 434 436 mutex_unlock(&nf_hook_mutex); 435 437 if (IS_ERR(new_hooks)) 436 438 return PTR_ERR(new_hooks); 437 439 438 - hooks_validate(new_hooks); 439 440 #ifdef CONFIG_NETFILTER_INGRESS 440 441 if (nf_ingress_hook(reg, pf)) 441 442 net_inc_ingress_queue();
+5 -1
net/netfilter/nf_flow_table_offload.c
··· 110 110 nf_flow_rule_lwt_match(match, tun_info); 111 111 } 112 112 113 - key->meta.ingress_ifindex = tuple->iifidx; 113 + if (tuple->xmit_type == FLOW_OFFLOAD_XMIT_TC) 114 + key->meta.ingress_ifindex = tuple->tc.iifidx; 115 + else 116 + key->meta.ingress_ifindex = tuple->iifidx; 117 + 114 118 mask->meta.ingress_ifindex = 0xffffffff; 115 119 116 120 if (tuple->encap_num > 0 && !(tuple->in_vlan_ingress & BIT(0)) &&
+31 -5
net/netfilter/nf_queue.c
··· 46 46 } 47 47 EXPORT_SYMBOL(nf_unregister_queue_handler); 48 48 49 + static void nf_queue_sock_put(struct sock *sk) 50 + { 51 + #ifdef CONFIG_INET 52 + sock_gen_put(sk); 53 + #else 54 + sock_put(sk); 55 + #endif 56 + } 57 + 49 58 static void nf_queue_entry_release_refs(struct nf_queue_entry *entry) 50 59 { 51 60 struct nf_hook_state *state = &entry->state; ··· 63 54 dev_put(state->in); 64 55 dev_put(state->out); 65 56 if (state->sk) 66 - sock_put(state->sk); 57 + nf_queue_sock_put(state->sk); 67 58 68 59 #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER) 69 60 dev_put(entry->physin); ··· 96 87 } 97 88 98 89 /* Bump dev refs so they don't vanish while packet is out */ 99 - void nf_queue_entry_get_refs(struct nf_queue_entry *entry) 90 + bool nf_queue_entry_get_refs(struct nf_queue_entry *entry) 100 91 { 101 92 struct nf_hook_state *state = &entry->state; 102 93 94 + if (state->sk && !refcount_inc_not_zero(&state->sk->sk_refcnt)) 95 + return false; 96 + 103 97 dev_hold(state->in); 104 98 dev_hold(state->out); 105 - if (state->sk) 106 - sock_hold(state->sk); 107 99 108 100 #if IS_ENABLED(CONFIG_BRIDGE_NETFILTER) 109 101 dev_hold(entry->physin); 110 102 dev_hold(entry->physout); 111 103 #endif 104 + return true; 112 105 } 113 106 EXPORT_SYMBOL_GPL(nf_queue_entry_get_refs); 114 107 ··· 180 169 break; 181 170 } 182 171 172 + if (skb_sk_is_prefetched(skb)) { 173 + struct sock *sk = skb->sk; 174 + 175 + if (!sk_is_refcounted(sk)) { 176 + if (!refcount_inc_not_zero(&sk->sk_refcnt)) 177 + return -ENOTCONN; 178 + 179 + /* drop refcount on skb_orphan */ 180 + skb->destructor = sock_edemux; 181 + } 182 + } 183 + 183 184 entry = kmalloc(sizeof(*entry) + route_key_size, GFP_ATOMIC); 184 185 if (!entry) 185 186 return -ENOMEM; ··· 210 187 211 188 __nf_queue_entry_init_physdevs(entry); 212 189 213 - nf_queue_entry_get_refs(entry); 190 + if (!nf_queue_entry_get_refs(entry)) { 191 + kfree(entry); 192 + return -ENOTCONN; 193 + } 214 194 215 195 switch (entry->state.pf) { 216 196 case AF_INET:
+2 -2
net/netfilter/nf_tables_api.c
··· 4502 4502 list_for_each_entry_safe(catchall, next, &set->catchall_list, list) { 4503 4503 list_del_rcu(&catchall->list); 4504 4504 nft_set_elem_destroy(set, catchall->elem, true); 4505 - kfree_rcu(catchall); 4505 + kfree_rcu(catchall, rcu); 4506 4506 } 4507 4507 } 4508 4508 ··· 5669 5669 list_for_each_entry_safe(catchall, next, &set->catchall_list, list) { 5670 5670 if (catchall->elem == elem->priv) { 5671 5671 list_del_rcu(&catchall->list); 5672 - kfree_rcu(catchall); 5672 + kfree_rcu(catchall, rcu); 5673 5673 break; 5674 5674 } 5675 5675 }
+9 -4
net/sched/act_ct.c
··· 361 361 } 362 362 } 363 363 364 + static void tcf_ct_flow_tc_ifidx(struct flow_offload *entry, 365 + struct nf_conn_act_ct_ext *act_ct_ext, u8 dir) 366 + { 367 + entry->tuplehash[dir].tuple.xmit_type = FLOW_OFFLOAD_XMIT_TC; 368 + entry->tuplehash[dir].tuple.tc.iifidx = act_ct_ext->ifindex[dir]; 369 + } 370 + 364 371 static void tcf_ct_flow_table_add(struct tcf_ct_flow_table *ct_ft, 365 372 struct nf_conn *ct, 366 373 bool tcp) ··· 392 385 393 386 act_ct_ext = nf_conn_act_ct_ext_find(ct); 394 387 if (act_ct_ext) { 395 - entry->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.iifidx = 396 - act_ct_ext->ifindex[IP_CT_DIR_ORIGINAL]; 397 - entry->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.iifidx = 398 - act_ct_ext->ifindex[IP_CT_DIR_REPLY]; 388 + tcf_ct_flow_tc_ifidx(entry, act_ct_ext, FLOW_OFFLOAD_DIR_ORIGINAL); 389 + tcf_ct_flow_tc_ifidx(entry, act_ct_ext, FLOW_OFFLOAD_DIR_REPLY); 399 390 } 400 391 401 392 err = flow_offload_add(&ct_ft->nf_ft, entry);
+3 -6
net/sctp/diag.c
··· 61 61 r->idiag_timer = SCTP_EVENT_TIMEOUT_T3_RTX; 62 62 r->idiag_retrans = asoc->rtx_data_chunks; 63 63 r->idiag_expires = jiffies_to_msecs(t3_rtx->expires - jiffies); 64 - } else { 65 - r->idiag_timer = 0; 66 - r->idiag_retrans = 0; 67 - r->idiag_expires = 0; 68 64 } 69 65 } 70 66 ··· 140 144 r = nlmsg_data(nlh); 141 145 BUG_ON(!sk_fullsock(sk)); 142 146 147 + r->idiag_timer = 0; 148 + r->idiag_retrans = 0; 149 + r->idiag_expires = 0; 143 150 if (asoc) { 144 151 inet_diag_msg_sctpasoc_fill(r, sk, asoc); 145 152 } else { 146 153 inet_diag_msg_common_fill(r, sk); 147 154 r->idiag_state = sk->sk_state; 148 - r->idiag_timer = 0; 149 - r->idiag_retrans = 0; 150 155 } 151 156 152 157 if (inet_diag_msg_attrs_fill(sk, skb, r, ext, user_ns, net_admin))
+11 -3
net/smc/af_smc.c
··· 183 183 { 184 184 struct sock *sk = sock->sk; 185 185 struct smc_sock *smc; 186 - int rc = 0; 186 + int old_state, rc = 0; 187 187 188 188 if (!sk) 189 189 goto out; ··· 191 191 sock_hold(sk); /* sock_put below */ 192 192 smc = smc_sk(sk); 193 193 194 + old_state = sk->sk_state; 195 + 194 196 /* cleanup for a dangling non-blocking connect */ 195 - if (smc->connect_nonblock && sk->sk_state == SMC_INIT) 197 + if (smc->connect_nonblock && old_state == SMC_INIT) 196 198 tcp_abort(smc->clcsock->sk, ECONNABORTED); 197 199 198 200 if (cancel_work_sync(&smc->connect_work)) ··· 207 205 lock_sock_nested(sk, SINGLE_DEPTH_NESTING); 208 206 else 209 207 lock_sock(sk); 208 + 209 + if (old_state == SMC_INIT && sk->sk_state == SMC_ACTIVE && 210 + !smc->use_fallback) 211 + smc_close_active_abort(smc); 210 212 211 213 rc = __smc_release(smc); 212 214 ··· 3087 3081 rc = tcp_register_ulp(&smc_ulp_ops); 3088 3082 if (rc) { 3089 3083 pr_err("%s: tcp_ulp_register fails with %d\n", __func__, rc); 3090 - goto out_sock; 3084 + goto out_ib; 3091 3085 } 3092 3086 3093 3087 static_branch_enable(&tcp_have_smc); 3094 3088 return 0; 3095 3089 3090 + out_ib: 3091 + smc_ib_unregister_client(); 3096 3092 out_sock: 3097 3093 sock_unregister(PF_SMC); 3098 3094 out_proto6:
+3 -2
net/smc/smc_core.c
··· 1161 1161 cancel_work_sync(&conn->abort_work); 1162 1162 } 1163 1163 if (!list_empty(&lgr->list)) { 1164 - smc_lgr_unregister_conn(conn); 1165 1164 smc_buf_unuse(conn, lgr); /* allow buffer reuse */ 1165 + smc_lgr_unregister_conn(conn); 1166 1166 } 1167 1167 1168 1168 if (!lgr->conns_num) ··· 1864 1864 (ini->smcd_version == SMC_V2 || 1865 1865 lgr->vlan_id == ini->vlan_id) && 1866 1866 (role == SMC_CLNT || ini->is_smcd || 1867 - lgr->conns_num < SMC_RMBS_PER_LGR_MAX)) { 1867 + (lgr->conns_num < SMC_RMBS_PER_LGR_MAX && 1868 + !bitmap_full(lgr->rtokens_used_mask, SMC_RMBS_PER_LGR_MAX)))) { 1868 1869 /* link group found */ 1869 1870 ini->first_contact_local = 0; 1870 1871 conn->lgr = lgr;
+7 -5
net/tipc/bearer.c
··· 352 352 goto rejected; 353 353 } 354 354 355 + /* Create monitoring data before accepting activate messages */ 356 + if (tipc_mon_create(net, bearer_id)) { 357 + bearer_disable(net, b); 358 + kfree_skb(skb); 359 + return -ENOMEM; 360 + } 361 + 355 362 test_and_set_bit_lock(0, &b->up); 356 363 rcu_assign_pointer(tn->bearer_list[bearer_id], b); 357 364 if (skb) 358 365 tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr); 359 - 360 - if (tipc_mon_create(net, bearer_id)) { 361 - bearer_disable(net, b); 362 - return -ENOMEM; 363 - } 364 366 365 367 pr_info("Enabled bearer <%s>, priority %u\n", name, prio); 366 368
+5 -4
net/tipc/link.c
··· 2286 2286 break; 2287 2287 2288 2288 case STATE_MSG: 2289 + /* Validate Gap ACK blocks, drop if invalid */ 2290 + glen = tipc_get_gap_ack_blks(&ga, l, hdr, true); 2291 + if (glen > dlen) 2292 + break; 2293 + 2289 2294 l->rcv_nxt_state = msg_seqno(hdr) + 1; 2290 2295 2291 2296 /* Update own tolerance if peer indicates a non-zero value */ ··· 2316 2311 break; 2317 2312 } 2318 2313 2319 - /* Receive Gap ACK blocks from peer if any */ 2320 - glen = tipc_get_gap_ack_blks(&ga, l, hdr, true); 2321 - if(glen > dlen) 2322 - break; 2323 2314 tipc_mon_rcv(l->net, data + glen, dlen - glen, l->addr, 2324 2315 &l->mon_state, l->bearer_id); 2325 2316
+1 -1
net/wireless/Makefile
··· 33 33 echo 'unsigned int shipped_regdb_certs_len = sizeof(shipped_regdb_certs);'; \ 34 34 ) > $@ 35 35 36 - $(obj)/extra-certs.c: $(CONFIG_CFG80211_EXTRA_REGDB_KEYDI) \ 36 + $(obj)/extra-certs.c: $(CONFIG_CFG80211_EXTRA_REGDB_KEYDIR) \ 37 37 $(wildcard $(CONFIG_CFG80211_EXTRA_REGDB_KEYDIR)/*.x509) 38 38 @$(kecho) " GEN $@" 39 39 $(Q)(set -e; \
+14 -1
net/wireless/nl80211.c
··· 13411 13411 i = 0; 13412 13412 nla_for_each_nested(attr, attr_filter, rem) { 13413 13413 filter[i].filter = nla_memdup(attr, GFP_KERNEL); 13414 + if (!filter[i].filter) 13415 + goto err; 13416 + 13414 13417 filter[i].len = nla_len(attr); 13415 13418 i++; 13416 13419 } ··· 13426 13423 } 13427 13424 13428 13425 return 0; 13426 + 13427 + err: 13428 + i = 0; 13429 + nla_for_each_nested(attr, attr_filter, rem) { 13430 + kfree(filter[i].filter); 13431 + i++; 13432 + } 13433 + kfree(filter); 13434 + return -ENOMEM; 13429 13435 } 13430 13436 13431 13437 static int nl80211_nan_add_func(struct sk_buff *skb, ··· 17828 17816 wdev->chandef = *chandef; 17829 17817 wdev->preset_chandef = *chandef; 17830 17818 17831 - if (wdev->iftype == NL80211_IFTYPE_STATION && 17819 + if ((wdev->iftype == NL80211_IFTYPE_STATION || 17820 + wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) && 17832 17821 !WARN_ON(!wdev->current_bss)) 17833 17822 cfg80211_update_assoc_bss_entry(wdev, chandef->chan); 17834 17823
+5 -1
net/xfrm/xfrm_device.c
··· 223 223 if (x->encap || x->tfcpad) 224 224 return -EINVAL; 225 225 226 + if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND)) 227 + return -EINVAL; 228 + 226 229 dev = dev_get_by_index(net, xuo->ifindex); 227 230 if (!dev) { 228 231 if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) { ··· 265 262 netdev_tracker_alloc(dev, &xso->dev_tracker, GFP_ATOMIC); 266 263 xso->real_dev = dev; 267 264 xso->num_exthdrs = 1; 268 - xso->flags = xuo->flags; 265 + /* Don't forward bit that is not implemented */ 266 + xso->flags = xuo->flags & ~XFRM_OFFLOAD_IPV6; 269 267 270 268 err = dev->xfrmdev_ops->xdo_dev_state_add(x); 271 269 if (err) {
+5 -2
net/xfrm/xfrm_interface.c
··· 304 304 if (mtu < IPV6_MIN_MTU) 305 305 mtu = IPV6_MIN_MTU; 306 306 307 - icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 307 + if (skb->len > 1280) 308 + icmpv6_ndo_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu); 309 + else 310 + goto xmit; 308 311 } else { 309 312 if (!(ip_hdr(skb)->frag_off & htons(IP_DF))) 310 313 goto xmit; ··· 676 673 struct net *net = xi->net; 677 674 struct xfrm_if_parms p = {}; 678 675 676 + xfrmi_netlink_parms(data, &p); 679 677 if (!p.if_id) { 680 678 NL_SET_ERR_MSG(extack, "if_id must be non zero"); 681 679 return -EINVAL; 682 680 } 683 681 684 - xfrmi_netlink_parms(data, &p); 685 682 xi = xfrmi_locate(net, &p); 686 683 if (!xi) { 687 684 xi = netdev_priv(dev);
+8 -6
net/xfrm/xfrm_policy.c
··· 4256 4256 } 4257 4257 4258 4258 static struct xfrm_policy *xfrm_migrate_policy_find(const struct xfrm_selector *sel, 4259 - u8 dir, u8 type, struct net *net) 4259 + u8 dir, u8 type, struct net *net, u32 if_id) 4260 4260 { 4261 4261 struct xfrm_policy *pol, *ret = NULL; 4262 4262 struct hlist_head *chain; ··· 4265 4265 spin_lock_bh(&net->xfrm.xfrm_policy_lock); 4266 4266 chain = policy_hash_direct(net, &sel->daddr, &sel->saddr, sel->family, dir); 4267 4267 hlist_for_each_entry(pol, chain, bydst) { 4268 - if (xfrm_migrate_selector_match(sel, &pol->selector) && 4268 + if ((if_id == 0 || pol->if_id == if_id) && 4269 + xfrm_migrate_selector_match(sel, &pol->selector) && 4269 4270 pol->type == type) { 4270 4271 ret = pol; 4271 4272 priority = ret->priority; ··· 4278 4277 if ((pol->priority >= priority) && ret) 4279 4278 break; 4280 4279 4281 - if (xfrm_migrate_selector_match(sel, &pol->selector) && 4280 + if ((if_id == 0 || pol->if_id == if_id) && 4281 + xfrm_migrate_selector_match(sel, &pol->selector) && 4282 4282 pol->type == type) { 4283 4283 ret = pol; 4284 4284 break; ··· 4395 4393 int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type, 4396 4394 struct xfrm_migrate *m, int num_migrate, 4397 4395 struct xfrm_kmaddress *k, struct net *net, 4398 - struct xfrm_encap_tmpl *encap) 4396 + struct xfrm_encap_tmpl *encap, u32 if_id) 4399 4397 { 4400 4398 int i, err, nx_cur = 0, nx_new = 0; 4401 4399 struct xfrm_policy *pol = NULL; ··· 4414 4412 } 4415 4413 4416 4414 /* Stage 1 - find policy */ 4417 - if ((pol = xfrm_migrate_policy_find(sel, dir, type, net)) == NULL) { 4415 + if ((pol = xfrm_migrate_policy_find(sel, dir, type, net, if_id)) == NULL) { 4418 4416 err = -ENOENT; 4419 4417 goto out; 4420 4418 } 4421 4419 4422 4420 /* Stage 2 - find and update state(s) */ 4423 4421 for (i = 0, mp = m; i < num_migrate; i++, mp++) { 4424 - if ((x = xfrm_migrate_state_find(mp, net))) { 4422 + if ((x = xfrm_migrate_state_find(mp, net, if_id))) { 4425 4423 x_cur[nx_cur] = x; 4426 4424 nx_cur++; 4427 4425 xc = xfrm_state_migrate(x, mp, encap);
+13 -16
net/xfrm/xfrm_state.c
··· 1579 1579 memcpy(&x->mark, &orig->mark, sizeof(x->mark)); 1580 1580 memcpy(&x->props.smark, &orig->props.smark, sizeof(x->props.smark)); 1581 1581 1582 - if (xfrm_init_state(x) < 0) 1583 - goto error; 1584 - 1585 1582 x->props.flags = orig->props.flags; 1586 1583 x->props.extra_flags = orig->props.extra_flags; 1587 1584 ··· 1603 1606 return NULL; 1604 1607 } 1605 1608 1606 - struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net) 1609 + struct xfrm_state *xfrm_migrate_state_find(struct xfrm_migrate *m, struct net *net, 1610 + u32 if_id) 1607 1611 { 1608 1612 unsigned int h; 1609 1613 struct xfrm_state *x = NULL; ··· 1620 1622 continue; 1621 1623 if (m->reqid && x->props.reqid != m->reqid) 1622 1624 continue; 1625 + if (if_id != 0 && x->if_id != if_id) 1626 + continue; 1623 1627 if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr, 1624 1628 m->old_family) || 1625 1629 !xfrm_addr_equal(&x->props.saddr, &m->old_saddr, ··· 1636 1636 hlist_for_each_entry(x, net->xfrm.state_bysrc+h, bysrc) { 1637 1637 if (x->props.mode != m->mode || 1638 1638 x->id.proto != m->proto) 1639 + continue; 1640 + if (if_id != 0 && x->if_id != if_id) 1639 1641 continue; 1640 1642 if (!xfrm_addr_equal(&x->id.daddr, &m->old_daddr, 1641 1643 m->old_family) || ··· 1664 1662 xc = xfrm_state_clone(x, encap); 1665 1663 if (!xc) 1666 1664 return NULL; 1665 + 1666 + xc->props.family = m->new_family; 1667 + 1668 + if (xfrm_init_state(xc) < 0) 1669 + goto error; 1667 1670 1668 1671 memcpy(&xc->id.daddr, &m->new_daddr, sizeof(xc->id.daddr)); 1669 1672 memcpy(&xc->props.saddr, &m->new_saddr, sizeof(xc->props.saddr)); ··· 2579 2572 } 2580 2573 EXPORT_SYMBOL(xfrm_state_delete_tunnel); 2581 2574 2582 - u32 __xfrm_state_mtu(struct xfrm_state *x, int mtu) 2575 + u32 xfrm_state_mtu(struct xfrm_state *x, int mtu) 2583 2576 { 2584 2577 const struct xfrm_type *type = READ_ONCE(x->type); 2585 2578 struct crypto_aead *aead; ··· 2610 2603 return ((mtu - x->props.header_len - crypto_aead_authsize(aead) - 2611 2604 net_adj) & ~(blksize - 1)) + net_adj - 2; 2612 2605 } 2613 - EXPORT_SYMBOL_GPL(__xfrm_state_mtu); 2614 - 2615 - u32 xfrm_state_mtu(struct xfrm_state *x, int mtu) 2616 - { 2617 - mtu = __xfrm_state_mtu(x, mtu); 2618 - 2619 - if (x->props.family == AF_INET6 && mtu < IPV6_MIN_MTU) 2620 - return IPV6_MIN_MTU; 2621 - 2622 - return mtu; 2623 - } 2606 + EXPORT_SYMBOL_GPL(xfrm_state_mtu); 2624 2607 2625 2608 int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload) 2626 2609 {
+8 -19
net/xfrm/xfrm_user.c
··· 630 630 631 631 xfrm_smark_init(attrs, &x->props.smark); 632 632 633 - if (attrs[XFRMA_IF_ID]) { 633 + if (attrs[XFRMA_IF_ID]) 634 634 x->if_id = nla_get_u32(attrs[XFRMA_IF_ID]); 635 - if (!x->if_id) { 636 - err = -EINVAL; 637 - goto error; 638 - } 639 - } 640 635 641 636 err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV]); 642 637 if (err) ··· 1427 1432 1428 1433 mark = xfrm_mark_get(attrs, &m); 1429 1434 1430 - if (attrs[XFRMA_IF_ID]) { 1435 + if (attrs[XFRMA_IF_ID]) 1431 1436 if_id = nla_get_u32(attrs[XFRMA_IF_ID]); 1432 - if (!if_id) { 1433 - err = -EINVAL; 1434 - goto out_noput; 1435 - } 1436 - } 1437 1437 1438 1438 if (p->info.seq) { 1439 1439 x = xfrm_find_acq_byseq(net, mark, p->info.seq); ··· 1741 1751 1742 1752 xfrm_mark_get(attrs, &xp->mark); 1743 1753 1744 - if (attrs[XFRMA_IF_ID]) { 1754 + if (attrs[XFRMA_IF_ID]) 1745 1755 xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]); 1746 - if (!xp->if_id) { 1747 - err = -EINVAL; 1748 - goto error; 1749 - } 1750 - } 1751 1756 1752 1757 return xp; 1753 1758 error: ··· 2593 2608 int n = 0; 2594 2609 struct net *net = sock_net(skb->sk); 2595 2610 struct xfrm_encap_tmpl *encap = NULL; 2611 + u32 if_id = 0; 2596 2612 2597 2613 if (attrs[XFRMA_MIGRATE] == NULL) 2598 2614 return -EINVAL; ··· 2618 2632 return -ENOMEM; 2619 2633 } 2620 2634 2621 - err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap); 2635 + if (attrs[XFRMA_IF_ID]) 2636 + if_id = nla_get_u32(attrs[XFRMA_IF_ID]); 2637 + 2638 + err = xfrm_migrate(&pi->sel, pi->dir, type, m, n, kmp, net, encap, if_id); 2622 2639 2623 2640 kfree(encap); 2624 2641
+1 -2
sound/soc/codecs/cs4265.c
··· 150 150 SOC_SINGLE("E to F Buffer Disable Switch", CS4265_SPDIF_CTL1, 151 151 6, 1, 0), 152 152 SOC_ENUM("C Data Access", cam_mode_enum), 153 - SOC_SINGLE("SPDIF Switch", CS4265_SPDIF_CTL2, 5, 1, 1), 154 153 SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2, 155 154 3, 1, 0), 156 155 SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum), ··· 185 186 186 187 SND_SOC_DAPM_SWITCH("Loopback", SND_SOC_NOPM, 0, 0, 187 188 &loopback_ctl), 188 - SND_SOC_DAPM_SWITCH("SPDIF", SND_SOC_NOPM, 0, 0, 189 + SND_SOC_DAPM_SWITCH("SPDIF", CS4265_SPDIF_CTL2, 5, 1, 189 190 &spdif_switch), 190 191 SND_SOC_DAPM_SWITCH("DAC", CS4265_PWRCTL, 1, 1, 191 192 &dac_switch),
+2 -2
sound/soc/soc-ops.c
··· 319 319 if (ucontrol->value.integer.value[0] < 0) 320 320 return -EINVAL; 321 321 val = ucontrol->value.integer.value[0]; 322 - if (mc->platform_max && val > mc->platform_max) 322 + if (mc->platform_max && ((int)val + min) > mc->platform_max) 323 323 return -EINVAL; 324 324 if (val > max - min) 325 325 return -EINVAL; ··· 332 332 if (ucontrol->value.integer.value[1] < 0) 333 333 return -EINVAL; 334 334 val2 = ucontrol->value.integer.value[1]; 335 - if (mc->platform_max && val2 > mc->platform_max) 335 + if (mc->platform_max && ((int)val2 + min) > mc->platform_max) 336 336 return -EINVAL; 337 337 if (val2 > max - min) 338 338 return -EINVAL;
+1 -1
sound/x86/intel_hdmi_audio.c
··· 1261 1261 { 1262 1262 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 1263 1263 return remap_pfn_range(vma, vma->vm_start, 1264 - substream->dma_buffer.addr >> PAGE_SHIFT, 1264 + substream->runtime->dma_addr >> PAGE_SHIFT, 1265 1265 vma->vm_end - vma->vm_start, vma->vm_page_prot); 1266 1266 } 1267 1267
+5
tools/arch/arm64/include/uapi/asm/kvm.h
··· 281 281 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED 3 282 282 #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED (1U << 4) 283 283 284 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3 KVM_REG_ARM_FW_REG(3) 285 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL 0 286 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL 1 287 + #define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED 2 288 + 284 289 /* SVE registers */ 285 290 #define KVM_REG_ARM64_SVE (0x15 << KVM_REG_ARM_COPROC_SHIFT) 286 291
+1 -1
tools/arch/x86/include/asm/cpufeatures.h
··· 204 204 /* FREE! ( 7*32+10) */ 205 205 #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ 206 206 #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ 207 - #define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */ 207 + #define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */ 208 208 #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */ 209 209 #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */ 210 210 #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
+1 -1
tools/perf/bench/epoll-ctl.c
··· 106 106 printinfo("Nesting level(s): %d\n", nested); 107 107 108 108 epollfdp = calloc(nested, sizeof(int)); 109 - if (!epollfd) 109 + if (!epollfdp) 110 110 err(EXIT_FAILURE, "calloc"); 111 111 112 112 for (i = 0; i < nested; i++) {
+5 -3
tools/perf/util/parse-events.c
··· 1648 1648 { 1649 1649 struct parse_events_term *term; 1650 1650 struct list_head *list = NULL; 1651 + struct list_head *orig_head = NULL; 1651 1652 struct perf_pmu *pmu = NULL; 1652 1653 int ok = 0; 1653 1654 char *config; ··· 1675 1674 } 1676 1675 list_add_tail(&term->list, head); 1677 1676 1678 - 1679 1677 /* Add it for all PMUs that support the alias */ 1680 1678 list = malloc(sizeof(struct list_head)); 1681 1679 if (!list) ··· 1687 1687 1688 1688 list_for_each_entry(alias, &pmu->aliases, list) { 1689 1689 if (!strcasecmp(alias->name, str)) { 1690 + parse_events_copy_term_list(head, &orig_head); 1690 1691 if (!parse_events_add_pmu(parse_state, list, 1691 - pmu->name, head, 1692 + pmu->name, orig_head, 1692 1693 true, true)) { 1693 1694 pr_debug("%s -> %s/%s/\n", str, 1694 1695 pmu->name, alias->str); 1695 1696 ok++; 1696 1697 } 1698 + parse_events_terms__delete(orig_head); 1697 1699 } 1698 1700 } 1699 1701 } ··· 2195 2193 for (i = 0; i < ARRAY_SIZE(symbols); i++, tmp++) { 2196 2194 tmp->type = symbols[i].type; 2197 2195 tmp->symbol = strdup(symbols[i].symbol); 2198 - if (!list->symbol) 2196 + if (!tmp->symbol) 2199 2197 goto err_free; 2200 2198 } 2201 2199
+1 -1
tools/testing/selftests/drivers/net/mlxsw/spectrum/resource_scale.sh
··· 50 50 else 51 51 log_test "'$current_test' [$profile] overflow $target" 52 52 fi 53 + RET_FIN=$(( RET_FIN || RET )) 53 54 done 54 - RET_FIN=$(( RET_FIN || RET )) 55 55 done 56 56 done 57 57 current_test=""
+2 -1
tools/testing/selftests/drivers/net/mlxsw/tc_police_scale.sh
··· 60 60 61 61 tc_police_rules_create $count $should_fail 62 62 63 - offload_count=$(tc filter show dev $swp1 ingress | grep in_hw | wc -l) 63 + offload_count=$(tc -j filter show dev $swp1 ingress | 64 + jq "[.[] | select(.options.in_hw == true)] | length") 64 65 ((offload_count == count)) 65 66 check_err_fail $should_fail $? "tc police offload count" 66 67 }
+6 -1
tools/testing/selftests/kvm/aarch64/arch_timer.c
··· 366 366 { 367 367 struct kvm_vm *vm; 368 368 unsigned int i; 369 + int ret; 369 370 int nr_vcpus = test_args.nr_vcpus; 370 371 371 372 vm = vm_create_default_with_vcpus(nr_vcpus, 0, 0, guest_code, NULL); ··· 383 382 384 383 ucall_init(vm, NULL); 385 384 test_init_timer_irq(vm); 386 - vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA); 385 + ret = vgic_v3_setup(vm, nr_vcpus, 64, GICD_BASE_GPA, GICR_BASE_GPA); 386 + if (ret < 0) { 387 + print_skip("Failed to create vgic-v3"); 388 + exit(KSFT_SKIP); 389 + } 387 390 388 391 /* Make all the test's cmdline args visible to the guest */ 389 392 sync_global_to_guest(vm, test_args);
+4
tools/testing/selftests/kvm/aarch64/vgic_irq.c
··· 761 761 762 762 gic_fd = vgic_v3_setup(vm, 1, nr_irqs, 763 763 GICD_BASE_GPA, GICR_BASE_GPA); 764 + if (gic_fd < 0) { 765 + print_skip("Failed to create vgic-v3, skipping"); 766 + exit(KSFT_SKIP); 767 + } 764 768 765 769 vm_install_exception_handler(vm, VECTOR_IRQ_CURRENT, 766 770 guest_irq_handlers[args.eoi_split][args.level_sensitive]);
+3 -1
tools/testing/selftests/kvm/lib/aarch64/vgic.c
··· 52 52 nr_vcpus, nr_vcpus_created); 53 53 54 54 /* Distributor setup */ 55 - gic_fd = kvm_create_device(vm, KVM_DEV_TYPE_ARM_VGIC_V3, false); 55 + if (_kvm_create_device(vm, KVM_DEV_TYPE_ARM_VGIC_V3, 56 + false, &gic_fd) != 0) 57 + return -1; 56 58 57 59 kvm_device_access(gic_fd, KVM_DEV_ARM_VGIC_GRP_NR_IRQS, 58 60 0, &nr_irqs, true);
+2 -2
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 763 763 run_tests_lo "$ns1" "$ns1" dead:beef:1::1 1 "-I 3 -i $old_cin" 764 764 765 765 # restore previous status 766 - cout=$old_cout 767 - cout_disconnect="$cout".disconnect 766 + sin=$old_sin 767 + sin_disconnect="$cout".disconnect 768 768 cin=$old_cin 769 769 cin_disconnect="$cin".disconnect 770 770 connect_per_transfer=1
+17 -4
tools/testing/selftests/net/pmtu.sh
··· 374 374 return $rc 375 375 } 376 376 377 + run_cmd_bg() { 378 + cmd="$*" 379 + 380 + if [ "$VERBOSE" = "1" ]; then 381 + printf " COMMAND: %s &\n" "${cmd}" 382 + fi 383 + 384 + $cmd 2>&1 & 385 + } 386 + 377 387 # Find the auto-generated name for this namespace 378 388 nsname() { 379 389 eval echo \$NS_$1 ··· 680 670 [ ${1} -eq 6 ] && proto="-6" || proto="" 681 671 port=${2} 682 672 683 - run_cmd ${ns_a} nettest ${proto} -q -D -s -x -p ${port} -t 5 & 673 + run_cmd_bg "${ns_a}" nettest "${proto}" -q -D -s -x -p "${port}" -t 5 684 674 nettest_pids="${nettest_pids} $!" 685 675 686 - run_cmd ${ns_b} nettest ${proto} -q -D -s -x -p ${port} -t 5 & 676 + run_cmd_bg "${ns_b}" nettest "${proto}" -q -D -s -x -p "${port}" -t 5 687 677 nettest_pids="${nettest_pids} $!" 688 678 } 689 679 ··· 875 865 setup() { 876 866 [ "$(id -u)" -ne 0 ] && echo " need to run as root" && return $ksft_skip 877 867 878 - cleanup 879 868 for arg do 880 869 eval setup_${arg} || { echo " ${arg} not supported"; return 1; } 881 870 done ··· 885 876 886 877 for arg do 887 878 [ "${ns_cmd}" = "" ] && ns_cmd="${arg}" && continue 888 - ${ns_cmd} tcpdump -s 0 -i "${arg}" -w "${name}_${arg}.pcap" 2> /dev/null & 879 + ${ns_cmd} tcpdump --immediate-mode -s 0 -i "${arg}" -w "${name}_${arg}.pcap" 2> /dev/null & 889 880 tcpdump_pids="${tcpdump_pids} $!" 890 881 ns_cmd= 891 882 done ··· 1844 1835 tdesc="$2" 1845 1836 1846 1837 unset IFS 1838 + 1839 + # Since cleanup() relies on variables modified by this subshell, it 1840 + # has to run in this context. 1841 + trap cleanup EXIT 1847 1842 1848 1843 if [ "$VERBOSE" = "1" ]; then 1849 1844 printf "\n##########################################################################\n\n"
+1
tools/testing/selftests/netfilter/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 nf-queue 3 + connect_close
+1 -1
tools/testing/selftests/netfilter/Makefile
··· 9 9 conntrack_vrf.sh nft_synproxy.sh 10 10 11 11 LDLIBS = -lmnl 12 - TEST_GEN_FILES = nf-queue 12 + TEST_GEN_FILES = nf-queue connect_close 13 13 14 14 include ../lib.mk
+136
tools/testing/selftests/netfilter/connect_close.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <stdio.h> 4 + #include <stdlib.h> 5 + #include <fcntl.h> 6 + #include <string.h> 7 + #include <unistd.h> 8 + #include <signal.h> 9 + 10 + #include <arpa/inet.h> 11 + #include <sys/socket.h> 12 + 13 + #define PORT 12345 14 + #define RUNTIME 10 15 + 16 + static struct { 17 + unsigned int timeout; 18 + unsigned int port; 19 + } opts = { 20 + .timeout = RUNTIME, 21 + .port = PORT, 22 + }; 23 + 24 + static void handler(int sig) 25 + { 26 + _exit(sig == SIGALRM ? 0 : 1); 27 + } 28 + 29 + static void set_timeout(void) 30 + { 31 + struct sigaction action = { 32 + .sa_handler = handler, 33 + }; 34 + 35 + sigaction(SIGALRM, &action, NULL); 36 + 37 + alarm(opts.timeout); 38 + } 39 + 40 + static void do_connect(const struct sockaddr_in *dst) 41 + { 42 + int s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); 43 + 44 + if (s >= 0) 45 + fcntl(s, F_SETFL, O_NONBLOCK); 46 + 47 + connect(s, (struct sockaddr *)dst, sizeof(*dst)); 48 + close(s); 49 + } 50 + 51 + static void do_accept(const struct sockaddr_in *src) 52 + { 53 + int c, one = 1, s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); 54 + 55 + if (s < 0) 56 + return; 57 + 58 + setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)); 59 + setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)); 60 + 61 + bind(s, (struct sockaddr *)src, sizeof(*src)); 62 + 63 + listen(s, 16); 64 + 65 + c = accept(s, NULL, NULL); 66 + if (c >= 0) 67 + close(c); 68 + 69 + close(s); 70 + } 71 + 72 + static int accept_loop(void) 73 + { 74 + struct sockaddr_in src = { 75 + .sin_family = AF_INET, 76 + .sin_port = htons(opts.port), 77 + }; 78 + 79 + inet_pton(AF_INET, "127.0.0.1", &src.sin_addr); 80 + 81 + set_timeout(); 82 + 83 + for (;;) 84 + do_accept(&src); 85 + 86 + return 1; 87 + } 88 + 89 + static int connect_loop(void) 90 + { 91 + struct sockaddr_in dst = { 92 + .sin_family = AF_INET, 93 + .sin_port = htons(opts.port), 94 + }; 95 + 96 + inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr); 97 + 98 + set_timeout(); 99 + 100 + for (;;) 101 + do_connect(&dst); 102 + 103 + return 1; 104 + } 105 + 106 + static void parse_opts(int argc, char **argv) 107 + { 108 + int c; 109 + 110 + while ((c = getopt(argc, argv, "t:p:")) != -1) { 111 + switch (c) { 112 + case 't': 113 + opts.timeout = atoi(optarg); 114 + break; 115 + case 'p': 116 + opts.port = atoi(optarg); 117 + break; 118 + } 119 + } 120 + } 121 + 122 + int main(int argc, char *argv[]) 123 + { 124 + pid_t p; 125 + 126 + parse_opts(argc, argv); 127 + 128 + p = fork(); 129 + if (p < 0) 130 + return 111; 131 + 132 + if (p > 0) 133 + return accept_loop(); 134 + 135 + return connect_loop(); 136 + }
+19
tools/testing/selftests/netfilter/nft_queue.sh
··· 113 113 chain output { 114 114 type filter hook output priority $prio; policy accept; 115 115 tcp dport 12345 queue num 3 116 + tcp sport 23456 queue num 3 116 117 jump nfq 117 118 } 118 119 chain post { ··· 297 296 wait 2>/dev/null 298 297 } 299 298 299 + test_tcp_localhost_connectclose() 300 + { 301 + tmpfile=$(mktemp) || exit 1 302 + 303 + ip netns exec ${nsrouter} ./connect_close -p 23456 -t $timeout & 304 + 305 + ip netns exec ${nsrouter} ./nf-queue -q 3 -t $timeout & 306 + local nfqpid=$! 307 + 308 + sleep 1 309 + rm -f "$tmpfile" 310 + 311 + wait $rpid 312 + [ $? -eq 0 ] && echo "PASS: tcp via loopback with connect/close" 313 + wait 2>/dev/null 314 + } 315 + 300 316 test_tcp_localhost_requeue() 301 317 { 302 318 ip netns exec ${nsrouter} nft -f /dev/stdin <<EOF ··· 442 424 443 425 test_tcp_forward 444 426 test_tcp_localhost 427 + test_tcp_localhost_connectclose 445 428 test_tcp_localhost_requeue 446 429 test_icmp_vrf 447 430
+19 -7
tools/testing/selftests/vm/hugepage-mremap.c
··· 3 3 * hugepage-mremap: 4 4 * 5 5 * Example of remapping huge page memory in a user application using the 6 - * mremap system call. Code assumes a hugetlbfs filesystem is mounted 7 - * at './huge'. The amount of memory used by this test is decided by a command 8 - * line argument in MBs. If missing, the default amount is 10MB. 6 + * mremap system call. The path to a file in a hugetlbfs filesystem must 7 + * be passed as the last argument to this test. The amount of memory used 8 + * by this test in MBs can optionally be passed as an argument. If no memory 9 + * amount is passed, the default amount is 10MB. 9 10 * 10 11 * To make sure the test triggers pmd sharing and goes through the 'unshare' 11 12 * path in the mremap code use 1GB (1024) or more. ··· 26 25 #define DEFAULT_LENGTH_MB 10UL 27 26 #define MB_TO_BYTES(x) (x * 1024 * 1024) 28 27 29 - #define FILE_NAME "huge/hugepagefile" 30 28 #define PROTECTION (PROT_READ | PROT_WRITE | PROT_EXEC) 31 29 #define FLAGS (MAP_SHARED | MAP_ANONYMOUS) 32 30 ··· 107 107 108 108 int main(int argc, char *argv[]) 109 109 { 110 + size_t length; 111 + 112 + if (argc != 2 && argc != 3) { 113 + printf("Usage: %s [length_in_MB] <hugetlb_file>\n", argv[0]); 114 + exit(1); 115 + } 116 + 110 117 /* Read memory length as the first arg if valid, otherwise fallback to 111 - * the default length. Any additional args are ignored. 118 + * the default length. 112 119 */ 113 - size_t length = argc > 1 ? (size_t)atoi(argv[1]) : 0UL; 120 + if (argc == 3) 121 + length = argc > 2 ? (size_t)atoi(argv[1]) : 0UL; 114 122 115 123 length = length > 0 ? length : DEFAULT_LENGTH_MB; 116 124 length = MB_TO_BYTES(length); 117 125 118 126 int ret = 0; 119 127 120 - int fd = open(FILE_NAME, O_CREAT | O_RDWR, 0755); 128 + /* last arg is the hugetlb file name */ 129 + int fd = open(argv[argc-1], O_CREAT | O_RDWR, 0755); 121 130 122 131 if (fd < 0) { 123 132 perror("Open failed"); ··· 177 168 ret = read_bytes(addr, length); 178 169 179 170 munmap(addr, length); 171 + 172 + close(fd); 173 + unlink(argv[argc-1]); 180 174 181 175 return ret; 182 176 }
+2 -1
tools/testing/selftests/vm/run_vmtests.sh
··· 111 111 echo "-----------------------" 112 112 echo "running hugepage-mremap" 113 113 echo "-----------------------" 114 - ./hugepage-mremap 256 114 + ./hugepage-mremap $mnt/huge_mremap 115 115 if [ $? -ne 0 ]; then 116 116 echo "[FAIL]" 117 117 exitcode=1 118 118 else 119 119 echo "[PASS]" 120 120 fi 121 + rm -f $mnt/huge_mremap 121 122 122 123 echo "NOTE: The above hugetlb tests provide minimal coverage. Use" 123 124 echo " https://github.com/libhugetlbfs/libhugetlbfs.git for"
+1
tools/testing/selftests/vm/userfaultfd.c
··· 46 46 #include <signal.h> 47 47 #include <poll.h> 48 48 #include <string.h> 49 + #include <linux/mman.h> 49 50 #include <sys/mman.h> 50 51 #include <sys/syscall.h> 51 52 #include <sys/ioctl.h>
+3
tools/virtio/linux/mm_types.h
··· 1 + struct folio { 2 + struct page page; 3 + };
+1
tools/virtio/virtio_test.c
··· 130 130 memset(dev, 0, sizeof *dev); 131 131 dev->vdev.features = features; 132 132 INIT_LIST_HEAD(&dev->vdev.vqs); 133 + spin_lock_init(&dev->vdev.vqs_list_lock); 133 134 dev->buf_size = 1024; 134 135 dev->buf = malloc(dev->buf_size); 135 136 assert(dev->buf);