Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v7.0-rc5' into driver-core-next

We need the driver-core fixes in here as well to build on top of.

Signed-off-by: Danilo Krummrich <dakr@kernel.org>

+3573 -1512
+1
.mailmap
···
 Herbert Xu <herbert@gondor.apana.org.au>
 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com>
 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn>
+Ignat Korchagin <ignat@linux.win> <ignat@cloudflare.com>
 Ike Panhc <ikepanhc@gmail.com> <ike.pan@canonical.com>
 J. Bruce Fields <bfields@fieldses.org> <bfields@redhat.com>
 J. Bruce Fields <bfields@fieldses.org> <bfields@citi.umich.edu>
+2
Documentation/dev-tools/kunit/run_wrapper.rst
···
 - ``--list_tests_attr``: If set, lists all tests that will be run and all of their
   attributes.

+- ``--list_suites``: If set, lists all suites that will be run.
+
 Command-line completion
 ==============================

+19 -7
Documentation/devicetree/bindings/mtd/st,spear600-smi.yaml
···
 Flash sub nodes describe the memory range and optional per-flash
 properties.

-allOf:
-  - $ref: mtd.yaml#
-
 properties:
   compatible:
     const: st,spear600-smi
···
     $ref: /schemas/types.yaml#/definitions/uint32
     description: Functional clock rate of the SMI controller in Hz.

-  st,smi-fast-mode:
-    type: boolean
-    description: Indicates that the attached flash supports fast read mode.
+patternProperties:
+  "^flash@.*$":
+    $ref: /schemas/mtd/mtd.yaml#
+
+    properties:
+      reg:
+        maxItems: 1
+
+      st,smi-fast-mode:
+        type: boolean
+        description: Indicates that the attached flash supports fast read mode.
+
+    unevaluatedProperties: false
+
+    required:
+      - reg

 required:
   - compatible
   - reg
   - clock-rate
+  - "#address-cells"
+  - "#size-cells"

 unevaluatedProperties: false
···
     interrupts = <12>;
     clock-rate = <50000000>; /* 50 MHz */

-    flash@f8000000 {
+    flash@fc000000 {
         reg = <0xfc000000 0x1000>;
         st,smi-fast-mode;
     };
+2 -2
Documentation/devicetree/bindings/regulator/regulator.yaml
···
     offset from voltage set to regulator.

   regulator-uv-protection-microvolt:
-    description: Set over under voltage protection limit. This is a limit where
+    description: Set under voltage protection limit. This is a limit where
       hardware performs emergency shutdown. Zero can be passed to disable
       protection and value '1' indicates that protection should be enabled but
       limit setting can be omitted. Limit is given as microvolt offset from
···
     is given as microvolt offset from voltage set to regulator.

   regulator-uv-warn-microvolt:
-    description: Set over under voltage warning limit. This is a limit where
+    description: Set under voltage warning limit. This is a limit where
       hardware is assumed still to be functional but approaching limit where
       it gets damaged. Recovery actions should be initiated. Zero can be passed
       to disable detection and value '1' indicates that detection should
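As a sketch of how the two under-voltage properties read after the fix (node name and all values here are hypothetical, for illustration only):

```dts
buck1: regulator@60 {
	/* hypothetical regulator node */
	regulator-min-microvolt = <900000>;
	regulator-max-microvolt = <1200000>;

	/* warn at a 100000 uV offset below the programmed voltage */
	regulator-uv-warn-microvolt = <100000>;

	/* '1' enables protection, leaving the limit at the hardware default */
	regulator-uv-protection-microvolt = <1>;
};
```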
+48
Documentation/driver-api/driver-model/binding.rst
···
 When a driver is removed, the list of devices that it supports is
 iterated over, and the driver's remove callback is called for each
 one. The device is removed from that list and the symlinks removed.
+
+
+Driver Override
+~~~~~~~~~~~~~~~
+
+Userspace may override the standard matching by writing a driver name to
+a device's ``driver_override`` sysfs attribute. When set, only a driver
+whose name matches the override will be considered during binding. This
+bypasses all bus-specific matching (OF, ACPI, ID tables, etc.).
+
+The override may be cleared by writing an empty string, which returns
+the device to standard matching rules. Writing to ``driver_override``
+does not automatically unbind the device from its current driver or
+make any attempt to load the specified driver.
+
+Buses opt into this mechanism by setting the ``driver_override`` flag in
+their ``struct bus_type``::
+
+	const struct bus_type example_bus_type = {
+		...
+		.driver_override = true,
+	};
+
+When the flag is set, the driver core automatically creates the
+``driver_override`` sysfs attribute for every device on that bus.
+
+The bus's ``match()`` callback should check the override before performing
+its own matching, using ``device_match_driver_override()``::
+
+	static int example_match(struct device *dev, const struct device_driver *drv)
+	{
+		int ret;
+
+		ret = device_match_driver_override(dev, drv);
+		if (ret >= 0)
+			return ret;
+
+		/* Fall through to bus-specific matching... */
+	}
+
+``device_match_driver_override()`` returns > 0 if the override matches
+the given driver, 0 if the override is set but does not match, or < 0 if
+no override is set at all.
+
+Additional helpers are available:
+
+- ``device_set_driver_override()`` - set or clear the override from kernel code.
+- ``device_has_driver_override()`` - check whether an override is set.
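The new documentation describes the userspace side only in prose; a minimal shell sketch of the flow looks like this. A temporary directory stands in for the device's sysfs directory (a real path would be something like /sys/bus/pci/devices/0000:00:1f.0, a hypothetical address), since writing to real sysfs requires root and actual hardware:

```shell
# Stand-in for a device's sysfs directory; on real hardware this would be
# the device's directory under /sys/bus/<bus>/devices/.
dev=$(mktemp -d)
: > "$dev/driver_override"

# Restrict matching to one driver only (vfio-pci is a common real-world use):
echo vfio-pci > "$dev/driver_override"
cat "$dev/driver_override"          # prints: vfio-pci

# Clear the override to return to standard matching. Note this does not
# unbind the device from its current driver.
echo > "$dev/driver_override"
```

On real sysfs the subsequent rebind is typically triggered by writing the device name to the bus's ``drivers_probe`` file.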
+6 -6
Documentation/netlink/specs/net_shaper.yaml
···
     flags: [admin-perm]

     do:
-      pre: net-shaper-nl-pre-doit
-      post: net-shaper-nl-post-doit
+      pre: net-shaper-nl-pre-doit-write
+      post: net-shaper-nl-post-doit-write
       request:
         attributes:
           - ifindex
···
     flags: [admin-perm]

     do:
-      pre: net-shaper-nl-pre-doit
-      post: net-shaper-nl-post-doit
+      pre: net-shaper-nl-pre-doit-write
+      post: net-shaper-nl-post-doit-write
       request:
         attributes: *ns-binding

···
     flags: [admin-perm]

     do:
-      pre: net-shaper-nl-pre-doit
-      post: net-shaper-nl-post-doit
+      pre: net-shaper-nl-pre-doit-write
+      post: net-shaper-nl-post-doit-write
       request:
         attributes:
           - ifindex
+8 -6
MAINTAINERS
···
 ASYMMETRIC KEYS
 M:	David Howells <dhowells@redhat.com>
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 L:	keyrings@vger.kernel.org
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
···

 ASYMMETRIC KEYS - ECDSA
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 R:	Stefan Berger <stefanb@linux.ibm.com>
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
···

 ASYMMETRIC KEYS - GOST
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 L:	linux-crypto@vger.kernel.org
 S:	Odd fixes
 F:	crypto/ecrdsa*

 ASYMMETRIC KEYS - RSA
 M:	Lukas Wunner <lukas@wunner.de>
-M:	Ignat Korchagin <ignat@cloudflare.com>
+M:	Ignat Korchagin <ignat@linux.win>
 L:	linux-crypto@vger.kernel.org
 S:	Maintained
 F:	crypto/rsa*
···
 F:	drivers/gpu/drm/tiny/hx8357d.c

 DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE
-M:	Deepak Rawat <drawat.floss@gmail.com>
+M:	Dexuan Cui <decui@microsoft.com>
+M:	Long Li <longli@microsoft.com>
+M:	Saurabh Sengar <ssengar@linux.microsoft.com>
 L:	linux-hyperv@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
···
 F:	drivers/pinctrl/spear/

 SPI NOR SUBSYSTEM
-M:	Tudor Ambarus <tudor.ambarus@linaro.org>
 M:	Pratyush Yadav <pratyush@kernel.org>
 M:	Michael Walle <mwalle@kernel.org>
+R:	Takahiro Kuwano <takahiro.kuwano@infineon.com>
 L:	linux-mtd@lists.infradead.org
 S:	Maintained
 W:	http://www.linux-mtd.infradead.org/
+1 -1
Makefile
···
 VERSION = 7
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Baby Opossum Posse

 # *DOCUMENTATION*
-1
arch/arm/configs/multi_v7_defconfig
···
 CONFIG_TI_CPTS=y
 CONFIG_TI_KEYSTONE_NETCP=y
 CONFIG_TI_KEYSTONE_NETCP_ETHSS=y
-CONFIG_TI_PRUSS=m
 CONFIG_TI_PRUETH=m
 CONFIG_XILINX_EMACLITE=y
 CONFIG_SFP=m
+8 -8
arch/arm64/boot/dts/renesas/r8a78000.dtsi
···
     compatible = "renesas,scif-r8a78000",
                  "renesas,rcar-gen5-scif", "renesas,scif";
     reg = <0 0xc0700000 0 0x40>;
-    interrupts = <GIC_SPI 4074 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 10 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,scif-r8a78000",
                  "renesas,rcar-gen5-scif", "renesas,scif";
     reg = <0 0xc0704000 0 0x40>;
-    interrupts = <GIC_SPI 4075 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 11 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,scif-r8a78000",
                  "renesas,rcar-gen5-scif", "renesas,scif";
     reg = <0 0xc0708000 0 0x40>;
-    interrupts = <GIC_SPI 4076 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 12 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,scif-r8a78000",
                  "renesas,rcar-gen5-scif", "renesas,scif";
     reg = <0 0xc070c000 0 0x40>;
-    interrupts = <GIC_SPI 4077 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 13 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd16>, <&dummy_clk_sgasyncd16>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,hscif-r8a78000",
                  "renesas,rcar-gen5-hscif", "renesas,hscif";
     reg = <0 0xc0710000 0 0x60>;
-    interrupts = <GIC_SPI 4078 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 14 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,hscif-r8a78000",
                  "renesas,rcar-gen5-hscif", "renesas,hscif";
     reg = <0 0xc0714000 0 0x60>;
-    interrupts = <GIC_SPI 4079 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 15 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,hscif-r8a78000",
                  "renesas,rcar-gen5-hscif", "renesas,hscif";
     reg = <0 0xc0718000 0 0x60>;
-    interrupts = <GIC_SPI 4080 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 16 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
···
     compatible = "renesas,hscif-r8a78000",
                  "renesas,rcar-gen5-hscif", "renesas,hscif";
     reg = <0 0xc071c000 0 0x60>;
-    interrupts = <GIC_SPI 4081 IRQ_TYPE_LEVEL_HIGH>;
+    interrupts = <GIC_ESPI 17 IRQ_TYPE_LEVEL_HIGH>;
     clocks = <&dummy_clk_sgasyncd4>, <&dummy_clk_sgasyncd4>, <&scif_clk>;
     clock-names = "fck", "brg_int", "scif_clk";
     status = "disabled";
-30
arch/arm64/boot/dts/renesas/r9a09g057.dtsi
···
     status = "disabled";
 };

-wdt0: watchdog@11c00400 {
-    compatible = "renesas,r9a09g057-wdt";
-    reg = <0 0x11c00400 0 0x400>;
-    clocks = <&cpg CPG_MOD 0x4b>, <&cpg CPG_MOD 0x4c>;
-    clock-names = "pclk", "oscclk";
-    resets = <&cpg 0x75>;
-    power-domains = <&cpg>;
-    status = "disabled";
-};
-
 wdt1: watchdog@14400000 {
     compatible = "renesas,r9a09g057-wdt";
     reg = <0 0x14400000 0 0x400>;
     clocks = <&cpg CPG_MOD 0x4d>, <&cpg CPG_MOD 0x4e>;
     clock-names = "pclk", "oscclk";
     resets = <&cpg 0x76>;
-    power-domains = <&cpg>;
-    status = "disabled";
-};
-
-wdt2: watchdog@13000000 {
-    compatible = "renesas,r9a09g057-wdt";
-    reg = <0 0x13000000 0 0x400>;
-    clocks = <&cpg CPG_MOD 0x4f>, <&cpg CPG_MOD 0x50>;
-    clock-names = "pclk", "oscclk";
-    resets = <&cpg 0x77>;
-    power-domains = <&cpg>;
-    status = "disabled";
-};
-
-wdt3: watchdog@13000400 {
-    compatible = "renesas,r9a09g057-wdt";
-    reg = <0 0x13000400 0 0x400>;
-    clocks = <&cpg CPG_MOD 0x51>, <&cpg CPG_MOD 0x52>;
-    clock-names = "pclk", "oscclk";
-    resets = <&cpg 0x78>;
     power-domains = <&cpg>;
     status = "disabled";
 };
+2 -2
arch/arm64/boot/dts/renesas/r9a09g077.dtsi
···

 cpg: clock-controller@80280000 {
     compatible = "renesas,r9a09g077-cpg-mssr";
-    reg = <0 0x80280000 0 0x1000>,
-          <0 0x81280000 0 0x9000>;
+    reg = <0 0x80280000 0 0x10000>,
+          <0 0x81280000 0 0x10000>;
     clocks = <&extal_clk>;
     clock-names = "extal";
     #clock-cells = <2>;
+2 -2
arch/arm64/boot/dts/renesas/r9a09g087.dtsi
···

 cpg: clock-controller@80280000 {
     compatible = "renesas,r9a09g087-cpg-mssr";
-    reg = <0 0x80280000 0 0x1000>,
-          <0 0x81280000 0 0x9000>;
+    reg = <0 0x80280000 0 0x10000>,
+          <0 0x81280000 0 0x10000>;
     clocks = <&extal_clk>;
     clock-names = "extal";
     #clock-cells = <2>;
+1 -1
arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
···
     <100000000>;
 renesas,settings = [
     80 00 11 19 4c 42 dc 2f 06 7d 20 1a 5f 1e f2 27
-    00 40 00 00 00 00 00 00 06 0c 19 02 3f f0 90 86
+    00 40 00 00 00 00 00 00 06 0c 19 02 3b f0 90 86
     a0 80 30 30 9c
 ];
 };
+1
arch/arm64/boot/dts/renesas/rzt2h-n2h-evk-common.dtsi
···
     regulator-max-microvolt = <3300000>;
     gpios-states = <0>;
     states = <3300000 0>, <1800000 1>;
+    regulator-ramp-delay = <60>;
 };
 #endif

+1
arch/arm64/boot/dts/renesas/rzv2-evk-cn15-sd.dtso
···
     regulator-max-microvolt = <3300000>;
     gpios-states = <0>;
     states = <3300000 0>, <1800000 1>;
+    regulator-ramp-delay = <60>;
 };
 };

+23 -14
arch/arm64/crypto/aes-neonbs-glue.c
···
 			  unsigned int key_len)
 {
 	struct aesbs_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_aes_ctx rk;
+	struct crypto_aes_ctx *rk;
 	int err;

-	err = aes_expandkey(&rk, in_key, key_len);
+	rk = kmalloc(sizeof(*rk), GFP_KERNEL);
+	if (!rk)
+		return -ENOMEM;
+
+	err = aes_expandkey(rk, in_key, key_len);
 	if (err)
-		return err;
+		goto out;

 	ctx->rounds = 6 + key_len / 4;

 	scoped_ksimd()
-		aesbs_convert_key(ctx->rk, rk.key_enc, ctx->rounds);
-
-	return 0;
+		aesbs_convert_key(ctx->rk, rk->key_enc, ctx->rounds);
+out:
+	kfree_sensitive(rk);
+	return err;
 }

 static int __ecb_crypt(struct skcipher_request *req,
···
 			  unsigned int key_len)
 {
 	struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_aes_ctx rk;
+	struct crypto_aes_ctx *rk;
 	int err;

-	err = aes_expandkey(&rk, in_key, key_len);
+	rk = kmalloc(sizeof(*rk), GFP_KERNEL);
+	if (!rk)
+		return -ENOMEM;
+
+	err = aes_expandkey(rk, in_key, key_len);
 	if (err)
-		return err;
+		goto out;

 	ctx->key.rounds = 6 + key_len / 4;

-	memcpy(ctx->enc, rk.key_enc, sizeof(ctx->enc));
+	memcpy(ctx->enc, rk->key_enc, sizeof(ctx->enc));

 	scoped_ksimd()
-		aesbs_convert_key(ctx->key.rk, rk.key_enc, ctx->key.rounds);
-	memzero_explicit(&rk, sizeof(rk));
-
-	return 0;
+		aesbs_convert_key(ctx->key.rk, rk->key_enc, ctx->key.rounds);
+out:
+	kfree_sensitive(rk);
+	return err;
 }

 static int cbc_encrypt(struct skcipher_request *req)
+8
arch/arm64/kernel/pi/patch-scs.c
···
 		size -= 2;
 		break;

+	case DW_CFA_advance_loc4:
+		loc += *opcode++ * code_alignment_factor;
+		loc += (*opcode++ << 8) * code_alignment_factor;
+		loc += (*opcode++ << 16) * code_alignment_factor;
+		loc += (*opcode++ << 24) * code_alignment_factor;
+		size -= 4;
+		break;
+
 	case DW_CFA_def_cfa:
 	case DW_CFA_offset_extended:
 		size = skip_xleb128(&opcode, size);
+2 -1
arch/arm64/kernel/rsi.c
···

 #include <asm/io.h>
 #include <asm/mem_encrypt.h>
+#include <asm/pgtable.h>
 #include <asm/rsi.h>

 static struct realm_config config;
···
 		return;
 	if (WARN_ON(rsi_get_realm_config(&config)))
 		return;
-	prot_ns_shared = BIT(config.ipa_bits - 1);
+	prot_ns_shared = __phys_to_pte_val(BIT(config.ipa_bits - 1));

 	if (arm64_ioremap_prot_hook_register(realm_ioremap_hook))
 		return;
+3
arch/loongarch/Kconfig
···
 config AS_HAS_LVZ_EXTENSION
 	def_bool $(as-instr,hvcl 0)

+config AS_HAS_SCQ_EXTENSION
+	def_bool $(as-instr,sc.q \$t0$(comma)\$t1$(comma)\$t2)
+
 config CC_HAS_ANNOTATE_TABLEJUMP
 	def_bool $(cc-option,-mannotate-tablejump)

+5
arch/loongarch/include/asm/cmpxchg.h
···
 	arch_cmpxchg((ptr), (o), (n)); \
 })

+#ifdef CONFIG_AS_HAS_SCQ_EXTENSION
+
 union __u128_halves {
 	u128 full;
 	struct {
···
 	BUILD_BUG_ON(sizeof(*(ptr)) != 16); \
 	__arch_cmpxchg128(ptr, o, n, ""); \
 })
+
+#endif /* CONFIG_AS_HAS_SCQ_EXTENSION */
+
 #else
 #include <asm-generic/cmpxchg-local.h>
 #define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+12 -2
arch/loongarch/include/asm/uaccess.h
···
 								\
 	__get_kernel_common(*((type *)(dst)), sizeof(type),	\
 			    (__force type *)(src));		\
-	if (unlikely(__gu_err))					\
+	if (unlikely(__gu_err)) {				\
+		pr_info("%s: memory access failed, ecode 0x%x\n", \
+			__func__, read_csr_excode());		\
+		pr_info("%s: the caller is %pS\n",		\
+			__func__, __builtin_return_address(0));	\
 		goto err_label;					\
+	}							\
 } while (0)

 #define __put_kernel_nofault(dst, src, type, err_label)		\
···
 								\
 	__pu_val = *(__force type *)(src);			\
 	__put_kernel_common(((type *)(dst)), sizeof(type));	\
-	if (unlikely(__pu_err))					\
+	if (unlikely(__pu_err)) {				\
+		pr_info("%s: memory access failed, ecode 0x%x\n", \
+			__func__, read_csr_excode());		\
+		pr_info("%s: the caller is %pS\n",		\
+			__func__, __builtin_return_address(0));	\
 		goto err_label;					\
+	}							\
 } while (0)

 extern unsigned long __copy_user(void *to, const void *from, __kernel_size_t n);
+25 -6
arch/loongarch/kernel/inst.c
···

 	if (smp_processor_id() == copy->cpu) {
 		ret = copy_to_kernel_nofault(copy->dst, copy->src, copy->len);
-		if (ret)
+		if (ret) {
 			pr_err("%s: operation failed\n", __func__);
+			return ret;
+		}
 	}

 	flush_icache_range((unsigned long)copy->dst, (unsigned long)copy->dst + copy->len);

-	return ret;
+	return 0;
 }

 int larch_insn_text_copy(void *dst, void *src, size_t len)
 {
 	int ret = 0;
+	int err = 0;
 	size_t start, end;
 	struct insn_copy copy = {
 		.dst = dst,
 		.src = src,
 		.len = len,
-		.cpu = smp_processor_id(),
+		.cpu = raw_smp_processor_id(),
 	};
+
+	/*
+	 * Ensure copy.cpu won't be hot removed before stop_machine.
+	 * If it is removed nobody will really update the text.
+	 */
+	lockdep_assert_cpus_held();

 	start = round_down((size_t)dst, PAGE_SIZE);
 	end = round_up((size_t)dst + len, PAGE_SIZE);

-	set_memory_rw(start, (end - start) / PAGE_SIZE);
-	ret = stop_machine(text_copy_cb, &copy, cpu_online_mask);
-	set_memory_rox(start, (end - start) / PAGE_SIZE);
+	err = set_memory_rw(start, (end - start) / PAGE_SIZE);
+	if (err) {
+		pr_info("%s: set_memory_rw() failed\n", __func__);
+		return err;
+	}
+
+	ret = stop_machine_cpuslocked(text_copy_cb, &copy, cpu_online_mask);
+
+	err = set_memory_rox(start, (end - start) / PAGE_SIZE);
+	if (err) {
+		pr_info("%s: set_memory_rox() failed\n", __func__);
+		return err;
+	}

 	return ret;
 }
+2 -2
arch/loongarch/kvm/vm.c
···
 	kvm->arch.kvm_features |= BIT(KVM_LOONGARCH_VM_FEAT_PMU);

 	/* Enable all PV features by default */
-	kvm->arch.pv_features = BIT(KVM_FEATURE_IPI);
-	kvm->arch.kvm_features = BIT(KVM_LOONGARCH_VM_FEAT_PV_IPI);
+	kvm->arch.pv_features |= BIT(KVM_FEATURE_IPI);
+	kvm->arch.kvm_features |= BIT(KVM_LOONGARCH_VM_FEAT_PV_IPI);
 	if (kvm_pvtime_supported()) {
 		kvm->arch.pv_features |= BIT(KVM_FEATURE_PREEMPT);
 		kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME);
+11
arch/loongarch/net/bpf_jit.c
···
 {
 	int ret;

+	cpus_read_lock();
 	mutex_lock(&text_mutex);
 	ret = larch_insn_text_copy(dst, src, len);
 	mutex_unlock(&text_mutex);
+	cpus_read_unlock();

 	return ret ? ERR_PTR(-EINVAL) : dst;
 }
···
 	if (ret)
 		return ret;

+	cpus_read_lock();
 	mutex_lock(&text_mutex);
 	if (memcmp(ip, new_insns, LOONGARCH_LONG_JUMP_NBYTES))
 		ret = larch_insn_text_copy(ip, new_insns, LOONGARCH_LONG_JUMP_NBYTES);
 	mutex_unlock(&text_mutex);
+	cpus_read_unlock();

 	return ret;
 }
···
 	for (i = 0; i < (len / sizeof(u32)); i++)
 		inst[i] = INSN_BREAK;

+	cpus_read_lock();
 	mutex_lock(&text_mutex);
 	if (larch_insn_text_copy(dst, inst, len))
 		ret = -EINVAL;
 	mutex_unlock(&text_mutex);
+	cpus_read_unlock();

 	kvfree(inst);

···
 void arch_free_bpf_trampoline(void *image, unsigned int size)
 {
 	bpf_prog_pack_free(image, size);
+}
+
+int arch_protect_bpf_trampoline(void *image, unsigned int size)
+{
+	return 0;
 }

 /*
+2 -2
arch/parisc/kernel/cache.c
···
 #else
 "1:	cmpb,<<,n %0,%2,1b\n"
 #endif
-"	fic,m %3(%4,%0)\n"
+"	fdc,m %3(%4,%0)\n"
 "2:	sync\n"
 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
 	: "+r" (start), "+r" (error)
···
 #else
 "1:	cmpb,<<,n %0,%2,1b\n"
 #endif
-"	fdc,m %3(%4,%0)\n"
+"	fic,m %3(%4,%0)\n"
 "2:	sync\n"
 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
 	: "+r" (start), "+r" (error)
+2
arch/riscv/boot/dts/microchip/mpfs.dtsi
···
     clocks = <&clkcfg CLK_CAN0>, <&clkcfg CLK_MSSPLL3>;
     interrupt-parent = <&plic>;
     interrupts = <56>;
+    resets = <&mss_top_sysreg CLK_CAN0>;
     status = "disabled";
 };

···
     clocks = <&clkcfg CLK_CAN1>, <&clkcfg CLK_MSSPLL3>;
     interrupt-parent = <&plic>;
     interrupts = <57>;
+    resets = <&mss_top_sysreg CLK_CAN1>;
     status = "disabled";
 };

-4
arch/sh/drivers/platform_early.c
···
 	struct platform_device *pdev = to_platform_device(dev);
 	struct platform_driver *pdrv = to_platform_driver(drv);

-	/* When driver_override is set, only bind to the matching driver */
-	if (pdev->driver_override)
-		return !strcmp(pdev->driver_override, drv->name);
-
 	/* Then try to match against the id table */
 	if (pdrv->id_table)
 		return platform_match_id(pdrv->id_table, pdev) != NULL;
+1 -1
arch/x86/entry/vdso/common/vclock_gettime.c
···
 #include <linux/types.h>
 #include <vdso/gettime.h>

-#include "../../../../lib/vdso/gettimeofday.c"
+#include "lib/vdso/gettimeofday.c"

 int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
 {
+5 -2
arch/x86/events/core.c
···
 		else if (i < n_running)
 			continue;

-		if (hwc->state & PERF_HES_ARCH)
+		cpuc->events[hwc->idx] = event;
+
+		if (hwc->state & PERF_HES_ARCH) {
+			static_call(x86_pmu_set_period)(event);
 			continue;
+		}

 		/*
 		 * if cpuc->enabled = 0, then no wrmsr as
 		 * per x86_pmu_enable_event()
 		 */
-		cpuc->events[hwc->idx] = event;
 		x86_pmu_start(event, PERF_EF_RELOAD);
 	}
 	cpuc->n_added = 0;
+21 -10
arch/x86/events/intel/core.c
···
 	event->hw.dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);
 }

+static inline int intel_set_branch_counter_constr(struct perf_event *event,
+						  int *num)
+{
+	if (branch_sample_call_stack(event))
+		return -EINVAL;
+	if (branch_sample_counters(event)) {
+		(*num)++;
+		event->hw.dyn_constraint &= x86_pmu.lbr_counters;
+	}
+
+	return 0;
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);
···
 	 * group, which requires the extra space to store the counters.
 	 */
 	leader = event->group_leader;
-	if (branch_sample_call_stack(leader))
+	if (intel_set_branch_counter_constr(leader, &num))
 		return -EINVAL;
-	if (branch_sample_counters(leader)) {
-		num++;
-		leader->hw.dyn_constraint &= x86_pmu.lbr_counters;
-	}
 	leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;

 	for_each_sibling_event(sibling, leader) {
-		if (branch_sample_call_stack(sibling))
+		if (intel_set_branch_counter_constr(sibling, &num))
 			return -EINVAL;
-		if (branch_sample_counters(sibling)) {
-			num++;
-			sibling->hw.dyn_constraint &= x86_pmu.lbr_counters;
-		}
+	}
+
+	/* event isn't installed as a sibling yet. */
+	if (event != leader) {
+		if (intel_set_branch_counter_constr(event, &num))
+			return -EINVAL;
 	}

 	if (num > fls(x86_pmu.lbr_counters))
+7 -4
arch/x86/events/intel/ds.c
···
 	if (omr.omr_remote)
 		val |= REM;

-	val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
-
 	if (omr.omr_source == 0x2) {
-		u8 snoop = omr.omr_snoop | omr.omr_promoted;
+		u8 snoop = omr.omr_snoop | (omr.omr_promoted << 1);

-		if (snoop == 0x0)
+		if (omr.omr_hitm)
+			val |= P(SNOOP, HITM);
+		else if (snoop == 0x0)
 			val |= P(SNOOP, NA);
 		else if (snoop == 0x1)
 			val |= P(SNOOP, MISS);
···
 		else if (snoop == 0x3)
 			val |= P(SNOOP, NONE);
 	} else if (omr.omr_source > 0x2 && omr.omr_source < 0x7) {
+		val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
 		val |= omr.omr_snoop ? P(SNOOPX, FWD) : 0;
+	} else {
+		val |= P(SNOOP, NONE);
 	}

 	return val;
+61 -57
arch/x86/hyperv/hv_crash.c
···
 		cpu_relax();
 }

-/* This cannot be inlined as it needs stack */
-static noinline __noclone void hv_crash_restore_tss(void)
+static void hv_crash_restore_tss(void)
 {
 	load_TR_desc();
 }

-/* This cannot be inlined as it needs stack */
-static noinline void hv_crash_clear_kernpt(void)
+static void hv_crash_clear_kernpt(void)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
···
 	native_p4d_clear(p4d);
 }

-/*
- * This is the C entry point from the asm glue code after the disable hypercall.
- * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
- * page tables with our below 4G page identity mapped, but using a temporary
- * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
- * available. We restore kernel GDT, and rest of the context, and continue
- * to kexec.
- */
-static asmlinkage void __noreturn hv_crash_c_entry(void)
+
+static void __noreturn hv_crash_handle(void)
 {
-	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
-
-	/* first thing, restore kernel gdt */
-	native_load_gdt(&ctxt->gdtr);
-
-	asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
-	asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
-
-	asm volatile("movw %%ax, %%ds" : : "a"(ctxt->ds));
-	asm volatile("movw %%ax, %%es" : : "a"(ctxt->es));
-	asm volatile("movw %%ax, %%fs" : : "a"(ctxt->fs));
-	asm volatile("movw %%ax, %%gs" : : "a"(ctxt->gs));
-
-	native_wrmsrq(MSR_IA32_CR_PAT, ctxt->pat);
-	asm volatile("movq %0, %%cr0" : : "r"(ctxt->cr0));
-
-	asm volatile("movq %0, %%cr8" : : "r"(ctxt->cr8));
-	asm volatile("movq %0, %%cr4" : : "r"(ctxt->cr4));
-	asm volatile("movq %0, %%cr2" : : "r"(ctxt->cr4));
-
-	native_load_idt(&ctxt->idtr);
-	native_wrmsrq(MSR_GS_BASE, ctxt->gsbase);
-	native_wrmsrq(MSR_EFER, ctxt->efer);
-
-	/* restore the original kernel CS now via far return */
-	asm volatile("movzwq %0, %%rax\n\t"
-		     "pushq %%rax\n\t"
-		     "pushq $1f\n\t"
-		     "lretq\n\t"
-		     "1:nop\n\t" : : "m"(ctxt->cs) : "rax");
-
-	/* We are in asmlinkage without stack frame, hence make C function
-	 * calls which will buy stack frames.
-	 */
 	hv_crash_restore_tss();
 	hv_crash_clear_kernpt();
···

 	hv_panic_timeout_reboot();
 }
-/* Tell gcc we are using lretq long jump in the above function intentionally */
+
+/*
+ * __naked functions do not permit function calls, not even to __always_inline
+ * functions that only contain asm() blocks themselves. So use a macro instead.
+ */
+#define hv_wrmsr(msr, val) \
+	asm volatile("wrmsr" :: "c"(msr), "a"((u32)val), "d"((u32)(val >> 32)) : "memory")
+
+/*
+ * This is the C entry point from the asm glue code after the disable hypercall.
+ * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
+ * page tables with our below 4G page identity mapped, but using a temporary
+ * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
+ * available. We restore kernel GDT, and rest of the context, and continue
+ * to kexec.
+ */
+static void __naked hv_crash_c_entry(void)
+{
+	/* first thing, restore kernel gdt */
+	asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
+
+	asm volatile("movw %0, %%ss\n\t"
+		     "movq %1, %%rsp"
+		     :: "m"(hv_crash_ctxt.ss), "m"(hv_crash_ctxt.rsp));
+
+	asm volatile("movw %0, %%ds" : : "m"(hv_crash_ctxt.ds));
+	asm volatile("movw %0, %%es" : : "m"(hv_crash_ctxt.es));
+	asm volatile("movw %0, %%fs" : : "m"(hv_crash_ctxt.fs));
+	asm volatile("movw %0, %%gs" : : "m"(hv_crash_ctxt.gs));
+
+	hv_wrmsr(MSR_IA32_CR_PAT, hv_crash_ctxt.pat);
+	asm volatile("movq %0, %%cr0" : : "r"(hv_crash_ctxt.cr0));
+
+	asm volatile("movq %0, %%cr8" : : "r"(hv_crash_ctxt.cr8));
+	asm volatile("movq %0, %%cr4" : : "r"(hv_crash_ctxt.cr4));
+	asm volatile("movq %0, %%cr2" : : "r"(hv_crash_ctxt.cr2));
+
+	asm volatile("lidt %0" : : "m" (hv_crash_ctxt.idtr));
+	hv_wrmsr(MSR_GS_BASE, hv_crash_ctxt.gsbase);
+	hv_wrmsr(MSR_EFER, hv_crash_ctxt.efer);
+
+	/* restore the original kernel CS now via far return */
+	asm volatile("pushq %q0\n\t"
+		     "pushq %q1\n\t"
+		     "lretq"
+		     :: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));
+}
+/* Tell objtool we are using lretq long jump in the above function intentionally */
 STACK_FRAME_NON_STANDARD(hv_crash_c_entry);

 static void hv_mark_tss_not_busy(void)
···
 {
 	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;

-	asm volatile("movq %%rsp,%0" : "=m"(ctxt->rsp));
+	ctxt->rsp = current_stack_pointer;

 	ctxt->cr0 = native_read_cr0();
 	ctxt->cr4 = native_read_cr4();

-	asm volatile("movq %%cr2, %0" : "=a"(ctxt->cr2));
-	asm volatile("movq %%cr8, %0" : "=a"(ctxt->cr8));
+	asm volatile("movq %%cr2, %0" : "=r"(ctxt->cr2));
+	asm volatile("movq %%cr8, %0" : "=r"(ctxt->cr8));

-	asm volatile("movl %%cs, %%eax" : "=a"(ctxt->cs));
-	asm volatile("movl %%ss, %%eax" : "=a"(ctxt->ss));
-	asm volatile("movl %%ds, %%eax" : "=a"(ctxt->ds));
-	asm volatile("movl %%es, %%eax" : "=a"(ctxt->es));
-	asm volatile("movl %%fs, %%eax" : "=a"(ctxt->fs));
-	asm volatile("movl %%gs, %%eax" : "=a"(ctxt->gs));
+	asm volatile("movw %%cs, %0" : "=m"(ctxt->cs));
+	asm volatile("movw %%ss, %0" : "=m"(ctxt->ss));
+	asm volatile("movw %%ds, %0" : "=m"(ctxt->ds));
+	asm volatile("movw %%es, %0" : "=m"(ctxt->es));
+	asm volatile("movw %%fs, %0" : "=m"(ctxt->fs));
+	asm volatile("movw %%gs, %0" : "=m"(ctxt->gs));

 	native_store_gdt(&ctxt->gdtr);
 	store_idt(&ctxt->idtr);
+16 -2
arch/x86/kernel/apic/x2apic_uv_x.c
··· 1708 1708 struct uv_hub_info_s *new_hub; 1709 1709 1710 1710 /* Allocate & fill new per hub info list */ 1711 - new_hub = (bid == 0) ? &uv_hub_info_node0 1712 - : kzalloc_node(bytes, GFP_KERNEL, uv_blade_to_node(bid)); 1711 + if (bid == 0) { 1712 + new_hub = &uv_hub_info_node0; 1713 + } else { 1714 + int nid; 1715 + 1716 + /* 1717 + * Deconfigured sockets are mapped to SOCK_EMPTY. Use 1718 + * NUMA_NO_NODE to allocate on a valid node. 1719 + */ 1720 + nid = uv_blade_to_node(bid); 1721 + if (nid == SOCK_EMPTY) 1722 + nid = NUMA_NO_NODE; 1723 + 1724 + new_hub = kzalloc_node(bytes, GFP_KERNEL, nid); 1725 + } 1726 + 1713 1727 if (WARN_ON_ONCE(!new_hub)) { 1714 1728 /* do not kfree() bid 0, which is statically allocated */ 1715 1729 while (--bid > 0)
+11 -6
arch/x86/kernel/cpu/mce/amd.c
··· 875 875 { 876 876 amd_reset_thr_limit(m->bank); 877 877 878 - /* Clear MCA_DESTAT for all deferred errors even those logged in MCA_STATUS. */ 879 - if (m->status & MCI_STATUS_DEFERRED) 880 - mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0); 878 + if (mce_flags.smca) { 879 + /* 880 + * Clear MCA_DESTAT for all deferred errors even those 881 + * logged in MCA_STATUS. 882 + */ 883 + if (m->status & MCI_STATUS_DEFERRED) 884 + mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0); 881 885 882 - /* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */ 883 - if (m->kflags & MCE_CHECK_DFR_REGS) 884 - return; 886 + /* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */ 887 + if (m->kflags & MCE_CHECK_DFR_REGS) 888 + return; 889 + } 885 890 886 891 mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0); 887 892 }
+3 -2
arch/x86/kernel/cpu/mshyperv.c
··· 496 496 test_and_set_bit(HYPERV_DBG_FASTFAIL_VECTOR, system_vectors)) 497 497 BUG(); 498 498 499 - pr_info("Hyper-V: reserve vectors: %d %d %d\n", HYPERV_DBG_ASSERT_VECTOR, 500 - HYPERV_DBG_SERVICE_VECTOR, HYPERV_DBG_FASTFAIL_VECTOR); 499 + pr_info("Hyper-V: reserve vectors: 0x%x 0x%x 0x%x\n", 500 + HYPERV_DBG_ASSERT_VECTOR, HYPERV_DBG_SERVICE_VECTOR, 501 + HYPERV_DBG_FASTFAIL_VECTOR); 501 502 } 502 503 503 504 static void __init ms_hyperv_init_platform(void)
+8 -7
drivers/acpi/acpi_processor.c
··· 113 113 PCI_ANY_ID, PCI_ANY_ID, NULL); 114 114 if (ide_dev) { 115 115 errata.piix4.bmisx = pci_resource_start(ide_dev, 4); 116 + if (errata.piix4.bmisx) 117 + dev_dbg(&ide_dev->dev, 118 + "Bus master activity detection (BM-IDE) erratum enabled\n"); 119 + 116 120 pci_dev_put(ide_dev); 117 121 } 118 122 ··· 135 131 if (isa_dev) { 136 132 pci_read_config_byte(isa_dev, 0x76, &value1); 137 133 pci_read_config_byte(isa_dev, 0x77, &value2); 138 - if ((value1 & 0x80) || (value2 & 0x80)) 134 + if ((value1 & 0x80) || (value2 & 0x80)) { 139 135 errata.piix4.fdma = 1; 136 + dev_dbg(&isa_dev->dev, 137 + "Type-F DMA livelock erratum (C3 disabled)\n"); 138 + } 140 139 pci_dev_put(isa_dev); 141 140 } 142 141 143 142 break; 144 143 } 145 - 146 - if (ide_dev) 147 - dev_dbg(&ide_dev->dev, "Bus master activity detection (BM-IDE) erratum enabled\n"); 148 - 149 - if (isa_dev) 150 - dev_dbg(&isa_dev->dev, "Type-F DMA livelock erratum (C3 disabled)\n"); 151 144 152 145 return 0; 153 146 }
+1 -1
drivers/acpi/acpica/acpredef.h
··· 451 451 452 452 {{"_DSM", 453 453 METHOD_4ARGS(ACPI_TYPE_BUFFER, ACPI_TYPE_INTEGER, ACPI_TYPE_INTEGER, 454 - ACPI_TYPE_ANY | ACPI_TYPE_PACKAGE) | 454 + ACPI_TYPE_PACKAGE | ACPI_TYPE_ANY) | 455 455 ARG_COUNT_IS_MINIMUM, 456 456 METHOD_RETURNS(ACPI_RTYPE_ALL)}}, /* Must return a value, but it can be of any type */ 457 457
-3
drivers/acpi/bus.c
··· 818 818 if (list_empty(&adev->pnp.ids)) 819 819 return NULL; 820 820 821 - if (adev->pnp.type.backlight) 822 - return adev; 823 - 824 821 return acpi_primary_dev_companion(adev, dev); 825 822 } 826 823
+3
drivers/ata/libata-core.c
··· 4188 4188 { "ST3320[68]13AS", "SD1[5-9]", ATA_QUIRK_NONCQ | 4189 4189 ATA_QUIRK_FIRMWARE_WARN }, 4190 4190 4191 + /* ADATA devices with LPM issues. */ 4192 + { "ADATA SU680", NULL, ATA_QUIRK_NOLPM }, 4193 + 4191 4194 /* Seagate disks with LPM issues */ 4192 4195 { "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM }, 4193 4196 { "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM },
+1 -1
drivers/ata/libata-scsi.c
··· 3600 3600 3601 3601 if (cdb[2] != 1 && cdb[2] != 3) { 3602 3602 ata_dev_warn(dev, "invalid command format %d\n", cdb[2]); 3603 - ata_scsi_set_invalid_field(dev, cmd, 1, 0xff); 3603 + ata_scsi_set_invalid_field(dev, cmd, 2, 0xff); 3604 3604 return 0; 3605 3605 } 3606 3606
+42 -1
drivers/base/bus.c
··· 504 504 }
505 505 EXPORT_SYMBOL_GPL(bus_for_each_drv);
506 506
507 + static ssize_t driver_override_store(struct device *dev,
508 + struct device_attribute *attr,
509 + const char *buf, size_t count)
510 + {
511 + int ret;
512 +
513 + ret = __device_set_driver_override(dev, buf, count);
514 + if (ret)
515 + return ret;
516 +
517 + return count;
518 + }
519 +
520 + static ssize_t driver_override_show(struct device *dev,
521 + struct device_attribute *attr, char *buf)
522 + {
523 + guard(spinlock)(&dev->driver_override.lock);
524 + return sysfs_emit(buf, "%s\n", dev->driver_override.name);
525 + }
526 + static DEVICE_ATTR_RW(driver_override);
527 +
528 + static struct attribute *driver_override_dev_attrs[] = {
529 + &dev_attr_driver_override.attr,
530 + NULL,
531 + };
532 +
533 + static const struct attribute_group driver_override_dev_group = {
534 + .attrs = driver_override_dev_attrs,
535 + };
536 +
507 537 /**
508 538 * bus_add_device - add device to bus
509 539 * @dev: device being added
··· 567 537 if (error)
568 538 goto out_put;
569 539
540 + if (dev->bus->driver_override) {
541 + error = device_add_group(dev, &driver_override_dev_group);
542 + if (error)
543 + goto out_groups;
544 + }
545 +
570 546 error = sysfs_create_link(&sp->devices_kset->kobj, &dev->kobj, dev_name(dev));
571 547 if (error)
572 - goto out_groups;
548 + goto out_override;
573 549
574 550 error = sysfs_create_link(&dev->kobj, &sp->subsys.kobj, "subsystem");
575 551 if (error)
··· 586 550
587 551 out_subsys:
588 552 sysfs_remove_link(&sp->devices_kset->kobj, dev_name(dev));
553 + out_override:
554 + if (dev->bus->driver_override)
555 + device_remove_group(dev, &driver_override_dev_group);
589 556 out_groups:
590 557 device_remove_groups(dev, sp->bus->dev_groups);
591 558 out_put:
··· 646 607
647 608 sysfs_remove_link(&dev->kobj, "subsystem");
648 609 sysfs_remove_link(&sp->devices_kset->kobj, dev_name(dev));
610 + if (dev->bus->driver_override)
611 + device_remove_group(dev, &driver_override_dev_group);
649 612 device_remove_groups(dev, dev->bus->dev_groups);
650 613 if (klist_node_attached(&dev->p->knode_bus))
651 614 klist_del(&dev->p->knode_bus);
+2
drivers/base/core.c
··· 2556 2556 devres_release_all(dev); 2557 2557 2558 2558 kfree(dev->dma_range_map); 2559 + kfree(dev->driver_override.name); 2559 2560 2560 2561 if (dev->release) 2561 2562 dev->release(dev); ··· 3161 3160 kobject_init(&dev->kobj, &device_ktype); 3162 3161 INIT_LIST_HEAD(&dev->dma_pools); 3163 3162 mutex_init(&dev->mutex); 3163 + spin_lock_init(&dev->driver_override.lock); 3164 3164 lockdep_set_novalidate_class(&dev->mutex); 3165 3165 spin_lock_init(&dev->devres_lock); 3166 3166 INIT_LIST_HEAD(&dev->devres_head);
+60
drivers/base/dd.c
··· 381 381 } 382 382 __exitcall(deferred_probe_exit); 383 383 384 + int __device_set_driver_override(struct device *dev, const char *s, size_t len) 385 + { 386 + const char *new, *old; 387 + char *cp; 388 + 389 + if (!s) 390 + return -EINVAL; 391 + 392 + /* 393 + * The stored value will be used in sysfs show callback (sysfs_emit()), 394 + * which has a length limit of PAGE_SIZE and adds a trailing newline. 395 + * Thus we can store one character less to avoid truncation during sysfs 396 + * show. 397 + */ 398 + if (len >= (PAGE_SIZE - 1)) 399 + return -EINVAL; 400 + 401 + /* 402 + * Compute the real length of the string in case userspace sends us a 403 + * bunch of \0 characters like python likes to do. 404 + */ 405 + len = strlen(s); 406 + 407 + if (!len) { 408 + /* Empty string passed - clear override */ 409 + spin_lock(&dev->driver_override.lock); 410 + old = dev->driver_override.name; 411 + dev->driver_override.name = NULL; 412 + spin_unlock(&dev->driver_override.lock); 413 + kfree(old); 414 + 415 + return 0; 416 + } 417 + 418 + cp = strnchr(s, len, '\n'); 419 + if (cp) 420 + len = cp - s; 421 + 422 + new = kstrndup(s, len, GFP_KERNEL); 423 + if (!new) 424 + return -ENOMEM; 425 + 426 + spin_lock(&dev->driver_override.lock); 427 + old = dev->driver_override.name; 428 + if (cp != s) { 429 + dev->driver_override.name = new; 430 + spin_unlock(&dev->driver_override.lock); 431 + } else { 432 + /* "\n" passed - clear override */ 433 + dev->driver_override.name = NULL; 434 + spin_unlock(&dev->driver_override.lock); 435 + 436 + kfree(new); 437 + } 438 + kfree(old); 439 + 440 + return 0; 441 + } 442 + EXPORT_SYMBOL_GPL(__device_set_driver_override); 443 + 384 444 /** 385 445 * device_is_bound() - Check if device is bound to a driver 386 446 * @dev: device to check
+5 -32
drivers/base/platform.c
··· 603 603 kfree(pa->pdev.dev.platform_data);
604 604 kfree(pa->pdev.mfd_cell);
605 605 kfree(pa->pdev.resource);
606 - kfree(pa->pdev.driver_override);
607 606 kfree(pa);
608 607 }
609 608
··· 1308 1309 }
1309 1310 static DEVICE_ATTR_RO(numa_node);
1310 1311
1311 - static ssize_t driver_override_show(struct device *dev,
1312 - struct device_attribute *attr, char *buf)
1313 - {
1314 - struct platform_device *pdev = to_platform_device(dev);
1315 - ssize_t len;
1316 -
1317 - device_lock(dev);
1318 - len = sysfs_emit(buf, "%s\n", pdev->driver_override);
1319 - device_unlock(dev);
1320 -
1321 - return len;
1322 - }
1323 -
1324 - static ssize_t driver_override_store(struct device *dev,
1325 - struct device_attribute *attr,
1326 - const char *buf, size_t count)
1327 - {
1328 - struct platform_device *pdev = to_platform_device(dev);
1329 - int ret;
1330 -
1331 - ret = driver_set_override(dev, &pdev->driver_override, buf, count);
1332 - if (ret)
1333 - return ret;
1334 -
1335 - return count;
1336 - }
1337 - static DEVICE_ATTR_RW(driver_override);
1338 -
1339 1312 static struct attribute *platform_dev_attrs[] = {
1340 1313 &dev_attr_modalias.attr,
1341 1314 &dev_attr_numa_node.attr,
1342 - &dev_attr_driver_override.attr,
1343 1315 NULL,
1344 1316 };
1345 1317
··· 1348 1378 {
1349 1379 struct platform_device *pdev = to_platform_device(dev);
1350 1380 struct platform_driver *pdrv = to_platform_driver(drv);
1381 + int ret;
1351 1382
1352 1383 /* When driver_override is set, only bind to the matching driver */
1353 - if (pdev->driver_override)
1354 - return !strcmp(pdev->driver_override, drv->name);
1384 + ret = device_match_driver_override(dev, drv);
1385 + if (ret >= 0)
1386 + return ret;
1355 1387
1356 1388 /* Attempt an OF style match first */
1357 1389 if (of_driver_match_device(dev, drv))
··· 1488 1516 const struct bus_type platform_bus_type = {
1489 1517 .name = "platform",
1490 1518 .dev_groups = platform_dev_groups,
1519 + .driver_override = true,
1491 1520 .match = platform_match,
1492 1521 .uevent = platform_uevent,
1493 1522 .probe = platform_probe,
+1
drivers/base/power/runtime.c
··· 1895 1895 void pm_runtime_remove(struct device *dev) 1896 1896 { 1897 1897 __pm_runtime_disable(dev, false); 1898 + flush_work(&dev->power.work); 1898 1899 pm_runtime_reinit(dev); 1899 1900 } 1900 1901
+2
drivers/bluetooth/btqca.c
··· 787 787 */ 788 788 if (soc_type == QCA_WCN3988) 789 789 rom_ver = ((soc_ver & 0x00000f00) >> 0x05) | (soc_ver & 0x0000000f); 790 + else if (soc_type == QCA_WCN3998) 791 + rom_ver = ((soc_ver & 0x0000f000) >> 0x07) | (soc_ver & 0x0000000f); 790 792 else 791 793 rom_ver = ((soc_ver & 0x00000f00) >> 0x04) | (soc_ver & 0x0000000f); 792 794
+2 -2
drivers/bus/simple-pm-bus.c
··· 36 36 * that's not listed in simple_pm_bus_of_match. We don't want to do any 37 37 * of the simple-pm-bus tasks for these devices, so return early. 38 38 */ 39 - if (pdev->driver_override) 39 + if (device_has_driver_override(&pdev->dev)) 40 40 return 0; 41 41 42 42 match = of_match_device(dev->driver->of_match_table, dev); ··· 78 78 { 79 79 const void *data = of_device_get_match_data(&pdev->dev); 80 80 81 - if (pdev->driver_override || data) 81 + if (device_has_driver_override(&pdev->dev) || data) 82 82 return; 83 83 84 84 dev_dbg(&pdev->dev, "%s\n", __func__);
+2 -2
drivers/cache/ax45mp_cache.c
··· 178 178 179 179 static int __init ax45mp_cache_init(void) 180 180 { 181 - struct device_node *np; 182 181 struct resource res; 183 182 int ret; 184 183 185 - np = of_find_matching_node(NULL, ax45mp_cache_ids); 184 + struct device_node *np __free(device_node) = 185 + of_find_matching_node(NULL, ax45mp_cache_ids); 186 186 if (!of_device_is_available(np)) 187 187 return -ENODEV; 188 188
+1 -2
drivers/clk/imx/clk-scu.c
··· 706 706 if (ret) 707 707 goto put_device; 708 708 709 - ret = driver_set_override(&pdev->dev, &pdev->driver_override, 710 - "imx-scu-clk", strlen("imx-scu-clk")); 709 + ret = device_set_driver_override(&pdev->dev, "imx-scu-clk"); 711 710 if (ret) 712 711 goto put_device; 713 712
+1 -3
drivers/crypto/ccp/sev-dev.c
··· 2408 2408 * in Firmware state on failure. Use snp_reclaim_pages() to 2409 2409 * transition either case back to Hypervisor-owned state. 2410 2410 */ 2411 - if (snp_reclaim_pages(__pa(data), 1, true)) { 2412 - snp_leak_pages(__page_to_pfn(status_page), 1); 2411 + if (snp_reclaim_pages(__pa(data), 1, true)) 2413 2412 return -EFAULT; 2414 - } 2415 2413 } 2416 2414 2417 2415 if (ret)
+7
drivers/crypto/padlock-sha.c
··· 332 332 if (!x86_match_cpu(padlock_sha_ids) || !boot_cpu_has(X86_FEATURE_PHE_EN)) 333 333 return -ENODEV; 334 334 335 + /* 336 + * Skip family 0x07 and newer used by Zhaoxin processors, 337 + * as the driver's self-tests fail on these CPUs. 338 + */ 339 + if (c->x86 >= 0x07) 340 + return -ENODEV; 341 + 335 342 /* Register the newly added algorithm module if on * 336 343 * VIA Nano processor, or else just do as before */ 337 344 if (c->x86_model < 0x0f) {
+3 -2
drivers/firewire/net.c
··· 257 257 memcpy((u8 *)hh->hh_data + HH_DATA_OFF(FWNET_HLEN), haddr, net->addr_len); 258 258 } 259 259 260 - static int fwnet_header_parse(const struct sk_buff *skb, unsigned char *haddr) 260 + static int fwnet_header_parse(const struct sk_buff *skb, const struct net_device *dev, 261 + unsigned char *haddr) 261 262 { 262 - memcpy(haddr, skb->dev->dev_addr, FWNET_ALEN); 263 + memcpy(haddr, dev->dev_addr, FWNET_ALEN); 263 264 264 265 return FWNET_ALEN; 265 266 }
+4 -4
drivers/firmware/arm_ffa/driver.c
··· 205 205 return 0; 206 206 } 207 207 208 - static int ffa_rxtx_unmap(u16 vm_id) 208 + static int ffa_rxtx_unmap(void) 209 209 { 210 210 ffa_value_t ret; 211 211 212 212 invoke_ffa_fn((ffa_value_t){ 213 - .a0 = FFA_RXTX_UNMAP, .a1 = PACK_TARGET_INFO(vm_id, 0), 213 + .a0 = FFA_RXTX_UNMAP, 214 214 }, &ret); 215 215 216 216 if (ret.a0 == FFA_ERROR) ··· 2097 2097 2098 2098 pr_err("failed to setup partitions\n"); 2099 2099 ffa_notifications_cleanup(); 2100 - ffa_rxtx_unmap(drv_info->vm_id); 2100 + ffa_rxtx_unmap(); 2101 2101 free_pages: 2102 2102 if (drv_info->tx_buffer) 2103 2103 free_pages_exact(drv_info->tx_buffer, rxtx_bufsz); ··· 2112 2112 { 2113 2113 ffa_notifications_cleanup(); 2114 2114 ffa_partitions_cleanup(); 2115 - ffa_rxtx_unmap(drv_info->vm_id); 2115 + ffa_rxtx_unmap(); 2116 2116 free_pages_exact(drv_info->tx_buffer, drv_info->rxtx_bufsz); 2117 2117 free_pages_exact(drv_info->rx_buffer, drv_info->rxtx_bufsz); 2118 2118 kfree(drv_info);
+2 -2
drivers/firmware/arm_scmi/notify.c
··· 1066 1066 * since at creation time we usually want to have all setup and ready before 1067 1067 * events really start flowing. 1068 1068 * 1069 - * Return: A properly refcounted handler on Success, NULL on Failure 1069 + * Return: A properly refcounted handler on Success, ERR_PTR on Failure 1070 1070 */ 1071 1071 static inline struct scmi_event_handler * 1072 1072 __scmi_event_handler_get_ops(struct scmi_notify_instance *ni, ··· 1113 1113 } 1114 1114 mutex_unlock(&ni->pending_mtx); 1115 1115 1116 - return hndl; 1116 + return hndl ?: ERR_PTR(-ENODEV); 1117 1117 } 1118 1118 1119 1119 static struct scmi_event_handler *
+2 -2
drivers/firmware/arm_scmi/protocols.h
··· 189 189 190 190 /** 191 191 * struct scmi_iterator_state - Iterator current state descriptor 192 - * @desc_index: Starting index for the current mulit-part request. 192 + * @desc_index: Starting index for the current multi-part request. 193 193 * @num_returned: Number of returned items in the last multi-part reply. 194 194 * @num_remaining: Number of remaining items in the multi-part message. 195 195 * @max_resources: Maximum acceptable number of items, configured by the caller 196 196 * depending on the underlying resources that it is querying. 197 197 * @loop_idx: The iterator loop index in the current multi-part reply. 198 - * @rx_len: Size in bytes of the currenly processed message; it can be used by 198 + * @rx_len: Size in bytes of the currently processed message; it can be used by 199 199 * the user of the iterator to verify a reply size. 200 200 * @priv: Optional pointer to some additional state-related private data setup 201 201 * by the caller during the iterations.
+3 -2
drivers/firmware/arm_scpi.c
··· 18 18 19 19 #include <linux/bitmap.h> 20 20 #include <linux/bitfield.h> 21 + #include <linux/cleanup.h> 21 22 #include <linux/device.h> 22 23 #include <linux/err.h> 23 24 #include <linux/export.h> ··· 941 940 int idx = scpi_drvinfo->num_chans; 942 941 struct scpi_chan *pchan = scpi_drvinfo->channels + idx; 943 942 struct mbox_client *cl = &pchan->cl; 944 - struct device_node *shmem = of_parse_phandle(np, "shmem", idx); 943 + struct device_node *shmem __free(device_node) = 944 + of_parse_phandle(np, "shmem", idx); 945 945 946 946 if (!of_match_node(shmem_of_match, shmem)) 947 947 return -ENXIO; 948 948 949 949 ret = of_address_to_resource(shmem, 0, &res); 950 - of_node_put(shmem); 951 950 if (ret) { 952 951 dev_err(dev, "failed to get SCPI payload mem resource\n"); 953 952 return ret;
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
··· 36 36 37 37 #define AMDGPU_BO_LIST_MAX_PRIORITY 32u 38 38 #define AMDGPU_BO_LIST_NUM_BUCKETS (AMDGPU_BO_LIST_MAX_PRIORITY + 1) 39 + #define AMDGPU_BO_LIST_MAX_ENTRIES (128 * 1024) 39 40 40 41 static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu) 41 42 { ··· 188 187 const uint32_t bo_info_size = in->bo_info_size; 189 188 const uint32_t bo_number = in->bo_number; 190 189 struct drm_amdgpu_bo_list_entry *info; 190 + 191 + if (bo_number > AMDGPU_BO_LIST_MAX_ENTRIES) 192 + return -EINVAL; 191 193 192 194 /* copy the handle array from userspace to a kernel buffer */ 193 195 if (likely(info_size == bo_info_size)) {
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1069 1069 } 1070 1070 1071 1071 /* Prepare a TLB flush fence to be attached to PTs */ 1072 - if (!params->unlocked) { 1072 + /* The check for need_tlb_fence should be dropped once we 1073 + * sort out the issues with KIQ/MES TLB invalidation timeouts. 1074 + */ 1075 + if (!params->unlocked && vm->need_tlb_fence) { 1073 1076 amdgpu_vm_tlb_fence_create(params->adev, vm, fence); 1074 1077 1075 1078 /* Makes sure no PD/PT is freed before the flush */ ··· 2605 2602 ttm_lru_bulk_move_init(&vm->lru_bulk_move); 2606 2603 2607 2604 vm->is_compute_context = false; 2605 + vm->need_tlb_fence = amdgpu_userq_enabled(&adev->ddev); 2608 2606 2609 2607 vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode & 2610 2608 AMDGPU_VM_USE_CPU_FOR_GFX); ··· 2743 2739 dma_fence_put(vm->last_update); 2744 2740 vm->last_update = dma_fence_get_stub(); 2745 2741 vm->is_compute_context = true; 2742 + vm->need_tlb_fence = true; 2746 2743 2747 2744 unreserve_bo: 2748 2745 amdgpu_bo_unreserve(vm->root.bo);
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
··· 441 441 struct ttm_lru_bulk_move lru_bulk_move; 442 442 /* Flag to indicate if VM is used for compute */ 443 443 bool is_compute_context; 444 + /* Flag to indicate if VM needs a TLB fence (KFD or KGD) */ 445 + bool need_tlb_fence; 444 446 445 447 /* Memory partition number, -1 means any partition */ 446 448 int8_t mem_id;
+14 -7
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 662 662 } else { 663 663 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 664 664 case IP_VERSION(9, 0, 0): 665 - mmhub_cid = mmhub_client_ids_vega10[cid][rw]; 665 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega10) ? 666 + mmhub_client_ids_vega10[cid][rw] : NULL; 666 667 break; 667 668 case IP_VERSION(9, 3, 0): 668 - mmhub_cid = mmhub_client_ids_vega12[cid][rw]; 669 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega12) ? 670 + mmhub_client_ids_vega12[cid][rw] : NULL; 669 671 break; 670 672 case IP_VERSION(9, 4, 0): 671 - mmhub_cid = mmhub_client_ids_vega20[cid][rw]; 673 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega20) ? 674 + mmhub_client_ids_vega20[cid][rw] : NULL; 672 675 break; 673 676 case IP_VERSION(9, 4, 1): 674 - mmhub_cid = mmhub_client_ids_arcturus[cid][rw]; 677 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_arcturus) ? 678 + mmhub_client_ids_arcturus[cid][rw] : NULL; 675 679 break; 676 680 case IP_VERSION(9, 1, 0): 677 681 case IP_VERSION(9, 2, 0): 678 - mmhub_cid = mmhub_client_ids_raven[cid][rw]; 682 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_raven) ? 683 + mmhub_client_ids_raven[cid][rw] : NULL; 679 684 break; 680 685 case IP_VERSION(1, 5, 0): 681 686 case IP_VERSION(2, 4, 0): 682 - mmhub_cid = mmhub_client_ids_renoir[cid][rw]; 687 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_renoir) ? 688 + mmhub_client_ids_renoir[cid][rw] : NULL; 683 689 break; 684 690 case IP_VERSION(1, 8, 0): 685 691 case IP_VERSION(9, 4, 2): 686 - mmhub_cid = mmhub_client_ids_aldebaran[cid][rw]; 692 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_aldebaran) ? 693 + mmhub_client_ids_aldebaran[cid][rw] : NULL; 687 694 break; 688 695 default: 689 696 mmhub_cid = NULL;
+2 -2
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
··· 129 129 if (!pdev) 130 130 return -EINVAL; 131 131 132 - if (!dev->type->name) { 132 + if (!dev->type || !dev->type->name) { 133 133 drm_dbg(&adev->ddev, "Invalid device type to add\n"); 134 134 goto exit; 135 135 } ··· 165 165 if (!pdev) 166 166 return -EINVAL; 167 167 168 - if (!dev->type->name) { 168 + if (!dev->type || !dev->type->name) { 169 169 drm_dbg(&adev->ddev, "Invalid device type to remove\n"); 170 170 goto exit; 171 171 }
+6 -3
drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
··· 154 154 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 155 155 case IP_VERSION(2, 0, 0): 156 156 case IP_VERSION(2, 0, 2): 157 - mmhub_cid = mmhub_client_ids_navi1x[cid][rw]; 157 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_navi1x) ? 158 + mmhub_client_ids_navi1x[cid][rw] : NULL; 158 159 break; 159 160 case IP_VERSION(2, 1, 0): 160 161 case IP_VERSION(2, 1, 1): 161 - mmhub_cid = mmhub_client_ids_sienna_cichlid[cid][rw]; 162 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_sienna_cichlid) ? 163 + mmhub_client_ids_sienna_cichlid[cid][rw] : NULL; 162 164 break; 163 165 case IP_VERSION(2, 1, 2): 164 - mmhub_cid = mmhub_client_ids_beige_goby[cid][rw]; 166 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_beige_goby) ? 167 + mmhub_client_ids_beige_goby[cid][rw] : NULL; 165 168 break; 166 169 default: 167 170 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
··· 94 94 case IP_VERSION(2, 3, 0): 95 95 case IP_VERSION(2, 4, 0): 96 96 case IP_VERSION(2, 4, 1): 97 - mmhub_cid = mmhub_client_ids_vangogh[cid][rw]; 97 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vangogh) ? 98 + mmhub_client_ids_vangogh[cid][rw] : NULL; 98 99 break; 99 100 default: 100 101 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
··· 110 110 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 111 111 case IP_VERSION(3, 0, 0): 112 112 case IP_VERSION(3, 0, 1): 113 - mmhub_cid = mmhub_client_ids_v3_0_0[cid][rw]; 113 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_0) ? 114 + mmhub_client_ids_v3_0_0[cid][rw] : NULL; 114 115 break; 115 116 default: 116 117 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
··· 117 117 118 118 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 119 119 case IP_VERSION(3, 0, 1): 120 - mmhub_cid = mmhub_client_ids_v3_0_1[cid][rw]; 120 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_1) ? 121 + mmhub_client_ids_v3_0_1[cid][rw] : NULL; 121 122 break; 122 123 default: 123 124 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.c
··· 108 108 "MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n", 109 109 status); 110 110 111 - mmhub_cid = mmhub_client_ids_v3_0_2[cid][rw]; 111 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_2) ? 112 + mmhub_client_ids_v3_0_2[cid][rw] : NULL; 112 113 dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n", 113 114 mmhub_cid ? mmhub_cid : "unknown", cid); 114 115 dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
··· 102 102 status); 103 103 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 104 104 case IP_VERSION(4, 1, 0): 105 - mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw]; 105 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_1_0) ? 106 + mmhub_client_ids_v4_1_0[cid][rw] : NULL; 106 107 break; 107 108 default: 108 109 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v4_2_0.c
··· 688 688 status); 689 689 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 690 690 case IP_VERSION(4, 2, 0): 691 - mmhub_cid = mmhub_client_ids_v4_2_0[cid][rw]; 691 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_2_0) ? 692 + mmhub_client_ids_v4_2_0[cid][rw] : NULL; 692 693 break; 693 694 default: 694 695 mmhub_cid = NULL;
+3 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2554 2554 fw_meta_info_params.fw_inst_const = adev->dm.dmub_fw->data + 2555 2555 le32_to_cpu(hdr->header.ucode_array_offset_bytes) + 2556 2556 PSP_HEADER_BYTES_256; 2557 - fw_meta_info_params.fw_bss_data = region_params.bss_data_size ? adev->dm.dmub_fw->data + 2557 + fw_meta_info_params.fw_bss_data = fw_meta_info_params.bss_data_size ? adev->dm.dmub_fw->data + 2558 2558 le32_to_cpu(hdr->header.ucode_array_offset_bytes) + 2559 2559 le32_to_cpu(hdr->inst_const_bytes) : NULL; 2560 2560 fw_meta_info_params.custom_psp_footer_size = 0; ··· 13119 13119 u16 min_vfreq; 13120 13120 u16 max_vfreq; 13121 13121 13122 - if (edid == NULL || edid->extensions == 0) 13122 + if (!edid || !edid->extensions) 13123 13123 return; 13124 13124 13125 13125 /* Find DisplayID extension */ ··· 13129 13129 break; 13130 13130 } 13131 13131 13132 - if (edid_ext == NULL) 13132 + if (i == edid->extensions) 13133 13133 return; 13134 13134 13135 13135 while (j < EDID_LENGTH) {
+3 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_colorop.c
··· 37 37 BIT(DRM_COLOROP_1D_CURVE_SRGB_EOTF) | 38 38 BIT(DRM_COLOROP_1D_CURVE_PQ_125_EOTF) | 39 39 BIT(DRM_COLOROP_1D_CURVE_BT2020_INV_OETF) | 40 - BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV); 40 + BIT(DRM_COLOROP_1D_CURVE_GAMMA22); 41 41 42 42 const u64 amdgpu_dm_supported_shaper_tfs = 43 43 BIT(DRM_COLOROP_1D_CURVE_SRGB_INV_EOTF) | 44 44 BIT(DRM_COLOROP_1D_CURVE_PQ_125_INV_EOTF) | 45 45 BIT(DRM_COLOROP_1D_CURVE_BT2020_OETF) | 46 - BIT(DRM_COLOROP_1D_CURVE_GAMMA22); 46 + BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV); 47 47 48 48 const u64 amdgpu_dm_supported_blnd_tfs = 49 49 BIT(DRM_COLOROP_1D_CURVE_SRGB_EOTF) | 50 50 BIT(DRM_COLOROP_1D_CURVE_PQ_125_EOTF) | 51 51 BIT(DRM_COLOROP_1D_CURVE_BT2020_INV_OETF) | 52 - BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV); 52 + BIT(DRM_COLOROP_1D_CURVE_GAMMA22); 53 53 54 54 #define MAX_COLOR_PIPELINE_OPS 10 55 55
+4 -4
drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
··· 255 255 BREAK_TO_DEBUGGER(); 256 256 return NULL; 257 257 } 258 + if (ctx->dce_version == DCN_VERSION_2_01) { 259 + dcn201_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 260 + return &clk_mgr->base; 261 + } 258 262 if (ASICREV_IS_SIENNA_CICHLID_P(asic_id.hw_internal_rev)) { 259 263 dcn3_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 260 264 return &clk_mgr->base; ··· 269 265 } 270 266 if (ASICREV_IS_BEIGE_GOBY_P(asic_id.hw_internal_rev)) { 271 267 dcn3_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 272 - return &clk_mgr->base; 273 - } 274 - if (ctx->dce_version == DCN_VERSION_2_01) { 275 - dcn201_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 276 268 return &clk_mgr->base; 277 269 } 278 270 dcn20_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+3
drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
··· 1785 1785 1786 1786 dc->res_pool->funcs->calculate_wm_and_dlg(dc, context, pipes, pipe_cnt, vlevel); 1787 1787 1788 + DC_FP_START(); 1788 1789 dcn32_override_min_req_memclk(dc, context); 1790 + DC_FP_END(); 1791 + 1789 1792 dcn32_override_min_req_dcfclk(dc, context); 1790 1793 1791 1794 BW_VAL_TRACE_END_WATERMARKS();
+3 -1
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
··· 3454 3454 if (adev->asic_type == CHIP_HAINAN) { 3455 3455 if ((adev->pdev->revision == 0x81) || 3456 3456 (adev->pdev->revision == 0xC3) || 3457 + (adev->pdev->device == 0x6660) || 3457 3458 (adev->pdev->device == 0x6664) || 3458 3459 (adev->pdev->device == 0x6665) || 3459 - (adev->pdev->device == 0x6667)) { 3460 + (adev->pdev->device == 0x6667) || 3461 + (adev->pdev->device == 0x666F)) { 3460 3462 max_sclk = 75000; 3461 3463 } 3462 3464 if ((adev->pdev->revision == 0xC3) ||
+1 -1
drivers/gpu/drm/bridge/synopsys/dw-hdmi-qp.c
··· 848 848 849 849 regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS0, &header_bytes, 1); 850 850 regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS1, &buffer[3], 1); 851 - regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS2, &buffer[4], 1); 851 + regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS2, &buffer[7], 1); 852 852 853 853 /* Enable ACR, AUDI, AMD */ 854 854 dw_hdmi_qp_mod(hdmi,
+4 -1
drivers/gpu/drm/drm_file.c
··· 233 233 void drm_file_free(struct drm_file *file) 234 234 { 235 235 struct drm_device *dev; 236 + int idx; 236 237 237 238 if (!file) 238 239 return; ··· 250 249 251 250 drm_events_release(file); 252 251 253 - if (drm_core_check_feature(dev, DRIVER_MODESET)) { 252 + if (drm_core_check_feature(dev, DRIVER_MODESET) && 253 + drm_dev_enter(dev, &idx)) { 254 254 drm_fb_release(file); 255 255 drm_property_destroy_user_blobs(dev, file); 256 + drm_dev_exit(idx); 256 257 } 257 258 258 259 if (drm_core_check_feature(dev, DRIVER_SYNCOBJ))
+6 -3
drivers/gpu/drm/drm_mode_config.c
··· 577 577 */ 578 578 WARN_ON(!list_empty(&dev->mode_config.fb_list)); 579 579 list_for_each_entry_safe(fb, fbt, &dev->mode_config.fb_list, head) { 580 - struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]"); 580 + if (list_empty(&fb->filp_head) || drm_framebuffer_read_refcount(fb) > 1) { 581 + struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]"); 581 582 582 - drm_printf(&p, "framebuffer[%u]:\n", fb->base.id); 583 - drm_framebuffer_print_info(&p, 1, fb); 583 + drm_printf(&p, "framebuffer[%u]:\n", fb->base.id); 584 + drm_framebuffer_print_info(&p, 1, fb); 585 + } 586 + list_del_init(&fb->filp_head); 584 587 drm_framebuffer_free(&fb->base.refcount); 585 588 } 586 589
+5 -9
drivers/gpu/drm/drm_pagemap_util.c
··· 65 65 drm_dbg(cache->shrinker->drm, "Destroying dpagemap cache.\n"); 66 66 spin_lock(&cache->lock); 67 67 dpagemap = cache->dpagemap; 68 - if (!dpagemap) { 69 - spin_unlock(&cache->lock); 70 - goto out; 71 - } 68 + cache->dpagemap = NULL; 69 + if (dpagemap && !drm_pagemap_shrinker_cancel(dpagemap)) 70 + dpagemap = NULL; 71 + spin_unlock(&cache->lock); 72 72 73 - if (drm_pagemap_shrinker_cancel(dpagemap)) { 74 - cache->dpagemap = NULL; 75 - spin_unlock(&cache->lock); 73 + if (dpagemap) 76 74 drm_pagemap_destroy(dpagemap, false); 77 - } 78 75 79 - out: 80 76 mutex_destroy(&cache->lookup_mutex); 81 77 kfree(cache); 82 78 }
+1 -1
drivers/gpu/drm/i915/display/intel_display_power_well.c
··· 806 806 power_domains->dc_state, val & mask); 807 807 808 808 enable_dc6 = state & DC_STATE_EN_UPTO_DC6; 809 - dc6_was_enabled = val & DC_STATE_EN_UPTO_DC6; 809 + dc6_was_enabled = power_domains->dc_state & DC_STATE_EN_UPTO_DC6; 810 810 if (!dc6_was_enabled && enable_dc6) 811 811 intel_dmc_update_dc6_allowed_count(display, true); 812 812
+1
drivers/gpu/drm/i915/display/intel_display_types.h
··· 1186 1186 u32 dc3co_exitline; 1187 1187 u16 su_y_granularity; 1188 1188 u8 active_non_psr_pipes; 1189 + u8 entry_setup_frames; 1189 1190 const char *no_psr_reason; 1190 1191 1191 1192 /*
+1 -2
drivers/gpu/drm/i915/display/intel_dmc.c
··· 1599 1599 return false; 1600 1600 1601 1601 mutex_lock(&power_domains->lock); 1602 - dc6_enabled = intel_de_read(display, DC_STATE_EN) & 1603 - DC_STATE_EN_UPTO_DC6; 1602 + dc6_enabled = power_domains->dc_state & DC_STATE_EN_UPTO_DC6; 1604 1603 if (dc6_enabled) 1605 1604 intel_dmc_update_dc6_allowed_count(display, false); 1606 1605
+5 -2
drivers/gpu/drm/i915/display/intel_psr.c
··· 1717 1717 entry_setup_frames = intel_psr_entry_setup_frames(intel_dp, conn_state, adjusted_mode); 1718 1718 1719 1719 if (entry_setup_frames >= 0) { 1720 - intel_dp->psr.entry_setup_frames = entry_setup_frames; 1720 + crtc_state->entry_setup_frames = entry_setup_frames; 1721 1721 } else { 1722 1722 crtc_state->no_psr_reason = "PSR setup timing not met"; 1723 1723 drm_dbg_kms(display->drm, ··· 1815 1815 { 1816 1816 struct intel_display *display = to_intel_display(intel_dp); 1817 1817 1818 - return (DISPLAY_VER(display) == 20 && intel_dp->psr.entry_setup_frames > 0 && 1818 + return (DISPLAY_VER(display) == 20 && crtc_state->entry_setup_frames > 0 && 1819 1819 !crtc_state->has_sel_update); 1820 1820 } 1821 1821 ··· 2189 2189 intel_dp->psr.pkg_c_latency_used = crtc_state->pkg_c_latency_used; 2190 2190 intel_dp->psr.io_wake_lines = crtc_state->alpm_state.io_wake_lines; 2191 2191 intel_dp->psr.fast_wake_lines = crtc_state->alpm_state.fast_wake_lines; 2192 + intel_dp->psr.entry_setup_frames = crtc_state->entry_setup_frames; 2192 2193 2193 2194 if (!psr_interrupt_error_check(intel_dp)) 2194 2195 return; ··· 3110 3109 * - Display WA #1136: skl, bxt 3111 3110 */ 3112 3111 if (intel_crtc_needs_modeset(new_crtc_state) || 3112 + new_crtc_state->update_m_n || 3113 + new_crtc_state->update_lrr || 3113 3114 !new_crtc_state->has_psr || 3114 3115 !new_crtc_state->active_planes || 3115 3116 new_crtc_state->has_sel_update != psr->sel_update_enabled ||
+2 -1
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 1967 1967 if (engine->sanitize) 1968 1968 engine->sanitize(engine); 1969 1969 1970 - engine->set_default_submission(engine); 1970 + if (engine->set_default_submission) 1971 + engine->set_default_submission(engine); 1971 1972 } 1972 1973 } 1973 1974
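The intel_engine_cs.c fix is the classic optional-callback guard: check the function pointer before invoking it. A minimal sketch with hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative engine with an optional submission hook. */
struct engine {
	void (*set_default_submission)(struct engine *engine);
	int submissions_set;
};

static void set_submission(struct engine *e)
{
	e->submissions_set++;
}

static void engine_sanitize(struct engine *e)
{
	/* Optional hook: engines without one must not be dereferenced. */
	if (e->set_default_submission)
		e->set_default_submission(e);
}
```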
-17
drivers/gpu/drm/imagination/pvr_device.c
··· 225 225 } 226 226 227 227 if (pvr_dev->has_safety_events) { 228 - int err; 229 - 230 - /* 231 - * Ensure the GPU is powered on since some safety events (such 232 - * as ECC faults) can happen outside of job submissions, which 233 - * are otherwise the only time a power reference is held. 234 - */ 235 - err = pvr_power_get(pvr_dev); 236 - if (err) { 237 - drm_err_ratelimited(drm_dev, 238 - "%s: could not take power reference (%d)\n", 239 - __func__, err); 240 - return ret; 241 - } 242 - 243 228 while (pvr_device_safety_irq_pending(pvr_dev)) { 244 229 pvr_device_safety_irq_clear(pvr_dev); 245 230 pvr_device_handle_safety_events(pvr_dev); 246 231 247 232 ret = IRQ_HANDLED; 248 233 } 249 - 250 - pvr_power_put(pvr_dev); 251 234 } 252 235 253 236 return ret;
+39 -12
drivers/gpu/drm/imagination/pvr_power.c
··· 90 90 } 91 91 92 92 static int 93 - pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset) 93 + pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset, bool rpm_suspend) 94 94 { 95 - if (!hard_reset) { 96 - int err; 95 + int err; 97 96 97 + if (!hard_reset) { 98 98 cancel_delayed_work_sync(&pvr_dev->watchdog.work); 99 99 100 100 err = pvr_power_request_idle(pvr_dev); ··· 106 106 return err; 107 107 } 108 108 109 - return pvr_fw_stop(pvr_dev); 109 + if (rpm_suspend) { 110 + /* This also waits for late processing of GPU or firmware IRQs in other cores */ 111 + disable_irq(pvr_dev->irq); 112 + } 113 + 114 + err = pvr_fw_stop(pvr_dev); 115 + if (err && rpm_suspend) 116 + enable_irq(pvr_dev->irq); 117 + 118 + return err; 110 119 } 111 120 112 121 static int 113 - pvr_power_fw_enable(struct pvr_device *pvr_dev) 122 + pvr_power_fw_enable(struct pvr_device *pvr_dev, bool rpm_resume) 114 123 { 115 124 int err; 116 125 126 + if (rpm_resume) 127 + enable_irq(pvr_dev->irq); 128 + 117 129 err = pvr_fw_start(pvr_dev); 118 130 if (err) 119 - return err; 131 + goto out; 120 132 121 133 err = pvr_wait_for_fw_boot(pvr_dev); 122 134 if (err) { 123 135 drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n"); 124 136 pvr_fw_stop(pvr_dev); 125 - return err; 137 + goto out; 126 138 } 127 139 128 140 queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work, 129 141 msecs_to_jiffies(WATCHDOG_TIME_MS)); 130 142 131 143 return 0; 144 + 145 + out: 146 + if (rpm_resume) 147 + disable_irq(pvr_dev->irq); 148 + 149 + return err; 132 150 } 133 151 134 152 bool ··· 379 361 return -EIO; 380 362 381 363 if (pvr_dev->fw_dev.booted) { 382 - err = pvr_power_fw_disable(pvr_dev, false); 364 + err = pvr_power_fw_disable(pvr_dev, false, true); 383 365 if (err) 384 366 goto err_drm_dev_exit; 385 367 } ··· 409 391 goto err_drm_dev_exit; 410 392 411 393 if (pvr_dev->fw_dev.booted) { 412 - err = pvr_power_fw_enable(pvr_dev); 394 + err = pvr_power_fw_enable(pvr_dev, true); 
413 395 if (err) 414 396 goto err_power_off; 415 397 } ··· 528 510 } 529 511 530 512 /* Disable IRQs for the duration of the reset. */ 531 - disable_irq(pvr_dev->irq); 513 + if (hard_reset) { 514 + disable_irq(pvr_dev->irq); 515 + } else { 516 + /* 517 + * Soft reset is triggered as a response to a FW command to the Host and is 518 + * processed from the threaded IRQ handler. This code cannot (nor needs to) 519 + * wait for any IRQ processing to complete. 520 + */ 521 + disable_irq_nosync(pvr_dev->irq); 522 + } 532 523 533 524 do { 534 525 if (hard_reset) { ··· 545 518 queues_disabled = true; 546 519 } 547 520 548 - err = pvr_power_fw_disable(pvr_dev, hard_reset); 521 + err = pvr_power_fw_disable(pvr_dev, hard_reset, false); 549 522 if (!err) { 550 523 if (hard_reset) { 551 524 pvr_dev->fw_dev.booted = false; ··· 568 541 569 542 pvr_fw_irq_clear(pvr_dev); 570 543 571 - err = pvr_power_fw_enable(pvr_dev); 544 + err = pvr_power_fw_enable(pvr_dev, false); 572 545 } 573 546 574 547 if (err && hard_reset)
+3 -1
drivers/gpu/drm/radeon/si_dpm.c
··· 2915 2915 if (rdev->family == CHIP_HAINAN) { 2916 2916 if ((rdev->pdev->revision == 0x81) || 2917 2917 (rdev->pdev->revision == 0xC3) || 2918 + (rdev->pdev->device == 0x6660) || 2918 2919 (rdev->pdev->device == 0x6664) || 2919 2920 (rdev->pdev->device == 0x6665) || 2920 - (rdev->pdev->device == 0x6667)) { 2921 + (rdev->pdev->device == 0x6667) || 2922 + (rdev->pdev->device == 0x666F)) { 2921 2923 max_sclk = 75000; 2922 2924 } 2923 2925 if ((rdev->pdev->revision == 0xC3) ||
+57 -36
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 96 96 97 97 struct vmw_res_func; 98 98 99 + struct vmw_bo; 100 + struct vmw_bo; 101 + struct vmw_resource_dirty; 102 + 99 103 /** 100 - * struct vmw-resource - base class for hardware resources 104 + * struct vmw_resource - base class for hardware resources 101 105 * 102 106 * @kref: For refcounting. 103 107 * @dev_priv: Pointer to the device private for this resource. Immutable. 104 108 * @id: Device id. Protected by @dev_priv::resource_lock. 109 + * @used_prio: Priority for this resource. 105 110 * @guest_memory_size: Guest memory buffer size. Immutable. 106 111 * @res_dirty: Resource contains data not yet in the guest memory buffer. 107 112 * Protected by resource reserved. ··· 122 117 * pin-count greater than zero. It is not on the resource LRU lists and its 123 118 * guest memory buffer is pinned. Hence it can't be evicted. 124 119 * @func: Method vtable for this resource. Immutable. 125 - * @mob_node; Node for the MOB guest memory rbtree. Protected by 120 + * @mob_node: Node for the MOB guest memory rbtree. Protected by 126 121 * @guest_memory_bo reserved. 127 122 * @lru_head: List head for the LRU list. Protected by @dev_priv::resource_lock. 128 123 * @binding_head: List head for the context binding list. Protected by 129 124 * the @dev_priv::binding_mutex 125 + * @dirty: resource's dirty tracker 130 126 * @res_free: The resource destructor. 131 127 * @hw_destroy: Callback to destroy the resource on the device, as part of 132 128 * resource destruction. 133 129 */ 134 - struct vmw_bo; 135 - struct vmw_bo; 136 - struct vmw_resource_dirty; 137 130 struct vmw_resource { 138 131 struct kref kref; 139 132 struct vmw_private *dev_priv; ··· 199 196 * @quality_level: Quality level. 200 197 * @autogen_filter: Filter for automatically generated mipmaps. 201 198 * @array_size: Number of array elements for a 1D/2D texture. For cubemap 202 - texture number of faces * array_size. This should be 0 for pre 203 - SM4 device. 199 + * texture number of faces * array_size. 
This should be 0 for pre 200 + * SM4 device. 204 201 * @buffer_byte_stride: Buffer byte stride. 205 202 * @num_sizes: Size of @sizes. For GB surface this should always be 1. 206 203 * @base_size: Surface dimension. ··· 268 265 struct vmw_res_cache_entry { 269 266 uint32_t handle; 270 267 struct vmw_resource *res; 268 + /* private: */ 271 269 void *private; 270 + /* public: */ 272 271 unsigned short valid_handle; 273 272 unsigned short valid; 274 273 }; 275 274 276 275 /** 277 276 * enum vmw_dma_map_mode - indicate how to perform TTM page dma mappings. 277 + * @vmw_dma_alloc_coherent: Use TTM coherent pages 278 + * @vmw_dma_map_populate: Unmap from DMA just after unpopulate 279 + * @vmw_dma_map_bind: Unmap from DMA just before unbind 278 280 */ 279 281 enum vmw_dma_map_mode { 280 - vmw_dma_alloc_coherent, /* Use TTM coherent pages */ 281 - vmw_dma_map_populate, /* Unmap from DMA just after unpopulate */ 282 - vmw_dma_map_bind, /* Unmap from DMA just before unbind */ 282 + vmw_dma_alloc_coherent, 283 + vmw_dma_map_populate, 284 + vmw_dma_map_bind, 285 + /* private: */ 283 286 vmw_dma_map_max 284 287 }; 285 288 ··· 293 284 * struct vmw_sg_table - Scatter/gather table for binding, with additional 294 285 * device-specific information. 295 286 * 287 + * @mode: which page mapping mode to use 288 + * @pages: Array of page pointers to the pages. 289 + * @addrs: DMA addresses to the pages if coherent pages are used. 296 290 * @sgt: Pointer to a struct sg_table with binding information 297 - * @num_regions: Number of regions with device-address contiguous pages 291 + * @num_pages: Number of @pages 298 292 */ 299 293 struct vmw_sg_table { 300 294 enum vmw_dma_map_mode mode; ··· 365 353 * than from user-space 366 354 * @fp: If @kernel is false, points to the file of the client. 
Otherwise 367 355 * NULL 356 + * @filp: DRM state for this file 368 357 * @cmd_bounce: Command bounce buffer used for command validation before 369 358 * copying to fifo space 370 359 * @cmd_bounce_size: Current command bounce buffer size ··· 742 729 bool vmwgfx_supported(struct vmw_private *vmw); 743 730 744 731 745 - /** 732 + /* 746 733 * GMR utilities - vmwgfx_gmr.c 747 734 */ 748 735 ··· 752 739 int gmr_id); 753 740 extern void vmw_gmr_unbind(struct vmw_private *dev_priv, int gmr_id); 754 741 755 - /** 742 + /* 756 743 * User handles 757 744 */ 758 745 struct vmw_user_object { ··· 772 759 void vmw_user_object_unmap(struct vmw_user_object *uo); 773 760 bool vmw_user_object_is_mapped(struct vmw_user_object *uo); 774 761 775 - /** 762 + /* 776 763 * Resource utilities - vmwgfx_resource.c 777 764 */ 778 765 struct vmw_user_resource_conv; ··· 832 819 return !RB_EMPTY_NODE(&res->mob_node); 833 820 } 834 821 835 - /** 822 + /* 836 823 * GEM related functionality - vmwgfx_gem.c 837 824 */ 838 825 struct vmw_bo_params; ··· 846 833 struct drm_file *filp); 847 834 extern void vmw_debugfs_gem_init(struct vmw_private *vdev); 848 835 849 - /** 836 + /* 850 837 * Misc Ioctl functionality - vmwgfx_ioctl.c 851 838 */ 852 839 ··· 859 846 extern int vmw_present_readback_ioctl(struct drm_device *dev, void *data, 860 847 struct drm_file *file_priv); 861 848 862 - /** 849 + /* 863 850 * Fifo utilities - vmwgfx_fifo.c 864 851 */ 865 852 ··· 893 880 894 881 895 882 /** 896 - * vmw_fifo_caps - Returns the capabilities of the FIFO command 883 + * vmw_fifo_caps - Get the capabilities of the FIFO command 897 884 * queue or 0 if fifo memory isn't present. 
898 885 * @dev_priv: The device private context 886 + * 887 + * Returns: capabilities of the FIFO command or %0 if fifo memory not present 899 888 */ 900 889 static inline uint32_t vmw_fifo_caps(const struct vmw_private *dev_priv) 901 890 { ··· 908 893 909 894 910 895 /** 911 - * vmw_is_cursor_bypass3_enabled - Returns TRUE iff Cursor Bypass 3 912 - * is enabled in the FIFO. 896 + * vmw_is_cursor_bypass3_enabled - check Cursor Bypass 3 enabled setting 897 + * in the FIFO. 913 898 * @dev_priv: The device private context 899 + * 900 + * Returns: %true iff Cursor Bypass 3 is enabled in the FIFO 914 901 */ 915 902 static inline bool 916 903 vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv) ··· 920 903 return (vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_CURSOR_BYPASS_3) != 0; 921 904 } 922 905 923 - /** 906 + /* 924 907 * TTM buffer object driver - vmwgfx_ttm_buffer.c 925 908 */ 926 909 ··· 944 927 * 945 928 * @viter: Pointer to the iterator to advance. 946 929 * 947 - * Returns false if past the list of pages, true otherwise. 930 + * Returns: false if past the list of pages, true otherwise. 948 931 */ 949 932 static inline bool vmw_piter_next(struct vmw_piter *viter) 950 933 { ··· 956 939 * 957 940 * @viter: Pointer to the iterator 958 941 * 959 - * Returns the DMA address of the page pointed to by @viter. 942 + * Returns: the DMA address of the page pointed to by @viter. 960 943 */ 961 944 static inline dma_addr_t vmw_piter_dma_addr(struct vmw_piter *viter) 962 945 { ··· 968 951 * 969 952 * @viter: Pointer to the iterator 970 953 * 971 - * Returns the DMA address of the page pointed to by @viter. 954 + * Returns: the DMA address of the page pointed to by @viter. 
972 955 */ 973 956 static inline struct page *vmw_piter_page(struct vmw_piter *viter) 974 957 { 975 958 return viter->pages[viter->i]; 976 959 } 977 960 978 - /** 961 + /* 979 962 * Command submission - vmwgfx_execbuf.c 980 963 */ 981 964 ··· 1010 993 int32_t out_fence_fd); 1011 994 bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd); 1012 995 1013 - /** 996 + /* 1014 997 * IRQs and wating - vmwgfx_irq.c 1015 998 */ 1016 999 ··· 1033 1016 bool vmw_generic_waiter_remove(struct vmw_private *dev_priv, 1034 1017 u32 flag, int *waiter_count); 1035 1018 1036 - /** 1019 + /* 1037 1020 * Kernel modesetting - vmwgfx_kms.c 1038 1021 */ 1039 1022 ··· 1065 1048 extern void vmw_resource_unpin(struct vmw_resource *res); 1066 1049 extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res); 1067 1050 1068 - /** 1051 + /* 1069 1052 * Overlay control - vmwgfx_overlay.c 1070 1053 */ 1071 1054 ··· 1080 1063 int vmw_overlay_num_overlays(struct vmw_private *dev_priv); 1081 1064 int vmw_overlay_num_free_overlays(struct vmw_private *dev_priv); 1082 1065 1083 - /** 1066 + /* 1084 1067 * GMR Id manager 1085 1068 */ 1086 1069 1087 1070 int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type); 1088 1071 void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type); 1089 1072 1090 - /** 1073 + /* 1091 1074 * System memory manager 1092 1075 */ 1093 1076 int vmw_sys_man_init(struct vmw_private *dev_priv); 1094 1077 void vmw_sys_man_fini(struct vmw_private *dev_priv); 1095 1078 1096 - /** 1079 + /* 1097 1080 * Prime - vmwgfx_prime.c 1098 1081 */ 1099 1082 ··· 1309 1292 * @line: The current line of the blit. 1310 1293 * @line_offset: Offset of the current line segment. 1311 1294 * @cpp: Bytes per pixel (granularity information). 1312 - * @memcpy: Which memcpy function to use. 1295 + * @do_cpy: Which memcpy function to use. 
1313 1296 */ 1314 1297 struct vmw_diff_cpy { 1315 1298 struct drm_rect rect; ··· 1397 1380 1398 1381 /** 1399 1382 * VMW_DEBUG_KMS - Debug output for kernel mode-setting 1383 + * @fmt: format string for the args 1400 1384 * 1401 1385 * This macro is for debugging vmwgfx mode-setting code. 1402 1386 */ 1403 1387 #define VMW_DEBUG_KMS(fmt, ...) \ 1404 1388 DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__) 1405 1389 1406 - /** 1390 + /* 1407 1391 * Inline helper functions 1408 1392 */ 1409 1393 ··· 1435 1417 1436 1418 /** 1437 1419 * vmw_fifo_mem_read - Perform a MMIO read from the fifo memory 1438 - * 1420 + * @vmw: The device private structure 1439 1421 * @fifo_reg: The fifo register to read from 1440 1422 * 1441 1423 * This function is intended to be equivalent to ioread32() on 1442 1424 * memremap'd memory, but without byteswapping. 1425 + * 1426 + * Returns: the value read 1443 1427 */ 1444 1428 static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg) 1445 1429 { ··· 1451 1431 1452 1432 /** 1453 1433 * vmw_fifo_mem_write - Perform a MMIO write to volatile memory 1454 - * 1455 - * @addr: The fifo register to write to 1434 + * @vmw: The device private structure 1435 + * @fifo_reg: The fifo register to write to 1436 + * @value: The value to write 1456 1437 * 1457 1438 * This function is intended to be equivalent to iowrite32 on 1458 1439 * memremap'd memory, but without byteswapping.
+2 -1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 771 771 ret = vmw_bo_dirty_add(bo); 772 772 if (!ret && surface && surface->res.func->dirty_alloc) { 773 773 surface->res.coherent = true; 774 - ret = surface->res.func->dirty_alloc(&surface->res); 774 + if (surface->res.dirty == NULL) 775 + ret = surface->res.func->dirty_alloc(&surface->res); 775 776 } 776 777 ttm_bo_unreserve(&bo->tbo); 777 778 }
+4 -6
drivers/gpu/drm/xe/xe_ggtt.c
··· 313 313 { 314 314 struct xe_ggtt *ggtt = arg; 315 315 316 + scoped_guard(mutex, &ggtt->lock) 317 + ggtt->flags &= ~XE_GGTT_FLAGS_ONLINE; 316 318 drain_workqueue(ggtt->wq); 317 319 } 318 320 ··· 379 377 if (err) 380 378 return err; 381 379 380 + ggtt->flags |= XE_GGTT_FLAGS_ONLINE; 382 381 err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt); 383 382 if (err) 384 383 return err; ··· 413 410 static void ggtt_node_remove(struct xe_ggtt_node *node) 414 411 { 415 412 struct xe_ggtt *ggtt = node->ggtt; 416 - struct xe_device *xe = tile_to_xe(ggtt->tile); 417 413 bool bound; 418 - int idx; 419 - 420 - bound = drm_dev_enter(&xe->drm, &idx); 421 414 422 415 mutex_lock(&ggtt->lock); 416 + bound = ggtt->flags & XE_GGTT_FLAGS_ONLINE; 423 417 if (bound) 424 418 xe_ggtt_clear(ggtt, node->base.start, node->base.size); 425 419 drm_mm_remove_node(&node->base); ··· 428 428 429 429 if (node->invalidate_on_remove) 430 430 xe_ggtt_invalidate(ggtt); 431 - 432 - drm_dev_exit(idx); 433 431 434 432 free_node: 435 433 xe_ggtt_node_fini(node);
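The xe_ggtt.c change replaces drm_dev_enter() with an ONLINE flag that is cleared under the same lock node removal takes, so removals after teardown skip the hardware writes while the software bookkeeping still runs. A sketch of that flag-under-lock shape, assuming illustrative names:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Illustrative GGTT stand-in; not the xe structures. */
struct ggtt {
	pthread_mutex_t lock;
	bool online;	/* cleared at fini, read under the same lock */
	int hw_clears;	/* counts hardware-touching clears */
};

static void ggtt_fini(struct ggtt *g)
{
	pthread_mutex_lock(&g->lock);
	g->online = false;	/* later removals must not touch HW */
	pthread_mutex_unlock(&g->lock);
}

static void ggtt_node_remove(struct ggtt *g)
{
	pthread_mutex_lock(&g->lock);
	if (g->online)
		g->hw_clears++;	/* stands in for xe_ggtt_clear() */
	/* software-side removal (drm_mm_remove_node()) would still run */
	pthread_mutex_unlock(&g->lock);
}
```

Because the flag and the check share one lock, there is no window where a removal observes a half-torn-down device, which is what the drm_dev_enter() variant was providing before.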
+4 -1
drivers/gpu/drm/xe/xe_ggtt_types.h
··· 28 28 /** @size: Total usable size of this GGTT */ 29 29 u64 size; 30 30 31 - #define XE_GGTT_FLAGS_64K BIT(0) 31 + #define XE_GGTT_FLAGS_64K BIT(0) 32 + #define XE_GGTT_FLAGS_ONLINE BIT(1) 32 33 /** 33 34 * @flags: Flags for this GGTT 34 35 * Acceptable flags: 35 36 * - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K. 37 + * - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock 38 + * after init 36 39 */ 37 40 unsigned int flags; 38 41 /** @scratch: Internal object allocation used as a scratch page */
+2
drivers/gpu/drm/xe/xe_gt_ccs_mode.c
··· 12 12 #include "xe_gt_printk.h" 13 13 #include "xe_gt_sysfs.h" 14 14 #include "xe_mmio.h" 15 + #include "xe_pm.h" 15 16 #include "xe_sriov.h" 16 17 17 18 static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines) ··· 151 150 xe_gt_info(gt, "Setting compute mode to %d\n", num_engines); 152 151 gt->ccs_mode = num_engines; 153 152 xe_gt_record_user_engines(gt); 153 + guard(xe_pm_runtime)(xe); 154 154 xe_gt_reset(gt); 155 155 } 156 156
+27 -5
drivers/gpu/drm/xe/xe_guc.c
··· 1124 1124 struct xe_guc_pc *guc_pc = &gt->uc.guc.pc; 1125 1125 u32 before_freq, act_freq, cur_freq; 1126 1126 u32 status = 0, tries = 0; 1127 + int load_result, ret; 1127 1128 ktime_t before; 1128 1129 u64 delta_ms; 1129 - int ret; 1130 1130 1131 1131 before_freq = xe_guc_pc_get_act_freq(guc_pc); 1132 1132 before = ktime_get(); 1133 1133 1134 - ret = poll_timeout_us(ret = guc_load_done(gt, &status, &tries), ret, 1134 + ret = poll_timeout_us(load_result = guc_load_done(gt, &status, &tries), load_result, 1135 1135 10 * USEC_PER_MSEC, 1136 1136 GUC_LOAD_TIMEOUT_SEC * USEC_PER_SEC, false); 1137 1137 ··· 1139 1139 act_freq = xe_guc_pc_get_act_freq(guc_pc); 1140 1140 cur_freq = xe_guc_pc_get_cur_freq_fw(guc_pc); 1141 1141 1142 - if (ret) { 1142 + if (ret || load_result <= 0) { 1143 1143 xe_gt_err(gt, "load failed: status = 0x%08X, time = %lldms, freq = %dMHz (req %dMHz)\n", 1144 1144 status, delta_ms, xe_guc_pc_get_act_freq(guc_pc), 1145 1145 xe_guc_pc_get_cur_freq_fw(guc_pc)); ··· 1347 1347 return 0; 1348 1348 } 1349 1349 1350 - int xe_guc_suspend(struct xe_guc *guc) 1350 + /** 1351 + * xe_guc_softreset() - Soft reset GuC 1352 + * @guc: The GuC object 1353 + * 1354 + * Send soft reset command to GuC through mmio send. 1355 + * 1356 + * Return: 0 if success, otherwise error code 1357 + */ 1358 + int xe_guc_softreset(struct xe_guc *guc) 1351 1359 { 1352 - struct xe_gt *gt = guc_to_gt(guc); 1353 1360 u32 action[] = { 1354 1361 XE_GUC_ACTION_CLIENT_SOFT_RESET, 1355 1362 }; 1356 1363 int ret; 1357 1364 1365 + if (!xe_uc_fw_is_running(&guc->fw)) 1366 + return 0; 1367 + 1358 1368 ret = xe_guc_mmio_send(guc, action, ARRAY_SIZE(action)); 1369 + if (ret) 1370 + return ret; 1371 + 1372 + return 0; 1373 + } 1374 + 1375 + int xe_guc_suspend(struct xe_guc *guc) 1376 + { 1377 + struct xe_gt *gt = guc_to_gt(guc); 1378 + int ret; 1379 + 1380 + ret = xe_guc_softreset(guc); 1359 1381 if (ret) { 1360 1382 xe_gt_err(gt, "GuC suspend failed: %pe\n", ERR_PTR(ret)); 1361 1383 return ret;
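The xe_guc.c hunk splits a single `ret` into `load_result` and `ret`: one holds the polled expression's last value, the other the poll helper's own timed-out/ok status. Reusing one variable for both loses the failure code from the polled function. A simplified stand-in for that poll loop (the macro here is illustrative, not the kernel's poll_timeout_us()):

```c
#include <assert.h>

static int attempts;	/* demonstration state for the fake load check */

/* Pretend load check: <0 error, 0 not ready yet, >0 done. */
static int load_done(void)
{
	return ++attempts >= 3 ? 1 : 0;
}

/* Sets status to 0 if cond became nonzero within max_tries, else -1. */
#define POLL(expr, cond, max_tries, status)			\
	do {							\
		int __i;					\
		(status) = -1;					\
		for (__i = 0; __i < (max_tries); __i++) {	\
			(expr);					\
			if (cond) {				\
				(status) = 0;			\
				break;				\
			}					\
		}						\
	} while (0)

static int wait_for_load(int max_tries)
{
	int load_result, status;

	/* Two variables: what the poll saw vs. whether it timed out. */
	POLL(load_result = load_done(), load_result, max_tries, status);
	if (status || load_result <= 0)
		return -1;	/* timed out, or the load itself failed */
	return 0;
}
```

With one shared variable, a timeout would overwrite the last `load_done()` value (or vice versa), so the caller could not distinguish "never became ready" from "reported an error", which is the bug the kernel change avoids.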
+1
drivers/gpu/drm/xe/xe_guc.h
··· 44 44 void xe_guc_runtime_suspend(struct xe_guc *guc); 45 45 void xe_guc_runtime_resume(struct xe_guc *guc); 46 46 int xe_guc_suspend(struct xe_guc *guc); 47 + int xe_guc_softreset(struct xe_guc *guc); 47 48 void xe_guc_notify(struct xe_guc *guc); 48 49 int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr); 49 50 int xe_guc_mmio_send(struct xe_guc *guc, const u32 *request, u32 len);
+1
drivers/gpu/drm/xe/xe_guc_ct.c
··· 345 345 { 346 346 struct xe_guc_ct *ct = arg; 347 347 348 + xe_guc_ct_stop(ct); 348 349 guc_ct_change_state(ct, XE_GUC_CT_STATE_DISABLED); 349 350 } 350 351
+61 -25
drivers/gpu/drm/xe/xe_guc_submit.c
··· 48 48 49 49 #define XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN 6 50 50 51 + static int guc_submit_reset_prepare(struct xe_guc *guc); 52 + 51 53 static struct xe_guc * 52 54 exec_queue_to_guc(struct xe_exec_queue *q) 53 55 { ··· 241 239 EXEC_QUEUE_STATE_BANNED)); 242 240 } 243 241 244 - static void guc_submit_fini(struct drm_device *drm, void *arg) 242 + static void guc_submit_sw_fini(struct drm_device *drm, void *arg) 245 243 { 246 244 struct xe_guc *guc = arg; 247 245 struct xe_device *xe = guc_to_xe(guc); ··· 257 255 xe_gt_assert(gt, ret); 258 256 259 257 xa_destroy(&guc->submission_state.exec_queue_lookup); 258 + } 259 + 260 + static void guc_submit_fini(void *arg) 261 + { 262 + struct xe_guc *guc = arg; 263 + 264 + /* Forcefully kill any remaining exec queues */ 265 + xe_guc_ct_stop(&guc->ct); 266 + guc_submit_reset_prepare(guc); 267 + xe_guc_softreset(guc); 268 + xe_guc_submit_stop(guc); 269 + xe_uc_fw_sanitize(&guc->fw); 270 + xe_guc_submit_pause_abort(guc); 260 271 } 261 272 262 273 static void guc_submit_wedged_fini(void *arg) ··· 341 326 342 327 guc->submission_state.initialized = true; 343 328 344 - return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc); 329 + err = drmm_add_action_or_reset(&xe->drm, guc_submit_sw_fini, guc); 330 + if (err) 331 + return err; 332 + 333 + return devm_add_action_or_reset(xe->drm.dev, guc_submit_fini, guc); 345 334 } 346 335 347 336 /* ··· 1271 1252 */ 1272 1253 void xe_guc_submit_wedge(struct xe_guc *guc) 1273 1254 { 1255 + struct xe_device *xe = guc_to_xe(guc); 1274 1256 struct xe_gt *gt = guc_to_gt(guc); 1275 1257 struct xe_exec_queue *q; 1276 1258 unsigned long index; ··· 1286 1266 if (!guc->submission_state.initialized) 1287 1267 return; 1288 1268 1289 - err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev, 1290 - guc_submit_wedged_fini, guc); 1291 - if (err) { 1292 - xe_gt_err(gt, "Failed to register clean-up in wedged.mode=%s; " 1293 - "Although device is wedged.\n", 1294 - 
xe_wedged_mode_to_string(XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET)); 1295 - return; 1296 - } 1269 + if (xe->wedged.mode == 2) { 1270 + err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev, 1271 + guc_submit_wedged_fini, guc); 1272 + if (err) { 1273 + xe_gt_err(gt, "Failed to register clean-up on wedged.mode=2; " 1274 + "Although device is wedged.\n"); 1275 + return; 1276 + } 1297 1277 1298 - mutex_lock(&guc->submission_state.lock); 1299 - xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) 1300 - if (xe_exec_queue_get_unless_zero(q)) 1301 - set_exec_queue_wedged(q); 1302 - mutex_unlock(&guc->submission_state.lock); 1278 + mutex_lock(&guc->submission_state.lock); 1279 + xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) 1280 + if (xe_exec_queue_get_unless_zero(q)) 1281 + set_exec_queue_wedged(q); 1282 + mutex_unlock(&guc->submission_state.lock); 1283 + } else { 1284 + /* Forcefully kill any remaining exec queues, signal fences */ 1285 + guc_submit_reset_prepare(guc); 1286 + xe_guc_submit_stop(guc); 1287 + xe_guc_softreset(guc); 1288 + xe_uc_fw_sanitize(&guc->fw); 1289 + xe_guc_submit_pause_abort(guc); 1290 + } 1303 1291 } 1304 1292 1305 1293 static bool guc_submit_hint_wedged(struct xe_guc *guc) ··· 2258 2230 static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q) 2259 2231 { 2260 2232 struct xe_gpu_scheduler *sched = &q->guc->sched; 2233 + bool do_destroy = false; 2261 2234 2262 2235 /* Stop scheduling + flush any DRM scheduler operations */ 2263 2236 xe_sched_submission_stop(sched); ··· 2266 2237 /* Clean up lost G2H + reset engine state */ 2267 2238 if (exec_queue_registered(q)) { 2268 2239 if (exec_queue_destroyed(q)) 2269 - __guc_exec_queue_destroy(guc, q); 2240 + do_destroy = true; 2270 2241 } 2271 2242 if (q->guc->suspend_pending) { 2272 2243 set_exec_queue_suspended(q); ··· 2302 2273 xe_guc_exec_queue_trigger_cleanup(q); 2303 2274 } 2304 2275 } 2276 + 2277 + if (do_destroy) 2278 + __guc_exec_queue_destroy(guc, q); 
2305 2279 } 2306 2280 2307 - int xe_guc_submit_reset_prepare(struct xe_guc *guc) 2281 + static int guc_submit_reset_prepare(struct xe_guc *guc) 2308 2282 { 2309 2283 int ret; 2310 - 2311 - if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc))) 2312 - return 0; 2313 - 2314 - if (!guc->submission_state.initialized) 2315 - return 0; 2316 2284 2317 2285 /* 2318 2286 * Using an atomic here rather than submission_state.lock as this ··· 2323 2297 wake_up_all(&guc->ct.wq); 2324 2298 2325 2299 return ret; 2300 + } 2301 + 2302 + int xe_guc_submit_reset_prepare(struct xe_guc *guc) 2303 + { 2304 + if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc))) 2305 + return 0; 2306 + 2307 + if (!guc->submission_state.initialized) 2308 + return 0; 2309 + 2310 + return guc_submit_reset_prepare(guc); 2326 2311 } 2327 2312 2328 2313 void xe_guc_submit_reset_wait(struct xe_guc *guc) ··· 2732 2695 continue; 2733 2696 2734 2697 xe_sched_submission_start(sched); 2735 - if (exec_queue_killed_or_banned_or_wedged(q)) 2736 - xe_guc_exec_queue_trigger_cleanup(q); 2698 + guc_exec_queue_kill(q); 2737 2699 } 2738 2700 mutex_unlock(&guc->submission_state.lock); 2739 2701 }
+2 -2
drivers/gpu/drm/xe/xe_lrc.c
··· 2413 2413 * @lrc: Pointer to the lrc. 2414 2414 * 2415 2415 * Return latest ctx timestamp. With support for active contexts, the 2416 - * calculation may bb slightly racy, so follow a read-again logic to ensure that 2416 + * calculation may be slightly racy, so follow a read-again logic to ensure that 2417 2417 * the context is still active before returning the right timestamp. 2418 2418 * 2419 2419 * Returns: New ctx timestamp value 2420 2420 */ 2421 2421 u64 xe_lrc_timestamp(struct xe_lrc *lrc) 2422 2422 { 2423 - u64 lrc_ts, reg_ts, new_ts; 2423 + u64 lrc_ts, reg_ts, new_ts = lrc->ctx_timestamp; 2424 2424 u32 engine_id; 2425 2425 2426 2426 lrc_ts = xe_lrc_ctx_timestamp(lrc);
+5 -2
drivers/gpu/drm/xe/xe_oa.c
··· 543 543 size_t offset = 0; 544 544 int ret; 545 545 546 - /* Can't read from disabled streams */ 547 - if (!stream->enabled || !stream->sample) 546 + if (!stream->sample) 548 547 return -EINVAL; 549 548 550 549 if (!(file->f_flags & O_NONBLOCK)) { ··· 1459 1460 1460 1461 if (stream->sample) 1461 1462 hrtimer_cancel(&stream->poll_check_timer); 1463 + 1464 + /* Update stream->oa_buffer.tail to allow any final reports to be read */ 1465 + if (xe_oa_buffer_check_unlocked(stream)) 1466 + wake_up(&stream->poll_wq); 1462 1467 } 1463 1468 1464 1469 static int xe_oa_enable_preempt_timeslice(struct xe_oa_stream *stream)
+29 -9
drivers/gpu/drm/xe/xe_pt.c
··· 1655 1655 XE_WARN_ON(!level); 1656 1656 /* Check for leaf node */ 1657 1657 if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) && 1658 - (!xe_child->base.children || !xe_child->base.children[first])) { 1658 + xe_child->level <= MAX_HUGEPTE_LEVEL) { 1659 1659 struct iosys_map *leaf_map = &xe_child->bo->vmap; 1660 1660 pgoff_t count = xe_pt_num_entries(addr, next, xe_child->level, walk); 1661 1661 1662 1662 for (pgoff_t i = 0; i < count; i++) { 1663 - u64 pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64); 1663 + u64 pte; 1664 1664 int ret; 1665 + 1666 + /* 1667 + * If not a leaf pt, skip unless non-leaf pt is interleaved between 1668 + * leaf ptes which causes the page walk to skip over the child leaves 1669 + */ 1670 + if (xe_child->base.children && xe_child->base.children[first + i]) { 1671 + u64 pt_size = 1ULL << walk->shifts[xe_child->level]; 1672 + bool edge_pt = (i == 0 && !IS_ALIGNED(addr, pt_size)) || 1673 + (i == count - 1 && !IS_ALIGNED(next, pt_size)); 1674 + 1675 + if (!edge_pt) { 1676 + xe_page_reclaim_list_abort(xe_walk->tile->primary_gt, 1677 + xe_walk->prl, 1678 + "PT is skipped by walk at level=%u offset=%lu", 1679 + xe_child->level, first + i); 1680 + break; 1681 + } 1682 + continue; 1683 + } 1684 + 1685 + pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64); 1665 1686 1666 1687 /* 1667 1688 * In rare scenarios, pte may not be written yet due to racy conditions. 
··· 1695 1674 } 1696 1675 1697 1676 /* Ensure it is a defined page */ 1698 - xe_tile_assert(xe_walk->tile, 1699 - xe_child->level == 0 || 1700 - (pte & (XE_PTE_PS64 | XE_PDE_PS_2M | XE_PDPE_PS_1G))); 1677 + xe_tile_assert(xe_walk->tile, xe_child->level == 0 || 1678 + (pte & (XE_PDE_PS_2M | XE_PDPE_PS_1G))); 1701 1679 1702 1680 /* An entry should be added for 64KB but contigious 4K have XE_PTE_PS64 */ 1703 1681 if (pte & XE_PTE_PS64) ··· 1721 1701 killed = xe_pt_check_kill(addr, next, level - 1, xe_child, action, walk); 1722 1702 1723 1703 /* 1724 - * Verify PRL is active and if entry is not a leaf pte (base.children conditions), 1725 - * there is a potential need to invalidate the PRL if any PTE (num_live) are dropped. 1704 + * Verify if any PTE are potentially dropped at non-leaf levels, either from being 1705 + * killed or the page walk covers the region. 1726 1706 */ 1727 - if (xe_walk->prl && level > 1 && xe_child->num_live && 1728 - xe_child->base.children && xe_child->base.children[first]) { 1707 + if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) && 1708 + xe_child->level > MAX_HUGEPTE_LEVEL && xe_child->num_live) { 1729 1709 bool covered = xe_pt_covers(addr, next, xe_child->level, &xe_walk->base); 1730 1710 1731 1711 /*
+2
drivers/hid/bpf/hid_bpf_dispatch.c
··· 444 444 (u64)(long)ctx, 445 445 true); /* prevent infinite recursions */ 446 446 447 + if (ret > size) 448 + ret = size; 447 449 if (ret > 0) 448 450 memcpy(buf, dma_data, ret); 449 451
+3 -2
drivers/hid/hid-appletb-kbd.c
··· 476 476 return 0; 477 477 } 478 478 479 - static int appletb_kbd_reset_resume(struct hid_device *hdev) 479 + static int appletb_kbd_resume(struct hid_device *hdev) 480 480 { 481 481 struct appletb_kbd *kbd = hid_get_drvdata(hdev); 482 482 ··· 500 500 .event = appletb_kbd_hid_event, 501 501 .input_configured = appletb_kbd_input_configured, 502 502 .suspend = pm_ptr(appletb_kbd_suspend), 503 - .reset_resume = pm_ptr(appletb_kbd_reset_resume), 503 + .resume = pm_ptr(appletb_kbd_resume), 504 + .reset_resume = pm_ptr(appletb_kbd_resume), 504 505 .driver.dev_groups = appletb_kbd_groups, 505 506 }; 506 507 module_hid_driver(appletb_kbd_hid_driver);
+3
drivers/hid/hid-asus.c
··· 1498 1498 USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X), 1499 1499 QUIRK_USE_KBD_BACKLIGHT | QUIRK_ROG_NKEY_KEYBOARD | QUIRK_ROG_ALLY_XPAD }, 1500 1500 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1501 + USB_DEVICE_ID_ASUSTEK_XGM_2022), 1502 + }, 1503 + { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1501 1504 USB_DEVICE_ID_ASUSTEK_XGM_2023), 1502 1505 }, 1503 1506 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK,
+4 -3
drivers/hid/hid-core.c
··· 2057 2057 rsize = max_buffer_size; 2058 2058 2059 2059 if (csize < rsize) { 2060 - dbg_hid("report %d is too short, (%d < %d)\n", report->id, 2061 - csize, rsize); 2062 - memset(cdata + csize, 0, rsize - csize); 2060 + hid_warn_ratelimited(hid, "Event data for report %d was too short (%d vs %d)\n", 2061 + report->id, rsize, csize); 2062 + ret = -EINVAL; 2063 + goto out; 2063 2064 } 2064 2065 2065 2066 if ((hid->claimed & HID_CLAIMED_HIDDEV) && hid->hiddev_report_event)
+1 -2
drivers/hid/hid-ids.h
··· 229 229 #define USB_DEVICE_ID_ASUSTEK_ROG_NKEY_ALLY_X 0x1b4c 230 230 #define USB_DEVICE_ID_ASUSTEK_ROG_CLAYMORE_II_KEYBOARD 0x196b 231 231 #define USB_DEVICE_ID_ASUSTEK_FX503VD_KEYBOARD 0x1869 232 + #define USB_DEVICE_ID_ASUSTEK_XGM_2022 0x1970 232 233 #define USB_DEVICE_ID_ASUSTEK_XGM_2023 0x1a9a 233 234 234 235 #define USB_VENDOR_ID_ATEN 0x0557 ··· 455 454 #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401 456 455 #define USB_DEVICE_ID_HP_X2 0x074d 457 456 #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755 458 - #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544 459 - #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 460 457 #define I2C_DEVICE_ID_CHROMEBOOK_TROGDOR_POMPOM 0x2F81 461 458 462 459 #define USB_VENDOR_ID_ELECOM 0x056e
+11 -7
drivers/hid/hid-input.c
··· 354 354 #define HID_BATTERY_QUIRK_FEATURE (1 << 1) /* ask for feature report */ 355 355 #define HID_BATTERY_QUIRK_IGNORE (1 << 2) /* completely ignore the battery */ 356 356 #define HID_BATTERY_QUIRK_AVOID_QUERY (1 << 3) /* do not query the battery */ 357 + #define HID_BATTERY_QUIRK_DYNAMIC (1 << 4) /* report present only after life signs */ 357 358 358 359 static const struct hid_device_id hid_battery_quirks[] = { 359 360 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, ··· 387 386 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 388 387 USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD), 389 388 HID_BATTERY_QUIRK_IGNORE }, 390 - { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), 391 - HID_BATTERY_QUIRK_IGNORE }, 392 - { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN), 393 - HID_BATTERY_QUIRK_IGNORE }, 394 389 { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_L), 395 390 HID_BATTERY_QUIRK_AVOID_QUERY }, 396 391 { HID_USB_DEVICE(USB_VENDOR_ID_UGEE, USB_DEVICE_ID_UGEE_XPPEN_TABLET_DECO_PRO_MW), ··· 399 402 * Elan HID touchscreens seem to all report a non present battery, 400 403 * set HID_BATTERY_QUIRK_IGNORE for all Elan I2C and USB HID devices. 
401 404 */ 402 - { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE }, 403 - { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_IGNORE }, 405 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_DYNAMIC }, 406 + { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_BATTERY_QUIRK_DYNAMIC }, 404 407 {} 405 408 }; 406 409 ··· 457 460 int ret = 0; 458 461 459 462 switch (prop) { 460 - case POWER_SUPPLY_PROP_PRESENT: 461 463 case POWER_SUPPLY_PROP_ONLINE: 462 464 val->intval = 1; 465 + break; 466 + 467 + case POWER_SUPPLY_PROP_PRESENT: 468 + val->intval = dev->battery_present; 463 469 break; 464 470 465 471 case POWER_SUPPLY_PROP_CAPACITY: ··· 577 577 if (quirks & HID_BATTERY_QUIRK_AVOID_QUERY) 578 578 dev->battery_avoid_query = true; 579 579 580 + dev->battery_present = (quirks & HID_BATTERY_QUIRK_DYNAMIC) ? false : true; 581 + 580 582 dev->battery = power_supply_register(&dev->dev, psy_desc, &psy_cfg); 581 583 if (IS_ERR(dev->battery)) { 582 584 error = PTR_ERR(dev->battery); ··· 634 632 return; 635 633 636 634 if (hidinput_update_battery_charge_status(dev, usage, value)) { 635 + dev->battery_present = true; 637 636 power_supply_changed(dev->battery); 638 637 return; 639 638 } ··· 650 647 if (dev->battery_status != HID_BATTERY_REPORTED || 651 648 capacity != dev->battery_capacity || 652 649 ktime_after(ktime_get_coarse(), dev->battery_ratelimit_time)) { 650 + dev->battery_present = true; 653 651 dev->battery_capacity = capacity; 654 652 dev->battery_status = HID_BATTERY_REPORTED; 655 653 dev->battery_ratelimit_time =
+5 -1
drivers/hid/hid-logitech-hidpp.c
··· 4487 4487 if (!ret) 4488 4488 ret = hidpp_ff_init(hidpp, &data); 4489 4489 4490 - if (ret) 4490 + if (ret) { 4491 4491 hid_warn(hidpp->hid_dev, 4492 4492 "Unable to initialize force feedback support, errno %d\n", 4493 4493 ret); 4494 + ret = 0; 4495 + } 4494 4496 } 4495 4497 4496 4498 /* ··· 4670 4668 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb038) }, 4671 4669 { /* Slim Solar+ K980 Keyboard over Bluetooth */ 4672 4670 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb391) }, 4671 + { /* MX Master 4 mouse over Bluetooth */ 4672 + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb042) }, 4673 4673 {} 4674 4674 }; 4675 4675
+7
drivers/hid/hid-multitouch.c
··· 526 526 dev_warn(&hdev->dev, "failed to fetch feature %d\n", 527 527 report->id); 528 528 } else { 529 + /* The report ID in the request and the response should match */ 530 + if (report->id != buf[0]) { 531 + hid_err(hdev, "Returned feature report did not match the request\n"); 532 + goto free; 533 + } 534 + 529 535 ret = hid_report_raw_event(hdev, HID_FEATURE_REPORT, buf, 530 536 size, 0); 531 537 if (ret) 532 538 dev_warn(&hdev->dev, "failed to report feature\n"); 533 539 } 534 540 541 + free: 535 542 kfree(buf); 536 543 } 537 544
+1
drivers/hid/intel-thc-hid/intel-quicki2c/quicki2c-hid.c
··· 127 127 hid->product = le16_to_cpu(qcdev->dev_desc.product_id); 128 128 snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "quicki2c-hid", 129 129 hid->vendor, hid->product); 130 + strscpy(hid->phys, dev_name(qcdev->dev), sizeof(hid->phys)); 130 131 131 132 ret = hid_add_device(hid); 132 133 if (ret) {
+1
drivers/hid/intel-thc-hid/intel-quickspi/quickspi-hid.c
··· 118 118 hid->product = le16_to_cpu(qsdev->dev_desc.product_id); 119 119 snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", "quickspi-hid", 120 120 hid->vendor, hid->product); 121 + strscpy(hid->phys, dev_name(qsdev->dev), sizeof(hid->phys)); 121 122 122 123 ret = hid_add_device(hid); 123 124 if (ret) {
+10
drivers/hid/wacom_wac.c
··· 1208 1208 1209 1209 switch (data[0]) { 1210 1210 case 0x04: 1211 + if (len < 32) { 1212 + dev_warn(wacom->pen_input->dev.parent, 1213 + "Report 0x04 too short: %zu bytes\n", len); 1214 + break; 1215 + } 1211 1216 wacom_intuos_bt_process_data(wacom, data + i); 1212 1217 i += 10; 1213 1218 fallthrough; 1214 1219 case 0x03: 1220 + if (i == 1 && len < 22) { 1221 + dev_warn(wacom->pen_input->dev.parent, 1222 + "Report 0x03 too short: %zu bytes\n", len); 1223 + break; 1224 + } 1215 1225 wacom_intuos_bt_process_data(wacom, data + i); 1216 1226 i += 10; 1217 1227 wacom_intuos_bt_process_data(wacom, data + i);
+4 -2
drivers/hv/mshv_regions.c
··· 314 314 ret = pin_user_pages_fast(userspace_addr, nr_pages, 315 315 FOLL_WRITE | FOLL_LONGTERM, 316 316 pages); 317 - if (ret < 0) 317 + if (ret != nr_pages) 318 318 goto release_pages; 319 319 } 320 320 321 321 return 0; 322 322 323 323 release_pages: 324 + if (ret > 0) 325 + done_count += ret; 324 326 mshv_region_invalidate_pages(region, 0, done_count); 325 - return ret; 327 + return ret < 0 ? ret : -ENOMEM; 326 328 } 327 329 328 330 static int mshv_region_chunk_unmap(struct mshv_mem_region *region,
+2 -3
drivers/hv/mshv_root.h
··· 190 190 }; 191 191 192 192 struct mshv_root { 193 - struct hv_synic_pages __percpu *synic_pages; 194 193 spinlock_t pt_ht_lock; 195 194 DECLARE_HASHTABLE(pt_htable, MSHV_PARTITIONS_HASH_BITS); 196 195 struct hv_partition_property_vmm_capabilities vmm_caps; ··· 248 249 void mshv_unregister_doorbell(u64 partition_id, int doorbell_portid); 249 250 250 251 void mshv_isr(void); 251 - int mshv_synic_init(unsigned int cpu); 252 - int mshv_synic_cleanup(unsigned int cpu); 252 + int mshv_synic_init(struct device *dev); 253 + void mshv_synic_exit(void); 253 254 254 255 static inline bool mshv_partition_encrypted(struct mshv_partition *partition) 255 256 {
+22 -71
drivers/hv/mshv_root_main.c
··· 120 120 HVCALL_SET_VP_REGISTERS, 121 121 HVCALL_TRANSLATE_VIRTUAL_ADDRESS, 122 122 HVCALL_CLEAR_VIRTUAL_INTERRUPT, 123 - HVCALL_SCRUB_PARTITION, 124 123 HVCALL_REGISTER_INTERCEPT_RESULT, 125 124 HVCALL_ASSERT_VIRTUAL_INTERRUPT, 126 125 HVCALL_GET_GPA_PAGES_ACCESS_STATES, ··· 1288 1289 */ 1289 1290 static long 1290 1291 mshv_map_user_memory(struct mshv_partition *partition, 1291 - struct mshv_user_mem_region mem) 1292 + struct mshv_user_mem_region *mem) 1292 1293 { 1293 1294 struct mshv_mem_region *region; 1294 1295 struct vm_area_struct *vma; ··· 1296 1297 ulong mmio_pfn; 1297 1298 long ret; 1298 1299 1299 - if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP) || 1300 - !access_ok((const void __user *)mem.userspace_addr, mem.size)) 1300 + if (mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP) || 1301 + !access_ok((const void __user *)mem->userspace_addr, mem->size)) 1301 1302 return -EINVAL; 1302 1303 1303 1304 mmap_read_lock(current->mm); 1304 - vma = vma_lookup(current->mm, mem.userspace_addr); 1305 + vma = vma_lookup(current->mm, mem->userspace_addr); 1305 1306 is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0; 1306 1307 mmio_pfn = is_mmio ? 
vma->vm_pgoff : 0; 1307 1308 mmap_read_unlock(current->mm); ··· 1309 1310 if (!vma) 1310 1311 return -EINVAL; 1311 1312 1312 - ret = mshv_partition_create_region(partition, &mem, &region, 1313 + ret = mshv_partition_create_region(partition, mem, &region, 1313 1314 is_mmio); 1314 1315 if (ret) 1315 1316 return ret; ··· 1347 1348 return 0; 1348 1349 1349 1350 errout: 1350 - vfree(region); 1351 + mshv_region_put(region); 1351 1352 return ret; 1352 1353 } 1353 1354 1354 1355 /* Called for unmapping both the guest ram and the mmio space */ 1355 1356 static long 1356 1357 mshv_unmap_user_memory(struct mshv_partition *partition, 1357 - struct mshv_user_mem_region mem) 1358 + struct mshv_user_mem_region *mem) 1358 1359 { 1359 1360 struct mshv_mem_region *region; 1360 1361 1361 - if (!(mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP))) 1362 + if (!(mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP))) 1362 1363 return -EINVAL; 1363 1364 1364 1365 spin_lock(&partition->pt_mem_regions_lock); 1365 1366 1366 - region = mshv_partition_region_by_gfn(partition, mem.guest_pfn); 1367 + region = mshv_partition_region_by_gfn(partition, mem->guest_pfn); 1367 1368 if (!region) { 1368 1369 spin_unlock(&partition->pt_mem_regions_lock); 1369 1370 return -ENOENT; 1370 1371 } 1371 1372 1372 1373 /* Paranoia check */ 1373 - if (region->start_uaddr != mem.userspace_addr || 1374 - region->start_gfn != mem.guest_pfn || 1375 - region->nr_pages != HVPFN_DOWN(mem.size)) { 1374 + if (region->start_uaddr != mem->userspace_addr || 1375 + region->start_gfn != mem->guest_pfn || 1376 + region->nr_pages != HVPFN_DOWN(mem->size)) { 1376 1377 spin_unlock(&partition->pt_mem_regions_lock); 1377 1378 return -EINVAL; 1378 1379 } ··· 1403 1404 return -EINVAL; 1404 1405 1405 1406 if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP)) 1406 - return mshv_unmap_user_memory(partition, mem); 1407 + return mshv_unmap_user_memory(partition, &mem); 1407 1408 1408 - return mshv_map_user_memory(partition, mem); 1409 + return 
mshv_map_user_memory(partition, &mem); 1409 1410 } 1410 1411 1411 1412 static long ··· 2063 2064 return 0; 2064 2065 } 2065 2066 2066 - static int mshv_cpuhp_online; 2067 2067 static int mshv_root_sched_online; 2068 2068 2069 2069 static const char *scheduler_type_to_string(enum hv_scheduler_type type) ··· 2247 2249 free_percpu(root_scheduler_output); 2248 2250 } 2249 2251 2250 - static int mshv_reboot_notify(struct notifier_block *nb, 2251 - unsigned long code, void *unused) 2252 - { 2253 - cpuhp_remove_state(mshv_cpuhp_online); 2254 - return 0; 2255 - } 2256 - 2257 - struct notifier_block mshv_reboot_nb = { 2258 - .notifier_call = mshv_reboot_notify, 2259 - }; 2260 - 2261 - static void mshv_root_partition_exit(void) 2262 - { 2263 - unregister_reboot_notifier(&mshv_reboot_nb); 2264 - } 2265 - 2266 - static int __init mshv_root_partition_init(struct device *dev) 2267 - { 2268 - return register_reboot_notifier(&mshv_reboot_nb); 2269 - } 2270 - 2271 2252 static int __init mshv_init_vmm_caps(struct device *dev) 2272 2253 { 2273 2254 int ret; ··· 2291 2314 MSHV_HV_MAX_VERSION); 2292 2315 } 2293 2316 2294 - mshv_root.synic_pages = alloc_percpu(struct hv_synic_pages); 2295 - if (!mshv_root.synic_pages) { 2296 - dev_err(dev, "Failed to allocate percpu synic page\n"); 2297 - ret = -ENOMEM; 2317 + ret = mshv_synic_init(dev); 2318 + if (ret) 2298 2319 goto device_deregister; 2299 - } 2300 - 2301 - ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic", 2302 - mshv_synic_init, 2303 - mshv_synic_cleanup); 2304 - if (ret < 0) { 2305 - dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret); 2306 - goto free_synic_pages; 2307 - } 2308 - 2309 - mshv_cpuhp_online = ret; 2310 2320 2311 2321 ret = mshv_init_vmm_caps(dev); 2312 2322 if (ret) 2313 - goto remove_cpu_state; 2323 + goto synic_cleanup; 2314 2324 2315 2325 ret = mshv_retrieve_scheduler_type(dev); 2316 2326 if (ret) 2317 - goto remove_cpu_state; 2318 - 2319 - if (hv_root_partition()) 2320 - ret = 
mshv_root_partition_init(dev); 2321 - if (ret) 2322 - goto remove_cpu_state; 2327 + goto synic_cleanup; 2323 2328 2324 2329 ret = root_scheduler_init(dev); 2325 2330 if (ret) 2326 - goto exit_partition; 2331 + goto synic_cleanup; 2327 2332 2328 2333 ret = mshv_debugfs_init(); 2329 2334 if (ret) ··· 2326 2367 mshv_debugfs_exit(); 2327 2368 deinit_root_scheduler: 2328 2369 root_scheduler_deinit(); 2329 - exit_partition: 2330 - if (hv_root_partition()) 2331 - mshv_root_partition_exit(); 2332 - remove_cpu_state: 2333 - cpuhp_remove_state(mshv_cpuhp_online); 2334 - free_synic_pages: 2335 - free_percpu(mshv_root.synic_pages); 2370 + synic_cleanup: 2371 + mshv_synic_exit(); 2336 2372 device_deregister: 2337 2373 misc_deregister(&mshv_dev); 2338 2374 return ret; ··· 2341 2387 misc_deregister(&mshv_dev); 2342 2388 mshv_irqfd_wq_cleanup(); 2343 2389 root_scheduler_deinit(); 2344 - if (hv_root_partition()) 2345 - mshv_root_partition_exit(); 2346 - cpuhp_remove_state(mshv_cpuhp_online); 2347 - free_percpu(mshv_root.synic_pages); 2390 + mshv_synic_exit(); 2348 2391 } 2349 2392 2350 2393 module_init(mshv_parent_partition_init);
+173 -15
drivers/hv/mshv_synic.c
··· 10 10 #include <linux/kernel.h> 11 11 #include <linux/slab.h> 12 12 #include <linux/mm.h> 13 + #include <linux/interrupt.h> 13 14 #include <linux/io.h> 14 15 #include <linux/random.h> 16 + #include <linux/cpuhotplug.h> 17 + #include <linux/reboot.h> 15 18 #include <asm/mshyperv.h> 19 + #include <linux/acpi.h> 16 20 17 21 #include "mshv_eventfd.h" 18 22 #include "mshv.h" 23 + 24 + static int synic_cpuhp_online; 25 + static struct hv_synic_pages __percpu *synic_pages; 26 + static int mshv_sint_vector = -1; /* hwirq for the SynIC SINTs */ 27 + static int mshv_sint_irq = -1; /* Linux IRQ for mshv_sint_vector */ 19 28 20 29 static u32 synic_event_ring_get_queued_port(u32 sint_index) 21 30 { ··· 35 26 u32 message; 36 27 u8 tail; 37 28 38 - spages = this_cpu_ptr(mshv_root.synic_pages); 29 + spages = this_cpu_ptr(synic_pages); 39 30 event_ring_page = &spages->synic_event_ring_page; 40 31 synic_eventring_tail = (u8 **)this_cpu_ptr(hv_synic_eventring_tail); 41 32 ··· 402 393 403 394 void mshv_isr(void) 404 395 { 405 - struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages); 396 + struct hv_synic_pages *spages = this_cpu_ptr(synic_pages); 406 397 struct hv_message_page **msg_page = &spages->hyp_synic_message_page; 407 398 struct hv_message *msg; 408 399 bool handled; ··· 446 437 if (msg->header.message_flags.msg_pending) 447 438 hv_set_non_nested_msr(HV_MSR_EOM, 0); 448 439 449 - #ifdef HYPERVISOR_CALLBACK_VECTOR 450 - add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR); 451 - #endif 440 + add_interrupt_randomness(mshv_sint_vector); 452 441 } else { 453 442 pr_warn_once("%s: unknown message type 0x%x\n", __func__, 454 443 msg->header.message_type); 455 444 } 456 445 } 457 446 458 - int mshv_synic_init(unsigned int cpu) 447 + static int mshv_synic_cpu_init(unsigned int cpu) 459 448 { 460 449 union hv_synic_simp simp; 461 450 union hv_synic_siefp siefp; 462 451 union hv_synic_sirbp sirbp; 463 - #ifdef HYPERVISOR_CALLBACK_VECTOR 464 452 union hv_synic_sint sint; 
465 - #endif 466 453 union hv_synic_scontrol sctrl; 467 - struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages); 454 + struct hv_synic_pages *spages = this_cpu_ptr(synic_pages); 468 455 struct hv_message_page **msg_page = &spages->hyp_synic_message_page; 469 456 struct hv_synic_event_flags_page **event_flags_page = 470 457 &spages->synic_event_flags_page; ··· 501 496 502 497 hv_set_non_nested_msr(HV_MSR_SIRBP, sirbp.as_uint64); 503 498 504 - #ifdef HYPERVISOR_CALLBACK_VECTOR 499 + if (mshv_sint_irq != -1) 500 + enable_percpu_irq(mshv_sint_irq, 0); 501 + 505 502 /* Enable intercepts */ 506 503 sint.as_uint64 = 0; 507 - sint.vector = HYPERVISOR_CALLBACK_VECTOR; 504 + sint.vector = mshv_sint_vector; 508 505 sint.masked = false; 509 506 sint.auto_eoi = hv_recommend_using_aeoi(); 510 507 hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_INTERCEPTION_SINT_INDEX, ··· 514 507 515 508 /* Doorbell SINT */ 516 509 sint.as_uint64 = 0; 517 - sint.vector = HYPERVISOR_CALLBACK_VECTOR; 510 + sint.vector = mshv_sint_vector; 518 511 sint.masked = false; 519 512 sint.as_intercept = 1; 520 513 sint.auto_eoi = hv_recommend_using_aeoi(); 521 514 hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX, 522 515 sint.as_uint64); 523 - #endif 524 516 525 517 /* Enable global synic bit */ 526 518 sctrl.as_uint64 = hv_get_non_nested_msr(HV_MSR_SCONTROL); ··· 548 542 return -EFAULT; 549 543 } 550 544 551 - int mshv_synic_cleanup(unsigned int cpu) 545 + static int mshv_synic_cpu_exit(unsigned int cpu) 552 546 { 553 547 union hv_synic_sint sint; 554 548 union hv_synic_simp simp; 555 549 union hv_synic_siefp siefp; 556 550 union hv_synic_sirbp sirbp; 557 551 union hv_synic_scontrol sctrl; 558 - struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages); 552 + struct hv_synic_pages *spages = this_cpu_ptr(synic_pages); 559 553 struct hv_message_page **msg_page = &spages->hyp_synic_message_page; 560 554 struct hv_synic_event_flags_page **event_flags_page = 561 555 
&spages->synic_event_flags_page; ··· 573 567 sint.masked = true; 574 568 hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX, 575 569 sint.as_uint64); 570 + 571 + if (mshv_sint_irq != -1) 572 + disable_percpu_irq(mshv_sint_irq); 576 573 577 574 /* Disable Synic's event ring page */ 578 575 sirbp.as_uint64 = hv_get_non_nested_msr(HV_MSR_SIRBP); ··· 671 662 hv_call_delete_port(hv_current_partition_id, port_id); 672 663 673 664 mshv_portid_free(doorbell_portid); 665 + } 666 + 667 + static int mshv_synic_reboot_notify(struct notifier_block *nb, 668 + unsigned long code, void *unused) 669 + { 670 + if (!hv_root_partition()) 671 + return 0; 672 + 673 + cpuhp_remove_state(synic_cpuhp_online); 674 + return 0; 675 + } 676 + 677 + static struct notifier_block mshv_synic_reboot_nb = { 678 + .notifier_call = mshv_synic_reboot_notify, 679 + }; 680 + 681 + #ifndef HYPERVISOR_CALLBACK_VECTOR 682 + static DEFINE_PER_CPU(long, mshv_evt); 683 + 684 + static irqreturn_t mshv_percpu_isr(int irq, void *dev_id) 685 + { 686 + mshv_isr(); 687 + return IRQ_HANDLED; 688 + } 689 + 690 + #ifdef CONFIG_ACPI 691 + static int __init mshv_acpi_setup_sint_irq(void) 692 + { 693 + return acpi_register_gsi(NULL, mshv_sint_vector, ACPI_EDGE_SENSITIVE, 694 + ACPI_ACTIVE_HIGH); 695 + } 696 + 697 + static void mshv_acpi_cleanup_sint_irq(void) 698 + { 699 + acpi_unregister_gsi(mshv_sint_vector); 700 + } 701 + #else 702 + static int __init mshv_acpi_setup_sint_irq(void) 703 + { 704 + return -ENODEV; 705 + } 706 + 707 + static void mshv_acpi_cleanup_sint_irq(void) 708 + { 709 + } 710 + #endif 711 + 712 + static int __init mshv_sint_vector_setup(void) 713 + { 714 + int ret; 715 + struct hv_register_assoc reg = { 716 + .name = HV_ARM64_REGISTER_SINT_RESERVED_INTERRUPT_ID, 717 + }; 718 + union hv_input_vtl input_vtl = { 0 }; 719 + 720 + if (acpi_disabled) 721 + return -ENODEV; 722 + 723 + ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF, 724 + 1, input_vtl, &reg); 725 + if 
(ret || !reg.value.reg64) 726 + return -ENODEV; 727 + 728 + mshv_sint_vector = reg.value.reg64; 729 + ret = mshv_acpi_setup_sint_irq(); 730 + if (ret < 0) { 731 + pr_err("Failed to setup IRQ for MSHV SINT vector %d: %d\n", 732 + mshv_sint_vector, ret); 733 + goto out_fail; 734 + } 735 + 736 + mshv_sint_irq = ret; 737 + 738 + ret = request_percpu_irq(mshv_sint_irq, mshv_percpu_isr, "MSHV", 739 + &mshv_evt); 740 + if (ret) 741 + goto out_unregister; 742 + 743 + return 0; 744 + 745 + out_unregister: 746 + mshv_acpi_cleanup_sint_irq(); 747 + out_fail: 748 + return ret; 749 + } 750 + 751 + static void mshv_sint_vector_cleanup(void) 752 + { 753 + free_percpu_irq(mshv_sint_irq, &mshv_evt); 754 + mshv_acpi_cleanup_sint_irq(); 755 + } 756 + #else /* !HYPERVISOR_CALLBACK_VECTOR */ 757 + static int __init mshv_sint_vector_setup(void) 758 + { 759 + mshv_sint_vector = HYPERVISOR_CALLBACK_VECTOR; 760 + return 0; 761 + } 762 + 763 + static void mshv_sint_vector_cleanup(void) 764 + { 765 + } 766 + #endif /* HYPERVISOR_CALLBACK_VECTOR */ 767 + 768 + int __init mshv_synic_init(struct device *dev) 769 + { 770 + int ret = 0; 771 + 772 + ret = mshv_sint_vector_setup(); 773 + if (ret) 774 + return ret; 775 + 776 + synic_pages = alloc_percpu(struct hv_synic_pages); 777 + if (!synic_pages) { 778 + dev_err(dev, "Failed to allocate percpu synic page\n"); 779 + ret = -ENOMEM; 780 + goto sint_vector_cleanup; 781 + } 782 + 783 + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic", 784 + mshv_synic_cpu_init, 785 + mshv_synic_cpu_exit); 786 + if (ret < 0) { 787 + dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret); 788 + goto free_synic_pages; 789 + } 790 + 791 + synic_cpuhp_online = ret; 792 + 793 + ret = register_reboot_notifier(&mshv_synic_reboot_nb); 794 + if (ret) 795 + goto remove_cpuhp_state; 796 + 797 + return 0; 798 + 799 + remove_cpuhp_state: 800 + cpuhp_remove_state(synic_cpuhp_online); 801 + free_synic_pages: 802 + free_percpu(synic_pages); 803 + sint_vector_cleanup: 
804 + mshv_sint_vector_cleanup(); 805 + return ret; 806 + } 807 + 808 + void mshv_synic_exit(void) 809 + { 810 + unregister_reboot_notifier(&mshv_synic_reboot_nb); 811 + cpuhp_remove_state(synic_cpuhp_online); 812 + free_percpu(synic_pages); 813 + mshv_sint_vector_cleanup(); 674 814 }
+1 -1
drivers/hwmon/axi-fan-control.c
··· 507 507 ret = devm_request_threaded_irq(&pdev->dev, ctl->irq, NULL, 508 508 axi_fan_control_irq_handler, 509 509 IRQF_ONESHOT | IRQF_TRIGGER_HIGH, 510 - pdev->driver_override, ctl); 510 + NULL, ctl); 511 511 if (ret) 512 512 return dev_err_probe(&pdev->dev, ret, 513 513 "failed to request an irq\n");
+5 -5
drivers/hwmon/max6639.c
··· 232 232 static int max6639_set_ppr(struct max6639_data *data, int channel, u8 ppr) 233 233 { 234 234 /* Decrement the PPR value and shift left by 6 to match the register format */ 235 - return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), ppr-- << 6); 235 + return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), --ppr << 6); 236 236 } 237 237 238 238 static int max6639_write_fan(struct device *dev, u32 attr, int channel, ··· 524 524 525 525 { 526 526 struct device *dev = &client->dev; 527 - u32 i; 528 - int err, val; 527 + u32 i, val; 528 + int err; 529 529 530 530 err = of_property_read_u32(child, "reg", &i); 531 531 if (err) { ··· 540 540 541 541 err = of_property_read_u32(child, "pulses-per-revolution", &val); 542 542 if (!err) { 543 - if (val < 1 || val > 5) { 544 - dev_err(dev, "invalid pulses-per-revolution %d of %pOFn\n", val, child); 543 + if (val < 1 || val > 4) { 544 + dev_err(dev, "invalid pulses-per-revolution %u of %pOFn\n", val, child); 545 545 return -EINVAL; 546 546 } 547 547 data->ppr[i] = val;
+2
drivers/hwmon/pmbus/hac300s.c
··· 58 58 case PMBUS_MFR_VOUT_MIN: 59 59 case PMBUS_READ_VOUT: 60 60 rv = pmbus_read_word_data(client, page, phase, reg); 61 + if (rv < 0) 62 + return rv; 61 63 return FIELD_GET(LINEAR11_MANTISSA_MASK, rv); 62 64 default: 63 65 return -ENODATA;
+2
drivers/hwmon/pmbus/ina233.c
··· 67 67 switch (reg) { 68 68 case PMBUS_VIRT_READ_VMON: 69 69 ret = pmbus_read_word_data(client, 0, 0xff, MFR_READ_VSHUNT); 70 + if (ret < 0) 71 + return ret; 70 72 71 73 /* Adjust returned value to match VIN coefficients */ 72 74 /* VIN: 1.25 mV VSHUNT: 2.5 uV LSB */
+5 -2
drivers/hwmon/pmbus/isl68137.c
··· 98 98 { 99 99 int val = pmbus_read_byte_data(client, page, PMBUS_OPERATION); 100 100 101 - return sprintf(buf, "%d\n", 102 - (val & ISL68137_VOUT_AVS) == ISL68137_VOUT_AVS ? 1 : 0); 101 + if (val < 0) 102 + return val; 103 + 104 + return sysfs_emit(buf, "%d\n", 105 + (val & ISL68137_VOUT_AVS) == ISL68137_VOUT_AVS); 103 106 } 104 107 105 108 static ssize_t isl68137_avs_enable_store_page(struct i2c_client *client,
+21 -14
drivers/hwmon/pmbus/mp2869.c
··· 165 165 { 166 166 const struct pmbus_driver_info *info = pmbus_get_driver_info(client); 167 167 struct mp2869_data *data = to_mp2869_data(info); 168 - int ret; 168 + int ret, mfr; 169 169 170 170 switch (reg) { 171 171 case PMBUS_VOUT_MODE: ··· 188 188 if (ret < 0) 189 189 return ret; 190 190 191 + mfr = pmbus_read_byte_data(client, page, 192 + PMBUS_STATUS_MFR_SPECIFIC); 193 + if (mfr < 0) 194 + return mfr; 195 + 191 196 ret = (ret & ~GENMASK(2, 2)) | 192 197 FIELD_PREP(GENMASK(2, 2), 193 - FIELD_GET(GENMASK(1, 1), 194 - pmbus_read_byte_data(client, page, 195 - PMBUS_STATUS_MFR_SPECIFIC))); 198 + FIELD_GET(GENMASK(1, 1), mfr)); 196 199 break; 197 200 case PMBUS_STATUS_TEMPERATURE: 198 201 /* ··· 210 207 if (ret < 0) 211 208 return ret; 212 209 210 + mfr = pmbus_read_byte_data(client, page, 211 + PMBUS_STATUS_MFR_SPECIFIC); 212 + if (mfr < 0) 213 + return mfr; 214 + 213 215 ret = (ret & ~GENMASK(7, 6)) | 214 216 FIELD_PREP(GENMASK(6, 6), 215 - FIELD_GET(GENMASK(1, 1), 216 - pmbus_read_byte_data(client, page, 217 - PMBUS_STATUS_MFR_SPECIFIC))) | 217 + FIELD_GET(GENMASK(1, 1), mfr)) | 218 218 FIELD_PREP(GENMASK(7, 7), 219 - FIELD_GET(GENMASK(1, 1), 220 - pmbus_read_byte_data(client, page, 221 - PMBUS_STATUS_MFR_SPECIFIC))); 219 + FIELD_GET(GENMASK(1, 1), mfr)); 222 220 break; 223 221 default: 224 222 ret = -ENODATA; ··· 234 230 { 235 231 const struct pmbus_driver_info *info = pmbus_get_driver_info(client); 236 232 struct mp2869_data *data = to_mp2869_data(info); 237 - int ret; 233 + int ret, mfr; 238 234 239 235 switch (reg) { 240 236 case PMBUS_STATUS_WORD: ··· 250 246 if (ret < 0) 251 247 return ret; 252 248 249 + mfr = pmbus_read_byte_data(client, page, 250 + PMBUS_STATUS_MFR_SPECIFIC); 251 + if (mfr < 0) 252 + return mfr; 253 + 253 254 ret = (ret & ~GENMASK(2, 2)) | 254 255 FIELD_PREP(GENMASK(2, 2), 255 - FIELD_GET(GENMASK(1, 1), 256 - pmbus_read_byte_data(client, page, 257 - PMBUS_STATUS_MFR_SPECIFIC))); 256 + FIELD_GET(GENMASK(1, 1), mfr)); 258 257 break; 
259 258 case PMBUS_READ_VIN: 260 259 /*
+2
drivers/hwmon/pmbus/mp2975.c
··· 313 313 case PMBUS_STATUS_WORD: 314 314 /* MP2973 & MP2971 return PGOOD instead of PB_STATUS_POWER_GOOD_N. */ 315 315 ret = pmbus_read_word_data(client, page, phase, reg); 316 + if (ret < 0) 317 + return ret; 316 318 ret ^= PB_STATUS_POWER_GOOD_N; 317 319 break; 318 320 case PMBUS_OT_FAULT_LIMIT:
+2
drivers/i2c/busses/Kconfig
··· 1213 1213 tristate "NVIDIA Tegra internal I2C controller" 1214 1214 depends on ARCH_TEGRA || (COMPILE_TEST && (ARC || ARM || ARM64 || M68K || RISCV || SUPERH || SPARC)) 1215 1215 # COMPILE_TEST needs architectures with readsX()/writesX() primitives 1216 + depends on PINCTRL 1217 + # ARCH_TEGRA implies PINCTRL, but the COMPILE_TEST side doesn't. 1216 1218 help 1217 1219 If you say yes to this option, support will be included for the 1218 1220 I2C controller embedded in NVIDIA Tegra SOCs
+3
drivers/i2c/busses/i2c-cp2615.c
··· 298 298 if (!adap) 299 299 return -ENOMEM; 300 300 301 + if (!usbdev->serial) 302 + return -EINVAL; 303 + 301 304 strscpy(adap->name, usbdev->serial, sizeof(adap->name)); 302 305 adap->owner = THIS_MODULE; 303 306 adap->dev.parent = &usbif->dev;
+1
drivers/i2c/busses/i2c-fsi.c
··· 729 729 rc = i2c_add_adapter(&port->adapter); 730 730 if (rc < 0) { 731 731 dev_err(dev, "Failed to register adapter: %d\n", rc); 732 + of_node_put(np); 732 733 kfree(port); 733 734 continue; 734 735 }
+16 -1
drivers/i2c/busses/i2c-pxa.c
··· 268 268 struct pinctrl *pinctrl; 269 269 struct pinctrl_state *pinctrl_default; 270 270 struct pinctrl_state *pinctrl_recovery; 271 + bool reset_before_xfer; 271 272 }; 272 273 273 274 #define _IBMR(i2c) ((i2c)->reg_ibmr) ··· 1145 1144 { 1146 1145 struct pxa_i2c *i2c = adap->algo_data; 1147 1146 1147 + if (i2c->reset_before_xfer) { 1148 + i2c_pxa_reset(i2c); 1149 + i2c->reset_before_xfer = false; 1150 + } 1151 + 1148 1152 return i2c_pxa_internal_xfer(i2c, msgs, num, i2c_pxa_do_xfer); 1149 1153 } 1150 1154 ··· 1527 1521 } 1528 1522 } 1529 1523 1530 - i2c_pxa_reset(i2c); 1524 + /* 1525 + * Skip reset on Armada 3700 when recovery is used to avoid 1526 + * controller hang due to the pinctrl state changes done by 1527 + * the generic recovery initialization code. The reset will 1528 + * be performed later, prior to the first transfer. 1529 + */ 1530 + if (i2c_type == REGS_A3700 && i2c->adap.bus_recovery_info) 1531 + i2c->reset_before_xfer = true; 1532 + else 1533 + i2c_pxa_reset(i2c); 1531 1534 1532 1535 ret = i2c_add_numbered_adapter(&i2c->adap); 1533 1536 if (ret < 0)
+4 -1
drivers/i2c/busses/i2c-tegra.c
··· 2047 2047 * 2048 2048 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't 2049 2049 * be used for atomic transfers. ACPI device is not IRQ safe also. 2050 + * 2051 + * Devices with pinctrl states cannot be marked IRQ-safe as the pinctrl 2052 + * state transitions during runtime PM require mutexes. 2050 2053 */ 2051 - if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev)) 2054 + if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev) && !i2c_dev->dev->pins) 2052 2055 pm_runtime_irq_safe(i2c_dev->dev); 2053 2056 2054 2057 pm_runtime_enable(i2c_dev->dev);
+14 -1
drivers/iommu/amd/iommu.c
··· 2909 2909 2910 2910 static struct protection_domain identity_domain; 2911 2911 2912 + static int amd_iommu_identity_attach(struct iommu_domain *dom, struct device *dev, 2913 + struct iommu_domain *old) 2914 + { 2915 + /* 2916 + * Don't allow attaching a device to the identity domain if SNP is 2917 + * enabled. 2918 + */ 2919 + if (amd_iommu_snp_en) 2920 + return -EINVAL; 2921 + 2922 + return amd_iommu_attach_device(dom, dev, old); 2923 + } 2924 + 2912 2925 static const struct iommu_domain_ops identity_domain_ops = { 2913 - .attach_dev = amd_iommu_attach_device, 2926 + .attach_dev = amd_iommu_identity_attach, 2914 2927 }; 2915 2928 2916 2929 void amd_iommu_init_identity_domain(void)
+1 -2
drivers/iommu/intel/dmar.c
··· 1314 1314 if (fault & DMA_FSTS_ITE) { 1315 1315 head = readl(iommu->reg + DMAR_IQH_REG); 1316 1316 head = ((head >> shift) - 1 + QI_LENGTH) % QI_LENGTH; 1317 - head |= 1; 1318 1317 tail = readl(iommu->reg + DMAR_IQT_REG); 1319 1318 tail = ((tail >> shift) - 1 + QI_LENGTH) % QI_LENGTH; 1320 1319 ··· 1330 1331 do { 1331 1332 if (qi->desc_status[head] == QI_IN_USE) 1332 1333 qi->desc_status[head] = QI_ABORT; 1333 - head = (head - 2 + QI_LENGTH) % QI_LENGTH; 1334 + head = (head - 1 + QI_LENGTH) % QI_LENGTH; 1334 1335 } while (head != tail); 1335 1336 1336 1337 /*
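The dmar.c fix above walks the invalidation ring backwards using `(x - 1 + QI_LENGTH) % QI_LENGTH`. Adding the ring size before taking the remainder is what keeps the arithmetic correct at index 0, because C's `%` operator truncates toward zero and can yield a negative result. A minimal standalone sketch of the idiom (the `QI_LENGTH` value here is illustrative, not the driver's):

```c
#include <assert.h>

#define QI_LENGTH 256   /* illustrative ring size, not the driver's value */

/*
 * Step one slot backwards with wrap-around. The "+ QI_LENGTH" keeps the
 * left operand of % non-negative: in C, (-1) % 256 evaluates to -1, not
 * 255, so a naive (head - 1) % QI_LENGTH would misbehave at head == 0.
 */
static int qi_prev(int head)
{
	return (head - 1 + QI_LENGTH) % QI_LENGTH;
}
```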
+8 -4
drivers/iommu/intel/svm.c
··· 164 164 if (IS_ERR(dev_pasid)) 165 165 return PTR_ERR(dev_pasid); 166 166 167 - ret = iopf_for_domain_replace(domain, old, dev); 168 - if (ret) 169 - goto out_remove_dev_pasid; 167 + /* SVA with non-IOMMU/PRI IOPF handling is allowed. */ 168 + if (info->pri_supported) { 169 + ret = iopf_for_domain_replace(domain, old, dev); 170 + if (ret) 171 + goto out_remove_dev_pasid; 172 + } 170 173 171 174 /* Setup the pasid table: */ 172 175 sflags = cpu_feature_enabled(X86_FEATURE_LA57) ? PASID_FLAG_FL5LP : 0; ··· 184 181 185 182 return 0; 186 183 out_unwind_iopf: 187 - iopf_for_domain_replace(old, domain, dev); 184 + if (info->pri_supported) 185 + iopf_for_domain_replace(old, domain, dev); 188 186 out_remove_dev_pasid: 189 187 domain_remove_dev_pasid(domain, dev, pasid); 190 188 return ret;
+6 -6
drivers/iommu/iommu-sva.c
··· 182 182 iommu_detach_device_pasid(domain, dev, iommu_mm->pasid); 183 183 if (--domain->users == 0) { 184 184 list_del(&domain->next); 185 - iommu_domain_free(domain); 186 - } 185 + if (list_empty(&iommu_mm->sva_domains)) { 186 + list_del(&iommu_mm->mm_list_elm); 187 + if (list_empty(&iommu_sva_mms)) 188 + iommu_sva_present = false; 189 + } 187 190 188 - if (list_empty(&iommu_mm->sva_domains)) { 189 - list_del(&iommu_mm->mm_list_elm); 190 - if (list_empty(&iommu_sva_mms)) 191 - iommu_sva_present = false; 191 + iommu_domain_free(domain); 192 192 } 193 193 194 194 mutex_unlock(&iommu_sva_lock);
+5 -1
drivers/iommu/iommu.c
··· 1213 1213 if (addr == end) 1214 1214 goto map_end; 1215 1215 1216 - phys_addr = iommu_iova_to_phys(domain, addr); 1216 + /* 1217 + * Return address by iommu_iova_to_phys for 0 is 1218 + * ambiguous. Offset to address 1 if addr is 0. 1219 + */ 1220 + phys_addr = iommu_iova_to_phys(domain, addr ? addr : 1); 1217 1221 if (!phys_addr) { 1218 1222 map_size += pg_size; 1219 1223 continue;
+1
drivers/irqchip/irq-riscv-rpmi-sysmsi.c
··· 250 250 rc = riscv_acpi_get_gsi_info(fwnode, &priv->gsi_base, &id, 251 251 &nr_irqs, NULL); 252 252 if (rc) { 253 + mbox_free_channel(priv->chan); 253 254 dev_err(dev, "failed to find GSI mapping\n"); 254 255 return rc; 255 256 }
+9
drivers/mmc/host/sdhci-pci-gli.c
··· 68 68 #define GLI_9750_MISC_TX1_DLY_VALUE 0x5 69 69 #define SDHCI_GLI_9750_MISC_SSC_OFF BIT(26) 70 70 71 + #define SDHCI_GLI_9750_GM_BURST_SIZE 0x510 72 + #define SDHCI_GLI_9750_GM_BURST_SIZE_R_OSRC_LMT GENMASK(17, 16) 73 + 71 74 #define SDHCI_GLI_9750_TUNING_CONTROL 0x540 72 75 #define SDHCI_GLI_9750_TUNING_CONTROL_EN BIT(4) 73 76 #define GLI_9750_TUNING_CONTROL_EN_ON 0x1 ··· 348 345 u32 misc_value; 349 346 u32 parameter_value; 350 347 u32 control_value; 348 + u32 burst_value; 351 349 u16 ctrl2; 352 350 353 351 gl9750_wt_on(host); 352 + 353 + /* clear R_OSRC_Lmt to avoid DMA write corruption */ 354 + burst_value = sdhci_readl(host, SDHCI_GLI_9750_GM_BURST_SIZE); 355 + burst_value &= ~SDHCI_GLI_9750_GM_BURST_SIZE_R_OSRC_LMT; 356 + sdhci_writel(host, burst_value, SDHCI_GLI_9750_GM_BURST_SIZE); 354 357 355 358 driving_value = sdhci_readl(host, SDHCI_GLI_9750_DRIVING); 356 359 pll_value = sdhci_readl(host, SDHCI_GLI_9750_PLL);
+8 -1
drivers/mmc/host/sdhci.c
··· 4532 4532 * their platform code before calling sdhci_add_host(), and we 4533 4533 * won't assume 8-bit width for hosts without that CAP. 4534 4534 */ 4535 - if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA)) 4535 + if (host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA) { 4536 + host->caps1 &= ~(SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50); 4537 + if (host->quirks2 & SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400) 4538 + host->caps1 &= ~SDHCI_SUPPORT_HS400; 4539 + mmc->caps2 &= ~(MMC_CAP2_HS200 | MMC_CAP2_HS400 | MMC_CAP2_HS400_ES); 4540 + mmc->caps &= ~(MMC_CAP_DDR | MMC_CAP_UHS); 4541 + } else { 4536 4542 mmc->caps |= MMC_CAP_4_BIT_DATA; 4543 + } 4537 4544 4538 4545 if (host->quirks2 & SDHCI_QUIRK2_HOST_NO_CMD23) 4539 4546 mmc->caps &= ~MMC_CAP_CMD23;
+2 -4
drivers/mtd/nand/raw/brcmnand/brcmnand.c
··· 2350 2350 for (i = 0; i < ctrl->max_oob; i += 4) 2351 2351 oob_reg_write(ctrl, i, 0xffffffff); 2352 2352 2353 - if (mtd->oops_panic_write) 2353 + if (mtd->oops_panic_write) { 2354 2354 /* switch to interrupt polling and PIO mode */ 2355 2355 disable_ctrl_irqs(ctrl); 2356 - 2357 - if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) { 2356 + } else if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) { 2358 2357 if (ctrl->dma_trans(host, addr, (u32 *)buf, oob, mtd->writesize, 2359 2358 CMD_PROGRAM_PAGE)) 2360 - 2361 2359 ret = -EIO; 2362 2360 2363 2361 goto out;
+1 -1
drivers/mtd/nand/raw/cadence-nand-controller.c
··· 3133 3133 sizeof(*cdns_ctrl->cdma_desc), 3134 3134 &cdns_ctrl->dma_cdma_desc, 3135 3135 GFP_KERNEL); 3136 - if (!cdns_ctrl->dma_cdma_desc) 3136 + if (!cdns_ctrl->cdma_desc) 3137 3137 return -ENOMEM; 3138 3138 3139 3139 cdns_ctrl->buf_size = SZ_16K;
+12 -2
drivers/mtd/nand/raw/nand_base.c
··· 4737 4737 static int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) 4738 4738 { 4739 4739 struct nand_chip *chip = mtd_to_nand(mtd); 4740 + int ret; 4740 4741 4741 4742 if (!chip->ops.lock_area) 4742 4743 return -ENOTSUPP; 4743 4744 4744 - return chip->ops.lock_area(chip, ofs, len); 4745 + nand_get_device(chip); 4746 + ret = chip->ops.lock_area(chip, ofs, len); 4747 + nand_release_device(chip); 4748 + 4749 + return ret; 4745 4750 } 4746 4751 4747 4752 /** ··· 4758 4753 static int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) 4759 4754 { 4760 4755 struct nand_chip *chip = mtd_to_nand(mtd); 4756 + int ret; 4761 4757 4762 4758 if (!chip->ops.unlock_area) 4763 4759 return -ENOTSUPP; 4764 4760 4765 - return chip->ops.unlock_area(chip, ofs, len); 4761 + nand_get_device(chip); 4762 + ret = chip->ops.unlock_area(chip, ofs, len); 4763 + nand_release_device(chip); 4764 + 4765 + return ret; 4766 4766 } 4767 4767 4768 4768 /* Set default functions */
+3
drivers/mtd/nand/raw/pl35x-nand-controller.c
··· 862 862 PL35X_SMC_NAND_TAR_CYCLES(tmgs.t_ar) | 863 863 PL35X_SMC_NAND_TRR_CYCLES(tmgs.t_rr); 864 864 865 + writel(plnand->timings, nfc->conf_regs + PL35X_SMC_CYCLES); 866 + pl35x_smc_update_regs(nfc); 867 + 865 868 return 0; 866 869 } 867 870
+3 -3
drivers/mtd/parsers/redboot.c
··· 270 270 271 271 strcpy(names, fl->img->name); 272 272 #ifdef CONFIG_MTD_REDBOOT_PARTS_READONLY 273 - if (!memcmp(names, "RedBoot", 8) || 274 - !memcmp(names, "RedBoot config", 15) || 275 - !memcmp(names, "FIS directory", 14)) { 273 + if (!strcmp(names, "RedBoot") || 274 + !strcmp(names, "RedBoot config") || 275 + !strcmp(names, "FIS directory")) { 276 276 parts[i].mask_flags = MTD_WRITEABLE; 277 277 } 278 278 #endif
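The redboot.c change above replaces fixed-length `memcmp()` calls with `strcmp()`: the old comparisons always read their full hard-coded length (8, 15, or 14 bytes) even when the stored name ends earlier, while `strcmp()` stops at the first NUL on either side. A standalone sketch of the corrected comparison (hypothetical helper name, not the parser's code):

```c
#include <assert.h>
#include <string.h>

/*
 * strcmp() compares byte by byte up to and including the terminating
 * NUL, so it never reads past the end of a properly terminated name.
 * A memcmp() with a hard-coded length reads that many bytes regardless
 * of where the stored string actually ends.
 */
static int is_reserved_name(const char *name)
{
	return !strcmp(name, "RedBoot") ||
	       !strcmp(name, "RedBoot config") ||
	       !strcmp(name, "FIS directory");
}
```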
+7 -7
drivers/mtd/spi-nor/core.c
··· 2345 2345 } 2346 2346 2347 2347 /** 2348 - * spi_nor_spimem_check_op - check if the operation is supported 2349 - * by controller 2348 + * spi_nor_spimem_check_read_pp_op - check if a read or a page program operation is 2349 + * supported by controller 2350 2350 *@nor: pointer to a 'struct spi_nor' 2351 2351 *@op: pointer to op template to be checked 2352 2352 * 2353 2353 * Returns 0 if operation is supported, -EOPNOTSUPP otherwise. 2354 2354 */ 2355 - static int spi_nor_spimem_check_op(struct spi_nor *nor, 2356 - struct spi_mem_op *op) 2355 + static int spi_nor_spimem_check_read_pp_op(struct spi_nor *nor, 2356 + struct spi_mem_op *op) 2357 2357 { 2358 2358 /* 2359 2359 * First test with 4 address bytes. The opcode itself might ··· 2396 2396 if (spi_nor_protocol_is_dtr(nor->read_proto)) 2397 2397 op.dummy.nbytes *= 2; 2398 2398 2399 - return spi_nor_spimem_check_op(nor, &op); 2399 + return spi_nor_spimem_check_read_pp_op(nor, &op); 2400 2400 } 2401 2401 2402 2402 /** ··· 2414 2414 2415 2415 spi_nor_spimem_setup_op(nor, &op, pp->proto); 2416 2416 2417 - return spi_nor_spimem_check_op(nor, &op); 2417 + return spi_nor_spimem_check_read_pp_op(nor, &op); 2418 2418 } 2419 2419 2420 2420 /** ··· 2466 2466 2467 2467 spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 2468 2468 2469 - if (spi_nor_spimem_check_op(nor, &op)) 2469 + if (!spi_mem_supports_op(nor->spimem, &op)) 2470 2470 nor->flags |= SNOR_F_NO_READ_CR; 2471 2471 } 2472 2472 }
+11 -5
drivers/net/bonding/bond_debugfs.c
··· 34 34 for (; hash_index != RLB_NULL_INDEX; 35 35 hash_index = client_info->used_next) { 36 36 client_info = &(bond_info->rx_hashtbl[hash_index]); 37 - seq_printf(m, "%-15pI4 %-15pI4 %-17pM %s\n", 38 - &client_info->ip_src, 39 - &client_info->ip_dst, 40 - &client_info->mac_dst, 41 - client_info->slave->dev->name); 37 + if (client_info->slave) 38 + seq_printf(m, "%-15pI4 %-15pI4 %-17pM %s\n", 39 + &client_info->ip_src, 40 + &client_info->ip_dst, 41 + &client_info->mac_dst, 42 + client_info->slave->dev->name); 43 + else 44 + seq_printf(m, "%-15pI4 %-15pI4 %-17pM (none)\n", 45 + &client_info->ip_src, 46 + &client_info->ip_dst, 47 + &client_info->mac_dst); 42 48 } 43 49 44 50 spin_unlock_bh(&bond->mode_lock);
+5 -3
drivers/net/bonding/bond_main.c
··· 1530 1530 return ret; 1531 1531 } 1532 1532 1533 - static int bond_header_parse(const struct sk_buff *skb, unsigned char *haddr) 1533 + static int bond_header_parse(const struct sk_buff *skb, 1534 + const struct net_device *dev, 1535 + unsigned char *haddr) 1534 1536 { 1535 - struct bonding *bond = netdev_priv(skb->dev); 1537 + struct bonding *bond = netdev_priv(dev); 1536 1538 const struct header_ops *slave_ops; 1537 1539 struct slave *slave; 1538 1540 int ret = 0; ··· 1544 1542 if (slave) { 1545 1543 slave_ops = READ_ONCE(slave->dev->header_ops); 1546 1544 if (slave_ops && slave_ops->parse) 1547 - ret = slave_ops->parse(skb, haddr); 1545 + ret = slave_ops->parse(skb, slave->dev, haddr); 1548 1546 } 1549 1547 rcu_read_unlock(); 1550 1548 return ret;
+6 -2
drivers/net/dsa/bcm_sf2.c
··· 980 980 ret = bcm_sf2_sw_rst(priv); 981 981 if (ret) { 982 982 pr_err("%s: failed to software reset switch\n", __func__); 983 + if (!priv->wol_ports_mask) 984 + clk_disable_unprepare(priv->clk); 983 985 return ret; 984 986 } 985 987 986 988 bcm_sf2_crossbar_setup(priv); 987 989 988 990 ret = bcm_sf2_cfp_resume(ds); 989 - if (ret) 991 + if (ret) { 992 + if (!priv->wol_ports_mask) 993 + clk_disable_unprepare(priv->clk); 990 994 return ret; 991 - 995 + } 992 996 if (priv->hw_params.num_gphy == 1) 993 997 bcm_sf2_gphy_enable_set(ds, true); 994 998
-1
drivers/net/ethernet/airoha/airoha_eth.c
··· 3083 3083 if (!port) 3084 3084 continue; 3085 3085 3086 - airoha_dev_stop(port->dev); 3087 3086 unregister_netdev(port->dev); 3088 3087 airoha_metadata_dst_free(port); 3089 3088 }
+2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2929 2929 u16 type = (u16)BNXT_EVENT_BUF_PRODUCER_TYPE(data1); 2930 2930 u32 offset = BNXT_EVENT_BUF_PRODUCER_OFFSET(data2); 2931 2931 2932 + if (type >= ARRAY_SIZE(bp->bs_trace)) 2933 + goto async_event_process_exit; 2932 2934 bnxt_bs_trace_check_wrap(&bp->bs_trace[type], offset); 2933 2935 goto async_event_process_exit; 2934 2936 }
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 2146 2146 }; 2147 2147 2148 2148 #define BNXT_TRACE_BUF_MAGIC_BYTE ((u8)0xbc) 2149 - #define BNXT_TRACE_MAX 11 2149 + #define BNXT_TRACE_MAX (DBG_LOG_BUFFER_FLUSH_REQ_TYPE_ERR_QPC_TRACE + 1) 2150 2150 2151 2151 struct bnxt_bs_trace_info { 2152 2152 u8 *magic_byte;
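The bnxt pair above applies one pattern in two places: the array bound is derived from the last enum value (`DBG_LOG_BUFFER_FLUSH_REQ_TYPE_ERR_QPC_TRACE + 1` instead of a hard-coded 11), and the firmware-supplied index is checked against `ARRAY_SIZE()` before it is used. A small sketch of the idiom, with made-up trace types standing in for the firmware-defined IDs:

```c
#include <assert.h>
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Sizing the table as LAST + 1 keeps it in sync with the enum; the
 * bounds check rejects any index the peer reports that we don't know. */
enum trace_type { TRACE_A, TRACE_B, TRACE_C, TRACE_LAST = TRACE_C };

static int trace_buf[TRACE_LAST + 1];

static int record_trace(size_t type, int val)
{
	if (type >= ARRAY_SIZE(trace_buf))
		return -1;	/* unknown type: drop the event */
	trace_buf[type] = val;
	return 0;
}
```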
+1 -1
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 123 123 while (!(bcmgenet_rbuf_readl(priv, RBUF_STATUS) 124 124 & RBUF_STATUS_WOL)) { 125 125 retries++; 126 - if (retries > 5) { 126 + if (retries > 50) { 127 127 netdev_crit(dev, "polling wol mode timeout\n"); 128 128 return -ETIMEDOUT; 129 129 }
+11
drivers/net/ethernet/broadcom/tg3.c
··· 17029 17029 return err; 17030 17030 } 17031 17031 17032 + static int tg3_is_default_mac_address(u8 *addr) 17033 + { 17034 + static const u8 default_mac_address[ETH_ALEN] = { 0x00, 0x10, 0x18, 0x00, 0x00, 0x00 }; 17035 + 17036 + return ether_addr_equal(default_mac_address, addr); 17037 + } 17038 + 17032 17039 static int tg3_get_device_address(struct tg3 *tp, u8 *addr) 17033 17040 { 17034 17041 u32 hi, lo, mac_offset; ··· 17109 17102 17110 17103 if (!is_valid_ether_addr(addr)) 17111 17104 return -EINVAL; 17105 + 17106 + if (tg3_is_default_mac_address(addr)) 17107 + return device_get_mac_address(&tp->pdev->dev, addr); 17108 + 17112 17109 return 0; 17113 17110 } 17114 17111
+22 -4
drivers/net/ethernet/cadence/macb_main.c
··· 2669 2669 desc->ctrl = 0; 2670 2670 } 2671 2671 2672 + static void gem_init_rx_ring(struct macb_queue *queue) 2673 + { 2674 + queue->rx_tail = 0; 2675 + queue->rx_prepared_head = 0; 2676 + 2677 + gem_rx_refill(queue); 2678 + } 2679 + 2672 2680 static void gem_init_rings(struct macb *bp) 2673 2681 { 2674 2682 struct macb_queue *queue; ··· 2694 2686 queue->tx_head = 0; 2695 2687 queue->tx_tail = 0; 2696 2688 2697 - queue->rx_tail = 0; 2698 - queue->rx_prepared_head = 0; 2699 - 2700 - gem_rx_refill(queue); 2689 + gem_init_rx_ring(queue); 2701 2690 } 2702 2691 2703 2692 macb_init_tieoff(bp); ··· 3982 3977 { 3983 3978 struct macb *bp = netdev_priv(netdev); 3984 3979 int ret; 3980 + 3981 + if (!(netdev->hw_features & NETIF_F_NTUPLE)) 3982 + return -EOPNOTSUPP; 3985 3983 3986 3984 switch (cmd->cmd) { 3987 3985 case ETHTOOL_SRXCLSRLINS: ··· 5955 5947 rtnl_unlock(); 5956 5948 } 5957 5949 5950 + if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) 5951 + macb_init_buffers(bp); 5952 + 5958 5953 for (q = 0, queue = bp->queues; q < bp->num_queues; 5959 5954 ++q, ++queue) { 5955 + if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) { 5956 + if (macb_is_gem(bp)) 5957 + gem_init_rx_ring(queue); 5958 + else 5959 + macb_init_rx_ring(queue); 5960 + } 5961 + 5960 5962 napi_enable(&queue->napi_rx); 5961 5963 napi_enable(&queue->napi_tx); 5962 5964 }
+3 -1
drivers/net/ethernet/cadence/macb_ptp.c
··· 357 357 { 358 358 struct macb *bp = netdev_priv(ndev); 359 359 360 - if (bp->ptp_clock) 360 + if (bp->ptp_clock) { 361 361 ptp_clock_unregister(bp->ptp_clock); 362 + bp->ptp_clock = NULL; 363 + } 362 364 363 365 gem_ptp_clear_timer(bp); 364 366
+6 -3
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 757 757 adapter->num_vlan_filters++; 758 758 iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER); 759 759 } else if (f->state == IAVF_VLAN_REMOVE) { 760 - /* IAVF_VLAN_REMOVE means that VLAN wasn't yet removed. 761 - * We can safely only change the state here. 760 + /* Re-add the filter since we cannot tell whether the 761 + * pending delete has already been processed by the PF. 762 + * A duplicate add is harmless. 762 763 */ 763 - f->state = IAVF_VLAN_ACTIVE; 764 + f->state = IAVF_VLAN_ADD; 765 + iavf_schedule_aq_request(adapter, 766 + IAVF_FLAG_AQ_ADD_VLAN_FILTER); 764 767 } 765 768 766 769 clearout:
+2
drivers/net/ethernet/intel/igc/igc.h
··· 781 781 struct kernel_hwtstamp_config *config, 782 782 struct netlink_ext_ack *extack); 783 783 void igc_ptp_tx_hang(struct igc_adapter *adapter); 784 + void igc_ptp_clear_xsk_tx_tstamp_queue(struct igc_adapter *adapter, 785 + u16 queue_id); 784 786 void igc_ptp_read(struct igc_adapter *adapter, struct timespec64 *ts); 785 787 void igc_ptp_tx_tstamp_event(struct igc_adapter *adapter); 786 788
+9 -5
drivers/net/ethernet/intel/igc/igc_main.c
··· 264 264 /* reset next_to_use and next_to_clean */ 265 265 tx_ring->next_to_use = 0; 266 266 tx_ring->next_to_clean = 0; 267 + 268 + /* Clear any lingering XSK TX timestamp requests */ 269 + if (test_bit(IGC_RING_FLAG_TX_HWTSTAMP, &tx_ring->flags)) { 270 + struct igc_adapter *adapter = netdev_priv(tx_ring->netdev); 271 + 272 + igc_ptp_clear_xsk_tx_tstamp_queue(adapter, tx_ring->queue_index); 273 + } 267 274 } 268 275 269 276 /** ··· 1737 1730 /* The minimum packet size with TCTL.PSP set is 17 so pad the skb 1738 1731 * in order to meet this minimum size requirement. 1739 1732 */ 1740 - if (skb->len < 17) { 1741 - if (skb_padto(skb, 17)) 1742 - return NETDEV_TX_OK; 1743 - skb->len = 17; 1744 - } 1733 + if (skb_put_padto(skb, 17)) 1734 + return NETDEV_TX_OK; 1745 1735 1746 1736 return igc_xmit_frame_ring(skb, igc_tx_queue_mapping(adapter, skb)); 1747 1737 }
+33
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 577 577 spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags); 578 578 } 579 579 580 + /** 581 + * igc_ptp_clear_xsk_tx_tstamp_queue - Clear pending XSK TX timestamps for a queue 582 + * @adapter: Board private structure 583 + * @queue_id: TX queue index to clear timestamps for 584 + * 585 + * Iterates over all TX timestamp registers and releases any pending 586 + * timestamp requests associated with the given TX queue. This is 587 + * called when an XDP pool is being disabled to ensure no stale 588 + * timestamp references remain. 589 + */ 590 + void igc_ptp_clear_xsk_tx_tstamp_queue(struct igc_adapter *adapter, u16 queue_id) 591 + { 592 + unsigned long flags; 593 + int i; 594 + 595 + spin_lock_irqsave(&adapter->ptp_tx_lock, flags); 596 + 597 + for (i = 0; i < IGC_MAX_TX_TSTAMP_REGS; i++) { 598 + struct igc_tx_timestamp_request *tstamp = &adapter->tx_tstamp[i]; 599 + 600 + if (tstamp->buffer_type != IGC_TX_BUFFER_TYPE_XSK) 601 + continue; 602 + if (tstamp->xsk_queue_index != queue_id) 603 + continue; 604 + if (!tstamp->xsk_tx_buffer) 605 + continue; 606 + 607 + igc_ptp_free_tx_buffer(adapter, tstamp); 608 + } 609 + 610 + spin_unlock_irqrestore(&adapter->ptp_tx_lock, flags); 611 + } 612 + 580 613 static void igc_ptp_disable_tx_timestamp(struct igc_adapter *adapter) 581 614 { 582 615 struct igc_hw *hw = &adapter->hw;
+36 -13
drivers/net/ethernet/intel/libie/fwlog.c
··· 433 433 module = libie_find_module_by_dentry(fwlog->debugfs_modules, dentry); 434 434 if (module < 0) { 435 435 dev_info(dev, "unknown module\n"); 436 - return -EINVAL; 436 + count = -EINVAL; 437 + goto free_cmd_buf; 437 438 } 438 439 439 440 cnt = sscanf(cmd_buf, "%s", user_val); 440 - if (cnt != 1) 441 - return -EINVAL; 441 + if (cnt != 1) { 442 + count = -EINVAL; 443 + goto free_cmd_buf; 444 + } 442 445 443 446 log_level = sysfs_match_string(libie_fwlog_level_string, user_val); 444 447 if (log_level < 0) { 445 448 dev_info(dev, "unknown log level '%s'\n", user_val); 446 - return -EINVAL; 449 + count = -EINVAL; 450 + goto free_cmd_buf; 447 451 } 448 452 449 453 if (module != LIBIE_AQC_FW_LOG_ID_MAX) { ··· 461 457 for (i = 0; i < LIBIE_AQC_FW_LOG_ID_MAX; i++) 462 458 fwlog->cfg.module_entries[i].log_level = log_level; 463 459 } 460 + 461 + free_cmd_buf: 462 + kfree(cmd_buf); 464 463 465 464 return count; 466 465 } ··· 522 515 return PTR_ERR(cmd_buf); 523 516 524 517 ret = sscanf(cmd_buf, "%s", user_val); 525 - if (ret != 1) 526 - return -EINVAL; 518 + if (ret != 1) { 519 + count = -EINVAL; 520 + goto free_cmd_buf; 521 + } 527 522 528 523 ret = kstrtos16(user_val, 0, &nr_messages); 529 - if (ret) 530 - return ret; 524 + if (ret) { 525 + count = ret; 526 + goto free_cmd_buf; 527 + } 531 528 532 529 if (nr_messages < LIBIE_AQC_FW_LOG_MIN_RESOLUTION || 533 530 nr_messages > LIBIE_AQC_FW_LOG_MAX_RESOLUTION) { 534 531 dev_err(dev, "Invalid FW log number of messages %d, value must be between %d - %d\n", 535 532 nr_messages, LIBIE_AQC_FW_LOG_MIN_RESOLUTION, 536 533 LIBIE_AQC_FW_LOG_MAX_RESOLUTION); 537 - return -EINVAL; 534 + count = -EINVAL; 535 + goto free_cmd_buf; 538 536 } 539 537 540 538 fwlog->cfg.log_resolution = nr_messages; 539 + 540 + free_cmd_buf: 541 + kfree(cmd_buf); 541 542 542 543 return count; 543 544 } ··· 603 588 return PTR_ERR(cmd_buf); 604 589 605 590 ret = sscanf(cmd_buf, "%s", user_val); 606 - if (ret != 1) 607 - return -EINVAL; 591 + if (ret != 1) { 592 + ret = -EINVAL; 593 + goto free_cmd_buf; 594 + } 608 595 609 596 ret = kstrtobool(user_val, &enable); 610 597 if (ret) ··· 641 624 */ 642 625 if (WARN_ON(ret != (ssize_t)count && ret >= 0)) 643 626 ret = -EIO; 627 + free_cmd_buf: 628 + kfree(cmd_buf); 644 629 645 630 return ret; 646 631 } ··· 701 682 return PTR_ERR(cmd_buf); 702 683 703 684 ret = sscanf(cmd_buf, "%s", user_val); 704 - if (ret != 1) 705 - return -EINVAL; 685 + if (ret != 1) { 686 + ret = -EINVAL; 687 + goto free_cmd_buf; 688 + } 706 689 707 690 index = sysfs_match_string(libie_fwlog_log_size, user_val); 708 691 if (index < 0) { ··· 733 712 */ 734 713 if (WARN_ON(ret != (ssize_t)count && ret >= 0)) 735 714 ret = -EIO; 715 + free_cmd_buf: 716 + kfree(cmd_buf); 736 717 737 718 return ret; 738 719 }
+2 -2
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 5016 5016 if (priv->percpu_pools) 5017 5017 numbufs = port->nrxqs * 2; 5018 5018 5019 - if (change_percpu) 5019 + if (change_percpu && priv->global_tx_fc) 5020 5020 mvpp2_bm_pool_update_priv_fc(priv, false); 5021 5021 5022 5022 for (i = 0; i < numbufs; i++) ··· 5041 5041 mvpp2_open(port->dev); 5042 5042 } 5043 5043 5044 - if (change_percpu) 5044 + if (change_percpu && priv->global_tx_fc) 5045 5045 mvpp2_bm_pool_update_priv_fc(priv, true); 5046 5046 5047 5047 return 0;
+1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
··· 287 287 struct mlx5e_ipsec_dwork *dwork; 288 288 struct mlx5e_ipsec_limits limits; 289 289 u32 rx_mapped_id; 290 + u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)]; 290 291 }; 291 292 292 293 struct mlx5_accel_pol_xfrm_attrs {
+23 -29
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
··· 310 310 mlx5e_ipsec_aso_query(sa_entry, data); 311 311 } 312 312 313 - static void mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry, 314 - u32 mode_param) 313 + static void 314 + mlx5e_ipsec_update_esn_state(struct mlx5e_ipsec_sa_entry *sa_entry, 315 + u32 mode_param, 316 + struct mlx5_accel_esp_xfrm_attrs *attrs) 315 317 { 316 - struct mlx5_accel_esp_xfrm_attrs attrs = {}; 317 318 struct mlx5_wqe_aso_ctrl_seg data = {}; 318 319 319 320 if (mode_param < MLX5E_IPSEC_ESN_SCOPE_MID) { ··· 324 323 sa_entry->esn_state.overlap = 1; 325 324 } 326 325 327 - mlx5e_ipsec_build_accel_xfrm_attrs(sa_entry, &attrs); 328 - 329 - /* It is safe to execute the modify below unlocked since the only flows 330 - * that could affect this HW object, are create, destroy and this work. 331 - * 332 - * Creation flow can't co-exist with this modify work, the destruction 333 - * flow would cancel this work, and this work is a single entity that 334 - * can't conflict with it self. 335 - */ 336 - spin_unlock_bh(&sa_entry->x->lock); 337 - mlx5_accel_esp_modify_xfrm(sa_entry, &attrs); 338 - spin_lock_bh(&sa_entry->x->lock); 326 + mlx5e_ipsec_build_accel_xfrm_attrs(sa_entry, attrs); 339 327 340 328 data.data_offset_condition_operand = 341 329 MLX5_IPSEC_ASO_REMOVE_FLOW_PKT_CNT_OFFSET; ··· 360 370 static void mlx5e_ipsec_handle_limits(struct mlx5e_ipsec_sa_entry *sa_entry) 361 371 { 362 372 struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs; 363 - struct mlx5e_ipsec *ipsec = sa_entry->ipsec; 364 - struct mlx5e_ipsec_aso *aso = ipsec->aso; 365 373 bool soft_arm, hard_arm; 366 374 u64 hard_cnt; 367 375 368 376 lockdep_assert_held(&sa_entry->x->lock); 369 377 370 - soft_arm = !MLX5_GET(ipsec_aso, aso->ctx, soft_lft_arm); 371 - hard_arm = !MLX5_GET(ipsec_aso, aso->ctx, hard_lft_arm); 378 + soft_arm = !MLX5_GET(ipsec_aso, sa_entry->ctx, soft_lft_arm); 379 + hard_arm = !MLX5_GET(ipsec_aso, sa_entry->ctx, hard_lft_arm); 372 380 if (!soft_arm && !hard_arm) 373 381 /* It is not lifetime event */ 374 382 return; 375 383 376 - hard_cnt = MLX5_GET(ipsec_aso, aso->ctx, remove_flow_pkt_cnt); 384 + hard_cnt = MLX5_GET(ipsec_aso, sa_entry->ctx, remove_flow_pkt_cnt); 377 385 if (!hard_cnt || hard_arm) { 378 386 /* It is possible to see packet counter equal to zero without 379 387 * hard limit event armed. Such situation can be if packet ··· 441 453 struct mlx5e_ipsec_work *work = 442 454 container_of(_work, struct mlx5e_ipsec_work, work); 443 455 struct mlx5e_ipsec_sa_entry *sa_entry = work->data; 456 + struct mlx5_accel_esp_xfrm_attrs tmp = {}; 444 457 struct mlx5_accel_esp_xfrm_attrs *attrs; 445 - struct mlx5e_ipsec_aso *aso; 458 + bool need_modify = false; 446 459 int ret; 447 460 448 - aso = sa_entry->ipsec->aso; 449 461 attrs = &sa_entry->attrs; 450 462 451 463 spin_lock_bh(&sa_entry->x->lock); ··· 453 465 if (ret) 454 466 goto unlock; 455 467 456 - if (attrs->replay_esn.trigger && 457 - !MLX5_GET(ipsec_aso, aso->ctx, esn_event_arm)) { 458 - u32 mode_param = MLX5_GET(ipsec_aso, aso->ctx, mode_parameter); 459 - 460 - mlx5e_ipsec_update_esn_state(sa_entry, mode_param); 461 - } 462 - 463 468 if (attrs->lft.soft_packet_limit != XFRM_INF) 464 469 mlx5e_ipsec_handle_limits(sa_entry); 465 470 471 + if (attrs->replay_esn.trigger && 472 + !MLX5_GET(ipsec_aso, sa_entry->ctx, esn_event_arm)) { 473 + u32 mode_param = MLX5_GET(ipsec_aso, sa_entry->ctx, 474 + mode_parameter); 475 + 476 + mlx5e_ipsec_update_esn_state(sa_entry, mode_param, &tmp); 477 + need_modify = true; 478 + } 479 + 466 480 unlock: 467 481 spin_unlock_bh(&sa_entry->x->lock); 482 + if (need_modify) 483 + mlx5_accel_esp_modify_xfrm(sa_entry, &tmp); 468 484 kfree(work); 469 485 } 470 486 ··· 621 629 /* We are in atomic context */ 622 630 udelay(10); 623 631 } while (ret && time_is_after_jiffies(expires)); 632 + if (!ret) 633 + memcpy(sa_entry->ctx, aso->ctx, MLX5_ST_SZ_BYTES(ipsec_aso)); 624 634 spin_unlock_bh(&aso->lock); 625 635 return ret; 626 636 }
+9 -14
drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
··· 1489 1489 return err; 1490 1490 } 1491 1491 1492 - static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev) 1492 + static u32 mlx5_esw_qos_lag_link_speed_get(struct mlx5_core_dev *mdev, 1493 + bool take_rtnl) 1493 1494 { 1494 1495 struct ethtool_link_ksettings lksettings; 1495 1496 struct net_device *slave, *master; 1496 1497 u32 speed = SPEED_UNKNOWN; 1497 1498 1498 - /* Lock ensures a stable reference to master and slave netdevice 1499 - * while port speed of master is queried. 1500 - */ 1501 - ASSERT_RTNL(); 1502 - 1503 1499 slave = mlx5_uplink_netdev_get(mdev); 1504 1500 if (!slave) 1505 1501 goto out; 1506 1502 1503 + if (take_rtnl) 1504 + rtnl_lock(); 1507 1505 master = netdev_master_upper_dev_get(slave); 1508 1506 if (master && !__ethtool_get_link_ksettings(master, &lksettings)) 1509 1507 speed = lksettings.base.speed; 1508 + if (take_rtnl) 1509 + rtnl_unlock(); 1510 1510 1511 1511 out: 1512 1512 mlx5_uplink_netdev_put(mdev, slave); ··· 1514 1514 } 1515 1515 1516 1516 static int mlx5_esw_qos_max_link_speed_get(struct mlx5_core_dev *mdev, u32 *link_speed_max, 1517 - bool hold_rtnl_lock, struct netlink_ext_ack *extack) 1517 + bool take_rtnl, 1518 + struct netlink_ext_ack *extack) 1518 1519 { 1519 1520 int err; 1520 1521 1521 1522 if (!mlx5_lag_is_active(mdev)) 1522 1523 goto skip_lag; 1523 1524 1524 - if (hold_rtnl_lock) 1525 - rtnl_lock(); 1526 - 1527 - *link_speed_max = mlx5_esw_qos_lag_link_speed_get_locked(mdev); 1528 - 1529 - if (hold_rtnl_lock) 1530 - rtnl_unlock(); 1525 + *link_speed_max = mlx5_esw_qos_lag_link_speed_get(mdev, take_rtnl); 1531 1526 1532 1527 if (*link_speed_max != (u32)SPEED_UNKNOWN) 1533 1528 return 0;
+3 -3
drivers/net/ethernet/microsoft/mana/hw_channel.c
··· 814 814 gc->max_num_cqs = 0; 815 815 } 816 816 817 - kfree(hwc->caller_ctx); 818 - hwc->caller_ctx = NULL; 819 - 820 817 if (hwc->txq) 821 818 mana_hwc_destroy_wq(hwc, hwc->txq); 822 819 ··· 822 825 823 826 if (hwc->cq) 824 827 mana_hwc_destroy_cq(hwc->gdma_dev->gdma_context, hwc->cq); 828 + 829 + kfree(hwc->caller_ctx); 830 + hwc->caller_ctx = NULL; 825 831 826 832 mana_gd_free_res_map(&hwc->inflight_msg_res); 827 833
+5
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 1075 1075 xdp_prepare_buff(&xdp, pa, PRUETH_HEADROOM, pkt_len, false); 1076 1076 1077 1077 *xdp_state = emac_run_xdp(emac, &xdp, &pkt_len); 1078 + if (*xdp_state == ICSSG_XDP_CONSUMED) { 1079 + page_pool_recycle_direct(pool, page); 1080 + goto requeue; 1081 + } 1082 + 1078 1083 if (*xdp_state != ICSSG_XDP_PASS) 1079 1084 goto requeue; 1080 1085 headroom = xdp.data - xdp.data_hard_start;
+4 -1
drivers/net/netdevsim/netdev.c
··· 109 109 int ret; 110 110 111 111 ret = __dev_forward_skb(rx_dev, skb); 112 - if (ret) 112 + if (ret) { 113 + if (psp_ext) 114 + __skb_ext_put(psp_ext); 113 115 return ret; 116 + } 114 117 115 118 nsim_psp_handle_ext(skb, psp_ext); 116 119
+6 -6
drivers/net/usb/aqc111.c
··· 1395 1395 aqc111_write16_cmd_nopm(dev, AQ_ACCESS_MAC, 1396 1396 SFR_MEDIUM_STATUS_MODE, 2, &reg16); 1397 1397 1398 - aqc111_write_cmd(dev, AQ_WOL_CFG, 0, 0, 1399 - WOL_CFG_SIZE, &wol_cfg); 1400 - aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, 1401 - &aqc111_data->phy_cfg); 1398 + aqc111_write_cmd_nopm(dev, AQ_WOL_CFG, 0, 0, 1399 + WOL_CFG_SIZE, &wol_cfg); 1400 + aqc111_write32_cmd_nopm(dev, AQ_PHY_OPS, 0, 0, 1401 + &aqc111_data->phy_cfg); 1402 1402 } else { 1403 1403 aqc111_data->phy_cfg |= AQ_LOW_POWER; 1404 - aqc111_write32_cmd(dev, AQ_PHY_OPS, 0, 0, 1405 - &aqc111_data->phy_cfg); 1404 + aqc111_write32_cmd_nopm(dev, AQ_PHY_OPS, 0, 0, 1405 + &aqc111_data->phy_cfg); 1406 1406 1407 1407 /* Disable RX path */ 1408 1408 aqc111_read16_cmd_nopm(dev, AQ_ACCESS_MAC,
+6 -4
drivers/net/usb/cdc_ncm.c
··· 1656 1656 struct usbnet *dev = netdev_priv(skb_in->dev); 1657 1657 struct usb_cdc_ncm_ndp16 *ndp16; 1658 1658 int ret = -EINVAL; 1659 + size_t ndp_len; 1659 1660 1660 1661 if ((ndpoffset + sizeof(struct usb_cdc_ncm_ndp16)) > skb_in->len) { 1661 1662 netif_dbg(dev, rx_err, dev->net, "invalid NDP offset <%u>\n", ··· 1676 1675 sizeof(struct usb_cdc_ncm_dpe16)); 1677 1676 ret--; /* we process NDP entries except for the last one */ 1678 1677 1679 - if ((sizeof(struct usb_cdc_ncm_ndp16) + 1680 - ret * (sizeof(struct usb_cdc_ncm_dpe16))) > skb_in->len) { 1678 + ndp_len = struct_size_t(struct usb_cdc_ncm_ndp16, dpe16, ret); 1679 + if (ndpoffset + ndp_len > skb_in->len) { 1681 1680 netif_dbg(dev, rx_err, dev->net, "Invalid nframes = %d\n", ret); 1682 1681 ret = -EINVAL; 1683 1682 } ··· 1693 1692 struct usbnet *dev = netdev_priv(skb_in->dev); 1694 1693 struct usb_cdc_ncm_ndp32 *ndp32; 1695 1694 int ret = -EINVAL; 1695 + size_t ndp_len; 1696 1696 1697 1697 if ((ndpoffset + sizeof(struct usb_cdc_ncm_ndp32)) > skb_in->len) { 1698 1698 netif_dbg(dev, rx_err, dev->net, "invalid NDP offset <%u>\n", ··· 1713 1711 sizeof(struct usb_cdc_ncm_dpe32)); 1714 1712 ret--; /* we process NDP entries except for the last one */ 1715 1713 1716 - if ((sizeof(struct usb_cdc_ncm_ndp32) + 1717 - ret * (sizeof(struct usb_cdc_ncm_dpe32))) > skb_in->len) { 1714 + ndp_len = struct_size_t(struct usb_cdc_ncm_ndp32, dpe32, ret); 1715 + if (ndpoffset + ndp_len > skb_in->len) { 1718 1716 netif_dbg(dev, rx_err, dev->net, "Invalid nframes = %d\n", ret); 1719 1717 ret = -EINVAL; 1720 1718 }
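The cdc_ncm fix above computes the NDP length with `struct_size_t()` (the flexible-array sizing helper from `<linux/overflow.h>`) and validates `ndpoffset + ndp_len` against the skb length, rather than open-coding the header-plus-entries multiplication. A userspace sketch of the same check, with a simplified stand-in for the macro (the kernel version additionally saturates on overflow) and an invented descriptor layout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Simplified stand-in for the kernel's struct_size_t(): the size of a
 * struct with `n` flexible-array elements appended, as a size_t.
 */
#define STRUCT_SIZE(type, member, n) \
	(offsetof(type, member) + sizeof(((type *)0)->member[0]) * (size_t)(n))

/* Invented descriptor layout, loosely shaped like an NCM NDP header. */
struct ndp_hdr {
	uint32_t signature;
	uint16_t length;
	uint16_t next;
	uint32_t dpe[];		/* flexible array of datagram entries */
};

/*
 * Validate that a descriptor with `n` entries, starting at offset
 * `off`, fits entirely inside a buffer of `buf_len` bytes.
 */
static int ndp_fits(size_t off, size_t n, size_t buf_len)
{
	return off + STRUCT_SIZE(struct ndp_hdr, dpe, n) <= buf_len;
}
```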
+2 -4
drivers/net/wireless/ath/ath9k/channel.c
··· 1006 1006 skb_set_queue_mapping(skb, IEEE80211_AC_VO); 1007 1007 1008 1008 if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, NULL)) 1009 - goto error; 1009 + return; 1010 1010 1011 1011 txctl.txq = sc->tx.txq_map[IEEE80211_AC_VO]; 1012 1012 if (ath_tx_start(sc->hw, skb, &txctl)) ··· 1119 1119 1120 1120 skb->priority = 7; 1121 1121 skb_set_queue_mapping(skb, IEEE80211_AC_VO); 1122 - if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta)) { 1123 - dev_kfree_skb_any(skb); 1122 + if (!ieee80211_tx_prepare_skb(sc->hw, vif, skb, band, &sta)) 1124 1123 return false; 1125 - } 1126 1124 break; 1127 1125 default: 1128 1126 return false;
+1 -3
drivers/net/wireless/mediatek/mt76/scan.c
··· 63 63 64 64 rcu_read_lock(); 65 65 66 - if (!ieee80211_tx_prepare_skb(phy->hw, vif, skb, band, NULL)) { 67 - ieee80211_free_txskb(phy->hw, skb); 66 + if (!ieee80211_tx_prepare_skb(phy->hw, vif, skb, band, NULL)) 68 67 goto out; 69 - } 70 68 71 69 info = IEEE80211_SKB_CB(skb); 72 70 if (req->no_cck)
+1 -1
drivers/net/wireless/ti/wlcore/tx.c
··· 210 210 if (skb_headroom(skb) < (total_len - skb->len) && 211 211 pskb_expand_head(skb, (total_len - skb->len), 0, GFP_ATOMIC)) { 212 212 wl1271_free_tx_id(wl, id); 213 - return -EAGAIN; 213 + return -ENOMEM; 214 214 } 215 215 desc = skb_push(skb, total_len - skb->len); 216 216
+1 -2
drivers/net/wireless/virtual/mac80211_hwsim.c
··· 3021 3021 hwsim->tmp_chan->band, 3022 3022 NULL)) { 3023 3023 rcu_read_unlock(); 3024 - kfree_skb(probe); 3025 3024 continue; 3026 3025 } 3027 3026 ··· 6488 6489 if (info->attrs[HWSIM_ATTR_PMSR_SUPPORT]) { 6489 6490 struct cfg80211_pmsr_capabilities *pmsr_capa; 6490 6491 6491 - pmsr_capa = kmalloc_obj(*pmsr_capa); 6492 + pmsr_capa = kzalloc_obj(*pmsr_capa); 6492 6493 if (!pmsr_capa) { 6493 6494 ret = -ENOMEM; 6494 6495 goto out_free;
+2 -2
drivers/nfc/nxp-nci/i2c.c
··· 47 47 { 48 48 struct nxp_nci_i2c_phy *phy = (struct nxp_nci_i2c_phy *) phy_id; 49 49 50 - gpiod_set_value(phy->gpiod_fw, (mode == NXP_NCI_MODE_FW) ? 1 : 0); 51 - gpiod_set_value(phy->gpiod_en, (mode != NXP_NCI_MODE_COLD) ? 1 : 0); 50 + gpiod_set_value_cansleep(phy->gpiod_fw, (mode == NXP_NCI_MODE_FW) ? 1 : 0); 51 + gpiod_set_value_cansleep(phy->gpiod_en, (mode != NXP_NCI_MODE_COLD) ? 1 : 0); 52 52 usleep_range(10000, 15000); 53 53 54 54 if (mode == NXP_NCI_MODE_COLD)
+3 -2
drivers/nvdimm/bus.c
··· 486 486 static void nd_async_device_register(void *d, async_cookie_t cookie) 487 487 { 488 488 struct device *dev = d; 489 + struct device *parent = dev->parent; 489 490 490 491 if (device_add(dev) != 0) { 491 492 dev_err(dev, "%s: failed\n", __func__); 492 493 put_device(dev); 493 494 } 494 495 put_device(dev); 495 - if (dev->parent) 496 - put_device(dev->parent); 496 + if (parent) 497 + put_device(parent); 497 498 } 498 499 499 500 static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+5
drivers/pci/endpoint/functions/pci-epf-test.c
··· 894 894 dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret); 895 895 bar->submap = old_submap; 896 896 bar->num_submap = old_nsub; 897 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar); 898 + if (ret) 899 + dev_warn(&epf->dev, "Failed to restore the original BAR mapping: %d\n", 900 + ret); 901 + 897 902 kfree(submap); 898 903 goto err; 899 904 }
+41 -13
drivers/pci/pwrctrl/core.c
··· 268 268 } 269 269 EXPORT_SYMBOL_GPL(pci_pwrctrl_power_on_devices); 270 270 271 + /* 272 + * Check whether the pwrctrl device really needs to be created or not. The 273 + * pwrctrl device will only be created if the node satisfies below requirements: 274 + * 275 + * 1. Presence of compatible property with "pci" prefix to match against the 276 + * pwrctrl driver (AND) 277 + * 2. At least one of the power supplies defined in the devicetree node of the 278 + * device (OR) in the remote endpoint parent node to indicate pwrctrl 279 + * requirement. 280 + */ 281 + static bool pci_pwrctrl_is_required(struct device_node *np) 282 + { 283 + struct device_node *endpoint; 284 + const char *compat; 285 + int ret; 286 + 287 + ret = of_property_read_string(np, "compatible", &compat); 288 + if (ret < 0) 289 + return false; 290 + 291 + if (!strstarts(compat, "pci")) 292 + return false; 293 + 294 + if (of_pci_supply_present(np)) 295 + return true; 296 + 297 + if (of_graph_is_present(np)) { 298 + for_each_endpoint_of_node(np, endpoint) { 299 + struct device_node *remote __free(device_node) = 300 + of_graph_get_remote_port_parent(endpoint); 301 + if (remote) { 302 + if (of_pci_supply_present(remote)) 303 + return true; 304 + } 305 + } 306 + } 307 + 308 + return false; 309 + } 310 + 271 311 static int pci_pwrctrl_create_device(struct device_node *np, 272 312 struct device *parent) 273 313 { ··· 327 287 return 0; 328 288 } 329 289 330 - /* 331 - * Sanity check to make sure that the node has the compatible property 332 - * to allow driver binding. 333 - */ 334 - if (!of_property_present(np, "compatible")) 335 - return 0; 336 - 337 - /* 338 - * Check whether the pwrctrl device really needs to be created or not. 339 - * This is decided based on at least one of the power supplies defined 340 - * in the devicetree node of the device or the graph property. 
341 - */ 342 - if (!of_pci_supply_present(np) && !of_graph_is_present(np)) { 290 + if (!pci_pwrctrl_is_required(np)) { 343 291 dev_dbg(parent, "Skipping OF node: %s\n", np->name); 344 292 return 0; 345 293 }
+4 -8
drivers/pmdomain/bcm/bcm2835-power.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/delay.h> 11 11 #include <linux/io.h> 12 + #include <linux/iopoll.h> 12 13 #include <linux/mfd/bcm2835-pm.h> 13 14 #include <linux/module.h> 14 15 #include <linux/platform_device.h> ··· 154 153 static int bcm2835_asb_control(struct bcm2835_power *power, u32 reg, bool enable) 155 154 { 156 155 void __iomem *base = power->asb; 157 - u64 start; 158 156 u32 val; 159 157 160 158 switch (reg) { ··· 166 166 break; 167 167 } 168 168 169 - start = ktime_get_ns(); 170 - 171 169 /* Enable the module's async AXI bridges. */ 172 170 if (enable) { 173 171 val = readl(base + reg) & ~ASB_REQ_STOP; ··· 174 176 } 175 177 writel(PM_PASSWORD | val, base + reg); 176 178 177 - while (!!(readl(base + reg) & ASB_ACK) == enable) { 178 - cpu_relax(); 179 - if (ktime_get_ns() - start >= 1000) 180 - return -ETIMEDOUT; 181 - } 179 + if (readl_poll_timeout_atomic(base + reg, val, 180 + !!(val & ASB_ACK) != enable, 0, 5)) 181 + return -ETIMEDOUT; 182 182 183 183 return 0; 184 184 }
+1 -1
drivers/pmdomain/mediatek/mtk-pm-domains.c
··· 1203 1203 scpsys->soc_data = soc; 1204 1204 1205 1205 scpsys->pd_data.domains = scpsys->domains; 1206 - scpsys->pd_data.num_domains = soc->num_domains; 1206 + scpsys->pd_data.num_domains = num_domains; 1207 1207 1208 1208 parent = dev->parent; 1209 1209 if (!parent) {
+2
drivers/resctrl/mpam_devices.c
··· 1428 1428 static int mpam_restore_mbwu_state(void *_ris) 1429 1429 { 1430 1430 int i; 1431 + u64 val; 1431 1432 struct mon_read mwbu_arg; 1432 1433 struct mpam_msc_ris *ris = _ris; 1433 1434 struct mpam_class *class = ris->vmsc->comp->class; ··· 1438 1437 mwbu_arg.ris = ris; 1439 1438 mwbu_arg.ctx = &ris->mbwu_state[i].cfg; 1440 1439 mwbu_arg.type = mpam_msmon_choose_counter(class); 1440 + mwbu_arg.val = &val; 1441 1441 1442 1442 __ris_msmon_read(&mwbu_arg); 1443 1443 }
+15 -7
drivers/resctrl/test_mpam_devices.c
··· 322 322 mutex_unlock(&mpam_list_lock); 323 323 } 324 324 325 + static void __test_mpam_reset_msc_bitmap(struct mpam_msc *msc, u16 reg, u16 wd) 326 + { 327 + /* Avoid warnings when running with CONFIG_DEBUG_PREEMPT */ 328 + guard(preempt)(); 329 + 330 + mpam_reset_msc_bitmap(msc, reg, wd); 331 + } 332 + 325 333 static void test_mpam_reset_msc_bitmap(struct kunit *test) 326 334 { 327 - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); 335 + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); 328 336 struct mpam_msc fake_msc = {}; 329 337 u32 *test_result; 330 338 ··· 347 339 mutex_init(&fake_msc.part_sel_lock); 348 340 mutex_lock(&fake_msc.part_sel_lock); 349 341 350 - test_result = (u32 *)(buf + MPAMCFG_CPBM); 342 + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); 351 343 352 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); 344 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); 353 345 KUNIT_EXPECT_EQ(test, test_result[0], 0); 354 346 KUNIT_EXPECT_EQ(test, test_result[1], 0); 355 347 test_result[0] = 0; 356 348 test_result[1] = 0; 357 349 358 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 1); 350 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 1); 359 351 KUNIT_EXPECT_EQ(test, test_result[0], 1); 360 352 KUNIT_EXPECT_EQ(test, test_result[1], 0); 361 353 test_result[0] = 0; 362 354 test_result[1] = 0; 363 355 364 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 16); 356 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 16); 365 357 KUNIT_EXPECT_EQ(test, test_result[0], 0xffff); 366 358 KUNIT_EXPECT_EQ(test, test_result[1], 0); 367 359 test_result[0] = 0; 368 360 test_result[1] = 0; 369 361 370 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 32); 362 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 32); 371 363 KUNIT_EXPECT_EQ(test, test_result[0], 0xffffffff); 372 364 KUNIT_EXPECT_EQ(test, test_result[1], 0); 373 365 test_result[0] = 0; 374 366 test_result[1] = 0; 375 367 376 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 33); 368 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 33); 377 369 KUNIT_EXPECT_EQ(test, test_result[0], 0xffffffff); 378 370 KUNIT_EXPECT_EQ(test, test_result[1], 1); 379 371 test_result[0] = 0;
+3
drivers/reset/reset-rzg2l-usbphy-ctrl.c
··· 136 136 { 137 137 u32 val = power_on ? 0 : 1; 138 138 139 + if (!pwrrdy) 140 + return 0; 141 + 139 142 /* The initialization path guarantees that the mask is 1 bit long. */ 140 143 return regmap_field_update_bits(pwrrdy, 1, val); 141 144 }
+2 -4
drivers/slimbus/qcom-ngd-ctrl.c
··· 1535 1535 ngd->id = id; 1536 1536 ngd->pdev->dev.parent = parent; 1537 1537 1538 - ret = driver_set_override(&ngd->pdev->dev, 1539 - &ngd->pdev->driver_override, 1540 - QCOM_SLIM_NGD_DRV_NAME, 1541 - strlen(QCOM_SLIM_NGD_DRV_NAME)); 1538 + ret = device_set_driver_override(&ngd->pdev->dev, 1539 + QCOM_SLIM_NGD_DRV_NAME); 1542 1540 if (ret) { 1543 1541 platform_device_put(ngd->pdev); 1544 1542 kfree(ngd);
+22 -2
drivers/soc/fsl/qbman/qman.c
··· 1827 1827 1828 1828 void qman_destroy_fq(struct qman_fq *fq) 1829 1829 { 1830 + int leaked; 1831 + 1830 1832 /* 1831 1833 * We don't need to lock the FQ as it is a pre-condition that the FQ be 1832 1834 * quiesced. Instead, run some checks. ··· 1836 1834 switch (fq->state) { 1837 1835 case qman_fq_state_parked: 1838 1836 case qman_fq_state_oos: 1839 - if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID)) 1840 - qman_release_fqid(fq->fqid); 1837 + /* 1838 + * There's a race condition here on releasing the fqid, 1839 + * setting the fq_table to NULL, and freeing the fqid. 1840 + * To prevent it, this order should be respected: 1841 + */ 1842 + if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID)) { 1843 + leaked = qman_shutdown_fq(fq->fqid); 1844 + if (leaked) 1845 + pr_debug("FQID %d leaked\n", fq->fqid); 1846 + } 1841 1847 1842 1848 DPAA_ASSERT(fq_table[fq->idx]); 1843 1849 fq_table[fq->idx] = NULL; 1850 + 1851 + if (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID) && !leaked) { 1852 + /* 1853 + * fq_table[fq->idx] should be set to null before 1854 + * freeing fq->fqid otherwise it could by allocated by 1855 + * qman_alloc_fqid() while still being !NULL 1856 + */ 1857 + smp_wmb(); 1858 + gen_pool_free(qm_fqalloc, fq->fqid | DPAA_GENALLOC_OFF, 1); 1859 + } 1844 1860 return; 1845 1861 default: 1846 1862 break;
+2 -2
drivers/soc/fsl/qe/qmc.c
··· 1790 1790 return -EINVAL; 1791 1791 qmc->dpram_offset = res->start - qe_muram_dma(qe_muram_addr(0)); 1792 1792 qmc->dpram = devm_ioremap_resource(qmc->dev, res); 1793 - if (IS_ERR(qmc->scc_pram)) 1794 - return PTR_ERR(qmc->scc_pram); 1793 + if (IS_ERR(qmc->dpram)) 1794 + return PTR_ERR(qmc->dpram); 1795 1795 1796 1796 return 0; 1797 1797 }
+9 -4
drivers/soc/microchip/mpfs-sys-controller.c
··· 142 142 143 143 sys_controller->flash = of_get_mtd_device_by_node(np); 144 144 of_node_put(np); 145 - if (IS_ERR(sys_controller->flash)) 146 - return dev_err_probe(dev, PTR_ERR(sys_controller->flash), "Failed to get flash\n"); 145 + if (IS_ERR(sys_controller->flash)) { 146 + ret = dev_err_probe(dev, PTR_ERR(sys_controller->flash), "Failed to get flash\n"); 147 + goto out_free; 148 + } 147 149 148 150 no_flash: 149 151 sys_controller->client.dev = dev; ··· 157 155 if (IS_ERR(sys_controller->chan)) { 158 156 ret = dev_err_probe(dev, PTR_ERR(sys_controller->chan), 159 157 "Failed to get mbox channel\n"); 160 - kfree(sys_controller); 161 - return ret; 158 + goto out_free; 162 159 } 163 160 164 161 init_completion(&sys_controller->c); ··· 175 174 dev_info(&pdev->dev, "Registered MPFS system controller\n"); 176 175 177 176 return 0; 177 + 178 + out_free: 179 + kfree(sys_controller); 180 + return ret; 178 181 } 179 182 180 183 static void mpfs_sys_controller_remove(struct platform_device *pdev)
+1
drivers/soc/rockchip/grf.c
··· 231 231 grf = syscon_node_to_regmap(np); 232 232 if (IS_ERR(grf)) { 233 233 pr_err("%s: could not get grf syscon\n", __func__); 234 + of_node_put(np); 234 235 return PTR_ERR(grf); 235 236 } 236 237
+7 -39
drivers/spi/spi-amlogic-spifc-a4.c
··· 1083 1083 return clk_set_rate(sfc->core_clk, SFC_BUS_DEFAULT_CLK); 1084 1084 } 1085 1085 1086 - static int aml_sfc_disable_clk(struct aml_sfc *sfc) 1087 - { 1088 - clk_disable_unprepare(sfc->core_clk); 1089 - clk_disable_unprepare(sfc->gate_clk); 1090 - 1091 - return 0; 1092 - } 1093 - 1094 1086 static int aml_sfc_probe(struct platform_device *pdev) 1095 1087 { 1096 1088 struct device_node *np = pdev->dev.of_node; ··· 1133 1141 1134 1142 /* Enable Amlogic flash controller spi mode */ 1135 1143 ret = regmap_write(sfc->regmap_base, SFC_SPI_CFG, SPI_MODE_EN); 1136 - if (ret) { 1137 - dev_err(dev, "failed to enable SPI mode\n"); 1138 - goto err_out; 1139 - } 1144 + if (ret) 1145 + return dev_err_probe(dev, ret, "failed to enable SPI mode\n"); 1140 1146 1141 1147 ret = dma_set_mask(sfc->dev, DMA_BIT_MASK(32)); 1142 - if (ret) { 1143 - dev_err(sfc->dev, "failed to set dma mask\n"); 1144 - goto err_out; 1145 - } 1148 + if (ret) 1149 + return dev_err_probe(sfc->dev, ret, "failed to set dma mask\n"); 1146 1150 1147 1151 sfc->ecc_eng.dev = &pdev->dev; 1148 1152 sfc->ecc_eng.integration = NAND_ECC_ENGINE_INTEGRATION_PIPELINED; ··· 1146 1158 sfc->ecc_eng.priv = sfc; 1147 1159 1148 1160 ret = nand_ecc_register_on_host_hw_engine(&sfc->ecc_eng); 1149 - if (ret) { 1150 - dev_err(&pdev->dev, "failed to register Aml host ecc engine.\n"); 1151 - goto err_out; 1152 - } 1161 + if (ret) 1162 + return dev_err_probe(&pdev->dev, ret, "failed to register Aml host ecc engine.\n"); 1153 1163 1154 1164 ret = of_property_read_u32(np, "amlogic,rx-adj", &val); 1155 1165 if (!ret) ··· 1163 1177 ctrl->min_speed_hz = SFC_MIN_FREQUENCY; 1164 1178 ctrl->num_chipselect = SFC_MAX_CS_NUM; 1165 1179 1166 - ret = devm_spi_register_controller(dev, ctrl); 1167 - if (ret) 1168 - goto err_out; 1169 - 1170 - return 0; 1171 - 1172 - err_out: 1173 - aml_sfc_disable_clk(sfc); 1174 - 1175 - return ret; 1176 - } 1177 - 1178 - static void aml_sfc_remove(struct platform_device *pdev) 1179 - { 1180 - struct spi_controller *ctlr = platform_get_drvdata(pdev); 1181 - struct aml_sfc *sfc = spi_controller_get_devdata(ctlr); 1182 - 1183 - aml_sfc_disable_clk(sfc); 1180 + return devm_spi_register_controller(dev, ctrl); 1184 1181 } 1185 1182 1186 1183 static const struct of_device_id aml_sfc_of_match[] = { ··· 1181 1212 .of_match_table = aml_sfc_of_match, 1182 1213 }, 1183 1214 .probe = aml_sfc_probe, 1184 - .remove = aml_sfc_remove, 1185 1215 }; 1186 1216 module_platform_driver(aml_sfc_driver); 1187 1217
+4 -8
drivers/spi/spi-amlogic-spisg.c
··· 729 729 }; 730 730 731 731 if (of_property_read_bool(dev->of_node, "spi-slave")) 732 - ctlr = spi_alloc_target(dev, sizeof(*spisg)); 732 + ctlr = devm_spi_alloc_target(dev, sizeof(*spisg)); 733 733 else 734 - ctlr = spi_alloc_host(dev, sizeof(*spisg)); 734 + ctlr = devm_spi_alloc_host(dev, sizeof(*spisg)); 735 735 if (!ctlr) 736 736 return -ENOMEM; 737 737 ··· 750 750 return dev_err_probe(dev, PTR_ERR(spisg->map), "regmap init failed\n"); 751 751 752 752 irq = platform_get_irq(pdev, 0); 753 - if (irq < 0) { 754 - ret = irq; 755 - goto out_controller; 756 - } 753 + if (irq < 0) 754 + return irq; 757 755 758 756 ret = device_reset_optional(dev); 759 757 if (ret) ··· 815 817 if (spisg->core) 816 818 clk_disable_unprepare(spisg->core); 817 819 clk_disable_unprepare(spisg->pclk); 818 - out_controller: 819 - spi_controller_put(ctlr); 820 820 821 821 return ret; 822 822 }
+11 -20
drivers/spi/spi-axiado.c
··· 765 765 platform_set_drvdata(pdev, ctlr); 766 766 767 767 xspi->regs = devm_platform_ioremap_resource(pdev, 0); 768 - if (IS_ERR(xspi->regs)) { 769 - ret = PTR_ERR(xspi->regs); 770 - goto remove_ctlr; 771 - } 768 + if (IS_ERR(xspi->regs)) 769 + return PTR_ERR(xspi->regs); 772 770 773 771 xspi->pclk = devm_clk_get(&pdev->dev, "pclk"); 774 - if (IS_ERR(xspi->pclk)) { 775 - dev_err(&pdev->dev, "pclk clock not found.\n"); 776 - ret = PTR_ERR(xspi->pclk); 777 - goto remove_ctlr; 778 - } 772 + if (IS_ERR(xspi->pclk)) 773 + return dev_err_probe(&pdev->dev, PTR_ERR(xspi->pclk), 774 + "pclk clock not found.\n"); 779 775 780 776 xspi->ref_clk = devm_clk_get(&pdev->dev, "ref"); 781 - if (IS_ERR(xspi->ref_clk)) { 782 - dev_err(&pdev->dev, "ref clock not found.\n"); 783 - ret = PTR_ERR(xspi->ref_clk); 784 - goto remove_ctlr; 785 - } 777 + if (IS_ERR(xspi->ref_clk)) 778 + return dev_err_probe(&pdev->dev, PTR_ERR(xspi->ref_clk), 779 + "ref clock not found.\n"); 786 780 787 781 ret = clk_prepare_enable(xspi->pclk); 788 - if (ret) { 789 - dev_err(&pdev->dev, "Unable to enable APB clock.\n"); 790 - goto remove_ctlr; 791 - } 782 + if (ret) 783 + return dev_err_probe(&pdev->dev, ret, "Unable to enable APB clock.\n"); 792 784 793 785 ret = clk_prepare_enable(xspi->ref_clk); 794 786 if (ret) { ··· 861 869 clk_disable_unprepare(xspi->ref_clk); 862 870 clk_dis_apb: 863 871 clk_disable_unprepare(xspi->pclk); 864 - remove_ctlr: 865 - spi_controller_put(ctlr); 872 + 866 873 return ret; 867 874 } 868 875
+7 -6
drivers/spi/spi-geni-qcom.c
··· 359 359 writel((spi_slv->mode & SPI_LOOP) ? LOOPBACK_ENABLE : 0, se->base + SE_SPI_LOOPBACK); 360 360 if (cs_changed) 361 361 writel(chipselect, se->base + SE_SPI_DEMUX_SEL); 362 - if (mode_changed & SE_SPI_CPHA) 362 + if (mode_changed & SPI_CPHA) 363 363 writel((spi_slv->mode & SPI_CPHA) ? CPHA : 0, se->base + SE_SPI_CPHA); 364 - if (mode_changed & SE_SPI_CPOL) 364 + if (mode_changed & SPI_CPOL) 365 365 writel((spi_slv->mode & SPI_CPOL) ? CPOL : 0, se->base + SE_SPI_CPOL); 366 366 if ((mode_changed & SPI_CS_HIGH) || (cs_changed && (spi_slv->mode & SPI_CS_HIGH))) 367 367 writel((spi_slv->mode & SPI_CS_HIGH) ? BIT(chipselect) : 0, se->base + SE_SPI_DEMUX_OUTPUT_INV); ··· 906 906 struct spi_controller *spi = data; 907 907 struct spi_geni_master *mas = spi_controller_get_devdata(spi); 908 908 struct geni_se *se = &mas->se; 909 - u32 m_irq; 909 + u32 m_irq, dma_tx_status, dma_rx_status; 910 910 911 911 m_irq = readl(se->base + SE_GENI_M_IRQ_STATUS); 912 - if (!m_irq) 912 + dma_tx_status = readl_relaxed(se->base + SE_DMA_TX_IRQ_STAT); 913 + dma_rx_status = readl_relaxed(se->base + SE_DMA_RX_IRQ_STAT); 914 + 915 + if (!m_irq && !dma_tx_status && !dma_rx_status) 913 916 return IRQ_NONE; 914 917 915 918 if (m_irq & (M_CMD_OVERRUN_EN | M_ILLEGAL_CMD_EN | M_CMD_FAILURE_EN | ··· 960 957 } 961 958 } else if (mas->cur_xfer_mode == GENI_SE_DMA) { 962 959 const struct spi_transfer *xfer = mas->cur_xfer; 963 - u32 dma_tx_status = readl_relaxed(se->base + SE_DMA_TX_IRQ_STAT); 964 - u32 dma_rx_status = readl_relaxed(se->base + SE_DMA_RX_IRQ_STAT); 965 960 966 961 if (dma_tx_status) 967 962 writel(dma_tx_status, se->base + SE_DMA_TX_IRQ_CLR);
+12 -13
drivers/spi/spi.c
··· 3049 3049 struct spi_controller *ctlr; 3050 3050 3051 3051 ctlr = container_of(dev, struct spi_controller, dev); 3052 + 3053 + free_percpu(ctlr->pcpu_statistics); 3052 3054 kfree(ctlr); 3053 3055 } 3054 3056 ··· 3193 3191 ctlr = kzalloc(size + ctlr_size, GFP_KERNEL); 3194 3192 if (!ctlr) 3195 3193 return NULL; 3194 + 3195 + ctlr->pcpu_statistics = spi_alloc_pcpu_stats(NULL); 3196 + if (!ctlr->pcpu_statistics) { 3197 + kfree(ctlr); 3198 + return NULL; 3199 + } 3196 3200 3197 3201 device_initialize(&ctlr->dev); 3198 3202 INIT_LIST_HEAD(&ctlr->queue); ··· 3488 3480 dev_info(dev, "controller is unqueued, this is deprecated\n"); 3489 3481 } else if (ctlr->transfer_one || ctlr->transfer_one_message) { 3490 3482 status = spi_controller_initialize_queue(ctlr); 3491 - if (status) { 3492 - device_del(&ctlr->dev); 3493 - goto free_bus_id; 3494 - } 3495 - } 3496 - /* Add statistics */ 3497 - ctlr->pcpu_statistics = spi_alloc_pcpu_stats(dev); 3498 - if (!ctlr->pcpu_statistics) { 3499 - dev_err(dev, "Error allocating per-cpu statistics\n"); 3500 - status = -ENOMEM; 3501 - goto destroy_queue; 3483 + if (status) 3484 + goto del_ctrl; 3502 3485 } 3503 3486 3504 3487 mutex_lock(&board_lock); ··· 3503 3504 acpi_register_spi_devices(ctlr); 3504 3505 return status; 3505 3506 3506 - destroy_queue: 3507 - spi_destroy_queue(ctlr); 3507 + del_ctrl: 3508 + device_del(&ctlr->dev); 3508 3509 free_bus_id: 3509 3510 mutex_lock(&board_lock); 3510 3511 idr_remove(&spi_controller_idr, ctlr->bus_num);
-27
drivers/tee/tee_shm.c
··· 23 23 struct page *page; 24 24 }; 25 25 26 - static void shm_put_kernel_pages(struct page **pages, size_t page_count) 27 - { 28 - size_t n; 29 - 30 - for (n = 0; n < page_count; n++) 31 - put_page(pages[n]); 32 - } 33 - 34 - static void shm_get_kernel_pages(struct page **pages, size_t page_count) 35 - { 36 - size_t n; 37 - 38 - for (n = 0; n < page_count; n++) 39 - get_page(pages[n]); 40 - } 41 - 42 26 static void release_registered_pages(struct tee_shm *shm) 43 27 { 44 28 if (shm->pages) { 45 29 if (shm->flags & TEE_SHM_USER_MAPPED) 46 30 unpin_user_pages(shm->pages, shm->num_pages); 47 - else 48 - shm_put_kernel_pages(shm->pages, shm->num_pages); 49 31 50 32 kfree(shm->pages); 51 33 } ··· 459 477 goto err_put_shm_pages; 460 478 } 461 479 462 - /* 463 - * iov_iter_extract_kvec_pages does not get reference on the pages, 464 - * get a reference on them. 465 - */ 466 - if (iov_iter_is_kvec(iter)) 467 - shm_get_kernel_pages(shm->pages, num_pages); 468 - 469 480 shm->offset = off; 470 481 shm->size = len; 471 482 shm->num_pages = num_pages; ··· 474 499 err_put_shm_pages: 475 500 if (!iov_iter_is_kvec(iter)) 476 501 unpin_user_pages(shm->pages, shm->num_pages); 477 - else 478 - shm_put_kernel_pages(shm->pages, shm->num_pages); 479 502 err_free_shm_pages: 480 503 kfree(shm->pages); 481 504 err_free_shm:
+25
drivers/tty/serial/8250/8250.h
··· 175 175 return value; 176 176 } 177 177 178 + void serial8250_clear_fifos(struct uart_8250_port *p); 178 179 void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p); 180 + void serial8250_fifo_wait_for_lsr_thre(struct uart_8250_port *up, unsigned int count); 179 181 180 182 void serial8250_rpm_get(struct uart_8250_port *p); 181 183 void serial8250_rpm_put(struct uart_8250_port *p); ··· 402 400 403 401 return dma && dma->tx_running; 404 402 } 403 + 404 + static inline void serial8250_tx_dma_pause(struct uart_8250_port *p) 405 + { 406 + struct uart_8250_dma *dma = p->dma; 407 + 408 + if (!dma->tx_running) 409 + return; 410 + 411 + dmaengine_pause(dma->txchan); 412 + } 413 + 414 + static inline void serial8250_tx_dma_resume(struct uart_8250_port *p) 415 + { 416 + struct uart_8250_dma *dma = p->dma; 417 + 418 + if (!dma->tx_running) 419 + return; 420 + 421 + dmaengine_resume(dma->txchan); 422 + } 405 423 #else 406 424 static inline int serial8250_tx_dma(struct uart_8250_port *p) 407 425 { ··· 443 421 { 444 422 return false; 445 423 } 424 + 425 + static inline void serial8250_tx_dma_pause(struct uart_8250_port *p) { } 426 + static inline void serial8250_tx_dma_resume(struct uart_8250_port *p) { } 446 427 #endif 447 428 448 429 static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+15
drivers/tty/serial/8250/8250_dma.c
··· 162 162 */ 163 163 dma->tx_size = 0; 164 164 165 + /* 166 + * We can't use `dmaengine_terminate_sync` because `uart_flush_buffer` is 167 + * holding the uart port spinlock. 168 + */ 165 169 dmaengine_terminate_async(dma->txchan); 170 + 171 + /* 172 + * The callback might or might not run. If it doesn't run, we need to ensure 173 + * that `tx_running` is cleared so that we can schedule new transactions. 174 + * If it does run, then the zombie callback will clear `tx_running` again 175 + * and perform a no-op since `tx_size` was cleared above. 176 + * 177 + * In either case, we ASSUME the DMA transaction will terminate before we 178 + * issue a new `serial8250_tx_dma`. 179 + */ 180 + dma->tx_running = 0; 166 181 } 167 182 168 183 int serial8250_rx_dma(struct uart_8250_port *p)
+239 -65
drivers/tty/serial/8250/8250_dw.c
··· 9 9 * LCR is written whilst busy. If it is, then a busy detect interrupt is 10 10 * raised, the LCR needs to be rewritten and the uart status register read. 11 11 */ 12 + #include <linux/bitfield.h> 13 + #include <linux/bits.h> 14 + #include <linux/cleanup.h> 12 15 #include <linux/clk.h> 13 16 #include <linux/delay.h> 14 17 #include <linux/device.h> 15 18 #include <linux/io.h> 19 + #include <linux/lockdep.h> 16 20 #include <linux/mod_devicetable.h> 17 21 #include <linux/module.h> 18 22 #include <linux/notifier.h> ··· 44 40 #define RZN1_UART_RDMACR 0x110 /* DMA Control Register Receive Mode */ 45 41 46 42 /* DesignWare specific register fields */ 43 + #define DW_UART_IIR_IID GENMASK(3, 0) 44 + 47 45 #define DW_UART_MCR_SIRE BIT(6) 46 + 47 + #define DW_UART_USR_BUSY BIT(0) 48 48 49 49 /* Renesas specific register fields */ 50 50 #define RZN1_UART_xDMACR_DMA_EN BIT(0) ··· 64 56 #define DW_UART_QUIRK_IS_DMA_FC BIT(3) 65 57 #define DW_UART_QUIRK_APMC0D08 BIT(4) 66 58 #define DW_UART_QUIRK_CPR_VALUE BIT(5) 59 + #define DW_UART_QUIRK_IER_KICK BIT(6) 60 + 61 + /* 62 + * Number of consecutive IIR_NO_INT interrupts required to trigger interrupt 63 + * storm prevention code. 64 + */ 65 + #define DW_UART_QUIRK_IER_KICK_THRES 4 67 66 68 67 struct dw8250_platform_data { 69 68 u8 usr_reg; ··· 92 77 93 78 unsigned int skip_autocfg:1; 94 79 unsigned int uart_16550_compatible:1; 80 + unsigned int in_idle:1; 81 + 82 + u8 no_int_count; 95 83 }; 96 84 97 85 static inline struct dw8250_data *to_dw8250_data(struct dw8250_port_data *data) ··· 125 107 return value; 126 108 } 127 109 128 - /* 129 - * This function is being called as part of the uart_port::serial_out() 130 - * routine. Hence, it must not call serial_port_out() or serial_out() 131 - * against the modified registers here, i.e. LCR. 
132 - */ 133 - static void dw8250_force_idle(struct uart_port *p) 110 + static void dw8250_idle_exit(struct uart_port *p) 134 111 { 112 + struct dw8250_data *d = to_dw8250_data(p->private_data); 135 113 struct uart_8250_port *up = up_to_u8250p(p); 136 - unsigned int lsr; 137 114 138 - /* 139 - * The following call currently performs serial_out() 140 - * against the FCR register. Because it differs to LCR 141 - * there will be no infinite loop, but if it ever gets 142 - * modified, we might need a new custom version of it 143 - * that avoids infinite recursion. 144 - */ 145 - serial8250_clear_and_reinit_fifos(up); 115 + if (d->uart_16550_compatible) 116 + return; 146 - 147 - /* 148 - * With PSLVERR_RESP_EN parameter set to 1, the device generates an 149 - * error response when an attempt to read an empty RBR with FIFO 150 - * enabled. 151 - */ 152 - if (up->fcr & UART_FCR_ENABLE_FIFO) { 153 - lsr = serial_port_in(p, UART_LSR); 154 - if (!(lsr & UART_LSR_DR)) 155 - return; 117 + 118 + if (up->capabilities & UART_CAP_FIFO) 119 + serial_port_out(p, UART_FCR, up->fcr); 120 + serial_port_out(p, UART_MCR, up->mcr); 121 + serial_port_out(p, UART_IER, up->ier); 122 + 123 + /* DMA Rx is restarted by IRQ handler as needed. */ 124 + if (up->dma) 125 + serial8250_tx_dma_resume(up); 126 + 127 + d->in_idle = 0; 128 + } 129 + 130 + /* 131 + * Ensure BUSY is not asserted. If DW UART is configured with 132 + * !uart_16550_compatible, the writes to LCR, DLL, and DLH fail while 133 + * BUSY is asserted. 134 + * 135 + * Context: port's lock must be held 136 + */ 137 + static int dw8250_idle_enter(struct uart_port *p) 138 + { 139 + struct dw8250_data *d = to_dw8250_data(p->private_data); 140 + unsigned int usr_reg = d->pdata ? d->pdata->usr_reg : DW_UART_USR; 141 + struct uart_8250_port *up = up_to_u8250p(p); 142 + int retries; 143 + u32 lsr; 144 + 145 + lockdep_assert_held_once(&p->lock); 146 + 147 + if (d->uart_16550_compatible) 148 + return 0; 149 + 150 + d->in_idle = 1; 151 + 152 + /* Prevent triggering interrupt from RBR filling */ 153 + serial_port_out(p, UART_IER, 0); 154 + 155 + if (up->dma) { 156 + serial8250_rx_dma_flush(up); 157 + if (serial8250_tx_dma_running(up)) 158 + serial8250_tx_dma_pause(up); 156 159 } 157 160 158 - serial_port_in(p, UART_RX); 161 + /* 162 + * Wait until Tx becomes empty + one extra frame time to ensure all bits 163 + * have been sent on the wire. 164 + * 165 + * FIXME: frame_time delay is too long with very low baudrates. 166 + */ 167 + serial8250_fifo_wait_for_lsr_thre(up, p->fifosize); 168 + ndelay(p->frame_time); 169 + 170 + serial_port_out(p, UART_MCR, up->mcr | UART_MCR_LOOP); 171 + 172 + retries = 4; /* Arbitrary limit, 2 was always enough in tests */ 173 + do { 174 + serial8250_clear_fifos(up); 175 + if (!(serial_port_in(p, usr_reg) & DW_UART_USR_BUSY)) 176 + break; 177 + /* FIXME: frame_time delay is too long with very low baudrates. */ 178 + ndelay(p->frame_time); 179 + } while (--retries); 180 + 181 + lsr = serial_lsr_in(up); 182 + if (lsr & UART_LSR_DR) { 183 + serial_port_in(p, UART_RX); 184 + up->lsr_saved_flags = 0; 185 + } 186 + 187 + /* Now guaranteed to have BUSY deasserted? Just sanity check */ 188 + if (serial_port_in(p, usr_reg) & DW_UART_USR_BUSY) { 189 + dw8250_idle_exit(p); 190 + return -EBUSY; 191 + } 192 + 193 + return 0; 194 + } 195 + 196 + static void dw8250_set_divisor(struct uart_port *p, unsigned int baud, 197 + unsigned int quot, unsigned int quot_frac) 198 + { 199 + struct uart_8250_port *up = up_to_u8250p(p); 200 + int ret; 201 + 202 + ret = dw8250_idle_enter(p); 203 + if (ret < 0) 204 + return; 205 + 206 + serial_port_out(p, UART_LCR, up->lcr | UART_LCR_DLAB); 207 + if (!(serial_port_in(p, UART_LCR) & UART_LCR_DLAB)) 208 + goto idle_failed; 209 + 210 + serial_dl_write(up, quot); 211 + serial_port_out(p, UART_LCR, up->lcr); 212 + 213 + idle_failed: 214 + dw8250_idle_exit(p); 159 215 } 160 216 161 217 /* 162 218 * This function is being called as part of the uart_port::serial_out() 163 - * routine. Hence, it must not call serial_port_out() or serial_out() 164 - * against the modified registers here, i.e. LCR. 219 + * routine. Hence, special care must be taken when serial_port_out() or 220 + * serial_out() against the modified registers here, i.e. LCR (d->in_idle is 221 + * used to break recursion loop). 
165 222 */ 166 223 static void dw8250_check_lcr(struct uart_port *p, unsigned int offset, u32 value) 167 224 { 168 225 struct dw8250_data *d = to_dw8250_data(p->private_data); 169 - void __iomem *addr = p->membase + (offset << p->regshift); 170 - int tries = 1000; 226 + u32 lcr; 227 + int ret; 171 228 172 229 if (offset != UART_LCR || d->uart_16550_compatible) 173 230 return; 174 231 232 + lcr = serial_port_in(p, UART_LCR); 233 + 175 234 /* Make sure LCR write wasn't ignored */ 176 - while (tries--) { 177 - u32 lcr = serial_port_in(p, offset); 235 + if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 236 + return; 178 237 179 - if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 180 - return; 238 + if (d->in_idle) 239 + goto write_err; 181 240 182 - dw8250_force_idle(p); 241 + ret = dw8250_idle_enter(p); 242 + if (ret < 0) 243 + goto write_err; 183 244 184 - #ifdef CONFIG_64BIT 185 - if (p->type == PORT_OCTEON) 186 - __raw_writeq(value & 0xff, addr); 187 - else 188 - #endif 189 - if (p->iotype == UPIO_MEM32) 190 - writel(value, addr); 191 - else if (p->iotype == UPIO_MEM32BE) 192 - iowrite32be(value, addr); 193 - else 194 - writeb(value, addr); 195 - } 245 + serial_port_out(p, UART_LCR, value); 246 + dw8250_idle_exit(p); 247 + return; 248 + 249 + write_err: 196 250 /* 197 251 * FIXME: this deadlocks if port->lock is already held 198 252 * dev_err(p->dev, "Couldn't set LCR to %d\n", value); 199 253 */ 254 + return; /* Silences "label at the end of compound statement" */ 255 + } 256 + 257 + /* 258 + * With BUSY, LCR writes can be very expensive (IRQ + complex retry logic). 259 + * If the write does not change the value of the LCR register, skip it entirely. 
260 + */ 261 + static bool dw8250_can_skip_reg_write(struct uart_port *p, unsigned int offset, u32 value) 262 + { 263 + struct dw8250_data *d = to_dw8250_data(p->private_data); 264 + u32 lcr; 265 + 266 + if (offset != UART_LCR || d->uart_16550_compatible) 267 + return false; 268 + 269 + lcr = serial_port_in(p, offset); 270 + return lcr == value; 200 271 } 201 272 202 273 /* Returns once the transmitter is empty or we run out of retries */ ··· 314 207 315 208 static void dw8250_serial_out(struct uart_port *p, unsigned int offset, u32 value) 316 209 { 210 + if (dw8250_can_skip_reg_write(p, offset, value)) 211 + return; 212 + 317 213 writeb(value, p->membase + (offset << p->regshift)); 318 214 dw8250_check_lcr(p, offset, value); 319 215 } 320 216 321 217 static void dw8250_serial_out38x(struct uart_port *p, unsigned int offset, u32 value) 322 218 { 219 + if (dw8250_can_skip_reg_write(p, offset, value)) 220 + return; 221 + 323 222 /* Allow the TX to drain before we reconfigure */ 324 223 if (offset == UART_LCR) 325 224 dw8250_tx_wait_empty(p); ··· 350 237 351 238 static void dw8250_serial_outq(struct uart_port *p, unsigned int offset, u32 value) 352 239 { 240 + if (dw8250_can_skip_reg_write(p, offset, value)) 241 + return; 242 + 353 243 value &= 0xff; 354 244 __raw_writeq(value, p->membase + (offset << p->regshift)); 355 245 /* Read back to ensure register write ordering. */ ··· 364 248 365 249 static void dw8250_serial_out32(struct uart_port *p, unsigned int offset, u32 value) 366 250 { 251 + if (dw8250_can_skip_reg_write(p, offset, value)) 252 + return; 253 + 367 254 writel(value, p->membase + (offset << p->regshift)); 368 255 dw8250_check_lcr(p, offset, value); 369 256 } ··· 380 261 381 262 static void dw8250_serial_out32be(struct uart_port *p, unsigned int offset, u32 value) 382 263 { 264 + if (dw8250_can_skip_reg_write(p, offset, value)) 265 + return; 266 + 383 267 iowrite32be(value, p->membase + (offset << p->regshift)); 384 268 dw8250_check_lcr(p, offset, value); 385 269 } ··· 394 272 return dw8250_modify_msr(p, offset, value); 395 273 } 396 274 275 + /* 276 + * INTC10EE UART can IRQ storm while reporting IIR_NO_INT. Inducing IIR value 277 + * change has been observed to break the storm. 278 + * 279 + * If Tx is empty (THRE asserted), we use here IER_THRI to cause IIR_NO_INT -> 280 + * IIR_THRI transition. 281 + */ 282 + static void dw8250_quirk_ier_kick(struct uart_port *p) 283 + { 284 + struct uart_8250_port *up = up_to_u8250p(p); 285 + u32 lsr; 286 + 287 + if (up->ier & UART_IER_THRI) 288 + return; 289 + 290 + lsr = serial_lsr_in(up); 291 + if (!(lsr & UART_LSR_THRE)) 292 + return; 293 + 294 + serial_port_out(p, UART_IER, up->ier | UART_IER_THRI); 295 + serial_port_in(p, UART_LCR); /* safe, no side-effects */ 296 + serial_port_out(p, UART_IER, up->ier); 297 + } 397 298 398 299 static int dw8250_handle_irq(struct uart_port *p) 399 300 { ··· 426 281 bool rx_timeout = (iir & 0x3f) == UART_IIR_RX_TIMEOUT; 427 282 unsigned int quirks = d->pdata->quirks; 428 283 unsigned int status; 429 - unsigned long flags; 284 + 285 + guard(uart_port_lock_irqsave)(p); 286 + 287 + switch (FIELD_GET(DW_UART_IIR_IID, iir)) { 288 + case UART_IIR_NO_INT: 289 + if (d->uart_16550_compatible || up->dma) 290 + return 0; 291 + 292 + if (quirks & DW_UART_QUIRK_IER_KICK && 293 + d->no_int_count == (DW_UART_QUIRK_IER_KICK_THRES - 1)) 294 + dw8250_quirk_ier_kick(p); 295 + d->no_int_count = (d->no_int_count + 1) % DW_UART_QUIRK_IER_KICK_THRES; 296 + 297 + return 0; 298 + 299 + case UART_IIR_BUSY: 300 + /* Clear the USR */ 301 + serial_port_in(p, d->pdata->usr_reg); 302 + 303 + d->no_int_count = 0; 304 + 305 + return 1; 306 + } 307 + 308 + d->no_int_count = 0; 430 309 431 310 /* 432 311 * There are ways to get Designware-based UARTs into a state where ··· 463 294 * so we limit the workaround only to non-DMA mode. 464 295 */ 465 296 if (!up->dma && rx_timeout) { 466 - uart_port_lock_irqsave(p, &flags); 467 297 status = serial_lsr_in(up); 468 298 469 299 if (!(status & (UART_LSR_DR | UART_LSR_BI))) 470 300 serial_port_in(p, UART_RX); 471 - 472 - uart_port_unlock_irqrestore(p, flags); 473 301 } 474 302 475 303 /* Manually stop the Rx DMA transfer when acting as flow controller */ 476 304 if (quirks & DW_UART_QUIRK_IS_DMA_FC && up->dma && up->dma->rx_running && rx_timeout) { 477 - uart_port_lock_irqsave(p, &flags); 478 305 status = serial_lsr_in(up); 479 - uart_port_unlock_irqrestore(p, flags); 480 306 481 307 if (status & (UART_LSR_DR | UART_LSR_BI)) { 482 308 dw8250_writel_ext(p, RZN1_UART_RDMACR, 0); ··· 479 315 } 480 316 } 481 317 482 - if (serial8250_handle_irq(p, iir)) 483 - return 1; 318 + serial8250_handle_irq_locked(p, iir); 484 319 485 - if ((iir & UART_IIR_BUSY) == UART_IIR_BUSY) { 486 - /* Clear the USR */ 487 - serial_port_in(p, d->pdata->usr_reg); 488 - 489 - return 1; 490 - } 491 - 492 - return 0; 320 + return 1; 493 321 } 494 322 495 323 static void dw8250_clk_work_cb(struct work_struct *work) ··· 683 527 reset_control_assert(data); 684 528 } 685 529 530 + static void dw8250_shutdown(struct uart_port *port) 531 + { 532 + struct dw8250_data *d = to_dw8250_data(port->private_data); 533 + 534 + serial8250_do_shutdown(port); 535 + d->no_int_count = 0; 536 + } 537 + 686 538 static int dw8250_probe(struct platform_device *pdev) 687 539 { 688 540 struct uart_8250_port uart = {}, *up = &uart; ··· 
709 545 p->type = PORT_8250; 710 546 p->flags = UPF_FIXED_PORT; 711 547 p->dev = dev; 548 + 712 549 p->set_ldisc = dw8250_set_ldisc; 713 550 p->set_termios = dw8250_set_termios; 551 + p->set_divisor = dw8250_set_divisor; 714 552 715 553 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 716 554 if (!data) ··· 820 654 dw8250_quirks(p, data); 821 655 822 656 /* If the Busy Functionality is not implemented, don't handle it */ 823 - if (data->uart_16550_compatible) 657 + if (data->uart_16550_compatible) { 824 658 p->handle_irq = NULL; 825 - else if (data->pdata) 659 + } else if (data->pdata) { 826 660 p->handle_irq = dw8250_handle_irq; 661 + p->shutdown = dw8250_shutdown; 662 + } 827 663 828 664 dw8250_setup_dma_filter(p, data); 829 665 ··· 957 789 .quirks = DW_UART_QUIRK_SKIP_SET_RATE, 958 790 }; 959 791 792 + static const struct dw8250_platform_data dw8250_intc10ee = { 793 + .usr_reg = DW_UART_USR, 794 + .quirks = DW_UART_QUIRK_IER_KICK, 795 + }; 796 + 960 797 static const struct of_device_id dw8250_of_match[] = { 961 798 { .compatible = "snps,dw-apb-uart", .data = &dw8250_dw_apb }, 962 799 { .compatible = "cavium,octeon-3860-uart", .data = &dw8250_octeon_3860_data }, ··· 991 818 { "INT33C5", (kernel_ulong_t)&dw8250_dw_apb }, 992 819 { "INT3434", (kernel_ulong_t)&dw8250_dw_apb }, 993 820 { "INT3435", (kernel_ulong_t)&dw8250_dw_apb }, 994 - { "INTC10EE", (kernel_ulong_t)&dw8250_dw_apb }, 821 + { "INTC10EE", (kernel_ulong_t)&dw8250_intc10ee }, 995 822 { }, 996 823 }; 997 824 MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match); ··· 1009 836 1010 837 module_platform_driver(dw8250_platform_driver); 1011 838 839 + MODULE_IMPORT_NS("SERIAL_8250"); 1012 840 MODULE_AUTHOR("Jamie Iles"); 1013 841 MODULE_LICENSE("GPL"); 1014 842 MODULE_DESCRIPTION("Synopsys DesignWare 8250 serial port driver");
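The 8250_dw.c changes above hinge on two register compares: dw8250_check_lcr() verifies a write "took" while ignoring the stick-parity bit, and the new dw8250_can_skip_reg_write() drops a write only on an exact match. The following is a hedged userspace model of just those two compares, not driver code; the UART_LCR_SPAR value (0x20) is taken from include/uapi/linux/serial_reg.h.

```c
#include <stdbool.h>
#include <stdint.h>

#define UART_LCR_SPAR 0x20 /* stick-parity bit, per include/uapi/linux/serial_reg.h */

/* Read-back check: the LCR write "took" if everything except the
 * stick-parity bit matches what the register now reads back. */
static bool lcr_write_took(uint8_t written, uint8_t readback)
{
	return (written & ~UART_LCR_SPAR) == (readback & ~UART_LCR_SPAR);
}

/* Redundant-write skip: only an exact match lets us drop the write. */
static bool can_skip_lcr_write(uint8_t current, uint8_t value)
{
	return current == value;
}
```

Note the asymmetry: a differing SPAR bit is tolerated when verifying that a write landed, but it still forces the write to be issued in the first place.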
+17
drivers/tty/serial/8250/8250_pci.c
··· 137 137 }; 138 138 139 139 #define PCI_DEVICE_ID_HPE_PCI_SERIAL 0x37e 140 + #define PCIE_VENDOR_ID_ASIX 0x125B 141 + #define PCIE_DEVICE_ID_AX99100 0x9100 140 142 141 143 static const struct pci_device_id pci_use_msi[] = { 142 144 { PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900, ··· 151 149 0xA000, 0x1000) }, 152 150 { PCI_DEVICE_SUB(PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL, 153 151 PCI_ANY_ID, PCI_ANY_ID) }, 152 + { PCI_DEVICE_SUB(PCIE_VENDOR_ID_ASIX, PCIE_DEVICE_ID_AX99100, 153 + 0xA000, 0x1000) }, 154 154 { } 155 155 }; 156 156 ··· 924 920 case PCI_DEVICE_ID_NETMOS_9912: 925 921 case PCI_DEVICE_ID_NETMOS_9922: 926 922 case PCI_DEVICE_ID_NETMOS_9900: 923 + case PCIE_DEVICE_ID_AX99100: 927 924 num_serial = pci_netmos_9900_numports(dev); 928 925 break; 929 926 ··· 2543 2538 */ 2544 2539 { 2545 2540 .vendor = PCI_VENDOR_ID_NETMOS, 2541 + .device = PCI_ANY_ID, 2542 + .subvendor = PCI_ANY_ID, 2543 + .subdevice = PCI_ANY_ID, 2544 + .init = pci_netmos_init, 2545 + .setup = pci_netmos_9900_setup, 2546 + }, 2547 + { 2548 + .vendor = PCIE_VENDOR_ID_ASIX, 2546 2549 .device = PCI_ANY_ID, 2547 2550 .subvendor = PCI_ANY_ID, 2548 2551 .subdevice = PCI_ANY_ID, ··· 6077 6064 { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900, 6078 6065 0xA000, 0x3002, 6079 6066 0, 0, pbn_NETMOS9900_2s_115200 }, 6067 + 6068 + { PCIE_VENDOR_ID_ASIX, PCIE_DEVICE_ID_AX99100, 6069 + 0xA000, 0x1000, 6070 + 0, 0, pbn_b0_1_115200 }, 6080 6071 6081 6072 /* 6082 6073 * Best Connectivity and Rosewill PCI Multi I/O cards
+45 -30
drivers/tty/serial/8250/8250_port.c
··· 18 18 #include <linux/irq.h> 19 19 #include <linux/console.h> 20 20 #include <linux/gpio/consumer.h> 21 + #include <linux/lockdep.h> 21 22 #include <linux/sysrq.h> 22 23 #include <linux/delay.h> 23 24 #include <linux/platform_device.h> ··· 489 488 /* 490 489 * FIFO support. 491 490 */ 492 - static void serial8250_clear_fifos(struct uart_8250_port *p) 491 + void serial8250_clear_fifos(struct uart_8250_port *p) 493 492 { 494 493 if (p->capabilities & UART_CAP_FIFO) { 495 494 serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO); ··· 498 497 serial_out(p, UART_FCR, 0); 499 498 } 500 499 } 500 + EXPORT_SYMBOL_NS_GPL(serial8250_clear_fifos, "SERIAL_8250"); 501 501 502 502 static enum hrtimer_restart serial8250_em485_handle_start_tx(struct hrtimer *t); 503 503 static enum hrtimer_restart serial8250_em485_handle_stop_tx(struct hrtimer *t); ··· 1784 1782 } 1785 1783 1786 1784 /* 1787 - * This handles the interrupt from one port. 1785 + * Context: port's lock must be held by the caller. 1788 1786 */ 1789 - int serial8250_handle_irq(struct uart_port *port, unsigned int iir) 1787 + void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir) 1790 1788 { 1791 1789 struct uart_8250_port *up = up_to_u8250p(port); 1792 1790 struct tty_port *tport = &port->state->port; 1793 1791 bool skip_rx = false; 1794 - unsigned long flags; 1795 1792 u16 status; 1796 1793 1797 - if (iir & UART_IIR_NO_INT) 1798 - return 0; 1799 - 1800 - uart_port_lock_irqsave(port, &flags); 1794 + lockdep_assert_held_once(&port->lock); 1801 1795 1802 1796 status = serial_lsr_in(up); 1803 1797 ··· 1826 1828 else if (!up->dma->tx_running) 1827 1829 __stop_tx(up); 1828 1830 } 1831 + } 1832 + EXPORT_SYMBOL_NS_GPL(serial8250_handle_irq_locked, "SERIAL_8250"); 1829 1833 1830 - uart_unlock_and_check_sysrq_irqrestore(port, flags); 1834 + /* 1835 + * This handles the interrupt from one port. 
1836 + */ 1837 + int serial8250_handle_irq(struct uart_port *port, unsigned int iir) 1838 + { 1839 + if (iir & UART_IIR_NO_INT) 1840 + return 0; 1841 + 1842 + guard(uart_port_lock_irqsave)(port); 1843 + serial8250_handle_irq_locked(port, iir); 1831 1844 1832 1845 return 1; 1833 1846 } ··· 2156 2147 if (up->port.flags & UPF_NO_THRE_TEST) 2157 2148 return; 2158 2149 2159 - if (port->irqflags & IRQF_SHARED) 2160 - disable_irq_nosync(port->irq); 2150 + disable_irq(port->irq); 2161 2151 2162 2152 /* 2163 2153 * Test for UARTs that do not reassert THRE when the transmitter is idle and the interrupt ··· 2178 2170 serial_port_out(port, UART_IER, 0); 2179 2171 } 2180 2172 2181 - if (port->irqflags & IRQF_SHARED) 2182 - enable_irq(port->irq); 2173 + enable_irq(port->irq); 2183 2174 2184 2175 /* 2185 2176 * If the interrupt is not reasserted, or we otherwise don't trust the iir, setup a timer to ··· 2357 2350 void serial8250_do_shutdown(struct uart_port *port) 2358 2351 { 2359 2352 struct uart_8250_port *up = up_to_u8250p(port); 2353 + u32 lcr; 2360 2354 2361 2355 serial8250_rpm_get(up); 2362 2356 /* ··· 2384 2376 port->mctrl &= ~TIOCM_OUT2; 2385 2377 2386 2378 serial8250_set_mctrl(port, port->mctrl); 2379 + 2380 + /* Disable break condition */ 2381 + lcr = serial_port_in(port, UART_LCR); 2382 + lcr &= ~UART_LCR_SBC; 2383 + serial_port_out(port, UART_LCR, lcr); 2387 2384 } 2388 2385 2389 - /* 2390 - * Disable break condition and FIFOs 2391 - */ 2392 - serial_port_out(port, UART_LCR, 2393 - serial_port_in(port, UART_LCR) & ~UART_LCR_SBC); 2394 2386 serial8250_clear_fifos(up); 2395 2387 2396 2388 rsa_disable(up); ··· 2400 2392 * the IRQ chain. 2401 2393 */ 2402 2394 serial_port_in(port, UART_RX); 2395 + /* 2396 + * LCR writes on DW UART can trigger late (unmaskable) IRQs. 2397 + * Handle them before releasing the handler. 
2398 + */ 2399 + synchronize_irq(port->irq); 2400 + 2403 2401 serial8250_rpm_put(up); 2404 2402 2405 2403 up->ops->release_irq(up); ··· 3199 3185 } 3200 3186 EXPORT_SYMBOL_GPL(serial8250_set_defaults); 3201 3187 3188 + void serial8250_fifo_wait_for_lsr_thre(struct uart_8250_port *up, unsigned int count) 3189 + { 3190 + unsigned int i; 3191 + 3192 + for (i = 0; i < count; i++) { 3193 + if (wait_for_lsr(up, UART_LSR_THRE)) 3194 + return; 3195 + } 3196 + } 3197 + EXPORT_SYMBOL_NS_GPL(serial8250_fifo_wait_for_lsr_thre, "SERIAL_8250"); 3198 + 3202 3199 #ifdef CONFIG_SERIAL_8250_CONSOLE 3203 3200 3204 3201 static void serial8250_console_putchar(struct uart_port *port, unsigned char ch) ··· 3251 3226 serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS); 3252 3227 } 3253 3228 3254 - static void fifo_wait_for_lsr(struct uart_8250_port *up, unsigned int count) 3255 - { 3256 - unsigned int i; 3257 - 3258 - for (i = 0; i < count; i++) { 3259 - if (wait_for_lsr(up, UART_LSR_THRE)) 3260 - return; 3261 - } 3262 - } 3263 - 3264 3229 /* 3265 3230 * Print a string to the serial port using the device FIFO 3266 3231 * ··· 3269 3254 3270 3255 while (s != end) { 3271 3256 /* Allow timeout for each byte of a possibly full FIFO */ 3272 - fifo_wait_for_lsr(up, fifosize); 3257 + serial8250_fifo_wait_for_lsr_thre(up, fifosize); 3273 3258 3274 3259 for (i = 0; i < fifosize && s != end; ++i) { 3275 3260 if (*s == '\n' && !cr_sent) { ··· 3287 3272 * Allow timeout for each byte written since the caller will only wait 3288 3273 * for UART_LSR_BOTH_EMPTY using the timeout of a single character 3289 3274 */ 3290 - fifo_wait_for_lsr(up, tx_count); 3275 + serial8250_fifo_wait_for_lsr_thre(up, tx_count); 3291 3276 } 3292 3277 3293 3278 /*
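The 8250_port.c refactor above splits the IRQ path into a lock-held core, serial8250_handle_irq_locked() with a lockdep_assert_held_once(), and a thin wrapper that takes the port lock via guard(uart_port_lock_irqsave). A minimal userspace sketch of that split, with a pthread mutex standing in for the port lock and a trylock probe standing in for the lockdep assertion (all names here are invented for illustration):

```c
#include <errno.h>
#include <pthread.h>

static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;
static int events_handled;

/* Core handler: caller must already hold port_lock. A trylock that
 * returns EBUSY stands in for lockdep_assert_held_once() here. */
static void handle_irq_locked(void)
{
	if (pthread_mutex_trylock(&port_lock) == 0) {
		/* Lock was NOT held by the caller: undo and refuse to run. */
		pthread_mutex_unlock(&port_lock);
		return;
	}
	events_handled++;
}

/* Wrapper: mirrors the guard() pattern -- acquire the lock, run the
 * core, and release on every return path. */
static int handle_irq(int iir_no_int)
{
	if (iir_no_int)
		return 0;
	pthread_mutex_lock(&port_lock);
	handle_irq_locked();
	pthread_mutex_unlock(&port_lock);
	return 1;
}
```

The design point of the real patch is the same: callers like dw8250_handle_irq() that already hold the lock call the `_locked` variant directly, so the lock is taken exactly once per interrupt.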
+4 -1
drivers/tty/serial/serial_core.c
··· 643 643 unsigned int ret; 644 644 645 645 port = uart_port_ref_lock(state, &flags); 646 - ret = kfifo_avail(&state->port.xmit_fifo); 646 + if (!state->port.xmit_buf) 647 + ret = 0; 648 + else 649 + ret = kfifo_avail(&state->port.xmit_fifo); 647 650 uart_port_unlock_deref(port, flags); 648 651 return ret; 649 652 }
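The serial_core.c hunk above guards kfifo_avail() against a port whose transmit buffer was never allocated (e.g. room queried while the port is closed). A minimal model of that guard; the struct and helper names are invented stand-ins for `state->port.xmit_buf` and the kfifo:

```c
#include <stddef.h>

/* Hypothetical stand-in for the uart xmit state: buf stays NULL until
 * the port is opened, mirroring state->port.xmit_buf in the patch. */
struct xmit_model {
	char *buf;
	size_t capacity;
	size_t used;
};

static char demo_buf[16]; /* backing storage for the example below */

/* Report zero room instead of consulting a fifo with no backing buffer. */
static size_t write_room(const struct xmit_model *s)
{
	if (!s->buf)
		return 0;
	return s->capacity - s->used;
}

/* Convenience wrapper for exercising the guard. */
static size_t room_of(char *buf, size_t capacity, size_t used)
{
	struct xmit_model s = { buf, capacity, used };
	return write_room(&s);
}
```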
+1
drivers/tty/serial/uartlite.c
··· 878 878 pm_runtime_use_autosuspend(&pdev->dev); 879 879 pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT); 880 880 pm_runtime_set_active(&pdev->dev); 881 + pm_runtime_get_noresume(&pdev->dev); 881 882 pm_runtime_enable(&pdev->dev); 882 883 883 884 ret = ulite_assign(&pdev->dev, id, res->start, irq, pdata);
+8
drivers/tty/vt/vt.c
··· 1339 1339 kfree(vc->vc_saved_screen); 1340 1340 vc->vc_saved_screen = NULL; 1341 1341 } 1342 + vc_uniscr_free(vc->vc_saved_uni_lines); 1343 + vc->vc_saved_uni_lines = NULL; 1342 1344 } 1343 1345 return vc; 1344 1346 } ··· 1886 1884 vc->vc_saved_screen = kmemdup((u16 *)vc->vc_origin, size, GFP_KERNEL); 1887 1885 if (vc->vc_saved_screen == NULL) 1888 1886 return; 1887 + vc->vc_saved_uni_lines = vc->vc_uni_lines; 1888 + vc->vc_uni_lines = NULL; 1889 1889 vc->vc_saved_rows = vc->vc_rows; 1890 1890 vc->vc_saved_cols = vc->vc_cols; 1891 1891 save_cur(vc); ··· 1909 1905 dest = ((u16 *)vc->vc_origin) + r * vc->vc_cols; 1910 1906 memcpy(dest, src, 2 * cols); 1911 1907 } 1908 + vc_uniscr_set(vc, vc->vc_saved_uni_lines); 1909 + vc->vc_saved_uni_lines = NULL; 1912 1910 restore_cur(vc); 1913 1911 /* Update the entire screen */ 1914 1912 if (con_should_update(vc)) ··· 2233 2227 if (vc->vc_saved_screen != NULL) { 2234 2228 kfree(vc->vc_saved_screen); 2235 2229 vc->vc_saved_screen = NULL; 2230 + vc_uniscr_free(vc->vc_saved_uni_lines); 2231 + vc->vc_saved_uni_lines = NULL; 2236 2232 vc->vc_saved_rows = 0; 2237 2233 vc->vc_saved_cols = 0; 2238 2234 }
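The vt.c hunks above pair the saved text screen with the unicode line buffer: saving moves the pointer into `vc_saved_uni_lines` and NULLs the live one, restoring hands it back, and both teardown paths free the stash so nothing is freed twice. A hedged userspace model of that ownership transfer (struct and function names invented):

```c
#include <stdlib.h>

struct console_model {
	int *uni_lines;       /* live unicode line buffer */
	int *saved_uni_lines; /* stash held across save/restore */
};

/* Save: transfer ownership so the live buffer cannot be freed twice. */
static void save_screen(struct console_model *c)
{
	c->saved_uni_lines = c->uni_lines;
	c->uni_lines = NULL;
}

/* Restore: drop whatever replaced the live buffer, take the stash back. */
static void restore_screen(struct console_model *c)
{
	free(c->uni_lines); /* free(NULL) is a no-op */
	c->uni_lines = c->saved_uni_lines;
	c->saved_uni_lines = NULL;
}

/* Round-trip check: pointers must end exactly where they started. */
static int round_trip_ok(void)
{
	static int marker;
	struct console_model c = { &marker, NULL };

	save_screen(&c);
	if (c.uni_lines != NULL || c.saved_uni_lines != &marker)
		return 0;
	restore_screen(&c);
	return c.uni_lines == &marker && c.saved_uni_lines == NULL;
}
```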
+6
fs/binfmt_elf_fdpic.c
··· 595 595 #ifdef ELF_HWCAP2 596 596 nitems++; 597 597 #endif 598 + #ifdef ELF_HWCAP3 599 + nitems++; 600 + #endif 601 + #ifdef ELF_HWCAP4 602 + nitems++; 603 + #endif 598 604 599 605 csp = sp; 600 606 sp -= nitems * 2 * sizeof(unsigned long);
+28
fs/btrfs/backref.c
··· 1393 1393 .indirect_missing_keys = PREFTREE_INIT 1394 1394 }; 1395 1395 1396 + if (unlikely(!root)) { 1397 + btrfs_err(ctx->fs_info, 1398 + "missing extent root for extent at bytenr %llu", 1399 + ctx->bytenr); 1400 + return -EUCLEAN; 1401 + } 1402 + 1396 1403 /* Roots ulist is not needed when using a sharedness check context. */ 1397 1404 if (sc) 1398 1405 ASSERT(ctx->roots == NULL); ··· 2211 2204 struct btrfs_extent_item *ei; 2212 2205 struct btrfs_key key; 2213 2206 2207 + if (unlikely(!extent_root)) { 2208 + btrfs_err(fs_info, 2209 + "missing extent root for extent at bytenr %llu", 2210 + logical); 2211 + return -EUCLEAN; 2212 + } 2213 + 2214 2214 key.objectid = logical; 2215 2215 if (btrfs_fs_incompat(fs_info, SKINNY_METADATA)) 2216 2216 key.type = BTRFS_METADATA_ITEM_KEY; ··· 2865 2851 struct btrfs_key key; 2866 2852 int ret; 2867 2853 2854 + if (unlikely(!extent_root)) { 2855 + btrfs_err(fs_info, 2856 + "missing extent root for extent at bytenr %llu", 2857 + bytenr); 2858 + return -EUCLEAN; 2859 + } 2860 + 2868 2861 key.objectid = bytenr; 2869 2862 key.type = BTRFS_METADATA_ITEM_KEY; 2870 2863 key.offset = (u64)-1; ··· 3008 2987 3009 2988 /* We're at keyed items, there is no inline item, go to the next one */ 3010 2989 extent_root = btrfs_extent_root(iter->fs_info, iter->bytenr); 2990 + if (unlikely(!extent_root)) { 2991 + btrfs_err(iter->fs_info, 2992 + "missing extent root for extent at bytenr %llu", 2993 + iter->bytenr); 2994 + return -EUCLEAN; 2995 + } 2996 + 3011 2997 ret = btrfs_next_item(extent_root, iter->path); 3012 2998 if (ret) 3013 2999 return ret;
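Nearly every btrfs hunk in this series follows one pattern: root lookups (btrfs_extent_root(), btrfs_csum_root(), btrfs_block_group_root()) can now return NULL, and each caller converts that into -EUCLEAN with a btrfs_err() message rather than dereferencing. A sketch of the shape as plain C; EUCLEAN is 117 on Linux, and the lookup/struct names here are illustrative stand-ins:

```c
#include <errno.h>
#include <stdio.h>

#ifndef EUCLEAN
#define EUCLEAN 117 /* "Structure needs cleaning", Linux errno value */
#endif

struct root_model { unsigned long long start; };

static struct root_model demo_root = { 0 }; /* a present root, for the example */

/* Lookup that can fail: stands in for btrfs_extent_root() and friends,
 * where NULL means the expected per-range root is missing. */
static struct root_model *lookup_root(struct root_model *maybe)
{
	return maybe;
}

/* Caller-side guard: log and return -EUCLEAN instead of crashing. */
static int use_root(struct root_model *maybe, unsigned long long bytenr)
{
	struct root_model *root = lookup_root(maybe);

	if (!root) {
		fprintf(stderr, "missing extent root for extent at bytenr %llu\n",
			bytenr);
		return -EUCLEAN;
	}
	return 0; /* proceed with the search */
}
```

-EUCLEAN is the conventional btrfs return for on-disk or in-memory structure corruption, which is why the same code appears in every one of these guards.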
+36
fs/btrfs/block-group.c
··· 739 739 740 740 last = max_t(u64, block_group->start, BTRFS_SUPER_INFO_OFFSET); 741 741 extent_root = btrfs_extent_root(fs_info, last); 742 + if (unlikely(!extent_root)) { 743 + btrfs_err(fs_info, 744 + "missing extent root for block group at offset %llu", 745 + block_group->start); 746 + return -EUCLEAN; 747 + } 742 748 743 749 #ifdef CONFIG_BTRFS_DEBUG 744 750 /* ··· 1067 1061 int ret; 1068 1062 1069 1063 root = btrfs_block_group_root(fs_info); 1064 + if (unlikely(!root)) { 1065 + btrfs_err(fs_info, "missing block group root"); 1066 + return -EUCLEAN; 1067 + } 1068 + 1070 1069 key.objectid = block_group->start; 1071 1070 key.type = BTRFS_BLOCK_GROUP_ITEM_KEY; 1072 1071 key.offset = block_group->length; ··· 1359 1348 struct btrfs_root *root = btrfs_block_group_root(fs_info); 1360 1349 struct btrfs_chunk_map *map; 1361 1350 unsigned int num_items; 1351 + 1352 + if (unlikely(!root)) { 1353 + btrfs_err(fs_info, "missing block group root"); 1354 + return ERR_PTR(-EUCLEAN); 1355 + } 1362 1356 1363 1357 map = btrfs_find_chunk_map(fs_info, chunk_offset, 1); 1364 1358 ASSERT(map != NULL); ··· 2156 2140 int ret; 2157 2141 struct btrfs_key found_key; 2158 2142 2143 + if (unlikely(!root)) { 2144 + btrfs_err(fs_info, "missing block group root"); 2145 + return -EUCLEAN; 2146 + } 2147 + 2159 2148 btrfs_for_each_slot(root, key, &found_key, path, ret) { 2160 2149 if (found_key.objectid >= key->objectid && 2161 2150 found_key.type == BTRFS_BLOCK_GROUP_ITEM_KEY) { ··· 2734 2713 size_t size; 2735 2714 int ret; 2736 2715 2716 + if (unlikely(!root)) { 2717 + btrfs_err(fs_info, "missing block group root"); 2718 + return -EUCLEAN; 2719 + } 2720 + 2737 2721 spin_lock(&block_group->lock); 2738 2722 btrfs_set_stack_block_group_v2_used(&bgi, block_group->used); 2739 2723 btrfs_set_stack_block_group_v2_chunk_objectid(&bgi, block_group->global_root_id); ··· 3074 3048 int ret; 3075 3049 bool dirty_bg_running; 3076 3050 3051 + if (unlikely(!root)) { 3052 + btrfs_err(fs_info, "missing block 
group root"); 3053 + return -EUCLEAN; 3054 + } 3055 + 3077 3056 /* 3078 3057 * This can only happen when we are doing read-only scrub on read-only 3079 3058 * mount. ··· 3222 3191 u32 old_last_identity_remap_count; 3223 3192 u64 used, remap_bytes; 3224 3193 u32 identity_remap_count; 3194 + 3195 + if (unlikely(!root)) { 3196 + btrfs_err(fs_info, "missing block group root"); 3197 + return -EUCLEAN; 3198 + } 3225 3199 3226 3200 /* 3227 3201 * Block group items update can be triggered out of commit transaction
+8 -3
fs/btrfs/compression.c
··· 320 320 321 321 ASSERT(IS_ALIGNED(ordered->file_offset, fs_info->sectorsize)); 322 322 ASSERT(IS_ALIGNED(ordered->num_bytes, fs_info->sectorsize)); 323 - ASSERT(cb->writeback); 323 + /* 324 + * This flag determines if we should clear the writeback flag from the 325 + * page cache. But this function is only utilized by encoded writes, it 326 + * never goes through the page cache. 327 + */ 328 + ASSERT(!cb->writeback); 324 329 325 330 cb->start = ordered->file_offset; 326 331 cb->len = ordered->num_bytes; 332 + ASSERT(cb->bbio.bio.bi_iter.bi_size == ordered->disk_num_bytes); 327 333 cb->compressed_len = ordered->disk_num_bytes; 328 334 cb->bbio.bio.bi_iter.bi_sector = ordered->disk_bytenr >> SECTOR_SHIFT; 329 335 cb->bbio.ordered = ordered; ··· 351 345 cb = alloc_compressed_bio(inode, start, REQ_OP_WRITE, end_bbio_compressed_write); 352 346 cb->start = start; 353 347 cb->len = len; 354 - cb->writeback = true; 355 - 348 + cb->writeback = false; 356 349 return cb; 357 350 } 358 351
+17 -3
fs/btrfs/disk-io.c
··· 1591 1591 * this will bump the backup pointer by one when it is 1592 1592 * done 1593 1593 */ 1594 - static void backup_super_roots(struct btrfs_fs_info *info) 1594 + static int backup_super_roots(struct btrfs_fs_info *info) 1595 1595 { 1596 1596 const int next_backup = info->backup_root_index; 1597 1597 struct btrfs_root_backup *root_backup; ··· 1622 1622 if (!btrfs_fs_incompat(info, EXTENT_TREE_V2)) { 1623 1623 struct btrfs_root *extent_root = btrfs_extent_root(info, 0); 1624 1624 struct btrfs_root *csum_root = btrfs_csum_root(info, 0); 1625 + 1626 + if (unlikely(!extent_root)) { 1627 + btrfs_err(info, "missing extent root for extent at bytenr 0"); 1628 + return -EUCLEAN; 1629 + } 1630 + if (unlikely(!csum_root)) { 1631 + btrfs_err(info, "missing csum root for extent at bytenr 0"); 1632 + return -EUCLEAN; 1633 + } 1625 1634 1626 1635 btrfs_set_backup_extent_root(root_backup, 1627 1636 extent_root->node->start); ··· 1679 1670 memcpy(&info->super_copy->super_roots, 1680 1671 &info->super_for_commit->super_roots, 1681 1672 sizeof(*root_backup) * BTRFS_NUM_BACKUP_ROOTS); 1673 + 1674 + return 0; 1682 1675 } 1683 1676 1684 1677 /* ··· 4062 4051 * not from fsync where the tree roots in fs_info have not 4063 4052 * been consistent on disk. 4064 4053 */ 4065 - if (max_mirrors == 0) 4066 - backup_super_roots(fs_info); 4054 + if (max_mirrors == 0) { 4055 + ret = backup_super_roots(fs_info); 4056 + if (ret < 0) 4057 + return ret; 4058 + } 4067 4059 4068 4060 sb = fs_info->super_for_commit; 4069 4061 dev_item = &sb->dev_item;
+93 -5
fs/btrfs/extent-tree.c
··· 75 75 struct btrfs_key key; 76 76 BTRFS_PATH_AUTO_FREE(path); 77 77 78 + if (unlikely(!root)) { 79 + btrfs_err(fs_info, 80 + "missing extent root for extent at bytenr %llu", start); 81 + return -EUCLEAN; 82 + } 83 + 78 84 path = btrfs_alloc_path(); 79 85 if (!path) 80 86 return -ENOMEM; ··· 137 131 key.offset = offset; 138 132 139 133 extent_root = btrfs_extent_root(fs_info, bytenr); 134 + if (unlikely(!extent_root)) { 135 + btrfs_err(fs_info, 136 + "missing extent root for extent at bytenr %llu", bytenr); 137 + return -EUCLEAN; 138 + } 139 + 140 140 ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0); 141 141 if (ret < 0) 142 142 return ret; ··· 448 436 int recow; 449 437 int ret; 450 438 439 + if (unlikely(!root)) { 440 + btrfs_err(trans->fs_info, 441 + "missing extent root for extent at bytenr %llu", bytenr); 442 + return -EUCLEAN; 443 + } 444 + 451 445 key.objectid = bytenr; 452 446 if (parent) { 453 447 key.type = BTRFS_SHARED_DATA_REF_KEY; ··· 527 509 u32 size; 528 510 u32 num_refs; 529 511 int ret; 512 + 513 + if (unlikely(!root)) { 514 + btrfs_err(trans->fs_info, 515 + "missing extent root for extent at bytenr %llu", bytenr); 516 + return -EUCLEAN; 517 + } 530 518 531 519 key.objectid = bytenr; 532 520 if (node->parent) { ··· 692 668 struct btrfs_key key; 693 669 int ret; 694 670 671 + if (unlikely(!root)) { 672 + btrfs_err(trans->fs_info, 673 + "missing extent root for extent at bytenr %llu", bytenr); 674 + return -EUCLEAN; 675 + } 676 + 695 677 key.objectid = bytenr; 696 678 if (parent) { 697 679 key.type = BTRFS_SHARED_BLOCK_REF_KEY; ··· 721 691 struct btrfs_root *root = btrfs_extent_root(trans->fs_info, bytenr); 722 692 struct btrfs_key key; 723 693 int ret; 694 + 695 + if (unlikely(!root)) { 696 + btrfs_err(trans->fs_info, 697 + "missing extent root for extent at bytenr %llu", bytenr); 698 + return -EUCLEAN; 699 + } 724 700 725 701 key.objectid = bytenr; 726 702 if (node->parent) { ··· 817 781 int ret; 818 782 bool skinny_metadata = 
btrfs_fs_incompat(fs_info, SKINNY_METADATA); 819 783 int needed; 784 + 785 + if (unlikely(!root)) { 786 + btrfs_err(fs_info, 787 + "missing extent root for extent at bytenr %llu", bytenr); 788 + return -EUCLEAN; 789 + } 820 790 821 791 key.objectid = bytenr; 822 792 key.type = BTRFS_EXTENT_ITEM_KEY; ··· 1722 1680 } 1723 1681 1724 1682 root = btrfs_extent_root(fs_info, key.objectid); 1683 + if (unlikely(!root)) { 1684 + btrfs_err(fs_info, 1685 + "missing extent root for extent at bytenr %llu", 1686 + key.objectid); 1687 + return -EUCLEAN; 1688 + } 1725 1689 again: 1726 1690 ret = btrfs_search_slot(trans, root, &key, path, 0, 1); 1727 1691 if (ret < 0) { ··· 1974 1926 struct btrfs_root *csum_root; 1975 1927 1976 1928 csum_root = btrfs_csum_root(fs_info, head->bytenr); 1977 - ret = btrfs_del_csums(trans, csum_root, head->bytenr, 1978 - head->num_bytes); 1929 + if (unlikely(!csum_root)) { 1930 + btrfs_err(fs_info, 1931 + "missing csum root for extent at bytenr %llu", 1932 + head->bytenr); 1933 + ret = -EUCLEAN; 1934 + } else { 1935 + ret = btrfs_del_csums(trans, csum_root, head->bytenr, 1936 + head->num_bytes); 1937 + } 1979 1938 } 1980 1939 } 1981 1940 ··· 2433 2378 u32 expected_size; 2434 2379 int type; 2435 2380 int ret; 2381 + 2382 + if (unlikely(!extent_root)) { 2383 + btrfs_err(fs_info, 2384 + "missing extent root for extent at bytenr %llu", bytenr); 2385 + return -EUCLEAN; 2386 + } 2436 2387 2437 2388 key.objectid = bytenr; 2438 2389 key.type = BTRFS_EXTENT_ITEM_KEY; ··· 3154 3093 struct btrfs_root *csum_root; 3155 3094 3156 3095 csum_root = btrfs_csum_root(trans->fs_info, bytenr); 3096 + if (unlikely(!csum_root)) { 3097 + ret = -EUCLEAN; 3098 + btrfs_abort_transaction(trans, ret); 3099 + btrfs_err(trans->fs_info, 3100 + "missing csum root for extent at bytenr %llu", 3101 + bytenr); 3102 + return ret; 3103 + } 3104 + 3157 3105 ret = btrfs_del_csums(trans, csum_root, bytenr, num_bytes); 3158 3106 if (unlikely(ret)) { 3159 3107 btrfs_abort_transaction(trans, ret); 
··· 3292 3222 u64 delayed_ref_root = href->owning_root; 3293 3223 3294 3224 extent_root = btrfs_extent_root(info, bytenr); 3295 - ASSERT(extent_root); 3225 + if (unlikely(!extent_root)) { 3226 + btrfs_err(info, 3227 + "missing extent root for extent at bytenr %llu", bytenr); 3228 + return -EUCLEAN; 3229 + } 3296 3230 3297 3231 path = btrfs_alloc_path(); 3298 3232 if (!path) ··· 5013 4939 size += btrfs_extent_inline_ref_size(BTRFS_EXTENT_OWNER_REF_KEY); 5014 4940 size += btrfs_extent_inline_ref_size(type); 5015 4941 4942 + extent_root = btrfs_extent_root(fs_info, ins->objectid); 4943 + if (unlikely(!extent_root)) { 4944 + btrfs_err(fs_info, 4945 + "missing extent root for extent at bytenr %llu", 4946 + ins->objectid); 4947 + return -EUCLEAN; 4948 + } 4949 + 5016 4950 path = btrfs_alloc_path(); 5017 4951 if (!path) 5018 4952 return -ENOMEM; 5019 4953 5020 - extent_root = btrfs_extent_root(fs_info, ins->objectid); 5021 4954 ret = btrfs_insert_empty_item(trans, extent_root, path, ins, size); 5022 4955 if (ret) { 5023 4956 btrfs_free_path(path); ··· 5100 5019 size += sizeof(*block_info); 5101 5020 } 5102 5021 5022 + extent_root = btrfs_extent_root(fs_info, extent_key.objectid); 5023 + if (unlikely(!extent_root)) { 5024 + btrfs_err(fs_info, 5025 + "missing extent root for extent at bytenr %llu", 5026 + extent_key.objectid); 5027 + return -EUCLEAN; 5028 + } 5029 + 5103 5030 path = btrfs_alloc_path(); 5104 5031 if (!path) 5105 5032 return -ENOMEM; 5106 5033 5107 - extent_root = btrfs_extent_root(fs_info, extent_key.objectid); 5108 5034 ret = btrfs_insert_empty_item(trans, extent_root, path, &extent_key, 5109 5035 size); 5110 5036 if (ret) {
+7
fs/btrfs/file-item.c
··· 308 308 /* Current item doesn't contain the desired range, search again */ 309 309 btrfs_release_path(path); 310 310 csum_root = btrfs_csum_root(fs_info, disk_bytenr); 311 + if (unlikely(!csum_root)) { 312 + btrfs_err(fs_info, 313 + "missing csum root for extent at bytenr %llu", 314 + disk_bytenr); 315 + return -EUCLEAN; 316 + } 317 + 311 318 item = btrfs_lookup_csum(NULL, csum_root, path, disk_bytenr, 0); 312 319 if (IS_ERR(item)) { 313 320 ret = PTR_ERR(item);
+8 -1
fs/btrfs/free-space-tree.c
··· 1073 1073 if (ret) 1074 1074 return ret; 1075 1075 1076 + extent_root = btrfs_extent_root(trans->fs_info, block_group->start); 1077 + if (unlikely(!extent_root)) { 1078 + btrfs_err(trans->fs_info, 1079 + "missing extent root for block group at offset %llu", 1080 + block_group->start); 1081 + return -EUCLEAN; 1082 + } 1083 + 1076 1084 mutex_lock(&block_group->free_space_lock); 1077 1085 1078 1086 /* ··· 1094 1086 key.type = BTRFS_EXTENT_ITEM_KEY; 1095 1087 key.offset = 0; 1096 1088 1097 - extent_root = btrfs_extent_root(trans->fs_info, key.objectid); 1098 1089 ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0); 1099 1090 if (ret < 0) 1100 1091 goto out_locked;
+20 -5
fs/btrfs/inode.c
··· 2012 2012 */ 2013 2013 2014 2014 csum_root = btrfs_csum_root(root->fs_info, io_start); 2015 + if (unlikely(!csum_root)) { 2016 + btrfs_err(root->fs_info, 2017 + "missing csum root for extent at bytenr %llu", io_start); 2018 + ret = -EUCLEAN; 2019 + goto out; 2020 + } 2021 + 2015 2022 ret = btrfs_lookup_csums_list(csum_root, io_start, 2016 2023 io_start + args->file_extent.num_bytes - 1, 2017 2024 NULL, nowait); ··· 2756 2749 int ret; 2757 2750 2758 2751 list_for_each_entry(sum, list, list) { 2759 - trans->adding_csums = true; 2760 - if (!csum_root) 2752 + if (!csum_root) { 2761 2753 csum_root = btrfs_csum_root(trans->fs_info, 2762 2754 sum->logical); 2755 + if (unlikely(!csum_root)) { 2756 + btrfs_err(trans->fs_info, 2757 + "missing csum root for extent at bytenr %llu", 2758 + sum->logical); 2759 + return -EUCLEAN; 2760 + } 2761 + } 2762 + trans->adding_csums = true; 2763 2763 ret = btrfs_csum_file_blocks(trans, csum_root, sum); 2764 2764 trans->adding_csums = false; 2765 2765 if (ret) ··· 9888 9874 int compression; 9889 9875 size_t orig_count; 9890 9876 const u32 min_folio_size = btrfs_min_folio_size(fs_info); 9877 + const u32 blocksize = fs_info->sectorsize; 9891 9878 u64 start, end; 9892 9879 u64 num_bytes, ram_bytes, disk_num_bytes; 9893 9880 struct btrfs_key ins; ··· 9999 9984 ret = -EFAULT; 10000 9985 goto out_cb; 10001 9986 } 10002 - if (bytes < min_folio_size) 10003 - folio_zero_range(folio, bytes, min_folio_size - bytes); 10004 - ret = bio_add_folio(&cb->bbio.bio, folio, folio_size(folio), 0); 9987 + if (!IS_ALIGNED(bytes, blocksize)) 9988 + folio_zero_range(folio, bytes, round_up(bytes, blocksize) - bytes); 9989 + ret = bio_add_folio(&cb->bbio.bio, folio, round_up(bytes, blocksize), 0); 10005 9990 if (unlikely(!ret)) { 10006 9991 folio_put(folio); 10007 9992 ret = -EINVAL;
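The encoded-write hunk above changes the tail padding from the minimum folio size to the filesystem block size: the final partial chunk is zero-filled up to round_up(bytes, blocksize) so the total bio length matches disk_num_bytes. The alignment arithmetic itself, for a power-of-two block size as the kernel's round_up() macro assumes:

```c
#include <stdint.h>

/* round_up(bytes, blocksize) for a power-of-two blocksize, matching
 * the kernel's round_up() macro semantics. */
static uint32_t pad_to_blocksize(uint32_t bytes, uint32_t blocksize)
{
	return (bytes + blocksize - 1) & ~(blocksize - 1);
}
```

With a 4 KiB block size and a 64 KiB minimum folio size, a 5000-byte tail is now padded to 8192 bytes rather than 65536, keeping `cb->bbio.bio.bi_iter.bi_size == ordered->disk_num_bytes` as the new assertion in compression.c expects.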
+9 -3
fs/btrfs/ioctl.c
··· 3617 3617 } 3618 3618 } 3619 3619 3620 - trans = btrfs_join_transaction(root); 3620 + /* 2 BTRFS_QGROUP_RELATION_KEY items. */ 3621 + trans = btrfs_start_transaction(root, 2); 3621 3622 if (IS_ERR(trans)) { 3622 3623 ret = PTR_ERR(trans); 3623 3624 goto out; ··· 3690 3689 goto out; 3691 3690 } 3692 3691 3693 - trans = btrfs_join_transaction(root); 3692 + /* 3693 + * 1 BTRFS_QGROUP_INFO_KEY item. 3694 + * 1 BTRFS_QGROUP_LIMIT_KEY item. 3695 + */ 3696 + trans = btrfs_start_transaction(root, 2); 3694 3697 if (IS_ERR(trans)) { 3695 3698 ret = PTR_ERR(trans); 3696 3699 goto out; ··· 3743 3738 goto drop_write; 3744 3739 } 3745 3740 3746 - trans = btrfs_join_transaction(root); 3741 + /* 1 BTRFS_QGROUP_LIMIT_KEY item. */ 3742 + trans = btrfs_start_transaction(root, 1); 3747 3743 if (IS_ERR(trans)) { 3748 3744 ret = PTR_ERR(trans); 3749 3745 goto out;
+2 -2
fs/btrfs/lzo.c
··· 429 429 int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) 430 430 { 431 431 struct workspace *workspace = list_entry(ws, struct workspace, list); 432 - const struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info; 432 + struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info; 433 433 const u32 sectorsize = fs_info->sectorsize; 434 434 struct folio_iter fi; 435 435 char *kaddr; ··· 447 447 /* There must be a compressed folio and matches the sectorsize. */ 448 448 if (unlikely(!fi.folio)) 449 449 return -EINVAL; 450 - ASSERT(folio_size(fi.folio) == sectorsize); 450 + ASSERT(folio_size(fi.folio) == btrfs_min_folio_size(fs_info)); 451 451 kaddr = kmap_local_folio(fi.folio, 0); 452 452 len_in = read_compress_length(kaddr); 453 453 kunmap_local(kaddr);
+8
fs/btrfs/qgroup.c
··· 3739 3739 mutex_lock(&fs_info->qgroup_rescan_lock); 3740 3740 extent_root = btrfs_extent_root(fs_info, 3741 3741 fs_info->qgroup_rescan_progress.objectid); 3742 + if (unlikely(!extent_root)) { 3743 + btrfs_err(fs_info, 3744 + "missing extent root for extent at bytenr %llu", 3745 + fs_info->qgroup_rescan_progress.objectid); 3746 + mutex_unlock(&fs_info->qgroup_rescan_lock); 3747 + return -EUCLEAN; 3748 + } 3749 + 3742 3750 ret = btrfs_search_slot_for_read(extent_root, 3743 3751 &fs_info->qgroup_rescan_progress, 3744 3752 path, 1, 0);
+10 -2
fs/btrfs/raid56.c
··· 2297 2297 static void fill_data_csums(struct btrfs_raid_bio *rbio) 2298 2298 { 2299 2299 struct btrfs_fs_info *fs_info = rbio->bioc->fs_info; 2300 - struct btrfs_root *csum_root = btrfs_csum_root(fs_info, 2301 - rbio->bioc->full_stripe_logical); 2300 + struct btrfs_root *csum_root; 2302 2301 const u64 start = rbio->bioc->full_stripe_logical; 2303 2302 const u32 len = (rbio->nr_data * rbio->stripe_nsectors) << 2304 2303 fs_info->sectorsize_bits; ··· 2327 2328 GFP_NOFS); 2328 2329 if (!rbio->csum_buf || !rbio->csum_bitmap) { 2329 2330 ret = -ENOMEM; 2331 + goto error; 2332 + } 2333 + 2334 + csum_root = btrfs_csum_root(fs_info, rbio->bioc->full_stripe_logical); 2335 + if (unlikely(!csum_root)) { 2336 + btrfs_err(fs_info, 2337 + "missing csum root for extent at bytenr %llu", 2338 + rbio->bioc->full_stripe_logical); 2339 + ret = -EUCLEAN; 2330 2340 goto error; 2331 2341 } 2332 2342
+32 -7
fs/btrfs/relocation.c
··· 4185 4185 dest_addr = ins.objectid; 4186 4186 dest_length = ins.offset; 4187 4187 4188 + dest_bg = btrfs_lookup_block_group(fs_info, dest_addr); 4189 + 4188 4190 if (!is_data && !IS_ALIGNED(dest_length, fs_info->nodesize)) { 4189 4191 u64 new_length = ALIGN_DOWN(dest_length, fs_info->nodesize); 4190 4192 ··· 4297 4295 if (unlikely(ret)) 4298 4296 goto end; 4299 4297 4300 - dest_bg = btrfs_lookup_block_group(fs_info, dest_addr); 4301 - 4302 4298 adjust_block_group_remap_bytes(trans, dest_bg, dest_length); 4303 4299 4304 4300 mutex_lock(&dest_bg->free_space_lock); 4305 4301 bg_needs_free_space = test_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, 4306 4302 &dest_bg->runtime_flags); 4307 4303 mutex_unlock(&dest_bg->free_space_lock); 4308 - btrfs_put_block_group(dest_bg); 4309 4304 4310 4305 if (bg_needs_free_space) { 4311 4306 ret = btrfs_add_block_group_free_space(trans, dest_bg); ··· 4332 4333 btrfs_end_transaction(trans); 4333 4334 } 4334 4335 } else { 4335 - dest_bg = btrfs_lookup_block_group(fs_info, dest_addr); 4336 4336 btrfs_free_reserved_bytes(dest_bg, dest_length, 0); 4337 - btrfs_put_block_group(dest_bg); 4338 4337 4339 4338 ret = btrfs_commit_transaction(trans); 4340 4339 } 4340 + 4341 + btrfs_put_block_group(dest_bg); 4341 4342 4342 4343 return ret; 4343 4344 } ··· 4953 4954 struct btrfs_space_info *sinfo = src_bg->space_info; 4954 4955 4955 4956 extent_root = btrfs_extent_root(fs_info, src_bg->start); 4957 + if (unlikely(!extent_root)) { 4958 + btrfs_err(fs_info, 4959 + "missing extent root for block group at offset %llu", 4960 + src_bg->start); 4961 + return -EUCLEAN; 4962 + } 4956 4963 4957 4964 trans = btrfs_start_transaction(extent_root, 0); 4958 4965 if (IS_ERR(trans)) ··· 5311 5306 int ret; 5312 5307 bool bg_is_ro = false; 5313 5308 5309 + if (unlikely(!extent_root)) { 5310 + btrfs_err(fs_info, 5311 + "missing extent root for block group at offset %llu", 5312 + group_start); 5313 + return -EUCLEAN; 5314 + } 5315 + 5314 5316 /* 5315 5317 * This only 
gets set if we had a half-deleted snapshot on mount. We 5316 5318 * cannot allow relocation to start while we're still trying to clean up ··· 5548 5536 goto out; 5549 5537 } 5550 5538 5539 + rc->extent_root = btrfs_extent_root(fs_info, 0); 5540 + if (unlikely(!rc->extent_root)) { 5541 + btrfs_err(fs_info, "missing extent root for extent at bytenr 0"); 5542 + ret = -EUCLEAN; 5543 + goto out; 5544 + } 5545 + 5551 5546 ret = reloc_chunk_start(fs_info); 5552 5547 if (ret < 0) 5553 5548 goto out_end; 5554 - 5555 - rc->extent_root = btrfs_extent_root(fs_info, 0); 5556 5549 5557 5550 set_reloc_control(rc); 5558 5551 ··· 5652 5635 struct btrfs_root *csum_root = btrfs_csum_root(fs_info, disk_bytenr); 5653 5636 LIST_HEAD(list); 5654 5637 int ret; 5638 + 5639 + if (unlikely(!csum_root)) { 5640 + btrfs_mark_ordered_extent_error(ordered); 5641 + btrfs_err(fs_info, 5642 + "missing csum root for extent at bytenr %llu", 5643 + disk_bytenr); 5644 + return -EUCLEAN; 5645 + } 5655 5646 5656 5647 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr, 5657 5648 disk_bytenr + ordered->num_bytes - 1,
+18 -1
fs/btrfs/tree-checker.c
··· 1284 1284 } 1285 1285 if (unlikely(btrfs_root_drop_level(&ri) >= BTRFS_MAX_LEVEL)) { 1286 1286 generic_err(leaf, slot, 1287 - "invalid root level, have %u expect [0, %u]", 1287 + "invalid root drop_level, have %u expect [0, %u]", 1288 1288 btrfs_root_drop_level(&ri), BTRFS_MAX_LEVEL - 1); 1289 + return -EUCLEAN; 1290 + } 1291 + /* 1292 + * If drop_progress.objectid is non-zero, a btrfs_drop_snapshot() was 1293 + * interrupted and the resume point was recorded in drop_progress and 1294 + * drop_level. In that case drop_level must be >= 1: level 0 is the 1295 + * leaf level and drop_snapshot never saves a checkpoint there (it 1296 + * only records checkpoints at internal node levels in DROP_REFERENCE 1297 + * stage). A zero drop_level combined with a non-zero drop_progress 1298 + * objectid indicates on-disk corruption and would cause a BUG_ON in 1299 + * merge_reloc_root() and btrfs_drop_snapshot() at mount time. 1300 + */ 1301 + if (unlikely(btrfs_disk_key_objectid(&ri.drop_progress) != 0 && 1302 + btrfs_root_drop_level(&ri) == 0)) { 1303 + generic_err(leaf, slot, 1304 + "invalid root drop_level 0 with non-zero drop_progress objectid %llu", 1305 + btrfs_disk_key_objectid(&ri.drop_progress)); 1289 1306 return -EUCLEAN; 1290 1307 } 1291 1308
+27
fs/btrfs/tree-log.c
··· 984 984 985 985 sums = list_first_entry(&ordered_sums, struct btrfs_ordered_sum, list); 986 986 csum_root = btrfs_csum_root(fs_info, sums->logical); 987 + if (unlikely(!csum_root)) { 988 + btrfs_err(fs_info, 989 + "missing csum root for extent at bytenr %llu", 990 + sums->logical); 991 + ret = -EUCLEAN; 992 + } 993 + 987 994 if (!ret) { 988 995 ret = btrfs_del_csums(trans, csum_root, sums->logical, 989 996 sums->len); ··· 4897 4890 } 4898 4891 4899 4892 csum_root = btrfs_csum_root(trans->fs_info, disk_bytenr); 4893 + if (unlikely(!csum_root)) { 4894 + btrfs_err(trans->fs_info, 4895 + "missing csum root for extent at bytenr %llu", 4896 + disk_bytenr); 4897 + return -EUCLEAN; 4898 + } 4899 + 4900 4900 disk_bytenr += extent_offset; 4901 4901 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr, 4902 4902 disk_bytenr + extent_num_bytes - 1, ··· 5100 5086 /* block start is already adjusted for the file extent offset. */ 5101 5087 block_start = btrfs_extent_map_block_start(em); 5102 5088 csum_root = btrfs_csum_root(trans->fs_info, block_start); 5089 + if (unlikely(!csum_root)) { 5090 + btrfs_err(trans->fs_info, 5091 + "missing csum root for extent at bytenr %llu", 5092 + block_start); 5093 + return -EUCLEAN; 5094 + } 5095 + 5103 5096 ret = btrfs_lookup_csums_list(csum_root, block_start + csum_offset, 5104 5097 block_start + csum_offset + csum_len - 1, 5105 5098 &ordered_sums, false); ··· 6216 6195 struct btrfs_root *root, 6217 6196 struct btrfs_log_ctx *ctx) 6218 6197 { 6198 + const bool orig_log_new_dentries = ctx->log_new_dentries; 6219 6199 int ret = 0; 6220 6200 6221 6201 /* ··· 6278 6256 * dir index key range logged for the directory. So we 6279 6257 * must make sure the deletion is recorded. 
6280 6258 */ 6259 + ctx->log_new_dentries = false; 6281 6260 ret = btrfs_log_inode(trans, inode, LOG_INODE_ALL, ctx); 6261 + if (!ret && ctx->log_new_dentries) 6262 + ret = log_new_dir_dentries(trans, inode, ctx); 6263 + 6282 6264 btrfs_add_delayed_iput(inode); 6283 6265 if (ret) 6284 6266 break; ··· 6317 6291 break; 6318 6292 } 6319 6293 6294 + ctx->log_new_dentries = orig_log_new_dentries; 6320 6295 ctx->logging_conflict_inodes = false; 6321 6296 if (ret) 6322 6297 free_conflicting_inodes(ctx);
+18 -9
fs/btrfs/volumes.c
··· 3587 3587 3588 3588 /* step one, relocate all the extents inside this chunk */ 3589 3589 btrfs_scrub_pause(fs_info); 3590 - ret = btrfs_relocate_block_group(fs_info, chunk_offset, true); 3590 + ret = btrfs_relocate_block_group(fs_info, chunk_offset, verbose); 3591 3591 btrfs_scrub_continue(fs_info); 3592 3592 if (ret) { 3593 3593 /* ··· 4277 4277 end: 4278 4278 while (!list_empty(chunks)) { 4279 4279 bool is_unused; 4280 + struct btrfs_block_group *bg; 4280 4281 4281 4282 rci = list_first_entry(chunks, struct remap_chunk_info, list); 4282 4283 4283 - spin_lock(&rci->bg->lock); 4284 - is_unused = !btrfs_is_block_group_used(rci->bg); 4285 - spin_unlock(&rci->bg->lock); 4284 + bg = rci->bg; 4285 + if (bg) { 4286 + /* 4287 + * This is a bit racy and the 'used' status can change 4288 + * but this is not a problem as later functions will 4289 + * verify it again. 4290 + */ 4291 + spin_lock(&bg->lock); 4292 + is_unused = !btrfs_is_block_group_used(bg); 4293 + spin_unlock(&bg->lock); 4286 4294 4287 - if (is_unused) 4288 - btrfs_mark_bg_unused(rci->bg); 4295 + if (is_unused) 4296 + btrfs_mark_bg_unused(bg); 4289 4297 4290 - if (rci->made_ro) 4291 - btrfs_dec_block_group_ro(rci->bg); 4298 + if (rci->made_ro) 4299 + btrfs_dec_block_group_ro(bg); 4292 4300 4293 - btrfs_put_block_group(rci->bg); 4301 + btrfs_put_block_group(bg); 4302 + } 4294 4303 4295 4304 list_del(&rci->list); 4296 4305 kfree(rci);
+11 -2
fs/btrfs/zoned.c
··· 337 337 if (!btrfs_fs_incompat(fs_info, ZONED)) 338 338 return 0; 339 339 340 - mutex_lock(&fs_devices->device_list_mutex); 340 + /* 341 + * No need to take the device_list mutex here, we're still in the mount 342 + * path and devices cannot be added to or removed from the list yet. 343 + */ 341 344 list_for_each_entry(device, &fs_devices->devices, dev_list) { 342 345 /* We can skip reading of zone info for missing devices */ 343 346 if (!device->bdev) ··· 350 347 if (ret) 351 348 break; 352 349 } 353 - mutex_unlock(&fs_devices->device_list_mutex); 354 350 355 351 return ret; 356 352 } ··· 1261 1259 key.offset = 0; 1262 1260 1263 1261 root = btrfs_extent_root(fs_info, key.objectid); 1262 + if (unlikely(!root)) { 1263 + btrfs_err(fs_info, 1264 + "missing extent root for extent at bytenr %llu", 1265 + key.objectid); 1266 + return -EUCLEAN; 1267 + } 1268 + 1264 1269 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 1265 1270 /* We should not find the exact match */ 1266 1271 if (unlikely(!ret))
+1 -1
fs/btrfs/zstd.c
··· 600 600 bio_first_folio(&fi, &cb->bbio.bio, 0); 601 601 if (unlikely(!fi.folio)) 602 602 return -EINVAL; 603 - ASSERT(folio_size(fi.folio) == blocksize); 603 + ASSERT(folio_size(fi.folio) == min_folio_size); 604 604 605 605 stream = zstd_init_dstream( 606 606 ZSTD_BTRFS_MAX_INPUT, workspace->mem, workspace->size);
+54 -9
fs/nfsd/export.c
··· 36 36 * second map contains a reference to the entry in the first map. 37 37 */ 38 38 39 + static struct workqueue_struct *nfsd_export_wq; 40 + 39 41 #define EXPKEY_HASHBITS 8 40 42 #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS) 41 43 #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1) 42 44 43 - static void expkey_put(struct kref *ref) 45 + static void expkey_release(struct work_struct *work) 44 46 { 45 - struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); 47 + struct svc_expkey *key = container_of(to_rcu_work(work), 48 + struct svc_expkey, ek_rwork); 46 49 47 50 if (test_bit(CACHE_VALID, &key->h.flags) && 48 51 !test_bit(CACHE_NEGATIVE, &key->h.flags)) 49 52 path_put(&key->ek_path); 50 53 auth_domain_put(key->ek_client); 51 - kfree_rcu(key, ek_rcu); 54 + kfree(key); 55 + } 56 + 57 + static void expkey_put(struct kref *ref) 58 + { 59 + struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref); 60 + 61 + INIT_RCU_WORK(&key->ek_rwork, expkey_release); 62 + queue_rcu_work(nfsd_export_wq, &key->ek_rwork); 52 63 } 53 64 54 65 static int expkey_upcall(struct cache_detail *cd, struct cache_head *h) ··· 364 353 EXP_STATS_COUNTERS_NUM); 365 354 } 366 355 367 - static void svc_export_release(struct rcu_head *rcu_head) 356 + static void svc_export_release(struct work_struct *work) 368 357 { 369 - struct svc_export *exp = container_of(rcu_head, struct svc_export, 370 - ex_rcu); 358 + struct svc_export *exp = container_of(to_rcu_work(work), 359 + struct svc_export, ex_rwork); 371 360 361 + path_put(&exp->ex_path); 362 + auth_domain_put(exp->ex_client); 372 363 nfsd4_fslocs_free(&exp->ex_fslocs); 373 364 export_stats_destroy(exp->ex_stats); 374 365 kfree(exp->ex_stats); ··· 382 369 { 383 370 struct svc_export *exp = container_of(ref, struct svc_export, h.ref); 384 371 385 - path_put(&exp->ex_path); 386 - auth_domain_put(exp->ex_client); 387 - call_rcu(&exp->ex_rcu, svc_export_release); 372 + INIT_RCU_WORK(&exp->ex_rwork, svc_export_release); 373 + 
queue_rcu_work(nfsd_export_wq, &exp->ex_rwork); 388 374 } 389 375 390 376 static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h) ··· 1491 1479 .show = e_show, 1492 1480 }; 1493 1481 1482 + /** 1483 + * nfsd_export_wq_init - allocate the export release workqueue 1484 + * 1485 + * Called once at module load. The workqueue runs deferred svc_export and 1486 + * svc_expkey release work scheduled by queue_rcu_work() in the cache put 1487 + * callbacks. 1488 + * 1489 + * Return values: 1490 + * %0: workqueue allocated 1491 + * %-ENOMEM: allocation failed 1492 + */ 1493 + int nfsd_export_wq_init(void) 1494 + { 1495 + nfsd_export_wq = alloc_workqueue("nfsd_export", WQ_UNBOUND, 0); 1496 + if (!nfsd_export_wq) 1497 + return -ENOMEM; 1498 + return 0; 1499 + } 1500 + 1501 + /** 1502 + * nfsd_export_wq_shutdown - drain and free the export release workqueue 1503 + * 1504 + * Called once at module unload. Per-namespace teardown in 1505 + * nfsd_export_shutdown() has already drained all deferred work. 1506 + */ 1507 + void nfsd_export_wq_shutdown(void) 1508 + { 1509 + destroy_workqueue(nfsd_export_wq); 1510 + } 1511 + 1494 1512 /* 1495 1513 * Initialize the exports module. 1496 1514 */ ··· 1582 1540 1583 1541 cache_unregister_net(nn->svc_expkey_cache, net); 1584 1542 cache_unregister_net(nn->svc_export_cache, net); 1543 + /* Drain deferred export and expkey release work. */ 1544 + rcu_barrier(); 1545 + flush_workqueue(nfsd_export_wq); 1585 1546 cache_destroy_net(nn->svc_expkey_cache, net); 1586 1547 cache_destroy_net(nn->svc_export_cache, net); 1587 1548 svcauth_unix_purge(net);
+5 -2
fs/nfsd/export.h
··· 7 7 8 8 #include <linux/sunrpc/cache.h> 9 9 #include <linux/percpu_counter.h> 10 + #include <linux/workqueue.h> 10 11 #include <uapi/linux/nfsd/export.h> 11 12 #include <linux/nfs4.h> 12 13 ··· 76 75 u32 ex_layout_types; 77 76 struct nfsd4_deviceid_map *ex_devid_map; 78 77 struct cache_detail *cd; 79 - struct rcu_head ex_rcu; 78 + struct rcu_work ex_rwork; 80 79 unsigned long ex_xprtsec_modes; 81 80 struct export_stats *ex_stats; 82 81 }; ··· 93 92 u32 ek_fsid[6]; 94 93 95 94 struct path ek_path; 96 - struct rcu_head ek_rcu; 95 + struct rcu_work ek_rwork; 97 96 }; 98 97 99 98 #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC)) ··· 111 110 /* 112 111 * Function declarations 113 112 */ 113 + int nfsd_export_wq_init(void); 114 + void nfsd_export_wq_shutdown(void); 114 115 int nfsd_export_init(struct net *); 115 116 void nfsd_export_shutdown(struct net *); 116 117 void nfsd_export_flush(struct net *);
+7 -2
fs/nfsd/nfs4xdr.c
··· 6281 6281 int len = xdr->buf->len - (op_status_offset + XDR_UNIT); 6282 6282 6283 6283 so->so_replay.rp_status = op->status; 6284 - so->so_replay.rp_buflen = len; 6285 - read_bytes_from_xdr_buf(xdr->buf, op_status_offset + XDR_UNIT, 6284 + if (len <= NFSD4_REPLAY_ISIZE) { 6285 + so->so_replay.rp_buflen = len; 6286 + read_bytes_from_xdr_buf(xdr->buf, 6287 + op_status_offset + XDR_UNIT, 6286 6288 so->so_replay.rp_buf, len); 6289 + } else { 6290 + so->so_replay.rp_buflen = 0; 6291 + } 6287 6292 } 6288 6293 status: 6289 6294 op->status = nfsd4_map_status(op->status,
+19 -3
fs/nfsd/nfsctl.c
··· 149 149 150 150 seq = file->private_data; 151 151 seq->private = nn->svc_export_cache; 152 + get_net(net); 152 153 return 0; 154 + } 155 + 156 + static int exports_release(struct inode *inode, struct file *file) 157 + { 158 + struct seq_file *seq = file->private_data; 159 + struct cache_detail *cd = seq->private; 160 + 161 + put_net(cd->net); 162 + return seq_release(inode, file); 153 163 } 154 164 155 165 static int exports_nfsd_open(struct inode *inode, struct file *file) ··· 171 161 .open = exports_nfsd_open, 172 162 .read = seq_read, 173 163 .llseek = seq_lseek, 174 - .release = seq_release, 164 + .release = exports_release, 175 165 }; 176 166 177 167 static int export_features_show(struct seq_file *m, void *v) ··· 1386 1376 .proc_open = exports_proc_open, 1387 1377 .proc_read = seq_read, 1388 1378 .proc_lseek = seq_lseek, 1389 - .proc_release = seq_release, 1379 + .proc_release = exports_release, 1390 1380 }; 1391 1381 1392 1382 static int create_proc_exports_entry(void) ··· 2269 2259 if (retval) 2270 2260 goto out_free_pnfs; 2271 2261 nfsd_lockd_init(); /* lockd->nfsd callbacks */ 2262 + retval = nfsd_export_wq_init(); 2263 + if (retval) 2264 + goto out_free_lockd; 2272 2265 retval = register_pernet_subsys(&nfsd_net_ops); 2273 2266 if (retval < 0) 2274 - goto out_free_lockd; 2267 + goto out_free_export_wq; 2275 2268 retval = register_cld_notifier(); 2276 2269 if (retval) 2277 2270 goto out_free_subsys; ··· 2303 2290 unregister_cld_notifier(); 2304 2291 out_free_subsys: 2305 2292 unregister_pernet_subsys(&nfsd_net_ops); 2293 + out_free_export_wq: 2294 + nfsd_export_wq_shutdown(); 2306 2295 out_free_lockd: 2307 2296 nfsd_lockd_shutdown(); 2308 2297 nfsd_drc_slab_free(); ··· 2325 2310 nfsd4_destroy_laundry_wq(); 2326 2311 unregister_cld_notifier(); 2327 2312 unregister_pernet_subsys(&nfsd_net_ops); 2313 + nfsd_export_wq_shutdown(); 2328 2314 nfsd_drc_slab_free(); 2329 2315 nfsd_lockd_shutdown(); 2330 2316 nfsd4_free_slabs();
+12 -5
fs/nfsd/state.h
··· 541 541 struct xdr_netobj cr_princhash; 542 542 }; 543 543 544 - /* A reasonable value for REPLAY_ISIZE was estimated as follows: 545 - * The OPEN response, typically the largest, requires 546 - * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) + 8(verifier) + 547 - * 4(deleg. type) + 8(deleg. stateid) + 4(deleg. recall flag) + 548 - * 20(deleg. space limit) + ~32(deleg. ace) = 112 bytes 544 + /* 545 + * REPLAY_ISIZE is sized for an OPEN response with delegation: 546 + * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) + 547 + * 8(verifier) + 4(deleg. type) + 8(deleg. stateid) + 548 + * 4(deleg. recall flag) + 20(deleg. space limit) + 549 + * ~32(deleg. ace) = 112 bytes 550 + * 551 + * Some responses can exceed this. A LOCK denial includes the conflicting 552 + * lock owner, which can be up to 1024 bytes (NFS4_OPAQUE_LIMIT). Responses 553 + * larger than REPLAY_ISIZE are not cached in rp_ibuf; only rp_status is 554 + * saved. Enlarging this constant increases the size of every 555 + * nfs4_stateowner. 549 556 */ 550 557 551 558 #define NFSD4_REPLAY_ISIZE 112
+6
fs/smb/client/cifsglob.h
··· 2386 2386 return opts; 2387 2387 } 2388 2388 2389 + /* 2390 + * The number of blocks is not related to (i_size / i_blksize), but instead 2391 + * 512 byte (2**9) size is required for calculating num blocks. 2392 + */ 2393 + #define CIFS_INO_BLOCKS(size) DIV_ROUND_UP_ULL((u64)(size), 512) 2394 + 2389 2395 #endif /* _CIFS_GLOB_H */
+4
fs/smb/client/connect.c
··· 1955 1955 case Kerberos: 1956 1956 if (!uid_eq(ctx->cred_uid, ses->cred_uid)) 1957 1957 return 0; 1958 + if (strncmp(ses->user_name ?: "", 1959 + ctx->username ?: "", 1960 + CIFS_MAX_USERNAME_LEN)) 1961 + return 0; 1958 1962 break; 1959 1963 case NTLMv2: 1960 1964 case RawNTLMSSP:
-1
fs/smb/client/file.c
··· 993 993 if (!rc) { 994 994 netfs_resize_file(&cinode->netfs, 0, true); 995 995 cifs_setsize(inode, 0); 996 - inode->i_blocks = 0; 997 996 } 998 997 } 999 998 if (cfile)
+6 -15
fs/smb/client/inode.c
··· 219 219 */ 220 220 if (is_size_safe_to_change(cifs_i, fattr->cf_eof, from_readdir)) { 221 221 i_size_write(inode, fattr->cf_eof); 222 - 223 - /* 224 - * i_blocks is not related to (i_size / i_blksize), 225 - * but instead 512 byte (2**9) size is required for 226 - * calculating num blocks. 227 - */ 228 - inode->i_blocks = (512 - 1 + fattr->cf_bytes) >> 9; 222 + inode->i_blocks = CIFS_INO_BLOCKS(fattr->cf_bytes); 229 223 } 230 224 231 225 if (S_ISLNK(fattr->cf_mode) && fattr->cf_symlink_target) { ··· 3009 3015 { 3010 3016 spin_lock(&inode->i_lock); 3011 3017 i_size_write(inode, offset); 3018 + /* 3019 + * Until we can query the server for actual allocation size, 3020 + * this is best estimate we have for blocks allocated for a file. 3021 + */ 3022 + inode->i_blocks = CIFS_INO_BLOCKS(offset); 3012 3023 spin_unlock(&inode->i_lock); 3013 3024 inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode)); 3014 3025 truncate_pagecache(inode, offset); ··· 3086 3087 if (rc == 0) { 3087 3088 netfs_resize_file(&cifsInode->netfs, size, true); 3088 3089 cifs_setsize(inode, size); 3089 - /* 3090 - * i_blocks is not related to (i_size / i_blksize), but instead 3091 - * 512 byte (2**9) size is required for calculating num blocks. 3092 - * Until we can query the server for actual allocation size, 3093 - * this is best estimate we have for blocks allocated for a file 3094 - * Number of blocks must be rounded up so size 1 is not 0 blocks 3095 - */ 3096 - inode->i_blocks = (512 - 1 + size) >> 9; 3097 3090 } 3098 3091 3099 3092 return rc;
+1 -1
fs/smb/client/smb1transport.c
··· 460 460 return 0; 461 461 462 462 /* 463 - * Windows NT server returns error resposne (e.g. STATUS_DELETE_PENDING 463 + * Windows NT server returns error response (e.g. STATUS_DELETE_PENDING 464 464 * or STATUS_OBJECT_NAME_NOT_FOUND or ERRDOS/ERRbadfile or any other) 465 465 * for some TRANS2 requests without the RESPONSE flag set in header. 466 466 */
+4 -16
fs/smb/client/smb2ops.c
··· 1497 1497 { 1498 1498 struct smb2_file_network_open_info file_inf; 1499 1499 struct inode *inode; 1500 + u64 asize; 1500 1501 int rc; 1501 1502 1502 1503 rc = __SMB2_close(xid, tcon, cfile->fid.persistent_fid, ··· 1521 1520 inode_set_atime_to_ts(inode, 1522 1521 cifs_NTtimeToUnix(file_inf.LastAccessTime)); 1523 1522 1524 - /* 1525 - * i_blocks is not related to (i_size / i_blksize), 1526 - * but instead 512 byte (2**9) size is required for 1527 - * calculating num blocks. 1528 - */ 1529 - if (le64_to_cpu(file_inf.AllocationSize) > 4096) 1530 - inode->i_blocks = 1531 - (512 - 1 + le64_to_cpu(file_inf.AllocationSize)) >> 9; 1523 + asize = le64_to_cpu(file_inf.AllocationSize); 1524 + if (asize > 4096) 1525 + inode->i_blocks = CIFS_INO_BLOCKS(asize); 1532 1526 1533 1527 /* End of file and Attributes should not have to be updated on close */ 1534 1528 spin_unlock(&inode->i_lock); ··· 2200 2204 rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false); 2201 2205 if (rc) 2202 2206 goto duplicate_extents_out; 2203 - 2204 - /* 2205 - * Although also could set plausible allocation size (i_blocks) 2206 - * here in addition to setting the file size, in reflink 2207 - * it is likely that the target file is sparse. Its allocation 2208 - * size will be queried on next revalidate, but it is important 2209 - * to make sure that file's cached size is updated immediately 2210 - */ 2211 2207 netfs_resize_file(netfs_inode(inode), dest_off + len, true); 2212 2208 cifs_setsize(inode, dest_off + len); 2213 2209 }
+6 -3
fs/smb/server/mgmt/tree_connect.c
··· 102 102 103 103 void ksmbd_tree_connect_put(struct ksmbd_tree_connect *tcon) 104 104 { 105 - if (atomic_dec_and_test(&tcon->refcount)) 105 + if (atomic_dec_and_test(&tcon->refcount)) { 106 + ksmbd_share_config_put(tcon->share_conf); 106 107 kfree(tcon); 108 + } 107 109 } 108 110 109 111 static int __ksmbd_tree_conn_disconnect(struct ksmbd_session *sess, ··· 115 113 116 114 ret = ksmbd_ipc_tree_disconnect_request(sess->id, tree_conn->id); 117 115 ksmbd_release_tree_conn_id(sess, tree_conn->id); 118 - ksmbd_share_config_put(tree_conn->share_conf); 119 116 ksmbd_counter_dec(KSMBD_COUNTER_TREE_CONNS); 120 - if (atomic_dec_and_test(&tree_conn->refcount)) 117 + if (atomic_dec_and_test(&tree_conn->refcount)) { 118 + ksmbd_share_config_put(tree_conn->share_conf); 121 119 kfree(tree_conn); 120 + } 122 121 return ret; 123 122 } 124 123
+12 -5
fs/smb/server/smb2pdu.c
··· 126 126 pr_err("The first operation in the compound does not have tcon\n"); 127 127 return -EINVAL; 128 128 } 129 + if (work->tcon->t_state != TREE_CONNECTED) 130 + return -ENOENT; 129 131 if (tree_id != UINT_MAX && work->tcon->id != tree_id) { 130 132 pr_err("tree id(%u) is different with id(%u) in first operation\n", 131 133 tree_id, work->tcon->id); ··· 1950 1948 } 1951 1949 } 1952 1950 smb2_set_err_rsp(work); 1951 + conn->binding = false; 1953 1952 } else { 1954 1953 unsigned int iov_len; 1955 1954 ··· 2831 2828 goto out; 2832 2829 } 2833 2830 2834 - dh_info->fp->conn = conn; 2831 + if (dh_info->fp->conn) { 2832 + ksmbd_put_durable_fd(dh_info->fp); 2833 + err = -EBADF; 2834 + goto out; 2835 + } 2835 2836 dh_info->reconnected = true; 2836 2837 goto out; 2837 2838 } ··· 5459 5452 struct smb2_query_info_req *req, 5460 5453 struct smb2_query_info_rsp *rsp) 5461 5454 { 5462 - struct ksmbd_session *sess = work->sess; 5463 5455 struct ksmbd_conn *conn = work->conn; 5464 5456 struct ksmbd_share_config *share = work->tcon->share_conf; 5465 5457 int fsinfoclass = 0; ··· 5595 5589 5596 5590 info = (struct object_id_info *)(rsp->Buffer); 5597 5591 5598 - if (!user_guest(sess->user)) 5599 - memcpy(info->objid, user_passkey(sess->user), 16); 5592 + if (path.mnt->mnt_sb->s_uuid_len == 16) 5593 + memcpy(info->objid, path.mnt->mnt_sb->s_uuid.b, 5594 + path.mnt->mnt_sb->s_uuid_len); 5600 5595 else 5601 - memset(info->objid, 0, 16); 5596 + memcpy(info->objid, &stfs.f_fsid, sizeof(stfs.f_fsid)); 5602 5597 5603 5598 info->extended_info.magic = cpu_to_le32(EXTENDED_INFO_MAGIC); 5604 5599 info->extended_info.version = cpu_to_le32(1);
-3
fs/tests/exec_kunit.c
··· 94 94 { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * (_STK_LIM / 4 * 3 + sizeof(void *)), 95 95 .argc = 0, .envc = 0 }, 96 96 .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) }, 97 - { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * (_STK_LIM / 4 * + sizeof(void *)), 98 - .argc = 0, .envc = 0 }, 99 - .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) }, 100 97 { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * _STK_LIM, 101 98 .argc = 0, .envc = 0 }, 102 99 .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) },
+2 -1
include/hyperv/hvgdk_mini.h
··· 477 477 #define HVCALL_NOTIFY_PARTITION_EVENT 0x0087 478 478 #define HVCALL_ENTER_SLEEP_STATE 0x0084 479 479 #define HVCALL_NOTIFY_PORT_RING_EMPTY 0x008b 480 - #define HVCALL_SCRUB_PARTITION 0x008d 481 480 #define HVCALL_REGISTER_INTERCEPT_RESULT 0x0091 482 481 #define HVCALL_ASSERT_VIRTUAL_INTERRUPT 0x0094 483 482 #define HVCALL_CREATE_PORT 0x0095 ··· 1120 1121 HV_X64_REGISTER_MSR_MTRR_FIX4KF8000 = 0x0008007A, 1121 1122 1122 1123 HV_X64_REGISTER_REG_PAGE = 0x0009001C, 1124 + #elif defined(CONFIG_ARM64) 1125 + HV_ARM64_REGISTER_SINT_RESERVED_INTERRUPT_ID = 0x00070001, 1123 1126 #endif 1124 1127 }; 1125 1128
+1 -1
include/linux/auxvec.h
··· 4 4 5 5 #include <uapi/linux/auxvec.h> 6 6 7 - #define AT_VECTOR_SIZE_BASE 22 /* NEW_AUX_ENT entries in auxiliary table */ 7 + #define AT_VECTOR_SIZE_BASE 24 /* NEW_AUX_ENT entries in auxiliary table */ 8 8 /* number of "#define AT_.*" above, minus {AT_NULL, AT_IGNORE, AT_NOTELF} */ 9 9 #endif /* _LINUX_AUXVEC_H */
+3 -1
include/linux/build_bug.h
··· 32 32 /** 33 33 * BUILD_BUG_ON_MSG - break compile if a condition is true & emit supplied 34 34 * error message. 35 - * @condition: the condition which the compiler should know is false. 35 + * @cond: the condition which the compiler should know is false. 36 + * @msg: build-time error message 36 37 * 37 38 * See BUILD_BUG_ON for description. 38 39 */ ··· 61 60 62 61 /** 63 62 * static_assert - check integer constant expression at build time 63 + * @expr: expression to be checked 64 64 * 65 65 * static_assert() is a wrapper for the C11 _Static_assert, with a 66 66 * little macro magic to make the message optional (defaulting to the
+1
include/linux/console_struct.h
··· 160 160 struct uni_pagedict **uni_pagedict_loc; /* [!] Location of uni_pagedict variable for this console */ 161 161 u32 **vc_uni_lines; /* unicode screen content */ 162 162 u16 *vc_saved_screen; 163 + u32 **vc_saved_uni_lines; 163 164 unsigned int vc_saved_cols; 164 165 unsigned int vc_saved_rows; 165 166 /* additional information is in vt_kern.h */
+54
include/linux/device.h
··· 483 483 * on. This shrinks the "Board Support Packages" (BSPs) and 484 484 * minimizes board-specific #ifdefs in drivers. 485 485 * @driver_data: Private pointer for driver specific info. 486 + * @driver_override: Driver name to force a match. Do not touch directly; use 487 + * device_set_driver_override() instead. 486 488 * @links: Links to suppliers and consumers of this device. 487 489 * @power: For device power management. 488 490 * See Documentation/driver-api/pm/devices.rst for details. ··· 578 576 core doesn't touch it */ 579 577 void *driver_data; /* Driver data, set and get with 580 578 dev_set_drvdata/dev_get_drvdata */ 579 + struct { 580 + const char *name; 581 + spinlock_t lock; 582 + } driver_override; 581 583 struct mutex mutex; /* mutex to synchronize calls to 582 584 * its driver. 583 585 */ ··· 706 700 }; 707 701 708 702 #define kobj_to_dev(__kobj) container_of_const(__kobj, struct device, kobj) 703 + 704 + int __device_set_driver_override(struct device *dev, const char *s, size_t len); 705 + 706 + /** 707 + * device_set_driver_override() - Helper to set or clear driver override. 708 + * @dev: Device to change 709 + * @s: NUL-terminated string, new driver name to force a match, pass empty 710 + * string to clear it ("" or "\n", where the latter is only for sysfs 711 + * interface). 712 + * 713 + * Helper to set or clear driver override of a device. 714 + * 715 + * Returns: 0 on success or a negative error code on failure. 716 + */ 717 + static inline int device_set_driver_override(struct device *dev, const char *s) 718 + { 719 + return __device_set_driver_override(dev, s, s ? strlen(s) : 0); 720 + } 721 + 722 + /** 723 + * device_has_driver_override() - Check if a driver override has been set. 724 + * @dev: device to check 725 + * 726 + * Returns true if a driver override has been set for this device. 
727 + */ 728 + static inline bool device_has_driver_override(struct device *dev) 729 + { 730 + guard(spinlock)(&dev->driver_override.lock); 731 + return !!dev->driver_override.name; 732 + } 733 + 734 + /** 735 + * device_match_driver_override() - Match a driver against the device's driver_override. 736 + * @dev: device to check 737 + * @drv: driver to match against 738 + * 739 + * Returns > 0 if a driver override is set and matches the given driver, 0 if a 740 + * driver override is set but does not match, or < 0 if a driver override is not 741 + * set at all. 742 + */ 743 + static inline int device_match_driver_override(struct device *dev, 744 + const struct device_driver *drv) 745 + { 746 + guard(spinlock)(&dev->driver_override.lock); 747 + if (dev->driver_override.name) 748 + return !strcmp(dev->driver_override.name, drv->name); 749 + return -1; 750 + } 709 751 710 752 /** 711 753 * device_iommu_mapped - Returns true when the device DMA is translated
+4
include/linux/device/bus.h
··· 65 65 * this bus. 66 66 * @pm: Power management operations of this bus, callback the specific 67 67 * device driver's pm-ops. 68 + * @driver_override: Set to true if this bus supports the driver_override 69 + * mechanism, which allows userspace to force a specific 70 + * driver to bind to a device via a sysfs attribute. 68 71 * @need_parent_lock: When probing or removing a device on this bus, the 69 72 * device core should lock the device's parent. 70 73 * ··· 109 106 110 107 const struct dev_pm_ops *pm; 111 108 109 + bool driver_override; 112 110 bool need_parent_lock; 113 111 }; 114 112
+2 -1
include/linux/etherdevice.h
··· 42 42 43 43 int eth_header(struct sk_buff *skb, struct net_device *dev, unsigned short type, 44 44 const void *daddr, const void *saddr, unsigned len); 45 - int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr); 45 + int eth_header_parse(const struct sk_buff *skb, const struct net_device *dev, 46 + unsigned char *haddr); 46 47 int eth_header_cache(const struct neighbour *neigh, struct hh_cache *hh, 47 48 __be16 type); 48 49 void eth_header_cache_update(struct hh_cache *hh, const struct net_device *dev,
+1
include/linux/hid.h
··· 682 682 __s32 battery_charge_status; 683 683 enum hid_battery_status battery_status; 684 684 bool battery_avoid_query; 685 + bool battery_present; 685 686 ktime_t battery_ratelimit_time; 686 687 #endif 687 688
+2 -1
include/linux/if_ether.h
··· 40 40 return (struct ethhdr *)skb_inner_mac_header(skb); 41 41 } 42 42 43 - int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr); 43 + int eth_header_parse(const struct sk_buff *skb, const struct net_device *dev, 44 + unsigned char *haddr); 44 45 45 46 extern ssize_t sysfs_format_mac(char *buf, const unsigned char *addr, int len); 46 47
+8 -2
include/linux/io-pgtable.h
··· 53 53 * tables. 54 54 * @ias: Input address (iova) size, in bits. 55 55 * @oas: Output address (paddr) size, in bits. 56 - * @coherent_walk A flag to indicate whether or not page table walks made 56 + * @coherent_walk: A flag to indicate whether or not page table walks made 57 57 * by the IOMMU are coherent with the CPU caches. 58 58 * @tlb: TLB management callbacks for this set of tables. 59 59 * @iommu_dev: The device representing the DMA configuration for the ··· 136 136 void (*free)(void *cookie, void *pages, size_t size); 137 137 138 138 /* Low-level data specific to the table format */ 139 + /* private: */ 139 140 union { 140 141 struct { 141 142 u64 ttbr; ··· 204 203 * @unmap_pages: Unmap a range of virtually contiguous pages of the same size. 205 204 * @iova_to_phys: Translate iova to physical address. 206 205 * @pgtable_walk: (optional) Perform a page table walk for a given iova. 206 + * @read_and_clear_dirty: Record dirty info per IOVA. If an IOVA is dirty, 207 + * clear its dirty state from the PTE unless the 208 + * IOMMU_DIRTY_NO_CLEAR flag is passed in. 207 209 * 208 210 * These functions map directly onto the iommu_ops member functions with 209 211 * the same names. ··· 235 231 * the configuration actually provided by the allocator (e.g. the 236 232 * pgsize_bitmap may be restricted). 237 233 * @cookie: An opaque token provided by the IOMMU driver and passed back to 238 - * the callback routines in cfg->tlb. 234 + * the callback routines. 235 + * 236 + * Returns: Pointer to the &struct io_pgtable_ops for this set of page tables. 239 237 */ 240 238 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt, 241 239 struct io_pgtable_cfg *cfg,
+3
include/linux/io_uring_types.h
··· 541 541 REQ_F_BL_NO_RECYCLE_BIT, 542 542 REQ_F_BUFFERS_COMMIT_BIT, 543 543 REQ_F_BUF_NODE_BIT, 544 + REQ_F_BUF_MORE_BIT, 544 545 REQ_F_HAS_METADATA_BIT, 545 546 REQ_F_IMPORT_BUFFER_BIT, 546 547 REQ_F_SQE_COPIED_BIT, ··· 627 626 REQ_F_BUFFERS_COMMIT = IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT), 628 627 /* buf node is valid */ 629 628 REQ_F_BUF_NODE = IO_REQ_FLAG(REQ_F_BUF_NODE_BIT), 629 + /* incremental buffer consumption, more space available */ 630 + REQ_F_BUF_MORE = IO_REQ_FLAG(REQ_F_BUF_MORE_BIT), 630 631 /* request has read/write metadata assigned */ 631 632 REQ_F_HAS_METADATA = IO_REQ_FLAG(REQ_F_HAS_METADATA_BIT), 632 633 /*
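REQ_F_BUF_MORE follows io_uring's existing convention of declaring every request flag twice: once as a bit index in the enum, once as the shifted mask via IO_REQ_FLAG(). A reduced, hypothetical sketch of that convention (three flags only, not the real set):

```c
#include <assert.h>
#include <stdint.h>

/* Each flag exists twice: as an enum bit index, and as the shifted mask. */
#define IO_REQ_FLAG(bit)	((uint64_t)1 << (bit))

enum {
	REQ_F_BUFFERS_COMMIT_BIT,
	REQ_F_BUF_NODE_BIT,
	REQ_F_BUF_MORE_BIT,	/* new bit slots in before the flags after it */
};

#define REQ_F_BUFFERS_COMMIT	IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT)
#define REQ_F_BUF_NODE		IO_REQ_FLAG(REQ_F_BUF_NODE_BIT)
#define REQ_F_BUF_MORE		IO_REQ_FLAG(REQ_F_BUF_MORE_BIT)
```

Inserting REQ_F_BUF_MORE_BIT mid-enum renumbers the bits that follow it, which is safe only because the masks are derived from the indices rather than hard-coded.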
+1 -1
include/linux/local_lock_internal.h
··· 315 315 316 316 #endif /* CONFIG_PREEMPT_RT */ 317 317 318 - #if defined(WARN_CONTEXT_ANALYSIS) 318 + #if defined(WARN_CONTEXT_ANALYSIS) && !defined(__CHECKER__) 319 319 /* 320 320 * Because the compiler only knows about the base per-CPU variable, use this 321 321 * helper function to make the compiler think we lock/unlock the @base variable,
+5 -4
include/linux/netdevice.h
··· 311 311 int (*create) (struct sk_buff *skb, struct net_device *dev, 312 312 unsigned short type, const void *daddr, 313 313 const void *saddr, unsigned int len); 314 - int (*parse)(const struct sk_buff *skb, unsigned char *haddr); 314 + int (*parse)(const struct sk_buff *skb, 315 + const struct net_device *dev, 316 + unsigned char *haddr); 315 317 int (*cache)(const struct neighbour *neigh, struct hh_cache *hh, __be16 type); 316 318 void (*cache_update)(struct hh_cache *hh, 317 319 const struct net_device *dev, ··· 2157 2155 unsigned long state; 2158 2156 unsigned int flags; 2159 2157 unsigned short hard_header_len; 2158 + enum netdev_stat_type pcpu_stat_type:8; 2160 2159 netdev_features_t features; 2161 2160 struct inet6_dev __rcu *ip6_ptr; 2162 2161 __cacheline_group_end(net_device_read_txrx); ··· 2406 2403 /* mid-layer private */ 2407 2404 void *ml_priv; 2408 2405 enum netdev_ml_priv_type ml_priv_type; 2409 - 2410 - enum netdev_stat_type pcpu_stat_type:8; 2411 2406 2412 2407 #if IS_ENABLED(CONFIG_GARP) 2413 2408 struct garp_port __rcu *garp_port; ··· 3447 3446 3448 3447 if (!dev->header_ops || !dev->header_ops->parse) 3449 3448 return 0; 3450 - return dev->header_ops->parse(skb, haddr); 3449 + return dev->header_ops->parse(skb, dev, haddr); 3451 3450 } 3452 3451 3453 3452 static inline __be16 dev_parse_header_protocol(const struct sk_buff *skb)
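The signature change threads the net_device through header_ops->parse(), and the dev_parse_header() wrapper keeps its NULL guards so devices without a parse hook still report zero parsed bytes. A self-contained sketch of that optional-callback pattern, with illustrative types standing in for the real sk_buff/net_device:

```c
#include <assert.h>
#include <stddef.h>

struct hdr_ops {
	/* optional: may be NULL */
	int (*parse)(const void *skb, const void *dev, unsigned char *haddr);
};

struct net_dev {
	const struct hdr_ops *header_ops;	/* optional: may be NULL */
};

static int parse_eth(const void *skb, const void *dev, unsigned char *haddr)
{
	(void)skb; (void)dev;
	haddr[0] = 0x02;	/* pretend one address byte was copied */
	return 1;
}

/* Guarded dispatch: absent ops or absent hook both mean "0 bytes parsed". */
static int dev_parse_header(const struct net_dev *dev, const void *skb,
			    unsigned char *haddr)
{
	if (!dev->header_ops || !dev->header_ops->parse)
		return 0;
	return dev->header_ops->parse(skb, dev, haddr);
}

static int demo_with_ops(void)
{
	const struct hdr_ops ops = { .parse = parse_eth };
	struct net_dev dev = { .header_ops = &ops };
	unsigned char a[1] = { 0 };

	return dev_parse_header(&dev, NULL, a);
}

static int demo_without_ops(void)
{
	struct net_dev dev = { .header_ops = NULL };
	unsigned char a[1] = { 0 };

	return dev_parse_header(&dev, NULL, a);
}
```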
-5
include/linux/platform_device.h
··· 31 31 struct resource *resource; 32 32 33 33 const struct platform_device_id *id_entry; 34 - /* 35 - * Driver name to force a match. Do not set directly, because core 36 - * frees it. Use driver_set_override() to set or clear it. 37 - */ 38 - const char *driver_override; 39 34 40 35 /* MFD cell pointer */ 41 36 struct mfd_cell *mfd_cell;
+1
include/linux/serial_8250.h
··· 195 195 void serial8250_do_set_divisor(struct uart_port *port, unsigned int baud, 196 196 unsigned int quot); 197 197 int fsl8250_handle_irq(struct uart_port *port); 198 + void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir); 198 199 int serial8250_handle_irq(struct uart_port *port, unsigned int iir); 199 200 u16 serial8250_rx_chars(struct uart_8250_port *up, u16 lsr); 200 201 void serial8250_read_char(struct uart_8250_port *up, u16 lsr);
+22 -6
include/net/ip_tunnels.h
··· 665 665 static inline void iptunnel_xmit_stats(struct net_device *dev, int pkt_len) 666 666 { 667 667 if (pkt_len > 0) { 668 - struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats); 668 + if (dev->pcpu_stat_type == NETDEV_PCPU_STAT_DSTATS) { 669 + struct pcpu_dstats *dstats = get_cpu_ptr(dev->dstats); 669 670 670 - u64_stats_update_begin(&tstats->syncp); 671 - u64_stats_add(&tstats->tx_bytes, pkt_len); 672 - u64_stats_inc(&tstats->tx_packets); 673 - u64_stats_update_end(&tstats->syncp); 674 - put_cpu_ptr(tstats); 671 + u64_stats_update_begin(&dstats->syncp); 672 + u64_stats_add(&dstats->tx_bytes, pkt_len); 673 + u64_stats_inc(&dstats->tx_packets); 674 + u64_stats_update_end(&dstats->syncp); 675 + put_cpu_ptr(dstats); 676 + return; 677 + } 678 + if (dev->pcpu_stat_type == NETDEV_PCPU_STAT_TSTATS) { 679 + struct pcpu_sw_netstats *tstats = get_cpu_ptr(dev->tstats); 680 + 681 + u64_stats_update_begin(&tstats->syncp); 682 + u64_stats_add(&tstats->tx_bytes, pkt_len); 683 + u64_stats_inc(&tstats->tx_packets); 684 + u64_stats_update_end(&tstats->syncp); 685 + put_cpu_ptr(tstats); 686 + return; 687 + } 688 + pr_err_once("iptunnel_xmit_stats pcpu_stat_type=%d\n", 689 + dev->pcpu_stat_type); 690 + WARN_ON_ONCE(1); 675 691 return; 676 692 } 677 693
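The rewritten iptunnel_xmit_stats() dispatches on dev->pcpu_stat_type instead of assuming tstats, falling through to a one-time warning for unrecognised layouts. The control flow can be sketched in plain C as follows; the types and names are stand-ins, and the per-CPU pointers and syncp sequence counters are omitted:

```c
#include <assert.h>
#include <stdint.h>

enum stat_type { STAT_TSTATS, STAT_DSTATS, STAT_NONE };

struct counters { uint64_t tx_bytes, tx_packets; };

struct demo_dev {
	enum stat_type type;
	struct counters tstats, dstats;
};

/* Returns 0 on success, -1 when the device has no recognised stats layout
 * (the analogue of the pr_err_once()/WARN_ON_ONCE() branch above). */
static int xmit_stats(struct demo_dev *dev, int pkt_len)
{
	struct counters *c;

	if (pkt_len <= 0)
		return 0;
	if (dev->type == STAT_DSTATS)
		c = &dev->dstats;
	else if (dev->type == STAT_TSTATS)
		c = &dev->tstats;
	else
		return -1;
	c->tx_bytes += (uint64_t)pkt_len;
	c->tx_packets++;
	return 0;
}

static uint64_t demo_dstats_bytes(void)
{
	struct demo_dev dev = { .type = STAT_DSTATS };

	xmit_stats(&dev, 100);
	xmit_stats(&dev, 50);
	return dev.dstats.tx_bytes;
}

static int demo_unknown(void)
{
	struct demo_dev dev = { .type = STAT_NONE };

	return xmit_stats(&dev, 100);
}
```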
+3 -1
include/net/mac80211.h
··· 7407 7407 * @band: the band to transmit on 7408 7408 * @sta: optional pointer to get the station to send the frame to 7409 7409 * 7410 - * Return: %true if the skb was prepared, %false otherwise 7410 + * Return: %true if the skb was prepared, %false otherwise. 7411 + * On failure, the skb is freed by this function; callers must not 7412 + * free it again. 7411 7413 * 7412 7414 * Note: must be called under RCU lock 7413 7415 */
+2 -4
include/net/netfilter/nf_tables.h
··· 277 277 unsigned char data[]; 278 278 }; 279 279 280 - #define NFT_SET_ELEM_INTERNAL_LAST 0x1 281 - 282 280 /* placeholder structure for opaque set element backend representation. */ 283 281 struct nft_elem_priv { }; 284 282 ··· 286 288 * @key: element key 287 289 * @key_end: closing element key 288 290 * @data: element data 289 - * @flags: flags 290 291 * @priv: element private data and extensions 291 292 */ 292 293 struct nft_set_elem { ··· 301 304 u32 buf[NFT_DATA_VALUE_MAXLEN / sizeof(u32)]; 302 305 struct nft_data val; 303 306 } data; 304 - u32 flags; 305 307 struct nft_elem_priv *priv; 306 308 }; 307 309 ··· 874 878 u64 timeout, u64 expiration, gfp_t gfp); 875 879 int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set, 876 880 struct nft_expr *expr_array[]); 881 + void nft_set_elem_expr_destroy(const struct nft_ctx *ctx, 882 + struct nft_set_elem_expr *elem_expr); 877 883 void nft_set_elem_destroy(const struct nft_set *set, 878 884 const struct nft_elem_priv *elem_priv, 879 885 bool destroy_expr);
+33
include/net/sch_generic.h
··· 716 716 void qdisc_put(struct Qdisc *qdisc); 717 717 void qdisc_put_unlocked(struct Qdisc *qdisc); 718 718 void qdisc_tree_reduce_backlog(struct Qdisc *qdisc, int n, int len); 719 + 720 + static inline void dev_reset_queue(struct net_device *dev, 721 + struct netdev_queue *dev_queue, 722 + void *_unused) 723 + { 724 + struct Qdisc *qdisc; 725 + bool nolock; 726 + 727 + qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 728 + if (!qdisc) 729 + return; 730 + 731 + nolock = qdisc->flags & TCQ_F_NOLOCK; 732 + 733 + if (nolock) 734 + spin_lock_bh(&qdisc->seqlock); 735 + spin_lock_bh(qdisc_lock(qdisc)); 736 + 737 + qdisc_reset(qdisc); 738 + 739 + spin_unlock_bh(qdisc_lock(qdisc)); 740 + if (nolock) { 741 + clear_bit(__QDISC_STATE_MISSED, &qdisc->state); 742 + clear_bit(__QDISC_STATE_DRAINING, &qdisc->state); 743 + spin_unlock_bh(&qdisc->seqlock); 744 + } 745 + } 746 + 719 747 #ifdef CONFIG_NET_SCHED 720 748 int qdisc_offload_dump_helper(struct Qdisc *q, enum tc_setup_type type, 721 749 void *type_data); ··· 1456 1428 struct mini_Qdisc __rcu **p_miniq); 1457 1429 void mini_qdisc_pair_block_init(struct mini_Qdisc_pair *miniqp, 1458 1430 struct tcf_block *block); 1431 + 1432 + static inline bool mini_qdisc_pair_inited(struct mini_Qdisc_pair *miniqp) 1433 + { 1434 + return !!miniqp->p_miniq; 1435 + } 1459 1436 1460 1437 void mq_change_real_num_tx(struct Qdisc *sch, unsigned int new_real_tx); 1461 1438
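dev_reset_queue() above takes the seqlock only for TCQ_F_NOLOCK qdiscs, always takes the per-qdisc lock, and releases in reverse order. A userspace sketch of that conditional double-locking, with pthread mutexes standing in for the kernel spinlocks (the flag value and field names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

#define QDISC_F_NOLOCK 0x1	/* illustrative flag value */

struct demo_qdisc {
	unsigned int flags;
	pthread_mutex_t seqlock;	/* outer lock, lock-free qdiscs only */
	pthread_mutex_t lock;		/* per-qdisc lock, always taken */
	int qlen;
};

static void demo_reset_queue(struct demo_qdisc *q)
{
	bool nolock = q->flags & QDISC_F_NOLOCK;

	if (nolock)
		pthread_mutex_lock(&q->seqlock);
	pthread_mutex_lock(&q->lock);

	q->qlen = 0;	/* stand-in for qdisc_reset() */

	pthread_mutex_unlock(&q->lock);
	if (nolock)
		pthread_mutex_unlock(&q->seqlock);
}

static int demo_nolock_reset(void)
{
	struct demo_qdisc q = {
		.flags = QDISC_F_NOLOCK,
		.seqlock = PTHREAD_MUTEX_INITIALIZER,
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.qlen = 42,
	};

	demo_reset_queue(&q);
	return q.qlen;
}
```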
+1 -1
include/net/udp_tunnel.h
··· 52 52 static inline int udp_sock_create6(struct net *net, struct udp_port_cfg *cfg, 53 53 struct socket **sockp) 54 54 { 55 - return 0; 55 + return -EPFNOSUPPORT; 56 56 } 57 57 #endif 58 58
+5 -2
include/trace/events/task.h
··· 38 38 TP_ARGS(task, comm), 39 39 40 40 TP_STRUCT__entry( 41 + __field( pid_t, pid) 41 42 __array( char, oldcomm, TASK_COMM_LEN) 42 43 __array( char, newcomm, TASK_COMM_LEN) 43 44 __field( short, oom_score_adj) 44 45 ), 45 46 46 47 TP_fast_assign( 48 + __entry->pid = task->pid; 47 49 memcpy(entry->oldcomm, task->comm, TASK_COMM_LEN); 48 50 strscpy(entry->newcomm, comm, TASK_COMM_LEN); 49 51 __entry->oom_score_adj = task->signal->oom_score_adj; 50 52 ), 51 53 52 - TP_printk("oldcomm=%s newcomm=%s oom_score_adj=%hd", 53 - __entry->oldcomm, __entry->newcomm, __entry->oom_score_adj) 54 + TP_printk("pid=%d oldcomm=%s newcomm=%s oom_score_adj=%hd", 55 + __entry->pid, __entry->oldcomm, 56 + __entry->newcomm, __entry->oom_score_adj) 54 57 ); 55 58 56 59 /**
+11 -3
io_uring/kbuf.c
··· 34 34 35 35 static bool io_kbuf_inc_commit(struct io_buffer_list *bl, int len) 36 36 { 37 + /* No data consumed, return false early to avoid consuming the buffer */ 38 + if (!len) 39 + return false; 40 + 37 41 while (len) { 38 42 struct io_uring_buf *buf; 39 43 u32 buf_len, this_len; ··· 216 212 sel.addr = u64_to_user_ptr(READ_ONCE(buf->addr)); 217 213 218 214 if (io_should_commit(req, issue_flags)) { 219 - io_kbuf_commit(req, sel.buf_list, *len, 1); 215 + if (!io_kbuf_commit(req, sel.buf_list, *len, 1)) 216 + req->flags |= REQ_F_BUF_MORE; 220 217 sel.buf_list = NULL; 221 218 } 222 219 return sel; ··· 350 345 */ 351 346 if (ret > 0) { 352 347 req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE; 353 - io_kbuf_commit(req, sel->buf_list, arg->out_len, ret); 348 + if (!io_kbuf_commit(req, sel->buf_list, arg->out_len, ret)) 349 + req->flags |= REQ_F_BUF_MORE; 354 350 } 355 351 } else { 356 352 ret = io_provided_buffers_select(req, &arg->out_len, sel->buf_list, arg->iovs); ··· 397 391 398 392 if (bl) 399 393 ret = io_kbuf_commit(req, bl, len, nr); 394 + if (ret && (req->flags & REQ_F_BUF_MORE)) 395 + ret = false; 400 396 401 - req->flags &= ~REQ_F_BUFFER_RING; 397 + req->flags &= ~(REQ_F_BUFFER_RING | REQ_F_BUF_MORE); 402 398 return ret; 403 399 } 404 400
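The new early return in io_kbuf_inc_commit() distinguishes "nothing consumed" from "buffer drained", which is what lets the callers set REQ_F_BUF_MORE when space remains. A simplified model of the incremental-commit loop, using toy ring types rather than the io_uring structures:

```c
#include <assert.h>
#include <stdint.h>

struct demo_buf { uint32_t len; };

struct demo_bl {
	struct demo_buf bufs[4];
	unsigned int head;
};

/* Consume @len bytes, shrinking the head buffer in place when partially
 * used. Returns 1 when the head buffer was fully consumed (committed),
 * 0 when data remains in it -- the "more space available" case. */
static int inc_commit(struct demo_bl *bl, uint32_t len)
{
	if (!len)		/* nothing consumed: keep the buffer */
		return 0;

	while (len) {
		struct demo_buf *buf = &bl->bufs[bl->head];
		uint32_t this_len = len < buf->len ? len : buf->len;

		buf->len -= this_len;
		if (buf->len)	/* partial use: stop, room left in buffer */
			return 0;
		bl->head++;
		len -= this_len;
	}
	return 1;
}

static int demo_partial(void)
{
	struct demo_bl bl = { .bufs = { { 64 }, { 64 } } };

	return inc_commit(&bl, 16);	/* 48 bytes remain in buffer 0 */
}

static int demo_full(void)
{
	struct demo_bl bl = { .bufs = { { 64 }, { 64 } } };

	return inc_commit(&bl, 128);	/* drains both buffers */
}
```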
+7 -2
io_uring/poll.c
··· 272 272 atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs); 273 273 v &= ~IO_POLL_RETRY_FLAG; 274 274 } 275 + v &= IO_POLL_REF_MASK; 275 276 } 276 277 277 278 /* the mask was stashed in __io_poll_execute */ ··· 305 304 return IOU_POLL_REMOVE_POLL_USE_RES; 306 305 } 307 306 } else { 308 - int ret = io_poll_issue(req, tw); 307 + int ret; 309 308 309 + /* multiple refs and HUP, ensure we loop once more */ 310 + if ((req->cqe.res & (POLLHUP | POLLRDHUP)) && v != 1) 311 + v--; 312 + 313 + ret = io_poll_issue(req, tw); 310 314 if (ret == IOU_COMPLETE) 311 315 return IOU_POLL_REMOVE_POLL_USE_RES; 312 316 else if (ret == IOU_REQUEUE) ··· 327 321 * Release all references, retry if someone tried to restart 328 322 * task_work while we were executing it. 329 323 */ 330 - v &= IO_POLL_REF_MASK; 331 324 } while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK); 332 325 333 326 io_napi_add(req);
+20 -4
kernel/bpf/btf.c
··· 1787 1787 * of the _bh() version. 1788 1788 */ 1789 1789 spin_lock_irqsave(&btf_idr_lock, flags); 1790 - idr_remove(&btf_idr, btf->id); 1790 + if (btf->id) { 1791 + idr_remove(&btf_idr, btf->id); 1792 + /* 1793 + * Clear the id here to make this function idempotent, since it will get 1794 + * called a couple of times for module BTFs: on module unload, and then 1795 + * the final btf_put(). btf_alloc_id() starts IDs with 1, so we can use 1796 + * 0 as sentinel value. 1797 + */ 1798 + WRITE_ONCE(btf->id, 0); 1799 + } 1791 1800 spin_unlock_irqrestore(&btf_idr_lock, flags); 1792 1801 } 1793 1802 ··· 8124 8115 { 8125 8116 const struct btf *btf = filp->private_data; 8126 8117 8127 - seq_printf(m, "btf_id:\t%u\n", btf->id); 8118 + seq_printf(m, "btf_id:\t%u\n", READ_ONCE(btf->id)); 8128 8119 } 8129 8120 #endif 8130 8121 ··· 8206 8197 if (copy_from_user(&info, uinfo, info_copy)) 8207 8198 return -EFAULT; 8208 8199 8209 - info.id = btf->id; 8200 + info.id = READ_ONCE(btf->id); 8210 8201 ubtf = u64_to_user_ptr(info.btf); 8211 8202 btf_copy = min_t(u32, btf->data_size, info.btf_size); 8212 8203 if (copy_to_user(ubtf, btf->data, btf_copy)) ··· 8269 8260 8270 8261 u32 btf_obj_id(const struct btf *btf) 8271 8262 { 8272 - return btf->id; 8263 + return READ_ONCE(btf->id); 8273 8264 } 8274 8265 8275 8266 bool btf_is_kernel(const struct btf *btf) ··· 8391 8382 if (btf_mod->module != module) 8392 8383 continue; 8393 8384 8385 + /* 8386 + * For modules, we do the freeing of BTF IDR as soon as 8387 + * module goes away to disable BTF discovery, since the 8388 + * btf_try_get_module() on such BTFs will fail. This may 8389 + * be called again on btf_put(), but it's ok to do so. 8390 + */ 8391 + btf_free_id(btf_mod->btf); 8394 8392 list_del(&btf_mod->list); 8395 8393 if (btf_mod->sysfs_attr) 8396 8394 sysfs_remove_bin_file(btf_kobj, btf_mod->sysfs_attr);
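btf_free_id() becomes idempotent by exploiting the fact that btf_alloc_id() starts IDs at 1, so 0 can act as an "already released" sentinel and the release path may safely run both on module unload and on the final btf_put(). The same pattern in a self-contained sketch, with a small array standing in for the IDR and a counter standing in for idr_remove():

```c
#include <assert.h>

#define MAX_IDS 8

static int id_table[MAX_IDS];	/* id_table[id] != 0 means allocated */
static int remove_calls;

struct obj { int id; };	/* ids start at 1; 0 means "released" */

static void obj_free_id(struct obj *o)
{
	if (o->id) {
		id_table[o->id] = 0;
		remove_calls++;		/* the real idr_remove() */
		o->id = 0;		/* make a second call a no-op */
	}
}

static int demo_double_free(void)
{
	struct obj o = { .id = 3 };

	id_table[3] = 1;
	obj_free_id(&o);	/* releases id 3 */
	obj_free_id(&o);	/* idempotent: no second removal */
	return remove_calls;
}
```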
+35 -8
kernel/bpf/core.c
··· 1422 1422 *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd); 1423 1423 *to++ = BPF_STX_MEM(from->code, from->dst_reg, BPF_REG_AX, from->off); 1424 1424 break; 1425 + 1426 + case BPF_ST | BPF_PROBE_MEM32 | BPF_DW: 1427 + case BPF_ST | BPF_PROBE_MEM32 | BPF_W: 1428 + case BPF_ST | BPF_PROBE_MEM32 | BPF_H: 1429 + case BPF_ST | BPF_PROBE_MEM32 | BPF_B: 1430 + *to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ 1431 + from->imm); 1432 + *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd); 1433 + /* 1434 + * Cannot use BPF_STX_MEM() macro here as it 1435 + * hardcodes BPF_MEM mode, losing PROBE_MEM32 1436 + * and breaking arena addressing in the JIT. 1437 + */ 1438 + *to++ = (struct bpf_insn) { 1439 + .code = BPF_STX | BPF_PROBE_MEM32 | 1440 + BPF_SIZE(from->code), 1441 + .dst_reg = from->dst_reg, 1442 + .src_reg = BPF_REG_AX, 1443 + .off = from->off, 1444 + }; 1445 + break; 1425 1446 } 1426 1447 out: 1427 1448 return to - to_buff; ··· 1757 1736 } 1758 1737 1759 1738 #ifndef CONFIG_BPF_JIT_ALWAYS_ON 1739 + /* Absolute value of s32 without undefined behavior for S32_MIN */ 1740 + static u32 abs_s32(s32 x) 1741 + { 1742 + return x >= 0 ? 
(u32)x : -(u32)x; 1743 + } 1744 + 1760 1745 /** 1761 1746 * ___bpf_prog_run - run eBPF program on a given context 1762 1747 * @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers ··· 1927 1900 DST = do_div(AX, (u32) SRC); 1928 1901 break; 1929 1902 case 1: 1930 - AX = abs((s32)DST); 1931 - AX = do_div(AX, abs((s32)SRC)); 1903 + AX = abs_s32((s32)DST); 1904 + AX = do_div(AX, abs_s32((s32)SRC)); 1932 1905 if ((s32)DST < 0) 1933 1906 DST = (u32)-AX; 1934 1907 else ··· 1955 1928 DST = do_div(AX, (u32) IMM); 1956 1929 break; 1957 1930 case 1: 1958 - AX = abs((s32)DST); 1959 - AX = do_div(AX, abs((s32)IMM)); 1931 + AX = abs_s32((s32)DST); 1932 + AX = do_div(AX, abs_s32((s32)IMM)); 1960 1933 if ((s32)DST < 0) 1961 1934 DST = (u32)-AX; 1962 1935 else ··· 1982 1955 DST = (u32) AX; 1983 1956 break; 1984 1957 case 1: 1985 - AX = abs((s32)DST); 1986 - do_div(AX, abs((s32)SRC)); 1958 + AX = abs_s32((s32)DST); 1959 + do_div(AX, abs_s32((s32)SRC)); 1987 1960 if (((s32)DST < 0) == ((s32)SRC < 0)) 1988 1961 DST = (u32)AX; 1989 1962 else ··· 2009 1982 DST = (u32) AX; 2010 1983 break; 2011 1984 case 1: 2012 - AX = abs((s32)DST); 2013 - do_div(AX, abs((s32)IMM)); 1985 + AX = abs_s32((s32)DST); 1986 + do_div(AX, abs_s32((s32)IMM)); 2014 1987 if (((s32)DST < 0) == ((s32)IMM < 0)) 2015 1988 DST = (u32)AX; 2016 1989 else
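The abs_s32() helper above exists because abs((s32)x) is undefined behaviour for S32_MIN: negating INT32_MIN overflows a signed int. Casting to an unsigned type before negating turns the operation into well-defined modular arithmetic. The same function compiles unchanged in userspace with stdint types:

```c
#include <assert.h>
#include <stdint.h>

/* Absolute value of a signed 32-bit integer without UB for INT32_MIN:
 * the negation happens in uint32_t, where wraparound is defined. */
static uint32_t abs_s32(int32_t x)
{
	return x >= 0 ? (uint32_t)x : -(uint32_t)x;
}
```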
+25 -8
kernel/bpf/verifier.c
··· 15910 15910 /* Apply bswap if alu64 or switch between big-endian and little-endian machines */ 15911 15911 bool need_bswap = alu64 || (to_le == is_big_endian); 15912 15912 15913 + /* 15914 + * If the register is mutated, manually reset its scalar ID to break 15915 + * any existing ties and avoid incorrect bounds propagation. 15916 + */ 15917 + if (need_bswap || insn->imm == 16 || insn->imm == 32) 15918 + dst_reg->id = 0; 15919 + 15913 15920 if (need_bswap) { 15914 15921 if (insn->imm == 16) 15915 15922 dst_reg->var_off = tnum_bswap16(dst_reg->var_off); ··· 15999 15992 else 16000 15993 return 0; 16001 15994 16002 - branch = push_stack(env, env->insn_idx + 1, env->insn_idx, false); 15995 + branch = push_stack(env, env->insn_idx, env->insn_idx, false); 16003 15996 if (IS_ERR(branch)) 16004 15997 return PTR_ERR(branch); 16005 15998 ··· 17415 17408 continue; 17416 17409 if ((reg->id & ~BPF_ADD_CONST) != (known_reg->id & ~BPF_ADD_CONST)) 17417 17410 continue; 17411 + /* 17412 + * Skip mixed 32/64-bit links: the delta relationship doesn't 17413 + * hold across different ALU widths. 17414 + */ 17415 + if (((reg->id ^ known_reg->id) & BPF_ADD_CONST) == BPF_ADD_CONST) 17416 + continue; 17418 17417 if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) || 17419 17418 reg->off == known_reg->off) { 17420 17419 s32 saved_subreg_def = reg->subreg_def; ··· 17448 17435 scalar32_min_max_add(reg, &fake_reg); 17449 17436 scalar_min_max_add(reg, &fake_reg); 17450 17437 reg->var_off = tnum_add(reg->var_off, fake_reg.var_off); 17451 - if (known_reg->id & BPF_ADD_CONST32) 17438 + if ((reg->id | known_reg->id) & BPF_ADD_CONST32) 17452 17439 zext_32_to_64(reg); 17453 17440 reg_bounds_sync(reg); 17454 17441 } ··· 19876 19863 * Also verify that new value satisfies old value range knowledge. 
19877 19864 */ 19878 19865 19879 - /* ADD_CONST mismatch: different linking semantics */ 19880 - if ((rold->id & BPF_ADD_CONST) && !(rcur->id & BPF_ADD_CONST)) 19881 - return false; 19882 - 19883 - if (rold->id && !(rold->id & BPF_ADD_CONST) && (rcur->id & BPF_ADD_CONST)) 19866 + /* 19867 + * ADD_CONST flags must match exactly: BPF_ADD_CONST32 and 19868 + * BPF_ADD_CONST64 have different linking semantics in 19869 + * sync_linked_regs() (alu32 zero-extends, alu64 does not), 19870 + * so pruning across different flag types is unsafe. 19871 + */ 19872 + if (rold->id && 19873 + (rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST)) 19884 19874 return false; 19885 19875 19886 19876 /* Both have offset linkage: offsets must match */ ··· 20920 20904 * state when it exits. 20921 20905 */ 20922 20906 int err = check_resource_leak(env, exception_exit, 20923 - !env->cur_state->curframe, 20907 + exception_exit || !env->cur_state->curframe, 20908 + exception_exit ? "bpf_throw" : 20924 20909 "BPF_EXIT instruction in main prog"); 20925 20910 if (err) 20926 20911 return err;
+2 -2
kernel/crash_dump_dm_crypt.c
··· 168 168 169 169 memcpy(dm_key->data, ukp->data, ukp->datalen); 170 170 dm_key->key_size = ukp->datalen; 171 - kexec_dprintk("Get dm crypt key (size=%u) %s: %8ph\n", dm_key->key_size, 172 - dm_key->key_desc, dm_key->data); 171 + kexec_dprintk("Get dm crypt key (size=%u) %s\n", dm_key->key_size, 172 + dm_key->key_desc); 173 173 174 174 out: 175 175 up_read(&key->sem);
+8 -11
kernel/events/core.c
··· 4813 4813 struct perf_event *sub, *event = data->event; 4814 4814 struct perf_event_context *ctx = event->ctx; 4815 4815 struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); 4816 - struct pmu *pmu = event->pmu; 4816 + struct pmu *pmu; 4817 4817 4818 4818 /* 4819 4819 * If this is a task context, we need to check whether it is ··· 4825 4825 if (ctx->task && cpuctx->task_ctx != ctx) 4826 4826 return; 4827 4827 4828 - raw_spin_lock(&ctx->lock); 4828 + guard(raw_spinlock)(&ctx->lock); 4829 4829 ctx_time_update_event(ctx, event); 4830 4830 4831 4831 perf_event_update_time(event); ··· 4833 4833 perf_event_update_sibling_time(event); 4834 4834 4835 4835 if (event->state != PERF_EVENT_STATE_ACTIVE) 4836 - goto unlock; 4836 + return; 4837 4837 4838 4838 if (!data->group) { 4839 - pmu->read(event); 4839 + perf_pmu_read(event); 4840 4840 data->ret = 0; 4841 - goto unlock; 4841 + return; 4842 4842 } 4843 4843 4844 + pmu = event->pmu_ctx->pmu; 4844 4845 pmu->start_txn(pmu, PERF_PMU_TXN_READ); 4845 4846 4846 - pmu->read(event); 4847 - 4847 + perf_pmu_read(event); 4848 4848 for_each_sibling_event(sub, event) 4849 4849 perf_pmu_read(sub); 4850 4850 4851 4851 data->ret = pmu->commit_txn(pmu); 4852 - 4853 - unlock: 4854 - raw_spin_unlock(&ctx->lock); 4855 4852 } 4856 4853 4857 4854 static inline u64 perf_event_count(struct perf_event *event, bool self) ··· 14741 14744 get_ctx(child_ctx); 14742 14745 child_event->ctx = child_ctx; 14743 14746 14744 - pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event); 14747 + pmu_ctx = find_get_pmu_context(parent_event->pmu_ctx->pmu, child_ctx, child_event); 14745 14748 if (IS_ERR(pmu_ctx)) { 14746 14749 free_event(child_event); 14747 14750 return ERR_CAST(pmu_ctx);
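The perf hunk replaces an explicit raw_spin_lock()/unlock() pair with guard(raw_spinlock)(), so every return path unlocks automatically and the "unlock:" label disappears. The underlying mechanism can be sketched in userspace with the GCC/Clang cleanup attribute and a pthread mutex standing in for the kernel spinlock (the macro and names here are illustrative, not the kernel's cleanup.h API):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static int demo_calls;

static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* Lock now; unlock runs automatically when the enclosing scope exits. */
#define scoped_guard_mutex(m) \
	pthread_mutex_t *_guard __attribute__((cleanup(unlock_cleanup), unused)) = \
		(pthread_mutex_lock(m), (m))

static int demo_guarded(int early)
{
	scoped_guard_mutex(&demo_lock);

	demo_calls++;
	if (early)
		return demo_calls;	/* unlock still runs on this path */
	demo_calls++;
	return demo_calls;
}
```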
+21 -9
kernel/sched/idle.c
··· 161 161 return cpuidle_enter(drv, dev, next_state); 162 162 } 163 163 164 + static void idle_call_stop_or_retain_tick(bool stop_tick) 165 + { 166 + if (stop_tick || tick_nohz_tick_stopped()) 167 + tick_nohz_idle_stop_tick(); 168 + else 169 + tick_nohz_idle_retain_tick(); 170 + } 171 + 164 172 /** 165 173 * cpuidle_idle_call - the main idle function 166 174 * ··· 178 170 * set, and it returns with polling set. If it ever stops polling, it 179 171 * must clear the polling bit. 180 172 */ 181 - static void cpuidle_idle_call(void) 173 + static void cpuidle_idle_call(bool stop_tick) 182 174 { 183 175 struct cpuidle_device *dev = cpuidle_get_device(); 184 176 struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev); ··· 194 186 } 195 187 196 188 if (cpuidle_not_available(drv, dev)) { 197 - tick_nohz_idle_stop_tick(); 189 + idle_call_stop_or_retain_tick(stop_tick); 198 190 199 191 default_idle_call(); 200 192 goto exit_idle; ··· 230 222 next_state = cpuidle_find_deepest_state(drv, dev, max_latency_ns); 231 223 call_cpuidle(drv, dev, next_state); 232 224 } else if (drv->state_count > 1) { 233 - bool stop_tick = true; 225 + /* 226 + * stop_tick is expected to be true by default by cpuidle 227 + * governors, which allows them to select idle states with 228 + * target residency above the tick period length. 229 + */ 230 + stop_tick = true; 234 231 235 232 /* 236 233 * Ask the cpuidle framework to choose a convenient idle state. 
237 234 */ 238 235 next_state = cpuidle_select(drv, dev, &stop_tick); 239 236 240 - if (stop_tick || tick_nohz_tick_stopped()) 241 - tick_nohz_idle_stop_tick(); 242 - else 243 - tick_nohz_idle_retain_tick(); 237 + idle_call_stop_or_retain_tick(stop_tick); 244 238 245 239 entered_state = call_cpuidle(drv, dev, next_state); 246 240 /* ··· 250 240 */ 251 241 cpuidle_reflect(dev, entered_state); 252 242 } else { 253 - tick_nohz_idle_retain_tick(); 243 + idle_call_stop_or_retain_tick(stop_tick); 254 244 255 245 /* 256 246 * If there is only a single idle state (or none), there is ··· 278 268 static void do_idle(void) 279 269 { 280 270 int cpu = smp_processor_id(); 271 + bool got_tick = false; 281 272 282 273 /* 283 274 * Check if we need to update blocked load ··· 349 338 tick_nohz_idle_restart_tick(); 350 339 cpu_idle_poll(); 351 340 } else { 352 - cpuidle_idle_call(); 341 + cpuidle_idle_call(got_tick); 353 342 } 343 + got_tick = tick_nohz_idle_got_tick(); 354 344 arch_cpu_idle_exit(); 355 345 } 356 346
+2 -2
kernel/trace/ftrace.c
··· 6606 6606 if (!orig_hash) 6607 6607 goto unlock; 6608 6608 6609 - /* Enable the tmp_ops to have the same functions as the direct ops */ 6609 + /* Enable the tmp_ops to have the same functions as the hash object. */ 6610 6610 ftrace_ops_init(&tmp_ops); 6611 - tmp_ops.func_hash = ops->func_hash; 6611 + tmp_ops.func_hash->filter_hash = hash; 6612 6612 6613 6613 err = register_ftrace_function_nolock(&tmp_ops); 6614 6614 if (err)
+1 -1
kernel/trace/ring_buffer.c
··· 2053 2053 2054 2054 entries += ret; 2055 2055 entry_bytes += local_read(&head_page->page->commit); 2056 - local_set(&cpu_buffer->head_page->entries, ret); 2056 + local_set(&head_page->entries, ret); 2057 2057 2058 2058 if (head_page == cpu_buffer->commit_page) 2059 2059 break;
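The one-line ring_buffer fix is a classic cursor-aliasing bug: the loop advances a local head_page, but the local_set() still wrote through cpu_buffer->head_page, hitting the first page on every iteration. A minimal illustration of the corrected shape on a toy linked list:

```c
#include <assert.h>
#include <stddef.h>

struct page_node {
	int entries;
	struct page_node *next;
};

static void reset_entries(struct page_node *head)
{
	struct page_node *p;

	for (p = head; p; p = p->next)
		p->entries = 0;	/* the fix: write through the cursor, not head */
}

static int demo_reset(void)
{
	struct page_node c = { .entries = 7, .next = NULL };
	struct page_node b = { .entries = 5, .next = &c };
	struct page_node a = { .entries = 3, .next = &b };

	reset_entries(&a);
	return a.entries + b.entries + c.entries;
}
```

The buggy form would have cleared only the first node each pass, leaving the rest of the list untouched.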
+27 -9
kernel/trace/trace.c
··· 555 555 lockdep_assert_held(&event_mutex); 556 556 557 557 if (enabled) { 558 - if (!list_empty(&tr->marker_list)) 558 + if (tr->trace_flags & TRACE_ITER(COPY_MARKER)) 559 559 return false; 560 560 561 561 list_add_rcu(&tr->marker_list, &marker_copies); ··· 563 563 return true; 564 564 } 565 565 566 - if (list_empty(&tr->marker_list)) 566 + if (!(tr->trace_flags & TRACE_ITER(COPY_MARKER))) 567 567 return false; 568 568 569 - list_del_init(&tr->marker_list); 569 + list_del_rcu(&tr->marker_list); 570 570 tr->trace_flags &= ~TRACE_ITER(COPY_MARKER); 571 571 return true; 572 572 } ··· 6784 6784 6785 6785 do { 6786 6786 /* 6787 + * It is possible that something is trying to migrate this 6788 + * task. What happens then is that when preemption is enabled, 6789 + * the migration thread will preempt this task, try to 6790 + * migrate it, fail, then let it run again. That will 6791 + * cause this to loop again and never succeed. 6792 + * On failures, enable and disable preemption with 6793 + * migration enabled, to allow the migration thread to 6794 + * migrate this task. 6795 + */ 6796 + if (trys) { 6797 + preempt_enable_notrace(); 6798 + preempt_disable_notrace(); 6799 + cpu = smp_processor_id(); 6800 + buffer = per_cpu_ptr(tinfo->tbuf, cpu)->buf; 6801 + } 6802 + 6803 + /* 6787 6804 * If for some reason, copy_from_user() always causes a context 6788 6805 * switch, this would then cause an infinite loop. 
6789 6806 * If this task is preempted by another user space task, it ··· 9761 9744 9762 9745 list_del(&tr->list); 9763 9746 9747 + if (printk_trace == tr) 9748 + update_printk_trace(&global_trace); 9749 + 9750 + /* Must be done before disabling all the flags */ 9751 + if (update_marker_trace(tr, 0)) 9752 + synchronize_rcu(); 9753 + 9764 9754 /* Disable all the flags that were enabled coming in */ 9765 9755 for (i = 0; i < TRACE_FLAGS_MAX_SIZE; i++) { 9766 9756 if ((1ULL << i) & ZEROED_TRACE_FLAGS) 9767 9757 set_tracer_flag(tr, 1ULL << i, 0); 9768 9758 } 9769 - 9770 - if (printk_trace == tr) 9771 - update_printk_trace(&global_trace); 9772 - 9773 - if (update_marker_trace(tr, 0)) 9774 - synchronize_rcu(); 9775 9759 9776 9760 tracing_set_nop(tr); 9777 9761 clear_ftrace_function_probes(tr);
+2 -1
lib/bootconfig.c
··· 723 723 if (op == ':') { 724 724 unsigned short nidx = child->next; 725 725 726 - xbc_init_node(child, v, XBC_VALUE); 726 + if (xbc_init_node(child, v, XBC_VALUE) < 0) 727 + return xbc_parse_error("Failed to override value", v); 727 728 child->next = nidx; /* keep subkeys */ 728 729 goto array; 729 730 }
+3
lib/crypto/Makefile
··· 55 55 libaes-$(CONFIG_X86) += x86/aes-aesni.o 56 56 endif # CONFIG_CRYPTO_LIB_AES_ARCH 57 57 58 + # clean-files must be defined unconditionally 59 + clean-files += powerpc/aesp8-ppc.S 60 + 58 61 ################################################################################ 59 62 60 63 obj-$(CONFIG_CRYPTO_LIB_AESCFB) += libaescfb.o
+2 -1
mm/huge_memory.c
··· 2797 2797 _dst_pmd = pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma); 2798 2798 } else { 2799 2799 src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd); 2800 - _dst_pmd = folio_mk_pmd(src_folio, dst_vma->vm_page_prot); 2800 + _dst_pmd = move_soft_dirty_pmd(src_pmdval); 2801 + _dst_pmd = clear_uffd_wp_pmd(_dst_pmd); 2801 2802 } 2802 2803 set_pmd_at(mm, dst_addr, dst_pmd, _dst_pmd); 2803 2804
+17 -4
mm/rmap.c
··· 1955 1955 if (userfaultfd_wp(vma)) 1956 1956 return 1; 1957 1957 1958 - return folio_pte_batch(folio, pvmw->pte, pte, max_nr); 1958 + /* 1959 + * If unmap fails, we need to restore the ptes. To avoid accidentally 1960 + * upgrading write permissions for ptes that were not originally 1961 + * writable, and to avoid losing the soft-dirty bit, use the 1962 + * appropriate FPB flags. 1963 + */ 1964 + return folio_pte_batch_flags(folio, vma, pvmw->pte, &pte, max_nr, 1965 + FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY); 1959 1966 } 1960 1967 1961 1968 /* ··· 2450 2443 __maybe_unused pmd_t pmdval; 2451 2444 2452 2445 if (flags & TTU_SPLIT_HUGE_PMD) { 2446 + /* 2447 + * split_huge_pmd_locked() might leave the 2448 + * folio mapped through PTEs. Retry the walk 2449 + * so we can detect this scenario and properly 2450 + * abort the walk. 2451 + */ 2453 2452 split_huge_pmd_locked(vma, pvmw.address, 2454 2453 pvmw.pmd, true); 2455 - ret = false; 2456 - page_vma_mapped_walk_done(&pvmw); 2457 - break; 2454 + flags &= ~TTU_SPLIT_HUGE_PMD; 2455 + page_vma_mapped_walk_restart(&pvmw); 2456 + continue; 2458 2457 } 2459 2458 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION 2460 2459 pmdval = pmdp_get(pvmw.pmd);
+47 -25
net/atm/lec.c
··· 154 154 /* 0x01 is topology change */
155 155
156 156 priv = netdev_priv(dev);
157 - atm_force_charge(priv->lecd, skb2->truesize);
158 - sk = sk_atm(priv->lecd);
159 - skb_queue_tail(&sk->sk_receive_queue, skb2);
160 - sk->sk_data_ready(sk);
157 + struct atm_vcc *vcc;
158 +
159 + rcu_read_lock();
160 + vcc = rcu_dereference(priv->lecd);
161 + if (vcc) {
162 + atm_force_charge(vcc, skb2->truesize);
163 + sk = sk_atm(vcc);
164 + skb_queue_tail(&sk->sk_receive_queue, skb2);
165 + sk->sk_data_ready(sk);
166 + } else {
167 + dev_kfree_skb(skb2);
168 + }
169 + rcu_read_unlock();
161 170 }
162 171 }
163 172 #endif /* IS_ENABLED(CONFIG_BRIDGE) */
··· 225 216 int is_rdesc;
226 217
227 218 pr_debug("called\n");
228 - if (!priv->lecd) {
219 + if (!rcu_access_pointer(priv->lecd)) {
229 220 pr_info("%s:No lecd attached\n", dev->name);
230 221 dev->stats.tx_errors++;
231 222 netif_stop_queue(dev);
··· 458 449 break;
459 450 skb2->len = sizeof(struct atmlec_msg);
460 451 skb_copy_to_linear_data(skb2, mesg, sizeof(*mesg));
461 - atm_force_charge(priv->lecd, skb2->truesize);
462 - sk = sk_atm(priv->lecd);
463 - skb_queue_tail(&sk->sk_receive_queue, skb2);
464 - sk->sk_data_ready(sk);
452 + struct atm_vcc *vcc;
453 +
454 + rcu_read_lock();
455 + vcc = rcu_dereference(priv->lecd);
456 + if (vcc) {
457 + atm_force_charge(vcc, skb2->truesize);
458 + sk = sk_atm(vcc);
459 + skb_queue_tail(&sk->sk_receive_queue, skb2);
460 + sk->sk_data_ready(sk);
461 + } else {
462 + dev_kfree_skb(skb2);
463 + }
464 + rcu_read_unlock();
465 465 }
466 466 }
467 467 #endif /* IS_ENABLED(CONFIG_BRIDGE) */
··· 486 468
487 469 static void lec_atm_close(struct atm_vcc *vcc)
488 470 {
489 - struct sk_buff *skb;
490 471 struct net_device *dev = (struct net_device *)vcc->proto_data;
491 472 struct lec_priv *priv = netdev_priv(dev);
492 473
493 - priv->lecd = NULL;
474 + rcu_assign_pointer(priv->lecd, NULL);
475 + synchronize_rcu();
494 476 /* Do something needful? */
495 477
496 478 netif_stop_queue(dev);
497 479 lec_arp_destroy(priv);
498 -
499 - if (skb_peek(&sk_atm(vcc)->sk_receive_queue))
500 - pr_info("%s closing with messages pending\n", dev->name);
501 - while ((skb = skb_dequeue(&sk_atm(vcc)->sk_receive_queue))) {
502 - atm_return(vcc, skb->truesize);
503 - dev_kfree_skb(skb);
504 - }
505 480
506 481 pr_info("%s: Shut down!\n", dev->name);
507 482 module_put(THIS_MODULE);
··· 521 510 const unsigned char *mac_addr, const unsigned char *atm_addr,
522 511 struct sk_buff *data)
523 512 {
513 + struct atm_vcc *vcc;
524 514 struct sock *sk;
525 515 struct sk_buff *skb;
526 516 struct atmlec_msg *mesg;
527 517
528 - if (!priv || !priv->lecd)
518 + if (!priv || !rcu_access_pointer(priv->lecd))
529 519 return -1;
520 +
530 521 skb = alloc_skb(sizeof(struct atmlec_msg), GFP_ATOMIC);
531 522 if (!skb)
532 523 return -1;
··· 545 532 if (atm_addr)
546 533 memcpy(&mesg->content.normal.atm_addr, atm_addr, ATM_ESA_LEN);
547 534
548 - atm_force_charge(priv->lecd, skb->truesize);
549 - sk = sk_atm(priv->lecd);
535 + rcu_read_lock();
536 + vcc = rcu_dereference(priv->lecd);
537 + if (!vcc) {
538 + rcu_read_unlock();
539 + kfree_skb(skb);
540 + return -1;
541 + }
542 +
543 + atm_force_charge(vcc, skb->truesize);
544 + sk = sk_atm(vcc);
550 545 skb_queue_tail(&sk->sk_receive_queue, skb);
551 546 sk->sk_data_ready(sk);
552 547
553 548 if (data != NULL) {
554 549 pr_debug("about to send %d bytes of data\n", data->len);
555 - atm_force_charge(priv->lecd, data->truesize);
550 + atm_force_charge(vcc, data->truesize);
556 551 skb_queue_tail(&sk->sk_receive_queue, data);
557 552 sk->sk_data_ready(sk);
558 553 }
559 554
555 + rcu_read_unlock();
560 556 return 0;
561 557 }
562 558
··· 640 618
641 619 atm_return(vcc, skb->truesize);
642 620 if (*(__be16 *) skb->data == htons(priv->lecid) ||
643 - !priv->lecd || !(dev->flags & IFF_UP)) {
621 + !rcu_access_pointer(priv->lecd) || !(dev->flags & IFF_UP)) {
644 622 /*
645 623 * Probably looping back, or if lecd is missing,
646 624 * lecd has gone down
··· 775 753 priv = netdev_priv(dev_lec[i]);
776 754 } else {
777 755 priv = netdev_priv(dev_lec[i]);
778 - if (priv->lecd)
756 + if (rcu_access_pointer(priv->lecd))
779 757 return -EADDRINUSE;
780 758 }
781 759 lec_arp_init(priv);
782 760 priv->itfnum = i; /* LANE2 addition */
783 - priv->lecd = vcc;
761 + rcu_assign_pointer(priv->lecd, vcc);
784 762 vcc->dev = &lecatm_dev;
785 763 vcc_insert_socket(sk_atm(vcc));
786 764
+1 -1
net/atm/lec.h
··· 91 91 */ 92 92 spinlock_t lec_arp_lock; 93 93 struct atm_vcc *mcast_vcc; /* Default Multicast Send VCC */ 94 - struct atm_vcc *lecd; 94 + struct atm_vcc __rcu *lecd; 95 95 struct delayed_work lec_arp_work; /* C10 */ 96 96 unsigned int maximum_unknown_frame_count; 97 97 /*
+3
net/batman-adv/bat_iv_ogm.c
··· 473 473 if (aggregated_bytes > max_bytes) 474 474 return false; 475 475 476 + if (skb_tailroom(forw_packet->skb) < packet_len) 477 + return false; 478 + 476 479 if (packet_num >= BATADV_MAX_AGGREGATION_PACKETS) 477 480 return false; 478 481
+2 -2
net/bluetooth/hci_conn.c
··· 1944 1944 return false; 1945 1945 1946 1946 done: 1947 + conn->iso_qos = *qos; 1948 + 1947 1949 if (hci_cmd_sync_queue(hdev, set_cig_params_sync, 1948 1950 UINT_PTR(qos->ucast.cig), NULL) < 0) 1949 1951 return false; ··· 2015 2013 } 2016 2014 2017 2015 hci_conn_hold(cis); 2018 - 2019 - cis->iso_qos = *qos; 2020 2016 cis->state = BT_BOUND; 2021 2017 2022 2018 return cis;
+1 -1
net/bluetooth/hci_sync.c
··· 6627 6627 * state. 6628 6628 */ 6629 6629 if (hci_dev_test_flag(hdev, HCI_LE_SCAN)) { 6630 - hci_scan_disable_sync(hdev); 6631 6630 hci_dev_set_flag(hdev, HCI_LE_SCAN_INTERRUPTED); 6631 + hci_scan_disable_sync(hdev); 6632 6632 } 6633 6633 6634 6634 /* Update random address, but set require_privacy to false so
+14 -2
net/bluetooth/hidp/core.c
··· 986 986 skb_queue_purge(&session->intr_transmit); 987 987 fput(session->intr_sock->file); 988 988 fput(session->ctrl_sock->file); 989 - l2cap_conn_put(session->conn); 989 + if (session->conn) 990 + l2cap_conn_put(session->conn); 990 991 kfree(session); 991 992 } 992 993 ··· 1165 1164 1166 1165 down_write(&hidp_session_sem); 1167 1166 1167 + /* Drop L2CAP reference immediately to indicate that 1168 + * l2cap_unregister_user() shall not be called as it is already 1169 + * considered removed. 1170 + */ 1171 + if (session->conn) { 1172 + l2cap_conn_put(session->conn); 1173 + session->conn = NULL; 1174 + } 1175 + 1168 1176 hidp_session_terminate(session); 1169 1177 1170 1178 cancel_work_sync(&session->dev_init); ··· 1311 1301 * Instead, this call has the same semantics as if user-space tried to 1312 1302 * delete the session. 1313 1303 */ 1314 - l2cap_unregister_user(session->conn, &session->user); 1304 + if (session->conn) 1305 + l2cap_unregister_user(session->conn, &session->user); 1306 + 1315 1307 hidp_session_put(session); 1316 1308 1317 1309 module_put_and_kthread_exit(0);
+31 -20
net/bluetooth/l2cap_core.c
··· 1678 1678
1679 1679 int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user)
1680 1680 {
1681 - struct hci_dev *hdev = conn->hcon->hdev;
1682 1681 int ret;
1683 1682
1684 1683 /* We need to check whether l2cap_conn is registered. If it is not, we
1685 - * must not register the l2cap_user. l2cap_conn_del() is unregisters
1686 - * l2cap_conn objects, but doesn't provide its own locking. Instead, it
1687 - * relies on the parent hci_conn object to be locked. This itself relies
1688 - * on the hci_dev object to be locked. So we must lock the hci device
1689 - * here, too. */
1684 + * must not register the l2cap_user. l2cap_conn_del() unregisters
1685 + * l2cap_conn objects under conn->lock, and we use the same lock here
1686 + * to protect access to conn->users and conn->hchan.
1687 + */
1690 1688
1691 - hci_dev_lock(hdev);
1689 + mutex_lock(&conn->lock);
1692 1690
1693 1691 if (!list_empty(&user->list)) {
1694 1692 ret = -EINVAL;
··· 1707 1709 ret = 0;
1708 1710
1709 1711 out_unlock:
1710 - hci_dev_unlock(hdev);
1712 + mutex_unlock(&conn->lock);
1711 1713 return ret;
1712 1714 }
1713 1715 EXPORT_SYMBOL(l2cap_register_user);
1714 1716
1715 1717 void l2cap_unregister_user(struct l2cap_conn *conn, struct l2cap_user *user)
1716 1718 {
1717 - struct hci_dev *hdev = conn->hcon->hdev;
1718 -
1719 - hci_dev_lock(hdev);
1719 + mutex_lock(&conn->lock);
1720 1720
1721 1721 if (list_empty(&user->list))
1722 1722 goto out_unlock;
··· 1723 1727 user->remove(conn, user);
1724 1728
1725 1729 out_unlock:
1726 - hci_dev_unlock(hdev);
1730 + mutex_unlock(&conn->lock);
1727 1731 }
1728 1732 EXPORT_SYMBOL(l2cap_unregister_user);
1729 1733
··· 4612 4616
4613 4617 switch (type) {
4614 4618 case L2CAP_IT_FEAT_MASK:
4615 - conn->feat_mask = get_unaligned_le32(rsp->data);
4619 + if (cmd_len >= sizeof(*rsp) + sizeof(u32))
4620 + conn->feat_mask = get_unaligned_le32(rsp->data);
4616 4621
4617 4622 if (conn->feat_mask & L2CAP_FEAT_FIXED_CHAN) {
4618 4623 struct l2cap_info_req req;
··· 4632 4635 break;
4633 4636
4634 4637 case L2CAP_IT_FIXED_CHAN:
4635 - conn->remote_fixed_chan = rsp->data[0];
4638 + if (cmd_len >= sizeof(*rsp) + sizeof(rsp->data[0]))
4639 + conn->remote_fixed_chan = rsp->data[0];
4636 4640 conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE;
4637 4641 conn->info_ident = 0;
4638 4642
··· 5057 5059 u16 mtu, mps;
5058 5060 __le16 psm;
5059 5061 u8 result, rsp_len = 0;
5060 - int i, num_scid;
5062 + int i, num_scid = 0;
5061 5063 bool defer = false;
5062 5064
5063 5065 if (!enable_ecred)
··· 5066 5068 memset(pdu, 0, sizeof(*pdu));
5067 5069
5068 5070 if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {
5071 + result = L2CAP_CR_LE_INVALID_PARAMS;
5072 + goto response;
5073 + }
5074 +
5075 + /* Check if there are no pending channels with the same ident */
5076 + __l2cap_chan_list_id(conn, cmd->ident, l2cap_ecred_list_defer,
5077 + &num_scid);
5078 + if (num_scid) {
5069 5079 result = L2CAP_CR_LE_INVALID_PARAMS;
5070 5080 goto response;
5071 5081 }
··· 5430 5424 u8 *data)
5431 5425 {
5432 5426 struct l2cap_chan *chan, *tmp;
5433 - struct l2cap_ecred_conn_rsp *rsp = (void *) data;
5427 + struct l2cap_ecred_reconf_rsp *rsp = (void *)data;
5434 5428 u16 result;
5435 5429
5436 5430 if (cmd_len < sizeof(*rsp))
··· 5438 5432
5439 5433 result = __le16_to_cpu(rsp->result);
5440 5434
5441 - BT_DBG("result 0x%4.4x", rsp->result);
5435 + BT_DBG("result 0x%4.4x", result);
5442 5436
5443 5437 if (!result)
5444 5438 return 0;
··· 6668 6662 return -ENOBUFS;
6669 6663 }
6670 6664
6671 - if (chan->imtu < skb->len) {
6672 - BT_ERR("Too big LE L2CAP PDU");
6665 + if (skb->len > chan->imtu) {
6666 + BT_ERR("Too big LE L2CAP PDU: len %u > %u", skb->len,
6667 + chan->imtu);
6668 + l2cap_send_disconn_req(chan, ECONNRESET);
6673 6669 return -ENOBUFS;
6674 6670 }
··· 6697 6689 sdu_len, skb->len, chan->imtu);
6698 6690
6699 6691 if (sdu_len > chan->imtu) {
6700 - BT_ERR("Too big LE L2CAP SDU length received");
6692 + BT_ERR("Too big LE L2CAP SDU length: len %u > %u",
6693 + skb->len, sdu_len);
6694 + l2cap_send_disconn_req(chan, ECONNRESET);
6701 6695 err = -EMSGSIZE;
6702 6696 goto failed;
6703 6697 }
··· 6735 6725
6736 6726 if (chan->sdu->len + skb->len > chan->sdu_len) {
6737 6727 BT_ERR("Too much LE L2CAP data received");
6728 + l2cap_send_disconn_req(chan, ECONNRESET);
6738 6729 err = -EINVAL;
6739 6730 goto failed;
6740 6731 }
+2 -5
net/bluetooth/mgmt.c
··· 2195 2195 sk = cmd->sk; 2196 2196 2197 2197 if (status) { 2198 - mgmt_cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER, 2199 - status); 2200 - mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true, 2201 - cmd_status_rsp, &status); 2198 + mgmt_cmd_status(cmd->sk, hdev->id, cmd->opcode, status); 2202 2199 goto done; 2203 2200 } 2204 2201 ··· 5374 5377 5375 5378 mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, 5376 5379 mgmt_status(status), &rp, sizeof(rp)); 5377 - mgmt_pending_remove(cmd); 5380 + mgmt_pending_free(cmd); 5378 5381 5379 5382 hci_dev_unlock(hdev); 5380 5383 bt_dev_dbg(hdev, "add monitor %d complete, status %d",
+1 -1
net/bluetooth/smp.c
··· 2743 2743 if (!test_bit(SMP_FLAG_DEBUG_KEY, &smp->flags) && 2744 2744 !crypto_memneq(key, smp->local_pk, 64)) { 2745 2745 bt_dev_err(hdev, "Remote and local public keys are identical"); 2746 - return SMP_UNSPECIFIED; 2746 + return SMP_DHKEY_CHECK_FAILED; 2747 2747 } 2748 2748 2749 2749 memcpy(smp->remote_pk, key, 64);
+2 -2
net/bridge/br_cfm.c
··· 576 576 577 577 /* Empty and free peer MEP list */ 578 578 hlist_for_each_entry_safe(peer_mep, n_store, &mep->peer_mep_list, head) { 579 - cancel_delayed_work_sync(&peer_mep->ccm_rx_dwork); 579 + disable_delayed_work_sync(&peer_mep->ccm_rx_dwork); 580 580 hlist_del_rcu(&peer_mep->head); 581 581 kfree_rcu(peer_mep, rcu); 582 582 } ··· 732 732 return -ENOENT; 733 733 } 734 734 735 - cc_peer_disable(peer_mep); 735 + disable_delayed_work_sync(&peer_mep->ccm_rx_dwork); 736 736 737 737 hlist_del_rcu(&peer_mep->head); 738 738 kfree_rcu(peer_mep, rcu);
+3 -6
net/ethernet/eth.c
··· 193 193 } 194 194 EXPORT_SYMBOL(eth_type_trans); 195 195 196 - /** 197 - * eth_header_parse - extract hardware address from packet 198 - * @skb: packet to extract header from 199 - * @haddr: destination buffer 200 - */ 201 - int eth_header_parse(const struct sk_buff *skb, unsigned char *haddr) 196 + int eth_header_parse(const struct sk_buff *skb, const struct net_device *dev, 197 + unsigned char *haddr) 202 198 { 203 199 const struct ethhdr *eth = eth_hdr(skb); 200 + 204 201 memcpy(haddr, eth->h_source, ETH_ALEN); 205 202 return ETH_ALEN; 206 203 }
+3 -1
net/ipv4/icmp.c
··· 1079 1079 1080 1080 static bool icmp_tag_validation(int proto) 1081 1081 { 1082 + const struct net_protocol *ipprot; 1082 1083 bool ok; 1083 1084 1084 1085 rcu_read_lock(); 1085 - ok = rcu_dereference(inet_protos[proto])->icmp_strict_tag_validation; 1086 + ipprot = rcu_dereference(inet_protos[proto]); 1087 + ok = ipprot ? ipprot->icmp_strict_tag_validation : false; 1086 1088 rcu_read_unlock(); 1087 1089 return ok; 1088 1090 }
+2 -1
net/ipv4/ip_gre.c
··· 919 919 return -(t->hlen + sizeof(*iph)); 920 920 } 921 921 922 - static int ipgre_header_parse(const struct sk_buff *skb, unsigned char *haddr) 922 + static int ipgre_header_parse(const struct sk_buff *skb, const struct net_device *dev, 923 + unsigned char *haddr) 923 924 { 924 925 const struct iphdr *iph = (const struct iphdr *) skb_mac_header(skb); 925 926 memcpy(haddr, &iph->saddr, 4);
+4
net/ipv6/exthdrs.c
··· 379 379 hdr = (struct ipv6_sr_hdr *)skb_transport_header(skb); 380 380 381 381 idev = __in6_dev_get(skb->dev); 382 + if (!idev) { 383 + kfree_skb(skb); 384 + return -1; 385 + } 382 386 383 387 accept_seg6 = min(READ_ONCE(net->ipv6.devconf_all->seg6_enabled), 384 388 READ_ONCE(idev->cnf.seg6_enabled));
+2
net/ipv6/seg6_hmac.c
··· 184 184 int require_hmac; 185 185 186 186 idev = __in6_dev_get(skb->dev); 187 + if (!idev) 188 + return false; 187 189 188 190 srh = (struct ipv6_sr_hdr *)skb_transport_header(skb); 189 191
+6 -6
net/mac80211/cfg.c
··· 1904 1904 1905 1905 __sta_info_flush(sdata, true, link_id, NULL); 1906 1906 1907 - ieee80211_remove_link_keys(link, &keys); 1908 - if (!list_empty(&keys)) { 1909 - synchronize_net(); 1910 - ieee80211_free_key_list(local, &keys); 1911 - } 1912 - 1913 1907 ieee80211_stop_mbssid(sdata); 1914 1908 RCU_INIT_POINTER(link_conf->tx_bss_conf, NULL); 1915 1909 ··· 1914 1920 clear_bit(SDATA_STATE_OFFCHANNEL_BEACON_STOPPED, &sdata->state); 1915 1921 ieee80211_link_info_change_notify(sdata, link, 1916 1922 BSS_CHANGED_BEACON_ENABLED); 1923 + 1924 + ieee80211_remove_link_keys(link, &keys); 1925 + if (!list_empty(&keys)) { 1926 + synchronize_net(); 1927 + ieee80211_free_key_list(local, &keys); 1928 + } 1917 1929 1918 1930 if (sdata->wdev.links[link_id].cac_started) { 1919 1931 chandef = link_conf->chanreq.oper;
+4 -2
net/mac80211/chan.c
··· 561 561 rcu_read_lock(); 562 562 list_for_each_entry_rcu(sta, &local->sta_list, 563 563 list) { 564 - struct ieee80211_sub_if_data *sdata = sta->sdata; 564 + struct ieee80211_sub_if_data *sdata; 565 565 enum ieee80211_sta_rx_bandwidth new_sta_bw; 566 566 unsigned int link_id; 567 567 568 568 if (!ieee80211_sdata_running(sta->sdata)) 569 569 continue; 570 570 571 - for (link_id = 0; link_id < ARRAY_SIZE(sta->sdata->link); link_id++) { 571 + sdata = get_bss_sdata(sta->sdata); 572 + 573 + for (link_id = 0; link_id < ARRAY_SIZE(sdata->link); link_id++) { 572 574 struct ieee80211_link_data *link = 573 575 rcu_dereference(sdata->link[link_id]); 574 576 struct ieee80211_bss_conf *link_conf;
+5 -9
net/mac80211/debugfs.c
··· 320 320 static ssize_t aql_enable_write(struct file *file, const char __user *user_buf, 321 321 size_t count, loff_t *ppos) 322 322 { 323 - bool aql_disabled = static_key_false(&aql_disable.key); 324 323 char buf[3]; 325 324 size_t len; 326 325 ··· 334 335 if (len > 0 && buf[len - 1] == '\n') 335 336 buf[len - 1] = 0; 336 337 337 - if (buf[0] == '0' && buf[1] == '\0') { 338 - if (!aql_disabled) 339 - static_branch_inc(&aql_disable); 340 - } else if (buf[0] == '1' && buf[1] == '\0') { 341 - if (aql_disabled) 342 - static_branch_dec(&aql_disable); 343 - } else { 338 + if (buf[0] == '0' && buf[1] == '\0') 339 + static_branch_enable(&aql_disable); 340 + else if (buf[0] == '1' && buf[1] == '\0') 341 + static_branch_disable(&aql_disable); 342 + else 344 343 return -EINVAL; 345 - } 346 344 347 345 return count; 348 346 }
+3
net/mac80211/mesh.c
··· 79 79 * - MDA enabled 80 80 * - Power management control on fc 81 81 */ 82 + if (!ie->mesh_config) 83 + return false; 84 + 82 85 if (!(ifmsh->mesh_id_len == ie->mesh_id_len && 83 86 memcmp(ifmsh->mesh_id, ie->mesh_id, ie->mesh_id_len) == 0 && 84 87 (ifmsh->mesh_pp_id == ie->mesh_config->meshconf_psel) &&
+5 -2
net/mac80211/sta_info.c
··· 2782 2782 } 2783 2783 2784 2784 link_sinfo->inactive_time = 2785 - jiffies_to_msecs(jiffies - ieee80211_sta_last_active(sta, link_id)); 2785 + jiffies_delta_to_msecs(jiffies - 2786 + ieee80211_sta_last_active(sta, 2787 + link_id)); 2786 2788 2787 2789 if (!(link_sinfo->filled & (BIT_ULL(NL80211_STA_INFO_TX_BYTES64) | 2788 2790 BIT_ULL(NL80211_STA_INFO_TX_BYTES)))) { ··· 3017 3015 sinfo->connected_time = ktime_get_seconds() - sta->last_connected; 3018 3016 sinfo->assoc_at = sta->assoc_at; 3019 3017 sinfo->inactive_time = 3020 - jiffies_to_msecs(jiffies - ieee80211_sta_last_active(sta, -1)); 3018 + jiffies_delta_to_msecs(jiffies - 3019 + ieee80211_sta_last_active(sta, -1)); 3021 3020 3022 3021 if (!(sinfo->filled & (BIT_ULL(NL80211_STA_INFO_TX_BYTES64) | 3023 3022 BIT_ULL(NL80211_STA_INFO_TX_BYTES)))) {
+1 -1
net/mac80211/tdls.c
··· 1449 1449 } 1450 1450 1451 1451 sta = sta_info_get(sdata, peer); 1452 - if (!sta) 1452 + if (!sta || !sta->sta.tdls) 1453 1453 return -ENOLINK; 1454 1454 1455 1455 iee80211_tdls_recalc_chanctx(sdata, sta);
+3 -1
net/mac80211/tx.c
··· 1899 1899 struct ieee80211_tx_data tx; 1900 1900 struct sk_buff *skb2; 1901 1901 1902 - if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP) 1902 + if (ieee80211_tx_prepare(sdata, &tx, NULL, skb) == TX_DROP) { 1903 + kfree_skb(skb); 1903 1904 return false; 1905 + } 1904 1906 1905 1907 info->band = band; 1906 1908 info->control.vif = vif;
+3 -1
net/mac802154/iface.c
··· 469 469 } 470 470 471 471 static int 472 - mac802154_header_parse(const struct sk_buff *skb, unsigned char *haddr) 472 + mac802154_header_parse(const struct sk_buff *skb, 473 + const struct net_device *dev, 474 + unsigned char *haddr) 473 475 { 474 476 struct ieee802154_hdr hdr; 475 477
+1
net/mpls/af_mpls.c
··· 2854 2854 rtnl_af_unregister(&mpls_af_ops); 2855 2855 out_unregister_dev_type: 2856 2856 dev_remove_pack(&mpls_packet_type); 2857 + unregister_netdevice_notifier(&mpls_dev_notifier); 2857 2858 out_unregister_pernet: 2858 2859 unregister_pernet_subsys(&mpls_net_ops); 2859 2860 goto out;
+1 -1
net/mptcp/pm_kernel.c
··· 838 838 static int mptcp_pm_nl_create_listen_socket(struct sock *sk, 839 839 struct mptcp_pm_addr_entry *entry) 840 840 { 841 - bool is_ipv6 = sk->sk_family == AF_INET6; 841 + bool is_ipv6 = entry->addr.family == AF_INET6; 842 842 int addrlen = sizeof(struct sockaddr_in); 843 843 struct sockaddr_storage addr; 844 844 struct sock *newsk, *ssk;
+1 -1
net/netfilter/nf_bpf_link.c
··· 170 170 171 171 static const struct bpf_link_ops bpf_nf_link_lops = { 172 172 .release = bpf_nf_link_release, 173 - .dealloc = bpf_nf_link_dealloc, 173 + .dealloc_deferred = bpf_nf_link_dealloc, 174 174 .detach = bpf_nf_link_detach, 175 175 .show_fdinfo = bpf_nf_link_show_info, 176 176 .fill_link_info = bpf_nf_link_fill_link_info,
+4
net/netfilter/nf_conntrack_h323_asn1.c
··· 331 331 if (nf_h323_error_boundary(bs, 0, 2)) 332 332 return H323_ERROR_BOUND; 333 333 len = get_bits(bs, 2) + 1; 334 + if (nf_h323_error_boundary(bs, len, 0)) 335 + return H323_ERROR_BOUND; 334 336 BYTE_ALIGN(bs); 335 337 if (base && (f->attr & DECODE)) { /* timeToLive */ 336 338 unsigned int v = get_uint(bs, len) + f->lb; ··· 924 922 break; 925 923 p++; 926 924 len--; 925 + if (len <= 0) 926 + break; 927 927 return DecodeH323_UserInformation(buf, p, len, 928 928 &q931->UUIE); 929 929 }
+26 -2
net/netfilter/nf_conntrack_netlink.c
··· 3212 3212 { 3213 3213 struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh); 3214 3214 struct nf_conn *ct = cb->data; 3215 - struct nf_conn_help *help = nfct_help(ct); 3215 + struct nf_conn_help *help; 3216 3216 u_int8_t l3proto = nfmsg->nfgen_family; 3217 3217 unsigned long last_id = cb->args[1]; 3218 3218 struct nf_conntrack_expect *exp; 3219 3219 3220 3220 if (cb->args[0]) 3221 + return 0; 3222 + 3223 + help = nfct_help(ct); 3224 + if (!help) 3221 3225 return 0; 3222 3226 3223 3227 rcu_read_lock(); ··· 3253 3249 return skb->len; 3254 3250 } 3255 3251 3252 + static int ctnetlink_dump_exp_ct_start(struct netlink_callback *cb) 3253 + { 3254 + struct nf_conn *ct = cb->data; 3255 + 3256 + if (!refcount_inc_not_zero(&ct->ct_general.use)) 3257 + return -ENOENT; 3258 + return 0; 3259 + } 3260 + 3261 + static int ctnetlink_dump_exp_ct_done(struct netlink_callback *cb) 3262 + { 3263 + struct nf_conn *ct = cb->data; 3264 + 3265 + if (ct) 3266 + nf_ct_put(ct); 3267 + return 0; 3268 + } 3269 + 3256 3270 static int ctnetlink_dump_exp_ct(struct net *net, struct sock *ctnl, 3257 3271 struct sk_buff *skb, 3258 3272 const struct nlmsghdr *nlh, ··· 3286 3264 struct nf_conntrack_zone zone; 3287 3265 struct netlink_dump_control c = { 3288 3266 .dump = ctnetlink_exp_ct_dump_table, 3267 + .start = ctnetlink_dump_exp_ct_start, 3268 + .done = ctnetlink_dump_exp_ct_done, 3289 3269 }; 3290 3270 3291 3271 err = ctnetlink_parse_tuple(cda, &tuple, CTA_EXPECT_MASTER, ··· 3489 3465 3490 3466 #if IS_ENABLED(CONFIG_NF_NAT) 3491 3467 static const struct nla_policy exp_nat_nla_policy[CTA_EXPECT_NAT_MAX+1] = { 3492 - [CTA_EXPECT_NAT_DIR] = { .type = NLA_U32 }, 3468 + [CTA_EXPECT_NAT_DIR] = NLA_POLICY_MAX(NLA_BE32, IP_CT_DIR_REPLY), 3493 3469 [CTA_EXPECT_NAT_TUPLE] = { .type = NLA_NESTED }, 3494 3470 }; 3495 3471 #endif
+2 -1
net/netfilter/nf_conntrack_proto_sctp.c
··· 582 582 } 583 583 584 584 static const struct nla_policy sctp_nla_policy[CTA_PROTOINFO_SCTP_MAX+1] = { 585 - [CTA_PROTOINFO_SCTP_STATE] = { .type = NLA_U8 }, 585 + [CTA_PROTOINFO_SCTP_STATE] = NLA_POLICY_MAX(NLA_U8, 586 + SCTP_CONNTRACK_HEARTBEAT_SENT), 586 587 [CTA_PROTOINFO_SCTP_VTAG_ORIGINAL] = { .type = NLA_U32 }, 587 588 [CTA_PROTOINFO_SCTP_VTAG_REPLY] = { .type = NLA_U32 }, 588 589 };
+5 -1
net/netfilter/nf_conntrack_sip.c
··· 1534 1534 { 1535 1535 struct tcphdr *th, _tcph; 1536 1536 unsigned int dataoff, datalen; 1537 - unsigned int matchoff, matchlen, clen; 1537 + unsigned int matchoff, matchlen; 1538 1538 unsigned int msglen, origlen; 1539 1539 const char *dptr, *end; 1540 1540 s16 diff, tdiff = 0; 1541 1541 int ret = NF_ACCEPT; 1542 + unsigned long clen; 1542 1543 bool term; 1543 1544 1544 1545 if (ctinfo != IP_CT_ESTABLISHED && ··· 1572 1571 1573 1572 clen = simple_strtoul(dptr + matchoff, (char **)&end, 10); 1574 1573 if (dptr + matchoff == end) 1574 + break; 1575 + 1576 + if (clen > datalen) 1575 1577 break; 1576 1578 1577 1579 term = false;
+1
net/netfilter/nf_flow_table_ip.c
··· 738 738 switch (tuple->encap[i].proto) { 739 739 case htons(ETH_P_8021Q): 740 740 case htons(ETH_P_8021AD): 741 + skb_reset_mac_header(skb); 741 742 if (skb_vlan_push(skb, tuple->encap[i].proto, 742 743 tuple->encap[i].id) < 0) 743 744 return -1;
+7 -19
net/netfilter/nf_tables_api.c
··· 6744 6744 }
6745 6745 }
6746 6746
6747 - static void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
6748 - struct nft_set_elem_expr *elem_expr)
6747 + void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
6748 + struct nft_set_elem_expr *elem_expr)
6749 6749 {
6750 6750 struct nft_expr *expr;
6751 6751 u32 size;
··· 7156 7156 }
7157 7157
7158 7158 static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
7159 - const struct nlattr *attr, u32 nlmsg_flags,
7160 - bool last)
7159 + const struct nlattr *attr, u32 nlmsg_flags)
7161 7160 {
7162 7161 struct nft_expr *expr_array[NFT_SET_EXPR_MAX] = {};
7163 7162 struct nlattr *nla[NFTA_SET_ELEM_MAX + 1];
··· 7443 7444 if (flags)
7444 7445 *nft_set_ext_flags(ext) = flags;
7445 7446
7446 - if (last)
7447 - elem.flags = NFT_SET_ELEM_INTERNAL_LAST;
7448 - else
7449 - elem.flags = 0;
7450 -
7451 7447 if (obj)
7452 7448 *nft_set_ext_obj(ext) = obj;
7453 7449
··· 7607 7613 nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
7608 7614
7609 7615 nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
7610 - err = nft_add_set_elem(&ctx, set, attr, info->nlh->nlmsg_flags,
7611 - nla_is_last(attr, rem));
7616 + err = nft_add_set_elem(&ctx, set, attr, info->nlh->nlmsg_flags);
7612 7617 if (err < 0) {
7613 7618 NL_SET_BAD_ATTR(extack, attr);
7614 7619 return err;
··· 7731 7738 }
7732 7739
7733 7740 static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
7734 - const struct nlattr *attr, bool last)
7741 + const struct nlattr *attr)
7735 7742 {
7736 7743 struct nlattr *nla[NFTA_SET_ELEM_MAX + 1];
7737 7744 struct nft_set_ext_tmpl tmpl;
··· 7798 7805 ext = nft_set_elem_ext(set, elem.priv);
7799 7806 if (flags)
7800 7807 *nft_set_ext_flags(ext) = flags;
7801 -
7802 - if (last)
7803 - elem.flags = NFT_SET_ELEM_INTERNAL_LAST;
7804 - else
7805 - elem.flags = 0;
7806 7808
7807 7809 trans = nft_trans_elem_alloc(ctx, NFT_MSG_DELSETELEM, set);
7808 7810 if (trans == NULL)
··· 7949 7961 return nft_set_flush(&ctx, set, genmask);
7950 7962
7951 7963 nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
7952 - err = nft_del_setelem(&ctx, set, attr,
7953 - nla_is_last(attr, rem));
7964 + err = nft_del_setelem(&ctx, set, attr);
7954 7965 if (err == -ENOENT &&
7955 7966 NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYSETELEM)
7956 7967 continue;
··· 9203 9216 return 0;
9204 9217
9205 9218 err_flowtable_hooks:
9219 + synchronize_rcu();
9206 9220 nft_trans_destroy(trans);
9207 9221 err_flowtable_trans:
9208 9222 nft_hooks_destroy(&flowtable->hook_list);
+4
net/netfilter/nft_ct.c
··· 23 23 #include <net/netfilter/nf_conntrack_l4proto.h> 24 24 #include <net/netfilter/nf_conntrack_expect.h> 25 25 #include <net/netfilter/nf_conntrack_seqadj.h> 26 + #include "nf_internals.h" 26 27 27 28 struct nft_ct_helper_obj { 28 29 struct nf_conntrack_helper *helper4; ··· 544 543 #endif 545 544 #ifdef CONFIG_NF_CONNTRACK_ZONES 546 545 case NFT_CT_ZONE: 546 + nf_queue_nf_hook_drop(ctx->net); 547 547 mutex_lock(&nft_ct_pcpu_mutex); 548 548 if (--nft_ct_pcpu_template_refcnt == 0) 549 549 nft_ct_tmpl_put_pcpu(); ··· 1017 1015 struct nft_ct_timeout_obj *priv = nft_obj_data(obj); 1018 1016 struct nf_ct_timeout *timeout = priv->timeout; 1019 1017 1018 + nf_queue_nf_hook_drop(ctx->net); 1020 1019 nf_ct_untimeout(ctx->net, timeout); 1021 1020 nf_ct_netns_put(ctx->net, ctx->family); 1022 1021 kfree(priv->timeout); ··· 1150 1147 { 1151 1148 struct nft_ct_helper_obj *priv = nft_obj_data(obj); 1152 1149 1150 + nf_queue_nf_hook_drop(ctx->net); 1153 1151 if (priv->helper4) 1154 1152 nf_conntrack_helper_put(priv->helper4); 1155 1153 if (priv->helper6)
+9 -1
net/netfilter/nft_dynset.c
··· 30 30 const struct nft_set_ext *ext) 31 31 { 32 32 struct nft_set_elem_expr *elem_expr = nft_set_ext_expr(ext); 33 + struct nft_ctx ctx = { 34 + .net = read_pnet(&priv->set->net), 35 + .family = priv->set->table->family, 36 + }; 33 37 struct nft_expr *expr; 34 38 int i; 35 39 36 40 for (i = 0; i < priv->num_exprs; i++) { 37 41 expr = nft_setelem_expr_at(elem_expr, elem_expr->size); 38 42 if (nft_expr_clone(expr, priv->expr_array[i], GFP_ATOMIC) < 0) 39 - return -1; 43 + goto err_out; 40 44 41 45 elem_expr->size += priv->expr_array[i]->ops->size; 42 46 } 43 47 44 48 return 0; 49 + err_out: 50 + nft_set_elem_expr_destroy(&ctx, elem_expr); 51 + 52 + return -1; 45 53 } 46 54 47 55 struct nft_elem_priv *nft_dynset_new(struct nft_set *set,
+10 -61
net/netfilter/nft_set_rbtree.c
··· 304 304 priv->start_rbe_cookie = (unsigned long)rbe;
305 305 }
306 306
307 - static void nft_rbtree_set_start_cookie_open(struct nft_rbtree *priv,
308 - const struct nft_rbtree_elem *rbe,
309 - unsigned long open_interval)
310 - {
311 - priv->start_rbe_cookie = (unsigned long)rbe | open_interval;
312 - }
313 -
314 - #define NFT_RBTREE_OPEN_INTERVAL 1UL
315 -
316 307 static bool nft_rbtree_cmp_start_cookie(struct nft_rbtree *priv,
317 308 const struct nft_rbtree_elem *rbe)
318 309 {
319 - return (priv->start_rbe_cookie & ~NFT_RBTREE_OPEN_INTERVAL) == (unsigned long)rbe;
310 + return priv->start_rbe_cookie == (unsigned long)rbe;
320 311 }
321 312
322 313 static bool nft_rbtree_insert_same_interval(const struct net *net,
··· 337 346
338 347 static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
339 348 struct nft_rbtree_elem *new,
340 - struct nft_elem_priv **elem_priv, u64 tstamp, bool last)
349 + struct nft_elem_priv **elem_priv, u64 tstamp)
341 350 {
342 351 struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL, *rbe_prev;
343 352 struct rb_node *node, *next, *parent, **p, *first = NULL;
344 353 struct nft_rbtree *priv = nft_set_priv(set);
345 354 u8 cur_genmask = nft_genmask_cur(net);
346 355 u8 genmask = nft_genmask_next(net);
347 - unsigned long open_interval = 0;
348 356 int d;
349 357
350 358 /* Descend the tree to search for an existing element greater than the
··· 449 459 }
450 460 }
451 461
452 - if (nft_rbtree_interval_null(set, new)) {
462 + if (nft_rbtree_interval_null(set, new))
453 463 priv->start_rbe_cookie = 0;
454 - } else if (nft_rbtree_interval_start(new) && priv->start_rbe_cookie) {
455 - if (nft_set_is_anonymous(set)) {
456 - priv->start_rbe_cookie = 0;
457 - } else if (priv->start_rbe_cookie & NFT_RBTREE_OPEN_INTERVAL) {
458 - /* Previous element is an open interval that partially
459 - * overlaps with an existing non-open interval.
460 - */
461 - return -ENOTEMPTY;
462 - }
463 - }
464 + else if (nft_rbtree_interval_start(new) && priv->start_rbe_cookie)
465 + priv->start_rbe_cookie = 0;
464 466
465 467 /* - new start element matching existing start element: full overlap
466 468 * reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
··· 460 478 if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
461 479 nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
462 480 *elem_priv = &rbe_ge->priv;
463 -
464 - /* - Corner case: new start element of open interval (which
465 - * comes as last element in the batch) overlaps the start of
466 - * an existing interval with an end element: partial overlap.
467 - */
468 - node = rb_first(&priv->root);
469 - rbe = __nft_rbtree_next_active(node, genmask);
470 - if (rbe && nft_rbtree_interval_end(rbe)) {
471 - rbe = nft_rbtree_next_active(rbe, genmask);
472 - if (rbe &&
473 - nft_rbtree_interval_start(rbe) &&
474 - !nft_rbtree_cmp(set, new, rbe)) {
475 - if (last)
476 - return -ENOTEMPTY;
477 -
478 - /* Maybe open interval? */
479 - open_interval = NFT_RBTREE_OPEN_INTERVAL;
480 - }
481 - }
482 - nft_rbtree_set_start_cookie_open(priv, rbe_ge, open_interval);
483 -
481 + nft_rbtree_set_start_cookie(priv, rbe_ge);
484 482 return -EEXIST;
485 483 }
··· 513 551 */
514 552 if (rbe_ge &&
515 553 nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
516 - return -ENOTEMPTY;
517 -
518 - /* - start element overlaps an open interval but end element is new:
519 - * partial overlap, reported as -ENOTEMPTY.
520 - */ 521 - if (!rbe_ge && priv->start_rbe_cookie && nft_rbtree_interval_end(new)) 522 554 return -ENOTEMPTY; 523 555 524 556 /* Accepted element: pick insertion point depending on key value */ ··· 624 668 struct nft_elem_priv **elem_priv) 625 669 { 626 670 struct nft_rbtree_elem *rbe = nft_elem_priv_cast(elem->priv); 627 - bool last = !!(elem->flags & NFT_SET_ELEM_INTERNAL_LAST); 628 671 struct nft_rbtree *priv = nft_set_priv(set); 629 672 u64 tstamp = nft_net_tstamp(net); 630 673 int err; ··· 640 685 cond_resched(); 641 686 642 687 write_lock_bh(&priv->lock); 643 - err = __nft_rbtree_insert(net, set, rbe, elem_priv, tstamp, last); 688 + err = __nft_rbtree_insert(net, set, rbe, elem_priv, tstamp); 644 689 write_unlock_bh(&priv->lock); 645 - 646 - if (nft_rbtree_interval_end(rbe)) 647 - priv->start_rbe_cookie = 0; 648 - 649 690 } while (err == -EAGAIN); 650 691 651 692 return err; ··· 729 778 const struct nft_set_elem *elem) 730 779 { 731 780 struct nft_rbtree_elem *rbe, *this = nft_elem_priv_cast(elem->priv); 732 - bool last = !!(elem->flags & NFT_SET_ELEM_INTERNAL_LAST); 733 781 struct nft_rbtree *priv = nft_set_priv(set); 734 782 const struct rb_node *parent = priv->root.rb_node; 735 783 u8 genmask = nft_genmask_next(net); ··· 769 819 continue; 770 820 } 771 821 772 - if (nft_rbtree_interval_start(rbe)) { 773 - if (!last) 774 - nft_rbtree_set_start_cookie(priv, rbe); 775 - } else if (!nft_rbtree_deactivate_same_interval(net, priv, rbe)) 822 + if (nft_rbtree_interval_start(rbe)) 823 + nft_rbtree_set_start_cookie(priv, rbe); 824 + else if (!nft_rbtree_deactivate_same_interval(net, priv, rbe)) 776 825 return NULL; 777 826 778 827 nft_rbtree_flush(net, set, &rbe->priv);
+4
net/netfilter/xt_CT.c
··· 16 16 #include <net/netfilter/nf_conntrack_ecache.h> 17 17 #include <net/netfilter/nf_conntrack_timeout.h> 18 18 #include <net/netfilter/nf_conntrack_zones.h> 19 + #include "nf_internals.h" 19 20 20 21 static inline int xt_ct_target(struct sk_buff *skb, struct nf_conn *ct) 21 22 { ··· 284 283 struct nf_conn_help *help; 285 284 286 285 if (ct) { 286 + if (info->helper[0] || info->timeout[0]) 287 + nf_queue_nf_hook_drop(par->net); 288 + 287 289 help = nfct_help(ct); 288 290 xt_ct_put_helper(help); 289 291
+2 -2
net/netfilter/xt_time.c
··· 223 223 224 224 localtime_2(&current_time, stamp); 225 225 226 - if (!(info->weekdays_match & (1 << current_time.weekday))) 226 + if (!(info->weekdays_match & (1U << current_time.weekday))) 227 227 return false; 228 228 229 229 /* Do not spend time computing monthday if all days match anyway */ 230 230 if (info->monthdays_match != XT_TIME_ALL_MONTHDAYS) { 231 231 localtime_3(&current_time, stamp); 232 - if (!(info->monthdays_match & (1 << current_time.monthday))) 232 + if (!(info->monthdays_match & (1U << current_time.monthday))) 233 233 return false; 234 234 } 235 235
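The xt_time change swaps `1 << day` for `1U << day`: with a signed `int`, shifting 1 into bit 31 (e.g. `monthday` 31) is undefined behavior, while the unsigned literal keeps every shift up to 31 well-defined. A minimal userspace sketch of the same bitmask membership test (helper name is invented for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mirror of the xt_time check: is the bit for 'day' set
 * in the match mask?  Using 1U keeps the shift well-defined for any
 * day up to 31, where (1 << 31) on a 32-bit signed int would be
 * undefined behavior. */
static bool day_matches(uint32_t match_mask, unsigned int day)
{
    return (match_mask & (1U << day)) != 0;
}
```

The fix changes no observable behavior on common compilers; it removes latent UB that a sanitizer or aggressive optimizer could trip over.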
+4 -1
net/phonet/af_phonet.c
··· 129 129 return 1; 130 130 } 131 131 132 - static int pn_header_parse(const struct sk_buff *skb, unsigned char *haddr) 132 + static int pn_header_parse(const struct sk_buff *skb, 133 + const struct net_device *dev, 134 + unsigned char *haddr) 133 135 { 134 136 const u8 *media = skb_mac_header(skb); 137 + 135 138 *haddr = *media; 136 139 return 1; 137 140 }
+5
net/rose/af_rose.c
··· 811 811 goto out_release; 812 812 } 813 813 814 + if (sk->sk_state == TCP_SYN_SENT) { 815 + err = -EALREADY; 816 + goto out_release; 817 + } 818 + 814 819 sk->sk_state = TCP_CLOSE; 815 820 sock->state = SS_UNCONNECTED; 816 821
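The af_rose addition rejects a re-entered `connect()` with `-EALREADY` while a prior attempt is still in `TCP_SYN_SENT`, instead of silently resetting the socket to `TCP_CLOSE`. A toy sketch of that state gate (constants and names are local to this illustration, not the kernel's):

```c
/* Simplified connect() precheck, loosely modeled on the af_rose fix:
 * refuse to restart a handshake that is already in flight. */
enum sock_state { ST_CLOSE, ST_SYN_SENT, ST_ESTABLISHED };

static int connect_precheck(enum sock_state st)
{
    if (st == ST_ESTABLISHED)
        return -106;   /* -EISCONN on Linux: already connected */
    if (st == ST_SYN_SENT)
        return -114;   /* -EALREADY: previous attempt still pending */
    return 0;          /* safe to start (or restart) the handshake */
}
```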
-27
net/sched/sch_generic.c
··· 1288 1288 } 1289 1289 } 1290 1290 1291 - static void dev_reset_queue(struct net_device *dev, 1292 - struct netdev_queue *dev_queue, 1293 - void *_unused) 1294 - { 1295 - struct Qdisc *qdisc; 1296 - bool nolock; 1297 - 1298 - qdisc = rtnl_dereference(dev_queue->qdisc_sleeping); 1299 - if (!qdisc) 1300 - return; 1301 - 1302 - nolock = qdisc->flags & TCQ_F_NOLOCK; 1303 - 1304 - if (nolock) 1305 - spin_lock_bh(&qdisc->seqlock); 1306 - spin_lock_bh(qdisc_lock(qdisc)); 1307 - 1308 - qdisc_reset(qdisc); 1309 - 1310 - spin_unlock_bh(qdisc_lock(qdisc)); 1311 - if (nolock) { 1312 - clear_bit(__QDISC_STATE_MISSED, &qdisc->state); 1313 - clear_bit(__QDISC_STATE_DRAINING, &qdisc->state); 1314 - spin_unlock_bh(&qdisc->seqlock); 1315 - } 1316 - } 1317 - 1318 1291 static bool some_qdisc_is_busy(struct net_device *dev) 1319 1292 { 1320 1293 unsigned int i;
+8 -6
net/sched/sch_ingress.c
··· 113 113 { 114 114 struct ingress_sched_data *q = qdisc_priv(sch); 115 115 struct net_device *dev = qdisc_dev(sch); 116 - struct bpf_mprog_entry *entry = rtnl_dereference(dev->tcx_ingress); 116 + struct bpf_mprog_entry *entry; 117 117 118 118 if (sch->parent != TC_H_INGRESS) 119 119 return; 120 120 121 121 tcf_block_put_ext(q->block, sch, &q->block_info); 122 122 123 - if (entry) { 123 + if (mini_qdisc_pair_inited(&q->miniqp)) { 124 + entry = rtnl_dereference(dev->tcx_ingress); 124 125 tcx_miniq_dec(entry); 125 126 if (!tcx_entry_is_active(entry)) { 126 127 tcx_entry_update(dev, NULL, true); ··· 291 290 292 291 static void clsact_destroy(struct Qdisc *sch) 293 292 { 293 + struct bpf_mprog_entry *ingress_entry, *egress_entry; 294 294 struct clsact_sched_data *q = qdisc_priv(sch); 295 295 struct net_device *dev = qdisc_dev(sch); 296 - struct bpf_mprog_entry *ingress_entry = rtnl_dereference(dev->tcx_ingress); 297 - struct bpf_mprog_entry *egress_entry = rtnl_dereference(dev->tcx_egress); 298 296 299 297 if (sch->parent != TC_H_CLSACT) 300 298 return; ··· 301 301 tcf_block_put_ext(q->ingress_block, sch, &q->ingress_block_info); 302 302 tcf_block_put_ext(q->egress_block, sch, &q->egress_block_info); 303 303 304 - if (ingress_entry) { 304 + if (mini_qdisc_pair_inited(&q->miniqp_ingress)) { 305 + ingress_entry = rtnl_dereference(dev->tcx_ingress); 305 306 tcx_miniq_dec(ingress_entry); 306 307 if (!tcx_entry_is_active(ingress_entry)) { 307 308 tcx_entry_update(dev, NULL, true); ··· 310 309 } 311 310 } 312 311 313 - if (egress_entry) { 312 + if (mini_qdisc_pair_inited(&q->miniqp_egress)) { 313 + egress_entry = rtnl_dereference(dev->tcx_egress); 314 314 tcx_miniq_dec(egress_entry); 315 315 if (!tcx_entry_is_active(egress_entry)) { 316 316 tcx_entry_update(dev, NULL, false);
+2 -5
net/sched/sch_teql.c
··· 146 146 master->slaves = NEXT_SLAVE(q); 147 147 if (q == master->slaves) { 148 148 struct netdev_queue *txq; 149 - spinlock_t *root_lock; 150 149 151 150 txq = netdev_get_tx_queue(master->dev, 0); 152 151 master->slaves = NULL; 153 152 154 - root_lock = qdisc_root_sleeping_lock(rtnl_dereference(txq->qdisc)); 155 - spin_lock_bh(root_lock); 156 - qdisc_reset(rtnl_dereference(txq->qdisc)); 157 - spin_unlock_bh(root_lock); 153 + dev_reset_queue(master->dev, 154 + txq, NULL); 158 155 } 159 156 } 160 157 skb_queue_purge(&dat->q);
+94 -66
net/shaper/shaper.c
··· 36 36 return &((struct net_shaper_nl_ctx *)ctx)->binding; 37 37 } 38 38 39 - static void net_shaper_lock(struct net_shaper_binding *binding) 40 - { 41 - switch (binding->type) { 42 - case NET_SHAPER_BINDING_TYPE_NETDEV: 43 - netdev_lock(binding->netdev); 44 - break; 45 - } 46 - } 47 - 48 - static void net_shaper_unlock(struct net_shaper_binding *binding) 49 - { 50 - switch (binding->type) { 51 - case NET_SHAPER_BINDING_TYPE_NETDEV: 52 - netdev_unlock(binding->netdev); 53 - break; 54 - } 55 - } 56 - 57 39 static struct net_shaper_hierarchy * 58 40 net_shaper_hierarchy(struct net_shaper_binding *binding) 59 41 { 60 42 /* Pairs with WRITE_ONCE() in net_shaper_hierarchy_setup. */ 61 43 if (binding->type == NET_SHAPER_BINDING_TYPE_NETDEV) 44 + return READ_ONCE(binding->netdev->net_shaper_hierarchy); 45 + 46 + /* No other type supported yet. */ 47 + return NULL; 48 + } 49 + 50 + static struct net_shaper_hierarchy * 51 + net_shaper_hierarchy_rcu(struct net_shaper_binding *binding) 52 + { 53 + /* Readers look up the device and take a ref, then take RCU lock 54 + * later at which point netdev may have been unregistered and flushed. 55 + * READ_ONCE() pairs with WRITE_ONCE() in net_shaper_hierarchy_setup. 56 + */ 57 + if (binding->type == NET_SHAPER_BINDING_TYPE_NETDEV && 58 + READ_ONCE(binding->netdev->reg_state) <= NETREG_REGISTERED) 62 59 return READ_ONCE(binding->netdev->net_shaper_hierarchy); 63 60 64 61 /* No other type supported yet. */ ··· 201 204 return 0; 202 205 } 203 206 207 + /* Like net_shaper_ctx_setup(), but for "write" handlers (never for dumps!) 208 + * Acquires the lock protecting the hierarchy (instance lock for netdev). 
209 + */ 210 + static int net_shaper_ctx_setup_lock(const struct genl_info *info, int type, 211 + struct net_shaper_nl_ctx *ctx) 212 + { 213 + struct net *ns = genl_info_net(info); 214 + struct net_device *dev; 215 + int ifindex; 216 + 217 + if (GENL_REQ_ATTR_CHECK(info, type)) 218 + return -EINVAL; 219 + 220 + ifindex = nla_get_u32(info->attrs[type]); 221 + dev = netdev_get_by_index_lock(ns, ifindex); 222 + if (!dev) { 223 + NL_SET_BAD_ATTR(info->extack, info->attrs[type]); 224 + return -ENOENT; 225 + } 226 + 227 + if (!dev->netdev_ops->net_shaper_ops) { 228 + NL_SET_BAD_ATTR(info->extack, info->attrs[type]); 229 + netdev_unlock(dev); 230 + return -EOPNOTSUPP; 231 + } 232 + 233 + ctx->binding.type = NET_SHAPER_BINDING_TYPE_NETDEV; 234 + ctx->binding.netdev = dev; 235 + return 0; 236 + } 237 + 204 238 static void net_shaper_ctx_cleanup(struct net_shaper_nl_ctx *ctx) 205 239 { 206 240 if (ctx->binding.type == NET_SHAPER_BINDING_TYPE_NETDEV) 207 241 netdev_put(ctx->binding.netdev, &ctx->dev_tracker); 242 + } 243 + 244 + static void net_shaper_ctx_cleanup_unlock(struct net_shaper_nl_ctx *ctx) 245 + { 246 + if (ctx->binding.type == NET_SHAPER_BINDING_TYPE_NETDEV) 247 + netdev_unlock(ctx->binding.netdev); 208 248 } 209 249 210 250 static u32 net_shaper_handle_to_index(const struct net_shaper_handle *handle) ··· 285 251 net_shaper_lookup(struct net_shaper_binding *binding, 286 252 const struct net_shaper_handle *handle) 287 253 { 288 - struct net_shaper_hierarchy *hierarchy = net_shaper_hierarchy(binding); 289 254 u32 index = net_shaper_handle_to_index(handle); 255 + struct net_shaper_hierarchy *hierarchy; 290 256 257 + hierarchy = net_shaper_hierarchy_rcu(binding); 291 258 if (!hierarchy || xa_get_mark(&hierarchy->shapers, index, 292 259 NET_SHAPER_NOT_VALID)) 293 260 return NULL; ··· 297 262 } 298 263 299 264 /* Allocate on demand the per device shaper's hierarchy container. 
300 - * Called under the net shaper lock 265 + * Called under the lock protecting the hierarchy (instance lock for netdev) 301 266 */ 302 267 static struct net_shaper_hierarchy * 303 268 net_shaper_hierarchy_setup(struct net_shaper_binding *binding) ··· 716 681 net_shaper_generic_post(info); 717 682 } 718 683 684 + int net_shaper_nl_pre_doit_write(const struct genl_split_ops *ops, 685 + struct sk_buff *skb, struct genl_info *info) 686 + { 687 + struct net_shaper_nl_ctx *ctx = (struct net_shaper_nl_ctx *)info->ctx; 688 + 689 + BUILD_BUG_ON(sizeof(*ctx) > sizeof(info->ctx)); 690 + 691 + return net_shaper_ctx_setup_lock(info, NET_SHAPER_A_IFINDEX, ctx); 692 + } 693 + 694 + void net_shaper_nl_post_doit_write(const struct genl_split_ops *ops, 695 + struct sk_buff *skb, struct genl_info *info) 696 + { 697 + net_shaper_ctx_cleanup_unlock((struct net_shaper_nl_ctx *)info->ctx); 698 + } 699 + 719 700 int net_shaper_nl_pre_dumpit(struct netlink_callback *cb) 720 701 { 721 702 struct net_shaper_nl_ctx *ctx = (struct net_shaper_nl_ctx *)cb->ctx; ··· 829 778 830 779 /* Don't error out dumps performed before any set operation. 
*/ 831 780 binding = net_shaper_binding_from_ctx(ctx); 832 - hierarchy = net_shaper_hierarchy(binding); 833 - if (!hierarchy) 834 - return 0; 835 781 836 782 rcu_read_lock(); 783 + hierarchy = net_shaper_hierarchy_rcu(binding); 784 + if (!hierarchy) 785 + goto out_unlock; 786 + 837 787 for (; (shaper = xa_find(&hierarchy->shapers, &ctx->start_index, 838 788 U32_MAX, XA_PRESENT)); ctx->start_index++) { 839 789 ret = net_shaper_fill_one(skb, binding, shaper, info); 840 790 if (ret) 841 791 break; 842 792 } 793 + out_unlock: 843 794 rcu_read_unlock(); 844 795 845 796 return ret; ··· 859 806 860 807 binding = net_shaper_binding_from_ctx(info->ctx); 861 808 862 - net_shaper_lock(binding); 863 809 ret = net_shaper_parse_info(binding, info->attrs, info, &shaper, 864 810 &exists); 865 811 if (ret) 866 - goto unlock; 812 + return ret; 867 813 868 814 if (!exists) 869 815 net_shaper_default_parent(&shaper.handle, &shaper.parent); 870 816 871 817 hierarchy = net_shaper_hierarchy_setup(binding); 872 - if (!hierarchy) { 873 - ret = -ENOMEM; 874 - goto unlock; 875 - } 818 + if (!hierarchy) 819 + return -ENOMEM; 876 820 877 821 /* The 'set' operation can't create node-scope shapers. 
*/ 878 822 handle = shaper.handle; 879 823 if (handle.scope == NET_SHAPER_SCOPE_NODE && 880 - !net_shaper_lookup(binding, &handle)) { 881 - ret = -ENOENT; 882 - goto unlock; 883 - } 824 + !net_shaper_lookup(binding, &handle)) 825 + return -ENOENT; 884 826 885 827 ret = net_shaper_pre_insert(binding, &handle, info->extack); 886 828 if (ret) 887 - goto unlock; 829 + return ret; 888 830 889 831 ops = net_shaper_ops(binding); 890 832 ret = ops->set(binding, &shaper, info->extack); 891 833 if (ret) { 892 834 net_shaper_rollback(binding); 893 - goto unlock; 835 + return ret; 894 836 } 895 837 896 838 net_shaper_commit(binding, 1, &shaper); 897 839 898 - unlock: 899 - net_shaper_unlock(binding); 900 - return ret; 840 + return 0; 901 841 } 902 842 903 843 static int __net_shaper_delete(struct net_shaper_binding *binding, ··· 1118 1072 1119 1073 binding = net_shaper_binding_from_ctx(info->ctx); 1120 1074 1121 - net_shaper_lock(binding); 1122 1075 ret = net_shaper_parse_handle(info->attrs[NET_SHAPER_A_HANDLE], info, 1123 1076 &handle); 1124 1077 if (ret) 1125 - goto unlock; 1078 + return ret; 1126 1079 1127 1080 hierarchy = net_shaper_hierarchy(binding); 1128 - if (!hierarchy) { 1129 - ret = -ENOENT; 1130 - goto unlock; 1131 - } 1081 + if (!hierarchy) 1082 + return -ENOENT; 1132 1083 1133 1084 shaper = net_shaper_lookup(binding, &handle); 1134 - if (!shaper) { 1135 - ret = -ENOENT; 1136 - goto unlock; 1137 - } 1085 + if (!shaper) 1086 + return -ENOENT; 1138 1087 1139 1088 if (handle.scope == NET_SHAPER_SCOPE_NODE) { 1140 1089 ret = net_shaper_pre_del_node(binding, shaper, info->extack); 1141 1090 if (ret) 1142 - goto unlock; 1091 + return ret; 1143 1092 } 1144 1093 1145 - ret = __net_shaper_delete(binding, shaper, info->extack); 1146 - 1147 - unlock: 1148 - net_shaper_unlock(binding); 1149 - return ret; 1094 + return __net_shaper_delete(binding, shaper, info->extack); 1150 1095 } 1151 1096 1152 1097 static int net_shaper_group_send_reply(struct net_shaper_binding *binding, 
··· 1186 1149 if (!net_shaper_ops(binding)->group) 1187 1150 return -EOPNOTSUPP; 1188 1151 1189 - net_shaper_lock(binding); 1190 1152 leaves_count = net_shaper_list_len(info, NET_SHAPER_A_LEAVES); 1191 1153 if (!leaves_count) { 1192 1154 NL_SET_BAD_ATTR(info->extack, 1193 1155 info->attrs[NET_SHAPER_A_LEAVES]); 1194 - ret = -EINVAL; 1195 - goto unlock; 1156 + return -EINVAL; 1196 1157 } 1197 1158 1198 1159 leaves = kcalloc(leaves_count, sizeof(struct net_shaper) + 1199 1160 sizeof(struct net_shaper *), GFP_KERNEL); 1200 - if (!leaves) { 1201 - ret = -ENOMEM; 1202 - goto unlock; 1203 - } 1161 + if (!leaves) 1162 + return -ENOMEM; 1204 1163 old_nodes = (void *)&leaves[leaves_count]; 1205 1164 1206 1165 ret = net_shaper_parse_node(binding, info->attrs, info, &node); ··· 1273 1240 1274 1241 free_leaves: 1275 1242 kfree(leaves); 1276 - 1277 - unlock: 1278 - net_shaper_unlock(binding); 1279 1243 return ret; 1280 1244 1281 1245 free_msg: ··· 1382 1352 if (!hierarchy) 1383 1353 return; 1384 1354 1385 - net_shaper_lock(binding); 1386 1355 xa_lock(&hierarchy->shapers); 1387 1356 xa_for_each(&hierarchy->shapers, index, cur) { 1388 1357 __xa_erase(&hierarchy->shapers, index); 1389 1358 kfree(cur); 1390 1359 } 1391 1360 xa_unlock(&hierarchy->shapers); 1392 - net_shaper_unlock(binding); 1393 1361 1394 1362 kfree(hierarchy); 1395 1363 }
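The shaper rework deletes the per-handler `net_shaper_lock()`/`net_shaper_unlock()` calls and instead takes the netdev instance lock in shared `pre_doit_write`/`post_doit_write` hooks, so the doit handlers run locked and can simply `return` on error with no unlock paths. A toy sketch of that bracket pattern (a flag stands in for the real lock so the discipline is observable):

```c
/* Toy version of the pre_doit/post_doit bracket used by the generic
 * netlink write ops after this change: the framework locks before the
 * handler and unlocks after it, so the handler body needs no
 * unlock-on-error labels. */
static int lock_held;
static int counter;

static void pre_doit_write(void)  { lock_held = 1; }
static void post_doit_write(void) { lock_held = 0; }

/* Handler: runs under the lock, may just 'return' on error. */
static int set_doit(int v)
{
    if (!lock_held)
        return -5;    /* would be a framework bug in this sketch */
    if (v < 0)
        return -22;   /* -EINVAL: note, no unlock needed here */
    counter = v;
    return 0;
}

static int run_write_op(int v)
{
    int ret;

    pre_doit_write();
    ret = set_doit(v);
    post_doit_write();
    return ret;
}
```

The payoff is visible throughout the hunk above: every `goto unlock` in set/delete/group collapses into a direct `return`.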
+6 -6
net/shaper/shaper_nl_gen.c
··· 99 99 }, 100 100 { 101 101 .cmd = NET_SHAPER_CMD_SET, 102 - .pre_doit = net_shaper_nl_pre_doit, 102 + .pre_doit = net_shaper_nl_pre_doit_write, 103 103 .doit = net_shaper_nl_set_doit, 104 - .post_doit = net_shaper_nl_post_doit, 104 + .post_doit = net_shaper_nl_post_doit_write, 105 105 .policy = net_shaper_set_nl_policy, 106 106 .maxattr = NET_SHAPER_A_IFINDEX, 107 107 .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO, 108 108 }, 109 109 { 110 110 .cmd = NET_SHAPER_CMD_DELETE, 111 - .pre_doit = net_shaper_nl_pre_doit, 111 + .pre_doit = net_shaper_nl_pre_doit_write, 112 112 .doit = net_shaper_nl_delete_doit, 113 - .post_doit = net_shaper_nl_post_doit, 113 + .post_doit = net_shaper_nl_post_doit_write, 114 114 .policy = net_shaper_delete_nl_policy, 115 115 .maxattr = NET_SHAPER_A_IFINDEX, 116 116 .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO, 117 117 }, 118 118 { 119 119 .cmd = NET_SHAPER_CMD_GROUP, 120 - .pre_doit = net_shaper_nl_pre_doit, 120 + .pre_doit = net_shaper_nl_pre_doit_write, 121 121 .doit = net_shaper_nl_group_doit, 122 - .post_doit = net_shaper_nl_post_doit, 122 + .post_doit = net_shaper_nl_post_doit_write, 123 123 .policy = net_shaper_group_nl_policy, 124 124 .maxattr = NET_SHAPER_A_LEAVES, 125 125 .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+5
net/shaper/shaper_nl_gen.h
··· 18 18 19 19 int net_shaper_nl_pre_doit(const struct genl_split_ops *ops, 20 20 struct sk_buff *skb, struct genl_info *info); 21 + int net_shaper_nl_pre_doit_write(const struct genl_split_ops *ops, 22 + struct sk_buff *skb, struct genl_info *info); 21 23 int net_shaper_nl_cap_pre_doit(const struct genl_split_ops *ops, 22 24 struct sk_buff *skb, struct genl_info *info); 23 25 void 24 26 net_shaper_nl_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb, 25 27 struct genl_info *info); 28 + void 29 + net_shaper_nl_post_doit_write(const struct genl_split_ops *ops, 30 + struct sk_buff *skb, struct genl_info *info); 26 31 void 27 32 net_shaper_nl_cap_post_doit(const struct genl_split_ops *ops, 28 33 struct sk_buff *skb, struct genl_info *info);
+17 -6
net/smc/af_smc.c
··· 131 131 struct smc_sock *smc; 132 132 struct sock *child; 133 133 134 - smc = smc_clcsock_user_data(sk); 134 + rcu_read_lock(); 135 + smc = smc_clcsock_user_data_rcu(sk); 136 + if (!smc || !refcount_inc_not_zero(&smc->sk.sk_refcnt)) { 137 + rcu_read_unlock(); 138 + smc = NULL; 139 + goto drop; 140 + } 141 + rcu_read_unlock(); 135 142 136 143 if (READ_ONCE(sk->sk_ack_backlog) + atomic_read(&smc->queued_smc_hs) > 137 144 sk->sk_max_ack_backlog) ··· 160 153 if (inet_csk(child)->icsk_af_ops == inet_csk(sk)->icsk_af_ops) 161 154 inet_csk(child)->icsk_af_ops = smc->ori_af_ops; 162 155 } 156 + sock_put(&smc->sk); 163 157 return child; 164 158 165 159 drop: 166 160 dst_release(dst); 167 161 tcp_listendrop(sk); 162 + if (smc) 163 + sock_put(&smc->sk); 168 164 return NULL; 169 165 } 170 166 ··· 264 254 struct sock *clcsk = smc->clcsock->sk; 265 255 266 256 write_lock_bh(&clcsk->sk_callback_lock); 267 - clcsk->sk_user_data = NULL; 257 + rcu_assign_sk_user_data(clcsk, NULL); 268 258 269 259 smc_clcsock_restore_cb(&clcsk->sk_state_change, &smc->clcsk_state_change); 270 260 smc_clcsock_restore_cb(&clcsk->sk_data_ready, &smc->clcsk_data_ready); ··· 912 902 struct sock *clcsk = smc->clcsock->sk; 913 903 914 904 write_lock_bh(&clcsk->sk_callback_lock); 915 - clcsk->sk_user_data = (void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY); 905 + __rcu_assign_sk_user_data_with_flags(clcsk, smc, SK_USER_DATA_NOCOPY); 916 906 917 907 smc_clcsock_replace_cb(&clcsk->sk_state_change, smc_fback_state_change, 918 908 &smc->clcsk_state_change); ··· 2675 2665 * smc-specific sk_data_ready function 2676 2666 */ 2677 2667 write_lock_bh(&smc->clcsock->sk->sk_callback_lock); 2678 - smc->clcsock->sk->sk_user_data = 2679 - (void *)((uintptr_t)smc | SK_USER_DATA_NOCOPY); 2668 + __rcu_assign_sk_user_data_with_flags(smc->clcsock->sk, smc, 2669 + SK_USER_DATA_NOCOPY); 2680 2670 smc_clcsock_replace_cb(&smc->clcsock->sk->sk_data_ready, 2681 2671 smc_clcsock_data_ready, &smc->clcsk_data_ready); 2682 2672 
write_unlock_bh(&smc->clcsock->sk->sk_callback_lock); ··· 2697 2687 write_lock_bh(&smc->clcsock->sk->sk_callback_lock); 2698 2688 smc_clcsock_restore_cb(&smc->clcsock->sk->sk_data_ready, 2699 2689 &smc->clcsk_data_ready); 2700 - smc->clcsock->sk->sk_user_data = NULL; 2690 + rcu_assign_sk_user_data(smc->clcsock->sk, NULL); 2701 2691 write_unlock_bh(&smc->clcsock->sk->sk_callback_lock); 2702 2692 goto out; 2703 2693 } 2694 + sock_set_flag(sk, SOCK_RCU_FREE); 2704 2695 sk->sk_max_ack_backlog = backlog; 2705 2696 sk->sk_ack_backlog = 0; 2706 2697 sk->sk_state = SMC_LISTEN;
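The af_smc fix looks up the parent under RCU and then takes a reference with `refcount_inc_not_zero()`, which succeeds only if the object has not already dropped to zero (i.e. is not being freed). A userspace sketch of that "take a ref only if still live" primitive using C11 atomics (a simplification of the kernel's `refcount_t`, which also saturates):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of refcount_inc_not_zero(): a CAS loop that refuses to
 * resurrect an object whose count already hit zero. */
static bool refcount_inc_not_zero(atomic_int *ref)
{
    int old = atomic_load(ref);

    while (old != 0) {
        if (atomic_compare_exchange_weak(ref, &old, old + 1))
            return true;   /* reference taken */
        /* 'old' was reloaded by the failed CAS; retry */
    }
    return false;          /* object is dying; caller must bail out */
}

static atomic_int live_obj = 2;   /* two holders: still alive */
static atomic_int dead_obj = 0;   /* last ref dropped: dying */
```

Combined with `SOCK_RCU_FREE` and `rcu_assign_sk_user_data()` in the hunk above, this closes the window where the listen socket is torn down while `smc_clcsock_accept()` is still dereferencing it.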
+5
net/smc/smc.h
··· 346 346 ((uintptr_t)clcsk->sk_user_data & ~SK_USER_DATA_NOCOPY); 347 347 } 348 348 349 + static inline struct smc_sock *smc_clcsock_user_data_rcu(const struct sock *clcsk) 350 + { 351 + return (struct smc_sock *)rcu_dereference_sk_user_data(clcsk); 352 + } 353 + 349 354 /* save target_cb in saved_cb, and replace target_cb with new_cb */ 350 355 static inline void smc_clcsock_replace_cb(void (**target_cb)(struct sock *), 351 356 void (*new_cb)(struct sock *),
+1 -1
net/smc/smc_close.c
··· 218 218 write_lock_bh(&smc->clcsock->sk->sk_callback_lock); 219 219 smc_clcsock_restore_cb(&smc->clcsock->sk->sk_data_ready, 220 220 &smc->clcsk_data_ready); 221 - smc->clcsock->sk->sk_user_data = NULL; 221 + rcu_assign_sk_user_data(smc->clcsock->sk, NULL); 222 222 write_unlock_bh(&smc->clcsock->sk->sk_callback_lock); 223 223 rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR); 224 224 }
+21 -5
net/sunrpc/cache.c
··· 1062 1062 struct cache_reader *rp = filp->private_data; 1063 1063 1064 1064 if (rp) { 1065 + struct cache_request *rq = NULL; 1066 + 1065 1067 spin_lock(&queue_lock); 1066 1068 if (rp->offset) { 1067 1069 struct cache_queue *cq; 1068 - for (cq= &rp->q; &cq->list != &cd->queue; 1069 - cq = list_entry(cq->list.next, struct cache_queue, list)) 1070 + for (cq = &rp->q; &cq->list != &cd->queue; 1071 + cq = list_entry(cq->list.next, 1072 + struct cache_queue, list)) 1070 1073 if (!cq->reader) { 1071 - container_of(cq, struct cache_request, q) 1072 - ->readers--; 1074 + struct cache_request *cr = 1075 + container_of(cq, 1076 + struct cache_request, q); 1077 + cr->readers--; 1078 + if (cr->readers == 0 && 1079 + !test_bit(CACHE_PENDING, 1080 + &cr->item->flags)) { 1081 + list_del(&cr->q.list); 1082 + rq = cr; 1083 + } 1073 1084 break; 1074 1085 } 1075 1086 rp->offset = 0; ··· 1088 1077 list_del(&rp->q.list); 1089 1078 spin_unlock(&queue_lock); 1090 1079 1080 + if (rq) { 1081 + cache_put(rq->item, cd); 1082 + kfree(rq->buf); 1083 + kfree(rq); 1084 + } 1085 + 1091 1086 filp->private_data = NULL; 1092 1087 kfree(rp); 1093 - 1094 1088 } 1095 1089 if (filp->f_mode & FMODE_WRITE) { 1096 1090 atomic_dec(&cd->writers);
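The sunrpc fix unlinks the now-unreferenced request while `queue_lock` is held but defers `cache_put()`/`kfree()` until after the lock is dropped. A minimal sketch of that unlink-under-lock, free-after-unlock pattern on a toy singly linked queue (types and names invented for illustration):

```c
#include <stddef.h>

/* Toy queue node standing in for struct cache_request. */
struct req { int readers; struct req *next; };

static struct req node_b = { 0, NULL };      /* unreferenced: reapable */
static struct req node_a = { 1, &node_b };   /* still being read */
static struct req *queue_head = &node_a;

/* Detach the first node with no readers and hand it back so the
 * caller can free it *after* dropping the lock (freeing may be
 * expensive or sleep, so it must not happen under a spinlock). */
static struct req *detach_if_unreferenced(void)
{
    struct req **pp = &queue_head, *victim = NULL;

    /* --- queue lock would be held here --- */
    while (*pp) {
        if ((*pp)->readers == 0) {
            victim = *pp;
            *pp = victim->next;   /* unlink under the lock */
            break;
        }
        pp = &(*pp)->next;
    }
    /* --- lock dropped; caller now frees 'victim' safely --- */
    return victim;
}
```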
+2
net/unix/af_unix.c
··· 1958 1958 static void unix_peek_fds(struct scm_cookie *scm, struct sk_buff *skb) 1959 1959 { 1960 1960 scm->fp = scm_fp_dup(UNIXCB(skb).fp); 1961 + 1962 + unix_peek_fpl(scm->fp); 1961 1963 } 1962 1964 1963 1965 static void unix_destruct_scm(struct sk_buff *skb)
+1
net/unix/af_unix.h
··· 29 29 void unix_update_edges(struct unix_sock *receiver); 30 30 int unix_prepare_fpl(struct scm_fp_list *fpl); 31 31 void unix_destroy_fpl(struct scm_fp_list *fpl); 32 + void unix_peek_fpl(struct scm_fp_list *fpl); 32 33 void unix_schedule_gc(struct user_struct *user); 33 34 34 35 /* SOCK_DIAG */
+51 -28
net/unix/garbage.c
··· 318 318 unix_free_vertices(fpl); 319 319 } 320 320 321 + static bool gc_in_progress; 322 + static seqcount_t unix_peek_seq = SEQCNT_ZERO(unix_peek_seq); 323 + 324 + void unix_peek_fpl(struct scm_fp_list *fpl) 325 + { 326 + static DEFINE_SPINLOCK(unix_peek_lock); 327 + 328 + if (!fpl || !fpl->count_unix) 329 + return; 330 + 331 + if (!READ_ONCE(gc_in_progress)) 332 + return; 333 + 334 + /* Invalidate the final refcnt check in unix_vertex_dead(). */ 335 + spin_lock(&unix_peek_lock); 336 + raw_write_seqcount_barrier(&unix_peek_seq); 337 + spin_unlock(&unix_peek_lock); 338 + } 339 + 321 340 static bool unix_vertex_dead(struct unix_vertex *vertex) 322 341 { 323 342 struct unix_edge *edge; ··· 368 349 return false; 369 350 370 351 return true; 352 + } 353 + 354 + static LIST_HEAD(unix_visited_vertices); 355 + static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2; 356 + 357 + static bool unix_scc_dead(struct list_head *scc, bool fast) 358 + { 359 + struct unix_vertex *vertex; 360 + bool scc_dead = true; 361 + unsigned int seq; 362 + 363 + seq = read_seqcount_begin(&unix_peek_seq); 364 + 365 + list_for_each_entry_reverse(vertex, scc, scc_entry) { 366 + /* Don't restart DFS from this vertex. */ 367 + list_move_tail(&vertex->entry, &unix_visited_vertices); 368 + 369 + /* Mark vertex as off-stack for __unix_walk_scc(). */ 370 + if (!fast) 371 + vertex->index = unix_vertex_grouped_index; 372 + 373 + if (scc_dead) 374 + scc_dead = unix_vertex_dead(vertex); 375 + } 376 + 377 + /* If MSG_PEEK intervened, defer this SCC to the next round. 
*/ 378 + if (read_seqcount_retry(&unix_peek_seq, seq)) 379 + return false; 380 + 381 + return scc_dead; 371 382 } 372 383 373 384 static void unix_collect_skb(struct list_head *scc, struct sk_buff_head *hitlist) ··· 452 403 453 404 return false; 454 405 } 455 - 456 - static LIST_HEAD(unix_visited_vertices); 457 - static unsigned long unix_vertex_grouped_index = UNIX_VERTEX_INDEX_MARK2; 458 406 459 407 static unsigned long __unix_walk_scc(struct unix_vertex *vertex, 460 408 unsigned long *last_index, ··· 520 474 } 521 475 522 476 if (vertex->index == vertex->scc_index) { 523 - struct unix_vertex *v; 524 477 struct list_head scc; 525 - bool scc_dead = true; 526 478 527 479 /* SCC finalised. 528 480 * ··· 529 485 */ 530 486 __list_cut_position(&scc, &vertex_stack, &vertex->scc_entry); 531 487 532 - list_for_each_entry_reverse(v, &scc, scc_entry) { 533 - /* Don't restart DFS from this vertex in unix_walk_scc(). */ 534 - list_move_tail(&v->entry, &unix_visited_vertices); 535 - 536 - /* Mark vertex as off-stack. 
*/ 537 - v->index = unix_vertex_grouped_index; 538 - 539 - if (scc_dead) 540 - scc_dead = unix_vertex_dead(v); 541 - } 542 - 543 - if (scc_dead) { 488 + if (unix_scc_dead(&scc, false)) { 544 489 unix_collect_skb(&scc, hitlist); 545 490 } else { 546 491 if (unix_vertex_max_scc_index < vertex->scc_index) ··· 583 550 while (!list_empty(&unix_unvisited_vertices)) { 584 551 struct unix_vertex *vertex; 585 552 struct list_head scc; 586 - bool scc_dead = true; 587 553 588 554 vertex = list_first_entry(&unix_unvisited_vertices, typeof(*vertex), entry); 589 555 list_add(&scc, &vertex->scc_entry); 590 556 591 - list_for_each_entry_reverse(vertex, &scc, scc_entry) { 592 - list_move_tail(&vertex->entry, &unix_visited_vertices); 593 - 594 - if (scc_dead) 595 - scc_dead = unix_vertex_dead(vertex); 596 - } 597 - 598 - if (scc_dead) { 557 + if (unix_scc_dead(&scc, true)) { 599 558 cyclic_sccs--; 600 559 unix_collect_skb(&scc, hitlist); 601 560 } ··· 601 576 WRITE_ONCE(unix_graph_state, 602 577 cyclic_sccs ? UNIX_GRAPH_CYCLIC : UNIX_GRAPH_NOT_CYCLIC); 603 578 } 604 - 605 - static bool gc_in_progress; 606 579 607 580 static void unix_gc(struct work_struct *work) 608 581 {
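The garbage-collector change closes the MSG_PEEK race with a seqcount: the GC samples `unix_peek_seq` before judging an SCC dead, `unix_peek_fpl()` bumps it when a peek duplicates in-flight fds, and a changed count makes the GC defer that SCC to the next round. A single-threaded sketch of the sample/retry protocol (the real code uses `seqcount_t` with memory barriers; this illustrates only the logic):

```c
/* Toy seqcount: writers bump it, readers sample before and compare
 * after, treating any change as "my observation may be stale". */
static unsigned int peek_seq;

static unsigned int seq_begin(void)        { return peek_seq; }
static int seq_retry(unsigned int snap)    { return peek_seq != snap; }
static void peek_invalidate(void)          { peek_seq++; }

/* Pretend SCC scan: reports "dead" (1) only if no peek intervened,
 * mirroring the unix_scc_dead() retry check. */
static int scan_scc(int simulate_peek)
{
    unsigned int snap = seq_begin();

    /* ... walk vertices, check out_degree vs refcounts ... */
    if (simulate_peek)
        peek_invalidate();   /* a concurrent MSG_PEEK took a ref */

    if (seq_retry(snap))
        return 0;            /* defer: re-decide on the next GC run */
    return 1;
}
```

Deferring is always safe here: a wrongly-kept SCC just survives one extra GC cycle, whereas collecting an SCC that a peek revived would free live fds.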
+1
net/wireless/pmsr.c
··· 664 664 } 665 665 spin_unlock_bh(&wdev->pmsr_lock); 666 666 667 + cancel_work_sync(&wdev->pmsr_free_wk); 667 668 if (found) 668 669 cfg80211_pmsr_process_abort(wdev); 669 670
+4 -5
scripts/livepatch/klp-build
··· 285 285 # application from appending it with '+' due to a dirty git working tree. 286 286 set_kernelversion() { 287 287 local file="$SRC/scripts/setlocalversion" 288 - local localversion 288 + local kernelrelease 289 289 290 290 stash_file "$file" 291 291 292 - localversion="$(cd "$SRC" && make --no-print-directory kernelversion)" 293 - localversion="$(cd "$SRC" && KERNELVERSION="$localversion" ./scripts/setlocalversion)" 294 - [[ -z "$localversion" ]] && die "setlocalversion failed" 292 + kernelrelease="$(cd "$SRC" && make syncconfig &>/dev/null && make -s kernelrelease)" 293 + [[ -z "$kernelrelease" ]] && die "failed to get kernel version" 295 294 296 - sed -i "2i echo $localversion; exit 0" scripts/setlocalversion 295 + sed -i "2i echo $kernelrelease; exit 0" scripts/setlocalversion 297 296 } 298 297 299 298 get_patch_files() {
+3 -3
sound/soc/samsung/i2s.c
··· 1360 1360 if (!pdev_sec) 1361 1361 return -ENOMEM; 1362 1362 1363 - pdev_sec->driver_override = kstrdup("samsung-i2s", GFP_KERNEL); 1364 - if (!pdev_sec->driver_override) { 1363 + ret = device_set_driver_override(&pdev_sec->dev, "samsung-i2s"); 1364 + if (ret) { 1365 1365 platform_device_put(pdev_sec); 1366 - return -ENOMEM; 1366 + return ret; 1367 1367 } 1368 1368 1369 1369 ret = platform_device_add(pdev_sec);
+5 -2
tools/bootconfig/main.c
··· 162 162 if (fd < 0) 163 163 return -errno; 164 164 ret = fstat(fd, &stat); 165 - if (ret < 0) 166 - return -errno; 165 + if (ret < 0) { 166 + ret = -errno; 167 + close(fd); 168 + return ret; 169 + } 167 170 168 171 ret = load_xbc_fd(fd, buf, stat.st_size); 169 172
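The tools/bootconfig fix plugs an fd leak: on `fstat()` failure the old code returned `-errno` without closing the descriptor. A self-contained sketch of the corrected discipline, also capturing `errno` before `close()` can clobber it (function name is illustrative):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns the file size, or -errno.  On fstat() failure, save errno
 * *before* close() (which may overwrite it) and do not leak the fd. */
static long file_size(const char *path)
{
    struct stat st;
    long ret;
    int fd;

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -errno;

    if (fstat(fd, &st) < 0) {
        ret = -errno;      /* capture before close() */
        close(fd);
        return ret;
    }

    ret = (long)st.st_size;
    close(fd);
    return ret;
}
```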
+2 -3
tools/objtool/check.c
··· 2184 2184 last = insn; 2185 2185 2186 2186 /* 2187 - * Store back-pointers for unconditional forward jumps such 2187 + * Store back-pointers for forward jumps such 2188 2188 * that find_jump_table() can back-track using those and 2189 2189 * avoid some potentially confusing code. 2190 2190 */ 2191 - if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest && 2192 - insn->offset > last->offset && 2191 + if (insn->jump_dest && 2193 2192 insn->jump_dest->offset > insn->offset && 2194 2193 !insn->jump_dest->first_jump_src) { 2195 2194
+3 -20
tools/objtool/elf.c
··· 16 16 #include <string.h> 17 17 #include <unistd.h> 18 18 #include <errno.h> 19 - #include <libgen.h> 20 19 #include <ctype.h> 21 20 #include <linux/align.h> 22 21 #include <linux/kernel.h> ··· 1188 1189 struct elf *elf_create_file(GElf_Ehdr *ehdr, const char *name) 1189 1190 { 1190 1191 struct section *null, *symtab, *strtab, *shstrtab; 1191 - char *dir, *base, *tmp_name; 1192 + char *tmp_name; 1192 1193 struct symbol *sym; 1193 1194 struct elf *elf; 1194 1195 ··· 1202 1203 1203 1204 INIT_LIST_HEAD(&elf->sections); 1204 1205 1205 - dir = strdup(name); 1206 - if (!dir) { 1207 - ERROR_GLIBC("strdup"); 1208 - return NULL; 1209 - } 1210 - 1211 - dir = dirname(dir); 1212 - 1213 - base = strdup(name); 1214 - if (!base) { 1215 - ERROR_GLIBC("strdup"); 1216 - return NULL; 1217 - } 1218 - 1219 - base = basename(base); 1220 - 1221 - tmp_name = malloc(256); 1206 + tmp_name = malloc(strlen(name) + 8); 1222 1207 if (!tmp_name) { 1223 1208 ERROR_GLIBC("malloc"); 1224 1209 return NULL; 1225 1210 } 1226 1211 1227 - snprintf(tmp_name, 256, "%s/%s.XXXXXX", dir, base); 1212 + sprintf(tmp_name, "%s.XXXXXX", name); 1228 1213 1229 1214 elf->fd = mkstemp(tmp_name); 1230 1215 if (elf->fd == -1) {
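The objtool change drops the `dirname()`/`basename()` dance and the fixed 256-byte buffer (which silently truncated long paths): since the temp file lands next to the original anyway, it simply appends `.XXXXXX` to the full name with an exactly sized allocation, `strlen(name) + 8` covering the 7-byte suffix plus the NUL. A standalone sketch of that template construction:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Create a temp file beside 'name', as tools/objtool now does:
 * allocate strlen(name) + sizeof(".XXXXXX") bytes and let mkstemp()
 * fill in the suffix.  Returns the fd or -1; *out gets the path. */
static int tmp_beside(const char *name, char **out)
{
    size_t len = strlen(name) + 8;   /* ".XXXXXX" (7) + NUL (1) */
    char *tmp = malloc(len);
    int fd;

    if (!tmp)
        return -1;
    snprintf(tmp, len, "%s.XXXXXX", name);

    fd = mkstemp(tmp);               /* replaces XXXXXX in place */
    if (fd < 0) {
        free(tmp);
        return -1;
    }
    *out = tmp;
    return fd;
}

/* Demo: create and immediately remove a temp file; 0 on success. */
static int tmp_beside_demo(const char *name)
{
    char *path = NULL;
    int fd = tmp_beside(name, &path);

    if (fd < 0)
        return -1;
    close(fd);
    unlink(path);
    free(path);
    return 0;
}
```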
+2 -1
tools/objtool/klp-diff.c
··· 14 14 #include <objtool/util.h> 15 15 #include <arch/special.h> 16 16 17 + #include <linux/align.h> 17 18 #include <linux/objtool_types.h> 18 19 #include <linux/livepatch_external.h> 19 20 #include <linux/stringify.h> ··· 561 560 } 562 561 563 562 if (!is_sec_sym(patched_sym)) 564 - offset = sec_size(out_sec); 563 + offset = ALIGN(sec_size(out_sec), out_sec->sh.sh_addralign); 565 564 566 565 if (patched_sym->len || is_sec_sym(patched_sym)) { 567 566 void *data = NULL;
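The klp-diff change rounds the append offset up to the output section's `sh_addralign` with `ALIGN()` before placing a symbol, so copied data keeps its required alignment. The kernel's `ALIGN()` is the classic power-of-two round-up; a sketch:

```c
/* Power-of-two round-up, as in include/linux/align.h:
 *   ALIGN(x, a) == ((x + a - 1) & ~(a - 1))   for power-of-two 'a'.
 * klp-diff now applies this to the section size before appending a
 * patched symbol, so the new data honors sh_addralign. */
static unsigned long align_up(unsigned long x, unsigned long a)
{
    return (x + a - 1) & ~(a - 1);
}
```

Without this, a symbol appended after an oddly sized predecessor could land misaligned, which matters for anything with alignment-sensitive loads or relocations.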
+1 -1
tools/testing/selftests/bpf/Makefile
··· 409 409 CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \ 410 410 LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \ 411 411 EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \ 412 - HOSTPKG_CONFIG=$(PKG_CONFIG) \ 412 + HOSTPKG_CONFIG='$(PKG_CONFIG)' \ 413 413 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ) 414 414 415 415 # Get Clang's default includes on this system, as opposed to those seen by
+53 -3
tools/testing/selftests/bpf/progs/exceptions_fail.c
··· 8 8 #include "bpf_experimental.h" 9 9 10 10 extern void bpf_rcu_read_lock(void) __ksym; 11 + extern void bpf_rcu_read_unlock(void) __ksym; 12 + extern void bpf_preempt_disable(void) __ksym; 13 + extern void bpf_preempt_enable(void) __ksym; 14 + extern void bpf_local_irq_save(unsigned long *) __ksym; 15 + extern void bpf_local_irq_restore(unsigned long *) __ksym; 11 16 12 17 #define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8))) 13 18 ··· 136 131 } 137 132 138 133 SEC("?tc") 139 - __failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region") 134 + __failure __msg("bpf_throw cannot be used inside bpf_rcu_read_lock-ed region") 140 135 int reject_with_rcu_read_lock(void *ctx) 141 136 { 142 137 bpf_rcu_read_lock(); ··· 152 147 } 153 148 154 149 SEC("?tc") 155 - __failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region") 150 + __failure __msg("bpf_throw cannot be used inside bpf_rcu_read_lock-ed region") 156 151 int reject_subprog_with_rcu_read_lock(void *ctx) 157 152 { 158 153 bpf_rcu_read_lock(); 159 - return throwing_subprog(ctx); 154 + throwing_subprog(ctx); 155 + bpf_rcu_read_unlock(); 156 + return 0; 160 157 } 161 158 162 159 static bool rbless(struct bpf_rb_node *n1, const struct bpf_rb_node *n2) ··· 350 343 bpf_loop(5, loop_cb1, NULL, 0); 351 344 else 352 345 bpf_loop(5, loop_cb2, NULL, 0); 346 + return 0; 347 + } 348 + 349 + __noinline static int always_throws(void) 350 + { 351 + bpf_throw(0); 352 + return 0; 353 + } 354 + 355 + __noinline static int rcu_lock_then_throw(void) 356 + { 357 + bpf_rcu_read_lock(); 358 + bpf_throw(0); 359 + return 0; 360 + } 361 + 362 + SEC("?tc") 363 + __failure __msg("bpf_throw cannot be used inside bpf_rcu_read_lock-ed region") 364 + int reject_subprog_rcu_lock_throw(void *ctx) 365 + { 366 + rcu_lock_then_throw(); 367 + return 0; 368 + } 369 + 370 + SEC("?tc") 371 + __failure __msg("bpf_throw cannot be used inside bpf_preempt_disable-ed region") 372 + int reject_subprog_throw_preempt_lock(void *ctx) 373 + { 374 + bpf_preempt_disable(); 375 + always_throws(); 376 + bpf_preempt_enable(); 377 + return 0; 378 + } 379 + 380 + SEC("?tc") 381 + __failure __msg("bpf_throw cannot be used inside bpf_local_irq_save-ed region") 382 + int reject_subprog_throw_irq_lock(void *ctx) 383 + { 384 + unsigned long flags; 385 + 386 + bpf_local_irq_save(&flags); 387 + always_throws(); 388 + bpf_local_irq_restore(&flags); 353 389 return 0; 354 390 } 355 391
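The negative tests above assert that `bpf_throw()` is rejected while RCU, preemption, or IRQ state is held, even when the throw is buried inside a subprogram. The underlying hazard can be sketched in plain userspace C with `setjmp`/`longjmp` standing in for `bpf_throw()` — the names `lock_depth` and `leaked_lock_after_throw` are illustrative, not part of the selftests:

```c
#include <assert.h>
#include <setjmp.h>

/* Illustrative only: models why throwing inside a "locked" region is
 * rejected. A non-local jump that skips the unlock leaks the lock. */
static jmp_buf env;
static int lock_depth; /* stands in for rcu_read_lock nesting */

static void throwing_subprog(void)
{
	longjmp(env, 1); /* like bpf_throw(): unwinds without running unlock */
}

static int leaked_lock_after_throw(void)
{
	lock_depth = 0;
	if (setjmp(env) == 0) {
		lock_depth++;       /* "bpf_rcu_read_lock()" */
		throwing_subprog(); /* throw inside the locked region */
		lock_depth--;       /* "bpf_rcu_read_unlock()" - never reached */
	}
	return lock_depth; /* non-zero: the lock was never released */
}
```

Since the verifier cannot unwind such state on a throw, it must reject the program outright, which is what the `__failure __msg(...)` annotations check.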
+94
tools/testing/selftests/bpf/progs/verifier_bounds.c
··· 2037 2037 : __clobber_all); 2038 2038 } 2039 2039 2040 + SEC("socket") 2041 + __description("maybe_fork_scalars: OR with constant rejects OOB") 2042 + __failure __msg("invalid access to map value") 2043 + __naked void or_scalar_fork_rejects_oob(void) 2044 + { 2045 + asm volatile (" \ 2046 + r1 = 0; \ 2047 + *(u64*)(r10 - 8) = r1; \ 2048 + r2 = r10; \ 2049 + r2 += -8; \ 2050 + r1 = %[map_hash_8b] ll; \ 2051 + call %[bpf_map_lookup_elem]; \ 2052 + if r0 == 0 goto l0_%=; \ 2053 + r9 = r0; \ 2054 + r6 = *(u64*)(r9 + 0); \ 2055 + r6 s>>= 63; \ 2056 + r6 |= 8; \ 2057 + /* r6 is -1 (current) or 8 (pushed) */ \ 2058 + if r6 s< 0 goto l0_%=; \ 2059 + /* pushed path: r6 = 8, OOB for value_size=8 */ \ 2060 + r9 += r6; \ 2061 + r0 = *(u8*)(r9 + 0); \ 2062 + l0_%=: r0 = 0; \ 2063 + exit; \ 2064 + " : 2065 + : __imm(bpf_map_lookup_elem), 2066 + __imm_addr(map_hash_8b) 2067 + : __clobber_all); 2068 + } 2069 + 2070 + SEC("socket") 2071 + __description("maybe_fork_scalars: AND with constant still works") 2072 + __success __retval(0) 2073 + __naked void and_scalar_fork_still_works(void) 2074 + { 2075 + asm volatile (" \ 2076 + r1 = 0; \ 2077 + *(u64*)(r10 - 8) = r1; \ 2078 + r2 = r10; \ 2079 + r2 += -8; \ 2080 + r1 = %[map_hash_8b] ll; \ 2081 + call %[bpf_map_lookup_elem]; \ 2082 + if r0 == 0 goto l0_%=; \ 2083 + r9 = r0; \ 2084 + r6 = *(u64*)(r9 + 0); \ 2085 + r6 s>>= 63; \ 2086 + r6 &= 4; \ 2087 + /* \ 2088 + * r6 is 0 (pushed, 0&4==0) or 4 (current) \ 2089 + * both within value_size=8 \ 2090 + */ \ 2091 + if r6 s< 0 goto l0_%=; \ 2092 + r9 += r6; \ 2093 + r0 = *(u8*)(r9 + 0); \ 2094 + l0_%=: r0 = 0; \ 2095 + exit; \ 2096 + " : 2097 + : __imm(bpf_map_lookup_elem), 2098 + __imm_addr(map_hash_8b) 2099 + : __clobber_all); 2100 + } 2101 + 2102 + SEC("socket") 2103 + __description("maybe_fork_scalars: OR with constant allows in-bounds") 2104 + __success __retval(0) 2105 + __naked void or_scalar_fork_allows_inbounds(void) 2106 + { 2107 + asm volatile (" \ 2108 + r1 = 0; \ 2109 + *(u64*)(r10 - 8) = r1; \ 2110 + r2 = r10; \ 2111 + r2 += -8; \ 2112 + r1 = %[map_hash_8b] ll; \ 2113 + call %[bpf_map_lookup_elem]; \ 2114 + if r0 == 0 goto l0_%=; \ 2115 + r9 = r0; \ 2116 + r6 = *(u64*)(r9 + 0); \ 2117 + r6 s>>= 63; \ 2118 + r6 |= 4; \ 2119 + /* \ 2120 + * r6 is -1 (current) or 4 (pushed) \ 2121 + * pushed path: r6 = 4, within value_size=8 \ 2122 + */ \ 2123 + if r6 s< 0 goto l0_%=; \ 2124 + r9 += r6; \ 2125 + r0 = *(u8*)(r9 + 0); \ 2126 + l0_%=: r0 = 0; \ 2127 + exit; \ 2128 + " : 2129 + : __imm(bpf_map_lookup_elem), 2130 + __imm_addr(map_hash_8b) 2131 + : __clobber_all); 2132 + } 2133 + 2040 2134 char _license[] SEC("license") = "GPL";
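All three bounds tests build on the same mask trick: an arithmetic shift by 63 forks the scalar into 0 or -1, and a following OR/AND with a constant forks it into two concrete candidates. A plain C sketch of the arithmetic the verifier must track (helper names `sign_mask`, `or_fork`, `and_fork` are illustrative; signed right shift is assumed arithmetic, as on BPF and mainstream C targets):

```c
#include <assert.h>
#include <stdint.h>

/* Arithmetic shift by 63: 0 for non-negative inputs, -1 (all ones)
 * for negative ones -- the same fork the tests build on. */
static int64_t sign_mask(int64_t x) { return x >> 63; }

/* OR with a constant c: either -1 (since -1 | c == -1) or c itself.
 * With c == 8 the non-negative path lands exactly at offset 8, which
 * is out of bounds for an 8-byte map value. */
static int64_t or_fork(int64_t x, int64_t c) { return sign_mask(x) | c; }

/* AND with a constant c: either c (since -1 & c == c) or 0 -- both
 * bounded by c, hence in-bounds for c < value_size. */
static int64_t and_fork(int64_t x, int64_t c) { return sign_mask(x) & c; }
```

After the `if r6 s< 0` guard filters out the -1 candidate, only the constant survives, so the OR-with-8 test must be rejected while the OR-with-4 and AND-with-4 variants stay within the 8-byte value.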
+22
tools/testing/selftests/bpf/progs/verifier_bswap.c
··· 91 91 BSWAP_RANGE_TEST(le64_range, "le64", 0x3f00, 0x3f000000000000) 92 92 #endif 93 93 94 + SEC("socket") 95 + __description("BSWAP, reset reg id") 96 + __failure __msg("math between fp pointer and register with unbounded min value is not allowed") 97 + __naked void bswap_reset_reg_id(void) 98 + { 99 + asm volatile (" \ 100 + call %[bpf_ktime_get_ns]; \ 101 + r1 = r0; \ 102 + r0 = be16 r0; \ 103 + if r0 != 1 goto l0_%=; \ 104 + r2 = r10; \ 105 + r2 += -512; \ 106 + r2 += r1; \ 107 + *(u8 *)(r2 + 0) = 0; \ 108 + l0_%=: \ 109 + r0 = 0; \ 110 + exit; \ 111 + " : 112 + : __imm(bpf_ktime_get_ns) 113 + : __clobber_all); 114 + } 115 + 94 116 #else 95 117 96 118 SEC("socket")
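The new bswap test checks that a byte swap resets the register's id: even after `be16 r0` is proven equal to 1, the pre-swap copy in `r1` remains unbounded and must not be usable as a stack offset. A small C sketch of why a bound on the swapped value says nothing useful about the original (assuming a little-endian host, where BPF's `be16` is a byte swap of the low 16 bits; `be16_on_le` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

/* On a little-endian host, BPF's "be16" swaps the two low bytes.
 * Knowing the swapped value pins the low 16 bits of the original
 * to a *different* constant and says nothing about the upper bits,
 * so any range learned before the swap no longer applies. */
static uint16_t be16_on_le(uint16_t x)
{
	return (uint16_t)((x << 8) | (x >> 8));
}
```

Here `be16_on_le(0x0100)` is 1, so proving the swapped value equals 1 implies the original low half was 0x0100, not 1 — exactly why the verifier must drop the old id/range instead of carrying it across the swap.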
+108
tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
··· 348 348 : __clobber_all); 349 349 } 350 350 351 + /* 352 + * Test that sync_linked_regs() checks reg->id (the linked target register) 353 + * for BPF_ADD_CONST32 rather than known_reg->id (the branch register). 354 + */ 355 + SEC("socket") 356 + __success 357 + __naked void scalars_alu32_zext_linked_reg(void) 358 + { 359 + asm volatile (" \ 360 + call %[bpf_get_prandom_u32]; \ 361 + w6 = w0; /* r6 in [0, 0xFFFFFFFF] */ \ 362 + r7 = r6; /* linked: same id as r6 */ \ 363 + w7 += 1; /* alu32: r7.id |= BPF_ADD_CONST32 */ \ 364 + r8 = 0xFFFFffff ll; \ 365 + if r6 < r8 goto l0_%=; \ 366 + /* r6 in [0xFFFFFFFF, 0xFFFFFFFF] */ \ 367 + /* sync_linked_regs: known_reg=r6, reg=r7 */ \ 368 + /* CPU: w7 = (u32)(0xFFFFFFFF + 1) = 0, zext -> r7 = 0 */ \ 369 + /* With fix: r7 64-bit = [0, 0] (zext applied) */ \ 370 + /* Without fix: r7 64-bit = [0x100000000] (no zext) */ \ 371 + r7 >>= 32; \ 372 + if r7 == 0 goto l0_%=; \ 373 + r0 /= 0; /* unreachable with fix */ \ 374 + l0_%=: \ 375 + r0 = 0; \ 376 + exit; \ 377 + " : 378 + : __imm(bpf_get_prandom_u32) 379 + : __clobber_all); 380 + } 381 + 382 + /* 383 + * Test that sync_linked_regs() skips propagation when one register used 384 + * alu32 (BPF_ADD_CONST32) and the other used alu64 (BPF_ADD_CONST64). 385 + * The delta relationship doesn't hold across different ALU widths. 386 + */ 387 + SEC("socket") 388 + __failure __msg("div by zero") 389 + __naked void scalars_alu32_alu64_cross_type(void) 390 + { 391 + asm volatile (" \ 392 + call %[bpf_get_prandom_u32]; \ 393 + w6 = w0; /* r6 in [0, 0xFFFFFFFF] */ \ 394 + r7 = r6; /* linked: same id as r6 */ \ 395 + w7 += 1; /* alu32: BPF_ADD_CONST32, delta = 1 */ \ 396 + r8 = r6; /* linked: same id as r6 */ \ 397 + r8 += 2; /* alu64: BPF_ADD_CONST64, delta = 2 */ \ 398 + r9 = 0xFFFFffff ll; \ 399 + if r7 < r9 goto l0_%=; \ 400 + /* r7 = 0xFFFFFFFF */ \ 401 + /* sync: known_reg=r7 (ADD_CONST32), reg=r8 (ADD_CONST64) */ \ 402 + /* Without fix: r8 = zext(0xFFFFFFFF + 1) = 0 */ \ 403 + /* With fix: r8 stays [2, 0x100000001] (r8 >= 2) */ \ 404 + if r8 > 0 goto l1_%=; \ 405 + goto l0_%=; \ 406 + l1_%=: \ 407 + r0 /= 0; /* div by zero */ \ 408 + l0_%=: \ 409 + r0 = 0; \ 410 + exit; \ 411 + " : 412 + : __imm(bpf_get_prandom_u32) 413 + : __clobber_all); 414 + } 415 + 416 + /* 417 + * Test that regsafe() prevents pruning when two paths reach the same program 418 + * point with linked registers carrying different ADD_CONST flags (one 419 + * BPF_ADD_CONST32 from alu32, another BPF_ADD_CONST64 from alu64). 420 + */ 421 + SEC("socket") 422 + __failure __msg("div by zero") 423 + __flag(BPF_F_TEST_STATE_FREQ) 424 + __naked void scalars_alu32_alu64_regsafe_pruning(void) 425 + { 426 + asm volatile (" \ 427 + call %[bpf_get_prandom_u32]; \ 428 + w6 = w0; /* r6 in [0, 0xFFFFFFFF] */ \ 429 + r7 = r6; /* linked: same id as r6 */ \ 430 + /* Get another random value for the path branch */ \ 431 + call %[bpf_get_prandom_u32]; \ 432 + if r0 > 0 goto l_pathb_%=; \ 433 + /* Path A: alu32 */ \ 434 + w7 += 1; /* BPF_ADD_CONST32, delta = 1 */\ 435 + goto l_merge_%=; \ 436 + l_pathb_%=: \ 437 + /* Path B: alu64 */ \ 438 + r7 += 1; /* BPF_ADD_CONST64, delta = 1 */\ 439 + l_merge_%=: \ 440 + /* Merge point: regsafe() compares path B against cached path A. */ \ 441 + /* Narrow r6 to trigger sync_linked_regs for r7 */ \ 442 + r9 = 0xFFFFffff ll; \ 443 + if r6 < r9 goto l0_%=; \ 444 + /* r6 = 0xFFFFFFFF */ \ 445 + /* sync: r7 = 0xFFFFFFFF + 1 = 0x100000000 */ \ 446 + /* Path A: zext -> r7 = 0 */ \ 447 + /* Path B: no zext -> r7 = 0x100000000 */ \ 448 + r7 >>= 32; \ 449 + if r7 == 0 goto l0_%=; \ 450 + r0 /= 0; /* div by zero on path B */ \ 451 + l0_%=: \ 452 + r0 = 0; \ 453 + exit; \ 454 + " : 455 + : __imm(bpf_get_prandom_u32) 456 + : __clobber_all); 457 + } 458 + 351 459 SEC("socket") 352 460 __success 353 461 void alu32_negative_offset(void)
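The zero-extension distinction these tests rely on can be reproduced directly in C: a 32-bit ALU add wraps in 32 bits and zero-extends into the 64-bit register, while a 64-bit add of the same delta does not. The helper names below (`alu32_add`, `alu64_add`) are illustrative, not verifier APIs:

```c
#include <assert.h>
#include <stdint.h>

/* What the CPU does for "w7 += imm": add in 32 bits (wrapping),
 * then zero-extend the result into the full 64-bit register. */
static uint64_t alu32_add(uint64_t r, uint32_t imm)
{
	return (uint64_t)(uint32_t)((uint32_t)r + imm);
}

/* What a sync that ignores BPF_ADD_CONST32 would compute:
 * a plain 64-bit add, with no 32-bit wrap and no zero-extension. */
static uint64_t alu64_add(uint64_t r, uint64_t imm)
{
	return r + imm;
}
```

Starting from `r == 0xFFFFFFFF` with `imm == 1`, the two results differ by 2^32, which is exactly the divergence the `r7 >>= 32; if r7 == 0` sequence in the tests turns into a reachable or unreachable division by zero.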
+58
tools/testing/selftests/bpf/progs/verifier_sdiv.c
··· 1209 1209 : __clobber_all); 1210 1210 } 1211 1211 1212 + SEC("socket") 1213 + __description("SDIV32, INT_MIN divided by 2, imm") 1214 + __success __success_unpriv __retval(-1073741824) 1215 + __naked void sdiv32_int_min_div_2_imm(void) 1216 + { 1217 + asm volatile (" \ 1218 + w0 = %[int_min]; \ 1219 + w0 s/= 2; \ 1220 + exit; \ 1221 + " : 1222 + : __imm_const(int_min, INT_MIN) 1223 + : __clobber_all); 1224 + } 1225 + 1226 + SEC("socket") 1227 + __description("SDIV32, INT_MIN divided by 2, reg") 1228 + __success __success_unpriv __retval(-1073741824) 1229 + __naked void sdiv32_int_min_div_2_reg(void) 1230 + { 1231 + asm volatile (" \ 1232 + w0 = %[int_min]; \ 1233 + w1 = 2; \ 1234 + w0 s/= w1; \ 1235 + exit; \ 1236 + " : 1237 + : __imm_const(int_min, INT_MIN) 1238 + : __clobber_all); 1239 + } 1240 + 1241 + SEC("socket") 1242 + __description("SMOD32, INT_MIN modulo 2, imm") 1243 + __success __success_unpriv __retval(0) 1244 + __naked void smod32_int_min_mod_2_imm(void) 1245 + { 1246 + asm volatile (" \ 1247 + w0 = %[int_min]; \ 1248 + w0 s%%= 2; \ 1249 + exit; \ 1250 + " : 1251 + : __imm_const(int_min, INT_MIN) 1252 + : __clobber_all); 1253 + } 1254 + 1255 + SEC("socket") 1256 + __description("SMOD32, INT_MIN modulo -2, imm") 1257 + __success __success_unpriv __retval(0) 1258 + __naked void smod32_int_min_mod_neg2_imm(void) 1259 + { 1260 + asm volatile (" \ 1261 + w0 = %[int_min]; \ 1262 + w0 s%%= -2; \ 1263 + exit; \ 1264 + " : 1265 + : __imm_const(int_min, INT_MIN) 1266 + : __clobber_all); 1267 + } 1268 + 1269 + 1212 1270 #else 1213 1271 1214 1272 SEC("socket")
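The expected retvals in the new SDIV32/SMOD32 tests follow from ordinary signed 32-bit arithmetic; since the divisors are 2 and -2 (not -1), none of these cases overflow, so plain C matches the BPF semantics being tested. A minimal check (`sdiv32`/`smod32` are illustrative wrappers, not test harness functions):

```c
#include <assert.h>
#include <limits.h>

/* The divisors used in the tests (2 and -2) cannot overflow INT_MIN
 * division, so C's truncating signed division and remainder match
 * the retvals the tests expect. */
static int sdiv32(int a, int b) { return a / b; }
static int smod32(int a, int b) { return a % b; }
```

INT_MIN / 2 is -1073741824, and INT_MIN is even, so both modulo cases yield 0 regardless of the divisor's sign.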
+12
tools/testing/selftests/hid/progs/hid_bpf_helpers.h
··· 6 6 #define __HID_BPF_HELPERS_H 7 7 8 8 /* "undefine" structs and enums in vmlinux.h, because we "override" them below */ 9 + #define bpf_wq bpf_wq___not_used 9 10 #define hid_bpf_ctx hid_bpf_ctx___not_used 10 11 #define hid_bpf_ops hid_bpf_ops___not_used 12 + #define hid_device hid_device___not_used 11 13 #define hid_report_type hid_report_type___not_used 12 14 #define hid_class_request hid_class_request___not_used 13 15 #define hid_bpf_attach_flags hid_bpf_attach_flags___not_used ··· 29 27 30 28 #include "vmlinux.h" 31 29 30 + #undef bpf_wq 32 31 #undef hid_bpf_ctx 33 32 #undef hid_bpf_ops 33 + #undef hid_device 34 34 #undef hid_report_type 35 35 #undef hid_class_request 36 36 #undef hid_bpf_attach_flags ··· 57 53 HID_FEATURE_REPORT = 2, 58 54 59 55 HID_REPORT_TYPES, 56 + }; 57 + 58 + struct hid_device { 59 + unsigned int id; 60 + } __attribute__((preserve_access_index)); 61 + 62 + struct bpf_wq { 63 + __u64 __opaque[2]; 60 64 }; 61 65 62 66 struct hid_bpf_ctx {
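The hunk above extends the header's rename trick: each kernel struct name is aliased away before `vmlinux.h` is included, then un-aliased so a minimal local definition (with `preserve_access_index` for CO-RE relocation) can take its place. The preprocessor mechanics can be demonstrated standalone; here a hand-written "big" definition stands in for the one `vmlinux.h` would provide, and the `preserve_access_index` attribute is omitted since it is BPF-clang specific:

```c
#include <assert.h>

/* Alias the tag away, as hid_bpf_helpers.h does before including
 * vmlinux.h: the next definition lands under a throwaway name. */
#define hid_device hid_device___not_used

/* Stand-in for the large kernel definition; after the #define above
 * its actual tag is hid_device___not_used. */
struct hid_device {
	unsigned int id;
	char kernel_only_state[256];
};

#undef hid_device

/* Minimal local override: only the fields the BPF programs touch. */
struct hid_device {
	unsigned int id;
};
```

Both tags now coexist: the throwaway full definition and the small override, which is why the header must `#define` every name it plans to override (here also `bpf_wq` and `hid_device`) before the include.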