Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v5.13-rc6' into char-misc-next

We need the fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4152 -2046
+11 -2
.clang-format
··· 109 109 - 'css_for_each_child' 110 110 - 'css_for_each_descendant_post' 111 111 - 'css_for_each_descendant_pre' 112 - - 'cxl_for_each_cmd' 113 112 - 'device_for_each_child_node' 113 + - 'displayid_iter_for_each' 114 114 - 'dma_fence_chain_for_each' 115 115 - 'do_for_each_ftrace_op' 116 116 - 'drm_atomic_crtc_for_each_plane' ··· 136 136 - 'drm_mm_for_each_node_in_range' 137 137 - 'drm_mm_for_each_node_safe' 138 138 - 'flow_action_for_each' 139 + - 'for_each_acpi_dev_match' 139 140 - 'for_each_active_dev_scope' 140 141 - 'for_each_active_drhd_unit' 141 142 - 'for_each_active_iommu' ··· 172 171 - 'for_each_dapm_widgets' 173 172 - 'for_each_dev_addr' 174 173 - 'for_each_dev_scope' 175 - - 'for_each_displayid_db' 176 174 - 'for_each_dma_cap_mask' 177 175 - 'for_each_dpcm_be' 178 176 - 'for_each_dpcm_be_rollback' ··· 179 179 - 'for_each_dpcm_fe' 180 180 - 'for_each_drhd_unit' 181 181 - 'for_each_dss_dev' 182 + - 'for_each_dtpm_table' 182 183 - 'for_each_efi_memory_desc' 183 184 - 'for_each_efi_memory_desc_in_map' 184 185 - 'for_each_element' ··· 216 215 - 'for_each_migratetype_order' 217 216 - 'for_each_msi_entry' 218 217 - 'for_each_msi_entry_safe' 218 + - 'for_each_msi_vector' 219 219 - 'for_each_net' 220 220 - 'for_each_net_continue_reverse' 221 221 - 'for_each_netdev' ··· 272 270 - 'for_each_prime_number_from' 273 271 - 'for_each_process' 274 272 - 'for_each_process_thread' 273 + - 'for_each_prop_codec_conf' 274 + - 'for_each_prop_dai_codec' 275 + - 'for_each_prop_dai_cpu' 276 + - 'for_each_prop_dlc_codecs' 277 + - 'for_each_prop_dlc_cpus' 278 + - 'for_each_prop_dlc_platforms' 275 279 - 'for_each_property_of_node' 276 280 - 'for_each_registered_fb' 277 281 - 'for_each_requested_gpio' ··· 438 430 - 'queue_for_each_hw_ctx' 439 431 - 'radix_tree_for_each_slot' 440 432 - 'radix_tree_for_each_tagged' 433 + - 'rb_for_each' 441 434 - 'rbtree_postorder_for_each_entry_safe' 442 435 - 'rdma_for_each_block' 443 436 - 'rdma_for_each_port'
+3
.mailmap
··· 243 243 Mayuresh Janorkar <mayur@ti.com> 244 244 Michael Buesch <m@bues.ch> 245 245 Michel Dänzer <michel@tungstengraphics.com> 246 + Michel Lespinasse <michel@lespinasse.org> 247 + Michel Lespinasse <michel@lespinasse.org> <walken@google.com> 248 + Michel Lespinasse <michel@lespinasse.org> <walken@zoy.org> 246 249 Miguel Ojeda <ojeda@kernel.org> <miguel.ojeda.sandonis@gmail.com> 247 250 Mike Rapoport <rppt@kernel.org> <mike@compulab.co.il> 248 251 Mike Rapoport <rppt@kernel.org> <mike.rapoport@gmail.com>
+15
Documentation/devicetree/bindings/connector/usb-connector.yaml
··· 149 149 maxItems: 6 150 150 $ref: /schemas/types.yaml#/definitions/uint32-array 151 151 152 + sink-vdos-v1: 153 + description: An array of u32 with each entry, a Vendor Defined Message Object (VDO), 154 + providing additional information corresponding to the product, the detailed bit 155 + definitions and the order of each VDO can be found in 156 + "USB Power Delivery Specification Revision 2.0, Version 1.3" chapter 6.4.4.3.1 Discover 157 + Identity. User can specify the VDO array via VDO_IDH/_CERT/_PRODUCT/_CABLE/_AMA defined in 158 + dt-bindings/usb/pd.h. 159 + minItems: 3 160 + maxItems: 6 161 + $ref: /schemas/types.yaml#/definitions/uint32-array 162 + 152 163 op-sink-microwatt: 153 164 description: Sink required operating power in microwatt, if source can't 154 165 offer the power, Capability Mismatch is set. Required for power sink and ··· 217 206 SNK_DISCOVERY) and the actual currrent limit after reception of PS_Ready for PD link or during 218 207 SNK_READY for non-pd link. 219 208 type: boolean 209 + 210 + dependencies: 211 + sink-vdos-v1: [ 'sink-vdos' ] 212 + sink-vdos: [ 'sink-vdos-v1' ] 220 213 221 214 required: 222 215 - compatible
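The `sink-vdos-v1` property added above could be populated roughly as follows. This is an illustrative sketch only: the hex values are placeholders standing in for real VDOs, which the binding says should be built with the `VDO_IDH`/`VDO_CERT`/`VDO_PRODUCT` macros from `dt-bindings/usb/pd.h`; note that the new `dependencies` rule requires `sink-vdos` and `sink-vdos-v1` to be specified together.

```dts
connector {
	compatible = "usb-c-connector";
	power-role = "sink";

	/* PD 3.0 Discover Identity VDOs (placeholder values) */
	sink-vdos = <0xff008001 0x00000000 0x12340000>;
	/* PD 2.0 counterpart; must accompany sink-vdos */
	sink-vdos-v1 = <0xd1008001 0x00000000 0x12340000>;
};
```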
+1 -1
Documentation/devicetree/bindings/hwmon/ti,ads7828.yaml
··· 49 49 #size-cells = <0>; 50 50 51 51 adc@48 { 52 - comatible = "ti,ads7828"; 52 + compatible = "ti,ads7828"; 53 53 reg = <0x48>; 54 54 vref-supply = <&vref>; 55 55 ti,differential-input;
+1 -3
Documentation/devicetree/bindings/media/renesas,drif.yaml
··· 67 67 maxItems: 1 68 68 69 69 clock-names: 70 - maxItems: 1 71 - items: 72 - - const: fck 70 + const: fck 73 71 74 72 resets: 75 73 maxItems: 1
+2 -2
Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
··· 57 57 rate 58 58 59 59 sound-dai: 60 - $ref: /schemas/types.yaml#/definitions/phandle 60 + $ref: /schemas/types.yaml#/definitions/phandle-array 61 61 description: phandle of the CPU DAI 62 62 63 63 patternProperties: ··· 71 71 72 72 properties: 73 73 sound-dai: 74 - $ref: /schemas/types.yaml#/definitions/phandle 74 + $ref: /schemas/types.yaml#/definitions/phandle-array 75 75 description: phandle of the codec DAI 76 76 77 77 required:
+2 -2
Documentation/virt/kvm/mmu.rst
··· 171 171 shadow pages) so role.quadrant takes values in the range 0..3. Each 172 172 quadrant maps 1GB virtual address space. 173 173 role.access: 174 - Inherited guest access permissions in the form uwx. Note execute 175 - permission is positive, not negative. 174 + Inherited guest access permissions from the parent ptes in the form uwx. 175 + Note execute permission is positive, not negative. 176 176 role.invalid: 177 177 The page is invalid and should not be used. It is a root page that is 178 178 currently pinned (by a cpu hardware register pointing to it); once it is
+16 -5
MAINTAINERS
··· 3877 3877 S: Maintained 3878 3878 W: http://btrfs.wiki.kernel.org/ 3879 3879 Q: http://patchwork.kernel.org/project/linux-btrfs/list/ 3880 + C: irc://irc.libera.chat/btrfs 3880 3881 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git 3881 3882 F: Documentation/filesystems/btrfs.rst 3882 3883 F: fs/btrfs/ ··· 6946 6945 FANOTIFY 6947 6946 M: Jan Kara <jack@suse.cz> 6948 6947 R: Amir Goldstein <amir73il@gmail.com> 6948 + R: Matthew Bobrowski <repnop@google.com> 6949 6949 L: linux-fsdevel@vger.kernel.org 6950 6950 S: Maintained 6951 6951 F: fs/notify/fanotify/ ··· 12905 12903 12906 12904 NFC SUBSYSTEM 12907 12905 M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> 12908 - L: linux-nfc@lists.01.org (moderated for non-subscribers) 12906 + L: linux-nfc@lists.01.org (subscribers-only) 12909 12907 L: netdev@vger.kernel.org 12910 12908 S: Maintained 12911 12909 F: Documentation/devicetree/bindings/net/nfc/ ··· 12918 12916 NFC VIRTUAL NCI DEVICE DRIVER 12919 12917 M: Bongsu Jeon <bongsu.jeon@samsung.com> 12920 12918 L: netdev@vger.kernel.org 12921 - L: linux-nfc@lists.01.org (moderated for non-subscribers) 12919 + L: linux-nfc@lists.01.org (subscribers-only) 12922 12920 S: Supported 12923 12921 F: drivers/nfc/virtual_ncidev.c 12924 12922 F: tools/testing/selftests/nci/ ··· 13216 13214 13217 13215 NXP-NCI NFC DRIVER 13218 13216 R: Charles Gorand <charles.gorand@effinnov.com> 13219 - L: linux-nfc@lists.01.org (moderated for non-subscribers) 13217 + L: linux-nfc@lists.01.org (subscribers-only) 13220 13218 S: Supported 13221 13219 F: drivers/nfc/nxp-nci ··· 14119 14117 PCI ENDPOINT SUBSYSTEM 14120 14118 M: Kishon Vijay Abraham I <kishon@ti.com> 14121 14119 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> 14120 + R: Krzysztof Wilczyński <kw@linux.com> 14122 14121 L: linux-pci@vger.kernel.org 14123 14122 S: Supported 14124 14123 F: Documentation/PCI/endpoint/* ··· 14168 14165 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS 14169 14166 M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> 14170 14167 R: Rob Herring <robh@kernel.org> 14168 + R: Krzysztof Wilczyński <kw@linux.com> 14171 14169 L: linux-pci@vger.kernel.org 14172 14170 S: Supported 14173 14171 Q: http://patchwork.ozlabs.org/project/linux-pci/list/ ··· 16147 16143 SAMSUNG S3FWRN5 NFC DRIVER 16148 16144 M: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> 16149 16145 M: Krzysztof Opasiak <k.opasiak@samsung.com> 16150 - L: linux-nfc@lists.01.org (moderated for non-subscribers) 16146 + L: linux-nfc@lists.01.org (subscribers-only) 16151 16147 S: Maintained 16152 16148 F: Documentation/devicetree/bindings/net/nfc/samsung,s3fwrn5.yaml 16153 16149 F: drivers/nfc/s3fwrn5 ··· 18337 18333 TI TRF7970A NFC DRIVER 18338 18334 M: Mark Greer <mgreer@animalcreek.com> 18339 18335 L: linux-wireless@vger.kernel.org 18340 - L: linux-nfc@lists.01.org (moderated for non-subscribers) 18336 + L: linux-nfc@lists.01.org (subscribers-only) 18341 18337 S: Supported 18342 18338 F: Documentation/devicetree/bindings/net/nfc/trf7970a.txt 18343 18339 F: drivers/nfc/trf7970a.c ··· 18872 18868 S: Maintained 18873 18869 F: drivers/usb/host/isp116x* 18874 18870 F: include/linux/usb/isp116x.h 18871 + 18872 + USB ISP1760 DRIVER 18873 + M: Rui Miguel Silva <rui.silva@linaro.org> 18874 + L: linux-usb@vger.kernel.org 18875 + S: Maintained 18876 + F: drivers/usb/isp1760/* 18877 + F: Documentation/devicetree/bindings/usb/nxp,isp1760.yaml 18875 18878 18876 18879 USB LAN78XX ETHERNET DRIVER 18877 18880 M: Woojung Huh <woojung.huh@microchip.com>
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 13 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc6 6 6 NAME = Frozen Wasteland 7 7 8 8 # *DOCUMENTATION*
+5 -1
arch/arm/boot/dts/imx6dl-yapp4-common.dtsi
··· 105 105 phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>; 106 106 phy-reset-duration = <20>; 107 107 phy-supply = <&sw2_reg>; 108 - phy-handle = <&ethphy0>; 109 108 status = "okay"; 109 + 110 + fixed-link { 111 + speed = <1000>; 112 + full-duplex; 113 + }; 110 114 111 115 mdio { 112 116 #address-cells = <1>;
+12
arch/arm/boot/dts/imx6q-dhcom-som.dtsi
··· 406 406 vin-supply = <&sw1_reg>; 407 407 }; 408 408 409 + &reg_pu { 410 + vin-supply = <&sw1_reg>; 411 + }; 412 + 413 + &reg_vdd1p1 { 414 + vin-supply = <&sw2_reg>; 415 + }; 416 + 417 + &reg_vdd2p5 { 418 + vin-supply = <&sw2_reg>; 419 + }; 420 + 409 421 &uart1 { 410 422 pinctrl-names = "default"; 411 423 pinctrl-0 = <&pinctrl_uart1>;
+1 -1
arch/arm/boot/dts/imx6qdl-emcon-avari.dtsi
··· 126 126 compatible = "nxp,pca8574"; 127 127 reg = <0x3a>; 128 128 gpio-controller; 129 - #gpio-cells = <1>; 129 + #gpio-cells = <2>; 130 130 }; 131 131 }; 132 132
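For context on the `#gpio-cells` fix above: with two cells, each GPIO specifier carries a line index plus a flags word, so a consumer of this pca8574 expander would reference it along these lines (hypothetical node and phandle label, for illustration only):

```dts
backlight-enable {
	/* line 3 on the expander, active low (illustrative) */
	gpios = <&gpio_expander 3 GPIO_ACTIVE_LOW>;
};
```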
+1 -1
arch/arm/boot/dts/imx7d-meerkat96.dts
··· 193 193 pinctrl-names = "default"; 194 194 pinctrl-0 = <&pinctrl_usdhc1>; 195 195 keep-power-in-suspend; 196 - tuning-step = <2>; 196 + fsl,tuning-step = <2>; 197 197 vmmc-supply = <&reg_3p3v>; 198 198 no-1-8-v; 199 199 broken-cd;
+1 -1
arch/arm/boot/dts/imx7d-pico.dtsi
··· 351 351 pinctrl-2 = <&pinctrl_usdhc1_200mhz>; 352 352 cd-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>; 353 353 bus-width = <4>; 354 - tuning-step = <2>; 354 + fsl,tuning-step = <2>; 355 355 vmmc-supply = <&reg_3p3v>; 356 356 wakeup-source; 357 357 no-1-8-v;
+3 -2
arch/arm/include/asm/cpuidle.h
··· 7 7 #ifdef CONFIG_CPU_IDLE 8 8 extern int arm_cpuidle_simple_enter(struct cpuidle_device *dev, 9 9 struct cpuidle_driver *drv, int index); 10 + #define __cpuidle_method_section __used __section("__cpuidle_method_of_table") 10 11 #else 11 12 static inline int arm_cpuidle_simple_enter(struct cpuidle_device *dev, 12 13 struct cpuidle_driver *drv, int index) { return -ENODEV; } 14 + #define __cpuidle_method_section __maybe_unused /* drop silently */ 13 15 #endif 14 16 15 17 /* Common ARM WFI state */ ··· 44 42 45 43 #define CPUIDLE_METHOD_OF_DECLARE(name, _method, _ops) \ 46 44 static const struct of_cpuidle_method __cpuidle_method_of_table_##name \ 47 - __used __section("__cpuidle_method_of_table") \ 48 - = { .method = _method, .ops = _ops } 45 + __cpuidle_method_section = { .method = _method, .ops = _ops } 49 46 50 47 extern int arm_cpuidle_suspend(int index); 51 48
+1
arch/arm/mach-imx/pm-imx27.c
··· 12 12 #include <linux/suspend.h> 13 13 #include <linux/io.h> 14 14 15 + #include "common.h" 15 16 #include "hardware.h" 16 17 17 18 static int mx27_suspend_enter(suspend_state_t state)
-14
arch/arm/mach-omap1/board-ams-delta.c
··· 458 458 459 459 #ifdef CONFIG_LEDS_TRIGGERS 460 460 DEFINE_LED_TRIGGER(ams_delta_camera_led_trigger); 461 - 462 - static int ams_delta_camera_power(struct device *dev, int power) 463 - { 464 - /* 465 - * turn on camera LED 466 - */ 467 - if (power) 468 - led_trigger_event(ams_delta_camera_led_trigger, LED_FULL); 469 - else 470 - led_trigger_event(ams_delta_camera_led_trigger, LED_OFF); 471 - return 0; 472 - } 473 - #else 474 - #define ams_delta_camera_power NULL 475 461 #endif 476 462 477 463 static struct platform_device ams_delta_audio_device = {
+3 -1
arch/arm/mach-omap1/board-h2.c
··· 320 320 { 321 321 if (!IS_BUILTIN(CONFIG_TPS65010)) 322 322 return -ENOSYS; 323 - 323 + 324 324 tps65010_config_vregs1(TPS_LDO2_ENABLE | TPS_VLDO2_3_0V | 325 325 TPS_LDO1_ENABLE | TPS_VLDO1_3_0V); 326 326 ··· 393 393 h2_nand_resource.end += SZ_4K - 1; 394 394 BUG_ON(gpio_request(H2_NAND_RB_GPIO_PIN, "NAND ready") < 0); 395 395 gpio_direction_input(H2_NAND_RB_GPIO_PIN); 396 + 397 + gpiod_add_lookup_table(&isp1301_gpiod_table); 396 398 397 399 omap_cfg_reg(L3_1610_FLASH_CS2B_OE); 398 400 omap_cfg_reg(M8_1610_FLASH_CS2B_WE);
+7 -3
arch/arm/mach-omap1/pm.c
··· 655 655 irq = INT_7XX_WAKE_UP_REQ; 656 656 else if (cpu_is_omap16xx()) 657 657 irq = INT_1610_WAKE_UP_REQ; 658 - if (request_irq(irq, omap_wakeup_interrupt, 0, "peripheral wakeup", 659 - NULL)) 660 - pr_err("Failed to request irq %d (peripheral wakeup)\n", irq); 658 + else 659 + irq = -1; 660 + 661 + if (irq >= 0) { 662 + if (request_irq(irq, omap_wakeup_interrupt, 0, "peripheral wakeup", NULL)) 663 + pr_err("Failed to request irq %d (peripheral wakeup)\n", irq); 664 + } 661 665 662 666 /* Program new power ramp-up time 663 667 * (0 for most boards since we don't lower voltage when in deep sleep)
+1 -1
arch/arm/mach-omap2/board-n8x0.c
··· 322 322 323 323 static void n8x0_mmc_callback(void *data, u8 card_mask) 324 324 { 325 + #ifdef CONFIG_MMC_OMAP 325 326 int bit, *openp, index; 326 327 327 328 if (board_is_n800()) { ··· 340 339 else 341 340 *openp = 0; 342 341 343 - #ifdef CONFIG_MMC_OMAP 344 342 omap_mmc_notify_cover_event(mmc_device, index, *openp); 345 343 #else 346 344 pr_warn("MMC: notify cover event not available\n");
+1
arch/arm64/Kconfig.platforms
··· 165 165 166 166 config ARCH_MESON 167 167 bool "Amlogic Platforms" 168 + select COMMON_CLK 168 169 select MESON_IRQ_GPIO 169 170 help 170 171 This enables support for the arm64 based Amlogic SoCs
+2 -1
arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var1.dts
··· 46 46 eee-broken-100tx; 47 47 qca,clk-out-frequency = <125000000>; 48 48 qca,clk-out-strength = <AR803X_STRENGTH_FULL>; 49 - vddio-supply = <&vddh>; 49 + qca,keep-pll-enabled; 50 + vddio-supply = <&vddio>; 50 51 51 52 vddio: vddio-regulator { 52 53 regulator-name = "VDDIO";
+2 -3
arch/arm64/boot/dts/freescale/fsl-ls1028a-kontron-sl28-var4.dts
··· 31 31 reg = <0x4>; 32 32 eee-broken-1000t; 33 33 eee-broken-100tx; 34 - 35 34 qca,clk-out-frequency = <125000000>; 36 35 qca,clk-out-strength = <AR803X_STRENGTH_FULL>; 37 - 38 - vddio-supply = <&vddh>; 36 + qca,keep-pll-enabled; 37 + vddio-supply = <&vddio>; 39 38 40 39 vddio: vddio-regulator { 41 40 regulator-name = "VDDIO";
+2 -2
arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
··· 197 197 ddr: memory-controller@1080000 { 198 198 compatible = "fsl,qoriq-memory-controller"; 199 199 reg = <0x0 0x1080000 0x0 0x1000>; 200 - interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>; 201 - big-endian; 200 + interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>; 201 + little-endian; 202 202 }; 203 203 204 204 dcfg: syscon@1e00000 {
+5 -5
arch/arm64/boot/dts/freescale/imx8mq-zii-ultra-rmb3.dts
··· 88 88 pinctrl-0 = <&pinctrl_codec2>; 89 89 reg = <0x18>; 90 90 #sound-dai-cells = <0>; 91 - HPVDD-supply = <&reg_3p3v>; 92 - SPRVDD-supply = <&reg_3p3v>; 93 - SPLVDD-supply = <&reg_3p3v>; 94 - AVDD-supply = <&reg_3p3v>; 95 - IOVDD-supply = <&reg_3p3v>; 91 + HPVDD-supply = <&reg_gen_3p3>; 92 + SPRVDD-supply = <&reg_gen_3p3>; 93 + SPLVDD-supply = <&reg_gen_3p3>; 94 + AVDD-supply = <&reg_gen_3p3>; 95 + IOVDD-supply = <&reg_gen_3p3>; 96 96 DVDD-supply = <&vgen4_reg>; 97 97 reset-gpios = <&gpio3 4 GPIO_ACTIVE_HIGH>; 98 98 };
+7 -16
arch/arm64/boot/dts/freescale/imx8mq-zii-ultra.dtsi
··· 45 45 reg_12p0_main: regulator-12p0-main { 46 46 compatible = "regulator-fixed"; 47 47 regulator-name = "12V_MAIN"; 48 - regulator-min-microvolt = <5000000>; 49 - regulator-max-microvolt = <5000000>; 48 + regulator-min-microvolt = <12000000>; 49 + regulator-max-microvolt = <12000000>; 50 50 regulator-always-on; 51 51 }; 52 52 ··· 69 69 }; 70 70 71 71 reg_gen_3p3: regulator-gen-3p3 { 72 - compatible = "regulator-fixed"; 73 - vin-supply = <&reg_3p3_main>; 74 - regulator-name = "GEN_3V3"; 75 - regulator-min-microvolt = <3300000>; 76 - regulator-max-microvolt = <3300000>; 77 - regulator-always-on; 78 - }; 79 - 80 - reg_3p3v: regulator-3p3v { 81 72 compatible = "regulator-fixed"; 82 73 vin-supply = <&reg_3p3_main>; 83 74 regulator-name = "GEN_3V3"; ··· 406 415 pinctrl-0 = <&pinctrl_codec1>; 407 416 reg = <0x18>; 408 417 #sound-dai-cells = <0>; 409 - HPVDD-supply = <&reg_3p3v>; 410 - SPRVDD-supply = <&reg_3p3v>; 411 - SPLVDD-supply = <&reg_3p3v>; 412 - AVDD-supply = <&reg_3p3v>; 413 - IOVDD-supply = <&reg_3p3v>; 418 + HPVDD-supply = <&reg_gen_3p3>; 419 + SPRVDD-supply = <&reg_gen_3p3>; 420 + SPLVDD-supply = <&reg_gen_3p3>; 421 + AVDD-supply = <&reg_gen_3p3>; 422 + IOVDD-supply = <&reg_gen_3p3>; 414 423 DVDD-supply = <&vgen4_reg>; 415 424 reset-gpios = <&gpio3 3 GPIO_ACTIVE_LOW>; 416 425 };
+6 -5
arch/arm64/boot/dts/ti/k3-am64-main.dtsi
··· 42 42 }; 43 43 }; 44 44 45 - dmss: dmss { 45 + dmss: bus@48000000 { 46 46 compatible = "simple-mfd"; 47 47 #address-cells = <2>; 48 48 #size-cells = <2>; 49 49 dma-ranges; 50 - ranges; 50 + ranges = <0x00 0x48000000 0x00 0x48000000 0x00 0x06400000>; 51 51 52 52 ti,sci-dev-id = <25>; 53 53 ··· 134 134 }; 135 135 }; 136 136 137 - dmsc: dmsc@44043000 { 137 + dmsc: system-controller@44043000 { 138 138 compatible = "ti,k2g-sci"; 139 139 ti,host-id = <12>; 140 140 mbox-names = "rx", "tx"; ··· 148 148 #power-domain-cells = <2>; 149 149 }; 150 150 151 - k3_clks: clocks { 151 + k3_clks: clock-controller { 152 152 compatible = "ti,k2g-sci-clk"; 153 153 #clock-cells = <2>; 154 154 }; ··· 373 373 clocks = <&k3_clks 145 0>; 374 374 }; 375 375 376 - main_gpio_intr: interrupt-controller0 { 376 + main_gpio_intr: interrupt-controller@a00000 { 377 377 compatible = "ti,sci-intr"; 378 + reg = <0x00 0x00a00000 0x00 0x800>; 378 379 ti,intr-trigger-type = <1>; 379 380 interrupt-controller; 380 381 interrupt-parent = <&gic500>;
+2 -1
arch/arm64/boot/dts/ti/k3-am64-mcu.dtsi
··· 74 74 clocks = <&k3_clks 148 0>; 75 75 }; 76 76 77 - mcu_gpio_intr: interrupt-controller1 { 77 + mcu_gpio_intr: interrupt-controller@4210000 { 78 78 compatible = "ti,sci-intr"; 79 + reg = <0x00 0x04210000 0x00 0x200>; 79 80 ti,intr-trigger-type = <1>; 80 81 interrupt-controller; 81 82 interrupt-parent = <&gic500>;
+6 -4
arch/arm64/boot/dts/ti/k3-am65-main.dtsi
··· 433 433 #phy-cells = <0>; 434 434 }; 435 435 436 - intr_main_gpio: interrupt-controller0 { 436 + intr_main_gpio: interrupt-controller@a00000 { 437 437 compatible = "ti,sci-intr"; 438 + reg = <0x0 0x00a00000 0x0 0x400>; 438 439 ti,intr-trigger-type = <1>; 439 440 interrupt-controller; 440 441 interrupt-parent = <&gic500>; ··· 445 444 ti,interrupt-ranges = <0 392 32>; 446 445 }; 447 446 448 - main-navss { 447 + main_navss: bus@30800000 { 449 448 compatible = "simple-mfd"; 450 449 #address-cells = <2>; 451 450 #size-cells = <2>; 452 - ranges; 451 + ranges = <0x0 0x30800000 0x0 0x30800000 0x0 0xbc00000>; 453 452 dma-coherent; 454 453 dma-ranges; 455 454 456 455 ti,sci-dev-id = <118>; 457 456 458 - intr_main_navss: interrupt-controller1 { 457 + intr_main_navss: interrupt-controller@310e0000 { 459 458 compatible = "ti,sci-intr"; 459 + reg = <0x0 0x310e0000 0x0 0x2000>; 460 460 ti,intr-trigger-type = <4>; 461 461 interrupt-controller; 462 462 interrupt-parent = <&gic500>;
+2 -2
arch/arm64/boot/dts/ti/k3-am65-mcu.dtsi
··· 116 116 }; 117 117 }; 118 118 119 - mcu-navss { 119 + mcu_navss: bus@28380000 { 120 120 compatible = "simple-mfd"; 121 121 #address-cells = <2>; 122 122 #size-cells = <2>; 123 - ranges; 123 + ranges = <0x00 0x28380000 0x00 0x28380000 0x00 0x03880000>; 124 124 dma-coherent; 125 125 dma-ranges; 126 126
+7 -6
arch/arm64/boot/dts/ti/k3-am65-wakeup.dtsi
··· 6 6 */ 7 7 8 8 &cbass_wakeup { 9 - dmsc: dmsc { 9 + dmsc: system-controller@44083000 { 10 10 compatible = "ti,am654-sci"; 11 11 ti,host-id = <12>; 12 - #address-cells = <1>; 13 - #size-cells = <1>; 14 - ranges; 15 12 16 13 mbox-names = "rx", "tx"; 17 14 18 15 mboxes= <&secure_proxy_main 11>, 19 16 <&secure_proxy_main 13>; 20 17 18 + reg-names = "debug_messages"; 19 + reg = <0x44083000 0x1000>; 20 + 21 21 k3_pds: power-controller { 22 22 compatible = "ti,sci-pm-domain"; 23 23 #power-domain-cells = <2>; 24 24 }; 25 25 26 - k3_clks: clocks { 26 + k3_clks: clock-controller { 27 27 compatible = "ti,k2g-sci-clk"; 28 28 #clock-cells = <2>; 29 29 }; ··· 69 69 power-domains = <&k3_pds 115 TI_SCI_PD_EXCLUSIVE>; 70 70 }; 71 71 72 - intr_wkup_gpio: interrupt-controller2 { 72 + intr_wkup_gpio: interrupt-controller@42200000 { 73 73 compatible = "ti,sci-intr"; 74 + reg = <0x42200000 0x200>; 74 75 ti,intr-trigger-type = <1>; 75 76 interrupt-controller; 76 77 interrupt-parent = <&gic500>;
-31
arch/arm64/boot/dts/ti/k3-am654-base-board.dts
··· 85 85 gpios = <&wkup_gpio0 27 GPIO_ACTIVE_LOW>; 86 86 }; 87 87 }; 88 - 89 - clk_ov5640_fixed: clock { 90 - compatible = "fixed-clock"; 91 - #clock-cells = <0>; 92 - clock-frequency = <24000000>; 93 - }; 94 88 }; 95 89 96 90 &wkup_pmx0 { ··· 281 287 pinctrl-names = "default"; 282 288 pinctrl-0 = <&main_i2c1_pins_default>; 283 289 clock-frequency = <400000>; 284 - 285 - ov5640: camera@3c { 286 - compatible = "ovti,ov5640"; 287 - reg = <0x3c>; 288 - 289 - clocks = <&clk_ov5640_fixed>; 290 - clock-names = "xclk"; 291 - 292 - port { 293 - csi2_cam0: endpoint { 294 - remote-endpoint = <&csi2_phy0>; 295 - clock-lanes = <0>; 296 - data-lanes = <1 2>; 297 - }; 298 - }; 299 - }; 300 - 301 290 }; 302 291 303 292 &main_i2c2 { ··· 470 493 cdns,read-delay = <0>; 471 494 #address-cells = <1>; 472 495 #size-cells = <1>; 473 - }; 474 - }; 475 - 476 - &csi2_0 { 477 - csi2_phy0: endpoint { 478 - remote-endpoint = <&csi2_cam0>; 479 - clock-lanes = <0>; 480 - data-lanes = <1 2>; 481 496 }; 482 497 }; 483 498
+6 -2
arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
··· 68 68 }; 69 69 }; 70 70 71 - main_gpio_intr: interrupt-controller0 { 71 + main_gpio_intr: interrupt-controller@a00000 { 72 72 compatible = "ti,sci-intr"; 73 + reg = <0x00 0x00a00000 0x00 0x800>; 73 74 ti,intr-trigger-type = <1>; 74 75 interrupt-controller; 75 76 interrupt-parent = <&gic500>; ··· 86 85 #size-cells = <2>; 87 86 ranges = <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>; 88 87 ti,sci-dev-id = <199>; 88 + dma-coherent; 89 + dma-ranges; 89 90 90 - main_navss_intr: interrupt-controller1 { 91 + main_navss_intr: interrupt-controller@310e0000 { 91 92 compatible = "ti,sci-intr"; 93 + reg = <0x00 0x310e0000 0x00 0x4000>; 92 94 ti,intr-trigger-type = <4>; 93 95 interrupt-controller; 94 96 interrupt-parent = <&gic500>;
+4 -3
arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
··· 6 6 */ 7 7 8 8 &cbass_mcu_wakeup { 9 - dmsc: dmsc@44083000 { 9 + dmsc: system-controller@44083000 { 10 10 compatible = "ti,k2g-sci"; 11 11 ti,host-id = <12>; 12 12 ··· 23 23 #power-domain-cells = <2>; 24 24 }; 25 25 26 - k3_clks: clocks { 26 + k3_clks: clock-controller { 27 27 compatible = "ti,k2g-sci-clk"; 28 28 #clock-cells = <2>; 29 29 }; ··· 96 96 clock-names = "fclk"; 97 97 }; 98 98 99 - wkup_gpio_intr: interrupt-controller2 { 99 + wkup_gpio_intr: interrupt-controller@42200000 { 100 100 compatible = "ti,sci-intr"; 101 + reg = <0x00 0x42200000 0x00 0x400>; 101 102 ti,intr-trigger-type = <1>; 102 103 interrupt-controller; 103 104 interrupt-parent = <&gic500>;
+6 -4
arch/arm64/boot/dts/ti/k3-j721e-main.dtsi
··· 76 76 }; 77 77 }; 78 78 79 - main_gpio_intr: interrupt-controller0 { 79 + main_gpio_intr: interrupt-controller@a00000 { 80 80 compatible = "ti,sci-intr"; 81 + reg = <0x00 0x00a00000 0x00 0x800>; 81 82 ti,intr-trigger-type = <1>; 82 83 interrupt-controller; 83 84 interrupt-parent = <&gic500>; ··· 88 87 ti,interrupt-ranges = <8 392 56>; 89 88 }; 90 89 91 - main-navss { 90 + main_navss: bus@30000000 { 92 91 compatible = "simple-mfd"; 93 92 #address-cells = <2>; 94 93 #size-cells = <2>; 95 - ranges; 94 + ranges = <0x00 0x30000000 0x00 0x30000000 0x00 0x0c400000>; 96 95 dma-coherent; 97 96 dma-ranges; 98 97 99 98 ti,sci-dev-id = <199>; 100 99 101 - main_navss_intr: interrupt-controller1 { 100 + main_navss_intr: interrupt-controller@310e0000 { 102 101 compatible = "ti,sci-intr"; 102 + reg = <0x0 0x310e0000 0x0 0x4000>; 103 103 ti,intr-trigger-type = <4>; 104 104 interrupt-controller; 105 105 interrupt-parent = <&gic500>;
+6 -5
arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
··· 6 6 */ 7 7 8 8 &cbass_mcu_wakeup { 9 - dmsc: dmsc@44083000 { 9 + dmsc: system-controller@44083000 { 10 10 compatible = "ti,k2g-sci"; 11 11 ti,host-id = <12>; 12 12 ··· 23 23 #power-domain-cells = <2>; 24 24 }; 25 25 26 - k3_clks: clocks { 26 + k3_clks: clock-controller { 27 27 compatible = "ti,k2g-sci-clk"; 28 28 #clock-cells = <2>; 29 29 }; ··· 96 96 clock-names = "fclk"; 97 97 }; 98 98 99 - wkup_gpio_intr: interrupt-controller2 { 99 + wkup_gpio_intr: interrupt-controller@42200000 { 100 100 compatible = "ti,sci-intr"; 101 + reg = <0x00 0x42200000 0x00 0x400>; 101 102 ti,intr-trigger-type = <1>; 102 103 interrupt-controller; 103 104 interrupt-parent = <&gic500>; ··· 250 249 }; 251 250 }; 252 251 253 - mcu-navss { 252 + mcu_navss: bus@28380000 { 254 253 compatible = "simple-mfd"; 255 254 #address-cells = <2>; 256 255 #size-cells = <2>; 257 - ranges; 256 + ranges = <0x00 0x28380000 0x00 0x28380000 0x00 0x03880000>; 258 257 dma-coherent; 259 258 dma-ranges; 260 259
+14 -16
arch/mips/mm/cache.c
··· 158 158 EXPORT_SYMBOL(_page_cachable_default); 159 159 160 160 #define PM(p) __pgprot(_page_cachable_default | (p)) 161 - #define PVA(p) PM(_PAGE_VALID | _PAGE_ACCESSED | (p)) 162 161 163 162 static inline void setup_protection_map(void) 164 163 { 165 164 protection_map[0] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ); 166 - protection_map[1] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC); 167 - protection_map[2] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ); 168 - protection_map[3] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC); 169 - protection_map[4] = PVA(_PAGE_PRESENT); 170 - protection_map[5] = PVA(_PAGE_PRESENT); 171 - protection_map[6] = PVA(_PAGE_PRESENT); 172 - protection_map[7] = PVA(_PAGE_PRESENT); 165 + protection_map[1] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC); 166 + protection_map[2] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ); 167 + protection_map[3] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC); 168 + protection_map[4] = PM(_PAGE_PRESENT); 169 + protection_map[5] = PM(_PAGE_PRESENT); 170 + protection_map[6] = PM(_PAGE_PRESENT); 171 + protection_map[7] = PM(_PAGE_PRESENT); 173 172 174 173 protection_map[8] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_NO_READ); 175 - protection_map[9] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC); 176 - protection_map[10] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE | 174 + protection_map[9] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC); 175 + protection_map[10] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE | 177 176 _PAGE_NO_READ); 178 - protection_map[11] = PVA(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE); 179 - protection_map[12] = PVA(_PAGE_PRESENT); 180 - protection_map[13] = PVA(_PAGE_PRESENT); 181 - protection_map[14] = PVA(_PAGE_PRESENT); 182 - protection_map[15] = PVA(_PAGE_PRESENT); 177 + protection_map[11] = PM(_PAGE_PRESENT | _PAGE_NO_EXEC | _PAGE_WRITE); 178 + protection_map[12] = PM(_PAGE_PRESENT); 179 + protection_map[13] = PM(_PAGE_PRESENT); 180 + protection_map[14] = PM(_PAGE_PRESENT | _PAGE_WRITE); 181 + protection_map[15] = PM(_PAGE_PRESENT | _PAGE_WRITE); 183 182 } 184 183 185 - #undef _PVA 186 184 #undef PM 187 185 188 186 void cpu_cache_init(
+29
arch/powerpc/include/asm/pte-walk.h
··· 31 31 pgd_t *pgdir = init_mm.pgd; 32 32 return __find_linux_pte(pgdir, ea, NULL, hshift); 33 33 } 34 + 35 + /* 36 + * Convert a kernel vmap virtual address (vmalloc or ioremap space) to a 37 + * physical address, without taking locks. This can be used in real-mode. 38 + */ 39 + static inline phys_addr_t ppc_find_vmap_phys(unsigned long addr) 40 + { 41 + pte_t *ptep; 42 + phys_addr_t pa; 43 + int hugepage_shift; 44 + 45 + /* 46 + * init_mm does not free page tables, and does not do THP. It may 47 + * have huge pages from huge vmalloc / ioremap etc. 48 + */ 49 + ptep = find_init_mm_pte(addr, &hugepage_shift); 50 + if (WARN_ON(!ptep)) 51 + return 0; 52 + 53 + pa = PFN_PHYS(pte_pfn(*ptep)); 54 + 55 + if (!hugepage_shift) 56 + hugepage_shift = PAGE_SHIFT; 57 + 58 + pa |= addr & ((1ul << hugepage_shift) - 1); 59 + 60 + return pa; 61 + } 62 + 34 63 /* 35 64 * This is what we should always use. Any other lockless page table lookup needs 36 65 * careful audit against THP split.
+1 -22
arch/powerpc/kernel/eeh.c
··· 346 346 */ 347 347 static inline unsigned long eeh_token_to_phys(unsigned long token) 348 348 { 349 - pte_t *ptep; 350 - unsigned long pa; 351 - int hugepage_shift; 352 - 353 - /* 354 - * We won't find hugepages here(this is iomem). Hence we are not 355 - * worried about _PAGE_SPLITTING/collapse. Also we will not hit 356 - * page table free, because of init_mm. 357 - */ 358 - ptep = find_init_mm_pte(token, &hugepage_shift); 359 - if (!ptep) 360 - return token; 361 - 362 - pa = pte_pfn(*ptep); 363 - 364 - /* On radix we can do hugepage mappings for io, so handle that */ 365 - if (!hugepage_shift) 366 - hugepage_shift = PAGE_SHIFT; 367 - 368 - pa <<= PAGE_SHIFT; 369 - pa |= token & ((1ul << hugepage_shift) - 1); 370 - return pa; 349 + return ppc_find_vmap_phys(token); 371 350 } 372 351 373 352 /*
+3 -13
arch/powerpc/kernel/io-workarounds.c
··· 55 55 #ifdef CONFIG_PPC_INDIRECT_MMIO 56 56 struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr) 57 57 { 58 - unsigned hugepage_shift; 59 58 struct iowa_bus *bus; 60 59 int token; 61 60 ··· 64 65 bus = &iowa_busses[token - 1]; 65 66 else { 66 67 unsigned long vaddr, paddr; 67 - pte_t *ptep; 68 68 69 69 vaddr = (unsigned long)PCI_FIX_ADDR(addr); 70 70 if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END) 71 71 return NULL; 72 - /* 73 - * We won't find huge pages here (iomem). Also can't hit 74 - * a page table free due to init_mm 75 - */ 76 - ptep = find_init_mm_pte(vaddr, &hugepage_shift); 77 - if (ptep == NULL) 78 - paddr = 0; 79 - else { 80 - WARN_ON(hugepage_shift); 81 - paddr = pte_pfn(*ptep) << PAGE_SHIFT; 82 - } 72 + 73 + paddr = ppc_find_vmap_phys(vaddr); 74 + 83 75 bus = iowa_pci_find(vaddr, paddr); 84 76 85 77 if (bus == NULL)
+5 -6
arch/powerpc/kernel/iommu.c
··· 898 898 unsigned int order; 899 899 unsigned int nio_pages, io_order; 900 900 struct page *page; 901 - size_t size_io = size; 902 901 903 902 size = PAGE_ALIGN(size); 904 903 order = get_order(size); ··· 924 925 memset(ret, 0, size); 925 926 926 927 /* Set up tces to cover the allocated range */ 927 - size_io = IOMMU_PAGE_ALIGN(size_io, tbl); 928 - nio_pages = size_io >> tbl->it_page_shift; 929 - io_order = get_iommu_order(size_io, tbl); 928 + nio_pages = size >> tbl->it_page_shift; 929 + io_order = get_iommu_order(size, tbl); 930 930 mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL, 931 931 mask >> tbl->it_page_shift, io_order, 0); 932 932 if (mapping == DMA_MAPPING_ERROR) { ··· 940 942 void *vaddr, dma_addr_t dma_handle) 941 943 { 942 944 if (tbl) { 943 - size_t size_io = IOMMU_PAGE_ALIGN(size, tbl); 944 - unsigned int nio_pages = size_io >> tbl->it_page_shift; 945 + unsigned int nio_pages; 945 946 947 + size = PAGE_ALIGN(size); 948 + nio_pages = size >> tbl->it_page_shift; 946 949 iommu_free(tbl, dma_handle, nio_pages); 947 950 size = PAGE_ALIGN(size); 948 951 free_pages((unsigned long)vaddr, get_order(size));
+2 -2
arch/powerpc/kernel/kprobes.c
··· 108 108 int ret = 0; 109 109 struct kprobe *prev; 110 110 struct ppc_inst insn = ppc_inst_read((struct ppc_inst *)p->addr); 111 - struct ppc_inst prefix = ppc_inst_read((struct ppc_inst *)(p->addr - 1)); 112 111 113 112 if ((unsigned long)p->addr & 0x03) { 114 113 printk("Attempt to register kprobe at an unaligned address\n"); ··· 115 116 } else if (IS_MTMSRD(insn) || IS_RFID(insn) || IS_RFI(insn)) { 116 117 printk("Cannot register a kprobe on rfi/rfid or mtmsr[d]\n"); 117 118 ret = -EINVAL; 118 - } else if (ppc_inst_prefixed(prefix)) { 119 + } else if ((unsigned long)p->addr & ~PAGE_MASK && 120 + ppc_inst_prefixed(ppc_inst_read((struct ppc_inst *)(p->addr - 1)))) { 119 121 printk("Cannot register a kprobe on the second word of prefixed instruction\n"); 120 122 ret = -EINVAL; 121 123 }
-1
arch/powerpc/kvm/book3s_hv.c
··· 4455 4455 mtspr(SPRN_EBBRR, ebb_regs[1]); 4456 4456 mtspr(SPRN_BESCR, ebb_regs[2]); 4457 4457 mtspr(SPRN_TAR, user_tar); 4458 - mtspr(SPRN_FSCR, current->thread.fscr); 4459 4458 } 4460 4459 mtspr(SPRN_VRSAVE, user_vrsave); 4461 4460
+2 -13
arch/powerpc/kvm/book3s_hv_rm_mmu.c
··· 23 23 #include <asm/pte-walk.h> 24 24 25 25 /* Translate address of a vmalloc'd thing to a linear map address */ 26 - static void *real_vmalloc_addr(void *x) 26 + static void *real_vmalloc_addr(void *addr) 27 27 { 28 - unsigned long addr = (unsigned long) x; 29 - pte_t *p; 30 - /* 31 - * assume we don't have huge pages in vmalloc space... 32 - * So don't worry about THP collapse/split. Called 33 - * Only in realmode with MSR_EE = 0, hence won't need irq_save/restore. 34 - */ 35 - p = find_init_mm_pte(addr, NULL); 36 - if (!p || !pte_present(*p)) 37 - return NULL; 38 - addr = (pte_pfn(*p) << PAGE_SHIFT) | (addr & ~PAGE_MASK); 39 - return __va(addr); 28 + return __va(ppc_find_vmap_phys((unsigned long)addr)); 40 29 } 41 30 42 31 /* Return 1 if we need to do a global tlbie, 0 if we can use tlbiel */
+7
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 59 59 #define STACK_SLOT_UAMOR (SFS-88) 60 60 #define STACK_SLOT_DAWR1 (SFS-96) 61 61 #define STACK_SLOT_DAWRX1 (SFS-104) 62 + #define STACK_SLOT_FSCR (SFS-112) 62 63 /* the following is used by the P9 short path */ 63 64 #define STACK_SLOT_NVGPRS (SFS-152) /* 18 gprs */ 64 65 ··· 687 686 std r6, STACK_SLOT_DAWR0(r1) 688 687 std r7, STACK_SLOT_DAWRX0(r1) 689 688 std r8, STACK_SLOT_IAMR(r1) 689 + mfspr r5, SPRN_FSCR 690 + std r5, STACK_SLOT_FSCR(r1) 690 691 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 691 692 BEGIN_FTR_SECTION 692 693 mfspr r6, SPRN_DAWR1 ··· 1666 1663 ld r7, STACK_SLOT_HFSCR(r1) 1667 1664 mtspr SPRN_HFSCR, r7 1668 1665 ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300) 1666 + BEGIN_FTR_SECTION 1667 + ld r5, STACK_SLOT_FSCR(r1) 1668 + mtspr SPRN_FSCR, r5 1669 + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 1669 1670 /* 1670 1671 * Restore various registers to 0, where non-zero values 1671 1672 * set by the guest could disrupt the host.
+9 -9
arch/riscv/Kconfig
··· 61 61 select GENERIC_TIME_VSYSCALL if MMU && 64BIT 62 62 select HANDLE_DOMAIN_IRQ 63 63 select HAVE_ARCH_AUDITSYSCALL 64 - select HAVE_ARCH_JUMP_LABEL 65 - select HAVE_ARCH_JUMP_LABEL_RELATIVE 64 + select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL 65 + select HAVE_ARCH_JUMP_LABEL_RELATIVE if !XIP_KERNEL 66 66 select HAVE_ARCH_KASAN if MMU && 64BIT 67 67 select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT 68 - select HAVE_ARCH_KGDB 68 + select HAVE_ARCH_KGDB if !XIP_KERNEL 69 69 select HAVE_ARCH_KGDB_QXFER_PKT 70 70 select HAVE_ARCH_MMAP_RND_BITS if MMU 71 71 select HAVE_ARCH_SECCOMP_FILTER ··· 80 80 select HAVE_GCC_PLUGINS 81 81 select HAVE_GENERIC_VDSO if MMU && 64BIT 82 82 select HAVE_IRQ_TIME_ACCOUNTING 83 - select HAVE_KPROBES 84 - select HAVE_KPROBES_ON_FTRACE 85 - select HAVE_KRETPROBES 83 + select HAVE_KPROBES if !XIP_KERNEL 84 + select HAVE_KPROBES_ON_FTRACE if !XIP_KERNEL 85 + select HAVE_KRETPROBES if !XIP_KERNEL 86 86 select HAVE_PCI 87 87 select HAVE_PERF_EVENTS 88 88 select HAVE_PERF_REGS ··· 231 231 bool "RV64I" 232 232 select 64BIT 233 233 select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && GCC_VERSION >= 50000 234 - select HAVE_DYNAMIC_FTRACE if MMU && $(cc-option,-fpatchable-function-entry=8) 234 + select HAVE_DYNAMIC_FTRACE if !XIP_KERNEL && MMU && $(cc-option,-fpatchable-function-entry=8) 235 235 select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE 236 - select HAVE_FTRACE_MCOUNT_RECORD 236 + select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL 237 237 select HAVE_FUNCTION_GRAPH_TRACER 238 - select HAVE_FUNCTION_TRACER 238 + select HAVE_FUNCTION_TRACER if !XIP_KERNEL 239 239 select SWIOTLB if MMU 240 240 241 241 endchoice
+9
arch/riscv/Makefile
··· 38 38 KBUILD_LDFLAGS += -melf32lriscv 39 39 endif 40 40 41 + ifeq ($(CONFIG_LD_IS_LLD),y) 42 + KBUILD_CFLAGS += -mno-relax 43 + KBUILD_AFLAGS += -mno-relax 44 + ifneq ($(LLVM_IAS),1) 45 + KBUILD_CFLAGS += -Wa,-mno-relax 46 + KBUILD_AFLAGS += -Wa,-mno-relax 47 + endif 48 + endif 49 + 41 50 # ISA string setting 42 51 riscv-march-$(CONFIG_ARCH_RV32I) := rv32ima 43 52 riscv-march-$(CONFIG_ARCH_RV64I) := rv64ima
+1
arch/riscv/boot/dts/microchip/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 dtb-$(CONFIG_SOC_MICROCHIP_POLARFIRE) += microchip-mpfs-icicle-kit.dtb 3 + obj-$(CONFIG_BUILTIN_DTB) += $(addsuffix .o, $(dtb-y))
+1
arch/riscv/boot/dts/sifive/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 dtb-$(CONFIG_SOC_SIFIVE) += hifive-unleashed-a00.dtb \ 3 3 hifive-unmatched-a00.dtb 4 + obj-$(CONFIG_BUILTIN_DTB) += $(addsuffix .o, $(dtb-y))
+1 -1
arch/riscv/errata/sifive/Makefile
··· 1 - obj-y += errata_cip_453.o 1 + obj-$(CONFIG_ERRATA_SIFIVE_CIP_453) += errata_cip_453.o 2 2 obj-y += errata.o
+2 -2
arch/riscv/include/asm/alternative-macros.h
··· 51 51 REG_ASM " " newlen "\n" \ 52 52 ".word " errata_id "\n" 53 53 54 - #define ALT_NEW_CONSTENT(vendor_id, errata_id, enable, new_c) \ 54 + #define ALT_NEW_CONTENT(vendor_id, errata_id, enable, new_c) \ 55 55 ".if " __stringify(enable) " == 1\n" \ 56 56 ".pushsection .alternative, \"a\"\n" \ 57 57 ALT_ENTRY("886b", "888f", __stringify(vendor_id), __stringify(errata_id), "889f - 888f") \ ··· 69 69 "886 :\n" \ 70 70 old_c "\n" \ 71 71 "887 :\n" \ 72 - ALT_NEW_CONSTENT(vendor_id, errata_id, enable, new_c) 72 + ALT_NEW_CONTENT(vendor_id, errata_id, enable, new_c) 73 73 74 74 #define _ALTERNATIVE_CFG(old_c, new_c, vendor_id, errata_id, CONFIG_k) \ 75 75 __ALTERNATIVE_CFG(old_c, new_c, vendor_id, errata_id, IS_ENABLED(CONFIG_k))
+2 -2
arch/riscv/kernel/setup.c
··· 231 231 232 232 /* Clean-up any unused pre-allocated resources */ 233 233 mem_res_sz = (num_resources - res_idx + 1) * sizeof(*mem_res); 234 - memblock_free((phys_addr_t) mem_res, mem_res_sz); 234 + memblock_free(__pa(mem_res), mem_res_sz); 235 235 return; 236 236 237 237 error: 238 238 /* Better an empty resource tree than an inconsistent one */ 239 239 release_child_resources(&iomem_resource); 240 - memblock_free((phys_addr_t) mem_res, mem_res_sz); 240 + memblock_free(__pa(mem_res), mem_res_sz); 241 241 } 242 242 243 243
+9 -4
arch/riscv/kernel/traps.c
··· 86 86 } 87 87 } 88 88 89 + #if defined (CONFIG_XIP_KERNEL) && defined (CONFIG_RISCV_ERRATA_ALTERNATIVE) 90 + #define __trap_section __section(".xip.traps") 91 + #else 92 + #define __trap_section 93 + #endif 89 94 #define DO_ERROR_INFO(name, signo, code, str) \ 90 - asmlinkage __visible void name(struct pt_regs *regs) \ 95 + asmlinkage __visible __trap_section void name(struct pt_regs *regs) \ 91 96 { \ 92 97 do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \ 93 98 } ··· 116 111 int handle_misaligned_load(struct pt_regs *regs); 117 112 int handle_misaligned_store(struct pt_regs *regs); 118 113 119 - asmlinkage void do_trap_load_misaligned(struct pt_regs *regs) 114 + asmlinkage void __trap_section do_trap_load_misaligned(struct pt_regs *regs) 120 115 { 121 116 if (!handle_misaligned_load(regs)) 122 117 return; ··· 124 119 "Oops - load address misaligned"); 125 120 } 126 121 127 - asmlinkage void do_trap_store_misaligned(struct pt_regs *regs) 122 + asmlinkage void __trap_section do_trap_store_misaligned(struct pt_regs *regs) 128 123 { 129 124 if (!handle_misaligned_store(regs)) 130 125 return; ··· 151 146 return GET_INSN_LENGTH(insn); 152 147 } 153 148 154 - asmlinkage __visible void do_trap_break(struct pt_regs *regs) 149 + asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs) 155 150 { 156 151 #ifdef CONFIG_KPROBES 157 152 if (kprobe_single_step_handler(regs))
+14 -1
arch/riscv/kernel/vmlinux-xip.lds.S
··· 99 99 } 100 100 PERCPU_SECTION(L1_CACHE_BYTES) 101 101 102 - . = ALIGN(PAGE_SIZE); 102 + . = ALIGN(8); 103 + .alternative : { 104 + __alt_start = .; 105 + *(.alternative) 106 + __alt_end = .; 107 + } 103 108 __init_end = .; 104 109 110 + . = ALIGN(16); 111 + .xip.traps : { 112 + __xip_traps_start = .; 113 + *(.xip.traps) 114 + __xip_traps_end = .; 115 + } 116 + 117 + . = ALIGN(PAGE_SIZE); 105 118 .sdata : { 106 119 __global_pointer$ = . + 0x800; 107 120 *(.sdata*)
+6 -2
arch/riscv/mm/init.c
··· 746 746 unsigned long init_data_start = (unsigned long)__init_data_begin; 747 747 unsigned long rodata_start = (unsigned long)__start_rodata; 748 748 unsigned long data_start = (unsigned long)_data; 749 - unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn))); 749 + #if defined(CONFIG_64BIT) && defined(CONFIG_MMU) 750 + unsigned long end_va = kernel_virt_addr + load_sz; 751 + #else 752 + unsigned long end_va = (unsigned long)(__va(PFN_PHYS(max_low_pfn))); 753 + #endif 750 754 751 755 set_memory_ro(text_start, (init_text_start - text_start) >> PAGE_SHIFT); 752 756 set_memory_ro(init_text_start, (init_data_start - init_text_start) >> PAGE_SHIFT); 753 757 set_memory_nx(init_data_start, (rodata_start - init_data_start) >> PAGE_SHIFT); 754 758 /* rodata section is marked readonly in mark_rodata_ro */ 755 759 set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT); 756 - set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT); 760 + set_memory_nx(data_start, (end_va - data_start) >> PAGE_SHIFT); 757 761 } 758 762 759 763 void mark_rodata_ro(void)
+3 -2
arch/x86/Makefile
··· 200 200 KBUILD_LDFLAGS += -m elf_$(UTS_MACHINE) 201 201 202 202 ifdef CONFIG_LTO_CLANG 203 - KBUILD_LDFLAGS += -plugin-opt=-code-model=kernel \ 204 - -plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8) 203 + ifeq ($(shell test $(CONFIG_LLD_VERSION) -lt 130000; echo $$?),0) 204 + KBUILD_LDFLAGS += -plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8) 205 + endif 205 206 endif 206 207 207 208 ifdef CONFIG_X86_NEED_RELOCS
+6 -3
arch/x86/events/intel/uncore_snbep.c
··· 1406 1406 die_id = i; 1407 1407 else 1408 1408 die_id = topology_phys_to_logical_pkg(i); 1409 + if (die_id < 0) 1410 + die_id = -ENODEV; 1409 1411 map->pbus_to_dieid[bus] = die_id; 1410 1412 break; 1411 1413 } ··· 1454 1452 i = -1; 1455 1453 if (reverse) { 1456 1454 for (bus = 255; bus >= 0; bus--) { 1457 - if (map->pbus_to_dieid[bus] >= 0) 1455 + if (map->pbus_to_dieid[bus] != -1) 1458 1456 i = map->pbus_to_dieid[bus]; 1459 1457 else 1460 1458 map->pbus_to_dieid[bus] = i; 1461 1459 } 1462 1460 } else { 1463 1461 for (bus = 0; bus <= 255; bus++) { 1464 - if (map->pbus_to_dieid[bus] >= 0) 1462 + if (map->pbus_to_dieid[bus] != -1) 1465 1463 i = map->pbus_to_dieid[bus]; 1466 1464 else 1467 1465 map->pbus_to_dieid[bus] = i; ··· 5099 5097 .perf_ctr = SNR_M2M_PCI_PMON_CTR0, 5100 5098 .event_ctl = SNR_M2M_PCI_PMON_CTL0, 5101 5099 .event_mask = SNBEP_PMON_RAW_EVENT_MASK, 5100 + .event_mask_ext = SNR_M2M_PCI_PMON_UMASK_EXT, 5102 5101 .box_ctl = SNR_M2M_PCI_PMON_BOX_CTL, 5103 5102 .ops = &snr_m2m_uncore_pci_ops, 5104 - .format_group = &skx_uncore_format_group, 5103 + .format_group = &snr_m2m_uncore_format_group, 5105 5104 }; 5106 5105 5107 5106 static struct attribute *icx_upi_uncore_formats_attr[] = {
+1
arch/x86/include/asm/apic.h
··· 174 174 extern int setup_APIC_eilvt(u8 lvt_off, u8 vector, u8 msg_type, u8 mask); 175 175 extern void lapic_assign_system_vectors(void); 176 176 extern void lapic_assign_legacy_vector(unsigned int isairq, bool replace); 177 + extern void lapic_update_legacy_vectors(void); 177 178 extern void lapic_online(void); 178 179 extern void lapic_offline(void); 179 180 extern bool apic_needs_pit(void);
+2 -5
arch/x86/include/asm/disabled-features.h
··· 56 56 # define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31)) 57 57 #endif 58 58 59 - #ifdef CONFIG_IOMMU_SUPPORT 60 - # define DISABLE_ENQCMD 0 61 - #else 62 - # define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31)) 63 - #endif 59 + /* Force disable because it's broken beyond repair */ 60 + #define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31)) 64 61 65 62 #ifdef CONFIG_X86_SGX 66 63 # define DISABLE_SGX 0
+1 -5
arch/x86/include/asm/fpu/api.h
··· 106 106 */ 107 107 #define PASID_DISABLED 0 108 108 109 - #ifdef CONFIG_IOMMU_SUPPORT 110 - /* Update current's PASID MSR/state by mm's PASID. */ 111 - void update_pasid(void); 112 - #else 113 109 static inline void update_pasid(void) { } 114 - #endif 110 + 115 111 #endif /* _ASM_X86_FPU_API_H */
-7
arch/x86/include/asm/fpu/internal.h
··· 584 584 pkru_val = pk->pkru; 585 585 } 586 586 __write_pkru(pkru_val); 587 - 588 - /* 589 - * Expensive PASID MSR write will be avoided in update_pasid() because 590 - * TIF_NEED_FPU_LOAD was set. And the PASID state won't be updated 591 - * unless it's different from mm->pasid to reduce overhead. 592 - */ 593 - update_pasid(); 594 587 } 595 588 596 589 #endif /* _ASM_X86_FPU_INTERNAL_H */
+3 -1
arch/x86/include/asm/thermal.h
··· 3 3 #define _ASM_X86_THERMAL_H 4 4 5 5 #ifdef CONFIG_X86_THERMAL_VECTOR 6 + void therm_lvt_init(void); 6 7 void intel_init_thermal(struct cpuinfo_x86 *c); 7 8 bool x86_thermal_enabled(void); 8 9 void intel_thermal_interrupt(void); 9 10 #else 10 - static inline void intel_init_thermal(struct cpuinfo_x86 *c) { } 11 + static inline void therm_lvt_init(void) { } 12 + static inline void intel_init_thermal(struct cpuinfo_x86 *c) { } 11 13 #endif 12 14 13 15 #endif /* _ASM_X86_THERMAL_H */
+46 -18
arch/x86/kernel/alternative.c
··· 183 183 } 184 184 185 185 /* 186 + * optimize_nops_range() - Optimize a sequence of single byte NOPs (0x90) 187 + * 188 + * @instr: instruction byte stream 189 + * @instrlen: length of the above 190 + * @off: offset within @instr where the first NOP has been detected 191 + * 192 + * Return: number of NOPs found (and replaced). 193 + */ 194 + static __always_inline int optimize_nops_range(u8 *instr, u8 instrlen, int off) 195 + { 196 + unsigned long flags; 197 + int i = off, nnops; 198 + 199 + while (i < instrlen) { 200 + if (instr[i] != 0x90) 201 + break; 202 + 203 + i++; 204 + } 205 + 206 + nnops = i - off; 207 + 208 + if (nnops <= 1) 209 + return nnops; 210 + 211 + local_irq_save(flags); 212 + add_nops(instr + off, nnops); 213 + local_irq_restore(flags); 214 + 215 + DUMP_BYTES(instr, instrlen, "%px: [%d:%d) optimized NOPs: ", instr, off, i); 216 + 217 + return nnops; 218 + } 219 + 220 + /* 186 221 * "noinline" to cause control flow change and thus invalidate I$ and 187 222 * cause refetch after modification. 188 223 */ 189 224 static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *instr) 190 225 { 191 - unsigned long flags; 192 226 struct insn insn; 193 - int nop, i = 0; 227 + int i = 0; 194 228 195 229 /* 196 - * Jump over the non-NOP insns, the remaining bytes must be single-byte 197 - * NOPs, optimize them. 230 + * Jump over the non-NOP insns and optimize single-byte NOPs into bigger 231 + * ones. 198 232 */ 199 233 for (;;) { 200 234 if (insn_decode_kernel(&insn, &instr[i])) 201 235 return; 202 236 237 + /* 238 + * See if this and any potentially following NOPs can be 239 + * optimized. 
240 + */ 203 241 if (insn.length == 1 && insn.opcode.bytes[0] == 0x90) 204 - break; 242 + i += optimize_nops_range(instr, a->instrlen, i); 243 + else 244 + i += insn.length; 205 245 206 - if ((i += insn.length) >= a->instrlen) 246 + if (i >= a->instrlen) 207 247 return; 208 248 } 209 - 210 - for (nop = i; i < a->instrlen; i++) { 211 - if (WARN_ONCE(instr[i] != 0x90, "Not a NOP at 0x%px\n", &instr[i])) 212 - return; 213 - } 214 - 215 - local_irq_save(flags); 216 - add_nops(instr + nop, i - nop); 217 - local_irq_restore(flags); 218 - 219 - DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ", 220 - instr, nop, a->instrlen); 221 249 } 222 250 223 251 /*
+1
arch/x86/kernel/apic/apic.c
··· 2604 2604 end_local_APIC_setup(); 2605 2605 irq_remap_enable_fault_handling(); 2606 2606 setup_IO_APIC(); 2607 + lapic_update_legacy_vectors(); 2607 2608 } 2608 2609 2609 2610 #ifdef CONFIG_UP_LATE_INIT
+20
arch/x86/kernel/apic/vector.c
··· 738 738 irq_matrix_assign_system(vector_matrix, ISA_IRQ_VECTOR(irq), replace); 739 739 } 740 740 741 + void __init lapic_update_legacy_vectors(void) 742 + { 743 + unsigned int i; 744 + 745 + if (IS_ENABLED(CONFIG_X86_IO_APIC) && nr_ioapics > 0) 746 + return; 747 + 748 + /* 749 + * If the IO/APIC is disabled via config, kernel command line or 750 + * lack of enumeration then all legacy interrupts are routed 751 + * through the PIC. Make sure that they are marked as legacy 752 + * vectors. PIC_CASCADE_IRQ has already been marked in 753 + * lapic_assign_system_vectors(). 754 + */ 755 + for (i = 0; i < nr_legacy_irqs(); i++) { 756 + if (i != PIC_CASCADE_IR) 757 + lapic_assign_legacy_vector(i, true); 758 + } 759 + } 760 + 741 761 void __init lapic_assign_system_vectors(void) 742 762 { 743 763 unsigned int i, vector = 0;
+2 -2
arch/x86/kernel/cpu/perfctr-watchdog.c
··· 63 63 case 15: 64 64 return msr - MSR_P4_BPU_PERFCTR0; 65 65 } 66 - fallthrough; 66 + break; 67 67 case X86_VENDOR_ZHAOXIN: 68 68 case X86_VENDOR_CENTAUR: 69 69 return msr - MSR_ARCH_PERFMON_PERFCTR0; ··· 96 96 case 15: 97 97 return msr - MSR_P4_BSU_ESCR0; 98 98 } 99 - fallthrough; 99 + break; 100 100 case X86_VENDOR_ZHAOXIN: 101 101 case X86_VENDOR_CENTAUR: 102 102 return msr - MSR_ARCH_PERFMON_EVENTSEL0;
-57
arch/x86/kernel/fpu/xstate.c
··· 1402 1402 return 0; 1403 1403 } 1404 1404 #endif /* CONFIG_PROC_PID_ARCH_STATUS */ 1405 - 1406 - #ifdef CONFIG_IOMMU_SUPPORT 1407 - void update_pasid(void) 1408 - { 1409 - u64 pasid_state; 1410 - u32 pasid; 1411 - 1412 - if (!cpu_feature_enabled(X86_FEATURE_ENQCMD)) 1413 - return; 1414 - 1415 - if (!current->mm) 1416 - return; 1417 - 1418 - pasid = READ_ONCE(current->mm->pasid); 1419 - /* Set the valid bit in the PASID MSR/state only for valid pasid. */ 1420 - pasid_state = pasid == PASID_DISABLED ? 1421 - pasid : pasid | MSR_IA32_PASID_VALID; 1422 - 1423 - /* 1424 - * No need to hold fregs_lock() since the task's fpstate won't 1425 - * be changed by others (e.g. ptrace) while the task is being 1426 - * switched to or is in IPI. 1427 - */ 1428 - if (!test_thread_flag(TIF_NEED_FPU_LOAD)) { 1429 - /* The MSR is active and can be directly updated. */ 1430 - wrmsrl(MSR_IA32_PASID, pasid_state); 1431 - } else { 1432 - struct fpu *fpu = &current->thread.fpu; 1433 - struct ia32_pasid_state *ppasid_state; 1434 - struct xregs_state *xsave; 1435 - 1436 - /* 1437 - * The CPU's xstate registers are not currently active. Just 1438 - * update the PASID state in the memory buffer here. The 1439 - * PASID MSR will be loaded when returning to user mode. 1440 - */ 1441 - xsave = &fpu->state.xsave; 1442 - xsave->header.xfeatures |= XFEATURE_MASK_PASID; 1443 - ppasid_state = get_xsave_addr(xsave, XFEATURE_PASID); 1444 - /* 1445 - * Since XFEATURE_MASK_PASID is set in xfeatures, ppasid_state 1446 - * won't be NULL and no need to check its value. 1447 - * 1448 - * Only update the task's PASID state when it's different 1449 - * from the mm's pasid. 1450 - */ 1451 - if (ppasid_state->pasid != pasid_state) { 1452 - /* 1453 - * Invalid fpregs so that state restoring will pick up 1454 - * the PASID state. 1455 - */ 1456 - __fpu_invalidate_fpregs_state(fpu); 1457 - ppasid_state->pasid = pasid_state; 1458 - } 1459 - } 1460 - } 1461 - #endif /* CONFIG_IOMMU_SUPPORT */
+30 -14
arch/x86/kernel/setup.c
··· 44 44 #include <asm/pci-direct.h> 45 45 #include <asm/prom.h> 46 46 #include <asm/proto.h> 47 + #include <asm/thermal.h> 47 48 #include <asm/unwind.h> 48 49 #include <asm/vsyscall.h> 49 50 #include <linux/vmalloc.h> ··· 638 637 * them from accessing certain memory ranges, namely anything below 639 638 * 1M and in the pages listed in bad_pages[] above. 640 639 * 641 - * To avoid these pages being ever accessed by SNB gfx devices 642 - * reserve all memory below the 1 MB mark and bad_pages that have 643 - * not already been reserved at boot time. 640 + * To avoid these pages being ever accessed by SNB gfx devices reserve 641 + * bad_pages that have not already been reserved at boot time. 642 + * All memory below the 1 MB mark is anyway reserved later during 643 + * setup_arch(), so there is no need to reserve it here. 644 644 */ 645 - memblock_reserve(0, 1<<20); 646 645 647 646 for (i = 0; i < ARRAY_SIZE(bad_pages); i++) { 648 647 if (memblock_reserve(bad_pages[i], PAGE_SIZE)) ··· 734 733 * The first 4Kb of memory is a BIOS owned area, but generally it is 735 734 * not listed as such in the E820 table. 736 735 * 737 - * Reserve the first memory page and typically some additional 738 - * memory (64KiB by default) since some BIOSes are known to corrupt 739 - * low memory. See the Kconfig help text for X86_RESERVE_LOW. 736 + * Reserve the first 64K of memory since some BIOSes are known to 737 + * corrupt low memory. After the real mode trampoline is allocated the 738 + * rest of the memory below 640k is reserved. 740 739 * 741 740 * In addition, make sure page 0 is always reserved because on 742 741 * systems with L1TF its contents can be leaked to user processes. 
743 742 */ 744 - memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE)); 743 + memblock_reserve(0, SZ_64K); 745 744 746 745 early_reserve_initrd(); 747 746 ··· 752 751 753 752 reserve_ibft_region(); 754 753 reserve_bios_regions(); 754 + trim_snb_memory(); 755 755 } 756 756 757 757 /* ··· 1083 1081 (max_pfn_mapped<<PAGE_SHIFT) - 1); 1084 1082 #endif 1085 1083 1086 - reserve_real_mode(); 1087 - 1088 1084 /* 1089 - * Reserving memory causing GPU hangs on Sandy Bridge integrated 1090 - * graphics devices should be done after we allocated memory under 1091 - * 1M for the real mode trampoline. 1085 + * Find free memory for the real mode trampoline and place it 1086 + * there. 1087 + * If there is not enough free memory under 1M, on EFI-enabled 1088 + * systems there will be an additional attempt to reclaim the memory 1089 + * for the real mode trampoline at efi_free_boot_services(). 1090 + * 1091 + * Unconditionally reserve the entire first 1M of RAM because 1092 + * BIOSes are known to corrupt low memory and several 1093 + * hundred kilobytes are not worth complex detection of what memory gets 1094 + * clobbered. Moreover, on machines with SandyBridge graphics or in 1095 + * setups that use crashkernel the entire 1M is reserved anyway. 1092 1096 */ 1093 - trim_snb_memory(); 1097 + reserve_real_mode(); 1094 1098 1095 1099 init_mem_mapping(); 1096 1100 ··· 1233 1225 x86_init.oem.banner(); 1234 1226 1235 1227 x86_init.timers.wallclock_init(); 1228 + 1229 + /* 1230 + * This needs to run before setup_local_APIC() which soft-disables the 1231 + * local APIC temporarily and that masks the thermal LVT interrupt, 1232 + * leading to softlockups on machines which have configured SMI 1233 + * interrupt delivery. 1234 + */ 1235 + therm_lvt_init(); 1236 1236 1237 1237 mcheck_init(); 1238 1238 
+11 -6
arch/x86/kvm/lapic.c
··· 1494 1494 1495 1495 static void cancel_hv_timer(struct kvm_lapic *apic); 1496 1496 1497 + static void cancel_apic_timer(struct kvm_lapic *apic) 1498 + { 1499 + hrtimer_cancel(&apic->lapic_timer.timer); 1500 + preempt_disable(); 1501 + if (apic->lapic_timer.hv_timer_in_use) 1502 + cancel_hv_timer(apic); 1503 + preempt_enable(); 1504 + } 1505 + 1497 1506 static void apic_update_lvtt(struct kvm_lapic *apic) 1498 1507 { 1499 1508 u32 timer_mode = kvm_lapic_get_reg(apic, APIC_LVTT) & ··· 1511 1502 if (apic->lapic_timer.timer_mode != timer_mode) { 1512 1503 if (apic_lvtt_tscdeadline(apic) != (timer_mode == 1513 1504 APIC_LVT_TIMER_TSCDEADLINE)) { 1514 - hrtimer_cancel(&apic->lapic_timer.timer); 1515 - preempt_disable(); 1516 - if (apic->lapic_timer.hv_timer_in_use) 1517 - cancel_hv_timer(apic); 1518 - preempt_enable(); 1505 + cancel_apic_timer(apic); 1519 1506 kvm_lapic_set_reg(apic, APIC_TMICT, 0); 1520 1507 apic->lapic_timer.period = 0; 1521 1508 apic->lapic_timer.tscdeadline = 0; ··· 2097 2092 if (apic_lvtt_tscdeadline(apic)) 2098 2093 break; 2099 2094 2100 - hrtimer_cancel(&apic->lapic_timer.timer); 2095 + cancel_apic_timer(apic); 2101 2096 kvm_lapic_set_reg(apic, APIC_TMICT, val); 2102 2097 start_apic_timer(apic); 2103 2098 break;
+9 -5
arch/x86/kvm/mmu/paging_tmpl.h
··· 90 90 gpa_t pte_gpa[PT_MAX_FULL_LEVELS]; 91 91 pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS]; 92 92 bool pte_writable[PT_MAX_FULL_LEVELS]; 93 - unsigned pt_access; 94 - unsigned pte_access; 93 + unsigned int pt_access[PT_MAX_FULL_LEVELS]; 94 + unsigned int pte_access; 95 95 gfn_t gfn; 96 96 struct x86_exception fault; 97 97 }; ··· 418 418 } 419 419 420 420 walker->ptes[walker->level - 1] = pte; 421 + 422 + /* Convert to ACC_*_MASK flags for struct guest_walker. */ 423 + walker->pt_access[walker->level - 1] = FNAME(gpte_access)(pt_access ^ walk_nx_mask); 421 424 } while (!is_last_gpte(mmu, walker->level, pte)); 422 425 423 426 pte_pkey = FNAME(gpte_pkeys)(vcpu, pte); 424 427 accessed_dirty = have_ad ? pte_access & PT_GUEST_ACCESSED_MASK : 0; 425 428 426 429 /* Convert to ACC_*_MASK flags for struct guest_walker. */ 427 - walker->pt_access = FNAME(gpte_access)(pt_access ^ walk_nx_mask); 428 430 walker->pte_access = FNAME(gpte_access)(pte_access ^ walk_nx_mask); 429 431 errcode = permission_fault(vcpu, mmu, walker->pte_access, pte_pkey, access); 430 432 if (unlikely(errcode)) ··· 465 463 } 466 464 467 465 pgprintk("%s: pte %llx pte_access %x pt_access %x\n", 468 - __func__, (u64)pte, walker->pte_access, walker->pt_access); 466 + __func__, (u64)pte, walker->pte_access, 467 + walker->pt_access[walker->level - 1]); 469 468 return 1; 470 469 471 470 error: ··· 646 643 bool huge_page_disallowed = exec && nx_huge_page_workaround_enabled; 647 644 struct kvm_mmu_page *sp = NULL; 648 645 struct kvm_shadow_walk_iterator it; 649 - unsigned direct_access, access = gw->pt_access; 646 + unsigned int direct_access, access; 650 647 int top_level, level, req_level, ret; 651 648 gfn_t base_gfn = gw->gfn; 652 649 ··· 678 675 sp = NULL; 679 676 if (!is_shadow_present_pte(*it.sptep)) { 680 677 table_gfn = gw->table_gfn[it.level - 2]; 678 + access = gw->pt_access[it.level - 2]; 681 679 sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1, 682 680 false, access); 683 681 }
+2 -4
arch/x86/kvm/svm/sev.c
··· 1103 1103 struct sev_data_send_start data; 1104 1104 int ret; 1105 1105 1106 + memset(&data, 0, sizeof(data)); 1106 1107 data.handle = sev->handle; 1107 1108 ret = sev_issue_cmd(kvm, SEV_CMD_SEND_START, &data, &argp->error); 1108 - if (ret < 0) 1109 - return ret; 1110 1109 1111 1110 params->session_len = data.session_len; 1112 1111 if (copy_to_user((void __user *)(uintptr_t)argp->data, params, ··· 1214 1215 struct sev_data_send_update_data data; 1215 1216 int ret; 1216 1217 1218 + memset(&data, 0, sizeof(data)); 1217 1219 data.handle = sev->handle; 1218 1220 ret = sev_issue_cmd(kvm, SEV_CMD_SEND_UPDATE_DATA, &data, &argp->error); 1219 - if (ret < 0) 1220 - return ret; 1221 1221 1222 1222 params->hdr_len = data.hdr_len; 1223 1223 params->trans_len = data.trans_len;
+3 -3
arch/x86/kvm/trace.h
··· 1550 1550 TP_ARGS(msg, err), 1551 1551 1552 1552 TP_STRUCT__entry( 1553 - __field(const char *, msg) 1553 + __string(msg, msg) 1554 1554 __field(u32, err) 1555 1555 ), 1556 1556 1557 1557 TP_fast_assign( 1558 - __entry->msg = msg; 1558 + __assign_str(msg, msg); 1559 1559 __entry->err = err; 1560 1560 ), 1561 1561 1562 - TP_printk("%s%s", __entry->msg, !__entry->err ? "" : 1562 + TP_printk("%s%s", __get_str(msg), !__entry->err ? "" : 1563 1563 __print_symbolic(__entry->err, VMX_VMENTER_INSTRUCTION_ERRORS)) 1564 1564 ); 1565 1565
+17 -2
arch/x86/kvm/x86.c
··· 3072 3072 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu) 3073 3073 { 3074 3074 ++vcpu->stat.tlb_flush; 3075 + 3076 + if (!tdp_enabled) { 3077 + /* 3078 + * A TLB flush on behalf of the guest is equivalent to 3079 + * INVPCID(all), toggling CR4.PGE, etc., which requires 3080 + * a forced sync of the shadow page tables. Unload the 3081 + * entire MMU here and the subsequent load will sync the 3082 + * shadow page tables, and also flush the TLB. 3083 + */ 3084 + kvm_mmu_unload(vcpu); 3085 + return; 3086 + } 3087 + 3075 3088 static_call(kvm_x86_tlb_flush_guest)(vcpu); 3076 3089 } 3077 3090 ··· 3114 3101 * expensive IPIs. 3115 3102 */ 3116 3103 if (guest_pv_has(vcpu, KVM_FEATURE_PV_TLB_FLUSH)) { 3104 + u8 st_preempted = xchg(&st->preempted, 0); 3105 + 3117 3106 trace_kvm_pv_tlb_flush(vcpu->vcpu_id, 3118 - st->preempted & KVM_VCPU_FLUSH_TLB); 3119 - if (xchg(&st->preempted, 0) & KVM_VCPU_FLUSH_TLB) 3107 + st_preempted & KVM_VCPU_FLUSH_TLB); 3108 + if (st_preempted & KVM_VCPU_FLUSH_TLB) 3120 3109 kvm_vcpu_flush_tlb_guest(vcpu); 3121 3110 } else { 3122 3111 st->preempted = 0;
+2 -2
arch/x86/mm/fault.c
··· 836 836 837 837 if (si_code == SEGV_PKUERR) 838 838 force_sig_pkuerr((void __user *)address, pkey); 839 - 840 - force_sig_fault(SIGSEGV, si_code, (void __user *)address); 839 + else 840 + force_sig_fault(SIGSEGV, si_code, (void __user *)address); 841 841 842 842 local_irq_disable(); 843 843 }
+6 -5
arch/x86/mm/mem_encrypt_identity.c
··· 504 504 #define AMD_SME_BIT BIT(0) 505 505 #define AMD_SEV_BIT BIT(1) 506 506 507 - /* Check the SEV MSR whether SEV or SME is enabled */ 508 - sev_status = __rdmsr(MSR_AMD64_SEV); 509 - feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT; 510 - 511 507 /* 512 508 * Check for the SME/SEV feature: 513 509 * CPUID Fn8000_001F[EAX] ··· 515 519 eax = 0x8000001f; 516 520 ecx = 0; 517 521 native_cpuid(&eax, &ebx, &ecx, &edx); 518 - if (!(eax & feature_mask)) 522 + /* Check whether SEV or SME is supported */ 523 + if (!(eax & (AMD_SEV_BIT | AMD_SME_BIT))) 519 524 return; 520 525 521 526 me_mask = 1UL << (ebx & 0x3f); 527 + 528 + /* Check the SEV MSR whether SEV or SME is enabled */ 529 + sev_status = __rdmsr(MSR_AMD64_SEV); 530 + feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT; 522 531 523 532 /* Check if memory encryption is enabled */ 524 533 if (feature_mask == AMD_SME_BIT) {
+12
arch/x86/platform/efi/quirks.c
··· 450 450 size -= rm_size; 451 451 } 452 452 453 + /* 454 + * Don't free memory under 1M for two reasons: 455 + * - BIOS might clobber it 456 + * - Crash kernel needs it to be reserved 457 + */ 458 + if (start + size < SZ_1M) 459 + continue; 460 + if (start < SZ_1M) { 461 + size -= (SZ_1M - start); 462 + start = SZ_1M; 463 + } 464 + 453 465 memblock_free_late(start, size); 454 466 } 455 467
+8 -6
arch/x86/realmode/init.c
··· 29 29 30 30 /* Has to be under 1M so we can execute real-mode AP code. */ 31 31 mem = memblock_find_in_range(0, 1<<20, size, PAGE_SIZE); 32 - if (!mem) { 33 33 pr_info("No sub-1M memory is available for the trampoline\n"); 34 - return; 35 - } 34 + else 35 + set_real_mode_mem(mem); 36 36 37 - memblock_reserve(mem, size); 38 - set_real_mode_mem(mem); 39 - crash_reserve_low_1M(); 37 + /* 38 + * Unconditionally reserve the entire first 1M, see comment in 39 + * setup_arch(). 40 + */ 41 + memblock_reserve(0, SZ_1M); 40 42 } 41 43 42 44 static void sme_sev_setup_real_mode(struct trampoline_header *th)
+2 -1
crypto/async_tx/async_xor.c
··· 233 233 if (submit->flags & ASYNC_TX_XOR_DROP_DST) { 234 234 src_cnt--; 235 235 src_list++; 236 - src_offs++; 236 + if (src_offs) 237 + src_offs++; 237 238 } 238 239 239 240 /* wait for any prerequisite operations */
+8
drivers/acpi/acpica/utdelete.c
··· 285 285 } 286 286 break; 287 287 288 + case ACPI_TYPE_LOCAL_ADDRESS_HANDLER: 289 + 290 + ACPI_DEBUG_PRINT((ACPI_DB_ALLOCATIONS, 291 + "***** Address handler %p\n", object)); 292 + 293 + acpi_os_delete_mutex(object->address_space.context_mutex); 294 + break; 295 + 288 296 default: 289 297 290 298 break;
+9 -20
drivers/acpi/bus.c
··· 330 330 if (ACPI_FAILURE(acpi_run_osc(handle, &context))) 331 331 return; 332 332 333 - capbuf_ret = context.ret.pointer; 334 - if (context.ret.length <= OSC_SUPPORT_DWORD) { 335 - kfree(context.ret.pointer); 336 - return; 337 - } 338 - 339 - /* 340 - * Now run _OSC again with query flag clear and with the caps 341 - * supported by both the OS and the platform. 342 - */ 343 - capbuf[OSC_QUERY_DWORD] = 0; 344 - capbuf[OSC_SUPPORT_DWORD] = capbuf_ret[OSC_SUPPORT_DWORD]; 345 333 kfree(context.ret.pointer); 334 + 335 + /* Now run _OSC again with query flag clear */ 336 + capbuf[OSC_QUERY_DWORD] = 0; 346 337 347 338 if (ACPI_FAILURE(acpi_run_osc(handle, &context))) 348 339 return; 349 340 350 341 capbuf_ret = context.ret.pointer; 351 - if (context.ret.length > OSC_SUPPORT_DWORD) { 352 - osc_sb_apei_support_acked = 353 - capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT; 354 - osc_pc_lpi_support_confirmed = 355 - capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT; 356 - osc_sb_native_usb4_support_confirmed = 357 - capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT; 358 - } 342 + osc_sb_apei_support_acked = 343 + capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT; 344 + osc_pc_lpi_support_confirmed = 345 + capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_PCLPI_SUPPORT; 346 + osc_sb_native_usb4_support_confirmed = 347 + capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_NATIVE_USB4_SUPPORT; 359 348 360 349 kfree(context.ret.pointer); 361 350 }
+1 -3
drivers/acpi/sleep.c
··· 1009 1009 return; 1010 1010 1011 1011 acpi_get_table(ACPI_SIG_FACS, 1, (struct acpi_table_header **)&facs); 1012 - if (facs) { 1012 + if (facs) 1013 1013 s4_hardware_signature = facs->hardware_signature; 1014 - acpi_put_table((struct acpi_table_header *)facs); 1015 - } 1016 1014 } 1017 1015 #else /* !CONFIG_HIBERNATION */ 1018 1016 static inline void acpi_sleep_hibernate_setup(void) {}
+3 -3
drivers/base/memory.c
··· 218 218 struct zone *zone; 219 219 int ret; 220 220 221 - zone = page_zone(pfn_to_page(start_pfn)); 222 - 223 221 /* 224 222 * Unaccount before offlining, such that unpopulated zone and kthreads 225 223 * can properly be torn down in offline_pages(). 226 224 */ 227 - if (nr_vmemmap_pages) 225 + if (nr_vmemmap_pages) { 226 + zone = page_zone(pfn_to_page(start_pfn)); 228 227 adjust_present_page_count(zone, -nr_vmemmap_pages); 228 + } 229 229 230 230 ret = offline_pages(start_pfn + nr_vmemmap_pages, 231 231 nr_pages - nr_vmemmap_pages);
+7 -18
drivers/block/loop.c
··· 1879 1879 1880 1880 static int lo_open(struct block_device *bdev, fmode_t mode) 1881 1881 { 1882 - struct loop_device *lo; 1882 + struct loop_device *lo = bdev->bd_disk->private_data; 1883 1883 int err; 1884 1884 1885 - /* 1886 - * take loop_ctl_mutex to protect lo pointer from race with 1887 - * loop_control_ioctl(LOOP_CTL_REMOVE), however, to reduce contention 1888 - * release it prior to updating lo->lo_refcnt. 1889 - */ 1890 - err = mutex_lock_killable(&loop_ctl_mutex); 1891 - if (err) 1892 - return err; 1893 - lo = bdev->bd_disk->private_data; 1894 - if (!lo) { 1895 - mutex_unlock(&loop_ctl_mutex); 1896 - return -ENXIO; 1897 - } 1898 1885 err = mutex_lock_killable(&lo->lo_mutex); 1899 - mutex_unlock(&loop_ctl_mutex); 1900 1886 if (err) 1901 1887 return err; 1902 - atomic_inc(&lo->lo_refcnt); 1888 + if (lo->lo_state == Lo_deleting) 1889 + err = -ENXIO; 1890 + else 1891 + atomic_inc(&lo->lo_refcnt); 1903 1892 mutex_unlock(&lo->lo_mutex); 1904 - return 0; 1893 + return err; 1905 1894 } 1906 1895 1907 1896 static void lo_release(struct gendisk *disk, fmode_t mode) ··· 2274 2285 mutex_unlock(&lo->lo_mutex); 2275 2286 break; 2276 2287 } 2277 - lo->lo_disk->private_data = NULL; 2288 + lo->lo_state = Lo_deleting; 2278 2289 mutex_unlock(&lo->lo_mutex); 2279 2290 idr_remove(&loop_index_idr, lo->lo_number); 2280 2291 loop_remove(lo);
+1
drivers/block/loop.h
··· 22 22 Lo_unbound, 23 23 Lo_bound, 24 24 Lo_rundown, 25 + Lo_deleting, 25 26 }; 26 27 27 28 struct loop_func_table;
+23 -2
drivers/bluetooth/btusb.c
··· 388 388 /* Realtek 8822CE Bluetooth devices */ 389 389 { USB_DEVICE(0x0bda, 0xb00c), .driver_info = BTUSB_REALTEK | 390 390 BTUSB_WIDEBAND_SPEECH }, 391 + { USB_DEVICE(0x0bda, 0xc822), .driver_info = BTUSB_REALTEK | 392 + BTUSB_WIDEBAND_SPEECH }, 391 393 392 394 /* Realtek 8852AE Bluetooth devices */ 393 395 { USB_DEVICE(0x0bda, 0xc852), .driver_info = BTUSB_REALTEK | ··· 2529 2527 } 2530 2528 2531 2529 btusb_setup_intel_newgen_get_fw_name(ver, fwname, sizeof(fwname), "sfi"); 2532 - err = request_firmware(&fw, fwname, &hdev->dev); 2530 + err = firmware_request_nowarn(&fw, fwname, &hdev->dev); 2533 2531 if (err < 0) { 2532 + if (!test_bit(BTUSB_BOOTLOADER, &data->flags)) { 2533 + /* Firmware has already been loaded */ 2534 + set_bit(BTUSB_FIRMWARE_LOADED, &data->flags); 2535 + return 0; 2536 + } 2537 + 2534 2538 bt_dev_err(hdev, "Failed to load Intel firmware file %s (%d)", 2535 2539 fwname, err); 2540 + 2536 2541 return err; 2537 2542 } 2538 2543 ··· 2689 2680 err = btusb_setup_intel_new_get_fw_name(ver, params, fwname, 2690 2681 sizeof(fwname), "sfi"); 2691 2682 if (err < 0) { 2683 + if (!test_bit(BTUSB_BOOTLOADER, &data->flags)) { 2684 + /* Firmware has already been loaded */ 2685 + set_bit(BTUSB_FIRMWARE_LOADED, &data->flags); 2686 + return 0; 2687 + } 2688 + 2692 2689 bt_dev_err(hdev, "Unsupported Intel firmware naming"); 2693 2690 return -EINVAL; 2694 2691 } 2695 2692 2696 - err = request_firmware(&fw, fwname, &hdev->dev); 2693 + err = firmware_request_nowarn(&fw, fwname, &hdev->dev); 2697 2694 if (err < 0) { 2695 + if (!test_bit(BTUSB_BOOTLOADER, &data->flags)) { 2696 + /* Firmware has already been loaded */ 2697 + set_bit(BTUSB_FIRMWARE_LOADED, &data->flags); 2698 + return 0; 2699 + } 2700 + 2698 2701 bt_dev_err(hdev, "Failed to load Intel firmware file %s (%d)", 2699 2702 fwname, err); 2700 2703 return err;
+38 -4
drivers/bus/mhi/pci_generic.c
··· 311 311 MHI_CHANNEL_CONFIG_DL(5, "DIAG", 32, 1), 
 312 312 MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0), 
 313 313 MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0), 
 314 - MHI_CHANNEL_CONFIG_UL(32, "AT", 32, 0), 
 315 - MHI_CHANNEL_CONFIG_DL(33, "AT", 32, 0), 
 314 + MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0), 
 315 + MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0), 
 316 316 MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2), 
 317 317 MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3), 
 318 318 }; 
 ··· 708 708 struct mhi_pci_device *mhi_pdev = pci_get_drvdata(pdev); 
 709 709 struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl; 
 710 710 
 711 - del_timer(&mhi_pdev->health_check_timer); 
 711 + del_timer_sync(&mhi_pdev->health_check_timer); 
 712 712 cancel_work_sync(&mhi_pdev->recovery_work); 
 713 713 
 714 714 if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) { 
 ··· 935 935 return ret; 
 936 936 } 
 937 937 
 938 + static int __maybe_unused mhi_pci_freeze(struct device *dev) 
 939 + { 
 940 + struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev); 
 941 + struct mhi_controller *mhi_cntrl = &mhi_pdev->mhi_cntrl; 
 942 + 
 943 + /* We want to stop all operations, hibernation does not guarantee that 
 944 + * device will be in the same state as before freezing, especially if 
 945 + * the intermediate restore kernel reinitializes MHI device with new 
 946 + * context. 
 947 + */ 
 948 + if (test_and_clear_bit(MHI_PCI_DEV_STARTED, &mhi_pdev->status)) { 
 949 + mhi_power_down(mhi_cntrl, false); 
 950 + mhi_unprepare_after_power_down(mhi_cntrl); 
 951 + } 
 952 + 
 953 + return 0; 
 954 + } 
 955 + 
 956 + static int __maybe_unused mhi_pci_restore(struct device *dev) 
 957 + { 
 958 + struct mhi_pci_device *mhi_pdev = dev_get_drvdata(dev); 
 959 + 
 960 + /* Reinitialize the device */ 
 961 + queue_work(system_long_wq, &mhi_pdev->recovery_work); 
 962 + 
 963 + return 0; 
 964 + } 
 965 + 
 938 966 static const struct dev_pm_ops mhi_pci_pm_ops = { 
 939 967 SET_RUNTIME_PM_OPS(mhi_pci_runtime_suspend, mhi_pci_runtime_resume, NULL) 
 940 - SET_SYSTEM_SLEEP_PM_OPS(mhi_pci_suspend, mhi_pci_resume) 
 968 + #ifdef CONFIG_PM_SLEEP 
 969 + .suspend = mhi_pci_suspend, 
 970 + .resume = mhi_pci_resume, 
 971 + .freeze = mhi_pci_freeze, 
 972 + .thaw = mhi_pci_restore, 
 973 + .restore = mhi_pci_restore, 
 974 + #endif 
 941 975 }; 
 942 976 
 943 977 static struct pci_driver mhi_pci_driver = {
+54 -6
drivers/bus/ti-sysc.c
··· 1334 1334 return error; 
 1335 1335 } 
 1336 1336 
 1337 + static int sysc_reinit_module(struct sysc *ddata, bool leave_enabled) 
 1338 + { 
 1339 + struct device *dev = ddata->dev; 
 1340 + int error; 
 1341 + 
 1342 + /* Disable target module if it is enabled */ 
 1343 + if (ddata->enabled) { 
 1344 + error = sysc_runtime_suspend(dev); 
 1345 + if (error) 
 1346 + dev_warn(dev, "reinit suspend failed: %i\n", error); 
 1347 + } 
 1348 + 
 1349 + /* Enable target module */ 
 1350 + error = sysc_runtime_resume(dev); 
 1351 + if (error) 
 1352 + dev_warn(dev, "reinit resume failed: %i\n", error); 
 1353 + 
 1354 + if (leave_enabled) 
 1355 + return error; 
 1356 + 
 1357 + /* Disable target module if no leave_enabled was set */ 
 1358 + error = sysc_runtime_suspend(dev); 
 1359 + if (error) 
 1360 + dev_warn(dev, "reinit suspend failed: %i\n", error); 
 1361 + 
 1362 + return error; 
 1363 + } 
 1364 + 
 1337 1365 static int __maybe_unused sysc_noirq_suspend(struct device *dev) 
 1338 1366 { 
 1339 1367 struct sysc *ddata; 
 ··· 1372 1344 (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE)) 
 1373 1345 return 0; 
 1374 1346 
 1375 - return pm_runtime_force_suspend(dev); 
 1347 + if (!ddata->enabled) 
 1348 + return 0; 
 1349 + 
 1350 + ddata->needs_resume = 1; 
 1351 + 
 1352 + return sysc_runtime_suspend(dev); 
 1376 1353 } 
 1377 1354 
 1378 1355 static int __maybe_unused sysc_noirq_resume(struct device *dev) 
 1379 1356 { 
 1380 1357 struct sysc *ddata; 
 1358 + int error = 0; 
 1381 1359 
 1382 1360 ddata = dev_get_drvdata(dev); 
 1383 1361 
 ··· 1391 1357 (SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_NO_IDLE)) 
 1392 1358 return 0; 
 1393 1359 
 1394 - return pm_runtime_force_resume(dev); 
 1360 + if (ddata->cfg.quirks & SYSC_QUIRK_REINIT_ON_RESUME) { 
 1361 + error = sysc_reinit_module(ddata, ddata->needs_resume); 
 1362 + if (error) 
 1363 + dev_warn(dev, "noirq_resume failed: %i\n", error); 
 1364 + } else if (ddata->needs_resume) { 
 1365 + error = sysc_runtime_resume(dev); 
 1366 + if (error) 
 1367 + dev_warn(dev, "noirq_resume failed: %i\n", error); 
 1368 + } 
 1369 + 
 1370 + ddata->needs_resume = 0; 
 1371 + 
 1372 + return error; 
 1395 1373 } 
 1396 1374 
 1397 1375 static const struct dev_pm_ops sysc_pm_ops = { 
 ··· 1454 1408 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 
 1455 1409 /* Uarts on omap4 and later */ 
 1456 1410 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff, 
 1457 - SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), 
 1411 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 
 1458 1412 SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff, 
 1459 - SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), 
 1413 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), 
 1460 1414 
 1461 1415 /* Quirks that need to be set based on the module address */ 
 1462 1416 SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff, 
 ··· 1505 1459 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 
 1506 1460 SYSC_QUIRK("tptc", 0, 0, -ENODEV, -ENODEV, 0x40007c00, 0xffffffff, 
 1507 1461 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 
 1462 + SYSC_QUIRK("sata", 0, 0xfc, 0x1100, -ENODEV, 0x5e412000, 0xffffffff, 
 1463 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 
 1508 1464 SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, 0x14, 0x50700100, 0xffffffff, 
 1509 1465 SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 
 1510 1466 SYSC_QUIRK("usb_host_hs", 0, 0, 0x10, -ENODEV, 0x50700101, 0xffffffff, 
 ··· 1514 1466 SYSC_QUIRK("usb_otg_hs", 0, 0x400, 0x404, 0x408, 0x00000050, 
 1515 1467 0xffffffff, SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 
 1516 1468 SYSC_QUIRK("usb_otg_hs", 0, 0, 0x10, -ENODEV, 0x4ea2080d, 0xffffffff, 
 1517 - SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY), 
 1469 + SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_SWSUP_MSTANDBY | 
 1470 + SYSC_QUIRK_REINIT_ON_RESUME), 
 1518 1471 SYSC_QUIRK("wdt", 0, 0, 0x10, 0x14, 0x502a0500, 0xfffff0f0, 
 1519 1472 SYSC_MODULE_QUIRK_WDT), 
 1520 1473 /* PRUSS on am3, am4 and am5 */ 
 ··· 1573 1524 SYSC_QUIRK("prcm", 0, 0, -ENODEV, -ENODEV, 0x40000400, 0xffffffff, 0), 
 1574 1525 SYSC_QUIRK("rfbi", 0x4832a800, 0, 0x10, 0x14, 0x00000010, 0xffffffff, 0), 
 1575 1526 SYSC_QUIRK("rfbi", 0x58002000, 0, 0x10, 0x14, 0x00000010, 0xffffffff, 0), 
 1576 - SYSC_QUIRK("sata", 0, 0xfc, 0x1100, -ENODEV, 0x5e412000, 0xffffffff, 0), 
 1577 1527 SYSC_QUIRK("scm", 0, 0, 0x10, -ENODEV, 0x40000900, 0xffffffff, 0), 
 1578 1528 SYSC_QUIRK("scm", 0, 0, -ENODEV, -ENODEV, 0x4e8b0100, 0xffffffff, 0), 
 1579 1529 SYSC_QUIRK("scm", 0, 0, -ENODEV, -ENODEV, 0x4f000100, 0xffffffff, 0),
+2 -2
drivers/dma/idxd/init.c
··· 745 745 * If the CPU does not support MOVDIR64B or ENQCMDS, there's no point in 746 746 * enumerating the device. We can not utilize it. 747 747 */ 748 - if (!boot_cpu_has(X86_FEATURE_MOVDIR64B)) { 748 + if (!cpu_feature_enabled(X86_FEATURE_MOVDIR64B)) { 749 749 pr_warn("idxd driver failed to load without MOVDIR64B.\n"); 750 750 return -ENODEV; 751 751 } 752 752 753 - if (!boot_cpu_has(X86_FEATURE_ENQCMD)) 753 + if (!cpu_feature_enabled(X86_FEATURE_ENQCMD)) 754 754 pr_warn("Platform does not have ENQCMD(S) support.\n"); 755 755 else 756 756 support_enqcmd = true;
+1 -3
drivers/firmware/efi/cper.c
··· 276 276 if (!msg || !(mem->validation_bits & CPER_MEM_VALID_MODULE_HANDLE)) 277 277 return 0; 278 278 279 - n = 0; 280 - len = CPER_REC_LEN - 1; 279 + len = CPER_REC_LEN; 281 280 dmi_memdev_name(mem->mem_dev_handle, &bank, &device); 282 281 if (bank && device) 283 282 n = snprintf(msg, len, "DIMM location: %s %s ", bank, device); ··· 285 286 "DIMM location: not present. DMI handle: 0x%.4x ", 286 287 mem->mem_dev_handle); 287 288 288 - msg[n] = '\0'; 289 289 return n; 290 290 } 291 291
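The cper.c hunk above is safe because C's `snprintf(buf, len, ...)` truncates to at most `len - 1` characters and always writes a terminating NUL when `len > 0`, so the manual `msg[n] = '\0'` and the `len = CPER_REC_LEN - 1` adjustment were redundant. A minimal userspace sketch of that guarantee, with a hypothetical `REC_LEN` standing in for `CPER_REC_LEN`:

```c
#include <stdio.h>

#define REC_LEN 16 /* stand-in for CPER_REC_LEN; the real value differs */

/* Mirrors the fixed pattern: pass the full buffer size and let snprintf
 * truncate and NUL-terminate. The return value is the would-be (untruncated)
 * length, which is what the caller reports. */
static int format_location(char *msg, const char *bank, const char *device)
{
	return snprintf(msg, REC_LEN, "DIMM location: %s %s ", bank, device);
}
```

With `bank = "A"` and `device = "B"` the full string is 19 characters, so the function returns 19 while the buffer holds the first 15 characters plus the terminator.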
+3
drivers/firmware/efi/fdtparams.c
··· 98 98 BUILD_BUG_ON(ARRAY_SIZE(target) != ARRAY_SIZE(name)); 99 99 BUILD_BUG_ON(ARRAY_SIZE(target) != ARRAY_SIZE(dt_params[0].params)); 100 100 101 + if (!fdt) 102 + return 0; 103 + 101 104 for (i = 0; i < ARRAY_SIZE(dt_params); i++) { 102 105 node = fdt_path_offset(fdt, dt_params[i].path); 103 106 if (node < 0)
+1 -1
drivers/firmware/efi/libstub/file.c
··· 103 103 return 0; 104 104 105 105 /* Skip any leading slashes */ 106 - while (cmdline[i] == L'/' || cmdline[i] == L'\\') 106 + while (i < cmdline_len && (cmdline[i] == L'/' || cmdline[i] == L'\\')) 107 107 i++; 108 108 109 109 while (--result_len > 0 && i < cmdline_len) {
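The libstub change above adds the `i < cmdline_len` bound so that a command line consisting only of separators can no longer be read past its end. A plain-`char` userspace sketch of the fixed loop (the real code walks `efi_char16_t` wide characters):

```c
#include <stddef.h>

/* Advance past leading '/' or '\\' separators, never reading beyond
 * cmdline_len — the index check comes first, so an all-separator buffer
 * stops exactly at the bound. */
static size_t skip_leading_separators(const char *cmdline, size_t cmdline_len)
{
	size_t i = 0;

	while (i < cmdline_len && (cmdline[i] == '/' || cmdline[i] == '\\'))
		i++;
	return i;
}
```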
-5
drivers/firmware/efi/memattr.c
··· 67 67 return false; 68 68 } 69 69 70 - if (!(in->attribute & (EFI_MEMORY_RO | EFI_MEMORY_XP))) { 71 - pr_warn("Entry attributes invalid: RO and XP bits both cleared\n"); 72 - return false; 73 - } 74 - 75 70 if (PAGE_SIZE > EFI_PAGE_SIZE && 76 71 (!PAGE_ALIGNED(in->phys_addr) || 77 72 !PAGE_ALIGNED(in->num_pages << EFI_PAGE_SHIFT))) {
+1 -1
drivers/gpio/gpio-wcd934x.c
··· 7 7 #include <linux/slab.h> 8 8 #include <linux/of_device.h> 9 9 10 - #define WCD_PIN_MASK(p) BIT(p - 1) 10 + #define WCD_PIN_MASK(p) BIT(p) 11 11 #define WCD_REG_DIR_CTL_OFFSET 0x42 12 12 #define WCD_REG_VAL_CTL_OFFSET 0x43 13 13 #define WCD934X_NPINS 5
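The one-character wcd934x fix matters because gpiolib hands drivers 0-based pin offsets: with `BIT(p - 1)`, pin 0 produced an undefined negative shift and every other pin toggled its neighbour's bit. A sketch of the corrected macro, with a userspace stand-in for the kernel's `BIT()`:

```c
/* Userspace stand-in for the kernel's BIT() macro */
#define BIT(n) (1U << (n))

/* Old (broken) form was BIT((p) - 1): pin 0 became BIT(-1), undefined
 * behaviour, and pin N drove bit N-1. The fixed form maps pin N to bit N. */
#define WCD_PIN_MASK(p) BIT(p)

static unsigned int wcd_pin_mask(unsigned int p)
{
	return WCD_PIN_MASK(p);
}
```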
-16
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
··· 337 337 { 338 338 struct amdgpu_ctx *ctx; 339 339 struct amdgpu_ctx_mgr *mgr; 340 - unsigned long ras_counter; 341 340 342 341 if (!fpriv) 343 342 return -EINVAL; ··· 360 361 361 362 if (atomic_read(&ctx->guilty)) 362 363 out->state.flags |= AMDGPU_CTX_QUERY2_FLAGS_GUILTY; 363 - 364 - /*query ue count*/ 365 - ras_counter = amdgpu_ras_query_error_count(adev, false); 366 - /*ras counter is monotonic increasing*/ 367 - if (ras_counter != ctx->ras_counter_ue) { 368 - out->state.flags |= AMDGPU_CTX_QUERY2_FLAGS_RAS_UE; 369 - ctx->ras_counter_ue = ras_counter; 370 - } 371 - 372 - /*query ce count*/ 373 - ras_counter = amdgpu_ras_query_error_count(adev, true); 374 - if (ras_counter != ctx->ras_counter_ce) { 375 - out->state.flags |= AMDGPU_CTX_QUERY2_FLAGS_RAS_CE; 376 - ctx->ras_counter_ce = ras_counter; 377 - } 378 364 379 365 mutex_unlock(&mgr->lock); 380 366 return 0;
+3 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3118 3118 */ 3119 3119 bool amdgpu_device_has_dc_support(struct amdgpu_device *adev) 3120 3120 { 3121 - if (amdgpu_sriov_vf(adev) || adev->enable_virtual_display) 3121 + if (amdgpu_sriov_vf(adev) || 3122 + adev->enable_virtual_display || 3123 + (adev->harvest_ip_mask & AMD_HARVEST_IP_DMU_MASK)) 3122 3124 return false; 3123 3125 3124 3126 return amdgpu_device_asic_has_dc_support(adev->asic_type);
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 1057 1057 1058 1058 return 0; 1059 1059 err: 1060 - drm_err(dev, "Failed to init gem fb: %d\n", ret); 1060 + drm_dbg_kms(dev, "Failed to init gem fb: %d\n", ret); 1061 1061 rfb->base.obj[0] = NULL; 1062 1062 return ret; 1063 1063 } ··· 1094 1094 1095 1095 return 0; 1096 1096 err: 1097 - drm_err(dev, "Failed to verify and init gem fb: %d\n", ret); 1097 + drm_dbg_kms(dev, "Failed to verify and init gem fb: %d\n", ret); 1098 1098 rfb->base.obj[0] = NULL; 1099 1099 return ret; 1100 1100 }
+23 -19
drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
··· 101 101 int amdgpu_fru_get_product_info(struct amdgpu_device *adev) 
 102 102 { 
 103 103 unsigned char buff[34]; 
 104 - int addrptr = 0, size = 0; 
 104 + int addrptr, size; 
 105 + int len; 
 105 106 
 106 107 if (!is_fru_eeprom_supported(adev)) 
 107 108 return 0; 
 ··· 110 109 /* If algo exists, it means that the i2c_adapter's initialized */ 
 111 110 if (!adev->pm.smu_i2c.algo) { 
 112 111 DRM_WARN("Cannot access FRU, EEPROM accessor not initialized"); 
 113 - return 0; 
 112 + return -ENODEV; 
 114 113 } 
 115 114 
 116 115 /* There's a lot of repetition here. This is due to the FRU having 
 ··· 129 128 size = amdgpu_fru_read_eeprom(adev, addrptr, buff); 
 130 129 if (size < 1) { 
 131 130 DRM_ERROR("Failed to read FRU Manufacturer, ret:%d", size); 
 132 - return size; 
 131 + return -EINVAL; 
 133 132 } 
 134 133 
 135 134 /* Increment the addrptr by the size of the field, and 1 due to the 
 ··· 139 138 size = amdgpu_fru_read_eeprom(adev, addrptr, buff); 
 140 139 if (size < 1) { 
 141 140 DRM_ERROR("Failed to read FRU product name, ret:%d", size); 
 142 - return size; 
 141 + return -EINVAL; 
 143 142 } 
 144 143 
 144 + len = size; 
 145 145 /* Product name should only be 32 characters. Any more, 
 146 146 * and something could be wrong. Cap it at 32 to be safe 
 147 147 */ 
 148 - if (size > 32) { 
 148 + if (len >= sizeof(adev->product_name)) { 
 149 149 DRM_WARN("FRU Product Number is larger than 32 characters. This is likely a mistake"); 
 150 - size = 32; 
 150 + len = sizeof(adev->product_name) - 1; 
 151 151 } 
 152 152 /* Start at 2 due to buff using fields 0 and 1 for the address */ 
 153 - memcpy(adev->product_name, &buff[2], size); 
 154 - adev->product_name[size] = '\0'; 
 153 + memcpy(adev->product_name, &buff[2], len); 
 154 + adev->product_name[len] = '\0'; 
 155 155 
 156 156 addrptr += size + 1; 
 157 157 size = amdgpu_fru_read_eeprom(adev, addrptr, buff); 
 158 158 if (size < 1) { 
 159 159 DRM_ERROR("Failed to read FRU product number, ret:%d", size); 
 160 - return size; 
 160 + return -EINVAL; 
 161 161 } 
 162 162 
 163 + len = size; 
 163 164 /* Product number should only be 16 characters. Any more, 
 164 165 * and something could be wrong. Cap it at 16 to be safe 
 165 166 */ 
 166 - if (size > 16) { 
 167 + if (len >= sizeof(adev->product_number)) { 
 167 168 DRM_WARN("FRU Product Number is larger than 16 characters. This is likely a mistake"); 
 168 - size = 16; 
 169 + len = sizeof(adev->product_number) - 1; 
 169 170 } 
 170 - memcpy(adev->product_number, &buff[2], size); 
 171 - adev->product_number[size] = '\0'; 
 171 + memcpy(adev->product_number, &buff[2], len); 
 172 + adev->product_number[len] = '\0'; 
 172 173 
 173 174 addrptr += size + 1; 
 174 175 size = amdgpu_fru_read_eeprom(adev, addrptr, buff); 
 175 176 
 176 177 if (size < 1) { 
 177 178 DRM_ERROR("Failed to read FRU product version, ret:%d", size); 
 178 - return size; 
 179 + return -EINVAL; 
 179 180 } 
 180 181 
 181 182 addrptr += size + 1; 
 ··· 185 182 size = amdgpu_fru_read_eeprom(adev, addrptr, buff); 
 186 183 
 187 184 if (size < 1) { 
 188 185 DRM_ERROR("Failed to read FRU serial number, ret:%d", size); 
 189 - return size; 
 186 + return -EINVAL; 
 190 187 } 
 191 188 
 188 + len = size; 
 192 189 /* Serial number should only be 16 characters. Any more, 
 193 190 * and something could be wrong. Cap it at 16 to be safe 
 194 191 */ 
 195 - if (size > 16) { 
 192 + if (len >= sizeof(adev->serial)) { 
 196 193 DRM_WARN("FRU Serial Number is larger than 16 characters. This is likely a mistake"); 
 197 - size = 16; 
 194 + len = sizeof(adev->serial) - 1; 
 198 195 } 
 199 - memcpy(adev->serial, &buff[2], size); 
 200 - adev->serial[size] = '\0'; 
 196 + memcpy(adev->serial, &buff[2], len); 
 197 + adev->serial[len] = '\0'; 
 201 199 
 202 200 return 0; 
 203 201 }
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 100 100 kfree(ubo->metadata); 101 101 } 102 102 103 - kfree(bo); 103 + kvfree(bo); 104 104 } 105 105 106 106 /** ··· 552 552 BUG_ON(bp->bo_ptr_size < sizeof(struct amdgpu_bo)); 553 553 554 554 *bo_ptr = NULL; 555 - bo = kzalloc(bp->bo_ptr_size, GFP_KERNEL); 555 + bo = kvzalloc(bp->bo_ptr_size, GFP_KERNEL); 556 556 if (bo == NULL) 557 557 return -ENOMEM; 558 558 drm_gem_private_object_init(adev_to_drm(adev), &bo->tbo.base, size);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
··· 76 76 uint64_t ring_mem_mc_addr; 77 77 void *ring_mem_handle; 78 78 uint32_t ring_size; 79 + uint32_t ring_wptr; 79 80 }; 80 81 81 82 /* More registers may will be supported */
+21 -5
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 173 173 #define mmGC_THROTTLE_CTRL_Sienna_Cichlid 0x2030 
 174 174 #define mmGC_THROTTLE_CTRL_Sienna_Cichlid_BASE_IDX 0 
 175 175 
 176 + #define mmRLC_SPARE_INT_0_Sienna_Cichlid 0x4ca5 
 177 + #define mmRLC_SPARE_INT_0_Sienna_Cichlid_BASE_IDX 1 
 178 + 
 176 179 #define GFX_RLCG_GC_WRITE_OLD (0x8 << 28) 
 177 180 #define GFX_RLCG_GC_WRITE (0x0 << 28) 
 178 181 #define GFX_RLCG_GC_READ (0x1 << 28) 
 ··· 1483 1480 (adev->reg_offset[GC_HWIP][0][mmSCRATCH_REG0_BASE_IDX] + mmSCRATCH_REG2) * 4; 
 1484 1481 scratch_reg3 = adev->rmmio + 
 1485 1482 (adev->reg_offset[GC_HWIP][0][mmSCRATCH_REG1_BASE_IDX] + mmSCRATCH_REG3) * 4; 
 1486 - spare_int = adev->rmmio + 
 1487 - (adev->reg_offset[GC_HWIP][0][mmRLC_SPARE_INT_BASE_IDX] + mmRLC_SPARE_INT) * 4; 
 1483 + 
 1484 + if (adev->asic_type >= CHIP_SIENNA_CICHLID) { 
 1485 + spare_int = adev->rmmio + 
 1486 + (adev->reg_offset[GC_HWIP][0][mmRLC_SPARE_INT_0_Sienna_Cichlid_BASE_IDX] 
 1487 + + mmRLC_SPARE_INT_0_Sienna_Cichlid) * 4; 
 1488 + } else { 
 1489 + spare_int = adev->rmmio + 
 1490 + (adev->reg_offset[GC_HWIP][0][mmRLC_SPARE_INT_BASE_IDX] + mmRLC_SPARE_INT) * 4; 
 1491 + } 
 1488 1492 
 1489 1493 grbm_cntl = adev->reg_offset[GC_HWIP][0][mmGRBM_GFX_CNTL_BASE_IDX] + mmGRBM_GFX_CNTL; 
 1490 1494 grbm_idx = adev->reg_offset[GC_HWIP][0][mmGRBM_GFX_INDEX_BASE_IDX] + mmGRBM_GFX_INDEX; 
 ··· 7359 7349 if (amdgpu_sriov_vf(adev)) { 
 7360 7350 gfx_v10_0_cp_gfx_enable(adev, false); 
 7361 7351 /* Program KIQ position of RLC_CP_SCHEDULERS during destroy */ 
 7362 - tmp = RREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS); 
 7363 - tmp &= 0xffffff00; 
 7364 - WREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS, tmp); 
 7352 + if (adev->asic_type >= CHIP_SIENNA_CICHLID) { 
 7353 + tmp = RREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS_Sienna_Cichlid); 
 7354 + tmp &= 0xffffff00; 
 7355 + WREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS_Sienna_Cichlid, tmp); 
 7356 + } else { 
 7357 + tmp = RREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS); 
 7358 + tmp &= 0xffffff00; 
 7359 + WREG32_SOC15(GC, 0, mmRLC_CP_SCHEDULERS, tmp); 
 7360 + } 
 7365 7361 
 7366 7362 return 0; 
 7367 7363 }
+2 -1
drivers/gpu/drm/amd/amdgpu/psp_v11_0.c
··· 720 720 struct amdgpu_device *adev = psp->adev; 721 721 722 722 if (amdgpu_sriov_vf(adev)) 723 - data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_102); 723 + data = psp->km_ring.ring_wptr; 724 724 else 725 725 data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67); 726 726 ··· 734 734 if (amdgpu_sriov_vf(adev)) { 735 735 WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_102, value); 736 736 WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_101, GFX_CTRL_CMD_ID_CONSUME_CMD); 737 + psp->km_ring.ring_wptr = value; 737 738 } else 738 739 WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67, value); 739 740 }
+2 -1
drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
··· 379 379 struct amdgpu_device *adev = psp->adev; 380 380 381 381 if (amdgpu_sriov_vf(adev)) 382 - data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_102); 382 + data = psp->km_ring.ring_wptr; 383 383 else 384 384 data = RREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67); 385 385 return data; ··· 394 394 /* send interrupt to PSP for SRIOV ring write pointer update */ 395 395 WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_101, 396 396 GFX_CTRL_CMD_ID_CONSUME_CMD); 397 + psp->km_ring.ring_wptr = value; 397 398 } else 398 399 WREG32_SOC15(MP0, 0, mmMP0_SMN_C2PMSG_67, value); 399 400 }
+1
drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
··· 357 357 358 358 error: 359 359 dma_fence_put(fence); 360 + amdgpu_bo_unpin(bo); 360 361 amdgpu_bo_unreserve(bo); 361 362 amdgpu_bo_unref(&bo); 362 363 return r;
+19 -11
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 925 925 abm->dmcu_is_running = dmcu->funcs->is_dmcu_initialized(dmcu); 
 926 926 } 
 927 927 
 928 - adev->dm.dc->ctx->dmub_srv = dc_dmub_srv_create(adev->dm.dc, dmub_srv); 
 928 + if (!adev->dm.dc->ctx->dmub_srv) 
 929 + adev->dm.dc->ctx->dmub_srv = dc_dmub_srv_create(adev->dm.dc, dmub_srv); 
 929 930 if (!adev->dm.dc->ctx->dmub_srv) { 
 930 931 DRM_ERROR("Couldn't allocate DC DMUB server!\n"); 
 931 932 return -ENOMEM; 
 ··· 1954 1953 s3_handle_mst(adev_to_drm(adev), true); 
 1955 1954 
 1956 1955 amdgpu_dm_irq_suspend(adev); 
 1957 - 
 1958 1956 
 1959 1957 dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3); 
 1960 1958 
 ··· 5500 5500 struct drm_display_mode saved_mode; 
 5501 5501 struct drm_display_mode *freesync_mode = NULL; 
 5502 5502 bool native_mode_found = false; 
 5503 - bool recalculate_timing = dm_state ? (dm_state->scaling != RMX_OFF) : false; 
 5503 + bool recalculate_timing = false; 
 5504 + bool scale = dm_state ? (dm_state->scaling != RMX_OFF) : false; 
 5504 5505 int mode_refresh; 
 5505 5506 int preferred_refresh = 0; 
 5506 5507 #if defined(CONFIG_DRM_AMD_DC_DCN) 
 ··· 5564 5563 */ 
 5565 5564 DRM_DEBUG_DRIVER("No preferred mode found\n"); 
 5566 5565 } else { 
 5567 - recalculate_timing |= amdgpu_freesync_vid_mode && 
 5566 + recalculate_timing = amdgpu_freesync_vid_mode && 
 5568 5567 is_freesync_video_mode(&mode, aconnector); 
 5569 5568 if (recalculate_timing) { 
 5570 5569 freesync_mode = get_highest_refresh_rate_mode(aconnector, false); 
 ··· 5572 5571 mode = *freesync_mode; 
 5573 5572 } else { 
 5574 5573 decide_crtc_timing_for_drm_display_mode( 
 5575 - &mode, preferred_mode, 
 5576 - dm_state ? (dm_state->scaling != RMX_OFF) : false); 
 5577 - } 
 5574 + &mode, preferred_mode, scale); 
 5578 5575 
 5579 - preferred_refresh = drm_mode_vrefresh(preferred_mode); 
 5576 + preferred_refresh = drm_mode_vrefresh(preferred_mode); 
 5577 + } 
 5580 5578 } 
 5581 5579 
 5582 5580 if (recalculate_timing) 
 ··· 5587 5587 * If scaling is enabled and refresh rate didn't change 
 5588 5588 * we copy the vic and polarities of the old timings 
 5589 5589 */ 
 5590 - if (!recalculate_timing || mode_refresh != preferred_refresh) 
 5590 + if (!scale || mode_refresh != preferred_refresh) 
 5591 5591 fill_stream_properties_from_drm_display_mode( 
 5592 5592 stream, &mode, &aconnector->base, con_state, NULL, 
 5593 5593 requested_bpc); 
 ··· 9854 9854 
 9855 9855 if (cursor_scale_w != primary_scale_w || 
 9856 9856 cursor_scale_h != primary_scale_h) { 
 9857 - DRM_DEBUG_ATOMIC("Cursor plane scaling doesn't match primary plane\n"); 
 9857 + drm_dbg_atomic(crtc->dev, "Cursor plane scaling doesn't match primary plane\n"); 
 9858 9858 return -EINVAL; 
 9859 9859 } 
 9860 9860 
 ··· 9891 9891 int i; 
 9892 9892 struct drm_plane *plane; 
 9893 9893 struct drm_plane_state *old_plane_state, *new_plane_state; 
 9894 - struct drm_plane_state *primary_state, *overlay_state = NULL; 
 9894 + struct drm_plane_state *primary_state, *cursor_state, *overlay_state = NULL; 
 9895 9895 
 9896 9896 /* Check if primary plane is contained inside overlay */ 
 9897 9897 for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) { 
 ··· 9919 9919 
 9920 9920 /* check if primary plane is enabled */ 
 9921 9921 if (!primary_state->crtc) 
 9922 + return 0; 
 9923 + 
 9924 + /* check if cursor plane is enabled */ 
 9925 + cursor_state = drm_atomic_get_plane_state(state, overlay_state->crtc->cursor); 
 9926 + if (IS_ERR(cursor_state)) 
 9927 + return PTR_ERR(cursor_state); 
 9928 + 
 9929 + if (drm_atomic_plane_disabling(plane->state, cursor_state)) 
 9922 9930 return 0; 
 9923 9931 
 9924 9932 /* Perform the bounds check to ensure the overlay plane covers the primary */
+1 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 3236 3236 voltage_supported = dcn20_validate_bandwidth_internal(dc, context, false); 3237 3237 dummy_pstate_supported = context->bw_ctx.bw.dcn.clk.p_state_change_support; 3238 3238 3239 - if (voltage_supported && dummy_pstate_supported) { 3239 + if (voltage_supported && (dummy_pstate_supported || !(context->stream_count))) { 3240 3240 context->bw_ctx.bw.dcn.clk.p_state_change_support = false; 3241 3241 goto restore_dml_state; 3242 3242 }
+1
drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu10_hwmgr.c
··· 810 810 break; 811 811 case AMD_DPM_FORCED_LEVEL_MANUAL: 812 812 data->fine_grain_enabled = 1; 813 + break; 813 814 case AMD_DPM_FORCED_LEVEL_PROFILE_EXIT: 814 815 default: 815 816 break;
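The single added `break` above ends the `AMD_DPM_FORCED_LEVEL_MANUAL` arm explicitly instead of falling through into the `PROFILE_EXIT`/`default` arm; here the fallthrough happened to be harmless, but it trips compiler implicit-fallthrough warnings and invites future bugs. A minimal model of the pattern, using hypothetical names rather than the driver's real ones:

```c
/* Hypothetical stand-ins for the AMD_DPM_FORCED_LEVEL_* constants */
enum dpm_level { DPM_MANUAL, DPM_PROFILE_EXIT, DPM_OTHER };

/* Returns whether fine-grain mode ends up enabled for a given level. */
static int fine_grain_enabled(enum dpm_level level)
{
	int enabled = 0;

	switch (level) {
	case DPM_MANUAL:
		enabled = 1;
		break; /* the fix: without this, control falls into the next arm */
	case DPM_PROFILE_EXIT:
	default:
		break;
	}
	return enabled;
}
```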
+2 -1
drivers/gpu/drm/drm_auth.c
··· 314 314 void drm_master_release(struct drm_file *file_priv) 315 315 { 316 316 struct drm_device *dev = file_priv->minor->dev; 317 - struct drm_master *master = file_priv->master; 317 + struct drm_master *master; 318 318 319 319 mutex_lock(&dev->master_mutex); 320 + master = file_priv->master; 320 321 if (file_priv->magic) 321 322 idr_remove(&file_priv->master->magic_map, file_priv->magic); 322 323
+5 -4
drivers/gpu/drm/drm_ioctl.c
··· 118 118 struct drm_file *file_priv) 119 119 { 120 120 struct drm_unique *u = data; 121 - struct drm_master *master = file_priv->master; 121 + struct drm_master *master; 122 122 123 - mutex_lock(&master->dev->master_mutex); 123 + mutex_lock(&dev->master_mutex); 124 + master = file_priv->master; 124 125 if (u->unique_len >= master->unique_len) { 125 126 if (copy_to_user(u->unique, master->unique, master->unique_len)) { 126 - mutex_unlock(&master->dev->master_mutex); 127 + mutex_unlock(&dev->master_mutex); 127 128 return -EFAULT; 128 129 } 129 130 } 130 131 u->unique_len = master->unique_len; 131 - mutex_unlock(&master->dev->master_mutex); 132 + mutex_unlock(&dev->master_mutex); 132 133 133 134 return 0; 134 135 }
-1
drivers/gpu/drm/i915/Kconfig
··· 20 20 select INPUT if ACPI 21 21 select ACPI_VIDEO if ACPI 22 22 select ACPI_BUTTON if ACPI 23 - select IO_MAPPING 24 23 select SYNC_FILE 25 24 select IOSF_MBI 26 25 select CRC32
+5 -4
drivers/gpu/drm/i915/gem/i915_gem_mman.c
··· 367 367 goto err_unpin; 368 368 369 369 /* Finally, remap it using the new GTT offset */ 370 - ret = io_mapping_map_user(&ggtt->iomap, area, area->vm_start + 371 - (vma->ggtt_view.partial.offset << PAGE_SHIFT), 372 - (ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT, 373 - min_t(u64, vma->size, area->vm_end - area->vm_start)); 370 + ret = remap_io_mapping(area, 371 + area->vm_start + (vma->ggtt_view.partial.offset << PAGE_SHIFT), 372 + (ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT, 373 + min_t(u64, vma->size, area->vm_end - area->vm_start), 374 + &ggtt->iomap); 374 375 if (ret) 375 376 goto err_fence; 376 377
+3
drivers/gpu/drm/i915/i915_drv.h
··· 1905 1905 struct drm_file *file); 1906 1906 1907 1907 /* i915_mm.c */ 1908 + int remap_io_mapping(struct vm_area_struct *vma, 1909 + unsigned long addr, unsigned long pfn, unsigned long size, 1910 + struct io_mapping *iomap); 1908 1911 int remap_io_sg(struct vm_area_struct *vma, 1909 1912 unsigned long addr, unsigned long size, 1910 1913 struct scatterlist *sgl, resource_size_t iobase);
+44
drivers/gpu/drm/i915/i915_mm.c
··· 37 37 resource_size_t iobase; 38 38 }; 39 39 40 + static int remap_pfn(pte_t *pte, unsigned long addr, void *data) 41 + { 42 + struct remap_pfn *r = data; 43 + 44 + /* Special PTE are not associated with any struct page */ 45 + set_pte_at(r->mm, addr, pte, pte_mkspecial(pfn_pte(r->pfn, r->prot))); 46 + r->pfn++; 47 + 48 + return 0; 49 + } 50 + 40 51 #define use_dma(io) ((io) != -1) 41 52 42 53 static inline unsigned long sgt_pfn(const struct remap_pfn *r) ··· 77 66 return 0; 78 67 } 79 68 69 + /** 70 + * remap_io_mapping - remap an IO mapping to userspace 71 + * @vma: user vma to map to 72 + * @addr: target user address to start at 73 + * @pfn: physical address of kernel memory 74 + * @size: size of map area 75 + * @iomap: the source io_mapping 76 + * 77 + * Note: this is only safe if the mm semaphore is held when called. 78 + */ 79 + int remap_io_mapping(struct vm_area_struct *vma, 80 + unsigned long addr, unsigned long pfn, unsigned long size, 81 + struct io_mapping *iomap) 82 + { 83 + struct remap_pfn r; 84 + int err; 85 + 80 86 #define EXPECTED_FLAGS (VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP) 87 + GEM_BUG_ON((vma->vm_flags & EXPECTED_FLAGS) != EXPECTED_FLAGS); 88 + 89 + /* We rely on prevalidation of the io-mapping to skip track_pfn(). */ 90 + r.mm = vma->vm_mm; 91 + r.pfn = pfn; 92 + r.prot = __pgprot((pgprot_val(iomap->prot) & _PAGE_CACHE_MASK) | 93 + (pgprot_val(vma->vm_page_prot) & ~_PAGE_CACHE_MASK)); 94 + 95 + err = apply_to_page_range(r.mm, addr, size, remap_pfn, &r); 96 + if (unlikely(err)) { 97 + zap_vma_ptes(vma, addr, (r.pfn - pfn) << PAGE_SHIFT); 98 + return err; 99 + } 100 + 101 + return 0; 102 + } 81 103 82 104 /** 83 105 * remap_io_sg - remap an IO mapping to userspace
+2 -2
drivers/gpu/drm/i915/selftests/i915_request.c
··· 1592 1592 1593 1593 for (n = 0; n < smoke[0].ncontexts; n++) { 1594 1594 smoke[0].contexts[n] = live_context(i915, file); 1595 - if (!smoke[0].contexts[n]) { 1596 - ret = -ENOMEM; 1595 + if (IS_ERR(smoke[0].contexts[n])) { 1596 + ret = PTR_ERR(smoke[0].contexts[n]); 1597 1597 goto out_contexts; 1598 1598 } 1599 1599 }
+1 -1
drivers/gpu/drm/mcde/mcde_dsi.c
··· 577 577 * porches and sync. 578 578 */ 579 579 /* (ps/s) / (pixels/s) = ps/pixels */ 580 - pclk = DIV_ROUND_UP_ULL(1000000000000, mode->clock); 580 + pclk = DIV_ROUND_UP_ULL(1000000000000, (mode->clock * 1000)); 581 581 dev_dbg(d->dev, "picoseconds between two pixels: %llu\n", 582 582 pclk); 583 583
+114 -41
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 157 157 * GPU registers so we need to add 0x1a800 to the register value on A630
158 158 * to get the right value from PM4.
159 159 */
160 - get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
160 + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
161 161 rbmemptr_stats(ring, index, alwayson_start));
162 162
163 163 /* Invalidate CCU depth and color */
··· 187 187
188 188 get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP_0_LO,
189 189 rbmemptr_stats(ring, index, cpcycles_end));
190 - get_stats_counter(ring, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L + 0x1a800,
190 + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
191 191 rbmemptr_stats(ring, index, alwayson_end));
192 192
193 193 /* Write the fence to the scratch register */
··· 206 206 OUT_RING(ring, submit->seqno);
207 207
208 208 trace_msm_gpu_submit_flush(submit,
209 - gmu_read64(&a6xx_gpu->gmu, REG_A6XX_GMU_ALWAYS_ON_COUNTER_L,
210 - REG_A6XX_GMU_ALWAYS_ON_COUNTER_H));
209 + gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER_LO,
210 + REG_A6XX_CP_ALWAYS_ON_COUNTER_HI));
211 211
212 212 a6xx_flush(gpu, ring);
213 213 }
··· 462 462 gpu_write(gpu, REG_A6XX_RBBM_CLOCK_CNTL, state ? clock_cntl_on : 0);
463 463 }
464 464
465 + /* For a615, a616, a618, A619, a630, a640 and a680 */
466 + static const u32 a6xx_protect[] = {
467 + A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
468 + A6XX_PROTECT_RDONLY(0x00501, 0x0005),
469 + A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
470 + A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
471 + A6XX_PROTECT_NORDWR(0x00510, 0x0000),
472 + A6XX_PROTECT_NORDWR(0x00534, 0x0000),
473 + A6XX_PROTECT_NORDWR(0x00800, 0x0082),
474 + A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
475 + A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
476 + A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
477 + A6XX_PROTECT_NORDWR(0x00900, 0x004d),
478 + A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
479 + A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
480 + A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
481 + A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
482 + A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
483 + A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
484 + A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
485 + A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
486 + A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
487 + A6XX_PROTECT_NORDWR(0x09624, 0x01db),
488 + A6XX_PROTECT_NORDWR(0x09e70, 0x0001),
489 + A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
490 + A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
491 + A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
492 + A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
493 + A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
494 + A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
495 + A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
496 + A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
497 + A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
498 + A6XX_PROTECT_NORDWR(0x11c00, 0x0000), /* note: infinite range */
499 + };
500 +
501 + /* These are for a620 and a650 */
502 + static const u32 a650_protect[] = {
503 + A6XX_PROTECT_RDONLY(0x00000, 0x04ff),
504 + A6XX_PROTECT_RDONLY(0x00501, 0x0005),
505 + A6XX_PROTECT_RDONLY(0x0050b, 0x02f4),
506 + A6XX_PROTECT_NORDWR(0x0050e, 0x0000),
507 + A6XX_PROTECT_NORDWR(0x00510, 0x0000),
508 + A6XX_PROTECT_NORDWR(0x00534, 0x0000),
509 + A6XX_PROTECT_NORDWR(0x00800, 0x0082),
510 + A6XX_PROTECT_NORDWR(0x008a0, 0x0008),
511 + A6XX_PROTECT_NORDWR(0x008ab, 0x0024),
512 + A6XX_PROTECT_RDONLY(0x008de, 0x00ae),
513 + A6XX_PROTECT_NORDWR(0x00900, 0x004d),
514 + A6XX_PROTECT_NORDWR(0x0098d, 0x0272),
515 + A6XX_PROTECT_NORDWR(0x00e00, 0x0001),
516 + A6XX_PROTECT_NORDWR(0x00e03, 0x000c),
517 + A6XX_PROTECT_NORDWR(0x03c00, 0x00c3),
518 + A6XX_PROTECT_RDONLY(0x03cc4, 0x1fff),
519 + A6XX_PROTECT_NORDWR(0x08630, 0x01cf),
520 + A6XX_PROTECT_NORDWR(0x08e00, 0x0000),
521 + A6XX_PROTECT_NORDWR(0x08e08, 0x0000),
522 + A6XX_PROTECT_NORDWR(0x08e50, 0x001f),
523 + A6XX_PROTECT_NORDWR(0x08e80, 0x027f),
524 + A6XX_PROTECT_NORDWR(0x09624, 0x01db),
525 + A6XX_PROTECT_NORDWR(0x09e60, 0x0011),
526 + A6XX_PROTECT_NORDWR(0x09e78, 0x0187),
527 + A6XX_PROTECT_NORDWR(0x0a630, 0x01cf),
528 + A6XX_PROTECT_NORDWR(0x0ae02, 0x0000),
529 + A6XX_PROTECT_NORDWR(0x0ae50, 0x032f),
530 + A6XX_PROTECT_NORDWR(0x0b604, 0x0000),
531 + A6XX_PROTECT_NORDWR(0x0b608, 0x0007),
532 + A6XX_PROTECT_NORDWR(0x0be02, 0x0001),
533 + A6XX_PROTECT_NORDWR(0x0be20, 0x17df),
534 + A6XX_PROTECT_NORDWR(0x0f000, 0x0bff),
535 + A6XX_PROTECT_RDONLY(0x0fc00, 0x1fff),
536 + A6XX_PROTECT_NORDWR(0x18400, 0x1fff),
537 + A6XX_PROTECT_NORDWR(0x1a800, 0x1fff),
538 + A6XX_PROTECT_NORDWR(0x1f400, 0x0443),
539 + A6XX_PROTECT_RDONLY(0x1f844, 0x007b),
540 + A6XX_PROTECT_NORDWR(0x1f887, 0x001b),
541 + A6XX_PROTECT_NORDWR(0x1f8c0, 0x0000), /* note: infinite range */
542 + };
543 +
544 + static void a6xx_set_cp_protect(struct msm_gpu *gpu)
545 + {
546 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
547 + const u32 *regs = a6xx_protect;
548 + unsigned i, count = ARRAY_SIZE(a6xx_protect), count_max = 32;
549 +
550 + BUILD_BUG_ON(ARRAY_SIZE(a6xx_protect) > 32);
551 + BUILD_BUG_ON(ARRAY_SIZE(a650_protect) > 48);
552 +
553 + if (adreno_is_a650(adreno_gpu)) {
554 + regs = a650_protect;
555 + count = ARRAY_SIZE(a650_protect);
556 + count_max = 48;
557 + }
558 +
559 + /*
560 + * Enable access protection to privileged registers, fault on an access
561 + * protect violation and select the last span to protect from the start
562 + * address all the way to the end of the register address space
563 + */
564 + gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, BIT(0) | BIT(1) | BIT(3));
565 +
566 + for (i = 0; i < count - 1; i++)
567 + gpu_write(gpu, REG_A6XX_CP_PROTECT(i), regs[i]);
568 + /* last CP_PROTECT to have "infinite" length on the last entry */
569 + gpu_write(gpu, REG_A6XX_CP_PROTECT(count_max - 1), regs[i]);
570 + }
571 +
465 572 static void a6xx_set_ubwc_config(struct msm_gpu *gpu)
466 573 {
467 574 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
··· 596 489 rgb565_predicator << 11 | amsbc << 4 | lower_bit << 1);
597 490 gpu_write(gpu, REG_A6XX_TPL1_NC_MODE_CNTL, lower_bit << 1);
598 491 gpu_write(gpu, REG_A6XX_SP_NC_MODE_CNTL,
599 - uavflagprd_inv >> 4 | lower_bit << 1);
492 + uavflagprd_inv << 4 | lower_bit << 1);
600 493 gpu_write(gpu, REG_A6XX_UCHE_MODE_CNTL, lower_bit << 21);
601 494 }
··· 883 776 }
884 777
885 778 /* Protect registers from the CP */
886 - gpu_write(gpu, REG_A6XX_CP_PROTECT_CNTL, 0x00000003);
887 -
888 - gpu_write(gpu, REG_A6XX_CP_PROTECT(0),
889 - A6XX_PROTECT_RDONLY(0x600, 0x51));
890 - gpu_write(gpu, REG_A6XX_CP_PROTECT(1), A6XX_PROTECT_RW(0xae50, 0x2));
891 - gpu_write(gpu, REG_A6XX_CP_PROTECT(2), A6XX_PROTECT_RW(0x9624, 0x13));
892 - gpu_write(gpu, REG_A6XX_CP_PROTECT(3), A6XX_PROTECT_RW(0x8630, 0x8));
893 - gpu_write(gpu, REG_A6XX_CP_PROTECT(4), A6XX_PROTECT_RW(0x9e70, 0x1));
894 - gpu_write(gpu, REG_A6XX_CP_PROTECT(5), A6XX_PROTECT_RW(0x9e78, 0x187));
895 - gpu_write(gpu, REG_A6XX_CP_PROTECT(6), A6XX_PROTECT_RW(0xf000, 0x810));
896 - gpu_write(gpu, REG_A6XX_CP_PROTECT(7),
897 - A6XX_PROTECT_RDONLY(0xfc00, 0x3));
898 - gpu_write(gpu, REG_A6XX_CP_PROTECT(8), A6XX_PROTECT_RW(0x50e, 0x0));
899 - gpu_write(gpu, REG_A6XX_CP_PROTECT(9), A6XX_PROTECT_RDONLY(0x50f, 0x0));
900 - gpu_write(gpu, REG_A6XX_CP_PROTECT(10), A6XX_PROTECT_RW(0x510, 0x0));
901 - gpu_write(gpu, REG_A6XX_CP_PROTECT(11),
902 - A6XX_PROTECT_RDONLY(0x0, 0x4f9));
903 - gpu_write(gpu, REG_A6XX_CP_PROTECT(12),
904 - A6XX_PROTECT_RDONLY(0x501, 0xa));
905 - gpu_write(gpu, REG_A6XX_CP_PROTECT(13),
906 - A6XX_PROTECT_RDONLY(0x511, 0x44));
907 - gpu_write(gpu, REG_A6XX_CP_PROTECT(14), A6XX_PROTECT_RW(0xe00, 0xe));
908 - gpu_write(gpu, REG_A6XX_CP_PROTECT(15), A6XX_PROTECT_RW(0x8e00, 0x0));
909 - gpu_write(gpu, REG_A6XX_CP_PROTECT(16), A6XX_PROTECT_RW(0x8e50, 0xf));
910 - gpu_write(gpu, REG_A6XX_CP_PROTECT(17), A6XX_PROTECT_RW(0xbe02, 0x0));
911 - gpu_write(gpu, REG_A6XX_CP_PROTECT(18),
912 - A6XX_PROTECT_RW(0xbe20, 0x11f3));
913 - gpu_write(gpu, REG_A6XX_CP_PROTECT(19), A6XX_PROTECT_RW(0x800, 0x82));
914 - gpu_write(gpu, REG_A6XX_CP_PROTECT(20), A6XX_PROTECT_RW(0x8a0, 0x8));
915 - gpu_write(gpu, REG_A6XX_CP_PROTECT(21), A6XX_PROTECT_RW(0x8ab, 0x19));
916 - gpu_write(gpu, REG_A6XX_CP_PROTECT(22), A6XX_PROTECT_RW(0x900, 0x4d));
917 - gpu_write(gpu, REG_A6XX_CP_PROTECT(23), A6XX_PROTECT_RW(0x98d, 0x76));
918 - gpu_write(gpu, REG_A6XX_CP_PROTECT(24),
919 - A6XX_PROTECT_RDONLY(0x980, 0x4));
920 - gpu_write(gpu, REG_A6XX_CP_PROTECT(25), A6XX_PROTECT_RW(0xa630, 0x0));
779 + a6xx_set_cp_protect(gpu);
921 780
922 781 /* Enable expanded apriv for targets that support it */
923 782 if (gpu->hw_apriv) {
··· 1284 1211 if (ret)
1285 1212 return ret;
1286 1213
1287 - if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami)
1214 + if (a6xx_gpu->shadow_bo)
1288 1215 for (i = 0; i < gpu->nr_rings; i++)
1289 1216 a6xx_gpu->shadow[i] = 0;
1290 1217
+1 -1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
··· 44 44 * REG_CP_PROTECT_REG(n) - this will block both reads and writes for _len 45 45 * registers starting at _reg. 46 46 */ 47 - #define A6XX_PROTECT_RW(_reg, _len) \ 47 + #define A6XX_PROTECT_NORDWR(_reg, _len) \ 48 48 ((1 << 31) | \ 49 49 (((_len) & 0x3FFF) << 18) | ((_reg) & 0x3FFFF)) 50 50
+1
drivers/gpu/drm/msm/dsi/phy/dsi_phy_10nm.c
··· 432 432 pll_freq += div_u64(tmp64, multiplier); 433 433 434 434 vco_rate = pll_freq; 435 + pll_10nm->vco_current_rate = vco_rate; 435 436 436 437 DBG("DSI PLL%d returning vco rate = %lu, dec = %x, frac = %x", 437 438 pll_10nm->phy->id, (unsigned long)vco_rate, dec, frac);
+1
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
··· 460 460 pll_freq += div_u64(tmp64, multiplier); 461 461 462 462 vco_rate = pll_freq; 463 + pll_7nm->vco_current_rate = vco_rate; 463 464 464 465 DBG("DSI PLL%d returning vco rate = %lu, dec = %x, frac = %x", 465 466 pll_7nm->phy->id, (unsigned long)vco_rate, dec, frac);
+7
drivers/gpu/drm/msm/msm_gem.c
··· 1241 1241 1242 1242 to_msm_bo(obj)->vram_node = &vma->node; 1243 1243 1244 + /* Call chain get_pages() -> update_inactive() tries to 1245 + * access msm_obj->mm_list, but it is not initialized yet. 1246 + * To avoid NULL pointer dereference error, initialize 1247 + * mm_list to be empty. 1248 + */ 1249 + INIT_LIST_HEAD(&msm_obj->mm_list); 1250 + 1244 1251 msm_gem_lock(obj); 1245 1252 pages = get_pages(obj); 1246 1253 msm_gem_unlock(obj);
+2 -2
drivers/gpu/drm/radeon/radeon_uvd.c
··· 286 286 if (rdev->uvd.vcpu_bo == NULL) 287 287 return -EINVAL; 288 288 289 - memcpy(rdev->uvd.cpu_addr, rdev->uvd_fw->data, rdev->uvd_fw->size); 289 + memcpy_toio((void __iomem *)rdev->uvd.cpu_addr, rdev->uvd_fw->data, rdev->uvd_fw->size); 290 290 291 291 size = radeon_bo_size(rdev->uvd.vcpu_bo); 292 292 size -= rdev->uvd_fw->size; ··· 294 294 ptr = rdev->uvd.cpu_addr; 295 295 ptr += rdev->uvd_fw->size; 296 296 297 - memset(ptr, 0, size); 297 + memset_io((void __iomem *)ptr, 0, size); 298 298 299 299 return 0; 300 300 }
+27 -4
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
··· 209 209 goto err_disable_clk_tmds; 210 210 } 211 211 212 - ret = sun8i_hdmi_phy_probe(hdmi, phy_node); 212 + ret = sun8i_hdmi_phy_get(hdmi, phy_node); 213 213 of_node_put(phy_node); 214 214 if (ret) { 215 215 dev_err(dev, "Couldn't get the HDMI PHY\n"); ··· 242 242 243 243 cleanup_encoder: 244 244 drm_encoder_cleanup(encoder); 245 - sun8i_hdmi_phy_remove(hdmi); 246 245 err_disable_clk_tmds: 247 246 clk_disable_unprepare(hdmi->clk_tmds); 248 247 err_assert_ctrl_reset: ··· 262 263 struct sun8i_dw_hdmi *hdmi = dev_get_drvdata(dev); 263 264 264 265 dw_hdmi_unbind(hdmi->hdmi); 265 - sun8i_hdmi_phy_remove(hdmi); 266 266 clk_disable_unprepare(hdmi->clk_tmds); 267 267 reset_control_assert(hdmi->rst_ctrl); 268 268 gpiod_set_value(hdmi->ddc_en, 0); ··· 318 320 .of_match_table = sun8i_dw_hdmi_dt_ids, 319 321 }, 320 322 }; 321 - module_platform_driver(sun8i_dw_hdmi_pltfm_driver); 323 + 324 + static int __init sun8i_dw_hdmi_init(void) 325 + { 326 + int ret; 327 + 328 + ret = platform_driver_register(&sun8i_dw_hdmi_pltfm_driver); 329 + if (ret) 330 + return ret; 331 + 332 + ret = platform_driver_register(&sun8i_hdmi_phy_driver); 333 + if (ret) { 334 + platform_driver_unregister(&sun8i_dw_hdmi_pltfm_driver); 335 + return ret; 336 + } 337 + 338 + return ret; 339 + } 340 + 341 + static void __exit sun8i_dw_hdmi_exit(void) 342 + { 343 + platform_driver_unregister(&sun8i_dw_hdmi_pltfm_driver); 344 + platform_driver_unregister(&sun8i_hdmi_phy_driver); 345 + } 346 + 347 + module_init(sun8i_dw_hdmi_init); 348 + module_exit(sun8i_dw_hdmi_exit); 322 349 323 350 MODULE_AUTHOR("Jernej Skrabec <jernej.skrabec@siol.net>"); 324 351 MODULE_DESCRIPTION("Allwinner DW HDMI bridge");
+3 -2
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
··· 195 195 struct gpio_desc *ddc_en; 196 196 }; 197 197 198 + extern struct platform_driver sun8i_hdmi_phy_driver; 199 + 198 200 static inline struct sun8i_dw_hdmi * 199 201 encoder_to_sun8i_dw_hdmi(struct drm_encoder *encoder) 200 202 { 201 203 return container_of(encoder, struct sun8i_dw_hdmi, encoder); 202 204 } 203 205 204 - int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node); 205 - void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi); 206 + int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node); 206 207 207 208 void sun8i_hdmi_phy_init(struct sun8i_hdmi_phy *phy); 208 209 void sun8i_hdmi_phy_set_ops(struct sun8i_hdmi_phy *phy,
+36 -5
drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
··· 5 5 6 6 #include <linux/delay.h> 7 7 #include <linux/of_address.h> 8 + #include <linux/of_platform.h> 8 9 9 10 #include "sun8i_dw_hdmi.h" 10 11 ··· 598 597 { /* sentinel */ } 599 598 }; 600 599 601 - int sun8i_hdmi_phy_probe(struct sun8i_dw_hdmi *hdmi, struct device_node *node) 600 + int sun8i_hdmi_phy_get(struct sun8i_dw_hdmi *hdmi, struct device_node *node) 601 + { 602 + struct platform_device *pdev = of_find_device_by_node(node); 603 + struct sun8i_hdmi_phy *phy; 604 + 605 + if (!pdev) 606 + return -EPROBE_DEFER; 607 + 608 + phy = platform_get_drvdata(pdev); 609 + if (!phy) 610 + return -EPROBE_DEFER; 611 + 612 + hdmi->phy = phy; 613 + 614 + put_device(&pdev->dev); 615 + 616 + return 0; 617 + } 618 + 619 + static int sun8i_hdmi_phy_probe(struct platform_device *pdev) 602 620 { 603 621 const struct of_device_id *match; 604 - struct device *dev = hdmi->dev; 622 + struct device *dev = &pdev->dev; 623 + struct device_node *node = dev->of_node; 605 624 struct sun8i_hdmi_phy *phy; 606 625 struct resource res; 607 626 void __iomem *regs; ··· 725 704 clk_prepare_enable(phy->clk_phy); 726 705 } 727 706 728 - hdmi->phy = phy; 707 + platform_set_drvdata(pdev, phy); 729 708 730 709 return 0; 731 710 ··· 749 728 return ret; 750 729 } 751 730 752 - void sun8i_hdmi_phy_remove(struct sun8i_dw_hdmi *hdmi) 731 + static int sun8i_hdmi_phy_remove(struct platform_device *pdev) 753 732 { 754 - struct sun8i_hdmi_phy *phy = hdmi->phy; 733 + struct sun8i_hdmi_phy *phy = platform_get_drvdata(pdev); 755 734 756 735 clk_disable_unprepare(phy->clk_mod); 757 736 clk_disable_unprepare(phy->clk_bus); ··· 765 744 clk_put(phy->clk_pll1); 766 745 clk_put(phy->clk_mod); 767 746 clk_put(phy->clk_bus); 747 + return 0; 768 748 } 749 + 750 + struct platform_driver sun8i_hdmi_phy_driver = { 751 + .probe = sun8i_hdmi_phy_probe, 752 + .remove = sun8i_hdmi_phy_remove, 753 + .driver = { 754 + .name = "sun8i-hdmi-phy", 755 + .of_match_table = sun8i_hdmi_phy_of_table, 756 + }, 757 + };
+1 -1
drivers/gpu/drm/tegra/drm.h
··· 25 25 #include "trace.h" 26 26 27 27 /* XXX move to include/uapi/drm/drm_fourcc.h? */ 28 - #define DRM_FORMAT_MOD_NVIDIA_SECTOR_LAYOUT BIT(22) 28 + #define DRM_FORMAT_MOD_NVIDIA_SECTOR_LAYOUT BIT_ULL(22) 29 29 30 30 struct reset_control; 31 31
+1 -1
drivers/gpu/drm/tegra/hub.c
··· 510 510 * dGPU sector layout. 511 511 */ 512 512 if (tegra_plane_state->tiling.sector_layout == TEGRA_BO_SECTOR_LAYOUT_GPU) 513 - base |= BIT(39); 513 + base |= BIT_ULL(39); 514 514 #endif 515 515 516 516 tegra_plane_writel(p, tegra_plane_state->format, DC_WIN_COLOR_DEPTH);
+42 -28
drivers/gpu/drm/tegra/sor.c
··· 3125 3125 if (err < 0) {
3126 3126 dev_err(sor->dev, "failed to acquire SOR reset: %d\n",
3127 3127 err);
3128 - return err;
3128 + goto rpm_put;
3129 3129 }
3130 3130
3131 3131 err = reset_control_assert(sor->rst);
3132 3132 if (err < 0) {
3133 3133 dev_err(sor->dev, "failed to assert SOR reset: %d\n",
3134 3134 err);
3135 - return err;
3135 + goto rpm_put;
3136 3136 }
3137 3137 }
3138 3138
3139 3139 err = clk_prepare_enable(sor->clk);
3140 3140 if (err < 0) {
3141 3141 dev_err(sor->dev, "failed to enable clock: %d\n", err);
3142 - return err;
3142 + goto rpm_put;
3143 3143 }
3144 3144
3145 3145 usleep_range(1000, 3000);
··· 3150 3150 dev_err(sor->dev, "failed to deassert SOR reset: %d\n",
3151 3151 err);
3152 3152 clk_disable_unprepare(sor->clk);
3153 - return err;
3153 + goto rpm_put;
3154 3154 }
3155 3155
3156 3156 reset_control_release(sor->rst);
··· 3171 3171 }
3172 3172
3173 3173 return 0;
3174 +
3175 + rpm_put:
3176 + if (sor->rst)
3177 + pm_runtime_put(sor->dev);
3178 +
3179 + return err;
3174 3180 }
3175 3181
3176 3182 static int tegra_sor_exit(struct host1x_client *client)
··· 3745 3739 if (!sor->aux)
3746 3740 return -EPROBE_DEFER;
3747 3741
3748 - if (get_device(&sor->aux->ddc.dev)) {
3749 - if (try_module_get(sor->aux->ddc.owner))
3750 - sor->output.ddc = &sor->aux->ddc;
3751 - else
3752 - put_device(&sor->aux->ddc.dev);
3753 - }
3742 + if (get_device(sor->aux->dev))
3743 + sor->output.ddc = &sor->aux->ddc;
3754 3744 }
3755 3745
3756 3746 if (!sor->aux) {
··· 3774 3772
3775 3773 err = tegra_sor_parse_dt(sor);
3776 3774 if (err < 0)
3777 - return err;
3775 + goto put_aux;
3778 3776
3779 3777 err = tegra_output_probe(&sor->output);
3780 - if (err < 0)
3781 - return dev_err_probe(&pdev->dev, err,
3782 - "failed to probe output\n");
3778 + if (err < 0) {
3779 + dev_err_probe(&pdev->dev, err, "failed to probe output\n");
3780 + goto put_aux;
3781 + }
3783 3782
3784 3783 if (sor->ops && sor->ops->probe) {
3785 3784 err = sor->ops->probe(sor);
··· 3919 3916 platform_set_drvdata(pdev, sor);
3920 3917 pm_runtime_enable(&pdev->dev);
3921 3918
3922 - INIT_LIST_HEAD(&sor->client.list);
3919 + host1x_client_init(&sor->client);
3923 3920 sor->client.ops = &sor_client_ops;
3924 3921 sor->client.dev = &pdev->dev;
3925 -
3926 - err = host1x_client_register(&sor->client);
3927 - if (err < 0) {
3928 - dev_err(&pdev->dev, "failed to register host1x client: %d\n",
3929 - err);
3930 - goto rpm_disable;
3931 - }
3932 3922
3933 3923 /*
3934 3924 * On Tegra210 and earlier, provide our own implementation for the
··· 3934 3938 sor->index);
3935 3939 if (!name) {
3936 3940 err = -ENOMEM;
3937 - goto unregister;
3941 + goto uninit;
3938 3942 }
3939 3943
3940 3944 err = host1x_client_resume(&sor->client);
3941 3945 if (err < 0) {
3942 3946 dev_err(sor->dev, "failed to resume: %d\n", err);
3943 - goto unregister;
3947 + goto uninit;
3944 3948 }
3945 3949
3946 3950 sor->clk_pad = tegra_clk_sor_pad_register(sor, name);
··· 3951 3955 err = PTR_ERR(sor->clk_pad);
3952 3956 dev_err(sor->dev, "failed to register SOR pad clock: %d\n",
3953 3957 err);
3954 - goto unregister;
3958 + goto uninit;
3959 + }
3960 +
3961 + err = __host1x_client_register(&sor->client);
3962 + if (err < 0) {
3963 + dev_err(&pdev->dev, "failed to register host1x client: %d\n",
3964 + err);
3965 + goto uninit;
3955 3966 }
3956 3967
3957 3968 return 0;
3958 3969
3959 - unregister:
3960 - host1x_client_unregister(&sor->client);
3961 - rpm_disable:
3970 + uninit:
3971 + host1x_client_exit(&sor->client);
3962 3972 pm_runtime_disable(&pdev->dev);
3963 3973 remove:
3974 + if (sor->aux)
3975 + sor->output.ddc = NULL;
3976 +
3964 3977 tegra_output_remove(&sor->output);
3978 + put_aux:
3979 + if (sor->aux)
3980 + put_device(sor->aux->dev);
3981 +
3965 3982 return err;
3966 3983 }
3967 3984
··· 3991 3982 }
3992 3983
3993 3984 pm_runtime_disable(&pdev->dev);
3985 +
3986 + if (sor->aux) {
3987 + put_device(sor->aux->dev);
3988 + sor->output.ddc = NULL;
3989 + }
3994 3990
3995 3991 tegra_output_remove(&sor->output);
3996 3992
+4 -1
drivers/gpu/drm/ttm/ttm_bo.c
··· 1172 1172 if (!ttm_bo_evict_swapout_allowable(bo, ctx, &locked, NULL)) 1173 1173 return -EBUSY; 1174 1174 1175 - if (!ttm_bo_get_unless_zero(bo)) { 1175 + if (!bo->ttm || !ttm_tt_is_populated(bo->ttm) || 1176 + bo->ttm->page_flags & TTM_PAGE_FLAG_SG || 1177 + bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED || 1178 + !ttm_bo_get_unless_zero(bo)) { 1176 1179 if (locked) 1177 1180 dma_resv_unlock(bo->base.resv); 1178 1181 return -EBUSY;
+1 -7
drivers/gpu/drm/ttm/ttm_device.c
··· 143 143 144 144 for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) { 145 145 list_for_each_entry(bo, &man->lru[j], lru) { 146 - uint32_t num_pages; 146 + uint32_t num_pages = PFN_UP(bo->base.size); 147 147 148 - if (!bo->ttm || !ttm_tt_is_populated(bo->ttm) || 149 - bo->ttm->page_flags & TTM_PAGE_FLAG_SG || 150 - bo->ttm->page_flags & TTM_PAGE_FLAG_SWAPPED) 151 - continue; 152 - 153 - num_pages = bo->ttm->num_pages; 154 148 ret = ttm_bo_swapout(bo, ctx, gfp_flags); 155 149 /* ttm_bo_swapout has dropped the lru_lock */ 156 150 if (!ret)
+1 -1
drivers/gpu/drm/vc4/vc4_kms.c
··· 372 372 if (!old_hvs_state->fifo_state[channel].in_use) 373 373 continue; 374 374 375 - ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[i].pending_commit); 375 + ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[channel].pending_commit); 376 376 if (ret) 377 377 drm_err(dev, "Timed out waiting for commit\n"); 378 378 }
+24 -6
drivers/gpu/host1x/bus.c
··· 736 736 EXPORT_SYMBOL(host1x_driver_unregister); 737 737 738 738 /** 739 + * __host1x_client_init() - initialize a host1x client 740 + * @client: host1x client 741 + * @key: lock class key for the client-specific mutex 742 + */ 743 + void __host1x_client_init(struct host1x_client *client, struct lock_class_key *key) 744 + { 745 + INIT_LIST_HEAD(&client->list); 746 + __mutex_init(&client->lock, "host1x client lock", key); 747 + client->usecount = 0; 748 + } 749 + EXPORT_SYMBOL(__host1x_client_init); 750 + 751 + /** 752 + * host1x_client_exit() - uninitialize a host1x client 753 + * @client: host1x client 754 + */ 755 + void host1x_client_exit(struct host1x_client *client) 756 + { 757 + mutex_destroy(&client->lock); 758 + } 759 + EXPORT_SYMBOL(host1x_client_exit); 760 + 761 + /** 739 762 * __host1x_client_register() - register a host1x client 740 763 * @client: host1x client 741 764 * @key: lock class key for the client-specific mutex ··· 770 747 * device and call host1x_device_init(), which will in turn call each client's 771 748 * &host1x_client_ops.init implementation. 772 749 */ 773 - int __host1x_client_register(struct host1x_client *client, 774 - struct lock_class_key *key) 750 + int __host1x_client_register(struct host1x_client *client) 775 751 { 776 752 struct host1x *host1x; 777 753 int err; 778 - 779 - INIT_LIST_HEAD(&client->list); 780 - __mutex_init(&client->lock, "host1x client lock", key); 781 - client->usecount = 0; 782 754 783 755 mutex_lock(&devices_lock); 784 756
+17 -2
drivers/hid/Kconfig
··· 93 93 depends on HID 94 94 95 95 config HID_A4TECH 96 - tristate "A4 tech mice" 96 + tristate "A4TECH mice" 97 97 depends on HID 98 98 default !EXPERT 99 99 help 100 - Support for A4 tech X5 and WOP-35 / Trust 450L mice. 100 + Support for some A4TECH mice with two scroll wheels. 101 101 102 102 config HID_ACCUTOUCH 103 103 tristate "Accutouch touch device" ··· 921 921 depends on HID 922 922 help 923 923 Support for Samsung InfraRed remote control or keyboards. 924 + 925 + config HID_SEMITEK 926 + tristate "Semitek USB keyboards" 927 + depends on HID 928 + help 929 + Support for Semitek USB keyboards that are not fully compliant 930 + with the HID standard. 931 + 932 + There are many variants, including: 933 + - GK61, GK64, GK68, GK84, GK96, etc. 934 + - SK61, SK64, SK68, SK84, SK96, etc. 935 + - Dierya DK61/DK66 936 + - Tronsmart TK09R 937 + - Woo-dy 938 + - X-Bows Nature/Knight 924 939 925 940 config HID_SONY 926 941 tristate "Sony PS2/3/4 accessories"
+1
drivers/hid/Makefile
··· 106 106 obj-$(CONFIG_HID_RMI) += hid-rmi.o 107 107 obj-$(CONFIG_HID_SAITEK) += hid-saitek.o 108 108 obj-$(CONFIG_HID_SAMSUNG) += hid-samsung.o 109 + obj-$(CONFIG_HID_SEMITEK) += hid-semitek.o 109 110 obj-$(CONFIG_HID_SMARTJOYPLUS) += hid-sjoy.o 110 111 obj-$(CONFIG_HID_SONY) += hid-sony.o 111 112 obj-$(CONFIG_HID_SPEEDLINK) += hid-speedlink.o
+10 -9
drivers/hid/amd-sfh-hid/amd_sfh_client.c
··· 88 88 sensor_index = req_node->sensor_idx;
89 89 report_id = req_node->report_id;
90 90 node_type = req_node->report_type;
91 + kfree(req_node);
91 92
92 93 if (node_type == HID_FEATURE_REPORT) {
93 94 report_size = get_feature_report(sensor_index, report_id,
··· 143 142 int rc, i;
144 143
145 144 dev = &privdata->pdev->dev;
146 - cl_data = kzalloc(sizeof(*cl_data), GFP_KERNEL);
145 + cl_data = devm_kzalloc(dev, sizeof(*cl_data), GFP_KERNEL);
147 146 if (!cl_data)
148 147 return -ENOMEM;
149 148
··· 176 175 rc = -EINVAL;
177 176 goto cleanup;
178 177 }
179 - cl_data->feature_report[i] = kzalloc(feature_report_size, GFP_KERNEL);
178 + cl_data->feature_report[i] = devm_kzalloc(dev, feature_report_size, GFP_KERNEL);
180 179 if (!cl_data->feature_report[i]) {
181 180 rc = -ENOMEM;
182 181 goto cleanup;
183 182 }
184 - cl_data->input_report[i] = kzalloc(input_report_size, GFP_KERNEL);
183 + cl_data->input_report[i] = devm_kzalloc(dev, input_report_size, GFP_KERNEL);
185 184 if (!cl_data->input_report[i]) {
186 185 rc = -ENOMEM;
187 186 goto cleanup;
··· 190 189 info.sensor_idx = cl_idx;
191 190 info.dma_address = cl_data->sensor_dma_addr[i];
192 191
193 - cl_data->report_descr[i] = kzalloc(cl_data->report_descr_sz[i], GFP_KERNEL);
192 + cl_data->report_descr[i] =
193 + devm_kzalloc(dev, cl_data->report_descr_sz[i], GFP_KERNEL);
194 194 if (!cl_data->report_descr[i]) {
195 195 rc = -ENOMEM;
196 196 goto cleanup;
··· 216 214 cl_data->sensor_virt_addr[i],
217 215 cl_data->sensor_dma_addr[i]);
218 216 }
219 - kfree(cl_data->feature_report[i]);
220 - kfree(cl_data->input_report[i]);
221 - kfree(cl_data->report_descr[i]);
217 + devm_kfree(dev, cl_data->feature_report[i]);
218 + devm_kfree(dev, cl_data->input_report[i]);
219 + devm_kfree(dev, cl_data->report_descr[i]);
222 220 }
223 - kfree(cl_data);
221 + devm_kfree(dev, cl_data);
224 222 return rc;
225 223 }
226 224
··· 243 241 cl_data->sensor_dma_addr[i]);
244 242 }
245 243 }
246 - kfree(cl_data);
247 244 return 0;
248 245 }
-3
drivers/hid/amd-sfh-hid/amd_sfh_hid.c
··· 162 162 int i; 163 163 164 164 for (i = 0; i < cli_data->num_hid_devices; ++i) { 165 - kfree(cli_data->feature_report[i]); 166 - kfree(cli_data->input_report[i]); 167 - kfree(cli_data->report_descr[i]); 168 165 if (cli_data->hid_sensor_hubs[i]) { 169 166 kfree(cli_data->hid_sensor_hubs[i]->driver_data); 170 167 hid_destroy_device(cli_data->hid_sensor_hubs[i]);
+2
drivers/hid/hid-a4tech.c
··· 147 147 .driver_data = A4_2WHEEL_MOUSE_HACK_B8 }, 148 148 { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_RP_649), 149 149 .driver_data = A4_2WHEEL_MOUSE_HACK_B8 }, 150 + { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_NB_95), 151 + .driver_data = A4_2WHEEL_MOUSE_HACK_B8 }, 150 152 { } 151 153 }; 152 154 MODULE_DEVICE_TABLE(hid, a4_devices);
+20 -12
drivers/hid/hid-asus.c
··· 79 79 #define QUIRK_T100_KEYBOARD BIT(6) 80 80 #define QUIRK_T100CHI BIT(7) 81 81 #define QUIRK_G752_KEYBOARD BIT(8) 82 - #define QUIRK_T101HA_DOCK BIT(9) 83 - #define QUIRK_T90CHI BIT(10) 84 - #define QUIRK_MEDION_E1239T BIT(11) 85 - #define QUIRK_ROG_NKEY_KEYBOARD BIT(12) 82 + #define QUIRK_T90CHI BIT(9) 83 + #define QUIRK_MEDION_E1239T BIT(10) 84 + #define QUIRK_ROG_NKEY_KEYBOARD BIT(11) 86 85 87 86 #define I2C_KEYBOARD_QUIRKS (QUIRK_FIX_NOTEBOOK_REPORT | \ 88 87 QUIRK_NO_INIT_REPORTS | \ ··· 334 335 if (drvdata->quirks & QUIRK_MEDION_E1239T) 335 336 return asus_e1239t_event(drvdata, data, size); 336 337 337 - if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD) { 338 + if (drvdata->quirks & QUIRK_USE_KBD_BACKLIGHT) { 338 339 /* 339 340 * Skip these report ID, the device emits a continuous stream associated 340 341 * with the AURA mode it is in which looks like an 'echo'. ··· 354 355 return -1; 355 356 } 356 357 } 358 + if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD) { 359 + /* 360 + * G713 and G733 send these codes on some keypresses, depending on 361 + * the key pressed it can trigger a shutdown event if not caught. 
362 + */ 363 + if(data[0] == 0x02 && data[1] == 0x30) { 364 + return -1; 365 + } 366 + } 367 + 357 368 } 358 369 359 370 return 0; ··· 1081 1072 return ret; 1082 1073 } 1083 1074 1084 - /* use hid-multitouch for T101HA touchpad */ 1085 - if (id->driver_data & QUIRK_T101HA_DOCK && 1086 - hdev->collection->usage == HID_GD_MOUSE) 1087 - return -ENODEV; 1088 - 1089 1075 ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT); 1090 1076 if (ret) { 1091 1077 hid_err(hdev, "Asus hw start failed: %d\n", ret); ··· 1234 1230 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1235 1231 USB_DEVICE_ID_ASUSTEK_T100TAF_KEYBOARD), 1236 1232 QUIRK_T100_KEYBOARD | QUIRK_NO_CONSUMER_USAGES }, 1237 - { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 1238 - USB_DEVICE_ID_ASUSTEK_T101HA_KEYBOARD), QUIRK_T101HA_DOCK }, 1239 1233 { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_ASUS_AK1D) }, 1240 1234 { HID_USB_DEVICE(USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_ASUS_MD_5110) }, 1241 1235 { HID_USB_DEVICE(USB_VENDOR_ID_JESS, USB_DEVICE_ID_ASUS_MD_5112) }, ··· 1241 1239 USB_DEVICE_ID_ASUSTEK_T100CHI_KEYBOARD), QUIRK_T100CHI }, 1242 1240 { HID_USB_DEVICE(USB_VENDOR_ID_ITE, USB_DEVICE_ID_ITE_MEDION_E1239T), 1243 1241 QUIRK_MEDION_E1239T }, 1242 + /* 1243 + * Note bind to the HID_GROUP_GENERIC group, so that we only bind to the keyboard 1244 + * part, while letting hid-multitouch.c handle the touchpad. 1245 + */ 1246 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 1247 + USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_T101HA_KEYBOARD) }, 1244 1248 { } 1245 1249 }; 1246 1250 MODULE_DEVICE_TABLE(hid, asus_devices);
+3 -1
drivers/hid/hid-core.c
··· 2005 2005 case BUS_I2C: 2006 2006 bus = "I2C"; 2007 2007 break; 2008 + case BUS_VIRTUAL: 2009 + bus = "VIRTUAL"; 2010 + break; 2008 2011 default: 2009 2012 bus = "<UNKNOWN>"; 2010 2013 } ··· 2591 2588 2592 2589 return 0; 2593 2590 } 2594 - 2595 2591 EXPORT_SYMBOL_GPL(hid_check_keys_pressed); 2596 2592 2597 2593 static int __init hid_init(void)
+3
drivers/hid/hid-debug.c
··· 930 930 [KEY_APPSELECT] = "AppSelect", 931 931 [KEY_SCREENSAVER] = "ScreenSaver", 932 932 [KEY_VOICECOMMAND] = "VoiceCommand", 933 + [KEY_ASSISTANT] = "Assistant", 934 + [KEY_KBD_LAYOUT_NEXT] = "KbdLayoutNext", 935 + [KEY_EMOJI_PICKER] = "EmojiPicker", 933 936 [KEY_BRIGHTNESS_MIN] = "BrightnessMin", 934 937 [KEY_BRIGHTNESS_MAX] = "BrightnessMax", 935 938 [KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
+16 -13
drivers/hid/hid-ft260.c
··· 201 201 u8 address; /* 7-bit I2C address */ 202 202 u8 flag; /* I2C transaction condition */ 203 203 u8 length; /* data payload length */ 204 - u8 data[60]; /* data payload */ 204 + u8 data[FT260_WR_DATA_MAX]; /* data payload */ 205 205 } __packed; 206 206 207 207 struct ft260_i2c_read_request_report { ··· 249 249 250 250 ret = hid_hw_raw_request(hdev, report_id, buf, len, HID_FEATURE_REPORT, 251 251 HID_REQ_GET_REPORT); 252 - memcpy(data, buf, len); 252 + if (likely(ret == len)) 253 + memcpy(data, buf, len); 254 + else if (ret >= 0) 255 + ret = -EIO; 253 256 kfree(buf); 254 257 return ret; 255 258 } ··· 301 298 302 299 ret = ft260_hid_feature_report_get(hdev, FT260_I2C_STATUS, 303 300 (u8 *)&report, sizeof(report)); 304 - if (ret < 0) { 301 + if (unlikely(ret < 0)) { 305 302 hid_err(hdev, "failed to retrieve status: %d\n", ret); 306 303 return ret; 307 304 } ··· 431 428 432 429 struct ft260_i2c_write_request_report *rep = 433 430 (struct ft260_i2c_write_request_report *)dev->write_buf; 431 + 432 + if (data_len >= sizeof(rep->data)) 433 + return -EINVAL; 434 434 435 435 rep->address = addr; 436 436 rep->data[0] = cmd; ··· 727 721 728 722 ret = ft260_hid_feature_report_get(hdev, FT260_SYSTEM_SETTINGS, 729 723 (u8 *)cfg, len); 730 - if (ret != len) { 724 + if (ret < 0) { 731 725 hid_err(hdev, "failed to retrieve system status\n"); 732 - if (ret >= 0) 733 - return -EIO; 726 + return ret; 734 727 } 735 728 return 0; 736 729 } ··· 782 777 int ret; 783 778 784 779 ret = ft260_hid_feature_report_get(hdev, id, cfg, len); 785 - if (ret != len && ret >= 0) 786 - return -EIO; 780 + if (ret < 0) 781 + return ret; 787 782 788 783 return scnprintf(buf, PAGE_SIZE, "%hi\n", *field); 789 784 } ··· 794 789 int ret; 795 790 796 791 ret = ft260_hid_feature_report_get(hdev, id, cfg, len); 797 - if (ret != len && ret >= 0) 798 - return -EIO; 792 + if (ret < 0) 793 + return ret; 799 794 800 795 return scnprintf(buf, PAGE_SIZE, "%hi\n", le16_to_cpu(*field)); 801 796 } ··· 946 941 947 
942 	ret = ft260_hid_feature_report_get(hdev, FT260_CHIP_VERSION,
948 943 					   (u8 *)&version, sizeof(version));
949 - 	if (ret != sizeof(version)) {
944 + 	if (ret < 0) {
950 945 		hid_err(hdev, "failed to retrieve chip version\n");
951 - 		if (ret >= 0)
952 - 			ret = -EIO;
953 946 		goto err_hid_close;
954 947 	}
955 948 
+1
drivers/hid/hid-gt683r.c
···
54 54 	{ HID_USB_DEVICE(USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GT683R_LED_PANEL) },
55 55 	{ }
56 56 };
57 + MODULE_DEVICE_TABLE(hid, gt683r_led_id);
57 58 
58 59 static void gt683r_brightness_set(struct led_classdev *led_cdev,
59 60 				  enum led_brightness brightness)
+7 -2
drivers/hid/hid-ids.h
··· 26 26 #define USB_DEVICE_ID_A4TECH_WCP32PU 0x0006 27 27 #define USB_DEVICE_ID_A4TECH_X5_005D 0x000a 28 28 #define USB_DEVICE_ID_A4TECH_RP_649 0x001a 29 + #define USB_DEVICE_ID_A4TECH_NB_95 0x022b 29 30 30 31 #define USB_VENDOR_ID_AASHIMA 0x06d6 31 32 #define USB_DEVICE_ID_AASHIMA_GAMEPAD 0x0025 ··· 300 299 301 300 #define USB_VENDOR_ID_CORSAIR 0x1b1c 302 301 #define USB_DEVICE_ID_CORSAIR_K90 0x1b02 303 - 304 - #define USB_VENDOR_ID_CORSAIR 0x1b1c 305 302 #define USB_DEVICE_ID_CORSAIR_K70R 0x1b09 306 303 #define USB_DEVICE_ID_CORSAIR_K95RGB 0x1b11 307 304 #define USB_DEVICE_ID_CORSAIR_M65RGB 0x1b12 ··· 750 751 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 751 752 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 752 753 #define USB_DEVICE_ID_LENOVO_X1_TAB3 0x60b5 754 + #define USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E 0x600e 753 755 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D 0x608d 754 756 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019 0x6019 755 757 #define USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E 0x602e ··· 1051 1051 #define USB_DEVICE_ID_SAITEK_X52 0x075c 1052 1052 #define USB_DEVICE_ID_SAITEK_X52_2 0x0255 1053 1053 #define USB_DEVICE_ID_SAITEK_X52_PRO 0x0762 1054 + #define USB_DEVICE_ID_SAITEK_X65 0x0b6a 1054 1055 1055 1056 #define USB_VENDOR_ID_SAMSUNG 0x0419 1056 1057 #define USB_DEVICE_ID_SAMSUNG_IR_REMOTE 0x0001 ··· 1060 1059 #define USB_VENDOR_ID_SEMICO 0x1a2c 1061 1060 #define USB_DEVICE_ID_SEMICO_USB_KEYKOARD 0x0023 1062 1061 #define USB_DEVICE_ID_SEMICO_USB_KEYKOARD2 0x0027 1062 + 1063 + #define USB_VENDOR_ID_SEMITEK 0x1ea7 1064 + #define USB_DEVICE_ID_SEMITEK_KEYBOARD 0x0907 1063 1065 1064 1066 #define USB_VENDOR_ID_SENNHEISER 0x1395 1065 1067 #define USB_DEVICE_ID_SENNHEISER_BTD500USB 0x002c ··· 1165 1161 #define USB_DEVICE_ID_SYNAPTICS_DELL_K12A 0x2819 1166 1162 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012 0x2968 1167 1163 #define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710 1164 + #define USB_DEVICE_ID_SYNAPTICS_DELL_K15A 0x6e21 1168 1165 
#define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1002 0x73f4 1169 1166 #define USB_DEVICE_ID_SYNAPTICS_ACER_ONE_S1003 0x73f5 1170 1167 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7
+3
drivers/hid/hid-input.c
···
964 964 
965 965 	case 0x0cd: map_key_clear(KEY_PLAYPAUSE);	break;
966 966 	case 0x0cf: map_key_clear(KEY_VOICECOMMAND);	break;
967 + 
968 + 	case 0x0d9: map_key_clear(KEY_EMOJI_PICKER);	break;
969 + 
967 970 	case 0x0e0: map_abs_clear(ABS_VOLUME);		break;
968 971 	case 0x0e2: map_key_clear(KEY_MUTE);		break;
969 972 	case 0x0e5: map_key_clear(KEY_BASSBOOST);	break;
+1
drivers/hid/hid-logitech-hidpp.c
···
1263 1263 	int status;
1264 1264 
1265 1265 	long flags = (long) data[2];
1266 + 	*level = POWER_SUPPLY_CAPACITY_LEVEL_UNKNOWN;
1266 1267 
1267 1268 	if (flags & 0x80)
1268 1269 		switch (flags & 0x07) {
+5 -2
drivers/hid/hid-magicmouse.c
···
693 693 	if (id->vendor == USB_VENDOR_ID_APPLE &&
694 694 	    id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
695 695 	    hdev->type != HID_TYPE_USBMOUSE)
696 - 		return 0;
696 + 		return -ENODEV;
697 697 
698 698 	msc = devm_kzalloc(&hdev->dev, sizeof(*msc), GFP_KERNEL);
699 699 	if (msc == NULL) {
···
779 779 static void magicmouse_remove(struct hid_device *hdev)
780 780 {
781 781 	struct magicmouse_sc *msc = hid_get_drvdata(hdev);
782 - 	cancel_delayed_work_sync(&msc->work);
782 + 
783 + 	if (msc)
784 + 		cancel_delayed_work_sync(&msc->work);
785 + 
783 786 	hid_hw_stop(hdev);
784 787 }
785 788 
+37 -9
drivers/hid/hid-multitouch.c
··· 70 70 #define MT_QUIRK_WIN8_PTP_BUTTONS BIT(18) 71 71 #define MT_QUIRK_SEPARATE_APP_REPORT BIT(19) 72 72 #define MT_QUIRK_FORCE_MULTI_INPUT BIT(20) 73 + #define MT_QUIRK_DISABLE_WAKEUP BIT(21) 73 74 74 75 #define MT_INPUTMODE_TOUCHSCREEN 0x02 75 76 #define MT_INPUTMODE_TOUCHPAD 0x03 ··· 192 191 #define MT_CLS_EXPORT_ALL_INPUTS 0x0013 193 192 /* reserved 0x0014 */ 194 193 #define MT_CLS_WIN_8_FORCE_MULTI_INPUT 0x0015 194 + #define MT_CLS_WIN_8_DISABLE_WAKEUP 0x0016 195 195 196 196 /* vendor specific classes */ 197 197 #define MT_CLS_3M 0x0101 ··· 284 282 MT_QUIRK_STICKY_FINGERS | 285 283 MT_QUIRK_WIN8_PTP_BUTTONS | 286 284 MT_QUIRK_FORCE_MULTI_INPUT, 285 + .export_all_inputs = true }, 286 + { .name = MT_CLS_WIN_8_DISABLE_WAKEUP, 287 + .quirks = MT_QUIRK_ALWAYS_VALID | 288 + MT_QUIRK_IGNORE_DUPLICATES | 289 + MT_QUIRK_HOVERING | 290 + MT_QUIRK_CONTACT_CNT_ACCURATE | 291 + MT_QUIRK_STICKY_FINGERS | 292 + MT_QUIRK_WIN8_PTP_BUTTONS | 293 + MT_QUIRK_DISABLE_WAKEUP, 287 294 .export_all_inputs = true }, 288 295 289 296 /* ··· 615 604 if (!(HID_MAIN_ITEM_VARIABLE & field->flags)) 616 605 continue; 617 606 618 - for (n = 0; n < field->report_count; n++) { 619 - if (field->usage[n].hid == HID_DG_CONTACTID) 620 - rdata->is_mt_collection = true; 607 + if (field->logical == HID_DG_FINGER || td->hdev->group != HID_GROUP_MULTITOUCH_WIN_8) { 608 + for (n = 0; n < field->report_count; n++) { 609 + if (field->usage[n].hid == HID_DG_CONTACTID) { 610 + rdata->is_mt_collection = true; 611 + break; 612 + } 613 + } 621 614 } 622 615 } 623 616 ··· 774 759 return 1; 775 760 case HID_DG_CONFIDENCE: 776 761 if ((cls->name == MT_CLS_WIN_8 || 777 - cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT) && 762 + cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT || 763 + cls->name == MT_CLS_WIN_8_DISABLE_WAKEUP) && 778 764 (field->application == HID_DG_TOUCHPAD || 779 765 field->application == HID_DG_TOUCHSCREEN)) 780 766 app->quirks |= MT_QUIRK_CONFIDENCE; ··· 1592 1576 /* we do not set suffix = 
"Touchscreen" */ 1593 1577 hi->input->name = hdev->name; 1594 1578 break; 1595 - case HID_DG_STYLUS: 1596 - /* force BTN_STYLUS to allow tablet matching in udev */ 1597 - __set_bit(BTN_STYLUS, hi->input->keybit); 1598 - break; 1599 1579 case HID_VD_ASUS_CUSTOM_MEDIA_KEYS: 1600 1580 suffix = "Custom Media Keys"; 1601 1581 break; 1582 + case HID_DG_STYLUS: 1583 + /* force BTN_STYLUS to allow tablet matching in udev */ 1584 + __set_bit(BTN_STYLUS, hi->input->keybit); 1585 + fallthrough; 1602 1586 case HID_DG_PEN: 1603 1587 suffix = "Stylus"; 1604 1588 break; ··· 1765 1749 #ifdef CONFIG_PM 1766 1750 static int mt_suspend(struct hid_device *hdev, pm_message_t state) 1767 1751 { 1752 + struct mt_device *td = hid_get_drvdata(hdev); 1753 + 1768 1754 /* High latency is desirable for power savings during S3/S0ix */ 1769 - mt_set_modes(hdev, HID_LATENCY_HIGH, true, true); 1755 + if (td->mtclass.quirks & MT_QUIRK_DISABLE_WAKEUP) 1756 + mt_set_modes(hdev, HID_LATENCY_HIGH, false, false); 1757 + else 1758 + mt_set_modes(hdev, HID_LATENCY_HIGH, true, true); 1759 + 1770 1760 return 0; 1771 1761 } 1772 1762 ··· 1830 1808 { .driver_data = MT_CLS_EXPORT_ALL_INPUTS, 1831 1809 MT_USB_DEVICE(USB_VENDOR_ID_ANTON, 1832 1810 USB_DEVICE_ID_ANTON_TOUCH_PAD) }, 1811 + 1812 + /* Asus T101HA */ 1813 + { .driver_data = MT_CLS_WIN_8_DISABLE_WAKEUP, 1814 + HID_DEVICE(BUS_USB, HID_GROUP_MULTITOUCH_WIN_8, 1815 + USB_VENDOR_ID_ASUSTEK, 1816 + USB_DEVICE_ID_ASUSTEK_T101HA_KEYBOARD) }, 1833 1817 1834 1818 /* Asus T304UA */ 1835 1819 { .driver_data = MT_CLS_ASUS,
+4
drivers/hid/hid-quirks.c
··· 110 110 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_M912), HID_QUIRK_MULTI_INPUT }, 111 111 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE), HID_QUIRK_MULTI_INPUT }, 112 112 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2), HID_QUIRK_ALWAYS_POLL }, 113 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_OPTICAL_USB_MOUSE_600E), HID_QUIRK_ALWAYS_POLL }, 113 114 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_608D), HID_QUIRK_ALWAYS_POLL }, 114 115 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_6019), HID_QUIRK_ALWAYS_POLL }, 115 116 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_PIXART_USB_MOUSE_602E), HID_QUIRK_ALWAYS_POLL }, ··· 159 158 { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 160 159 { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_2), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 161 160 { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X52_PRO), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 161 + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_X65), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, 162 162 { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS }, 163 163 { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS }, 164 164 { HID_USB_DEVICE(USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB), HID_QUIRK_NOGET }, ··· 178 176 { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_QUAD_HD), HID_QUIRK_NO_INIT_REPORTS }, 179 177 { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP_V103), HID_QUIRK_NO_INIT_REPORTS }, 180 178 { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DELL_K12A), HID_QUIRK_NO_INIT_REPORTS }, 179 + { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, 
USB_DEVICE_ID_SYNAPTICS_DELL_K15A), HID_QUIRK_NO_INIT_REPORTS }, 181 180 { HID_USB_DEVICE(USB_VENDOR_ID_TOPMAX, USB_DEVICE_ID_TOPMAX_COBRAPAD), HID_QUIRK_BADPAD }, 182 181 { HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT }, 183 182 { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET }, ··· 214 211 { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_WCP32PU) }, 215 212 { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_X5_005D) }, 216 213 { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_RP_649) }, 214 + { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_NB_95) }, 217 215 #endif 218 216 #if IS_ENABLED(CONFIG_HID_ACCUTOUCH) 219 217 { HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_ACCUTOUCH_2216) },
+40
drivers/hid/hid-semitek.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * HID driver for Semitek keyboards 4 + * 5 + * Copyright (c) 2021 Benjamin Moody 6 + */ 7 + 8 + #include <linux/device.h> 9 + #include <linux/hid.h> 10 + #include <linux/module.h> 11 + 12 + #include "hid-ids.h" 13 + 14 + static __u8 *semitek_report_fixup(struct hid_device *hdev, __u8 *rdesc, 15 + unsigned int *rsize) 16 + { 17 + /* In the report descriptor for interface 2, fix the incorrect 18 + description of report ID 0x04 (the report contains a 19 + bitmask, not an array of keycodes.) */ 20 + if (*rsize == 0xcb && rdesc[0x83] == 0x81 && rdesc[0x84] == 0x00) { 21 + hid_info(hdev, "fixing up Semitek report descriptor\n"); 22 + rdesc[0x84] = 0x02; 23 + } 24 + return rdesc; 25 + } 26 + 27 + static const struct hid_device_id semitek_devices[] = { 28 + { HID_USB_DEVICE(USB_VENDOR_ID_SEMITEK, USB_DEVICE_ID_SEMITEK_KEYBOARD) }, 29 + { } 30 + }; 31 + MODULE_DEVICE_TABLE(hid, semitek_devices); 32 + 33 + static struct hid_driver semitek_driver = { 34 + .name = "semitek", 35 + .id_table = semitek_devices, 36 + .report_fixup = semitek_report_fixup, 37 + }; 38 + module_hid_driver(semitek_driver); 39 + 40 + MODULE_LICENSE("GPL");
+5 -3
drivers/hid/hid-sensor-custom.c
··· 387 387 struct hid_sensor_custom *sensor_inst = dev_get_drvdata(dev); 388 388 int index, field_index, usage; 389 389 char name[HID_CUSTOM_NAME_LENGTH]; 390 - int value; 390 + int value, ret; 391 391 392 392 if (sscanf(attr->attr.name, "feature-%x-%x-%s", &index, &usage, 393 393 name) == 3) { ··· 403 403 404 404 report_id = sensor_inst->fields[field_index].attribute. 405 405 report_id; 406 - sensor_hub_set_feature(sensor_inst->hsdev, report_id, 407 - index, sizeof(value), &value); 406 + ret = sensor_hub_set_feature(sensor_inst->hsdev, report_id, 407 + index, sizeof(value), &value); 408 + if (ret) 409 + return ret; 408 410 } else 409 411 return -EINVAL; 410 412
+9 -4
drivers/hid/hid-sensor-hub.c
··· 209 209 buffer_size = buffer_size / sizeof(__s32); 210 210 if (buffer_size) { 211 211 for (i = 0; i < buffer_size; ++i) { 212 - hid_set_field(report->field[field_index], i, 213 - (__force __s32)cpu_to_le32(*buf32)); 212 + ret = hid_set_field(report->field[field_index], i, 213 + (__force __s32)cpu_to_le32(*buf32)); 214 + if (ret) 215 + goto done_proc; 216 + 214 217 ++buf32; 215 218 } 216 219 } 217 220 if (remaining_bytes) { 218 221 value = 0; 219 222 memcpy(&value, (u8 *)buf32, remaining_bytes); 220 - hid_set_field(report->field[field_index], i, 221 - (__force __s32)cpu_to_le32(value)); 223 + ret = hid_set_field(report->field[field_index], i, 224 + (__force __s32)cpu_to_le32(value)); 225 + if (ret) 226 + goto done_proc; 222 227 } 223 228 hid_hw_request(hsdev->hdev, report, HID_REQ_SET_REPORT); 224 229 hid_hw_wait(hsdev->hdev);
+1 -1
drivers/hid/hid-thrustmaster.c
···
312 312 	}
313 313 
314 314 	tm_wheel->change_request = kzalloc(sizeof(struct usb_ctrlrequest), GFP_KERNEL);
315 - 	if (!tm_wheel->model_request) {
315 + 	if (!tm_wheel->change_request) {
316 316 		ret = -ENOMEM;
317 317 		goto error5;
318 318 	}
+10 -3
drivers/hid/i2c-hid/i2c-hid-core.c
··· 45 45 #define I2C_HID_QUIRK_BOGUS_IRQ BIT(4) 46 46 #define I2C_HID_QUIRK_RESET_ON_RESUME BIT(5) 47 47 #define I2C_HID_QUIRK_BAD_INPUT_SIZE BIT(6) 48 + #define I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET BIT(7) 48 49 49 50 50 51 /* flags */ ··· 179 178 I2C_HID_QUIRK_RESET_ON_RESUME }, 180 179 { USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720, 181 180 I2C_HID_QUIRK_BAD_INPUT_SIZE }, 181 + /* 182 + * Sending the wakeup after reset actually break ELAN touchscreen controller 183 + */ 184 + { USB_VENDOR_ID_ELAN, HID_ANY_ID, 185 + I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET }, 182 186 { 0, 0 } 183 187 }; 184 188 ··· 467 461 } 468 462 469 463 /* At least some SIS devices need this after reset */ 470 - ret = i2c_hid_set_power(client, I2C_HID_PWR_ON); 464 + if (!(ihid->quirks & I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET)) 465 + ret = i2c_hid_set_power(client, I2C_HID_PWR_ON); 471 466 472 467 out_unlock: 473 468 mutex_unlock(&ihid->reset_lock); ··· 997 990 hid->vendor = le16_to_cpu(ihid->hdesc.wVendorID); 998 991 hid->product = le16_to_cpu(ihid->hdesc.wProductID); 999 992 1000 - snprintf(hid->name, sizeof(hid->name), "%s %04hX:%04hX", 1001 - client->name, hid->vendor, hid->product); 993 + snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X", 994 + client->name, (u16)hid->vendor, (u16)hid->product); 1002 995 strlcpy(hid->phys, dev_name(&client->dev), sizeof(hid->phys)); 1003 996 1004 997 ihid->quirks = i2c_hid_lookup_quirk(hid->vendor, hid->product);
+2
drivers/hid/intel-ish-hid/ipc/hw-ish.h
···
28 28 #define EHL_Ax_DEVICE_ID	0x4BB3
29 29 #define TGL_LP_DEVICE_ID	0xA0FC
30 30 #define TGL_H_DEVICE_ID		0x43FC
31 + #define ADL_S_DEVICE_ID		0x7AF8
32 + #define ADL_P_DEVICE_ID		0x51FC
31 33 
32 34 #define REVISION_ID_CHT_A0	0x6
33 35 #define REVISION_ID_CHT_Ax_SI	0x0
+2
drivers/hid/intel-ish-hid/ipc/pci-ish.c
···
39 39 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, EHL_Ax_DEVICE_ID)},
40 40 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, TGL_LP_DEVICE_ID)},
41 41 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, TGL_H_DEVICE_ID)},
42 + 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, ADL_S_DEVICE_ID)},
43 + 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, ADL_P_DEVICE_ID)},
42 44 	{0, }
43 45 };
44 46 MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+3 -3
drivers/hid/surface-hid/surface_hid_core.c
···
168 168 
169 169 	shid->hid->dev.parent = shid->dev;
170 170 	shid->hid->bus = BUS_HOST;
171 - 	shid->hid->vendor = cpu_to_le16(shid->attrs.vendor);
172 - 	shid->hid->product = cpu_to_le16(shid->attrs.product);
173 - 	shid->hid->version = cpu_to_le16(shid->hid_desc.hid_version);
171 + 	shid->hid->vendor = get_unaligned_le16(&shid->attrs.vendor);
172 + 	shid->hid->product = get_unaligned_le16(&shid->attrs.product);
173 + 	shid->hid->version = get_unaligned_le16(&shid->hid_desc.hid_version);
174 174 	shid->hid->country = shid->hid_desc.country_code;
175 175 
176 176 	snprintf(shid->hid->name, sizeof(shid->hid->name), "Microsoft Surface %04X:%04X",
+1 -1
drivers/hid/usbhid/hid-core.c
···
374 374 	raw_report = usbhid->ctrl[usbhid->ctrltail].raw_report;
375 375 	dir = usbhid->ctrl[usbhid->ctrltail].dir;
376 376 
377 - 	len = ((report->size - 1) >> 3) + 1 + (report->id > 0);
377 + 	len = hid_report_len(report);
378 378 	if (dir == USB_DIR_OUT) {
379 379 		usbhid->urbctrl->pipe = usb_sndctrlpipe(hid_to_usb_dev(hid), 0);
380 380 		usbhid->urbctrl->transfer_buffer_length = len;
+1
drivers/hid/usbhid/hid-pidff.c
···
1292 1292 
1293 1293 	if (pidff->pool[PID_DEVICE_MANAGED_POOL].value &&
1294 1294 	    pidff->pool[PID_DEVICE_MANAGED_POOL].value[0] == 0) {
1295 + 		error = -EPERM;
1295 1296 		hid_notice(hid,
1296 1297 			   "device does not support device managed pool\n");
1297 1298 		goto fail;
+14
drivers/hwmon/corsair-psu.c
··· 771 771 return 0; 772 772 } 773 773 774 + #ifdef CONFIG_PM 775 + static int corsairpsu_resume(struct hid_device *hdev) 776 + { 777 + struct corsairpsu_data *priv = hid_get_drvdata(hdev); 778 + 779 + /* some PSUs turn off the microcontroller during standby, so a reinit is required */ 780 + return corsairpsu_init(priv); 781 + } 782 + #endif 783 + 774 784 static const struct hid_device_id corsairpsu_idtable[] = { 775 785 { HID_USB_DEVICE(0x1b1c, 0x1c03) }, /* Corsair HX550i */ 776 786 { HID_USB_DEVICE(0x1b1c, 0x1c04) }, /* Corsair HX650i */ ··· 803 793 .probe = corsairpsu_probe, 804 794 .remove = corsairpsu_remove, 805 795 .raw_event = corsairpsu_raw_event, 796 + #ifdef CONFIG_PM 797 + .resume = corsairpsu_resume, 798 + .reset_resume = corsairpsu_resume, 799 + #endif 806 800 }; 807 801 module_hid_driver(corsairpsu_driver); 808 802
+2 -2
drivers/hwmon/dell-smm-hwmon.c
···
838 838 static umode_t i8k_is_visible(struct kobject *kobj, struct attribute *attr,
839 839 			      int index)
840 840 {
841 - 	if (disallow_fan_support && index >= 8)
841 + 	if (disallow_fan_support && index >= 20)
842 842 		return 0;
843 843 	if (disallow_fan_type_call &&
844 - 	    (index == 9 || index == 12 || index == 15))
844 + 	    (index == 21 || index == 25 || index == 28))
845 845 		return 0;
846 846 	if (index >= 0 && index <= 1 &&
847 847 	    !(i8k_hwmon_flags & I8K_HWMON_HAVE_TEMP1))
+25 -7
drivers/hwmon/pmbus/fsp-3y.c
··· 37 37 struct pmbus_driver_info info; 38 38 int chip; 39 39 int page; 40 + 41 + bool vout_linear_11; 40 42 }; 41 43 42 44 #define to_fsp3y_data(x) container_of(x, struct fsp3y_data, info) ··· 110 108 int rv; 111 109 112 110 /* 113 - * YH5151-E outputs vout in linear11. The conversion is done when 114 - * reading. Here, we have to inject pmbus_core with the correct 115 - * exponent (it is -6). 111 + * Inject an exponent for non-compliant YH5151-E. 116 112 */ 117 - if (data->chip == yh5151e && reg == PMBUS_VOUT_MODE) 113 + if (data->vout_linear_11 && reg == PMBUS_VOUT_MODE) 118 114 return 0x1A; 119 115 120 116 rv = set_page(client, page); ··· 161 161 return rv; 162 162 163 163 /* 164 - * YH-5151E is non-compliant and outputs output voltages in linear11 165 - * instead of linear16. 164 + * Handle YH-5151E non-compliant linear11 vout voltage. 166 165 */ 167 - if (data->chip == yh5151e && reg == PMBUS_READ_VOUT) 166 + if (data->vout_linear_11 && reg == PMBUS_READ_VOUT) 168 167 rv = sign_extend32(rv, 10) & 0xffff; 169 168 170 169 return rv; ··· 254 255 data->page = rv; 255 256 256 257 data->info = fsp3y_info[data->chip]; 258 + 259 + /* 260 + * YH-5151E sometimes reports vout in linear11 and sometimes in 261 + * linear16. This depends on the exact individual piece of hardware. One 262 + * YH-5151E can use linear16 and another might use linear11 instead. 263 + * 264 + * The format can be recognized by reading VOUT_MODE - if it doesn't 265 + * report a valid exponent, then vout uses linear11. Otherwise, the 266 + * device is compliant and uses linear16. 267 + */ 268 + data->vout_linear_11 = false; 269 + if (data->chip == yh5151e) { 270 + rv = i2c_smbus_read_byte_data(client, PMBUS_VOUT_MODE); 271 + if (rv < 0) 272 + return rv; 273 + 274 + if (rv == 0xFF) 275 + data->vout_linear_11 = true; 276 + } 257 277 258 278 return pmbus_do_probe(client, &data->info); 259 279 }
+2 -2
drivers/hwmon/pmbus/isl68137.c
···
244 244 		info->read_word_data = raa_dmpvr2_read_word_data;
245 245 		break;
246 246 	case raa_dmpvr2_2rail_nontc:
247 - 		info->func[0] &= ~PMBUS_HAVE_TEMP;
248 - 		info->func[1] &= ~PMBUS_HAVE_TEMP;
247 + 		info->func[0] &= ~PMBUS_HAVE_TEMP3;
248 + 		info->func[1] &= ~PMBUS_HAVE_TEMP3;
249 249 		fallthrough;
250 250 	case raa_dmpvr2_2rail:
251 251 		info->pages = 2;
+1 -1
drivers/hwmon/pmbus/q54sj108a2.c
···
299 299 		dev_err(&client->dev, "Failed to read Manufacturer ID\n");
300 300 		return ret;
301 301 	}
302 - 	if (ret != 5 || strncmp(buf, "DELTA", 5)) {
302 + 	if (ret != 6 || strncmp(buf, "DELTA", 5)) {
303 303 		buf[ret] = '\0';
304 304 		dev_err(dev, "Unsupported Manufacturer ID '%s'\n", buf);
305 305 		return -ENODEV;
+9
drivers/hwmon/scpi-hwmon.c
···
99 99 
100 100 	scpi_scale_reading(&value, sensor);
101 101 
102 + 	/*
103 + 	 * Temperature sensor values are treated as signed values based on
104 + 	 * observation even though that is not explicitly specified, and
105 + 	 * because an unsigned u64 temperature does not really make practical
106 + 	 * sense especially when the temperature is below zero degrees Celsius.
107 + 	 */
108 + 	if (sensor->info.class == TEMPERATURE)
109 + 		return sprintf(buf, "%lld\n", (s64)value);
110 + 
102 111 	return sprintf(buf, "%llu\n", value);
103 112 }
+15 -2
drivers/hwmon/tps23861.c
··· 99 99 #define POWER_ENABLE 0x19 100 100 #define TPS23861_NUM_PORTS 4 101 101 102 + #define TPS23861_GENERAL_MASK_1 0x17 103 + #define TPS23861_CURRENT_SHUNT_MASK BIT(0) 104 + 102 105 #define TEMPERATURE_LSB 652 /* 0.652 degrees Celsius */ 103 106 #define VOLTAGE_LSB 3662 /* 3.662 mV */ 104 107 #define SHUNT_RESISTOR_DEFAULT 255000 /* 255 mOhm */ 105 - #define CURRENT_LSB_255 62260 /* 62.260 uA */ 106 - #define CURRENT_LSB_250 61039 /* 61.039 uA */ 108 + #define CURRENT_LSB_250 62260 /* 62.260 uA */ 109 + #define CURRENT_LSB_255 61039 /* 61.039 uA */ 107 110 #define RESISTANCE_LSB 110966 /* 11.0966 Ohm*/ 108 111 #define RESISTANCE_LSB_LOW 157216 /* 15.7216 Ohm*/ 109 112 ··· 120 117 static struct regmap_config tps23861_regmap_config = { 121 118 .reg_bits = 8, 122 119 .val_bits = 8, 120 + .max_register = 0x6f, 123 121 }; 124 122 125 123 static int tps23861_read_temp(struct tps23861_data *data, long *val) ··· 563 559 data->shunt_resistor = shunt_resistor; 564 560 else 565 561 data->shunt_resistor = SHUNT_RESISTOR_DEFAULT; 562 + 563 + if (data->shunt_resistor == SHUNT_RESISTOR_DEFAULT) 564 + regmap_clear_bits(data->regmap, 565 + TPS23861_GENERAL_MASK_1, 566 + TPS23861_CURRENT_SHUNT_MASK); 567 + else 568 + regmap_set_bits(data->regmap, 569 + TPS23861_GENERAL_MASK_1, 570 + TPS23861_CURRENT_SHUNT_MASK); 566 571 567 572 hwmon_dev = devm_hwmon_device_register_with_info(dev, client->name, 568 573 data, &tps23861_chip_info,
+4 -5
drivers/i2c/busses/i2c-altera.c
··· 55 55 #define ALTR_I2C_XFER_TIMEOUT (msecs_to_jiffies(250)) 56 56 57 57 /** 58 - * altr_i2c_dev - I2C device context 58 + * struct altr_i2c_dev - I2C device context 59 59 * @base: pointer to register struct 60 60 * @msg: pointer to current message 61 61 * @msg_len: number of bytes transferred in msg ··· 172 172 altr_i2c_int_enable(idev, ALTR_I2C_ALL_IRQ, false); 173 173 } 174 174 175 - /** 175 + /* 176 176 * altr_i2c_transfer - On the last byte to be transmitted, send 177 177 * a Stop bit on the last byte. 178 178 */ ··· 185 185 writel(data, idev->base + ALTR_I2C_TFR_CMD); 186 186 } 187 187 188 - /** 188 + /* 189 189 * altr_i2c_empty_rx_fifo - Fetch data from RX FIFO until end of 190 190 * transfer. Send a Stop bit on the last byte. 191 191 */ ··· 201 201 } 202 202 } 203 203 204 - /** 204 + /* 205 205 * altr_i2c_fill_tx_fifo - Fill TX FIFO from current message buffer. 206 - * @return: Number of bytes left to transfer. 207 206 */ 208 207 static int altr_i2c_fill_tx_fifo(struct altr_i2c_dev *idev) 209 208 {
+20 -1
drivers/i2c/busses/i2c-qcom-geni.c
··· 650 650 return 0; 651 651 } 652 652 653 + static void geni_i2c_shutdown(struct platform_device *pdev) 654 + { 655 + struct geni_i2c_dev *gi2c = platform_get_drvdata(pdev); 656 + 657 + /* Make client i2c transfers start failing */ 658 + i2c_mark_adapter_suspended(&gi2c->adap); 659 + } 660 + 653 661 static int __maybe_unused geni_i2c_runtime_suspend(struct device *dev) 654 662 { 655 663 int ret; ··· 698 690 { 699 691 struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); 700 692 693 + i2c_mark_adapter_suspended(&gi2c->adap); 694 + 701 695 if (!gi2c->suspended) { 702 696 geni_i2c_runtime_suspend(dev); 703 697 pm_runtime_disable(dev); ··· 709 699 return 0; 710 700 } 711 701 702 + static int __maybe_unused geni_i2c_resume_noirq(struct device *dev) 703 + { 704 + struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); 705 + 706 + i2c_mark_adapter_resumed(&gi2c->adap); 707 + return 0; 708 + } 709 + 712 710 static const struct dev_pm_ops geni_i2c_pm_ops = { 713 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(geni_i2c_suspend_noirq, NULL) 711 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(geni_i2c_suspend_noirq, geni_i2c_resume_noirq) 714 712 SET_RUNTIME_PM_OPS(geni_i2c_runtime_suspend, geni_i2c_runtime_resume, 715 713 NULL) 716 714 }; ··· 732 714 static struct platform_driver geni_i2c_driver = { 733 715 .probe = geni_i2c_probe, 734 716 .remove = geni_i2c_remove, 717 + .shutdown = geni_i2c_shutdown, 735 718 .driver = { 736 719 .name = "geni_i2c", 737 720 .pm = &geni_i2c_pm_ops,
+2 -2
drivers/i2c/busses/i2c-tegra-bpmp.c
···
65 65 		*out |= SERIALI2C_RECV_LEN;
66 66 }
67 67 
68 - /**
68 + /*
69 69  * The serialized I2C format is simply the following:
70 70  * [addr little-endian][flags little-endian][len little-endian][data if write]
71 71  * [addr little-endian][flags little-endian][len little-endian][data if write]
···
109 109 	request->xfer.data_size = pos;
110 110 }
111 111 
112 - /**
112 + /*
113 113  * The data in the BPMP -> CPU direction is composed of sequential blocks for
114 114  * those messages that have I2C_M_RD. So, for example, if you have:
115 115  *
+5
drivers/infiniband/core/uverbs_cmd.c
··· 3248 3248 goto err_free_attr; 3249 3249 } 3250 3250 3251 + if (!rdma_is_port_valid(uobj->context->device, cmd.flow_attr.port)) { 3252 + err = -EINVAL; 3253 + goto err_uobj; 3254 + } 3255 + 3251 3256 qp = uobj_get_obj_read(qp, UVERBS_OBJECT_QP, cmd.qp_handle, attrs); 3252 3257 if (!qp) { 3253 3258 err = -EINVAL;
+1 -7
drivers/infiniband/hw/mlx4/main.c
··· 581 581 props->cq_caps.max_cq_moderation_count = MLX4_MAX_CQ_COUNT; 582 582 props->cq_caps.max_cq_moderation_period = MLX4_MAX_CQ_PERIOD; 583 583 584 - if (!mlx4_is_slave(dev->dev)) 585 - err = mlx4_get_internal_clock_params(dev->dev, &clock_params); 586 - 587 584 if (uhw->outlen >= resp.response_length + sizeof(resp.hca_core_clock_offset)) { 588 585 resp.response_length += sizeof(resp.hca_core_clock_offset); 589 - if (!err && !mlx4_is_slave(dev->dev)) { 586 + if (!mlx4_get_internal_clock_params(dev->dev, &clock_params)) { 590 587 resp.comp_mask |= MLX4_IB_QUERY_DEV_RESP_MASK_CORE_CLOCK_OFFSET; 591 588 resp.hca_core_clock_offset = clock_params.offset % PAGE_SIZE; 592 589 } ··· 1698 1701 enum mlx4_net_trans_promisc_mode type[2]; 1699 1702 struct mlx4_dev *dev = (to_mdev(qp->device))->dev; 1700 1703 int is_bonded = mlx4_is_bonded(dev); 1701 - 1702 - if (!rdma_is_port_valid(qp->device, flow_attr->port)) 1703 - return ERR_PTR(-EINVAL); 1704 1704 1705 1705 if (flow_attr->flags & ~IB_FLOW_ATTR_FLAGS_DONT_TRAP) 1706 1706 return ERR_PTR(-EOPNOTSUPP);
+4 -5
drivers/infiniband/hw/mlx5/cq.c
··· 849 849 ib_umem_release(cq->buf.umem); 850 850 } 851 851 852 - static void init_cq_frag_buf(struct mlx5_ib_cq *cq, 853 - struct mlx5_ib_cq_buf *buf) 852 + static void init_cq_frag_buf(struct mlx5_ib_cq_buf *buf) 854 853 { 855 854 int i; 856 855 void *cqe; 857 856 struct mlx5_cqe64 *cqe64; 858 857 859 858 for (i = 0; i < buf->nent; i++) { 860 - cqe = get_cqe(cq, i); 859 + cqe = mlx5_frag_buf_get_wqe(&buf->fbc, i); 861 860 cqe64 = buf->cqe_size == 64 ? cqe : cqe + 64; 862 861 cqe64->op_own = MLX5_CQE_INVALID << 4; 863 862 } ··· 882 883 if (err) 883 884 goto err_db; 884 885 885 - init_cq_frag_buf(cq, &cq->buf); 886 + init_cq_frag_buf(&cq->buf); 886 887 887 888 *inlen = MLX5_ST_SZ_BYTES(create_cq_in) + 888 889 MLX5_FLD_SZ_BYTES(create_cq_in, pas[0]) * ··· 1183 1184 if (err) 1184 1185 goto ex; 1185 1186 1186 - init_cq_frag_buf(cq, cq->resize_buf); 1187 + init_cq_frag_buf(cq->resize_buf); 1187 1188 1188 1189 return 0; 1189 1190
+6 -1
drivers/infiniband/hw/mlx5/doorbell.c
··· 41 41 struct ib_umem *umem; 42 42 unsigned long user_virt; 43 43 int refcnt; 44 + struct mm_struct *mm; 44 45 }; 45 46 46 47 int mlx5_ib_db_map_user(struct mlx5_ib_ucontext *context, ··· 54 53 mutex_lock(&context->db_page_mutex); 55 54 56 55 list_for_each_entry(page, &context->db_page_list, list) 57 - if (page->user_virt == (virt & PAGE_MASK)) 56 + if ((current->mm == page->mm) && 57 + (page->user_virt == (virt & PAGE_MASK))) 58 58 goto found; 59 59 60 60 page = kmalloc(sizeof(*page), GFP_KERNEL); ··· 73 71 kfree(page); 74 72 goto out; 75 73 } 74 + mmgrab(current->mm); 75 + page->mm = current->mm; 76 76 77 77 list_add(&page->list, &context->db_page_list); 78 78 ··· 95 91 96 92 if (!--db->u.user_page->refcnt) { 97 93 list_del(&db->u.user_page->list); 94 + mmdrop(db->u.user_page->mm); 98 95 ib_umem_release(db->u.user_page->umem); 99 96 kfree(db->u.user_page); 100 97 }
+8 -3
drivers/infiniband/hw/mlx5/fs.c
··· 1194 1194 goto free_ucmd; 1195 1195 } 1196 1196 1197 - if (flow_attr->port > dev->num_ports || 1198 - (flow_attr->flags & 1199 - ~(IB_FLOW_ATTR_FLAGS_DONT_TRAP | IB_FLOW_ATTR_FLAGS_EGRESS))) { 1197 + if (flow_attr->flags & 1198 + ~(IB_FLOW_ATTR_FLAGS_DONT_TRAP | IB_FLOW_ATTR_FLAGS_EGRESS)) { 1200 1199 err = -EINVAL; 1201 1200 goto free_ucmd; 1202 1201 } ··· 2132 2133 err = mlx5_ib_matcher_ns(attrs, obj); 2133 2134 if (err) 2134 2135 goto end; 2136 + 2137 + if (obj->ns_type == MLX5_FLOW_NAMESPACE_FDB && 2138 + mlx5_eswitch_mode(dev->mdev) != MLX5_ESWITCH_OFFLOADS) { 2139 + err = -EINVAL; 2140 + goto end; 2141 + } 2135 2142 2136 2143 uobj->object = obj; 2137 2144 obj->mdev = dev->mdev;
+2 -2
drivers/infiniband/hw/mlx5/mr.c
···
1940 1940 		mlx5r_deref_wait_odp_mkey(&mr->mmkey);
1941 1941 
1942 1942 	if (ibmr->type == IB_MR_TYPE_INTEGRITY) {
1943 - 		xa_cmpxchg(&dev->sig_mrs, mlx5_base_mkey(mr->mmkey.key), ibmr,
1944 - 			   NULL, GFP_KERNEL);
1943 + 		xa_cmpxchg(&dev->sig_mrs, mlx5_base_mkey(mr->mmkey.key),
1944 + 			   mr->sig, NULL, GFP_KERNEL);
1945 1945 
1946 1946 		if (mr->mtt_mr) {
1947 1947 			rc = mlx5_ib_dereg_mr(&mr->mtt_mr->ibmr, NULL);
+1
drivers/infiniband/ulp/ipoib/ipoib_netlink.c
···
163 163 
164 164 static struct rtnl_link_ops ipoib_link_ops __read_mostly = {
165 165 	.kind		= "ipoib",
166 + 	.netns_refund	= true,
166 167 	.maxtype	= IFLA_IPOIB_MAX,
167 168 	.policy		= ipoib_policy,
168 169 	.priv_size	= sizeof(struct ipoib_dev_priv),
-1
drivers/md/bcache/bcache.h
···
364 364 
365 365 	/* The rest of this all shows up in sysfs */
366 366 	unsigned int		sequential_cutoff;
367 - 	unsigned int		readahead;
368 367 
369 368 	unsigned int		io_disable:1;
370 369 	unsigned int		verify:1;
+7 -13
drivers/md/bcache/request.c
··· 880 880 struct bio *bio, unsigned int sectors) 881 881 { 882 882 int ret = MAP_CONTINUE; 883 - unsigned int reada = 0; 884 883 struct cached_dev *dc = container_of(s->d, struct cached_dev, disk); 885 884 struct bio *miss, *cache_bio; 885 + unsigned int size_limit; 886 886 887 887 s->cache_missed = 1; 888 888 ··· 892 892 goto out_submit; 893 893 } 894 894 895 - if (!(bio->bi_opf & REQ_RAHEAD) && 896 - !(bio->bi_opf & (REQ_META|REQ_PRIO)) && 897 - s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA) 898 - reada = min_t(sector_t, dc->readahead >> 9, 899 - get_capacity(bio->bi_bdev->bd_disk) - 900 - bio_end_sector(bio)); 901 - 902 - s->insert_bio_sectors = min(sectors, bio_sectors(bio) + reada); 895 + /* Limitation for valid replace key size and cache_bio bvecs number */ 896 + size_limit = min_t(unsigned int, BIO_MAX_VECS * PAGE_SECTORS, 897 + (1 << KEY_SIZE_BITS) - 1); 898 + s->insert_bio_sectors = min3(size_limit, sectors, bio_sectors(bio)); 903 899 904 900 s->iop.replace_key = KEY(s->iop.inode, 905 901 bio->bi_iter.bi_sector + s->insert_bio_sectors, ··· 907 911 908 912 s->iop.replace = true; 909 913 910 - miss = bio_next_split(bio, sectors, GFP_NOIO, &s->d->bio_split); 914 + miss = bio_next_split(bio, s->insert_bio_sectors, GFP_NOIO, 915 + &s->d->bio_split); 911 916 912 917 /* btree_search_recurse()'s btree iterator is no good anymore */ 913 918 ret = miss == bio ? MAP_DONE : -EINTR; ··· 929 932 bch_bio_map(cache_bio, NULL); 930 933 if (bch_bio_alloc_pages(cache_bio, __GFP_NOWARN|GFP_NOIO)) 931 934 goto out_put; 932 - 933 - if (reada) 934 - bch_mark_cache_readahead(s->iop.c, s->d); 935 935 936 936 s->cache_miss = miss; 937 937 s->iop.bio = cache_bio;
-14
drivers/md/bcache/stats.c
··· 46 46 read_attribute(cache_bypass_hits); 47 47 read_attribute(cache_bypass_misses); 48 48 read_attribute(cache_hit_ratio); 49 - read_attribute(cache_readaheads); 50 49 read_attribute(cache_miss_collisions); 51 50 read_attribute(bypassed); 52 51 ··· 63 64 DIV_SAFE(var(cache_hits) * 100, 64 65 var(cache_hits) + var(cache_misses))); 65 66 66 - var_print(cache_readaheads); 67 67 var_print(cache_miss_collisions); 68 68 sysfs_hprint(bypassed, var(sectors_bypassed) << 9); 69 69 #undef var ··· 84 86 &sysfs_cache_bypass_hits, 85 87 &sysfs_cache_bypass_misses, 86 88 &sysfs_cache_hit_ratio, 87 - &sysfs_cache_readaheads, 88 89 &sysfs_cache_miss_collisions, 89 90 &sysfs_bypassed, 90 91 NULL ··· 110 113 acc->total.cache_misses = 0; 111 114 acc->total.cache_bypass_hits = 0; 112 115 acc->total.cache_bypass_misses = 0; 113 - acc->total.cache_readaheads = 0; 114 116 acc->total.cache_miss_collisions = 0; 115 117 acc->total.sectors_bypassed = 0; 116 118 } ··· 141 145 scale_stat(&stats->cache_misses); 142 146 scale_stat(&stats->cache_bypass_hits); 143 147 scale_stat(&stats->cache_bypass_misses); 144 - scale_stat(&stats->cache_readaheads); 145 148 scale_stat(&stats->cache_miss_collisions); 146 149 scale_stat(&stats->sectors_bypassed); 147 150 } ··· 163 168 move_stat(cache_misses); 164 169 move_stat(cache_bypass_hits); 165 170 move_stat(cache_bypass_misses); 166 - move_stat(cache_readaheads); 167 171 move_stat(cache_miss_collisions); 168 172 move_stat(sectors_bypassed); 169 173 ··· 201 207 202 208 mark_cache_stats(&dc->accounting.collector, hit, bypass); 203 209 mark_cache_stats(&c->accounting.collector, hit, bypass); 204 - } 205 - 206 - void bch_mark_cache_readahead(struct cache_set *c, struct bcache_device *d) 207 - { 208 - struct cached_dev *dc = container_of(d, struct cached_dev, disk); 209 - 210 - atomic_inc(&dc->accounting.collector.cache_readaheads); 211 - atomic_inc(&c->accounting.collector.cache_readaheads); 212 210 } 213 211 214 212 void bch_mark_cache_miss_collision(struct cache_set *c, struct bcache_device *d)
-1
drivers/md/bcache/stats.h
··· 7 7 atomic_t cache_misses; 8 8 atomic_t cache_bypass_hits; 9 9 atomic_t cache_bypass_misses; 10 - atomic_t cache_readaheads; 11 10 atomic_t cache_miss_collisions; 12 11 atomic_t sectors_bypassed; 13 12 };
-4
drivers/md/bcache/sysfs.c
··· 137 137 rw_attribute(discard); 138 138 rw_attribute(running); 139 139 rw_attribute(label); 140 - rw_attribute(readahead); 141 140 rw_attribute(errors); 142 141 rw_attribute(io_error_limit); 143 142 rw_attribute(io_error_halflife); ··· 259 260 var_printf(partial_stripes_expensive, "%u"); 260 261 261 262 var_hprint(sequential_cutoff); 262 - var_hprint(readahead); 263 263 264 264 sysfs_print(running, atomic_read(&dc->running)); 265 265 sysfs_print(state, states[BDEV_STATE(&dc->sb)]); ··· 363 365 sysfs_strtoul_clamp(sequential_cutoff, 364 366 dc->sequential_cutoff, 365 367 0, UINT_MAX); 366 - d_strtoi_h(readahead); 367 368 368 369 if (attr == &sysfs_clear_stats) 369 370 bch_cache_accounting_clear(&dc->accounting); ··· 535 538 &sysfs_running, 536 539 &sysfs_state, 537 540 &sysfs_label, 538 - &sysfs_readahead, 539 541 #ifdef CONFIG_BCACHE_DEBUG 540 542 &sysfs_verify, 541 543 &sysfs_bypass_torture_test,
+1
drivers/misc/cardreader/rtl8411.c
··· 468 468 pcr->sd30_drive_sel_1v8 = DRIVER_TYPE_B; 469 469 pcr->sd30_drive_sel_3v3 = DRIVER_TYPE_D; 470 470 pcr->aspm_en = ASPM_L1_EN; 471 + pcr->aspm_mode = ASPM_MODE_CFG; 471 472 pcr->tx_initial_phase = SET_CLOCK_PHASE(23, 7, 14); 472 473 pcr->rx_initial_phase = SET_CLOCK_PHASE(4, 3, 10); 473 474 pcr->ic_version = rtl8411_get_ic_version(pcr);
+1
drivers/misc/cardreader/rts5209.c
··· 255 255 pcr->sd30_drive_sel_1v8 = DRIVER_TYPE_B; 256 256 pcr->sd30_drive_sel_3v3 = DRIVER_TYPE_D; 257 257 pcr->aspm_en = ASPM_L1_EN; 258 + pcr->aspm_mode = ASPM_MODE_CFG; 258 259 pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 27, 16); 259 260 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 260 261
+2
drivers/misc/cardreader/rts5227.c
··· 358 358 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 359 359 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 360 360 pcr->aspm_en = ASPM_L1_EN; 361 + pcr->aspm_mode = ASPM_MODE_CFG; 361 362 pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 27, 15); 362 363 pcr->rx_initial_phase = SET_CLOCK_PHASE(30, 7, 7); 363 364 ··· 484 483 485 484 rts5227_init_params(pcr); 486 485 pcr->ops = &rts522a_pcr_ops; 486 + pcr->aspm_mode = ASPM_MODE_REG; 487 487 pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 20, 11); 488 488 pcr->reg_pm_ctrl3 = RTS522A_PM_CTRL3; 489 489
+1
drivers/misc/cardreader/rts5228.c
··· 718 718 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 719 719 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 720 720 pcr->aspm_en = ASPM_L1_EN; 721 + pcr->aspm_mode = ASPM_MODE_REG; 721 722 pcr->tx_initial_phase = SET_CLOCK_PHASE(28, 27, 11); 722 723 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 723 724
+1
drivers/misc/cardreader/rts5229.c
··· 246 246 pcr->sd30_drive_sel_1v8 = DRIVER_TYPE_B; 247 247 pcr->sd30_drive_sel_3v3 = DRIVER_TYPE_D; 248 248 pcr->aspm_en = ASPM_L1_EN; 249 + pcr->aspm_mode = ASPM_MODE_CFG; 249 250 pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 27, 15); 250 251 pcr->rx_initial_phase = SET_CLOCK_PHASE(30, 6, 6); 251 252
+3
drivers/misc/cardreader/rts5249.c
··· 566 566 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 567 567 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 568 568 pcr->aspm_en = ASPM_L1_EN; 569 + pcr->aspm_mode = ASPM_MODE_CFG; 569 570 pcr->tx_initial_phase = SET_CLOCK_PHASE(1, 29, 16); 570 571 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 571 572 ··· 730 729 void rts524a_init_params(struct rtsx_pcr *pcr) 731 730 { 732 731 rts5249_init_params(pcr); 732 + pcr->aspm_mode = ASPM_MODE_REG; 733 733 pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 29, 11); 734 734 pcr->option.ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF; 735 735 pcr->option.ltr_l1off_snooze_sspwrgate = ··· 847 845 void rts525a_init_params(struct rtsx_pcr *pcr) 848 846 { 849 847 rts5249_init_params(pcr); 848 + pcr->aspm_mode = ASPM_MODE_REG; 850 849 pcr->tx_initial_phase = SET_CLOCK_PHASE(25, 29, 11); 851 850 pcr->option.ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF; 852 851 pcr->option.ltr_l1off_snooze_sspwrgate =
+1
drivers/misc/cardreader/rts5260.c
··· 628 628 pcr->sd30_drive_sel_1v8 = CFG_DRIVER_TYPE_B; 629 629 pcr->sd30_drive_sel_3v3 = CFG_DRIVER_TYPE_B; 630 630 pcr->aspm_en = ASPM_L1_EN; 631 + pcr->aspm_mode = ASPM_MODE_REG; 631 632 pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 29, 11); 632 633 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 633 634
+1
drivers/misc/cardreader/rts5261.c
··· 783 783 pcr->sd30_drive_sel_1v8 = 0x00; 784 784 pcr->sd30_drive_sel_3v3 = 0x00; 785 785 pcr->aspm_en = ASPM_L1_EN; 786 + pcr->aspm_mode = ASPM_MODE_REG; 786 787 pcr->tx_initial_phase = SET_CLOCK_PHASE(27, 27, 11); 787 788 pcr->rx_initial_phase = SET_CLOCK_PHASE(24, 6, 5); 788 789
+31 -13
drivers/misc/cardreader/rtsx_pcr.c
··· 85 85 if (pcr->aspm_enabled == enable) 86 86 return; 87 87 88 - if (pcr->aspm_en & 0x02) 89 - rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, FORCE_ASPM_CTL0 | 90 - FORCE_ASPM_CTL1, enable ? 0 : FORCE_ASPM_CTL0 | FORCE_ASPM_CTL1); 91 - else 92 - rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, FORCE_ASPM_CTL0 | 93 - FORCE_ASPM_CTL1, FORCE_ASPM_CTL0 | FORCE_ASPM_CTL1); 88 + if (pcr->aspm_mode == ASPM_MODE_CFG) { 89 + pcie_capability_clear_and_set_word(pcr->pci, PCI_EXP_LNKCTL, 90 + PCI_EXP_LNKCTL_ASPMC, 91 + enable ? pcr->aspm_en : 0); 92 + } else if (pcr->aspm_mode == ASPM_MODE_REG) { 93 + if (pcr->aspm_en & 0x02) 94 + rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, FORCE_ASPM_CTL0 | 95 + FORCE_ASPM_CTL1, enable ? 0 : FORCE_ASPM_CTL0 | FORCE_ASPM_CTL1); 96 + else 97 + rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, FORCE_ASPM_CTL0 | 98 + FORCE_ASPM_CTL1, FORCE_ASPM_CTL0 | FORCE_ASPM_CTL1); 99 + } 94 100 95 101 if (!enable && (pcr->aspm_en & 0x02)) 96 102 mdelay(10); ··· 1400 1394 return err; 1401 1395 } 1402 1396 1403 - rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0x30, 0x30); 1397 + if (pcr->aspm_mode == ASPM_MODE_REG) 1398 + rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0x30, 0x30); 1404 1399 1405 1400 /* No CD interrupt if probing driver with card inserted. 1406 1401 * So we need to initialize pcr->card_exist here. ··· 1417 1410 static int rtsx_pci_init_chip(struct rtsx_pcr *pcr) 1418 1411 { 1419 1412 int err; 1413 + u16 cfg_val; 1414 + u8 val; 1420 1415 1421 1416 spin_lock_init(&pcr->lock); 1422 1417 mutex_init(&pcr->pcr_mutex); ··· 1486 1477 if (!pcr->slots) 1487 1478 return -ENOMEM; 1488 1479 1480 + if (pcr->aspm_mode == ASPM_MODE_CFG) { 1481 + pcie_capability_read_word(pcr->pci, PCI_EXP_LNKCTL, &cfg_val); 1482 + if (cfg_val & PCI_EXP_LNKCTL_ASPM_L1) 1483 + pcr->aspm_enabled = true; 1484 + else 1485 + pcr->aspm_enabled = false; 1486 + 1487 + } else if (pcr->aspm_mode == ASPM_MODE_REG) { 1488 + rtsx_pci_read_register(pcr, ASPM_FORCE_CTL, &val); 1489 + if (val & FORCE_ASPM_CTL0 && val & FORCE_ASPM_CTL1) 1490 + pcr->aspm_enabled = false; 1491 + else 1492 + pcr->aspm_enabled = true; 1493 + } 1494 + 1489 1495 if (pcr->ops->fetch_vendor_settings) 1490 1496 pcr->ops->fetch_vendor_settings(pcr); ··· 1530 1506 struct pcr_handle *handle; 1531 1507 u32 base, len; 1532 1508 int ret, i, bar = 0; 1533 - u8 val; 1534 1509 1535 1510 dev_dbg(&(pcidev->dev), ": Realtek PCI-E Card Reader found at %s [%04x:%04x] (rev %x)\n", ··· 1595 1572 pcr->host_cmds_addr = pcr->rtsx_resv_buf_addr; 1596 1573 pcr->host_sg_tbl_ptr = pcr->rtsx_resv_buf + HOST_CMDS_BUF_LEN; 1597 1574 pcr->host_sg_tbl_addr = pcr->rtsx_resv_buf_addr + HOST_CMDS_BUF_LEN; 1598 - rtsx_pci_read_register(pcr, ASPM_FORCE_CTL, &val); 1599 - if (val & FORCE_ASPM_CTL0 && val & FORCE_ASPM_CTL1) 1600 - pcr->aspm_enabled = false; 1601 - else 1602 - pcr->aspm_enabled = true; 1603 1575 pcr->card_inserted = 0; 1604 1576 pcr->card_removed = 0; 1605 1577 INIT_DELAYED_WORK(&pcr->carddet_work, rtsx_pci_card_detect);
+7 -2
drivers/mmc/host/renesas_sdhi_core.c
··· 692 692 693 693 /* Issue CMD19 twice for each tap */ 694 694 for (i = 0; i < 2 * priv->tap_num; i++) { 695 + int cmd_error; 696 + 695 697 /* Set sampling clock position */ 696 698 sd_scc_write32(host, priv, SH_MOBILE_SDHI_SCC_TAPSET, i % priv->tap_num); 697 699 698 - if (mmc_send_tuning(mmc, opcode, NULL) == 0) 700 + if (mmc_send_tuning(mmc, opcode, &cmd_error) == 0) 699 701 set_bit(i, priv->taps); 700 702 701 703 if (sd_scc_read32(host, priv, SH_MOBILE_SDHI_SCC_SMPCMP) == 0) 702 704 set_bit(i, priv->smpcmp); 705 + 706 + if (cmd_error) 707 + mmc_abort_tuning(mmc, opcode); 703 708 } 704 709 705 710 ret = renesas_sdhi_select_tuning(host); ··· 944 939 { .soc_id = "r8a7795", .revision = "ES3.*", .data = &sdhi_quirks_bad_taps2367 }, 945 940 { .soc_id = "r8a7796", .revision = "ES1.[012]", .data = &sdhi_quirks_4tap_nohs400 }, 946 941 { .soc_id = "r8a7796", .revision = "ES1.*", .data = &sdhi_quirks_r8a7796_es13 }, 947 - { .soc_id = "r8a7796", .revision = "ES3.*", .data = &sdhi_quirks_bad_taps1357 }, 942 + { .soc_id = "r8a77961", .data = &sdhi_quirks_bad_taps1357 }, 948 943 { .soc_id = "r8a77965", .data = &sdhi_quirks_r8a77965 }, 949 944 { .soc_id = "r8a77980", .data = &sdhi_quirks_nohs400 }, 950 945 { .soc_id = "r8a77990", .data = &sdhi_quirks_r8a77990 },
-2
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 2177 2177 bool persistent, u8 *smt_idx); 2178 2178 int cxgb4_get_msix_idx_from_bmap(struct adapter *adap); 2179 2179 void cxgb4_free_msix_idx_in_bmap(struct adapter *adap, u32 msix_idx); 2180 - int cxgb_open(struct net_device *dev); 2181 - int cxgb_close(struct net_device *dev); 2182 2180 void cxgb4_enable_rx(struct adapter *adap, struct sge_rspq *q); 2183 2181 void cxgb4_quiesce_rx(struct sge_rspq *q); 2184 2182 int cxgb4_port_mirror_alloc(struct net_device *dev);
+2 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 2834 2834 /* 2835 2835 * net_device operations 2836 2836 */ 2837 - int cxgb_open(struct net_device *dev) 2837 + static int cxgb_open(struct net_device *dev) 2838 2838 { 2839 2839 struct port_info *pi = netdev_priv(dev); 2840 2840 struct adapter *adapter = pi->adapter; ··· 2882 2882 return err; 2883 2883 } 2884 2884 2885 - int cxgb_close(struct net_device *dev) 2885 + static int cxgb_close(struct net_device *dev) 2886 2886 { 2887 2887 struct port_info *pi = netdev_priv(dev); 2888 2888 struct adapter *adapter = pi->adapter;
+5 -9
drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
··· 997 997 if (!ch_flower) 998 998 return -ENOENT; 999 999 1000 + rhashtable_remove_fast(&adap->flower_tbl, &ch_flower->node, 1001 + adap->flower_ht_params); 1002 + 1000 1003 ret = cxgb4_flow_rule_destroy(dev, ch_flower->fs.tc_prio, 1001 1004 &ch_flower->fs, ch_flower->filter_id); 1002 1005 if (ret) 1003 - goto err; 1006 + netdev_err(dev, "Flow rule destroy failed for tid: %u, ret: %d", 1007 + ch_flower->filter_id, ret); 1004 1008 1005 - ret = rhashtable_remove_fast(&adap->flower_tbl, &ch_flower->node, 1006 - adap->flower_ht_params); 1007 - if (ret) { 1008 - netdev_err(dev, "Flow remove from rhashtable failed"); 1009 - goto err; 1010 - } 1011 1009 kfree_rcu(ch_flower, rcu); 1012 - 1013 - err: 1014 1010 return ret; 1015 1011 } 1016 1012
+6 -3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
··· 589 589 * down before configuring tc params. 590 590 */ 591 591 if (netif_running(dev)) { 592 - cxgb_close(dev); 592 + netif_tx_stop_all_queues(dev); 593 + netif_carrier_off(dev); 593 594 needs_bring_up = true; 594 595 } 595 596 ··· 616 615 } 617 616 618 617 out: 619 - if (needs_bring_up) 620 - cxgb_open(dev); 618 + if (needs_bring_up) { 619 + netif_tx_start_all_queues(dev); 620 + netif_carrier_on(dev); 621 + } 621 622 622 623 mutex_unlock(&adap->tc_mqprio->mqprio_mutex); 623 624 return ret;
+6
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2556 2556 if (!eosw_txq) 2557 2557 return -ENOMEM; 2558 2558 2559 + if (!(adap->flags & CXGB4_FW_OK)) { 2560 + /* Don't stall caller when access to FW is lost */ 2561 + complete(&eosw_txq->completion); 2562 + return -EIO; 2563 + } 2564 + 2559 2565 skb = alloc_skb(len, GFP_KERNEL); 2560 2566 if (!skb) 2561 2567 return -ENOMEM;
+6 -1
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 2313 2313 case XDP_TX: 2314 2314 xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->queue_index]; 2315 2315 result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring); 2316 + if (result == I40E_XDP_CONSUMED) 2317 + goto out_failure; 2316 2318 break; 2317 2319 case XDP_REDIRECT: 2318 2320 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 2319 - result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED; 2321 + if (err) 2322 + goto out_failure; 2323 + result = I40E_XDP_REDIR; 2320 2324 break; 2321 2325 default: 2322 2326 bpf_warn_invalid_xdp_action(act); 2323 2327 fallthrough; 2324 2328 case XDP_ABORTED: 2329 + out_failure: 2325 2330 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 2326 2331 fallthrough; /* handle aborts by dropping packet */ 2327 2332 case XDP_DROP:
+6 -2
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 162 162 163 163 if (likely(act == XDP_REDIRECT)) { 164 164 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 165 - result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED; 165 + if (err) 166 + goto out_failure; 166 167 rcu_read_unlock(); 167 - return result; 168 + return I40E_XDP_REDIR; 168 169 } 169 170 170 171 switch (act) { ··· 174 173 case XDP_TX: 175 174 xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->queue_index]; 176 175 result = i40e_xmit_xdp_tx_ring(xdp, xdp_ring); 176 + if (result == I40E_XDP_CONSUMED) 177 + goto out_failure; 177 178 break; 178 179 default: 179 180 bpf_warn_invalid_xdp_action(act); 180 181 fallthrough; 181 182 case XDP_ABORTED: 183 + out_failure: 182 184 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 183 185 fallthrough; /* handle aborts by dropping packet */ 184 186 case XDP_DROP:
+5 -3
drivers/net/ethernet/intel/ice/ice.h
··· 335 335 struct ice_tc_cfg tc_cfg; 336 336 struct bpf_prog *xdp_prog; 337 337 struct ice_ring **xdp_rings; /* XDP ring array */ 338 + unsigned long *af_xdp_zc_qps; /* tracks AF_XDP ZC enabled qps */ 338 339 u16 num_xdp_txq; /* Used XDP queues */ 339 340 u8 xdp_mapping_mode; /* ICE_MAP_MODE_[CONTIG|SCATTER] */ 340 341 ··· 548 547 */ 549 548 static inline struct xsk_buff_pool *ice_xsk_pool(struct ice_ring *ring) 550 549 { 550 + struct ice_vsi *vsi = ring->vsi; 551 551 u16 qid = ring->q_index; 552 552 553 553 if (ice_ring_is_xdp(ring)) 554 - qid -= ring->vsi->num_xdp_txq; 554 + qid -= vsi->num_xdp_txq; 555 555 556 - if (!ice_is_xdp_ena_vsi(ring->vsi)) 556 + if (!ice_is_xdp_ena_vsi(vsi) || !test_bit(qid, vsi->af_xdp_zc_qps)) 557 557 return NULL; 558 558 559 - return xsk_get_pool_from_qid(ring->vsi->netdev, qid); 559 + return xsk_get_pool_from_qid(vsi->netdev, qid); 560 560 } 561 561 562 562 /**
+6 -45
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 1773 1773 ice_ethtool_advertise_link_mode(ICE_AQ_LINK_SPEED_100GB, 1774 1774 100000baseKR4_Full); 1775 1775 } 1776 - 1777 - /* Autoneg PHY types */ 1778 - if (phy_types_low & ICE_PHY_TYPE_LOW_100BASE_TX || 1779 - phy_types_low & ICE_PHY_TYPE_LOW_1000BASE_T || 1780 - phy_types_low & ICE_PHY_TYPE_LOW_1000BASE_KX || 1781 - phy_types_low & ICE_PHY_TYPE_LOW_2500BASE_T || 1782 - phy_types_low & ICE_PHY_TYPE_LOW_2500BASE_KX || 1783 - phy_types_low & ICE_PHY_TYPE_LOW_5GBASE_T || 1784 - phy_types_low & ICE_PHY_TYPE_LOW_5GBASE_KR || 1785 - phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_T || 1786 - phy_types_low & ICE_PHY_TYPE_LOW_10GBASE_KR_CR1 || 1787 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_T || 1788 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_CR || 1789 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_CR_S || 1790 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_CR1 || 1791 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR || 1792 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR_S || 1793 - phy_types_low & ICE_PHY_TYPE_LOW_25GBASE_KR1 || 1794 - phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_CR4 || 1795 - phy_types_low & ICE_PHY_TYPE_LOW_40GBASE_KR4) { 1796 - ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg); 1798 - ethtool_link_ksettings_add_link_mode(ks, advertising, Autoneg); 1800 - } 1801 - if (phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_CR2 || 1802 - phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_KR2 || 1803 - phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_CP || 1804 - phy_types_low & ICE_PHY_TYPE_LOW_50GBASE_KR_PAM4) { 1805 - ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg); 1807 - ethtool_link_ksettings_add_link_mode(ks, advertising, Autoneg); 1809 - } 1810 - if (phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_CR4 || 1811 - phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_KR4 || 1812 - phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_KR_PAM4 || 1813 - phy_types_low & ICE_PHY_TYPE_LOW_100GBASE_CP2) { 1814 - ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg); 1816 - ethtool_link_ksettings_add_link_mode(ks, advertising, 1817 - Autoneg); 1818 - } 1819 1776 } 1820 1777 1821 1778 #define TEST_SET_BITS_TIMEOUT 50 ··· 1929 1972 ks->base.port = PORT_TP; 1930 1973 break; 1931 1974 case ICE_MEDIA_BACKPLANE: 1932 - ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg); 1933 1975 ethtool_link_ksettings_add_link_mode(ks, supported, Backplane); 1934 - ethtool_link_ksettings_add_link_mode(ks, advertising, Autoneg); 1935 1976 ethtool_link_ksettings_add_link_mode(ks, advertising, 1936 1977 Backplane); 1937 1978 ks->base.port = PORT_NONE; ··· 2003 2048 ethtool_link_ksettings_add_link_mode(ks, supported, FEC_BASER); 2004 2049 if (caps->link_fec_options & ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN) 2005 2050 ethtool_link_ksettings_add_link_mode(ks, supported, FEC_RS); 2051 + 2052 + /* Set supported and advertised autoneg */ 2053 + if (ice_is_phy_caps_an_enabled(caps)) { 2054 + ethtool_link_ksettings_add_link_mode(ks, supported, Autoneg); 2055 + ethtool_link_ksettings_add_link_mode(ks, advertising, Autoneg); 2056 + } 2006 2057 2007 2058 done: 2008 2059 kfree(caps);
+1
drivers/net/ethernet/intel/ice/ice_hw_autogen.h
··· 31 31 #define PF_FW_ATQLEN_ATQOVFL_M BIT(29) 32 32 #define PF_FW_ATQLEN_ATQCRIT_M BIT(30) 33 33 #define VF_MBX_ARQLEN(_VF) (0x0022BC00 + ((_VF) * 4)) 34 + #define VF_MBX_ATQLEN(_VF) (0x0022A800 + ((_VF) * 4)) 34 35 #define PF_FW_ATQLEN_ATQENABLE_M BIT(31) 35 36 #define PF_FW_ATQT 0x00080400 36 37 #define PF_MBX_ARQBAH 0x0022E400
+12
drivers/net/ethernet/intel/ice/ice_lib.c
··· 105 105 if (!vsi->q_vectors) 106 106 goto err_vectors; 107 107 108 + vsi->af_xdp_zc_qps = bitmap_zalloc(max_t(int, vsi->alloc_txq, vsi->alloc_rxq), GFP_KERNEL); 109 + if (!vsi->af_xdp_zc_qps) 110 + goto err_zc_qps; 111 + 108 112 return 0; 109 113 114 + err_zc_qps: 115 + devm_kfree(dev, vsi->q_vectors); 110 116 err_vectors: 111 117 devm_kfree(dev, vsi->rxq_map); 112 118 err_rxq_map: ··· 200 194 break; 201 195 case ICE_VSI_VF: 202 196 vf = &pf->vf[vsi->vf_id]; 197 + if (vf->num_req_qs) 198 + vf->num_vf_qs = vf->num_req_qs; 203 199 vsi->alloc_txq = vf->num_vf_qs; 204 200 vsi->alloc_rxq = vf->num_vf_qs; 205 201 /* pf->num_msix_per_vf includes (VF miscellaneous vector + ··· 296 288 297 289 dev = ice_pf_to_dev(pf); 298 290 291 + if (vsi->af_xdp_zc_qps) { 292 + bitmap_free(vsi->af_xdp_zc_qps); 293 + vsi->af_xdp_zc_qps = NULL; 294 + } 299 295 /* free the ring and vector containers */ 300 296 if (vsi->q_vectors) { 301 297 devm_kfree(dev, vsi->q_vectors);
+13 -4
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 523 523 struct bpf_prog *xdp_prog) 524 524 { 525 525 struct ice_ring *xdp_ring; 526 - int err; 526 + int err, result; 527 527 u32 act; 528 528 529 529 act = bpf_prog_run_xdp(xdp_prog, xdp); ··· 532 532 return ICE_XDP_PASS; 533 533 case XDP_TX: 534 534 xdp_ring = rx_ring->vsi->xdp_rings[smp_processor_id()]; 535 - return ice_xmit_xdp_buff(xdp, xdp_ring); 535 + result = ice_xmit_xdp_buff(xdp, xdp_ring); 536 + if (result == ICE_XDP_CONSUMED) 537 + goto out_failure; 538 + return result; 536 539 case XDP_REDIRECT: 537 540 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 538 - return !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED; 541 + if (err) 542 + goto out_failure; 543 + return ICE_XDP_REDIR; 539 544 default: 540 545 bpf_warn_invalid_xdp_action(act); 541 546 fallthrough; 542 547 case XDP_ABORTED: 548 + out_failure: 543 549 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 544 550 fallthrough; 545 551 case XDP_DROP: ··· 2149 2143 struct ice_tx_offload_params offload = { 0 }; 2150 2144 struct ice_vsi *vsi = tx_ring->vsi; 2151 2145 struct ice_tx_buf *first; 2146 + struct ethhdr *eth; 2152 2147 unsigned int count; 2153 2148 int tso, csum; 2154 2149 ··· 2196 2189 goto out_drop; 2197 2190 2198 2191 /* allow CONTROL frames egress from main VSI if FW LLDP disabled */ 2199 - if (unlikely(skb->priority == TC_PRIO_CONTROL && 2192 + eth = (struct ethhdr *)skb_mac_header(skb); 2193 + if (unlikely((skb->priority == TC_PRIO_CONTROL || 2194 + eth->h_proto == htons(ETH_P_LLDP)) && 2200 2195 vsi->type == ICE_VSI_PF && 2201 2196 vsi->port_info->qos_cfg.is_sw_lldp)) 2202 2197 offload.cd_qw1 |= (u64)(ICE_TX_DESC_DTYPE_CTX |
+13 -6
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
··· 713 713 */ 714 714 clear_bit(ICE_VF_STATE_INIT, vf->vf_states); 715 715 716 - /* VF_MBX_ARQLEN is cleared by PFR, so the driver needs to clear it 717 - * in the case of VFR. If this is done for PFR, it can mess up VF 718 - * resets because the VF driver may already have started cleanup 719 - * by the time we get here. 716 + /* VF_MBX_ARQLEN and VF_MBX_ATQLEN are cleared by PFR, so the driver 717 + * needs to clear them in the case of VFR/VFLR. If this is done for 718 + * PFR, it can mess up VF resets because the VF driver may already 719 + * have started cleanup by the time we get here. 720 720 */ 721 - if (!is_pfr) 721 + if (!is_pfr) { 722 722 wr32(hw, VF_MBX_ARQLEN(vf->vf_id), 0); 723 + wr32(hw, VF_MBX_ATQLEN(vf->vf_id), 0); 724 + } 723 725 724 726 /* In the case of a VFLR, the HW has already reset the VF and we 725 727 * just need to clean up, so don't hit the VFRTRIG register. ··· 1700 1698 ice_vf_ctrl_vsi_release(vf); 1701 1699 1702 1700 ice_vf_pre_vsi_rebuild(vf); 1703 - ice_vf_rebuild_vsi_with_release(vf); 1701 + 1702 + if (ice_vf_rebuild_vsi_with_release(vf)) { 1703 + dev_err(dev, "Failed to release and setup the VF%u's VSI\n", vf->vf_id); 1704 + return false; 1705 + } 1706 + 1704 1707 ice_vf_post_vsi_rebuild(vf); 1705 1708 1706 1709 /* if the VF has been reset allow it to come up again */
+9 -2
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 270 270 if (!pool) 271 271 return -EINVAL; 272 272 273 + clear_bit(qid, vsi->af_xdp_zc_qps); 273 274 xsk_pool_dma_unmap(pool, ICE_RX_DMA_ATTR); 274 275 275 276 return 0; ··· 300 299 ICE_RX_DMA_ATTR); 301 300 if (err) 302 301 return err; 302 + 303 + set_bit(qid, vsi->af_xdp_zc_qps); 303 304 304 305 return 0; 305 306 } ··· 476 473 477 474 if (likely(act == XDP_REDIRECT)) { 478 475 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 479 - result = !err ? ICE_XDP_REDIR : ICE_XDP_CONSUMED; 476 + if (err) 477 + goto out_failure; 480 478 rcu_read_unlock(); 481 - return result; 479 + return ICE_XDP_REDIR; 482 480 } 483 481 484 482 switch (act) { ··· 488 484 case XDP_TX: 489 485 xdp_ring = rx_ring->vsi->xdp_rings[rx_ring->q_index]; 490 486 result = ice_xmit_xdp_buff(xdp, xdp_ring); 487 + if (result == ICE_XDP_CONSUMED) 488 + goto out_failure; 491 489 break; 492 490 default: 493 491 bpf_warn_invalid_xdp_action(act); 494 492 fallthrough; 495 493 case XDP_ABORTED: 494 + out_failure: 496 495 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 497 496 fallthrough; 498 497 case XDP_DROP:
+1 -1
drivers/net/ethernet/intel/igb/igb.h
··· 749 749 void igb_ptp_tx_hang(struct igb_adapter *adapter); 750 750 void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, struct sk_buff *skb); 751 751 int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va, 752 - struct sk_buff *skb); 752 + ktime_t *timestamp); 753 753 int igb_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr); 754 754 int igb_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr); 755 755 void igb_set_flag_queue_pairs(struct igb_adapter *, const u32);
+32 -23
drivers/net/ethernet/intel/igb/igb_main.c
··· 8280 8280 static struct sk_buff *igb_construct_skb(struct igb_ring *rx_ring, 8281 8281 struct igb_rx_buffer *rx_buffer, 8282 8282 struct xdp_buff *xdp, 8283 - union e1000_adv_rx_desc *rx_desc) 8283 + ktime_t timestamp) 8284 8284 { 8285 8285 #if (PAGE_SIZE < 8192) 8286 8286 unsigned int truesize = igb_rx_pg_size(rx_ring) / 2; ··· 8300 8300 if (unlikely(!skb)) 8301 8301 return NULL; 8302 8302 8303 - if (unlikely(igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP))) { 8304 - if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb)) { 8305 - xdp->data += IGB_TS_HDR_LEN; 8306 - size -= IGB_TS_HDR_LEN; 8307 - } 8308 - } 8303 + if (timestamp) 8304 + skb_hwtstamps(skb)->hwtstamp = timestamp; 8309 8305 8310 8306 /* Determine available headroom for copy */ 8311 8307 headlen = size; ··· 8332 8336 static struct sk_buff *igb_build_skb(struct igb_ring *rx_ring, 8333 8337 struct igb_rx_buffer *rx_buffer, 8334 8338 struct xdp_buff *xdp, 8335 - union e1000_adv_rx_desc *rx_desc) 8339 + ktime_t timestamp) 8336 8340 { 8337 8341 #if (PAGE_SIZE < 8192) 8338 8342 unsigned int truesize = igb_rx_pg_size(rx_ring) / 2; ··· 8359 8363 if (metasize) 8360 8364 skb_metadata_set(skb, metasize); 8361 8365 8362 - /* pull timestamp out of packet data */ 8363 - if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) { 8364 - if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb)) 8365 - __skb_pull(skb, IGB_TS_HDR_LEN); 8366 - } 8366 + if (timestamp) 8367 + skb_hwtstamps(skb)->hwtstamp = timestamp; 8367 8368 8368 8369 /* update buffer offset */ 8369 8370 #if (PAGE_SIZE < 8192) ··· 8394 8401 break; 8395 8402 case XDP_TX: 8396 8403 result = igb_xdp_xmit_back(adapter, xdp); 8404 + if (result == IGB_XDP_CONSUMED) 8405 + goto out_failure; 8397 8406 break; 8398 8407 case XDP_REDIRECT: 8399 8408 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog); 8400 - if (!err) 8401 - result = IGB_XDP_REDIR; 8402 - else 8403 - result = IGB_XDP_CONSUMED; 8409 + if (err) 8410 + goto out_failure; 8411 + result = IGB_XDP_REDIR; 8404 8412 break; 8405 8413 default: 8406 8414 bpf_warn_invalid_xdp_action(act); 8407 8415 fallthrough; 8408 8416 case XDP_ABORTED: 8417 + out_failure: 8409 8418 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 8410 8419 fallthrough; 8411 8420 case XDP_DROP: ··· 8677 8682 while (likely(total_packets < budget)) { 8678 8683 union e1000_adv_rx_desc *rx_desc; 8679 8684 struct igb_rx_buffer *rx_buffer; 8685 + ktime_t timestamp = 0; 8686 + int pkt_offset = 0; 8680 8687 unsigned int size; 8688 + void *pktbuf; 8681 8689 8682 8690 /* return some buffers to hardware, one at a time is too slow */ 8683 8691 if (cleaned_count >= IGB_RX_BUFFER_WRITE) { ··· 8700 8702 dma_rmb(); 8701 8703 8702 8704 rx_buffer = igb_get_rx_buffer(rx_ring, size, &rx_buf_pgcnt); 8705 + pktbuf = page_address(rx_buffer->page) + rx_buffer->page_offset; 8706 + 8707 + /* pull rx packet timestamp if available and valid */ 8708 + if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) { 8709 + int ts_hdr_len; 8710 + 8711 + ts_hdr_len = igb_ptp_rx_pktstamp(rx_ring->q_vector, 8712 + pktbuf, &timestamp); 8713 + 8714 + pkt_offset += ts_hdr_len; 8715 + size -= ts_hdr_len; 8716 + } 8703 8717 8704 8718 /* retrieve a buffer from the ring */ 8705 8719 if (!skb) { 8706 - unsigned int offset = igb_rx_offset(rx_ring); 8707 - unsigned char *hard_start; 8720 + unsigned char *hard_start = pktbuf - igb_rx_offset(rx_ring); 8721 + unsigned int offset = pkt_offset + igb_rx_offset(rx_ring); 8708 8722 8709 - hard_start = page_address(rx_buffer->page) + 8710 - rx_buffer->page_offset - offset; 8711 8723 xdp_prepare_buff(&xdp, hard_start, offset, size, true); 8712 8724 #if (PAGE_SIZE > 4096) 8713 8725 /* At larger PAGE_SIZE, frame_sz depend on len size */ ··· 8740 8732 } else if (skb) 8741 8733 igb_add_rx_frag(rx_ring, rx_buffer, skb, size); 8742 8734 else if (ring_uses_build_skb(rx_ring)) 8743 - skb = igb_build_skb(rx_ring, rx_buffer, &xdp, rx_desc); 8735 + skb = igb_build_skb(rx_ring, rx_buffer, &xdp, 8736 + timestamp); 8744 8737 else 8745 8738 skb = igb_construct_skb(rx_ring, rx_buffer, 8746 8739 &xdp, timestamp); 8747 8740 8748 8741 /* exit if we failed to retrieve a buffer */ 8749 8742 if (!skb) {
+10 -13
drivers/net/ethernet/intel/igb/igb_ptp.c
··· 856 856 dev_kfree_skb_any(skb); 857 857 } 858 858 859 - #define IGB_RET_PTP_DISABLED 1 860 - #define IGB_RET_PTP_INVALID 2 861 - 862 859 /** 863 860 * igb_ptp_rx_pktstamp - retrieve Rx per packet timestamp 864 861 * @q_vector: Pointer to interrupt specific structure 865 862 * @va: Pointer to address containing Rx buffer 866 - * @skb: Buffer containing timestamp and packet 863 + * @timestamp: Pointer where timestamp will be stored 867 864 * 868 865 * This function is meant to retrieve a timestamp from the first buffer of an 869 866 * incoming frame. The value is stored in little endian format starting on 870 867 * byte 8 871 868 * 872 - * Returns: 0 if success, nonzero if failure 869 + * Returns: The timestamp header length or 0 if not available 873 870 **/ 874 871 int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va, 875 - struct sk_buff *skb) 872 + ktime_t *timestamp) 876 873 { 877 874 struct igb_adapter *adapter = q_vector->adapter; 875 + struct skb_shared_hwtstamps ts; 878 876 __le64 *regval = (__le64 *)va; 879 877 int adjust = 0; 880 878 881 879 if (!(adapter->ptp_flags & IGB_PTP_ENABLED)) 882 880 return 0; 883 881 884 882 /* The timestamp is recorded in little endian format. 885 883 * DWORD: 0 1 2 3 ··· 886 888 887 889 /* check reserved dwords are zero, be/le doesn't matter for zero */ 888 890 if (regval[0]) 889 - return IGB_RET_PTP_INVALID; 891 + return 0; 890 892 891 - igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), 892 - le64_to_cpu(regval[1])); 893 + igb_ptp_systim_to_hwtstamp(adapter, &ts, le64_to_cpu(regval[1])); 893 894 894 895 /* adjust timestamp for the RX latency based on link speed */ 895 896 if (adapter->hw.mac.type == e1000_i210) { ··· 904 907 break; 905 908 } 906 909 } 907 - skb_hwtstamps(skb)->hwtstamp = 908 - ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust); 909 910 910 - return 0; 911 + *timestamp = ktime_sub_ns(ts.hwtstamp, adjust); 912 + 913 + return IGB_TS_HDR_LEN; 911 914 } 912 915 913 916 /**
+5 -6
drivers/net/ethernet/intel/igc/igc_main.c
··· 2047 2047 break; 2048 2048 case XDP_TX: 2049 2049 if (igc_xdp_xmit_back(adapter, xdp) < 0) 2050 - res = IGC_XDP_CONSUMED; 2051 - else 2052 - res = IGC_XDP_TX; 2050 + goto out_failure; 2051 + res = IGC_XDP_TX; 2053 2052 break; 2054 2053 case XDP_REDIRECT: 2055 2054 if (xdp_do_redirect(adapter->netdev, xdp, prog) < 0) 2056 - res = IGC_XDP_CONSUMED; 2057 - else 2058 - res = IGC_XDP_REDIRECT; 2055 + goto out_failure; 2056 + res = IGC_XDP_REDIRECT; 2059 2057 break; 2060 2058 default: 2061 2059 bpf_warn_invalid_xdp_action(act); 2062 2060 fallthrough; 2063 2061 case XDP_ABORTED: 2062 + out_failure: 2064 2063 trace_xdp_exception(adapter->netdev, prog, act); 2065 2064 fallthrough; 2066 2065 case XDP_DROP:
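The igc, ixgbe, ixgbe_xsk, and ixgbevf hunks above all apply the same transformation: every XDP failure path (failed TX frame conversion, failed transmit, failed redirect) now jumps to a single `out_failure` label placed inside the `XDP_ABORTED` case, so `trace_xdp_exception()` fires exactly once before the frame is dropped. A minimal, self-contained model of that control flow (the enum, counter, and function names below are illustrative stand-ins, not the drivers' real symbols):

```c
#include <assert.h>

/* Hypothetical model of the converged XDP verdict handling: all failure
 * paths funnel through one out_failure label in the abort case. */
enum verdict { VERDICT_PASS, VERDICT_TX, VERDICT_REDIR, VERDICT_CONSUMED };

static int exceptions_traced; /* stand-in for trace_xdp_exception() */

static enum verdict run_xdp_act(int act, int tx_fails, int redirect_fails)
{
	enum verdict res;

	switch (act) {
	case 0: /* XDP_PASS */
		res = VERDICT_PASS;
		break;
	case 1: /* XDP_TX */
		if (tx_fails)
			goto out_failure;
		res = VERDICT_TX;
		break;
	case 2: /* XDP_REDIRECT */
		if (redirect_fails)
			goto out_failure;
		res = VERDICT_REDIR;
		break;
	default: /* XDP_ABORTED and unknown actions */
out_failure:
		exceptions_traced++; /* tracepoint fires once per failure */
		res = VERDICT_CONSUMED; /* handled by dropping the packet */
		break;
	}
	return res;
}
```

Jumping to a label inside another `switch` case is legal C, which is what lets the drivers share the tracepoint call between the error paths and the abort case without duplicating it.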
+8 -8
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 2213 2213 break; 2214 2214 case XDP_TX: 2215 2215 xdpf = xdp_convert_buff_to_frame(xdp); 2216 - if (unlikely(!xdpf)) { 2217 - result = IXGBE_XDP_CONSUMED; 2218 - break; 2219 - } 2216 + if (unlikely(!xdpf)) 2217 + goto out_failure; 2220 2218 result = ixgbe_xmit_xdp_ring(adapter, xdpf); 2219 + if (result == IXGBE_XDP_CONSUMED) 2220 + goto out_failure; 2221 2221 break; 2222 2222 case XDP_REDIRECT: 2223 2223 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog); 2224 - if (!err) 2225 - result = IXGBE_XDP_REDIR; 2226 - else 2227 - result = IXGBE_XDP_CONSUMED; 2224 + if (err) 2225 + goto out_failure; 2226 + result = IXGBE_XDP_REDIR; 2228 2227 break; 2229 2228 default: 2230 2229 bpf_warn_invalid_xdp_action(act); 2231 2230 fallthrough; 2232 2231 case XDP_ABORTED: 2232 + out_failure: 2233 2233 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 2234 2234 fallthrough; /* handle aborts by dropping packet */ 2235 2235 case XDP_DROP:
+8 -6
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
··· 106 106 107 107 if (likely(act == XDP_REDIRECT)) { 108 108 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); 109 - result = !err ? IXGBE_XDP_REDIR : IXGBE_XDP_CONSUMED; 109 + if (err) 110 + goto out_failure; 110 111 rcu_read_unlock(); 111 - return result; 112 + return IXGBE_XDP_REDIR; 112 113 } 113 114 114 115 switch (act) { ··· 117 116 break; 118 117 case XDP_TX: 119 118 xdpf = xdp_convert_buff_to_frame(xdp); 120 - if (unlikely(!xdpf)) { 121 - result = IXGBE_XDP_CONSUMED; 122 - break; 123 - } 119 + if (unlikely(!xdpf)) 120 + goto out_failure; 124 121 result = ixgbe_xmit_xdp_ring(adapter, xdpf); 122 + if (result == IXGBE_XDP_CONSUMED) 123 + goto out_failure; 125 124 break; 126 125 default: 127 126 bpf_warn_invalid_xdp_action(act); 128 127 fallthrough; 129 128 case XDP_ABORTED: 129 + out_failure: 130 130 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 131 131 fallthrough; /* handle aborts by dropping packet */ 132 132 case XDP_DROP:
+3
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 1067 1067 case XDP_TX: 1068 1068 xdp_ring = adapter->xdp_ring[rx_ring->queue_index]; 1069 1069 result = ixgbevf_xmit_xdp_ring(xdp_ring, xdp); 1070 + if (result == IXGBEVF_XDP_CONSUMED) 1071 + goto out_failure; 1070 1072 break; 1071 1073 default: 1072 1074 bpf_warn_invalid_xdp_action(act); 1073 1075 fallthrough; 1074 1076 case XDP_ABORTED: 1077 + out_failure: 1075 1078 trace_xdp_exception(rx_ring->netdev, xdp_prog, act); 1076 1079 fallthrough; /* handle aborts by dropping packet */ 1077 1080 case XDP_DROP:
+3
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 823 823 #define QUERY_DEV_CAP_MAD_DEMUX_OFFSET 0xb0 824 824 #define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_BASE_OFFSET 0xa8 825 825 #define QUERY_DEV_CAP_DMFS_HIGH_RATE_QPN_RANGE_OFFSET 0xac 826 + #define QUERY_DEV_CAP_MAP_CLOCK_TO_USER 0xc1 826 827 #define QUERY_DEV_CAP_QP_RATE_LIMIT_NUM_OFFSET 0xcc 827 828 #define QUERY_DEV_CAP_QP_RATE_LIMIT_MAX_OFFSET 0xd0 828 829 #define QUERY_DEV_CAP_QP_RATE_LIMIT_MIN_OFFSET 0xd2 ··· 842 841 843 842 if (mlx4_is_mfunc(dev)) 844 843 disable_unsupported_roce_caps(outbox); 844 + MLX4_GET(field, outbox, QUERY_DEV_CAP_MAP_CLOCK_TO_USER); 845 + dev_cap->map_clock_to_user = field & 0x80; 845 846 MLX4_GET(field, outbox, QUERY_DEV_CAP_RSVD_QP_OFFSET); 846 847 dev_cap->reserved_qps = 1 << (field & 0xf); 847 848 MLX4_GET(field, outbox, QUERY_DEV_CAP_MAX_QP_OFFSET);
+1
drivers/net/ethernet/mellanox/mlx4/fw.h
··· 131 131 u32 health_buffer_addrs; 132 132 struct mlx4_port_cap port_cap[MLX4_MAX_PORTS + 1]; 133 133 bool wol_port[MLX4_MAX_PORTS + 1]; 134 + bool map_clock_to_user; 134 135 }; 135 136 136 137 struct mlx4_func_cap {
+6
drivers/net/ethernet/mellanox/mlx4/main.c
··· 498 498 } 499 499 } 500 500 501 + dev->caps.map_clock_to_user = dev_cap->map_clock_to_user; 501 502 dev->caps.uar_page_size = PAGE_SIZE; 502 503 dev->caps.num_uars = dev_cap->uar_size / PAGE_SIZE; 503 504 dev->caps.local_ca_ack_delay = dev_cap->local_ca_ack_delay; ··· 1948 1947 1949 1948 if (mlx4_is_slave(dev)) 1950 1949 return -EOPNOTSUPP; 1950 + 1951 + if (!dev->caps.map_clock_to_user) { 1952 + mlx4_dbg(dev, "Map clock to user is not supported.\n"); 1953 + return -EOPNOTSUPP; 1954 + } 1951 1955 1952 1956 if (!params) 1953 1957 return -EINVAL;
+10 -2
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 1624 1624 { 1625 1625 struct mlx5e_priv *priv = netdev_priv(netdev); 1626 1626 struct mlx5_core_dev *mdev = priv->mdev; 1627 + unsigned long fec_bitmap; 1627 1628 u16 fec_policy = 0; 1628 1629 int mode; 1629 1630 int err; 1630 1631 1631 - if (bitmap_weight((unsigned long *)&fecparam->fec, 1632 - ETHTOOL_FEC_LLRS_BIT + 1) > 1) 1632 + bitmap_from_arr32(&fec_bitmap, &fecparam->fec, sizeof(fecparam->fec) * BITS_PER_BYTE); 1633 + if (bitmap_weight(&fec_bitmap, ETHTOOL_FEC_LLRS_BIT + 1) > 1) 1633 1634 return -EOPNOTSUPP; 1634 1635 1635 1636 for (mode = 0; mode < ARRAY_SIZE(pplm_fec_2_ethtool); mode++) { ··· 1893 1892 1894 1893 if (curr_val == new_val) 1895 1894 return 0; 1895 + 1896 + if (new_val && !priv->profile->rx_ptp_support && 1897 + priv->tstamp.rx_filter != HWTSTAMP_FILTER_NONE) { 1898 + netdev_err(priv->netdev, 1899 + "Profile doesn't support enabling of CQE compression while hardware time-stamping is enabled.\n"); 1900 + return -EINVAL; 1901 + } 1896 1902 1897 1903 new_params = priv->channels.params; 1898 1904 MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_RX_CQE_COMPRESS, new_val);
+62 -15
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 3858 3858 netdev_warn(netdev, "Disabling rxhash, not supported when CQE compress is active\n"); 3859 3859 } 3860 3860 3861 + if (mlx5e_is_uplink_rep(priv)) { 3862 + features &= ~NETIF_F_HW_TLS_RX; 3863 + if (netdev->features & NETIF_F_HW_TLS_RX) 3864 + netdev_warn(netdev, "Disabling hw_tls_rx, not supported in switchdev mode\n"); 3865 + 3866 + features &= ~NETIF_F_HW_TLS_TX; 3867 + if (netdev->features & NETIF_F_HW_TLS_TX) 3868 + netdev_warn(netdev, "Disabling hw_tls_tx, not supported in switchdev mode\n"); 3869 + } 3870 + 3861 3871 mutex_unlock(&priv->state_lock); 3862 3872 3863 3873 return features; ··· 3984 3974 return mlx5e_ptp_rx_manage_fs(priv, set); 3985 3975 } 3986 3976 3987 - int mlx5e_hwstamp_set(struct mlx5e_priv *priv, struct ifreq *ifr) 3977 + static int mlx5e_hwstamp_config_no_ptp_rx(struct mlx5e_priv *priv, bool rx_filter) 3978 + { 3979 + bool rx_cqe_compress_def = priv->channels.params.rx_cqe_compress_def; 3980 + int err; 3981 + 3982 + if (!rx_filter) 3983 + /* Reset CQE compression to Admin default */ 3984 + return mlx5e_modify_rx_cqe_compression_locked(priv, rx_cqe_compress_def); 3985 + 3986 + if (!MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS)) 3987 + return 0; 3988 + 3989 + /* Disable CQE compression */ 3990 + netdev_warn(priv->netdev, "Disabling RX cqe compression\n"); 3991 + err = mlx5e_modify_rx_cqe_compression_locked(priv, false); 3992 + if (err) 3993 + netdev_err(priv->netdev, "Failed disabling cqe compression err=%d\n", err); 3994 + 3995 + return err; 3996 + } 3997 + 3998 + static int mlx5e_hwstamp_config_ptp_rx(struct mlx5e_priv *priv, bool ptp_rx) 3988 3999 { 3989 4000 struct mlx5e_params new_params; 4001 + 4002 + if (ptp_rx == priv->channels.params.ptp_rx) 4003 + return 0; 4004 + 4005 + new_params = priv->channels.params; 4006 + new_params.ptp_rx = ptp_rx; 4007 + return mlx5e_safe_switch_params(priv, &new_params, mlx5e_ptp_rx_manage_fs_ctx, 4008 + &new_params.ptp_rx, true); 4009 + } 4010 + 4011 + int 
mlx5e_hwstamp_set(struct mlx5e_priv *priv, struct ifreq *ifr) 4012 + { 3990 4013 struct hwtstamp_config config; 3991 4014 bool rx_cqe_compress_def; 4015 + bool ptp_rx; 3992 4016 int err; 3993 4017 3994 4018 if (!MLX5_CAP_GEN(priv->mdev, device_frequency_khz) || ··· 4042 3998 } 4043 3999 4044 4000 mutex_lock(&priv->state_lock); 4045 - new_params = priv->channels.params; 4046 4001 rx_cqe_compress_def = priv->channels.params.rx_cqe_compress_def; 4047 4002 4048 4003 /* RX HW timestamp */ 4049 4004 switch (config.rx_filter) { 4050 4005 case HWTSTAMP_FILTER_NONE: 4051 - new_params.ptp_rx = false; 4006 + ptp_rx = false; 4052 4007 break; 4053 4008 case HWTSTAMP_FILTER_ALL: 4054 4009 case HWTSTAMP_FILTER_SOME: ··· 4064 4021 case HWTSTAMP_FILTER_PTP_V2_SYNC: 4065 4022 case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ: 4066 4023 case HWTSTAMP_FILTER_NTP_ALL: 4067 - new_params.ptp_rx = rx_cqe_compress_def; 4068 4024 config.rx_filter = HWTSTAMP_FILTER_ALL; 4025 + /* ptp_rx is set if both HW TS is set and CQE 4026 + * compression is set 4027 + */ 4028 + ptp_rx = rx_cqe_compress_def; 4069 4029 break; 4070 4030 default: 4071 - mutex_unlock(&priv->state_lock); 4072 - return -ERANGE; 4031 + err = -ERANGE; 4032 + goto err_unlock; 4073 4033 } 4074 4034 4075 - if (new_params.ptp_rx == priv->channels.params.ptp_rx) 4076 - goto out; 4035 + if (!priv->profile->rx_ptp_support) 4036 + err = mlx5e_hwstamp_config_no_ptp_rx(priv, 4037 + config.rx_filter != HWTSTAMP_FILTER_NONE); 4038 + else 4039 + err = mlx5e_hwstamp_config_ptp_rx(priv, ptp_rx); 4040 + if (err) 4041 + goto err_unlock; 4077 4042 4078 - err = mlx5e_safe_switch_params(priv, &new_params, mlx5e_ptp_rx_manage_fs_ctx, 4079 - &new_params.ptp_rx, true); 4080 - if (err) { 4081 - mutex_unlock(&priv->state_lock); 4082 - return err; 4083 - } 4084 - out: 4085 4043 memcpy(&priv->tstamp, &config, sizeof(config)); 4086 4044 mutex_unlock(&priv->state_lock); 4087 4045 ··· 4091 4047 4092 4048 return copy_to_user(ifr->ifr_data, &config, 4093 4049 
sizeof(config)) ? -EFAULT : 0; 4050 + err_unlock: 4051 + mutex_unlock(&priv->state_lock); 4052 + return err; 4094 4053 } 4095 4054 4096 4055 int mlx5e_hwstamp_get(struct mlx5e_priv *priv, struct ifreq *ifr)
+9
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 2015 2015 misc_parameters_3); 2016 2016 struct flow_rule *rule = flow_cls_offload_flow_rule(f); 2017 2017 struct flow_dissector *dissector = rule->match.dissector; 2018 + enum fs_flow_table_type fs_type; 2018 2019 u16 addr_type = 0; 2019 2020 u8 ip_proto = 0; 2020 2021 u8 *match_level; 2021 2022 int err; 2022 2023 2024 + fs_type = mlx5e_is_eswitch_flow(flow) ? FS_FT_FDB : FS_FT_NIC_RX; 2023 2025 match_level = outer_match_level; 2024 2026 2025 2027 if (dissector->used_keys & ··· 2147 2145 if (match.mask->vlan_id || 2148 2146 match.mask->vlan_priority || 2149 2147 match.mask->vlan_tpid) { 2148 + if (!MLX5_CAP_FLOWTABLE_TYPE(priv->mdev, ft_field_support.outer_second_vid, 2149 + fs_type)) { 2150 + NL_SET_ERR_MSG_MOD(extack, 2151 + "Matching on CVLAN is not supported"); 2152 + return -EOPNOTSUPP; 2153 + } 2154 + 2150 2155 if (match.key->vlan_tpid == htons(ETH_P_8021AD)) { 2151 2156 MLX5_SET(fte_match_set_misc, misc_c, 2152 2157 outer_second_svlan_tag, 1);
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 219 219 struct mlx5_fs_chains *chains, 220 220 int i) 221 221 { 222 - flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; 222 + if (mlx5_chains_ignore_flow_level_supported(chains)) 223 + flow_act->flags |= FLOW_ACT_IGNORE_FLOW_LEVEL; 223 224 dest[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE; 224 225 dest[i].ft = mlx5_chains_get_tc_end_ft(chains); 225 226 }
+3
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
··· 349 349 reset_abort_work); 350 350 struct mlx5_core_dev *dev = fw_reset->dev; 351 351 352 + if (!test_bit(MLX5_FW_RESET_FLAGS_RESET_REQUESTED, &fw_reset->reset_flags)) 353 + return; 354 + 352 355 mlx5_sync_reset_clear_reset_requested(dev, true); 353 356 mlx5_core_warn(dev, "PCI Sync FW Update Reset Aborted.\n"); 354 357 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
··· 107 107 return chains->flags & MLX5_CHAINS_AND_PRIOS_SUPPORTED; 108 108 } 109 109 110 - static bool mlx5_chains_ignore_flow_level_supported(struct mlx5_fs_chains *chains) 110 + bool mlx5_chains_ignore_flow_level_supported(struct mlx5_fs_chains *chains) 111 111 { 112 112 return chains->flags & MLX5_CHAINS_IGNORE_FLOW_LEVEL_SUPPORTED; 113 113 }
+5
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.h
··· 28 28 29 29 bool 30 30 mlx5_chains_prios_supported(struct mlx5_fs_chains *chains); 31 + bool mlx5_chains_ignore_flow_level_supported(struct mlx5_fs_chains *chains); 31 32 bool 32 33 mlx5_chains_backwards_supported(struct mlx5_fs_chains *chains); 33 34 u32 ··· 70 69 struct mlx5_flow_table *ft); 71 70 72 71 #else /* CONFIG_MLX5_CLS_ACT */ 72 + 73 + static inline bool 74 + mlx5_chains_ignore_flow_level_supported(struct mlx5_fs_chains *chains) 75 + { return false; } 73 76 74 77 static inline struct mlx5_flow_table * 75 78 mlx5_chains_get_table(struct mlx5_fs_chains *chains, u32 chain, u32 prio,
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_fw.c
··· 112 112 int ret; 113 113 114 114 ft_attr.table_type = MLX5_FLOW_TABLE_TYPE_FDB; 115 - ft_attr.level = dmn->info.caps.max_ft_level - 2; 115 + ft_attr.level = min_t(int, dmn->info.caps.max_ft_level - 2, 116 + MLX5_FT_MAX_MULTIPATH_LEVEL); 116 117 ft_attr.reformat_en = reformat_req; 117 118 ft_attr.decap_en = reformat_req; 118 119
+1
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 3815 3815 dev_err(&pdev->dev, 3816 3816 "invalid sram_size %dB or board span %ldB\n", 3817 3817 mgp->sram_size, mgp->board_span); 3818 + status = -EINVAL; 3818 3819 goto abort_with_ioremap; 3819 3820 } 3820 3821 memcpy_fromio(mgp->eeprom_strings,
+3 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1240 1240 priv->phylink_config.dev = &priv->dev->dev; 1241 1241 priv->phylink_config.type = PHYLINK_NETDEV; 1242 1242 priv->phylink_config.pcs_poll = true; 1243 - priv->phylink_config.ovr_an_inband = 1244 - priv->plat->mdio_bus_data->xpcs_an_inband; 1243 + if (priv->plat->mdio_bus_data) 1244 + priv->phylink_config.ovr_an_inband = 1245 + priv->plat->mdio_bus_data->xpcs_an_inband; 1245 1246 1246 1247 if (!fwnode) 1247 1248 fwnode = dev_fwnode(priv->device); ··· 7049 7048 stmmac_napi_del(ndev); 7050 7049 error_hw_init: 7051 7050 destroy_workqueue(priv->wq); 7052 - stmmac_bus_clks_config(priv, false); 7053 7051 bitmap_free(priv->af_xdp_zc_qps); 7054 7052 7055 7053 return ret;
+2 -2
drivers/net/ieee802154/mrf24j40.c
··· 8 8 9 9 #include <linux/spi/spi.h> 10 10 #include <linux/interrupt.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/module.h> 12 - #include <linux/of.h> 13 13 #include <linux/regmap.h> 14 14 #include <linux/ieee802154.h> 15 15 #include <linux/irq.h> ··· 1388 1388 1389 1389 static struct spi_driver mrf24j40_driver = { 1390 1390 .driver = { 1391 - .of_match_table = of_match_ptr(mrf24j40_of_match), 1391 + .of_match_table = mrf24j40_of_match, 1392 1392 .name = "mrf24j40", 1393 1393 }, 1394 1394 .id_table = mrf24j40_ids,
+8 -12
drivers/net/virtio_net.c
··· 401 401 /* If headroom is not 0, there is an offset between the beginning of the 402 402 * data and the allocated space, otherwise the data and the allocated 403 403 * space are aligned. 404 + * 405 + * Buffers with headroom use PAGE_SIZE as alloc size, see 406 + * add_recvbuf_mergeable() + get_mergeable_buf_len() 404 407 */ 405 - if (headroom) { 406 - /* Buffers with headroom use PAGE_SIZE as alloc size, 407 - * see add_recvbuf_mergeable() + get_mergeable_buf_len() 408 - */ 409 - truesize = PAGE_SIZE; 410 - tailroom = truesize - len - offset; 411 - buf = page_address(page); 412 - } else { 413 - tailroom = truesize - len; 414 - buf = p; 415 - } 408 + truesize = headroom ? PAGE_SIZE : truesize; 409 + tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len); 410 + buf = p - headroom; 416 411 417 412 len -= hdr_len; 418 413 offset += hdr_padded_len; ··· 953 958 put_page(page); 954 959 head_skb = page_to_skb(vi, rq, xdp_page, offset, 955 960 len, PAGE_SIZE, false, 956 - metasize, headroom); 961 + metasize, 962 + VIRTIO_XDP_HEADROOM); 957 963 return head_skb; 958 964 } 959 965 break;
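The virtio_net hunk above replaces the two-branch headroom handling with one expression: buffers carrying headroom were allocated a full page, and the usable tailroom must discount the payload, the headroom itself, and the header padding beyond the header. A back-of-the-envelope check of that arithmetic (hypothetical helper; the sizes in the comments are made up for illustration, not taken from the driver):

```c
#include <assert.h>

/* Sketch of the reworked tailroom computation in page_to_skb():
 * truesize = headroom ? PAGE_SIZE : truesize;
 * tailroom = truesize - len - headroom - (hdr_padded_len - hdr_len); */
static int mergeable_tailroom(int truesize, int len, int headroom,
			      int hdr_padded_len, int hdr_len, int page_size)
{
	/* Buffers with headroom use PAGE_SIZE as alloc size, per
	 * add_recvbuf_mergeable() + get_mergeable_buf_len(). */
	if (headroom)
		truesize = page_size;
	return truesize - len - headroom - (hdr_padded_len - hdr_len);
}
```

Compared with the removed branches, the new form also subtracts the padding slack (`hdr_padded_len - hdr_len`), which the old headroom-less path left out of the tailroom accounting.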
+1 -2
drivers/net/wireguard/Makefile
··· 1 - ccflags-y := -O3 2 - ccflags-y += -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt' 1 + ccflags-y := -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt' 3 2 ccflags-$(CONFIG_WIREGUARD_DEBUG) += -DDEBUG 4 3 wireguard-y := main.o 5 4 wireguard-y += noise.o
+100 -91
drivers/net/wireguard/allowedips.c
··· 6 6 #include "allowedips.h" 7 7 #include "peer.h" 8 8 9 + static struct kmem_cache *node_cache; 10 + 9 11 static void swap_endian(u8 *dst, const u8 *src, u8 bits) 10 12 { 11 13 if (bits == 32) { ··· 30 28 node->bitlen = bits; 31 29 memcpy(node->bits, src, bits / 8U); 32 30 } 33 - #define CHOOSE_NODE(parent, key) \ 34 - parent->bit[(key[parent->bit_at_a] >> parent->bit_at_b) & 1] 31 + 32 + static inline u8 choose(struct allowedips_node *node, const u8 *key) 33 + { 34 + return (key[node->bit_at_a] >> node->bit_at_b) & 1; 35 + } 35 36 36 37 static void push_rcu(struct allowedips_node **stack, 37 38 struct allowedips_node __rcu *p, unsigned int *len) ··· 43 38 WARN_ON(IS_ENABLED(DEBUG) && *len >= 128); 44 39 stack[(*len)++] = rcu_dereference_raw(p); 45 40 } 41 + } 42 + 43 + static void node_free_rcu(struct rcu_head *rcu) 44 + { 45 + kmem_cache_free(node_cache, container_of(rcu, struct allowedips_node, rcu)); 46 46 } 47 47 48 48 static void root_free_rcu(struct rcu_head *rcu) ··· 59 49 while (len > 0 && (node = stack[--len])) { 60 50 push_rcu(stack, node->bit[0], &len); 61 51 push_rcu(stack, node->bit[1], &len); 62 - kfree(node); 52 + kmem_cache_free(node_cache, node); 63 53 } 64 54 } 65 55 ··· 74 64 if (rcu_access_pointer(node->peer)) 75 65 list_del(&node->peer_list); 76 66 } 77 - } 78 - 79 - static void walk_remove_by_peer(struct allowedips_node __rcu **top, 80 - struct wg_peer *peer, struct mutex *lock) 81 - { 82 - #define REF(p) rcu_access_pointer(p) 83 - #define DEREF(p) rcu_dereference_protected(*(p), lockdep_is_held(lock)) 84 - #define PUSH(p) ({ \ 85 - WARN_ON(IS_ENABLED(DEBUG) && len >= 128); \ 86 - stack[len++] = p; \ 87 - }) 88 - 89 - struct allowedips_node __rcu **stack[128], **nptr; 90 - struct allowedips_node *node, *prev; 91 - unsigned int len; 92 - 93 - if (unlikely(!peer || !REF(*top))) 94 - return; 95 - 96 - for (prev = NULL, len = 0, PUSH(top); len > 0; prev = node) { 97 - nptr = stack[len - 1]; 98 - node = DEREF(nptr); 99 - if (!node) { 100 - 
--len; 101 - continue; 102 - } 103 - if (!prev || REF(prev->bit[0]) == node || 104 - REF(prev->bit[1]) == node) { 105 - if (REF(node->bit[0])) 106 - PUSH(&node->bit[0]); 107 - else if (REF(node->bit[1])) 108 - PUSH(&node->bit[1]); 109 - } else if (REF(node->bit[0]) == prev) { 110 - if (REF(node->bit[1])) 111 - PUSH(&node->bit[1]); 112 - } else { 113 - if (rcu_dereference_protected(node->peer, 114 - lockdep_is_held(lock)) == peer) { 115 - RCU_INIT_POINTER(node->peer, NULL); 116 - list_del_init(&node->peer_list); 117 - if (!node->bit[0] || !node->bit[1]) { 118 - rcu_assign_pointer(*nptr, DEREF( 119 - &node->bit[!REF(node->bit[0])])); 120 - kfree_rcu(node, rcu); 121 - node = DEREF(nptr); 122 - } 123 - } 124 - --len; 125 - } 126 - } 127 - 128 - #undef REF 129 - #undef DEREF 130 - #undef PUSH 131 67 } 132 68 133 69 static unsigned int fls128(u64 a, u64 b) ··· 115 159 found = node; 116 160 if (node->cidr == bits) 117 161 break; 118 - node = rcu_dereference_bh(CHOOSE_NODE(node, key)); 162 + node = rcu_dereference_bh(node->bit[choose(node, key)]); 119 163 } 120 164 return found; 121 165 } ··· 147 191 u8 cidr, u8 bits, struct allowedips_node **rnode, 148 192 struct mutex *lock) 149 193 { 150 - struct allowedips_node *node = rcu_dereference_protected(trie, 151 - lockdep_is_held(lock)); 194 + struct allowedips_node *node = rcu_dereference_protected(trie, lockdep_is_held(lock)); 152 195 struct allowedips_node *parent = NULL; 153 196 bool exact = false; 154 197 ··· 157 202 exact = true; 158 203 break; 159 204 } 160 - node = rcu_dereference_protected(CHOOSE_NODE(parent, key), 161 - lockdep_is_held(lock)); 205 + node = rcu_dereference_protected(parent->bit[choose(parent, key)], lockdep_is_held(lock)); 162 206 } 163 207 *rnode = parent; 164 208 return exact; 209 + } 210 + 211 + static inline void connect_node(struct allowedips_node **parent, u8 bit, struct allowedips_node *node) 212 + { 213 + node->parent_bit_packed = (unsigned long)parent | bit; 214 + rcu_assign_pointer(*parent, 
node); 215 + } 216 + 217 + static inline void choose_and_connect_node(struct allowedips_node *parent, struct allowedips_node *node) 218 + { 219 + u8 bit = choose(parent, node->bits); 220 + connect_node(&parent->bit[bit], bit, node); 165 221 } 166 222 167 223 static int add(struct allowedips_node __rcu **trie, u8 bits, const u8 *key, ··· 184 218 return -EINVAL; 185 219 186 220 if (!rcu_access_pointer(*trie)) { 187 - node = kzalloc(sizeof(*node), GFP_KERNEL); 221 + node = kmem_cache_zalloc(node_cache, GFP_KERNEL); 188 222 if (unlikely(!node)) 189 223 return -ENOMEM; 190 224 RCU_INIT_POINTER(node->peer, peer); 191 225 list_add_tail(&node->peer_list, &peer->allowedips_list); 192 226 copy_and_assign_cidr(node, key, cidr, bits); 193 - rcu_assign_pointer(*trie, node); 227 + connect_node(trie, 2, node); 194 228 return 0; 195 229 } 196 230 if (node_placement(*trie, key, cidr, bits, &node, lock)) { ··· 199 233 return 0; 200 234 } 201 235 202 - newnode = kzalloc(sizeof(*newnode), GFP_KERNEL); 236 + newnode = kmem_cache_zalloc(node_cache, GFP_KERNEL); 203 237 if (unlikely(!newnode)) 204 238 return -ENOMEM; 205 239 RCU_INIT_POINTER(newnode->peer, peer); ··· 209 243 if (!node) { 210 244 down = rcu_dereference_protected(*trie, lockdep_is_held(lock)); 211 245 } else { 212 - down = rcu_dereference_protected(CHOOSE_NODE(node, key), 213 - lockdep_is_held(lock)); 246 + const u8 bit = choose(node, key); 247 + down = rcu_dereference_protected(node->bit[bit], lockdep_is_held(lock)); 214 248 if (!down) { 215 - rcu_assign_pointer(CHOOSE_NODE(node, key), newnode); 249 + connect_node(&node->bit[bit], bit, newnode); 216 250 return 0; 217 251 } 218 252 } ··· 220 254 parent = node; 221 255 222 256 if (newnode->cidr == cidr) { 223 - rcu_assign_pointer(CHOOSE_NODE(newnode, down->bits), down); 257 + choose_and_connect_node(newnode, down); 224 258 if (!parent) 225 - rcu_assign_pointer(*trie, newnode); 259 + connect_node(trie, 2, newnode); 226 260 else 227 - rcu_assign_pointer(CHOOSE_NODE(parent, 
newnode->bits), 228 - newnode); 229 - } else { 230 - node = kzalloc(sizeof(*node), GFP_KERNEL); 231 - if (unlikely(!node)) { 232 - list_del(&newnode->peer_list); 233 - kfree(newnode); 234 - return -ENOMEM; 235 - } 236 - INIT_LIST_HEAD(&node->peer_list); 237 - copy_and_assign_cidr(node, newnode->bits, cidr, bits); 238 - 239 - rcu_assign_pointer(CHOOSE_NODE(node, down->bits), down); 240 - rcu_assign_pointer(CHOOSE_NODE(node, newnode->bits), newnode); 241 - if (!parent) 242 - rcu_assign_pointer(*trie, node); 243 - else 244 - rcu_assign_pointer(CHOOSE_NODE(parent, node->bits), 245 - node); 261 + choose_and_connect_node(parent, newnode); 262 + return 0; 246 263 } 264 + 265 + node = kmem_cache_zalloc(node_cache, GFP_KERNEL); 266 + if (unlikely(!node)) { 267 + list_del(&newnode->peer_list); 268 + kmem_cache_free(node_cache, newnode); 269 + return -ENOMEM; 270 + } 271 + INIT_LIST_HEAD(&node->peer_list); 272 + copy_and_assign_cidr(node, newnode->bits, cidr, bits); 273 + 274 + choose_and_connect_node(node, down); 275 + choose_and_connect_node(node, newnode); 276 + if (!parent) 277 + connect_node(trie, 2, node); 278 + else 279 + choose_and_connect_node(parent, node); 247 280 return 0; 248 281 } 249 282 ··· 300 335 void wg_allowedips_remove_by_peer(struct allowedips *table, 301 336 struct wg_peer *peer, struct mutex *lock) 302 337 { 338 + struct allowedips_node *node, *child, **parent_bit, *parent, *tmp; 339 + bool free_parent; 340 + 341 + if (list_empty(&peer->allowedips_list)) 342 + return; 303 343 ++table->seq; 304 - walk_remove_by_peer(&table->root4, peer, lock); 305 - walk_remove_by_peer(&table->root6, peer, lock); 344 + list_for_each_entry_safe(node, tmp, &peer->allowedips_list, peer_list) { 345 + list_del_init(&node->peer_list); 346 + RCU_INIT_POINTER(node->peer, NULL); 347 + if (node->bit[0] && node->bit[1]) 348 + continue; 349 + child = rcu_dereference_protected(node->bit[!rcu_access_pointer(node->bit[0])], 350 + lockdep_is_held(lock)); 351 + if (child) 352 + 
child->parent_bit_packed = node->parent_bit_packed; 353 + parent_bit = (struct allowedips_node **)(node->parent_bit_packed & ~3UL); 354 + *parent_bit = child; 355 + parent = (void *)parent_bit - 356 + offsetof(struct allowedips_node, bit[node->parent_bit_packed & 1]); 357 + free_parent = !rcu_access_pointer(node->bit[0]) && 358 + !rcu_access_pointer(node->bit[1]) && 359 + (node->parent_bit_packed & 3) <= 1 && 360 + !rcu_access_pointer(parent->peer); 361 + if (free_parent) 362 + child = rcu_dereference_protected( 363 + parent->bit[!(node->parent_bit_packed & 1)], 364 + lockdep_is_held(lock)); 365 + call_rcu(&node->rcu, node_free_rcu); 366 + if (!free_parent) 367 + continue; 368 + if (child) 369 + child->parent_bit_packed = parent->parent_bit_packed; 370 + *(struct allowedips_node **)(parent->parent_bit_packed & ~3UL) = child; 371 + call_rcu(&parent->rcu, node_free_rcu); 372 + } 306 373 } 307 374 308 375 int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr) ··· 369 372 else if (skb->protocol == htons(ETH_P_IPV6)) 370 373 return lookup(table->root6, 128, &ipv6_hdr(skb)->saddr); 371 374 return NULL; 375 + } 376 + 377 + int __init wg_allowedips_slab_init(void) 378 + { 379 + node_cache = KMEM_CACHE(allowedips_node, 0); 380 + return node_cache ? 0 : -ENOMEM; 381 + } 382 + 383 + void wg_allowedips_slab_uninit(void) 384 + { 385 + rcu_barrier(); 386 + kmem_cache_destroy(node_cache); 372 387 } 373 388 374 389 #include "selftest/allowedips.c"
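The allowedips rewrite above replaces the stack-based `walk_remove_by_peer()` with direct unlinking, made possible by `parent_bit_packed`: each node records the address of the pointer slot that references it, with the low bits encoding which slot that is (0/1 for a parent's `bit[0]`/`bit[1]`, 2 for a trie root). Pointer alignment leaves those low bits free, which is why `struct allowedips` gains an `__aligned(4)` for targets like m68k. An illustrative sketch of the tagging (not the kernel's code; names are invented):

```c
#include <assert.h>

/* Toy version of the parent_bit_packed scheme: pack the parent slot's
 * address with a 2-bit tag saying which kind of slot it is. */
struct tnode {
	struct tnode *bit[2];
	unsigned long parent_bit_packed;
};

static void connect(struct tnode **parent, unsigned char which,
		    struct tnode *node)
{
	/* Requires *parent to be at least 4-byte aligned so the low
	 * two bits are guaranteed zero. */
	node->parent_bit_packed = (unsigned long)parent | which;
	*parent = node;
}
```

With this in place, removal can recover the exact slot pointing at a node (`packed & ~3UL`) and whether it hangs off `bit[0]`, `bit[1]`, or a root (`packed & 3`) without rewalking the trie, which is what the new `wg_allowedips_remove_by_peer()` exploits.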
+7 -7
drivers/net/wireguard/allowedips.h
··· 15 15 struct allowedips_node { 16 16 struct wg_peer __rcu *peer; 17 17 struct allowedips_node __rcu *bit[2]; 18 - /* While it may seem scandalous that we waste space for v4, 19 - * we're alloc'ing to the nearest power of 2 anyway, so this 20 - * doesn't actually make a difference. 21 - */ 22 - u8 bits[16] __aligned(__alignof(u64)); 23 18 u8 cidr, bit_at_a, bit_at_b, bitlen; 19 + u8 bits[16] __aligned(__alignof(u64)); 24 20 25 - /* Keep rarely used list at bottom to be beyond cache line. */ 21 + /* Keep rarely used members at bottom to be beyond cache line. */ 22 + unsigned long parent_bit_packed; 26 23 union { 27 24 struct list_head peer_list; 28 25 struct rcu_head rcu; ··· 30 33 struct allowedips_node __rcu *root4; 31 34 struct allowedips_node __rcu *root6; 32 35 u64 seq; 33 - }; 36 + } __aligned(4); /* We pack the lower 2 bits of &root, but m68k only gives 16-bit alignment. */ 34 37 35 38 void wg_allowedips_init(struct allowedips *table); 36 39 void wg_allowedips_free(struct allowedips *table, struct mutex *mutex); ··· 52 55 #ifdef DEBUG 53 56 bool wg_allowedips_selftest(void); 54 57 #endif 58 + 59 + int wg_allowedips_slab_init(void); 60 + void wg_allowedips_slab_uninit(void); 55 61 56 62 #endif /* _WG_ALLOWEDIPS_H */
+16 -1
drivers/net/wireguard/main.c
··· 21 21 { 22 22 int ret; 23 23 24 + ret = wg_allowedips_slab_init(); 25 + if (ret < 0) 26 + goto err_allowedips; 27 + 24 28 #ifdef DEBUG 29 + ret = -ENOTRECOVERABLE; 25 30 if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() || 26 31 !wg_ratelimiter_selftest()) 27 - return -ENOTRECOVERABLE; 32 + goto err_peer; 28 33 #endif 29 34 wg_noise_init(); 35 + 36 + ret = wg_peer_init(); 37 + if (ret < 0) 38 + goto err_peer; 30 39 31 40 ret = wg_device_init(); 32 41 if (ret < 0) ··· 53 44 err_netlink: 54 45 wg_device_uninit(); 55 46 err_device: 47 + wg_peer_uninit(); 48 + err_peer: 49 + wg_allowedips_slab_uninit(); 50 + err_allowedips: 56 51 return ret; 57 52 } 58 53 ··· 64 51 { 65 52 wg_genetlink_uninit(); 66 53 wg_device_uninit(); 54 + wg_peer_uninit(); 55 + wg_allowedips_slab_uninit(); 67 56 } 68 57 69 58 module_init(mod_init);
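The main.c hunk above threads the two new kmem_cache setup calls into the usual kernel goto-ladder: each init step is unwound in reverse order when a later step fails. A minimal model of that ordering (numbered stand-ins for `wg_allowedips_slab_init()`, `wg_peer_init()`, and `wg_device_init()`; the failure injection is hypothetical):

```c
#include <assert.h>

static int inited[3];

static int init_step(int i, int fail)
{
	if (fail)
		return -1;
	inited[i] = 1;
	return 0;
}

static void uninit_step(int i)
{
	inited[i] = 0;
}

/* fail_mask bit n forces step n to fail, to exercise the unwind paths. */
static int mod_init_model(int fail_mask)
{
	int ret;

	ret = init_step(0, fail_mask & 1); /* allowedips slab */
	if (ret < 0)
		goto err_allowedips;
	ret = init_step(1, fail_mask & 2); /* peer cache */
	if (ret < 0)
		goto err_peer;
	ret = init_step(2, fail_mask & 4); /* device */
	if (ret < 0)
		goto err_device;
	return 0;

err_device:
	uninit_step(1);
err_peer:
	uninit_step(0);
err_allowedips:
	return ret;
}
```

The labels fall through each other deliberately: a failure at step n tears down exactly steps n-1 .. 0, mirroring how `mod_exit` in the real patch calls `wg_peer_uninit()` and `wg_allowedips_slab_uninit()` last.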
+20 -7
drivers/net/wireguard/peer.c
··· 15 15 #include <linux/rcupdate.h> 16 16 #include <linux/list.h> 17 17 18 + static struct kmem_cache *peer_cache; 18 19 static atomic64_t peer_counter = ATOMIC64_INIT(0); 19 20 20 21 struct wg_peer *wg_peer_create(struct wg_device *wg, ··· 30 29 if (wg->num_peers >= MAX_PEERS_PER_DEVICE) 31 30 return ERR_PTR(ret); 32 31 33 - peer = kzalloc(sizeof(*peer), GFP_KERNEL); 32 + peer = kmem_cache_zalloc(peer_cache, GFP_KERNEL); 34 33 if (unlikely(!peer)) 35 34 return ERR_PTR(ret); 36 - if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)) 35 + if (unlikely(dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))) 37 36 goto err; 38 37 39 38 peer->device = wg; ··· 65 64 return peer; 66 65 67 66 err: 68 - kfree(peer); 67 + kmem_cache_free(peer_cache, peer); 69 68 return ERR_PTR(ret); 70 69 } 71 70 ··· 89 88 /* Mark as dead, so that we don't allow jumping contexts after. */ 90 89 WRITE_ONCE(peer->is_dead, true); 91 90 92 - /* The caller must now synchronize_rcu() for this to take effect. */ 91 + /* The caller must now synchronize_net() for this to take effect. */ 93 92 } 94 93 95 94 static void peer_remove_after_dead(struct wg_peer *peer) ··· 161 160 lockdep_assert_held(&peer->device->device_update_lock); 162 161 163 162 peer_make_dead(peer); 164 - synchronize_rcu(); 163 + synchronize_net(); 165 164 peer_remove_after_dead(peer); 166 165 } 167 166 ··· 179 178 peer_make_dead(peer); 180 179 list_add_tail(&peer->peer_list, &dead_peers); 181 180 } 182 - synchronize_rcu(); 181 + synchronize_net(); 183 182 list_for_each_entry_safe(peer, temp, &dead_peers, peer_list) 184 183 peer_remove_after_dead(peer); 185 184 } ··· 194 193 /* The final zeroing takes care of clearing any remaining handshake key 195 194 * material and other potentially sensitive information. 
196 195 */ 197 - kfree_sensitive(peer); 196 + memzero_explicit(peer, sizeof(*peer)); 197 + kmem_cache_free(peer_cache, peer); 198 198 } 199 199 200 200 static void kref_release(struct kref *refcount) ··· 226 224 if (unlikely(!peer)) 227 225 return; 228 226 kref_put(&peer->refcount, kref_release); 227 + } 228 + 229 + int __init wg_peer_init(void) 230 + { 231 + peer_cache = KMEM_CACHE(wg_peer, 0); 232 + return peer_cache ? 0 : -ENOMEM; 233 + } 234 + 235 + void wg_peer_uninit(void) 236 + { 237 + kmem_cache_destroy(peer_cache); 229 238 }
+3
drivers/net/wireguard/peer.h
··· 80 80 void wg_peer_remove(struct wg_peer *peer); 81 81 void wg_peer_remove_all(struct wg_device *wg); 82 82 83 + int wg_peer_init(void); 84 + void wg_peer_uninit(void); 85 + 83 86 #endif /* _WG_PEER_H */
+80 -87
drivers/net/wireguard/selftest/allowedips.c
··· 19 19 20 20 #include <linux/siphash.h> 21 21 22 - static __init void swap_endian_and_apply_cidr(u8 *dst, const u8 *src, u8 bits, 23 - u8 cidr) 24 - { 25 - swap_endian(dst, src, bits); 26 - memset(dst + (cidr + 7) / 8, 0, bits / 8 - (cidr + 7) / 8); 27 - if (cidr) 28 - dst[(cidr + 7) / 8 - 1] &= ~0U << ((8 - (cidr % 8)) % 8); 29 - } 30 - 31 22 static __init void print_node(struct allowedips_node *node, u8 bits) 32 23 { 33 24 char *fmt_connection = KERN_DEBUG "\t\"%p/%d\" -> \"%p/%d\";\n"; 34 - char *fmt_declaration = KERN_DEBUG 35 - "\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n"; 25 + char *fmt_declaration = KERN_DEBUG "\t\"%p/%d\"[style=%s, color=\"#%06x\"];\n"; 26 + u8 ip1[16], ip2[16], cidr1, cidr2; 36 27 char *style = "dotted"; 37 - u8 ip1[16], ip2[16]; 38 28 u32 color = 0; 39 29 30 + if (node == NULL) 31 + return; 40 32 if (bits == 32) { 41 33 fmt_connection = KERN_DEBUG "\t\"%pI4/%d\" -> \"%pI4/%d\";\n"; 42 - fmt_declaration = KERN_DEBUG 43 - "\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n"; 34 + fmt_declaration = KERN_DEBUG "\t\"%pI4/%d\"[style=%s, color=\"#%06x\"];\n"; 44 35 } else if (bits == 128) { 45 36 fmt_connection = KERN_DEBUG "\t\"%pI6/%d\" -> \"%pI6/%d\";\n"; 46 - fmt_declaration = KERN_DEBUG 47 - "\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n"; 37 + fmt_declaration = KERN_DEBUG "\t\"%pI6/%d\"[style=%s, color=\"#%06x\"];\n"; 48 38 } 49 39 if (node->peer) { 50 40 hsiphash_key_t key = { { 0 } }; ··· 45 55 hsiphash_1u32(0xabad1dea, &key) % 200; 46 56 style = "bold"; 47 57 } 48 - swap_endian_and_apply_cidr(ip1, node->bits, bits, node->cidr); 49 - printk(fmt_declaration, ip1, node->cidr, style, color); 58 + wg_allowedips_read_node(node, ip1, &cidr1); 59 + printk(fmt_declaration, ip1, cidr1, style, color); 50 60 if (node->bit[0]) { 51 - swap_endian_and_apply_cidr(ip2, 52 - rcu_dereference_raw(node->bit[0])->bits, bits, 53 - node->cidr); 54 - printk(fmt_connection, ip1, node->cidr, ip2, 55 - rcu_dereference_raw(node->bit[0])->cidr); 56 - 
print_node(rcu_dereference_raw(node->bit[0]), bits); 61 + wg_allowedips_read_node(rcu_dereference_raw(node->bit[0]), ip2, &cidr2); 62 + printk(fmt_connection, ip1, cidr1, ip2, cidr2); 57 63 } 58 64 if (node->bit[1]) { 59 - swap_endian_and_apply_cidr(ip2, 60 - rcu_dereference_raw(node->bit[1])->bits, 61 - bits, node->cidr); 62 - printk(fmt_connection, ip1, node->cidr, ip2, 63 - rcu_dereference_raw(node->bit[1])->cidr); 64 - print_node(rcu_dereference_raw(node->bit[1]), bits); 65 + wg_allowedips_read_node(rcu_dereference_raw(node->bit[1]), ip2, &cidr2); 66 + printk(fmt_connection, ip1, cidr1, ip2, cidr2); 65 67 } 68 + if (node->bit[0]) 69 + print_node(rcu_dereference_raw(node->bit[0]), bits); 70 + if (node->bit[1]) 71 + print_node(rcu_dereference_raw(node->bit[1]), bits); 66 72 } 67 73 68 74 static __init void print_tree(struct allowedips_node __rcu *top, u8 bits) ··· 107 121 { 108 122 union nf_inet_addr mask; 109 123 110 - memset(&mask, 0x00, 128 / 8); 111 - memset(&mask, 0xff, cidr / 8); 124 + memset(&mask, 0, sizeof(mask)); 125 + memset(&mask.all, 0xff, cidr / 8); 112 126 if (cidr % 32) 113 127 mask.all[cidr / 32] = (__force u32)htonl( 114 128 (0xFFFFFFFFUL << (32 - (cidr % 32))) & 0xFFFFFFFFUL); ··· 135 149 } 136 150 137 151 static __init inline bool 138 - horrible_match_v4(const struct horrible_allowedips_node *node, 139 - struct in_addr *ip) 152 + horrible_match_v4(const struct horrible_allowedips_node *node, struct in_addr *ip) 140 153 { 141 154 return (ip->s_addr & node->mask.ip) == node->ip.ip; 142 155 } 143 156 144 157 static __init inline bool 145 - horrible_match_v6(const struct horrible_allowedips_node *node, 146 - struct in6_addr *ip) 158 + horrible_match_v6(const struct horrible_allowedips_node *node, struct in6_addr *ip) 147 159 { 148 - return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) == 149 - node->ip.ip6[0] && 150 - (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) == 151 - node->ip.ip6[1] && 152 - (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) == 153 - 
node->ip.ip6[2] && 160 + return (ip->in6_u.u6_addr32[0] & node->mask.ip6[0]) == node->ip.ip6[0] && 161 + (ip->in6_u.u6_addr32[1] & node->mask.ip6[1]) == node->ip.ip6[1] && 162 + (ip->in6_u.u6_addr32[2] & node->mask.ip6[2]) == node->ip.ip6[2] && 154 163 (ip->in6_u.u6_addr32[3] & node->mask.ip6[3]) == node->ip.ip6[3]; 155 164 } 156 165 157 166 static __init void 158 - horrible_insert_ordered(struct horrible_allowedips *table, 159 - struct horrible_allowedips_node *node) 167 + horrible_insert_ordered(struct horrible_allowedips *table, struct horrible_allowedips_node *node) 160 168 { 161 169 struct horrible_allowedips_node *other = NULL, *where = NULL; 162 170 u8 my_cidr = horrible_mask_to_cidr(node->mask); 163 171 164 172 hlist_for_each_entry(other, &table->head, table) { 165 - if (!memcmp(&other->mask, &node->mask, 166 - sizeof(union nf_inet_addr)) && 167 - !memcmp(&other->ip, &node->ip, 168 - sizeof(union nf_inet_addr)) && 169 - other->ip_version == node->ip_version) { 173 + if (other->ip_version == node->ip_version && 174 + !memcmp(&other->mask, &node->mask, sizeof(union nf_inet_addr)) && 175 + !memcmp(&other->ip, &node->ip, sizeof(union nf_inet_addr))) { 170 176 other->value = node->value; 171 177 kfree(node); 172 178 return; 173 179 } 180 + } 181 + hlist_for_each_entry(other, &table->head, table) { 174 182 where = other; 175 183 if (horrible_mask_to_cidr(other->mask) <= my_cidr) 176 184 break; ··· 181 201 horrible_allowedips_insert_v4(struct horrible_allowedips *table, 182 202 struct in_addr *ip, u8 cidr, void *value) 183 203 { 184 - struct horrible_allowedips_node *node = kzalloc(sizeof(*node), 185 - GFP_KERNEL); 204 + struct horrible_allowedips_node *node = kzalloc(sizeof(*node), GFP_KERNEL); 186 205 187 206 if (unlikely(!node)) 188 207 return -ENOMEM; ··· 198 219 horrible_allowedips_insert_v6(struct horrible_allowedips *table, 199 220 struct in6_addr *ip, u8 cidr, void *value) 200 221 { 201 - struct horrible_allowedips_node *node = kzalloc(sizeof(*node), 202 - 
GFP_KERNEL); 222 + struct horrible_allowedips_node *node = kzalloc(sizeof(*node), GFP_KERNEL); 203 223 204 224 if (unlikely(!node)) 205 225 return -ENOMEM; ··· 212 234 } 213 235 214 236 static __init void * 215 - horrible_allowedips_lookup_v4(struct horrible_allowedips *table, 216 - struct in_addr *ip) 237 + horrible_allowedips_lookup_v4(struct horrible_allowedips *table, struct in_addr *ip) 217 238 { 218 239 struct horrible_allowedips_node *node; 219 - void *ret = NULL; 220 240 221 241 hlist_for_each_entry(node, &table->head, table) { 222 - if (node->ip_version != 4) 223 - continue; 224 - if (horrible_match_v4(node, ip)) { 225 - ret = node->value; 226 - break; 227 - } 242 + if (node->ip_version == 4 && horrible_match_v4(node, ip)) 243 + return node->value; 228 244 } 229 - return ret; 245 + return NULL; 230 246 } 231 247 232 248 static __init void * 233 - horrible_allowedips_lookup_v6(struct horrible_allowedips *table, 234 - struct in6_addr *ip) 249 + horrible_allowedips_lookup_v6(struct horrible_allowedips *table, struct in6_addr *ip) 235 250 { 236 251 struct horrible_allowedips_node *node; 237 - void *ret = NULL; 238 252 239 253 hlist_for_each_entry(node, &table->head, table) { 240 - if (node->ip_version != 6) 241 - continue; 242 - if (horrible_match_v6(node, ip)) { 243 - ret = node->value; 244 - break; 245 - } 254 + if (node->ip_version == 6 && horrible_match_v6(node, ip)) 255 + return node->value; 246 256 } 247 - return ret; 257 + return NULL; 258 + } 259 + 260 + 261 + static __init void 262 + horrible_allowedips_remove_by_value(struct horrible_allowedips *table, void *value) 263 + { 264 + struct horrible_allowedips_node *node; 265 + struct hlist_node *h; 266 + 267 + hlist_for_each_entry_safe(node, h, &table->head, table) { 268 + if (node->value != value) 269 + continue; 270 + hlist_del(&node->table); 271 + kfree(node); 272 + } 273 + 248 274 } 249 275 250 276 static __init bool randomized_test(void) ··· 278 296 goto free; 279 297 } 280 298 
kref_init(&peers[i]->refcount); 299 + INIT_LIST_HEAD(&peers[i]->allowedips_list); 281 300 } 282 301 283 302 mutex_lock(&mutex); ··· 316 333 if (wg_allowedips_insert_v4(&t, 317 334 (struct in_addr *)mutated, 318 335 cidr, peer, &mutex) < 0) { 319 - pr_err("allowedips random malloc: FAIL\n"); 336 + pr_err("allowedips random self-test malloc: FAIL\n"); 320 337 goto free_locked; 321 338 } 322 339 if (horrible_allowedips_insert_v4(&h, ··· 379 396 print_tree(t.root6, 128); 380 397 } 381 398 382 - for (i = 0; i < NUM_QUERIES; ++i) { 383 - prandom_bytes(ip, 4); 384 - if (lookup(t.root4, 32, ip) != 385 - horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) { 386 - pr_err("allowedips random self-test: FAIL\n"); 387 - goto free; 399 + for (j = 0;; ++j) { 400 + for (i = 0; i < NUM_QUERIES; ++i) { 401 + prandom_bytes(ip, 4); 402 + if (lookup(t.root4, 32, ip) != horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip)) { 403 + horrible_allowedips_lookup_v4(&h, (struct in_addr *)ip); 404 + pr_err("allowedips random v4 self-test: FAIL\n"); 405 + goto free; 406 + } 407 + prandom_bytes(ip, 16); 408 + if (lookup(t.root6, 128, ip) != horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) { 409 + pr_err("allowedips random v6 self-test: FAIL\n"); 410 + goto free; 411 + } 388 412 } 413 + if (j >= NUM_PEERS) 414 + break; 415 + mutex_lock(&mutex); 416 + wg_allowedips_remove_by_peer(&t, peers[j], &mutex); 417 + mutex_unlock(&mutex); 418 + horrible_allowedips_remove_by_value(&h, peers[j]); 389 419 } 390 420 391 - for (i = 0; i < NUM_QUERIES; ++i) { 392 - prandom_bytes(ip, 16); 393 - if (lookup(t.root6, 128, ip) != 394 - horrible_allowedips_lookup_v6(&h, (struct in6_addr *)ip)) { 395 - pr_err("allowedips random self-test: FAIL\n"); 396 - goto free; 397 - } 421 + if (t.root4 || t.root6) { 422 + pr_err("allowedips random self-test removal: FAIL\n"); 423 + goto free; 398 424 } 425 + 399 426 ret = true; 400 427 401 428 free:
+1 -1
drivers/net/wireguard/socket.c
··· 430 430 if (new4) 431 431 wg->incoming_port = ntohs(inet_sk(new4)->inet_sport); 432 432 mutex_unlock(&wg->socket_update_lock); 433 - synchronize_rcu(); 433 + synchronize_net(); 434 434 sock_free(old4); 435 435 sock_free(old6); 436 436 }
+26
drivers/net/wireless/mediatek/mt76/mac80211.c
··· 514 514 static void mt76_rx_release_amsdu(struct mt76_phy *phy, enum mt76_rxq_id q) 515 515 { 516 516 struct sk_buff *skb = phy->rx_amsdu[q].head; 517 + struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb; 517 518 struct mt76_dev *dev = phy->dev; 518 519 519 520 phy->rx_amsdu[q].head = NULL; 520 521 phy->rx_amsdu[q].tail = NULL; 522 + 523 + /* 524 + * Validate if the amsdu has a proper first subframe. 525 + * A single MSDU can be parsed as A-MSDU when the unauthenticated A-MSDU 526 + * flag of the QoS header gets flipped. In such cases, the first 527 + * subframe has a LLC/SNAP header in the location of the destination 528 + * address. 529 + */ 530 + if (skb_shinfo(skb)->frag_list) { 531 + int offset = 0; 532 + 533 + if (!(status->flag & RX_FLAG_8023)) { 534 + offset = ieee80211_get_hdrlen_from_skb(skb); 535 + 536 + if ((status->flag & 537 + (RX_FLAG_DECRYPTED | RX_FLAG_IV_STRIPPED)) == 538 + RX_FLAG_DECRYPTED) 539 + offset += 8; 540 + } 541 + 542 + if (ether_addr_equal(skb->data + offset, rfc1042_header)) { 543 + dev_kfree_skb(skb); 544 + return; 545 + } 546 + } 521 547 __skb_queue_tail(&dev->rx_skb[q], skb); 522 548 } 523 549
-1
drivers/net/wireless/mediatek/mt76/mt7615/init.c
··· 510 510 mutex_init(&dev->pm.mutex); 511 511 init_waitqueue_head(&dev->pm.wait); 512 512 spin_lock_init(&dev->pm.txq_lock); 513 - set_bit(MT76_STATE_PM, &dev->mphy.state); 514 513 INIT_DELAYED_WORK(&dev->mphy.mac_work, mt7615_mac_work); 515 514 INIT_DELAYED_WORK(&dev->phy.scan_work, mt7615_scan_work); 516 515 INIT_DELAYED_WORK(&dev->coredump.work, mt7615_coredump_work);
+3 -2
drivers/net/wireless/mediatek/mt76/mt7615/mac.c
··· 1912 1912 napi_schedule(&dev->mt76.napi[i]); 1913 1913 mt76_connac_pm_dequeue_skbs(mphy, &dev->pm); 1914 1914 mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_WM], false); 1915 - ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work, 1916 - MT7615_WATCHDOG_TIME); 1915 + if (test_bit(MT76_STATE_RUNNING, &mphy->state)) 1916 + ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work, 1917 + MT7615_WATCHDOG_TIME); 1917 1918 } 1918 1919 1919 1920 ieee80211_wake_queues(mphy->hw);
+12 -7
drivers/net/wireless/mediatek/mt76/mt7615/sdio_mcu.c
··· 51 51 return ret; 52 52 } 53 53 54 - static int mt7663s_mcu_drv_pmctrl(struct mt7615_dev *dev) 54 + static int __mt7663s_mcu_drv_pmctrl(struct mt7615_dev *dev) 55 55 { 56 56 struct sdio_func *func = dev->mt76.sdio.func; 57 57 struct mt76_phy *mphy = &dev->mt76.phy; 58 58 u32 status; 59 59 int ret; 60 - 61 - if (!test_and_clear_bit(MT76_STATE_PM, &mphy->state)) 62 - goto out; 63 60 64 61 sdio_claim_host(func); 65 62 ··· 73 76 } 74 77 75 78 sdio_release_host(func); 76 - 77 - out: 78 79 dev->pm.last_activity = jiffies; 80 + 81 + return 0; 82 + } 83 + 84 + static int mt7663s_mcu_drv_pmctrl(struct mt7615_dev *dev) 85 + { 86 + struct mt76_phy *mphy = &dev->mt76.phy; 87 + 88 + if (test_and_clear_bit(MT76_STATE_PM, &mphy->state)) 89 + return __mt7663s_mcu_drv_pmctrl(dev); 79 90 80 91 return 0; 81 92 } ··· 128 123 struct mt7615_mcu_ops *mcu_ops; 129 124 int ret; 130 125 131 - ret = mt7663s_mcu_drv_pmctrl(dev); 126 + ret = __mt7663s_mcu_drv_pmctrl(dev); 132 127 if (ret) 133 128 return ret; 134 129
-3
drivers/net/wireless/mediatek/mt76/mt7615/usb_mcu.c
··· 55 55 56 56 dev->mt76.mcu_ops = &mt7663u_mcu_ops, 57 57 58 - /* usb does not support runtime-pm */ 59 - clear_bit(MT76_STATE_PM, &dev->mphy.state); 60 58 mt76_set(dev, MT_UDMA_TX_QSEL, MT_FW_DL_EN); 61 - 62 59 if (test_and_clear_bit(MT76_STATE_POWER_OFF, &dev->mphy.state)) { 63 60 mt7615_mcu_restart(&dev->mt76); 64 61 if (!mt76_poll_msec(dev, MT_CONN_ON_MISC,
+4
drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
··· 721 721 phy->phy_type = mt76_connac_get_phy_mode_v2(mphy, vif, band, sta); 722 722 phy->basic_rate = cpu_to_le16((u16)vif->bss_conf.basic_rates); 723 723 phy->rcpi = rcpi; 724 + phy->ampdu = FIELD_PREP(IEEE80211_HT_AMPDU_PARM_FACTOR, 725 + sta->ht_cap.ampdu_factor) | 726 + FIELD_PREP(IEEE80211_HT_AMPDU_PARM_DENSITY, 727 + sta->ht_cap.ampdu_density); 724 728 725 729 tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA, sizeof(*ra_info)); 726 730 ra_info = (struct sta_rec_ra_info *)tlv;
+77 -4
drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
··· 87 87 .reconfig_complete = mt76x02_reconfig_complete, 88 88 }; 89 89 90 - static int mt76x0e_register_device(struct mt76x02_dev *dev) 90 + static int mt76x0e_init_hardware(struct mt76x02_dev *dev, bool resume) 91 91 { 92 92 int err; 93 93 ··· 100 100 if (err < 0) 101 101 return err; 102 102 103 - err = mt76x02_dma_init(dev); 104 - if (err < 0) 105 - return err; 103 + if (!resume) { 104 + err = mt76x02_dma_init(dev); 105 + if (err < 0) 106 + return err; 107 + } 106 108 107 109 err = mt76x0_init_hardware(dev); 108 110 if (err < 0) ··· 124 122 125 123 mt76_clear(dev, 0x110, BIT(9)); 126 124 mt76_set(dev, MT_MAX_LEN_CFG, BIT(13)); 125 + 126 + return 0; 127 + } 128 + 129 + static int mt76x0e_register_device(struct mt76x02_dev *dev) 130 + { 131 + int err; 132 + 133 + err = mt76x0e_init_hardware(dev, false); 134 + if (err < 0) 135 + return err; 127 136 128 137 err = mt76x0_register_device(dev); 129 138 if (err < 0) ··· 179 166 ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32)); 180 167 if (ret) 181 168 return ret; 169 + 170 + mt76_pci_disable_aspm(pdev); 182 171 183 172 mdev = mt76_alloc_device(&pdev->dev, sizeof(*dev), &mt76x0e_ops, 184 173 &drv_ops); ··· 235 220 mt76_free_device(mdev); 236 221 } 237 222 223 + #ifdef CONFIG_PM 224 + static int mt76x0e_suspend(struct pci_dev *pdev, pm_message_t state) 225 + { 226 + struct mt76_dev *mdev = pci_get_drvdata(pdev); 227 + struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); 228 + int i; 229 + 230 + mt76_worker_disable(&mdev->tx_worker); 231 + for (i = 0; i < ARRAY_SIZE(mdev->phy.q_tx); i++) 232 + mt76_queue_tx_cleanup(dev, mdev->phy.q_tx[i], true); 233 + for (i = 0; i < ARRAY_SIZE(mdev->q_mcu); i++) 234 + mt76_queue_tx_cleanup(dev, mdev->q_mcu[i], true); 235 + napi_disable(&mdev->tx_napi); 236 + 237 + mt76_for_each_q_rx(mdev, i) 238 + napi_disable(&mdev->napi[i]); 239 + 240 + mt76x02_dma_disable(dev); 241 + mt76x02_mcu_cleanup(dev); 242 + mt76x0_chip_onoff(dev, false, false); 243 + 244 + 
pci_enable_wake(pdev, pci_choose_state(pdev, state), true); 245 + pci_save_state(pdev); 246 + 247 + return pci_set_power_state(pdev, pci_choose_state(pdev, state)); 248 + } 249 + 250 + static int mt76x0e_resume(struct pci_dev *pdev) 251 + { 252 + struct mt76_dev *mdev = pci_get_drvdata(pdev); 253 + struct mt76x02_dev *dev = container_of(mdev, struct mt76x02_dev, mt76); 254 + int err, i; 255 + 256 + err = pci_set_power_state(pdev, PCI_D0); 257 + if (err) 258 + return err; 259 + 260 + pci_restore_state(pdev); 261 + 262 + mt76_worker_enable(&mdev->tx_worker); 263 + 264 + mt76_for_each_q_rx(mdev, i) { 265 + mt76_queue_rx_reset(dev, i); 266 + napi_enable(&mdev->napi[i]); 267 + napi_schedule(&mdev->napi[i]); 268 + } 269 + 270 + napi_enable(&mdev->tx_napi); 271 + napi_schedule(&mdev->tx_napi); 272 + 273 + return mt76x0e_init_hardware(dev, true); 274 + } 275 + #endif /* CONFIG_PM */ 276 + 238 277 static const struct pci_device_id mt76x0e_device_table[] = { 239 278 { PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7610) }, 240 279 { PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7630) }, ··· 306 237 .id_table = mt76x0e_device_table, 307 238 .probe = mt76x0e_probe, 308 239 .remove = mt76x0e_remove, 240 + #ifdef CONFIG_PM 241 + .suspend = mt76x0e_suspend, 242 + .resume = mt76x0e_resume, 243 + #endif /* CONFIG_PM */ 309 244 }; 310 245 311 246 module_pci_driver(mt76x0e_driver);
+2 -2
drivers/net/wireless/mediatek/mt76/mt7921/init.c
··· 76 76 struct wiphy *wiphy = hw->wiphy; 77 77 78 78 hw->queues = 4; 79 - hw->max_rx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF; 80 - hw->max_tx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF; 79 + hw->max_rx_aggregation_subframes = 64; 80 + hw->max_tx_aggregation_subframes = 128; 81 81 82 82 hw->radiotap_timestamp.units_pos = 83 83 IEEE80211_RADIOTAP_TIMESTAMP_UNIT_US;
+3 -2
drivers/net/wireless/mediatek/mt76/mt7921/mac.c
··· 1404 1404 napi_schedule(&dev->mt76.napi[i]); 1405 1405 mt76_connac_pm_dequeue_skbs(mphy, &dev->pm); 1406 1406 mt7921_tx_cleanup(dev); 1407 - ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work, 1408 - MT7921_WATCHDOG_TIME); 1407 + if (test_bit(MT76_STATE_RUNNING, &mphy->state)) 1408 + ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work, 1409 + MT7921_WATCHDOG_TIME); 1409 1410 } 1410 1411 1411 1412 ieee80211_wake_queues(mphy->hw);
+1 -2
drivers/net/wireless/mediatek/mt76/mt7921/main.c
··· 74 74 IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G; 75 75 else if (band == NL80211_BAND_5GHZ) 76 76 he_cap_elem->phy_cap_info[0] = 77 - IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G | 78 - IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G; 77 + IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G; 79 78 80 79 he_cap_elem->phy_cap_info[1] = 81 80 IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD;
+10 -7
drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
··· 402 402 mt7921_mcu_tx_rate_report(struct mt7921_dev *dev, struct sk_buff *skb, 403 403 u16 wlan_idx) 404 404 { 405 - struct mt7921_mcu_wlan_info_event *wtbl_info = 406 - (struct mt7921_mcu_wlan_info_event *)(skb->data); 407 - struct rate_info rate = {}; 408 - u8 curr_idx = wtbl_info->rate_info.rate_idx; 409 - u16 curr = le16_to_cpu(wtbl_info->rate_info.rate[curr_idx]); 410 - struct mt7921_mcu_peer_cap peer = wtbl_info->peer_cap; 405 + struct mt7921_mcu_wlan_info_event *wtbl_info; 411 406 struct mt76_phy *mphy = &dev->mphy; 412 407 struct mt7921_sta_stats *stats; 408 + struct rate_info rate = {}; 413 409 struct mt7921_sta *msta; 414 410 struct mt76_wcid *wcid; 411 + u8 idx; 415 412 416 413 if (wlan_idx >= MT76_N_WCIDS) 414 + return; 415 + 416 + wtbl_info = (struct mt7921_mcu_wlan_info_event *)skb->data; 417 + idx = wtbl_info->rate_info.rate_idx; 418 + if (idx >= ARRAY_SIZE(wtbl_info->rate_info.rate)) 417 419 return; 418 420 419 421 rcu_read_lock(); ··· 428 426 stats = &msta->stats; 429 427 430 428 /* current rate */ 431 - mt7921_mcu_tx_rate_parse(mphy, &peer, &rate, curr); 429 + mt7921_mcu_tx_rate_parse(mphy, &wtbl_info->peer_cap, &rate, 430 + le16_to_cpu(wtbl_info->rate_info.rate[idx])); 432 431 stats->tx_rate = rate; 433 432 out: 434 433 rcu_read_unlock();
+6
drivers/net/xen-netback/interface.c
··· 684 684 { 685 685 if (queue->task) { 686 686 kthread_stop(queue->task); 687 + put_task_struct(queue->task); 687 688 queue->task = NULL; 688 689 } 689 690 ··· 746 745 if (IS_ERR(task)) 747 746 goto kthread_err; 748 747 queue->task = task; 748 + /* 749 + * Take a reference to the task in order to prevent it from being freed 750 + * if the thread function returns before kthread_stop is called. 751 + */ 752 + get_task_struct(task); 749 753 750 754 task = kthread_run(xenvif_dealloc_kthread, queue, 751 755 "%s-dealloc", queue->name);
+3 -2
drivers/nvme/host/rdma.c
··· 1320 1320 int count) 1321 1321 { 1322 1322 struct nvme_sgl_desc *sg = &c->common.dptr.sgl; 1323 - struct scatterlist *sgl = req->data_sgl.sg_table.sgl; 1324 1323 struct ib_sge *sge = &req->sge[1]; 1324 + struct scatterlist *sgl; 1325 1325 u32 len = 0; 1326 1326 int i; 1327 1327 1328 - for (i = 0; i < count; i++, sgl++, sge++) { 1328 + for_each_sg(req->data_sgl.sg_table.sgl, sgl, count, i) { 1329 1329 sge->addr = sg_dma_address(sgl); 1330 1330 sge->length = sg_dma_len(sgl); 1331 1331 sge->lkey = queue->device->pd->local_dma_lkey; 1332 1332 len += sge->length; 1333 + sge++; 1333 1334 } 1334 1335 1335 1336 sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);
+16 -17
drivers/nvme/target/core.c
··· 1005 1005 return req->transfer_len - req->metadata_len; 1006 1006 } 1007 1007 1008 - static int nvmet_req_alloc_p2pmem_sgls(struct nvmet_req *req) 1008 + static int nvmet_req_alloc_p2pmem_sgls(struct pci_dev *p2p_dev, 1009 + struct nvmet_req *req) 1009 1010 { 1010 - req->sg = pci_p2pmem_alloc_sgl(req->p2p_dev, &req->sg_cnt, 1011 + req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt, 1011 1012 nvmet_data_transfer_len(req)); 1012 1013 if (!req->sg) 1013 1014 goto out_err; 1014 1015 1015 1016 if (req->metadata_len) { 1016 - req->metadata_sg = pci_p2pmem_alloc_sgl(req->p2p_dev, 1017 + req->metadata_sg = pci_p2pmem_alloc_sgl(p2p_dev, 1017 1018 &req->metadata_sg_cnt, req->metadata_len); 1018 1019 if (!req->metadata_sg) 1019 1020 goto out_free_sg; 1020 1021 } 1022 + 1023 + req->p2p_dev = p2p_dev; 1024 + 1021 1025 return 0; 1022 1026 out_free_sg: 1023 1027 pci_p2pmem_free_sgl(req->p2p_dev, req->sg); ··· 1029 1025 return -ENOMEM; 1030 1026 } 1031 1027 1032 - static bool nvmet_req_find_p2p_dev(struct nvmet_req *req) 1028 + static struct pci_dev *nvmet_req_find_p2p_dev(struct nvmet_req *req) 1033 1029 { 1034 - if (!IS_ENABLED(CONFIG_PCI_P2PDMA)) 1035 - return false; 1036 - 1037 - if (req->sq->ctrl && req->sq->qid && req->ns) { 1038 - req->p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map, 1039 - req->ns->nsid); 1040 - if (req->p2p_dev) 1041 - return true; 1042 - } 1043 - 1044 - req->p2p_dev = NULL; 1045 - return false; 1030 + if (!IS_ENABLED(CONFIG_PCI_P2PDMA) || 1031 + !req->sq->ctrl || !req->sq->qid || !req->ns) 1032 + return NULL; 1033 + return radix_tree_lookup(&req->sq->ctrl->p2p_ns_map, req->ns->nsid); 1046 1034 } 1047 1035 1048 1036 int nvmet_req_alloc_sgls(struct nvmet_req *req) 1049 1037 { 1050 - if (nvmet_req_find_p2p_dev(req) && !nvmet_req_alloc_p2pmem_sgls(req)) 1038 + struct pci_dev *p2p_dev = nvmet_req_find_p2p_dev(req); 1039 + 1040 + if (p2p_dev && !nvmet_req_alloc_p2pmem_sgls(p2p_dev, req)) 1051 1041 return 0; 1052 1042 1053 1043 req->sg = 
sgl_alloc(nvmet_data_transfer_len(req), GFP_KERNEL, ··· 1070 1072 pci_p2pmem_free_sgl(req->p2p_dev, req->sg); 1071 1073 if (req->metadata_sg) 1072 1074 pci_p2pmem_free_sgl(req->p2p_dev, req->metadata_sg); 1075 + req->p2p_dev = NULL; 1073 1076 } else { 1074 1077 sgl_free(req->sg); 1075 1078 if (req->metadata_sg)
+8 -3
drivers/nvme/target/loop.c
··· 263 263 264 264 static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl) 265 265 { 266 - clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags); 266 + if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags)) 267 + return; 267 268 nvmet_sq_destroy(&ctrl->queues[0].nvme_sq); 268 269 blk_cleanup_queue(ctrl->ctrl.admin_q); 269 270 blk_cleanup_queue(ctrl->ctrl.fabrics_q); ··· 300 299 clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[i].flags); 301 300 nvmet_sq_destroy(&ctrl->queues[i].nvme_sq); 302 301 } 302 + ctrl->ctrl.queue_count = 1; 303 303 } 304 304 305 305 static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl) ··· 407 405 return 0; 408 406 409 407 out_cleanup_queue: 408 + clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags); 410 409 blk_cleanup_queue(ctrl->ctrl.admin_q); 411 410 out_cleanup_fabrics_q: 412 411 blk_cleanup_queue(ctrl->ctrl.fabrics_q); ··· 465 462 nvme_loop_shutdown_ctrl(ctrl); 466 463 467 464 if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { 468 - /* state change failure should never happen */ 469 - WARN_ON_ONCE(1); 465 + if (ctrl->ctrl.state != NVME_CTRL_DELETING && 466 + ctrl->ctrl.state != NVME_CTRL_DELETING_NOIO) 467 + /* state change failure for non-deleted ctrl? */ 468 + WARN_ON_ONCE(1); 470 469 return; 471 470 } 472 471
+7
drivers/pci/of.c
··· 103 103 #endif 104 104 } 105 105 106 + bool pci_host_of_has_msi_map(struct device *dev) 107 + { 108 + if (dev && dev->of_node) 109 + return of_get_property(dev->of_node, "msi-map", NULL); 110 + return false; 111 + } 112 + 106 113 static inline int __of_pci_pci_compare(struct device_node *node, 107 114 unsigned int data) 108 115 {
+2 -1
drivers/pci/probe.c
··· 925 925 device_enable_async_suspend(bus->bridge); 926 926 pci_set_bus_of_node(bus); 927 927 pci_set_bus_msi_domain(bus); 928 - if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev)) 928 + if (bridge->msi_domain && !dev_get_msi_domain(&bus->dev) && 929 + !pci_host_of_has_msi_map(parent)) 929 930 bus->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 930 931 931 932 if (!parent)
+2 -2
drivers/phy/broadcom/phy-brcm-usb-init.h
··· 78 78 * Other architectures (e.g., ARM) either do not support big endian, or 79 79 * else leave I/O in little endian mode. 80 80 */ 81 - if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN)) 81 + if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) 82 82 return __raw_readl(addr); 83 83 else 84 84 return readl_relaxed(addr); ··· 87 87 static inline void brcm_usb_writel(u32 val, void __iomem *addr) 88 88 { 89 89 /* See brcmnand_readl() comments */ 90 - if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(__BIG_ENDIAN)) 90 + if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) 91 91 __raw_writel(val, addr); 92 92 else 93 93 writel_relaxed(val, addr);
+1
drivers/phy/cadence/phy-cadence-sierra.c
··· 940 940 sp->nsubnodes = node; 941 941 942 942 if (sp->num_lanes > SIERRA_MAX_LANES) { 943 + ret = -EINVAL; 943 944 dev_err(dev, "Invalid lane configuration\n"); 944 945 goto put_child2; 945 946 }
+2
drivers/phy/mediatek/phy-mtk-tphy.c
··· 949 949 break; 950 950 default: 951 951 dev_err(tphy->dev, "incompatible PHY type\n"); 952 + clk_disable_unprepare(instance->ref_clk); 953 + clk_disable_unprepare(instance->da_ref_clk); 952 954 return -EINVAL; 953 955 } 954 956
+4
drivers/phy/microchip/sparx5_serdes.c
··· 2470 2470 priv->coreclock = clock; 2471 2471 2472 2472 iores = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2473 + if (!iores) { 2474 + dev_err(priv->dev, "Invalid resource\n"); 2475 + return -EINVAL; 2476 + } 2473 2477 iomem = devm_ioremap(priv->dev, iores->start, resource_size(iores)); 2474 2478 if (IS_ERR(iomem)) { 2475 2479 dev_err(priv->dev, "Unable to get serdes registers: %s\n",
+1 -1
drivers/phy/ralink/phy-mt7621-pci.c
··· 341 341 .probe = mt7621_pci_phy_probe, 342 342 .driver = { 343 343 .name = "mt7621-pci-phy", 344 - .of_match_table = of_match_ptr(mt7621_pci_phy_ids), 344 + .of_match_table = mt7621_pci_phy_ids, 345 345 }, 346 346 }; 347 347
+1
drivers/phy/ti/phy-j721e-wiz.c
··· 1212 1212 1213 1213 if (wiz->typec_dir_delay < WIZ_TYPEC_DIR_DEBOUNCE_MIN || 1214 1214 wiz->typec_dir_delay > WIZ_TYPEC_DIR_DEBOUNCE_MAX) { 1215 + ret = -EINVAL; 1215 1216 dev_err(dev, "Invalid typec-dir-debounce property\n"); 1216 1217 goto err_addr_to_resource; 1217 1218 }
+2 -2
drivers/pinctrl/aspeed/pinctrl-aspeed-g5.c
··· 2702 2702 } 2703 2703 2704 2704 /** 2705 - * Configure a pin's signal by applying an expression's descriptor state for 2706 - * all descriptors in the expression. 2705 + * aspeed_g5_sig_expr_set() - Configure a pin's signal by applying an 2706 + * expression's descriptor state for all descriptors in the expression. 2707 2707 * 2708 2708 * @ctx: The pinmux context 2709 2709 * @expr: The expression associated with the function whose signal is to be
+2 -2
drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
··· 2611 2611 }; 2612 2612 2613 2613 /** 2614 - * Configure a pin's signal by applying an expression's descriptor state for 2615 - * all descriptors in the expression. 2614 + * aspeed_g6_sig_expr_set() - Configure a pin's signal by applying an 2615 + * expression's descriptor state for all descriptors in the expression. 2616 2616 * 2617 2617 * @ctx: The pinmux context 2618 2618 * @expr: The expression associated with the function whose signal is to be
+2 -1
drivers/pinctrl/aspeed/pinctrl-aspeed.c
··· 108 108 } 109 109 110 110 /** 111 - * Disable a signal on a pin by disabling all provided signal expressions. 111 + * aspeed_disable_sig() - Disable a signal on a pin by disabling all provided 112 + * signal expressions. 112 113 * 113 114 * @ctx: The pinmux context 114 115 * @exprs: The list of signal expressions (from a priority level on a pin)
+2 -1
drivers/pinctrl/aspeed/pinmux-aspeed.c
··· 21 21 } 22 22 23 23 /** 24 - * Query the enabled or disabled state of a signal descriptor 24 + * aspeed_sig_desc_eval() - Query the enabled or disabled state of a signal 25 + * descriptor. 25 26 * 26 27 * @desc: The signal descriptor of interest 27 28 * @enabled: True to query the enabled state, false to query disabled state
+1 -1
drivers/pinctrl/qcom/Kconfig
··· 223 223 config PINCTRL_SC8180X 224 224 tristate "Qualcomm Technologies Inc SC8180x pin controller driver" 225 225 depends on GPIOLIB && (OF || ACPI) 226 - select PINCTRL_MSM 226 + depends on PINCTRL_MSM 227 227 help 228 228 This is the pinctrl, pinmux, pinconf and gpiolib driver for the 229 229 Qualcomm Technologies Inc TLMM block found on the Qualcomm
+9 -9
drivers/pinctrl/qcom/pinctrl-sdx55.c
··· 410 410 "gpio29", "gpio30", "gpio31", "gpio32", "gpio33", "gpio34", "gpio35", 411 411 "gpio36", "gpio37", "gpio38", "gpio39", "gpio40", "gpio41", "gpio42", 412 412 "gpio43", "gpio44", "gpio45", "gpio46", "gpio47", "gpio48", "gpio49", 413 - "gpio50", "gpio51", "gpio52", "gpio52", "gpio53", "gpio53", "gpio54", 414 - "gpio55", "gpio56", "gpio57", "gpio58", "gpio59", "gpio60", "gpio61", 415 - "gpio62", "gpio63", "gpio64", "gpio65", "gpio66", "gpio67", "gpio68", 416 - "gpio69", "gpio70", "gpio71", "gpio72", "gpio73", "gpio74", "gpio75", 417 - "gpio76", "gpio77", "gpio78", "gpio79", "gpio80", "gpio81", "gpio82", 418 - "gpio83", "gpio84", "gpio85", "gpio86", "gpio87", "gpio88", "gpio89", 419 - "gpio90", "gpio91", "gpio92", "gpio93", "gpio94", "gpio95", "gpio96", 420 - "gpio97", "gpio98", "gpio99", "gpio100", "gpio101", "gpio102", 421 - "gpio103", "gpio104", "gpio105", "gpio106", "gpio107", 413 + "gpio50", "gpio51", "gpio52", "gpio53", "gpio54", "gpio55", "gpio56", 414 + "gpio57", "gpio58", "gpio59", "gpio60", "gpio61", "gpio62", "gpio63", 415 + "gpio64", "gpio65", "gpio66", "gpio67", "gpio68", "gpio69", "gpio70", 416 + "gpio71", "gpio72", "gpio73", "gpio74", "gpio75", "gpio76", "gpio77", 417 + "gpio78", "gpio79", "gpio80", "gpio81", "gpio82", "gpio83", "gpio84", 418 + "gpio85", "gpio86", "gpio87", "gpio88", "gpio89", "gpio90", "gpio91", 419 + "gpio92", "gpio93", "gpio94", "gpio95", "gpio96", "gpio97", "gpio98", 420 + "gpio99", "gpio100", "gpio101", "gpio102", "gpio103", "gpio104", 421 + "gpio105", "gpio106", "gpio107", 422 422 }; 423 423 424 424 static const char * const qdss_stm_groups[] = {
+1 -1
drivers/pinctrl/ralink/pinctrl-rt2880.c
··· 127 127 if (p->groups[group].enabled) { 128 128 dev_err(p->dev, "%s is already enabled\n", 129 129 p->groups[group].name); 130 - return -EBUSY; 130 + return 0; 131 131 } 132 132 133 133 p->groups[group].enabled = 1;
+2 -2
drivers/platform/mellanox/mlxreg-hotplug.c
··· 683 683 684 684 err = devm_request_irq(&pdev->dev, priv->irq, 685 685 mlxreg_hotplug_irq_handler, IRQF_TRIGGER_FALLING 686 - | IRQF_SHARED | IRQF_NO_AUTOEN, 687 - "mlxreg-hotplug", priv); 686 + | IRQF_SHARED, "mlxreg-hotplug", priv); 688 687 if (err) { 689 688 dev_err(&pdev->dev, "Failed to request irq: %d\n", err); 690 689 return err; 691 690 } 692 691 692 + disable_irq(priv->irq); 693 693 spin_lock_init(&priv->lock); 694 694 INIT_DELAYED_WORK(&priv->dwork_irq, mlxreg_hotplug_work_handler); 695 695 dev_set_drvdata(&pdev->dev, priv);
+1 -1
drivers/platform/surface/aggregator/controller.c
··· 1907 1907 { 1908 1908 int status; 1909 1909 1910 - status = __ssam_ssh_event_request(ctrl, reg, reg.cid_enable, id, flags); 1910 + status = __ssam_ssh_event_request(ctrl, reg, reg.cid_disable, id, flags); 1911 1911 1912 1912 if (status < 0 && status != -EINVAL) { 1913 1913 ssam_err(ctrl,
+5 -2
drivers/platform/surface/surface_aggregator_registry.c
··· 156 156 NULL, 157 157 }; 158 158 159 - /* Devices for Surface Laptop 3. */ 159 + /* Devices for Surface Laptop 3 and 4. */ 160 160 static const struct software_node *ssam_node_group_sl3[] = { 161 161 &ssam_node_root, 162 162 &ssam_node_bat_ac, ··· 521 521 /* Surface Laptop 3 (13", Intel) */ 522 522 { "MSHW0114", (unsigned long)ssam_node_group_sl3 }, 523 523 524 - /* Surface Laptop 3 (15", AMD) */ 524 + /* Surface Laptop 3 (15", AMD) and 4 (15", AMD) */ 525 525 { "MSHW0110", (unsigned long)ssam_node_group_sl3 }, 526 + 527 + /* Surface Laptop 4 (13", Intel) */ 528 + { "MSHW0250", (unsigned long)ssam_node_group_sl3 }, 526 529 527 530 /* Surface Laptop Go 1 */ 528 531 { "MSHW0118", (unsigned long)ssam_node_group_slg1 },
+1
drivers/platform/surface/surface_dtx.c
··· 427 427 */ 428 428 if (test_bit(SDTX_DEVICE_SHUTDOWN_BIT, &ddev->flags)) { 429 429 up_write(&ddev->client_lock); 430 + mutex_destroy(&client->read_lock); 430 431 sdtx_device_put(client->ddev); 431 432 kfree(client); 432 433 return -ENODEV;
+1
drivers/platform/x86/thinkpad_acpi.c
··· 8853 8853 TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (2nd gen) */ 8854 8854 TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (3nd gen) */ 8855 8855 TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL), /* P15 (1st gen) / P15v (1st gen) */ 8856 + TPACPI_Q_LNV3('N', '3', '2', TPACPI_FAN_2CTL), /* X1 Carbon (9th gen) */ 8856 8857 }; 8857 8858 8858 8859 static int __init fan_init(struct ibm_init_struct *iibm)
+1 -1
drivers/regulator/Kconfig
··· 1031 1031 current source, LDO and Buck. 1032 1032 1033 1033 config REGULATOR_RTMV20 1034 - tristate "RTMV20 Laser Diode Regulator" 1034 + tristate "Richtek RTMV20 Laser Diode Regulator" 1035 1035 depends on I2C 1036 1036 select REGMAP_I2C 1037 1037 help
+10 -9
drivers/regulator/atc260x-regulator.c
··· 28 28 29 29 static const struct linear_range atc2609a_ldo_voltage_ranges0[] = { 30 30 REGULATOR_LINEAR_RANGE(700000, 0, 15, 100000), 31 - REGULATOR_LINEAR_RANGE(2100000, 16, 28, 100000), 31 + REGULATOR_LINEAR_RANGE(2100000, 0, 12, 100000), 32 32 }; 33 33 34 34 static const struct linear_range atc2609a_ldo_voltage_ranges1[] = { 35 35 REGULATOR_LINEAR_RANGE(850000, 0, 15, 100000), 36 - REGULATOR_LINEAR_RANGE(2100000, 16, 27, 100000), 36 + REGULATOR_LINEAR_RANGE(2100000, 0, 11, 100000), 37 37 }; 38 38 39 39 static const unsigned int atc260x_ldo_voltage_range_sel[] = { 40 - 0x0, 0x1, 40 + 0x0, 0x20, 41 41 }; 42 42 43 43 static int atc260x_dcdc_set_voltage_time_sel(struct regulator_dev *rdev, ··· 411 411 .owner = THIS_MODULE, \ 412 412 } 413 413 414 - #define atc2609a_reg_desc_ldo_range_pick(num, n_range) { \ 414 + #define atc2609a_reg_desc_ldo_range_pick(num, n_range, n_volt) { \ 415 415 .name = "LDO"#num, \ 416 416 .supply_name = "ldo"#num, \ 417 417 .of_match = of_match_ptr("ldo"#num), \ ··· 421 421 .type = REGULATOR_VOLTAGE, \ 422 422 .linear_ranges = atc2609a_ldo_voltage_ranges##n_range, \ 423 423 .n_linear_ranges = ARRAY_SIZE(atc2609a_ldo_voltage_ranges##n_range), \ 424 + .n_voltages = n_volt, \ 424 425 .vsel_reg = ATC2609A_PMU_LDO##num##_CTL0, \ 425 426 .vsel_mask = GENMASK(4, 1), \ 426 427 .vsel_range_reg = ATC2609A_PMU_LDO##num##_CTL0, \ ··· 459 458 atc2609a_reg_desc_ldo_bypass(0), 460 459 atc2609a_reg_desc_ldo_bypass(1), 461 460 atc2609a_reg_desc_ldo_bypass(2), 462 - atc2609a_reg_desc_ldo_range_pick(3, 0), 463 - atc2609a_reg_desc_ldo_range_pick(4, 0), 461 + atc2609a_reg_desc_ldo_range_pick(3, 0, 29), 462 + atc2609a_reg_desc_ldo_range_pick(4, 0, 29), 464 463 atc2609a_reg_desc_ldo(5), 465 - atc2609a_reg_desc_ldo_range_pick(6, 1), 466 - atc2609a_reg_desc_ldo_range_pick(7, 0), 467 - atc2609a_reg_desc_ldo_range_pick(8, 0), 464 + atc2609a_reg_desc_ldo_range_pick(6, 1, 28), 465 + atc2609a_reg_desc_ldo_range_pick(7, 0, 29), 466 + atc2609a_reg_desc_ldo_range_pick(8, 0, 29), 468 467 atc2609a_reg_desc_ldo_fixed(9), 469 468 }; 470 469
+1 -1
drivers/regulator/bd718x7-regulator.c
··· 334 334 NULL); 335 335 336 336 BD718XX_OPS(bd71837_buck_regulator_nolinear_ops, regulator_list_voltage_table, 337 - regulator_map_voltage_ascend, bd718xx_set_voltage_sel_restricted, 337 + regulator_map_voltage_ascend, bd71837_set_voltage_sel_restricted, 338 338 regulator_get_voltage_sel_regmap, regulator_set_voltage_time_sel, 339 339 NULL); 340 340 /*
+6
drivers/regulator/core.c
··· 1425 1425 * and we have control then make sure it is enabled. 1426 1426 */ 1427 1427 if (rdev->constraints->always_on || rdev->constraints->boot_on) { 1428 + /* If we want to enable this regulator, make sure that we know 1429 + * the supplying regulator. 1430 + */ 1431 + if (rdev->supply_name && !rdev->supply) 1432 + return -EPROBE_DEFER; 1433 + 1428 1434 if (rdev->supply) { 1429 1435 ret = regulator_enable(rdev->supply); 1430 1436 if (ret < 0) {
+2 -1
drivers/regulator/cros-ec-regulator.c
··· 225 225 226 226 drvdata->dev = devm_regulator_register(dev, &drvdata->desc, &cfg); 227 227 if (IS_ERR(drvdata->dev)) { 228 + ret = PTR_ERR(drvdata->dev); 228 229 dev_err(&pdev->dev, "Failed to register regulator: %d\n", ret); 229 - return PTR_ERR(drvdata->dev); 230 + return ret; 230 231 } 231 232 232 233 platform_set_drvdata(pdev, drvdata);
+7 -3
drivers/regulator/da9121-regulator.c
··· 280 280 case DA9121_BUCK_MODE_FORCE_PFM: 281 281 return REGULATOR_MODE_STANDBY; 282 282 default: 283 - return -EINVAL; 283 + return REGULATOR_MODE_INVALID; 284 284 } 285 285 } 286 286 ··· 317 317 { 318 318 struct da9121 *chip = rdev_get_drvdata(rdev); 319 319 int id = rdev_get_id(rdev); 320 - unsigned int val; 320 + unsigned int val, mode; 321 321 int ret = 0; 322 322 323 323 ret = regmap_read(chip->regmap, da9121_mode_field[id].reg, &val); ··· 326 326 return -EINVAL; 327 327 } 328 328 329 - return da9121_map_mode(val & da9121_mode_field[id].msk); 329 + mode = da9121_map_mode(val & da9121_mode_field[id].msk); 330 + if (mode == REGULATOR_MODE_INVALID) 331 + return -EINVAL; 332 + 333 + return mode; 330 334 } 331 335 332 336 static const struct regulator_ops da9121_buck_ops = {
+1 -2
drivers/regulator/fan53555.c
··· 55 55 56 56 #define FAN53555_NVOLTAGES 64 /* Numbers of voltages */ 57 57 #define FAN53526_NVOLTAGES 128 58 - #define TCS4525_NVOLTAGES 127 /* Numbers of voltages */ 59 58 60 59 #define TCS_VSEL_NSEL_MASK 0x7f 61 60 #define TCS_VSEL0_MODE (1 << 7) ··· 375 376 /* Init voltage range and step */ 376 377 di->vsel_min = 600000; 377 378 di->vsel_step = 6250; 378 - di->vsel_count = TCS4525_NVOLTAGES; 379 + di->vsel_count = FAN53526_NVOLTAGES; 379 380 380 381 return 0; 381 382 }
+3
drivers/regulator/fan53880.c
··· 51 51 REGULATOR_LINEAR_RANGE(800000, 0xf, 0x73, 25000), \ 52 52 }, \ 53 53 .n_linear_ranges = 2, \ 54 + .n_voltages = 0x74, \ 54 55 .vsel_reg = FAN53880_LDO ## _num ## VOUT, \ 55 56 .vsel_mask = 0x7f, \ 56 57 .enable_reg = FAN53880_ENABLE, \ ··· 77 76 REGULATOR_LINEAR_RANGE(600000, 0x1f, 0xf7, 12500), 78 77 }, 79 78 .n_linear_ranges = 2, 79 + .n_voltages = 0xf8, 80 80 .vsel_reg = FAN53880_BUCKVOUT, 81 81 .vsel_mask = 0x7f, 82 82 .enable_reg = FAN53880_ENABLE, ··· 97 95 REGULATOR_LINEAR_RANGE(3000000, 0x4, 0x70, 25000), 98 96 }, 99 97 .n_linear_ranges = 2, 98 + .n_voltages = 0x71, 100 99 .vsel_reg = FAN53880_BOOSTVOUT, 101 100 .vsel_mask = 0x7f, 102 101 .enable_reg = FAN53880_ENABLE_BOOST,
+6 -1
drivers/regulator/fixed.c
··· 88 88 { 89 89 struct fixed_voltage_data *priv = rdev_get_drvdata(rdev); 90 90 struct device *dev = rdev->dev.parent; 91 + int ret; 92 + 93 + ret = dev_pm_genpd_set_performance_state(dev, 0); 94 + if (ret) 95 + return ret; 91 96 92 97 priv->enable_counter--; 93 98 94 - return dev_pm_genpd_set_performance_state(dev, 0); 99 + return 0; 95 100 } 96 101 97 102 static int reg_is_enabled(struct regulator_dev *rdev)
+1 -1
drivers/regulator/helpers.c
··· 948 948 int ret; 949 949 unsigned int sel; 950 950 951 - if (!rdev->desc->n_ramp_values) 951 + if (WARN_ON(!rdev->desc->n_ramp_values || !rdev->desc->ramp_delay_table)) 952 952 return -EINVAL; 953 953 954 954 ret = find_closest_bigger(ramp_delay, rdev->desc->ramp_delay_table,
+2 -2
drivers/regulator/hi6421v600-regulator.c
··· 3 3 // Device driver for regulators in Hisi IC 4 4 // 5 5 // Copyright (c) 2013 Linaro Ltd. 6 - // Copyright (c) 2011 Hisilicon. 6 + // Copyright (c) 2011 HiSilicon Ltd. 7 7 // Copyright (c) 2020-2021 Huawei Technologies Co., Ltd 8 8 // 9 9 // Guodong Xu <guodong.xu@linaro.org> ··· 83 83 .owner = THIS_MODULE, \ 84 84 .volt_table = vtable, \ 85 85 .n_voltages = ARRAY_SIZE(vtable), \ 86 - .vsel_mask = (1 << (ARRAY_SIZE(vtable) - 1)) - 1, \ 86 + .vsel_mask = ARRAY_SIZE(vtable) - 1, \ 87 87 .vsel_reg = vreg, \ 88 88 .enable_reg = ereg, \ 89 89 .enable_mask = emask, \
+1 -1
drivers/regulator/hi655x-regulator.c
··· 2 2 // 3 3 // Device driver for regulators in Hi655x IC 4 4 // 5 - // Copyright (c) 2016 Hisilicon. 5 + // Copyright (c) 2016 HiSilicon Ltd. 6 6 // 7 7 // Authors: 8 8 // Chen Feng <puck.chen@hisilicon.com>
+11 -6
drivers/regulator/max77620-regulator.c
··· 814 814 config.dev = dev; 815 815 config.driver_data = pmic; 816 816 817 + /* 818 + * Set of_node_reuse flag to prevent driver core from attempting to 819 + * claim any pinmux resources already claimed by the parent device. 820 + * Otherwise PMIC driver will fail to re-probe. 821 + */ 822 + device_set_of_node_from_dev(&pdev->dev, pdev->dev.parent); 823 + 817 824 for (id = 0; id < MAX77620_NUM_REGS; id++) { 818 825 struct regulator_dev *rdev; 819 826 struct regulator_desc *rdesc; ··· 846 839 return ret; 847 840 848 841 rdev = devm_regulator_register(dev, rdesc, &config); 849 - if (IS_ERR(rdev)) { 850 - ret = PTR_ERR(rdev); 851 - dev_err(dev, "Regulator registration %s failed: %d\n", 852 - rdesc->name, ret); 853 - return ret; 854 - } 842 + if (IS_ERR(rdev)) 843 + return dev_err_probe(dev, PTR_ERR(rdev), 844 + "Regulator registration %s failed\n", 845 + rdesc->name); 855 846 } 856 847 857 848 return 0;
+1 -1
drivers/regulator/mt6315-regulator.c
··· 59 59 REGULATOR_LINEAR_RANGE(0, 0, 0xbf, 6250), 60 60 }; 61 61 62 - static unsigned int mt6315_map_mode(u32 mode) 62 + static unsigned int mt6315_map_mode(unsigned int mode) 63 63 { 64 64 switch (mode) { 65 65 case MT6315_BUCK_MODE_AUTO:
+2 -2
drivers/regulator/rt4801-regulator.c
··· 66 66 struct gpio_descs *gpios = priv->enable_gpios; 67 67 int id = rdev_get_id(rdev), ret; 68 68 69 - if (gpios->ndescs <= id) { 69 + if (!gpios || gpios->ndescs <= id) { 70 70 dev_warn(&rdev->dev, "no dedicated gpio can control\n"); 71 71 goto bypass_gpio; 72 72 } ··· 88 88 struct gpio_descs *gpios = priv->enable_gpios; 89 89 int id = rdev_get_id(rdev); 90 90 91 - if (gpios->ndescs <= id) { 91 + if (!gpios || gpios->ndescs <= id) { 92 92 dev_warn(&rdev->dev, "no dedicated gpio can control\n"); 93 93 goto bypass_gpio; 94 94 }
+42 -2
drivers/regulator/rtmv20-regulator.c
··· 27 27 #define RTMV20_REG_LDIRQ 0x30 28 28 #define RTMV20_REG_LDSTAT 0x40 29 29 #define RTMV20_REG_LDMASK 0x50 30 + #define RTMV20_MAX_REGS (RTMV20_REG_LDMASK + 1) 30 31 31 32 #define RTMV20_VID_MASK GENMASK(7, 4) 32 33 #define RICHTEK_VID 0x80 ··· 104 103 return 0; 105 104 } 106 105 106 + static int rtmv20_lsw_set_current_limit(struct regulator_dev *rdev, int min_uA, 107 + int max_uA) 108 + { 109 + int sel; 110 + 111 + if (min_uA > RTMV20_LSW_MAXUA || max_uA < RTMV20_LSW_MINUA) 112 + return -EINVAL; 113 + 114 + if (max_uA > RTMV20_LSW_MAXUA) 115 + max_uA = RTMV20_LSW_MAXUA; 116 + 117 + sel = (max_uA - RTMV20_LSW_MINUA) / RTMV20_LSW_STEPUA; 118 + 119 + /* Ensure the selected setting is still in range */ 120 + if ((sel * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA) < min_uA) 121 + return -EINVAL; 122 + 123 + sel <<= ffs(rdev->desc->csel_mask) - 1; 124 + 125 + return regmap_update_bits(rdev->regmap, rdev->desc->csel_reg, 126 + rdev->desc->csel_mask, sel); 127 + } 128 + 129 + static int rtmv20_lsw_get_current_limit(struct regulator_dev *rdev) 130 + { 131 + unsigned int val; 132 + int ret; 133 + 134 + ret = regmap_read(rdev->regmap, rdev->desc->csel_reg, &val); 135 + if (ret) 136 + return ret; 137 + 138 + val &= rdev->desc->csel_mask; 139 + val >>= ffs(rdev->desc->csel_mask) - 1; 140 + 141 + return val * RTMV20_LSW_STEPUA + RTMV20_LSW_MINUA; 142 + } 143 + 107 144 static const struct regulator_ops rtmv20_regulator_ops = { 108 - .set_current_limit = regulator_set_current_limit_regmap, 109 - .get_current_limit = regulator_get_current_limit_regmap, 145 + .set_current_limit = rtmv20_lsw_set_current_limit, 146 + .get_current_limit = rtmv20_lsw_get_current_limit, 110 147 .enable = rtmv20_lsw_enable, 111 148 .disable = rtmv20_lsw_disable, 112 149 .is_enabled = regulator_is_enabled_regmap, ··· 314 275 .val_bits = 8, 315 276 .cache_type = REGCACHE_RBTREE, 316 277 .max_register = RTMV20_REG_LDMASK, 278 + .num_reg_defaults_raw = RTMV20_MAX_REGS, 317 279 318 280 .writeable_reg = rtmv20_is_accessible_reg, 319 281 .readable_reg = rtmv20_is_accessible_reg,
+1 -1
drivers/regulator/scmi-regulator.c
··· 173 173 sreg->desc.uV_step = 174 174 vinfo->levels_uv[SCMI_VOLTAGE_SEGMENT_STEP]; 175 175 sreg->desc.linear_min_sel = 0; 176 - sreg->desc.n_voltages = delta_uV / sreg->desc.uV_step; 176 + sreg->desc.n_voltages = (delta_uV / sreg->desc.uV_step) + 1; 177 177 sreg->desc.ops = &scmi_reg_linear_ops; 178 178 } 179 179
+26 -21
drivers/scsi/hosts.c
··· 254 254 255 255 device_enable_async_suspend(&shost->shost_dev); 256 256 257 + get_device(&shost->shost_gendev); 257 258 error = device_add(&shost->shost_dev); 258 259 if (error) 259 260 goto out_del_gendev; 260 - 261 - get_device(&shost->shost_gendev); 262 261 263 262 if (shost->transportt->host_size) { 264 263 shost->shost_data = kzalloc(shost->transportt->host_size, ··· 277 278 278 279 if (!shost->work_q) { 279 280 error = -EINVAL; 280 - goto out_free_shost_data; 281 + goto out_del_dev; 281 282 } 282 283 } 283 284 284 285 error = scsi_sysfs_add_host(shost); 285 286 if (error) 286 - goto out_destroy_host; 287 + goto out_del_dev; 287 288 288 289 scsi_proc_host_add(shost); 289 290 scsi_autopm_put_host(shost); 290 291 return error; 291 292 292 - out_destroy_host: 293 - if (shost->work_q) 294 - destroy_workqueue(shost->work_q); 295 - out_free_shost_data: 296 - kfree(shost->shost_data); 293 + /* 294 + * Any host allocation in this function will be freed in 295 + * scsi_host_dev_release(). 296 + */ 297 297 out_del_dev: 298 298 device_del(&shost->shost_dev); 299 299 out_del_gendev: 300 + /* 301 + * Host state is SHOST_RUNNING so we have to explicitly release 302 + * ->shost_dev. 303 + */ 304 + put_device(&shost->shost_dev); 300 305 device_del(&shost->shost_gendev); 301 306 out_disable_runtime_pm: 302 307 device_disable_async_suspend(&shost->shost_gendev); 303 308 pm_runtime_disable(&shost->shost_gendev); 304 309 pm_runtime_set_suspended(&shost->shost_gendev); 305 310 pm_runtime_put_noidle(&shost->shost_gendev); 306 - scsi_mq_destroy_tags(shost); 307 311 fail: 308 312 return error; 309 313 } ··· 347 345 348 346 ida_simple_remove(&host_index_ida, shost->host_no); 349 347 350 - if (parent) 348 + if (shost->shost_state != SHOST_CREATED) 351 349 put_device(parent); 352 350 kfree(shost); 353 351 } ··· 390 388 mutex_init(&shost->scan_mutex); 391 389 392 390 index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL); 393 - if (index < 0) 394 - goto fail_kfree; 391 + if (index < 0) { 392 + kfree(shost); 393 + return NULL; 394 + } 395 395 shost->host_no = index; 396 396 397 397 shost->dma_channel = 0xff; ··· 485 481 shost_printk(KERN_WARNING, shost, 486 482 "error handler thread failed to spawn, error = %ld\n", 487 483 PTR_ERR(shost->ehandler)); 488 - goto fail_index_remove; 484 + goto fail; 489 485 } 490 486 491 487 shost->tmf_work_q = alloc_workqueue("scsi_tmf_%d", ··· 494 490 if (!shost->tmf_work_q) { 495 491 shost_printk(KERN_WARNING, shost, 496 492 "failed to create tmf workq\n"); 497 - goto fail_kthread; 493 + goto fail; 498 494 } 499 495 scsi_proc_hostdir_add(shost->hostt); 500 496 return shost; 497 + fail: 498 + /* 499 + * Host state is still SHOST_CREATED and that is enough to release 500 + * ->shost_gendev. scsi_host_dev_release() will free 501 + * dev_name(&shost->shost_dev). 502 + */ 503 + put_device(&shost->shost_gendev); 501 504 502 - fail_kthread: 503 - kthread_stop(shost->ehandler); 504 - fail_index_remove: 505 - ida_simple_remove(&host_index_ida, shost->host_no); 506 - fail_kfree: 507 - kfree(shost); 508 505 return NULL; 509 506 } 510 507 EXPORT_SYMBOL(scsi_host_alloc);
+1 -3
drivers/scsi/lpfc/lpfc_sli.c
··· 20589 20589 abtswqe = &abtsiocb->wqe; 20590 20590 memset(abtswqe, 0, sizeof(*abtswqe)); 20591 20591 20592 - if (lpfc_is_link_up(phba)) 20592 + if (!lpfc_is_link_up(phba)) 20593 20593 bf_set(abort_cmd_ia, &abtswqe->abort_cmd, 1); 20594 - else 20595 - bf_set(abort_cmd_ia, &abtswqe->abort_cmd, 0); 20596 20594 bf_set(abort_cmd_criteria, &abtswqe->abort_cmd, T_XRI_TAG); 20597 20595 abtswqe->abort_cmd.rsrvd5 = 0; 20598 20596 abtswqe->abort_cmd.wqe_com.abort_tag = xritag;
+9 -11
drivers/scsi/qedf/qedf_main.c
··· 1827 1827 fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf)); 1828 1828 QEDF_WARN(&(base_qedf->dbg_ctx), "Failed to create vport, " 1829 1829 "WWPN (0x%s) already exists.\n", buf); 1830 - goto err1; 1830 + return rc; 1831 1831 } 1832 1832 1833 1833 if (atomic_read(&base_qedf->link_state) != QEDF_LINK_UP) { 1834 1834 QEDF_WARN(&(base_qedf->dbg_ctx), "Cannot create vport " 1835 1835 "because link is not up.\n"); 1836 - rc = -EIO; 1837 - goto err1; 1836 + return -EIO; 1838 1837 } 1839 1838 1840 1839 vn_port = libfc_vport_create(vport, sizeof(struct qedf_ctx)); 1841 1840 if (!vn_port) { 1842 1841 QEDF_WARN(&(base_qedf->dbg_ctx), "Could not create lport " 1843 1842 "for vport.\n"); 1844 - rc = -ENOMEM; 1845 - goto err1; 1843 + return -ENOMEM; 1846 1844 } 1847 1845 1848 1846 fcoe_wwn_to_str(vport->port_name, buf, sizeof(buf)); ··· 1864 1866 if (rc) { 1865 1867 QEDF_ERR(&(base_qedf->dbg_ctx), "Could not allocate memory " 1866 1868 "for lport stats.\n"); 1867 - goto err2; 1869 + goto err; 1868 1870 } 1869 1871 1870 1872 fc_set_wwnn(vn_port, vport->node_name); ··· 1882 1884 if (rc) { 1883 1885 QEDF_WARN(&base_qedf->dbg_ctx, 1884 1886 "Error adding Scsi_Host rc=0x%x.\n", rc); 1885 - goto err2; 1887 + goto err; 1886 1888 } 1887 1889 1888 1890 /* Set default dev_loss_tmo based on module parameter */ ··· 1923 1925 vport_qedf->dbg_ctx.host_no = vn_port->host->host_no; 1924 1926 vport_qedf->dbg_ctx.pdev = base_qedf->pdev; 1925 1927 1926 - err2: 1928 + return 0; 1929 + 1930 + err: 1927 1931 scsi_host_put(vn_port->host); 1928 - err1: 1929 1932 return rc; 1930 1933 } 1931 1934 ··· 1967 1968 fc_lport_free_stats(vn_port); 1968 1969 1969 1970 /* Release Scsi_Host */ 1970 - if (vn_port->host) 1971 - scsi_host_put(vn_port->host); 1971 + scsi_host_put(vn_port->host); 1972 1972 1973 1973 out: 1974 1974 return 0;
+1
drivers/scsi/scsi_devinfo.c
··· 184 184 {"HP", "C3323-300", "4269", BLIST_NOTQ}, 185 185 {"HP", "C5713A", NULL, BLIST_NOREPORTLUN}, 186 186 {"HP", "DISK-SUBSYSTEM", "*", BLIST_REPORTLUN2}, 187 + {"HPE", "OPEN-", "*", BLIST_REPORTLUN2 | BLIST_TRY_VPD_PAGES}, 187 188 {"IBM", "AuSaV1S2", NULL, BLIST_FORCELUN}, 188 189 {"IBM", "ProFibre 4000R", "*", BLIST_SPARSELUN | BLIST_LARGELUN}, 189 190 {"IBM", "2105", NULL, BLIST_RETRY_HWERROR},
+14 -1
drivers/scsi/ufs/ufs-mediatek.c
··· 603 603 604 604 ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_LOCALVERINFO), &ver); 605 605 if (!ret) { 606 - if (ver >= UFS_UNIPRO_VER_1_8) 606 + if (ver >= UFS_UNIPRO_VER_1_8) { 607 607 host->hw_ver.major = 3; 608 + /* 609 + * Fix HCI version for some platforms with 610 + * incorrect version 611 + */ 612 + if (hba->ufs_version < ufshci_version(3, 0)) 613 + hba->ufs_version = ufshci_version(3, 0); 614 + } 608 615 } 616 + } 617 + 618 + static u32 ufs_mtk_get_ufs_hci_version(struct ufs_hba *hba) 619 + { 620 + return hba->ufs_version; 609 621 } 610 622 611 623 /** ··· 1060 1048 static const struct ufs_hba_variant_ops ufs_hba_mtk_vops = { 1061 1049 .name = "mediatek.ufshci", 1062 1050 .init = ufs_mtk_init, 1051 + .get_ufs_hci_version = ufs_mtk_get_ufs_hci_version, 1063 1052 .setup_clocks = ufs_mtk_setup_clocks, 1064 1053 .hce_enable_notify = ufs_mtk_hce_enable_notify, 1065 1054 .link_startup_notify = ufs_mtk_link_startup_notify,
+1 -3
drivers/soc/amlogic/meson-clk-measure.c
··· 626 626 627 627 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 628 628 base = devm_ioremap_resource(&pdev->dev, res); 629 - if (IS_ERR(base)) { 630 - dev_err(&pdev->dev, "io resource mapping failed\n"); 629 + if (IS_ERR(base)) 631 630 return PTR_ERR(base); 632 - } 633 631 634 632 priv->regmap = devm_regmap_init_mmio(&pdev->dev, base, 635 633 &meson_clk_msr_regmap_config);
+8 -2
drivers/spi/spi-bcm2835.c
··· 68 68 #define BCM2835_SPI_FIFO_SIZE 64 69 69 #define BCM2835_SPI_FIFO_SIZE_3_4 48 70 70 #define BCM2835_SPI_DMA_MIN_LENGTH 96 71 - #define BCM2835_SPI_NUM_CS 4 /* raise as necessary */ 71 + #define BCM2835_SPI_NUM_CS 24 /* raise as necessary */ 72 72 #define BCM2835_SPI_MODE_BITS (SPI_CPOL | SPI_CPHA | SPI_CS_HIGH \ 73 73 | SPI_NO_CS | SPI_3WIRE) 74 74 ··· 1195 1195 struct gpio_chip *chip; 1196 1196 u32 cs; 1197 1197 1198 + if (spi->chip_select >= BCM2835_SPI_NUM_CS) { 1199 + dev_err(&spi->dev, "only %d chip-selects supported\n", 1200 + BCM2835_SPI_NUM_CS - 1); 1201 + return -EINVAL; 1202 + } 1203 + 1198 1204 /* 1199 1205 * Precalculate SPI slave's CS register value for ->prepare_message(): 1200 1206 * The driver always uses software-controlled GPIO chip select, hence ··· 1294 1288 ctlr->use_gpio_descriptors = true; 1295 1289 ctlr->mode_bits = BCM2835_SPI_MODE_BITS; 1296 1290 ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 1297 - ctlr->num_chipselect = BCM2835_SPI_NUM_CS; 1291 + ctlr->num_chipselect = 3; 1298 1292 ctlr->setup = bcm2835_spi_setup; 1299 1293 ctlr->transfer_one = bcm2835_spi_transfer_one; 1300 1294 ctlr->handle_err = bcm2835_spi_handle_err;
+14 -4
drivers/spi/spi-bitbang.c
··· 184 184 { 185 185 struct spi_bitbang_cs *cs = spi->controller_state; 186 186 struct spi_bitbang *bitbang; 187 + bool initial_setup = false; 188 + int retval; 187 189 188 190 bitbang = spi_master_get_devdata(spi->master); 189 191 ··· 194 192 if (!cs) 195 193 return -ENOMEM; 196 194 spi->controller_state = cs; 195 + initial_setup = true; 197 196 } 198 197 199 198 /* per-word shift register access, in hardware or bitbanging */ 200 199 cs->txrx_word = bitbang->txrx_word[spi->mode & (SPI_CPOL|SPI_CPHA)]; 201 - if (!cs->txrx_word) 202 - return -EINVAL; 200 + if (!cs->txrx_word) { 201 + retval = -EINVAL; 202 + goto err_free; 203 + } 203 204 204 205 if (bitbang->setup_transfer) { 205 - int retval = bitbang->setup_transfer(spi, NULL); 206 + retval = bitbang->setup_transfer(spi, NULL); 206 207 if (retval < 0) 207 - return retval; 208 + goto err_free; 208 209 } 209 210 210 211 dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs); 211 212 212 213 return 0; 214 + 215 + err_free: 216 + if (initial_setup) 217 + kfree(cs); 218 + return retval; 213 219 } 214 220 EXPORT_SYMBOL_GPL(spi_bitbang_setup); 215 221
+4
drivers/spi/spi-fsl-spi.c
··· 440 440 { 441 441 struct mpc8xxx_spi *mpc8xxx_spi; 442 442 struct fsl_spi_reg __iomem *reg_base; 443 + bool initial_setup = false; 443 444 int retval; 444 445 u32 hw_mode; 445 446 struct spi_mpc8xxx_cs *cs = spi_get_ctldata(spi); ··· 453 452 if (!cs) 454 453 return -ENOMEM; 455 454 spi_set_ctldata(spi, cs); 455 + initial_setup = true; 456 456 } 457 457 mpc8xxx_spi = spi_master_get_devdata(spi->master); 458 458 ··· 477 475 retval = fsl_spi_setup_transfer(spi, NULL); 478 476 if (retval < 0) { 479 477 cs->hw_mode = hw_mode; /* Restore settings */ 478 + if (initial_setup) 479 + kfree(cs); 480 480 return retval; 481 481 } 482 482
+8 -1
drivers/spi/spi-omap-uwire.c
··· 424 424 static int uwire_setup(struct spi_device *spi) 425 425 { 426 426 struct uwire_state *ust = spi->controller_state; 427 + bool initial_setup = false; 428 + int status; 427 429 428 430 if (ust == NULL) { 429 431 ust = kzalloc(sizeof(*ust), GFP_KERNEL); 430 432 if (ust == NULL) 431 433 return -ENOMEM; 432 434 spi->controller_state = ust; 435 + initial_setup = true; 433 436 } 434 437 435 - return uwire_setup_transfer(spi, NULL); 438 + status = uwire_setup_transfer(spi, NULL); 439 + if (status && initial_setup) 440 + kfree(ust); 441 + 442 + return status; 436 443 } 437 444 438 445 static void uwire_cleanup(struct spi_device *spi)
+20 -13
drivers/spi/spi-omap2-mcspi.c
··· 1032 1032 } 1033 1033 } 1034 1034 1035 + static void omap2_mcspi_cleanup(struct spi_device *spi) 1036 + { 1037 + struct omap2_mcspi_cs *cs; 1038 + 1039 + if (spi->controller_state) { 1040 + /* Unlink controller state from context save list */ 1041 + cs = spi->controller_state; 1042 + list_del(&cs->node); 1043 + 1044 + kfree(cs); 1045 + } 1046 + } 1047 + 1035 1048 static int omap2_mcspi_setup(struct spi_device *spi) 1036 1049 { 1050 + bool initial_setup = false; 1037 1051 int ret; 1038 1052 struct omap2_mcspi *mcspi = spi_master_get_devdata(spi->master); 1039 1053 struct omap2_mcspi_regs *ctx = &mcspi->ctx; ··· 1065 1051 spi->controller_state = cs; 1066 1052 /* Link this to context save list */ 1067 1053 list_add_tail(&cs->node, &ctx->cs); 1054 + initial_setup = true; 1068 1055 } 1069 1056 1070 1057 ret = pm_runtime_get_sync(mcspi->dev); 1071 1058 if (ret < 0) { 1072 1059 pm_runtime_put_noidle(mcspi->dev); 1060 + if (initial_setup) 1061 + omap2_mcspi_cleanup(spi); 1073 1062 1074 1063 return ret; 1075 1064 } 1076 1065 1077 1066 ret = omap2_mcspi_setup_transfer(spi, NULL); 1067 + if (ret && initial_setup) 1068 + omap2_mcspi_cleanup(spi); 1069 + 1078 1070 pm_runtime_mark_last_busy(mcspi->dev); 1079 1071 pm_runtime_put_autosuspend(mcspi->dev); 1080 1072 1081 1073 return ret; 1082 - } 1083 - 1084 - static void omap2_mcspi_cleanup(struct spi_device *spi) 1085 - { 1086 - struct omap2_mcspi_cs *cs; 1087 - 1088 - if (spi->controller_state) { 1089 - /* Unlink controller state from context save list */ 1090 - cs = spi->controller_state; 1091 - list_del(&cs->node); 1092 - 1093 - kfree(cs); 1094 - } 1095 1074 } 1096 1075 1097 1076 static irqreturn_t omap2_mcspi_irq_handler(int irq, void *data)
+8 -1
drivers/spi/spi-pxa2xx.c
··· 1254 1254 chip->gpio_cs_inverted = spi->mode & SPI_CS_HIGH; 1255 1255 1256 1256 err = gpiod_direction_output(gpiod, !chip->gpio_cs_inverted); 1257 + if (err) 1258 + gpiod_put(chip->gpiod_cs); 1257 1259 } 1258 1260 1259 1261 return err; ··· 1269 1267 struct driver_data *drv_data = 1270 1268 spi_controller_get_devdata(spi->controller); 1271 1269 uint tx_thres, tx_hi_thres, rx_thres; 1270 + int err; 1272 1271 1273 1272 switch (drv_data->ssp_type) { 1274 1273 case QUARK_X1000_SSP: ··· 1416 1413 if (drv_data->ssp_type == CE4100_SSP) 1417 1414 return 0; 1418 1415 1419 - return setup_cs(spi, chip, chip_info); 1416 + err = setup_cs(spi, chip, chip_info); 1417 + if (err) 1418 + kfree(chip); 1419 + 1420 + return err; 1420 1421 } 1421 1422 1422 1423 static void cleanup(struct spi_device *spi)
+4 -1
drivers/spi/spi-stm32-qspi.c
··· 294 294 int err = 0; 295 295 296 296 if (!op->data.nbytes) 297 - return stm32_qspi_wait_nobusy(qspi); 297 + goto wait_nobusy; 298 298 299 299 if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) 300 300 goto out; ··· 315 315 out: 316 316 /* clear flags */ 317 317 writel_relaxed(FCR_CTCF | FCR_CTEF, qspi->io_base + QSPI_FCR); 318 + wait_nobusy: 319 + if (!err) 320 + err = stm32_qspi_wait_nobusy(qspi); 318 321 319 322 return err; 320 323 }
+4 -3
drivers/spi/spi-zynq-qspi.c
··· 678 678 xqspi->irq = platform_get_irq(pdev, 0); 679 679 if (xqspi->irq <= 0) { 680 680 ret = -ENXIO; 681 - goto remove_master; 681 + goto clk_dis_all; 682 682 } 683 683 ret = devm_request_irq(&pdev->dev, xqspi->irq, zynq_qspi_irq, 684 684 0, pdev->name, xqspi); 685 685 if (ret != 0) { 686 686 ret = -ENXIO; 687 687 dev_err(&pdev->dev, "request_irq failed\n"); 688 - goto remove_master; 688 + goto clk_dis_all; 689 689 } 690 690 691 691 ret = of_property_read_u32(np, "num-cs", ··· 693 693 if (ret < 0) { 694 694 ctlr->num_chipselect = 1; 695 695 } else if (num_cs > ZYNQ_QSPI_MAX_NUM_CS) { 696 + ret = -EINVAL; 696 697 dev_err(&pdev->dev, "only 2 chip selects are available\n"); 697 - goto remove_master; 698 + goto clk_dis_all; 698 699 } else { 699 700 ctlr->num_chipselect = num_cs; 700 701 }
+1 -1
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
··· 2091 2091 struct net_device *ndev = padapter->pnetdev; 2092 2092 2093 2093 { 2094 - struct station_info sinfo; 2094 + struct station_info sinfo = {}; 2095 2095 u8 ie_offset; 2096 2096 if (GetFrameSubType(pmgmt_frame) == WIFI_ASSOCREQ) 2097 2097 ie_offset = _ASOCREQ_IE_OFFSET_;
+1 -3
drivers/target/target_core_transport.c
··· 3121 3121 __releases(&cmd->t_state_lock) 3122 3122 __acquires(&cmd->t_state_lock) 3123 3123 { 3124 - 3125 - assert_spin_locked(&cmd->t_state_lock); 3126 - WARN_ON_ONCE(!irqs_disabled()); 3124 + lockdep_assert_held(&cmd->t_state_lock); 3127 3125 3128 3126 if (fabric_stop) 3129 3127 cmd->transport_state |= CMD_T_FABRIC_STOP;
+4 -2
drivers/tee/optee/call.c
··· 220 220 struct optee_msg_arg *msg_arg; 221 221 phys_addr_t msg_parg; 222 222 struct optee_session *sess = NULL; 223 + uuid_t client_uuid; 223 224 224 225 /* +2 for the meta parameters added below */ 225 226 shm = get_msg_arg(ctx, arg->num_params + 2, &msg_arg, &msg_parg); ··· 241 240 memcpy(&msg_arg->params[0].u.value, arg->uuid, sizeof(arg->uuid)); 242 241 msg_arg->params[1].u.value.c = arg->clnt_login; 243 242 244 - rc = tee_session_calc_client_uuid((uuid_t *)&msg_arg->params[1].u.value, 245 - arg->clnt_login, arg->clnt_uuid); 243 + rc = tee_session_calc_client_uuid(&client_uuid, arg->clnt_login, 244 + arg->clnt_uuid); 246 245 if (rc) 247 246 goto out; 247 + export_uuid(msg_arg->params[1].u.octets, &client_uuid); 248 248 249 249 rc = optee_to_msg_param(msg_arg->params + 2, arg->num_params, param); 250 250 if (rc)
+4 -2
drivers/tee/optee/optee_msg.h
··· 9 9 #include <linux/types.h> 10 10 11 11 /* 12 - * This file defines the OP-TEE message protocol used to communicate 12 + * This file defines the OP-TEE message protocol (ABI) used to communicate 13 13 * with an instance of OP-TEE running in secure world. 14 14 * 15 15 * This file is divided into two sections. ··· 144 144 * @tmem: parameter by temporary memory reference 145 145 * @rmem: parameter by registered memory reference 146 146 * @value: parameter by opaque value 147 + * @octets: parameter by octet string 147 148 * 148 149 * @attr & OPTEE_MSG_ATTR_TYPE_MASK indicates if tmem, rmem or value is used in 149 - * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value, 150 + * the union. OPTEE_MSG_ATTR_TYPE_VALUE_* indicates value or octets, 150 151 * OPTEE_MSG_ATTR_TYPE_TMEM_* indicates @tmem and 151 152 * OPTEE_MSG_ATTR_TYPE_RMEM_* indicates @rmem, 152 153 * OPTEE_MSG_ATTR_TYPE_NONE indicates that none of the members are used. ··· 158 157 struct optee_msg_param_tmem tmem; 159 158 struct optee_msg_param_rmem rmem; 160 159 struct optee_msg_param_value value; 160 + u8 octets[24]; 161 161 } u; 162 162 }; 163 163
+11 -4
drivers/thermal/intel/therm_throt.c
··· 621 621 return atomic_read(&therm_throt_en); 622 622 } 623 623 624 + void __init therm_lvt_init(void) 625 + { 626 + /* 627 + * This function is only called on boot CPU. Save the init thermal 628 + * LVT value on BSP and use that value to restore APs' thermal LVT 629 + * entry BIOS programmed later 630 + */ 631 + if (intel_thermal_supported(&boot_cpu_data)) 632 + lvtthmr_init = apic_read(APIC_LVTTHMR); 633 + } 634 + 624 635 void intel_init_thermal(struct cpuinfo_x86 *c) 625 636 { 626 637 unsigned int cpu = smp_processor_id(); ··· 640 629 641 630 if (!intel_thermal_supported(c)) 642 631 return; 643 - 644 - /* On the BSP? */ 645 - if (c == &boot_cpu_data) 646 - lvtthmr_init = apic_read(APIC_LVTTHMR); 647 632 648 633 /* 649 634 * First check if its enabled already, in which case there might
+5 -1
drivers/tty/serial/8250/8250_exar.c
··· 553 553 { 554 554 struct exar8250 *priv = pci_get_drvdata(pcidev); 555 555 struct uart_8250_port *port = serial8250_get_port(priv->line[0]); 556 - struct platform_device *pdev = port->port.private_data; 556 + struct platform_device *pdev; 557 + 558 + pdev = port->port.private_data; 559 + if (!pdev) 560 + return; 557 561 558 562 device_remove_software_node(&pdev->dev); 559 563 platform_device_unregister(pdev);
+3 -5
drivers/usb/cdns3/cdns3-gadget.c
··· 2007 2007 else 2008 2008 mask = BIT(priv_ep->num); 2009 2009 2010 - if (priv_ep->type != USB_ENDPOINT_XFER_ISOC) { 2010 + if (priv_ep->type != USB_ENDPOINT_XFER_ISOC && !priv_ep->dir) { 2011 2011 cdns3_set_register_bit(&regs->tdl_from_trb, mask); 2012 2012 cdns3_set_register_bit(&regs->tdl_beh, mask); 2013 2013 cdns3_set_register_bit(&regs->tdl_beh2, mask); ··· 2046 2046 case USB_ENDPOINT_XFER_INT: 2047 2047 ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_INT); 2048 2048 2049 - if ((priv_dev->dev_ver == DEV_VER_V2 && !priv_ep->dir) || 2050 - priv_dev->dev_ver > DEV_VER_V2) 2049 + if (priv_dev->dev_ver >= DEV_VER_V2 && !priv_ep->dir) 2051 2050 ep_cfg |= EP_CFG_TDL_CHK; 2052 2051 break; 2053 2052 case USB_ENDPOINT_XFER_BULK: 2054 2053 ep_cfg = EP_CFG_EPTYPE(USB_ENDPOINT_XFER_BULK); 2055 2054 2056 - if ((priv_dev->dev_ver == DEV_VER_V2 && !priv_ep->dir) || 2057 - priv_dev->dev_ver > DEV_VER_V2) 2055 + if (priv_dev->dev_ver >= DEV_VER_V2 && !priv_ep->dir) 2058 2056 ep_cfg |= EP_CFG_TDL_CHK; 2059 2057 break; 2060 2058 default:
+4 -3
drivers/usb/cdns3/cdnsp-ring.c
··· 1517 1517 { 1518 1518 struct cdnsp_device *pdev = (struct cdnsp_device *)data; 1519 1519 union cdnsp_trb *event_ring_deq; 1520 + unsigned long flags; 1520 1521 int counter = 0; 1521 1522 1522 - spin_lock(&pdev->lock); 1523 + spin_lock_irqsave(&pdev->lock, flags); 1523 1524 1524 1525 if (pdev->cdnsp_state & (CDNSP_STATE_HALTED | CDNSP_STATE_DYING)) { 1525 1526 cdnsp_died(pdev); 1526 - spin_unlock(&pdev->lock); 1527 + spin_unlock_irqrestore(&pdev->lock, flags); 1527 1528 return IRQ_HANDLED; 1528 1529 } 1529 1530 ··· 1540 1539 1541 1540 cdnsp_update_erst_dequeue(pdev, event_ring_deq, 1); 1542 1541 1543 - spin_unlock(&pdev->lock); 1542 + spin_unlock_irqrestore(&pdev->lock, flags); 1544 1543 1545 1544 return IRQ_HANDLED; 1546 1545 }
-6
drivers/usb/dwc3/core.c
··· 1690 1690 return 0; 1691 1691 } 1692 1692 1693 - static void dwc3_shutdown(struct platform_device *pdev) 1694 - { 1695 - dwc3_remove(pdev); 1696 - } 1697 - 1698 1693 #ifdef CONFIG_PM 1699 1694 static int dwc3_core_init_for_resume(struct dwc3 *dwc) 1700 1695 { ··· 2007 2012 static struct platform_driver dwc3_driver = { 2008 2013 .probe = dwc3_probe, 2009 2014 .remove = dwc3_remove, 2010 - .shutdown = dwc3_shutdown, 2011 2015 .driver = { 2012 2016 .name = "dwc3", 2013 2017 .of_match_table = of_match_ptr(of_dwc3_match),
+3
drivers/usb/dwc3/debug.h
··· 413 413 414 414 415 415 #ifdef CONFIG_DEBUG_FS 416 + extern void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep); 416 417 extern void dwc3_debugfs_init(struct dwc3 *d); 417 418 extern void dwc3_debugfs_exit(struct dwc3 *d); 418 419 #else 420 + static inline void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep) 421 + { } 419 422 static inline void dwc3_debugfs_init(struct dwc3 *d) 420 423 { } 421 424 static inline void dwc3_debugfs_exit(struct dwc3 *d)
+2 -19
drivers/usb/dwc3/debugfs.c
··· 886 886 } 887 887 } 888 888 889 - static void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep, 890 - struct dentry *parent) 889 + void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep) 891 890 { 892 891 struct dentry *dir; 893 892 894 - dir = debugfs_create_dir(dep->name, parent); 893 + dir = debugfs_create_dir(dep->name, dep->dwc->root); 895 894 dwc3_debugfs_create_endpoint_files(dep, dir); 896 - } 897 - 898 - static void dwc3_debugfs_create_endpoint_dirs(struct dwc3 *dwc, 899 - struct dentry *parent) 900 - { 901 - int i; 902 - 903 - for (i = 0; i < dwc->num_eps; i++) { 904 - struct dwc3_ep *dep = dwc->eps[i]; 905 - 906 - if (!dep) 907 - continue; 908 - 909 - dwc3_debugfs_create_endpoint_dir(dep, parent); 910 - } 911 895 } 912 896 913 897 void dwc3_debugfs_init(struct dwc3 *dwc) ··· 924 940 &dwc3_testmode_fops); 925 941 debugfs_create_file("link_state", 0644, root, dwc, 926 942 &dwc3_link_state_fops); 927 - dwc3_debugfs_create_endpoint_dirs(dwc, root); 928 943 } 929 944 } 930 945
+10 -3
drivers/usb/dwc3/dwc3-meson-g12a.c
··· 651 651 return PTR_ERR(priv->usb_glue_regmap); 652 652 653 653 /* Create a regmap for each USB2 PHY control register set */ 654 - for (i = 0; i < priv->usb2_ports; i++) { 654 + for (i = 0; i < priv->drvdata->num_phys; i++) { 655 655 struct regmap_config u2p_regmap_config = { 656 656 .reg_bits = 8, 657 657 .val_bits = 32, 658 658 .reg_stride = 4, 659 659 .max_register = U2P_R1, 660 660 }; 661 + 662 + if (!strstr(priv->drvdata->phy_names[i], "usb2")) 663 + continue; 661 664 662 665 u2p_regmap_config.name = devm_kasprintf(priv->dev, GFP_KERNEL, 663 666 "u2p-%d", i); ··· 775 772 776 773 ret = priv->drvdata->usb_init(priv); 777 774 if (ret) 778 - goto err_disable_clks; 775 + goto err_disable_regulator; 779 776 780 777 /* Init PHYs */ 781 778 for (i = 0 ; i < PHY_COUNT ; ++i) { 782 779 ret = phy_init(priv->phys[i]); 783 780 if (ret) 784 - goto err_disable_clks; 781 + goto err_disable_regulator; 785 782 } 786 783 787 784 /* Set PHY Power */ ··· 818 815 err_phys_exit: 819 816 for (i = 0 ; i < PHY_COUNT ; ++i) 820 817 phy_exit(priv->phys[i]); 818 + 819 + err_disable_regulator: 820 + if (priv->vbus) 821 + regulator_disable(priv->vbus); 821 822 822 823 err_disable_clks: 823 824 clk_bulk_disable_unprepare(priv->drvdata->num_clks,
+3
drivers/usb/dwc3/ep0.c
··· 292 292 epnum |= 1; 293 293 294 294 dep = dwc->eps[epnum]; 295 + if (dep == NULL) 296 + return NULL; 297 + 295 298 if (dep->flags & DWC3_EP_ENABLED) 296 299 return dep; 297 300
+12 -6
drivers/usb/dwc3/gadget.c
··· 2261 2261 } 2262 2262 2263 2263 /* 2264 - * Synchronize any pending event handling before executing the controller 2265 - * halt routine. 2264 + * Synchronize and disable any further event handling while controller 2265 + * is being enabled/disabled. 2266 2266 */ 2267 - if (!is_on) { 2268 - dwc3_gadget_disable_irq(dwc); 2269 - synchronize_irq(dwc->irq_gadget); 2270 - } 2267 + disable_irq(dwc->irq_gadget); 2271 2268 2272 2269 spin_lock_irqsave(&dwc->lock, flags); 2273 2270 ··· 2302 2305 2303 2306 ret = dwc3_gadget_run_stop(dwc, is_on, false); 2304 2307 spin_unlock_irqrestore(&dwc->lock, flags); 2308 + enable_irq(dwc->irq_gadget); 2309 + 2305 2310 pm_runtime_put(dwc->dev); 2306 2311 2307 2312 return ret; ··· 2753 2754 INIT_LIST_HEAD(&dep->started_list); 2754 2755 INIT_LIST_HEAD(&dep->cancelled_list); 2755 2756 2757 + dwc3_debugfs_create_endpoint_dir(dep); 2758 + 2756 2759 return 0; 2757 2760 } 2758 2761 ··· 2798 2797 list_del(&dep->endpoint.ep_list); 2799 2798 } 2800 2799 2800 + debugfs_remove_recursive(debugfs_lookup(dep->name, dwc->root)); 2801 2801 kfree(dep); 2802 2802 } 2803 2803 } ··· 4048 4046 dwc3_gadget_free_endpoints(dwc); 4049 4047 err4: 4050 4048 usb_put_gadget(dwc->gadget); 4049 + dwc->gadget = NULL; 4051 4050 err3: 4052 4051 dma_free_coherent(dwc->sysdev, DWC3_BOUNCE_SIZE, dwc->bounce, 4053 4052 dwc->bounce_addr); ··· 4068 4065 4069 4066 void dwc3_gadget_exit(struct dwc3 *dwc) 4070 4067 { 4068 + if (!dwc->gadget) 4069 + return; 4070 + 4071 4071 usb_del_gadget(dwc->gadget); 4072 4072 dwc3_gadget_free_endpoints(dwc); 4073 4073 usb_put_gadget(dwc->gadget);
+8
drivers/usb/gadget/config.c
··· 164 164 { 165 165 struct usb_gadget *g = f->config->cdev->gadget; 166 166 167 + /* super-speed-plus descriptor falls back to super-speed one, 168 + * if such a descriptor was provided, thus avoiding a NULL 169 + * pointer dereference if a 5gbps capable gadget is used with 170 + * a 10gbps capable config (device port + cable + host port) 171 + */ 172 + if (!ssp) 173 + ssp = ss; 174 + 167 175 if (fs) { 168 176 f->fs_descriptors = usb_copy_descriptors(fs); 169 177 if (!f->fs_descriptors)
+1 -1
drivers/usb/gadget/function/f_ecm.c
··· 791 791 fs_ecm_notify_desc.bEndpointAddress; 792 792 793 793 status = usb_assign_descriptors(f, ecm_fs_function, ecm_hs_function, 794 - ecm_ss_function, NULL); 794 + ecm_ss_function, ecm_ss_function); 795 795 if (status) 796 796 goto fail; 797 797
+3 -3
drivers/usb/gadget/function/f_eem.c
··· 302 302 eem_ss_out_desc.bEndpointAddress = eem_fs_out_desc.bEndpointAddress; 303 303 304 304 status = usb_assign_descriptors(f, eem_fs_function, eem_hs_function, 305 - eem_ss_function, NULL); 305 + eem_ss_function, eem_ss_function); 306 306 if (status) 307 307 goto fail; 308 308 ··· 495 495 skb2 = skb_clone(skb, GFP_ATOMIC); 496 496 if (unlikely(!skb2)) { 497 497 DBG(cdev, "unable to unframe EEM packet\n"); 498 - continue; 498 + goto next; 499 499 } 500 500 skb_trim(skb2, len - ETH_FCS_LEN); 501 501 ··· 505 505 GFP_ATOMIC); 506 506 if (unlikely(!skb3)) { 507 507 dev_kfree_skb_any(skb2); 508 - continue; 508 + goto next; 509 509 } 510 510 dev_kfree_skb_any(skb2); 511 511 skb_queue_tail(list, skb3);
+3
drivers/usb/gadget/function/f_fs.c
··· 3567 3567 ffs->func = NULL; 3568 3568 } 3569 3569 3570 + /* Drain any pending AIO completions */ 3571 + drain_workqueue(ffs->io_completion_wq); 3572 + 3570 3573 if (!--opts->refcnt) 3571 3574 functionfs_unbind(ffs); 3572 3575
+2 -1
drivers/usb/gadget/function/f_hid.c
··· 802 802 hidg_fs_out_ep_desc.bEndpointAddress; 803 803 804 804 status = usb_assign_descriptors(f, hidg_fs_descriptors, 805 - hidg_hs_descriptors, hidg_ss_descriptors, NULL); 805 + hidg_hs_descriptors, hidg_ss_descriptors, 806 + hidg_ss_descriptors); 806 807 if (status) 807 808 goto fail; 808 809
+1 -1
drivers/usb/gadget/function/f_loopback.c
··· 207 207 ss_loop_sink_desc.bEndpointAddress = fs_loop_sink_desc.bEndpointAddress; 208 208 209 209 ret = usb_assign_descriptors(f, fs_loopback_descs, hs_loopback_descs, 210 - ss_loopback_descs, NULL); 210 + ss_loopback_descs, ss_loopback_descs); 211 211 if (ret) 212 212 return ret; 213 213
+5 -5
drivers/usb/gadget/function/f_ncm.c
··· 583 583 data[0] = cpu_to_le32(ncm_bitrate(cdev->gadget)); 584 584 data[1] = data[0]; 585 585 586 - DBG(cdev, "notify speed %d\n", ncm_bitrate(cdev->gadget)); 586 + DBG(cdev, "notify speed %u\n", ncm_bitrate(cdev->gadget)); 587 587 ncm->notify_state = NCM_NOTIFY_CONNECT; 588 588 break; 589 589 } ··· 1101 1101 ncm->ndp_dgram_count = 1; 1102 1102 1103 1103 /* Note: we skip opts->next_ndp_index */ 1104 - } 1105 1104 1106 - /* Delay the timer. */ 1107 - hrtimer_start(&ncm->task_timer, TX_TIMEOUT_NSECS, 1108 - HRTIMER_MODE_REL_SOFT); 1105 + /* Start the timer. */ 1106 + hrtimer_start(&ncm->task_timer, TX_TIMEOUT_NSECS, 1107 + HRTIMER_MODE_REL_SOFT); 1108 + } 1109 1109 1110 1110 /* Add the datagram position entries */ 1111 1111 ntb_ndp = skb_put_zero(ncm->skb_tx_ndp, dgram_idx_len);
+2 -1
drivers/usb/gadget/function/f_printer.c
··· 1101 1101 ss_ep_out_desc.bEndpointAddress = fs_ep_out_desc.bEndpointAddress; 1102 1102 1103 1103 ret = usb_assign_descriptors(f, fs_printer_function, 1104 - hs_printer_function, ss_printer_function, NULL); 1104 + hs_printer_function, ss_printer_function, 1105 + ss_printer_function); 1105 1106 if (ret) 1106 1107 return ret; 1107 1108
+1 -1
drivers/usb/gadget/function/f_rndis.c
··· 789 789 ss_notify_desc.bEndpointAddress = fs_notify_desc.bEndpointAddress; 790 790 791 791 status = usb_assign_descriptors(f, eth_fs_function, eth_hs_function, 792 - eth_ss_function, NULL); 792 + eth_ss_function, eth_ss_function); 793 793 if (status) 794 794 goto fail; 795 795
+1 -1
drivers/usb/gadget/function/f_serial.c
··· 233 233 gser_ss_out_desc.bEndpointAddress = gser_fs_out_desc.bEndpointAddress; 234 234 235 235 status = usb_assign_descriptors(f, gser_fs_function, gser_hs_function, 236 - gser_ss_function, NULL); 236 + gser_ss_function, gser_ss_function); 237 237 if (status) 238 238 goto fail; 239 239 dev_dbg(&cdev->gadget->dev, "generic ttyGS%d: %s speed IN/%s OUT/%s\n",
+2 -1
drivers/usb/gadget/function/f_sourcesink.c
··· 431 431 ss_iso_sink_desc.bEndpointAddress = fs_iso_sink_desc.bEndpointAddress; 432 432 433 433 ret = usb_assign_descriptors(f, fs_source_sink_descs, 434 - hs_source_sink_descs, ss_source_sink_descs, NULL); 434 + hs_source_sink_descs, ss_source_sink_descs, 435 + ss_source_sink_descs); 435 436 if (ret) 436 437 return ret; 437 438
+1 -1
drivers/usb/gadget/function/f_subset.c
··· 358 358 fs_subset_out_desc.bEndpointAddress; 359 359 360 360 status = usb_assign_descriptors(f, fs_eth_function, hs_eth_function, 361 - ss_eth_function, NULL); 361 + ss_eth_function, ss_eth_function); 362 362 if (status) 363 363 goto fail; 364 364
+2 -1
drivers/usb/gadget/function/f_tcm.c
··· 2057 2057 uasp_fs_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress; 2058 2058 2059 2059 ret = usb_assign_descriptors(f, uasp_fs_function_desc, 2060 - uasp_hs_function_desc, uasp_ss_function_desc, NULL); 2060 + uasp_hs_function_desc, uasp_ss_function_desc, 2061 + uasp_ss_function_desc); 2061 2062 if (ret) 2062 2063 goto ep_fail; 2063 2064
+6 -1
drivers/usb/host/xhci-pci.c
··· 59 59 #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI 0x1138 60 60 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e 61 61 62 + #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639 62 63 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 63 64 #define PCI_DEVICE_ID_AMD_PROMONTORYA_3 0x43ba 64 65 #define PCI_DEVICE_ID_AMD_PROMONTORYA_2 0x43bb ··· 182 181 (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2) || 183 182 (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_1))) 184 183 xhci->quirks |= XHCI_U2_DISABLE_WAKE; 184 + 185 + if (pdev->vendor == PCI_VENDOR_ID_AMD && 186 + pdev->device == PCI_DEVICE_ID_AMD_RENOIR_XHCI) 187 + xhci->quirks |= XHCI_BROKEN_D3COLD; 185 188 186 189 if (pdev->vendor == PCI_VENDOR_ID_INTEL) { 187 190 xhci->quirks |= XHCI_LPM_SUPPORT; ··· 544 539 * Systems with the TI redriver that loses port status change events 545 540 * need to have the registers polled during D3, so avoid D3cold. 546 541 */ 547 - if (xhci->quirks & XHCI_COMP_MODE_QUIRK) 542 + if (xhci->quirks & (XHCI_COMP_MODE_QUIRK | XHCI_BROKEN_D3COLD)) 548 543 pci_d3cold_disable(pdev); 549 544 550 545 if (xhci->quirks & XHCI_PME_STUCK_QUIRK)
+1
drivers/usb/host/xhci.h
··· 1892 1892 #define XHCI_DISABLE_SPARSE BIT_ULL(38) 1893 1893 #define XHCI_SG_TRB_CACHE_SIZE_QUIRK BIT_ULL(39) 1894 1894 #define XHCI_NO_SOFT_RETRY BIT_ULL(40) 1895 + #define XHCI_BROKEN_D3COLD BIT_ULL(41) 1895 1896 1896 1897 unsigned int num_active_eps; 1897 1898 unsigned int limit_active_eps;
+2
drivers/usb/misc/brcmstb-usb-pinmap.c
··· 263 263 return -EINVAL; 264 264 265 265 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 266 + if (!r) 267 + return -EINVAL; 266 268 267 269 pdata = devm_kzalloc(&pdev->dev, 268 270 sizeof(*pdata) +
+1 -2
drivers/usb/musb/musb_core.c
··· 2009 2009 schedule_delayed_work(&musb->irq_work, 2010 2010 msecs_to_jiffies(1000)); 2011 2011 musb->quirk_retries--; 2012 - break; 2013 2012 } 2014 - fallthrough; 2013 + break; 2015 2014 case MUSB_QUIRK_B_INVALID_VBUS_91: 2016 2015 if (musb->quirk_retries && !musb->flush_irq_work) { 2017 2016 musb_dbg(musb,
+78 -6
drivers/usb/serial/cp210x.c
··· 252 252 u8 gpio_input;
253 253 #endif
254 254 u8 partnum;
255 + u32 fw_version;
255 256 speed_t min_speed;
256 257 speed_t max_speed;
257 258 bool use_actual_rate;
259 + bool no_flow_control;
258 260 };
259 261 
260 262 enum cp210x_event_state {
··· 400 398 
401 399 /* CP210X_VENDOR_SPECIFIC values */
402 400 #define CP210X_READ_2NCONFIG 0x000E
401 + #define CP210X_GET_FW_VER_2N 0x0010
403 402 #define CP210X_READ_LATCH 0x00C2
404 403 #define CP210X_GET_PARTNUM 0x370B
405 404 #define CP210X_GET_PORTCONFIG 0x370C
··· 539 536 #define CP210X_2NCONFIG_GPIO_MODE_IDX 581
540 537 #define CP210X_2NCONFIG_GPIO_RSTLATCH_IDX 587
541 538 #define CP210X_2NCONFIG_GPIO_CONTROL_IDX 600
539 + 
540 + /* CP2102N QFN20 port configuration values */
541 + #define CP2102N_QFN20_GPIO2_TXLED_MODE BIT(2)
542 + #define CP2102N_QFN20_GPIO3_RXLED_MODE BIT(3)
543 + #define CP2102N_QFN20_GPIO1_RS485_MODE BIT(4)
544 + #define CP2102N_QFN20_GPIO0_CLK_MODE BIT(6)
542 545 /* CP210X_VENDOR_SPECIFIC, CP210X_WRITE_LATCH call writes these 0x2 bytes. */
543 546 struct cp210x_gpio_write {
··· 1131 1122 static void cp210x_set_flow_control(struct tty_struct *tty,
1132 1123 struct usb_serial_port *port, struct ktermios *old_termios)
1133 1124 {
1125 + struct cp210x_serial_private *priv = usb_get_serial_data(port->serial);
1134 1126 struct cp210x_port_private *port_priv = usb_get_serial_port_data(port);
1135 1127 struct cp210x_special_chars chars;
1136 1128 struct cp210x_flow_ctl flow_ctl;
1137 1129 u32 flow_repl;
1138 1130 u32 ctl_hs;
1139 1131 int ret;
1132 + 
1133 + /*
1134 + * Some CP2102N interpret ulXonLimit as ulFlowReplace (erratum
1135 + * CP2102N_E104). Report back that flow control is not supported.
1136 + */
1137 + if (priv->no_flow_control) {
1138 + tty->termios.c_cflag &= ~CRTSCTS;
1139 + tty->termios.c_iflag &= ~(IXON | IXOFF);
1140 + }
1140 1141 
1141 1142 if (old_termios &&
1142 1143 C_CRTSCTS(tty) == (old_termios->c_cflag & CRTSCTS) &&
··· 1204 1185 port_priv->crtscts = false;
1205 1186 }
1206 1187 
1207 - if (I_IXOFF(tty))
1188 + if (I_IXOFF(tty)) {
1208 1189 flow_repl |= CP210X_SERIAL_AUTO_RECEIVE;
1209 - else
1190 + 
1191 + flow_ctl.ulXonLimit = cpu_to_le32(128);
1192 + flow_ctl.ulXoffLimit = cpu_to_le32(128);
1193 + } else {
1210 1194 flow_repl &= ~CP210X_SERIAL_AUTO_RECEIVE;
1195 + }
1211 1196 
1212 1197 if (I_IXON(tty))
1213 1198 flow_repl |= CP210X_SERIAL_AUTO_TRANSMIT;
1214 1199 else
1215 1200 flow_repl &= ~CP210X_SERIAL_AUTO_TRANSMIT;
1216 - 
1217 - flow_ctl.ulXonLimit = cpu_to_le32(128);
1218 - flow_ctl.ulXoffLimit = cpu_to_le32(128);
1219 1201 
1220 1202 dev_dbg(&port->dev, "%s - ctrl = 0x%02x, flow = 0x%02x\n", __func__,
1221 1203 ctl_hs, flow_repl);
··· 1753 1733 priv->gpio_pushpull = (gpio_pushpull >> 3) & 0x0f;
1754 1734 
1755 1735 /* 0 indicates GPIO mode, 1 is alternate function */
1756 - priv->gpio_altfunc = (gpio_ctrl >> 2) & 0x0f;
1736 + if (priv->partnum == CP210X_PARTNUM_CP2102N_QFN20) {
1737 + /* QFN20 is special... */
1738 + if (gpio_ctrl & CP2102N_QFN20_GPIO0_CLK_MODE) /* GPIO 0 */
1739 + priv->gpio_altfunc |= BIT(0);
1740 + if (gpio_ctrl & CP2102N_QFN20_GPIO1_RS485_MODE) /* GPIO 1 */
1741 + priv->gpio_altfunc |= BIT(1);
1742 + if (gpio_ctrl & CP2102N_QFN20_GPIO2_TXLED_MODE) /* GPIO 2 */
1743 + priv->gpio_altfunc |= BIT(2);
1744 + if (gpio_ctrl & CP2102N_QFN20_GPIO3_RXLED_MODE) /* GPIO 3 */
1745 + priv->gpio_altfunc |= BIT(3);
1746 + } else {
1747 + priv->gpio_altfunc = (gpio_ctrl >> 2) & 0x0f;
1748 + }
1757 1749 
1758 1750 if (priv->partnum == CP210X_PARTNUM_CP2102N_QFN28) {
1759 1751 /*
··· 1940 1908 priv->use_actual_rate = use_actual_rate;
1941 1909 }
1942 1910 
1911 + static int cp210x_get_fw_version(struct usb_serial *serial, u16 value)
1912 + {
1913 + struct cp210x_serial_private *priv = usb_get_serial_data(serial);
1914 + u8 ver[3];
1915 + int ret;
1916 + 
1917 + ret = cp210x_read_vendor_block(serial, REQTYPE_DEVICE_TO_HOST, value,
1918 + ver, sizeof(ver));
1919 + if (ret)
1920 + return ret;
1921 + 
1922 + dev_dbg(&serial->interface->dev, "%s - %d.%d.%d\n", __func__,
1923 + ver[0], ver[1], ver[2]);
1924 + 
1925 + priv->fw_version = ver[0] << 16 | ver[1] << 8 | ver[2];
1926 + 
1927 + return 0;
1928 + }
1929 + 
1930 + static void cp210x_determine_quirks(struct usb_serial *serial)
1931 + {
1932 + struct cp210x_serial_private *priv = usb_get_serial_data(serial);
1933 + int ret;
1934 + 
1935 + switch (priv->partnum) {
1936 + case CP210X_PARTNUM_CP2102N_QFN28:
1937 + case CP210X_PARTNUM_CP2102N_QFN24:
1938 + case CP210X_PARTNUM_CP2102N_QFN20:
1939 + ret = cp210x_get_fw_version(serial, CP210X_GET_FW_VER_2N);
1940 + if (ret)
1941 + break;
1942 + if (priv->fw_version <= 0x10004)
1943 + priv->no_flow_control = true;
1944 + break;
1945 + default:
1946 + break;
1947 + }
1948 + }
1949 + 
1943 1950 static int cp210x_attach(struct usb_serial *serial)
1944 1951 {
1945 1952 int result;
··· 1999 1928 
2000 1929 usb_set_serial_data(serial, priv);
2001 1930 
1931 + cp210x_determine_quirks(serial);
2002 1932 cp210x_init_max_speed(serial); 2003 1933 2004 1934 result = cp210x_gpio_init(serial);
+1
drivers/usb/serial/ftdi_sio.c
··· 611 611 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 612 612 { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) }, 613 613 { USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) }, 614 + { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONMX_PID) }, 614 615 { USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) }, 615 616 { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) }, 616 617 { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
+1
drivers/usb/serial/ftdi_sio_ids.h
··· 581 581 #define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */ 582 582 #define FTDI_NT_ORIONLX_PLUS_PID 0x7c91 /* OrionLX+ Substation Automation Platform */ 583 583 #define FTDI_NT_ORION_IO_PID 0x7c92 /* Orion I/O */ 584 + #define FTDI_NT_ORIONMX_PID 0x7c93 /* OrionMX */ 584 585 585 586 /* 586 587 * Synapse Wireless product ids (FTDI_VID)
+5 -3
drivers/usb/serial/omninet.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * USB ZyXEL omni.net LCD PLUS driver 3 + * USB ZyXEL omni.net driver 4 4 * 5 5 * Copyright (C) 2013,2017 Johan Hovold <johan@kernel.org> 6 6 * ··· 22 22 #include <linux/usb/serial.h> 23 23 24 24 #define DRIVER_AUTHOR "Alessandro Zummo" 25 - #define DRIVER_DESC "USB ZyXEL omni.net LCD PLUS Driver" 25 + #define DRIVER_DESC "USB ZyXEL omni.net Driver" 26 26 27 27 #define ZYXEL_VENDOR_ID 0x0586 28 28 #define ZYXEL_OMNINET_ID 0x1000 29 + #define ZYXEL_OMNI_56K_PLUS_ID 0x1500 29 30 /* This one seems to be a re-branded ZyXEL device */ 30 31 #define BT_IGNITIONPRO_ID 0x2000 31 32 ··· 41 40 42 41 static const struct usb_device_id id_table[] = { 43 42 { USB_DEVICE(ZYXEL_VENDOR_ID, ZYXEL_OMNINET_ID) }, 43 + { USB_DEVICE(ZYXEL_VENDOR_ID, ZYXEL_OMNI_56K_PLUS_ID) }, 44 44 { USB_DEVICE(ZYXEL_VENDOR_ID, BT_IGNITIONPRO_ID) }, 45 45 { } /* Terminating entry */ 46 46 }; ··· 52 50 .owner = THIS_MODULE, 53 51 .name = "omninet", 54 52 }, 55 - .description = "ZyXEL - omni.net lcd plus usb", 53 + .description = "ZyXEL - omni.net usb", 56 54 .id_table = id_table, 57 55 .num_bulk_out = 2, 58 56 .calc_num_ports = omninet_calc_num_ports,
+3 -3
drivers/usb/serial/quatech2.c
··· 416 416 417 417 /* flush the port transmit buffer */ 418 418 i = usb_control_msg(serial->dev, 419 - usb_rcvctrlpipe(serial->dev, 0), 419 + usb_sndctrlpipe(serial->dev, 0), 420 420 QT2_FLUSH_DEVICE, 0x40, 1, 421 421 port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT); 422 422 ··· 426 426 427 427 /* flush the port receive buffer */ 428 428 i = usb_control_msg(serial->dev, 429 - usb_rcvctrlpipe(serial->dev, 0), 429 + usb_sndctrlpipe(serial->dev, 0), 430 430 QT2_FLUSH_DEVICE, 0x40, 0, 431 431 port_priv->device_port, NULL, 0, QT2_USB_TIMEOUT); 432 432 ··· 639 639 int status; 640 640 641 641 /* power on unit */ 642 - status = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0), 642 + status = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0), 643 643 0xc2, 0x40, 0x8000, 0, NULL, 0, 644 644 QT2_USB_TIMEOUT); 645 645 if (status < 0) {
+1 -1
drivers/usb/typec/mux.c
··· 239 239 dev = class_find_device(&typec_mux_class, NULL, fwnode, 240 240 mux_fwnode_match); 241 241 242 - return dev ? to_typec_switch(dev) : ERR_PTR(-EPROBE_DEFER); 242 + return dev ? to_typec_mux(dev) : ERR_PTR(-EPROBE_DEFER); 243 243 } 244 244 245 245 /**
+11 -4
drivers/usb/typec/mux/intel_pmc_mux.c
··· 582 582 acpi_dev_free_resource_list(&resource_list); 583 583 584 584 if (!pmc->iom_base) { 585 - put_device(&adev->dev); 585 + acpi_dev_put(adev); 586 586 return -ENOMEM; 587 + } 588 + 589 + if (IS_ERR(pmc->iom_base)) { 590 + acpi_dev_put(adev); 591 + return PTR_ERR(pmc->iom_base); 587 592 } 588 593 589 594 pmc->iom_adev = adev; ··· 641 636 break; 642 637 643 638 ret = pmc_usb_register_port(pmc, i, fwnode); 644 - if (ret) 639 + if (ret) { 640 + fwnode_handle_put(fwnode); 645 641 goto err_remove_ports; 642 + } 646 643 } 647 644 648 645 platform_set_drvdata(pdev, pmc); ··· 658 651 usb_role_switch_unregister(pmc->port[i].usb_sw); 659 652 } 660 653 661 - put_device(&pmc->iom_adev->dev); 654 + acpi_dev_put(pmc->iom_adev); 662 655 663 656 return ret; 664 657 } ··· 674 667 usb_role_switch_unregister(pmc->port[i].usb_sw); 675 668 } 676 669 677 - put_device(&pmc->iom_adev->dev); 670 + acpi_dev_put(pmc->iom_adev); 678 671 679 672 return 0; 680 673 }
+77 -46
drivers/usb/typec/tcpm/tcpm.c
··· 401 401 unsigned int nr_src_pdo;
402 402 u32 snk_pdo[PDO_MAX_OBJECTS];
403 403 unsigned int nr_snk_pdo;
404 + u32 snk_vdo_v1[VDO_MAX_OBJECTS];
405 + unsigned int nr_snk_vdo_v1;
404 406 u32 snk_vdo[VDO_MAX_OBJECTS];
405 407 unsigned int nr_snk_vdo;
406 408 
··· 1549 1547 if (PD_VDO_VID(p[0]) != USB_SID_PD)
1550 1548 break;
1551 1549 
1552 - if (PD_VDO_SVDM_VER(p[0]) < svdm_version)
1550 + if (PD_VDO_SVDM_VER(p[0]) < svdm_version) {
1553 1551 typec_partner_set_svdm_version(port->partner,
1554 1552 PD_VDO_SVDM_VER(p[0]));
1553 + svdm_version = PD_VDO_SVDM_VER(p[0]);
1554 + }
1555 1555 
1556 - tcpm_ams_start(port, DISCOVER_IDENTITY);
1557 - /* 6.4.4.3.1: Only respond as UFP (device) */
1558 - if (port->data_role == TYPEC_DEVICE &&
1556 + port->ams = DISCOVER_IDENTITY;
1557 + /*
1558 + * PD2.0 Spec 6.10.3: respond with NAK as DFP (data host)
1559 + * PD3.1 Spec 6.4.4.2.5.1: respond with NAK if "invalid field" or
1560 + * "wrong configuation" or "Unrecognized"
1561 + */
1562 + if ((port->data_role == TYPEC_DEVICE || svdm_version >= SVDM_VER_2_0) &&
1559 1563 port->nr_snk_vdo) {
1560 - /*
1561 - * Product Type DFP and Connector Type are not defined in SVDM
1562 - * version 1.0 and shall be set to zero.
1563 - */
1564 - if (typec_get_negotiated_svdm_version(typec) < SVDM_VER_2_0)
1565 - response[1] = port->snk_vdo[0] & ~IDH_DFP_MASK
1566 - & ~IDH_CONN_MASK;
1567 - else
1568 - response[1] = port->snk_vdo[0];
1569 - for (i = 1; i < port->nr_snk_vdo; i++)
1570 - response[i + 1] = port->snk_vdo[i];
1571 - rlen = port->nr_snk_vdo + 1;
1564 + if (svdm_version < SVDM_VER_2_0) {
1565 + for (i = 0; i < port->nr_snk_vdo_v1; i++)
1566 + response[i + 1] = port->snk_vdo_v1[i];
1567 + rlen = port->nr_snk_vdo_v1 + 1;
1568 + 
1569 + } else {
1570 + for (i = 0; i < port->nr_snk_vdo; i++)
1571 + response[i + 1] = port->snk_vdo[i];
1572 + rlen = port->nr_snk_vdo + 1;
1573 + }
1572 1574 }
1573 1575 break;
1574 1576 case CMD_DISCOVER_SVID:
1575 - tcpm_ams_start(port, DISCOVER_SVIDS);
1577 + port->ams = DISCOVER_SVIDS;
1576 1578 break;
1577 1579 case CMD_DISCOVER_MODES:
1578 - tcpm_ams_start(port, DISCOVER_MODES);
1580 + port->ams = DISCOVER_MODES;
1579 1581 break;
1580 1582 case CMD_ENTER_MODE:
1581 - tcpm_ams_start(port, DFP_TO_UFP_ENTER_MODE);
1583 + port->ams = DFP_TO_UFP_ENTER_MODE;
1582 1584 break;
1583 1585 case CMD_EXIT_MODE:
1584 - tcpm_ams_start(port, DFP_TO_UFP_EXIT_MODE);
1586 + port->ams = DFP_TO_UFP_EXIT_MODE;
1585 1587 break;
1586 1588 case CMD_ATTENTION:
1587 - tcpm_ams_start(port, ATTENTION);
1588 1589 /* Attention command does not have response */
1589 1590 *adev_action = ADEV_ATTENTION;
1590 1591 return 0;
··· 1942 1937 tcpm_log(port, "VDM Tx error, retry");
1943 1938 port->vdm_retries++;
1944 1939 port->vdm_state = VDM_STATE_READY;
1940 + if (PD_VDO_SVDM(vdo_hdr) && PD_VDO_CMDT(vdo_hdr) == CMDT_INIT)
1941 + tcpm_ams_finish(port);
1942 + } else {
1945 1943 tcpm_ams_finish(port);
1946 1944 }
1947 1945 break;
··· 2191 2183 
2192 2184 if (!type) {
2193 2185 tcpm_log(port, "Alert message received with no type");
2186 + tcpm_queue_message(port, PD_MSG_CTRL_NOT_SUPP);
2194 2187 return;
2195 2188 }
2196 2189 
2197 2190 /* Just handling non-battery alerts for now */
2198 2191 if (!(type & USB_PD_ADO_TYPE_BATT_STATUS_CHANGE)) {
2199 - switch (port->state) {
2200 - case SRC_READY:
2201 - case SNK_READY:
2192 + if (port->pwr_role == TYPEC_SOURCE) {
2193 + port->upcoming_state = GET_STATUS_SEND;
2194 + tcpm_ams_start(port, GETTING_SOURCE_SINK_STATUS);
2195 + } else {
2196 + /*
2197 + * Do not check SinkTxOk here in case the Source doesn't set its Rp to
2198 + * SinkTxOk in time.
2199 + */
2200 + port->ams = GETTING_SOURCE_SINK_STATUS;
2202 2201 tcpm_set_state(port, GET_STATUS_SEND, 0);
2203 - break;
2204 - default:
2205 - tcpm_queue_message(port, PD_MSG_CTRL_WAIT);
2206 - break;
2207 2202 }
2203 + } else {
2204 + tcpm_queue_message(port, PD_MSG_CTRL_NOT_SUPP);
2208 2205 }
2209 2206 }
··· 2453 2440 tcpm_pd_handle_state(port, BIST_RX, BIST, 0);
2454 2441 break;
2455 2442 case PD_DATA_ALERT:
2456 - tcpm_handle_alert(port, msg->payload, cnt);
2443 + if (port->state != SRC_READY && port->state != SNK_READY)
2444 + tcpm_pd_handle_state(port, port->pwr_role == TYPEC_SOURCE ?
2445 + SRC_SOFT_RESET_WAIT_SNK_TX : SNK_SOFT_RESET,
2446 + NONE_AMS, 0);
2447 + else
2448 + tcpm_handle_alert(port, msg->payload, cnt);
2457 2449 break;
2458 2450 case PD_DATA_BATT_STATUS:
2459 2451 case PD_DATA_GET_COUNTRY_INFO:
··· 2782 2764 
2783 2765 switch (type) {
2784 2766 case PD_EXT_STATUS:
2785 - /*
2786 - * If PPS related events raised then get PPS status to clear
2787 - * (see USB PD 3.0 Spec, 6.5.2.4)
2788 - */
2789 - if (msg->ext_msg.data[USB_PD_EXT_SDB_EVENT_FLAGS] &
2790 - USB_PD_EXT_SDB_PPS_EVENTS)
2791 - tcpm_pd_handle_state(port, GET_PPS_STATUS_SEND,
2792 - GETTING_SOURCE_SINK_STATUS, 0);
2793 - 
2794 - else
2795 - tcpm_pd_handle_state(port, ready_state(port), NONE_AMS, 0);
2796 - break;
2797 2767 case PD_EXT_PPS_STATUS:
2798 - /*
2799 - * For now the PPS status message is used to clear events
2800 - * and nothing more.
2801 - */ 2802 - tcpm_pd_handle_state(port, ready_state(port), NONE_AMS, 0); 2768 + if (port->ams == GETTING_SOURCE_SINK_STATUS) { 2769 + tcpm_ams_finish(port); 2770 + tcpm_set_state(port, ready_state(port), 0); 2771 + } else { 2772 + /* unexpected Status or PPS_Status Message */ 2773 + tcpm_pd_handle_state(port, port->pwr_role == TYPEC_SOURCE ? 2774 + SRC_SOFT_RESET_WAIT_SNK_TX : SNK_SOFT_RESET, 2775 + NONE_AMS, 0); 2776 + } 2803 2777 break; 2804 2778 case PD_EXT_SOURCE_CAP_EXT: 2805 2779 case PD_EXT_GET_BATT_CAP: ··· 5957 5947 return ret; 5958 5948 } 5959 5949 5950 + /* If sink-vdos is found, sink-vdos-v1 is expected for backward compatibility. */ 5951 + if (port->nr_snk_vdo) { 5952 + ret = fwnode_property_count_u32(fwnode, "sink-vdos-v1"); 5953 + if (ret < 0) 5954 + return ret; 5955 + else if (ret == 0) 5956 + return -ENODATA; 5957 + 5958 + port->nr_snk_vdo_v1 = min(ret, VDO_MAX_OBJECTS); 5959 + ret = fwnode_property_read_u32_array(fwnode, "sink-vdos-v1", 5960 + port->snk_vdo_v1, 5961 + port->nr_snk_vdo_v1); 5962 + if (ret < 0) 5963 + return ret; 5964 + } 5965 + 5960 5966 return 0; 5961 5967 } 5962 5968 ··· 6337 6311 void tcpm_unregister_port(struct tcpm_port *port) 6338 6312 { 6339 6313 int i; 6314 + 6315 + hrtimer_cancel(&port->send_discover_timer); 6316 + hrtimer_cancel(&port->enable_frs_timer); 6317 + hrtimer_cancel(&port->vdm_state_machine_timer); 6318 + hrtimer_cancel(&port->state_machine_timer); 6340 6319 6341 6320 tcpm_reset_port(port); 6342 6321 for (i = 0; i < ARRAY_SIZE(port->port_altmode); i++)
+1 -1
drivers/usb/typec/tcpm/wcove.c
··· 378 378 const u8 *data = (void *)msg; 379 379 int i; 380 380 381 - for (i = 0; i < pd_header_cnt(msg->header) * 4 + 2; i++) { 381 + for (i = 0; i < pd_header_cnt_le(msg->header) * 4 + 2; i++) { 382 382 ret = regmap_write(wcove->regmap, USBC_TX_DATA + i, 383 383 data[i]); 384 384 if (ret)
+1
drivers/usb/typec/ucsi/ucsi.c
··· 1253 1253 } 1254 1254 1255 1255 err_reset: 1256 + memset(&ucsi->cap, 0, sizeof(ucsi->cap)); 1256 1257 ucsi_reset_ppm(ucsi); 1257 1258 err: 1258 1259 return ret;
+1
drivers/vfio/pci/Kconfig
··· 2 2 config VFIO_PCI 3 3 tristate "VFIO support for PCI devices" 4 4 depends on VFIO && PCI && EVENTFD 5 + depends on MMU 5 6 select VFIO_VIRQFD 6 7 select IRQ_BYPASS_MANAGER 7 8 help
+1 -1
drivers/vfio/pci/vfio_pci_config.c
··· 1581 1581 if (len == 0xFF) { 1582 1582 len = vfio_ext_cap_len(vdev, ecap, epos); 1583 1583 if (len < 0) 1584 - return ret; 1584 + return len; 1585 1585 } 1586 1586 } 1587 1587
+1 -1
drivers/vfio/platform/vfio_platform_common.c
··· 291 291 vfio_platform_regions_cleanup(vdev); 292 292 err_reg: 293 293 mutex_unlock(&driver_lock); 294 - module_put(THIS_MODULE); 294 + module_put(vdev->parent_module); 295 295 return ret; 296 296 } 297 297
+1 -1
drivers/vfio/vfio_iommu_type1.c
··· 2795 2795 return 0; 2796 2796 } 2797 2797 2798 - size = sizeof(*cap_iovas) + (iovas * sizeof(*cap_iovas->iova_ranges)); 2798 + size = struct_size(cap_iovas, iova_ranges, iovas); 2799 2799 2800 2800 cap_iovas = kzalloc(size, GFP_KERNEL); 2801 2801 if (!cap_iovas)
+35
drivers/video/fbdev/core/fb_defio.c
··· 52 52 return VM_FAULT_SIGBUS; 53 53 54 54 get_page(page); 55 + 56 + if (vmf->vma->vm_file) 57 + page->mapping = vmf->vma->vm_file->f_mapping; 58 + else 59 + printk(KERN_ERR "no mapping available\n"); 60 + 61 + BUG_ON(!page->mapping); 55 62 page->index = vmf->pgoff; 56 63 57 64 vmf->page = page; ··· 151 144 .page_mkwrite = fb_deferred_io_mkwrite, 152 145 }; 153 146 147 + static int fb_deferred_io_set_page_dirty(struct page *page) 148 + { 149 + if (!PageDirty(page)) 150 + SetPageDirty(page); 151 + return 0; 152 + } 153 + 154 + static const struct address_space_operations fb_deferred_io_aops = { 155 + .set_page_dirty = fb_deferred_io_set_page_dirty, 156 + }; 157 + 154 158 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma) 155 159 { 156 160 vma->vm_ops = &fb_deferred_io_vm_ops; ··· 212 194 } 213 195 EXPORT_SYMBOL_GPL(fb_deferred_io_init); 214 196 197 + void fb_deferred_io_open(struct fb_info *info, 198 + struct inode *inode, 199 + struct file *file) 200 + { 201 + file->f_mapping->a_ops = &fb_deferred_io_aops; 202 + } 203 + EXPORT_SYMBOL_GPL(fb_deferred_io_open); 204 + 215 205 void fb_deferred_io_cleanup(struct fb_info *info) 216 206 { 217 207 struct fb_deferred_io *fbdefio = info->fbdefio; 208 + struct page *page; 209 + int i; 218 210 219 211 BUG_ON(!fbdefio); 220 212 cancel_delayed_work_sync(&info->deferred_work); 213 + 214 + /* clear out the mapping that we setup */ 215 + for (i = 0 ; i < info->fix.smem_len; i += PAGE_SIZE) { 216 + page = fb_deferred_io_page(info, i); 217 + page->mapping = NULL; 218 + } 219 + 221 220 mutex_destroy(&fbdefio->lock); 222 221 } 223 222 EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
+4
drivers/video/fbdev/core/fbmem.c
···
1415 1415 		if (res)
1416 1416 			module_put(info->fbops->owner);
1417 1417 	}
1418 + #ifdef CONFIG_FB_DEFERRED_IO
1419 + 	if (info->fbdefio)
1420 + 		fb_deferred_io_open(info, inode, file);
1421 + #endif
1418 1422 out:
1419 1423 	unlock_fb_info(info);
1420 1424 	if (res)
+1 -1
fs/afs/write.c
···
730 730 			return ret;
731 731 		}
732 732 
733      - 		start += ret * PAGE_SIZE;
     733 + 		start += ret;
734 734 
735 735 		cond_resched();
736 736 	} while (wbc->nr_to_write > 0);
+12 -5
fs/btrfs/compression.c
···
457 457 	bytes_left = compressed_len;
458 458 	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
459 459 		int submit = 0;
460 - 		int len;
460 + 		int len = 0;
461 461 
462 462 		page = compressed_pages[pg_index];
463 463 		page->mapping = inode->vfs_inode.i_mapping;
···
465 465 			submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio,
466 466 							  0);
467 467 
468 - 		if (pg_index == 0 && use_append)
469 - 			len = bio_add_zone_append_page(bio, page, PAGE_SIZE, 0);
470 - 		else
471 - 			len = bio_add_page(bio, page, PAGE_SIZE, 0);
468 + 		/*
469 + 		 * Page can only be added to bio if the current bio fits in
470 + 		 * stripe.
471 + 		 */
472 + 		if (!submit) {
473 + 			if (pg_index == 0 && use_append)
474 + 				len = bio_add_zone_append_page(bio, page,
475 + 							       PAGE_SIZE, 0);
476 + 			else
477 + 				len = bio_add_page(bio, page, PAGE_SIZE, 0);
478 + 		}
472 479 
473 480 		page->mapping = NULL;
474 481 		if (submit || len < PAGE_SIZE) {
+18 -8
fs/btrfs/disk-io.c
···
2648 2648 		ret = -EINVAL;
2649 2649 	}
2650 2650 
2651 + 	if (memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
2652 + 		   BTRFS_FSID_SIZE)) {
2653 + 		btrfs_err(fs_info,
2654 + 		"superblock fsid doesn't match fsid of fs_devices: %pU != %pU",
2655 + 			  fs_info->super_copy->fsid, fs_info->fs_devices->fsid);
2656 + 		ret = -EINVAL;
2657 + 	}
2658 + 
2659 + 	if (btrfs_fs_incompat(fs_info, METADATA_UUID) &&
2660 + 	    memcmp(fs_info->fs_devices->metadata_uuid,
2661 + 		   fs_info->super_copy->metadata_uuid, BTRFS_FSID_SIZE)) {
2662 + 		btrfs_err(fs_info,
2663 + "superblock metadata_uuid doesn't match metadata uuid of fs_devices: %pU != %pU",
2664 + 			fs_info->super_copy->metadata_uuid,
2665 + 			fs_info->fs_devices->metadata_uuid);
2666 + 		ret = -EINVAL;
2667 + 	}
2668 + 
2651 2669 	if (memcmp(fs_info->fs_devices->metadata_uuid, sb->dev_item.fsid,
2652 2670 		   BTRFS_FSID_SIZE) != 0) {
2653 2671 		btrfs_err(fs_info,
···
3297 3279 
3298 3280 	disk_super = fs_info->super_copy;
3299 3281 
3300 - 	ASSERT(!memcmp(fs_info->fs_devices->fsid, fs_info->super_copy->fsid,
3301 - 		       BTRFS_FSID_SIZE));
3302 - 
3303 - 	if (btrfs_fs_incompat(fs_info, METADATA_UUID)) {
3304 - 		ASSERT(!memcmp(fs_info->fs_devices->metadata_uuid,
3305 - 			       fs_info->super_copy->metadata_uuid,
3306 - 			       BTRFS_FSID_SIZE));
3307 - 	}
3308 3282 
3309 3283 	features = btrfs_super_flags(disk_super);
3310 3284 	if (features & BTRFS_SUPER_FLAG_CHANGING_FSID_V2) {
+1 -1
fs/btrfs/extent-tree.c
···
1868 1868 	trace_run_delayed_ref_head(fs_info, head, 0);
1869 1869 	btrfs_delayed_ref_unlock(head);
1870 1870 	btrfs_put_delayed_ref_head(head);
1871      - 	return 0;
     1871 + 	return ret;
1872 1872 }
1873 1873 
1874 1874 static struct btrfs_delayed_ref_head *btrfs_obtain_ref_head(
+81 -27
fs/btrfs/file-item.c
···
788 788 	u64 end_byte = bytenr + len;
789 789 	u64 csum_end;
790 790 	struct extent_buffer *leaf;
791 - 	int ret;
791 + 	int ret = 0;
792 792 	const u32 csum_size = fs_info->csum_size;
793 793 	u32 blocksize_bits = fs_info->sectorsize_bits;
794 794 
···
806 806 
807 807 		ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
808 808 		if (ret > 0) {
809 + 			ret = 0;
809 810 			if (path->slots[0] == 0)
810 811 				break;
811 812 			path->slots[0]--;
···
863 862 			ret = btrfs_del_items(trans, root, path,
864 863 					      path->slots[0], del_nr);
865 864 			if (ret)
866 - 				goto out;
865 + 				break;
867 866 			if (key.offset == bytenr)
868 867 				break;
869 868 		} else if (key.offset < bytenr && csum_end > end_byte) {
···
907 906 			ret = btrfs_split_item(trans, root, path, &key, offset);
908 907 			if (ret && ret != -EAGAIN) {
909 908 				btrfs_abort_transaction(trans, ret);
910 - 				goto out;
909 + 				break;
911 910 			}
911 + 			ret = 0;
912 912 
913 913 			key.offset = end_byte - 1;
914 914 		} else {
···
919 917 		}
920 918 		btrfs_release_path(path);
921 919 	}
922 - 	ret = 0;
923 - out:
924 920 	btrfs_free_path(path);
925 921 	return ret;
922 + }
923 + 
924 + static int find_next_csum_offset(struct btrfs_root *root,
925 + 				 struct btrfs_path *path,
926 + 				 u64 *next_offset)
927 + {
928 + 	const u32 nritems = btrfs_header_nritems(path->nodes[0]);
929 + 	struct btrfs_key found_key;
930 + 	int slot = path->slots[0] + 1;
931 + 	int ret;
932 + 
933 + 	if (nritems == 0 || slot >= nritems) {
934 + 		ret = btrfs_next_leaf(root, path);
935 + 		if (ret < 0) {
936 + 			return ret;
937 + 		} else if (ret > 0) {
938 + 			*next_offset = (u64)-1;
939 + 			return 0;
940 + 		}
941 + 		slot = path->slots[0];
942 + 	}
943 + 
944 + 	btrfs_item_key_to_cpu(path->nodes[0], &found_key, slot);
945 + 
946 + 	if (found_key.objectid != BTRFS_EXTENT_CSUM_OBJECTID ||
947 + 	    found_key.type != BTRFS_EXTENT_CSUM_KEY)
948 + 		*next_offset = (u64)-1;
949 + 	else
950 + 		*next_offset = found_key.offset;
951 + 
952 + 	return 0;
926 953 }
927 954 
928 955 int btrfs_csum_file_blocks(struct btrfs_trans_handle *trans,
···
969 938 	u64 total_bytes = 0;
970 939 	u64 csum_offset;
971 940 	u64 bytenr;
972 - 	u32 nritems;
973 941 	u32 ins_size;
974 942 	int index = 0;
975 943 	int found_next;
···
1011 981 			goto insert;
1012 982 		}
1013 983 	} else {
1014 - 		int slot = path->slots[0] + 1;
1015 - 		/* we didn't find a csum item, insert one */
1016 - 		nritems = btrfs_header_nritems(path->nodes[0]);
1017 - 		if (!nritems || (path->slots[0] >= nritems - 1)) {
1018 - 			ret = btrfs_next_leaf(root, path);
1019 - 			if (ret < 0) {
1020 - 				goto out;
1021 - 			} else if (ret > 0) {
1022 - 				found_next = 1;
1023 - 				goto insert;
1024 - 			}
1025 - 			slot = path->slots[0];
1026 - 		}
1027 - 		btrfs_item_key_to_cpu(path->nodes[0], &found_key, slot);
1028 - 		if (found_key.objectid != BTRFS_EXTENT_CSUM_OBJECTID ||
1029 - 		    found_key.type != BTRFS_EXTENT_CSUM_KEY) {
1030 - 			found_next = 1;
1031 - 			goto insert;
1032 - 		}
1033 - 		next_offset = found_key.offset;
984 + 		/* We didn't find a csum item, insert one. */
985 + 		ret = find_next_csum_offset(root, path, &next_offset);
986 + 		if (ret < 0)
987 + 			goto out;
1034 988 		found_next = 1;
1035 989 		goto insert;
1036 990 	}
···
1070 1056 		tmp = sums->len - total_bytes;
1071 1057 		tmp >>= fs_info->sectorsize_bits;
1072 1058 		WARN_ON(tmp < 1);
1059 + 		extend_nr = max_t(int, 1, tmp);
1073 1060 
1074 - 		extend_nr = max_t(int, 1, (int)tmp);
1061 + 		/*
1062 + 		 * A log tree can already have checksum items with a subset of
1063 + 		 * the checksums we are trying to log. This can happen after
1064 + 		 * doing a sequence of partial writes into prealloc extents and
1065 + 		 * fsyncs in between, with a full fsync logging a larger subrange
1066 + 		 * of an extent for which a previous fast fsync logged a smaller
1067 + 		 * subrange. And this happens in particular due to merging file
1068 + 		 * extent items when we complete an ordered extent for a range
1069 + 		 * covered by a prealloc extent - this is done at
1070 + 		 * btrfs_mark_extent_written().
1071 + 		 *
1072 + 		 * So if we try to extend the previous checksum item, which has
1073 + 		 * a range that ends at the start of the range we want to insert,
1074 + 		 * make sure we don't extend beyond the start offset of the next
1075 + 		 * checksum item. If we are at the last item in the leaf, then
1076 + 		 * forget the optimization of extending and add a new checksum
1077 + 		 * item - it is not worth the complexity of releasing the path,
1078 + 		 * getting the first key for the next leaf, repeat the btree
1079 + 		 * search, etc, because log trees are temporary anyway and it
1080 + 		 * would only save a few bytes of leaf space.
1081 + 		 */
1082 + 		if (root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID) {
1083 + 			if (path->slots[0] + 1 >=
1084 + 			    btrfs_header_nritems(path->nodes[0])) {
1085 + 				ret = find_next_csum_offset(root, path, &next_offset);
1086 + 				if (ret < 0)
1087 + 					goto out;
1088 + 				found_next = 1;
1089 + 				goto insert;
1090 + 			}
1091 + 
1092 + 			ret = find_next_csum_offset(root, path, &next_offset);
1093 + 			if (ret < 0)
1094 + 				goto out;
1095 + 
1096 + 			tmp = (next_offset - bytenr) >> fs_info->sectorsize_bits;
1097 + 			if (tmp <= INT_MAX)
1098 + 				extend_nr = min_t(int, extend_nr, tmp);
1099 + 		}
1100 + 
1075 1101 		diff = (csum_offset + extend_nr) * csum_size;
1076 1102 		diff = min(diff,
1077 1103 			   MAX_CSUM_ITEMS(fs_info, csum_size) * csum_size);
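The comment added in this hunk explains why, in a log tree, extending an existing checksum item must stop at the start offset of the next one. The clamping arithmetic reduces to a shift and a `min`; a hedged userspace sketch of just that arithmetic, with illustrative names rather than the kernel types:

```c
#include <stdint.h>
#include <limits.h>

/* Sketch of the clamp added in btrfs_csum_file_blocks(): extend_nr
 * checksums would be appended to the previous item, but never past
 * next_offset, the logical byte where the next checksum item starts.
 * bytenr is where the new checksums begin; sectorsize_bits converts
 * bytes to sectors (12 for a 4K sector size). */
static int clamp_extend_nr(int extend_nr, uint64_t bytenr,
			   uint64_t next_offset, uint32_t sectorsize_bits)
{
	uint64_t room = (next_offset - bytenr) >> sectorsize_bits;

	/* Only clamp when the room fits in an int, as the patch does. */
	if (room <= INT_MAX && (int)room < extend_nr)
		extend_nr = (int)room;
	return extend_nr;
}
```

With 4K sectors, 0x4000 bytes of room between `bytenr` and `next_offset` allows at most 4 checksums, however many the caller wanted to append.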
+2 -2
fs/btrfs/file.c
···
1094 1094 	int del_nr = 0;
1095 1095 	int del_slot = 0;
1096 1096 	int recow;
1097 - 	int ret;
1097 + 	int ret = 0;
1098 1098 	u64 ino = btrfs_ino(inode);
1099 1099 
1100 1100 	path = btrfs_alloc_path();
···
1315 1315 	}
1316 1316 out:
1317 1317 	btrfs_free_path(path);
1318 - 	return 0;
1318 + 	return ret;
1319 1319 }
1320 1320 
1321 1321 /*
+18 -1
fs/btrfs/inode.c
···
3000 3000 	if (ret || truncated) {
3001 3001 		u64 unwritten_start = start;
3002 3002 
3003 + 		/*
3004 + 		 * If we failed to finish this ordered extent for any reason we
3005 + 		 * need to make sure BTRFS_ORDERED_IOERR is set on the ordered
3006 + 		 * extent, and mark the inode with the error if it wasn't
3007 + 		 * already set. Any error during writeback would have already
3008 + 		 * set the mapping error, so we need to set it if we're the ones
3009 + 		 * marking this ordered extent as failed.
3010 + 		 */
3011 + 		if (ret && !test_and_set_bit(BTRFS_ORDERED_IOERR,
3012 + 					     &ordered_extent->flags))
3013 + 			mapping_set_error(ordered_extent->inode->i_mapping, -EIO);
3014 + 
3003 3015 		if (truncated)
3004 3016 			unwritten_start += logical_len;
3005 3017 		clear_extent_uptodate(io_tree, unwritten_start, end, NULL);
···
9088 9076 	int ret2;
9089 9077 	bool root_log_pinned = false;
9090 9078 	bool dest_log_pinned = false;
9079 + 	bool need_abort = false;
9091 9080 
9092 9081 	/* we only allow rename subvolume link between subvolumes */
9093 9082 	if (old_ino != BTRFS_FIRST_FREE_OBJECTID && root != dest)
···
9148 9135 					     old_idx);
9149 9136 		if (ret)
9150 9137 			goto out_fail;
9138 + 		need_abort = true;
9151 9139 	}
9152 9140 
9153 9141 	/* And now for the dest. */
···
9164 9150 					     new_ino,
9165 9151 					     btrfs_ino(BTRFS_I(old_dir)),
9166 9152 					     new_idx);
9167 - 		if (ret)
9153 + 		if (ret) {
9154 + 			if (need_abort)
9155 + 				btrfs_abort_transaction(trans, ret);
9168 9156 			goto out_fail;
9157 + 		}
9169 9158 	}
9170 9159 
9171 9160 	/* Update inode version and ctime/mtime. */
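The ordered-extent hunk relies on `test_and_set_bit()` returning the *previous* bit value, so exactly one caller (the one that flips the bit from 0 to 1) performs the one-time `mapping_set_error()`. A non-atomic userspace sketch of that idiom (the kernel primitive is atomic; this model only shows the return-old-value contract):

```c
/* Non-atomic model of the test_and_set_bit() idiom: set the bit and
 * return its old value, so only the first caller observes 0 and runs
 * the one-time action (here, marking the mapping with the error). */
static int test_and_set_flag(unsigned long *flags, unsigned int bit)
{
	int old = (int)((*flags >> bit) & 1UL);

	*flags |= 1UL << bit;
	return old;
}
```

The caller-side pattern is then `if (ret && !test_and_set_flag(...)) do_once();`, which cannot double-report even if the completion path runs more than once.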
+22 -16
fs/btrfs/reflink.c
···
203 203 		 * inline extent's data to the page.
204 204 		 */
205 205 		ASSERT(key.offset > 0);
206 - 		ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
207 - 					  inline_data, size, datal,
208 - 					  comp_type);
209 - 		goto out;
206 + 		goto copy_to_page;
210 207 	} else if (i_size_read(dst) <= datal) {
211 208 		struct btrfs_file_extent_item *ei;
···
219 222 		    BTRFS_FILE_EXTENT_INLINE)
220 223 			goto copy_inline_extent;
221 224 
222 - 		ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
223 - 					  inline_data, size, datal, comp_type);
224 - 		goto out;
225 + 		goto copy_to_page;
225 226 	}
226 227 
227 228 copy_inline_extent:
228 - 	ret = 0;
229 229 	/*
230 230 	 * We have no extent items, or we have an extent at offset 0 which may
231 231 	 * or may not be inlined. All these cases are dealt the same way.
···
234 240 		 * clone. Deal with all these cases by copying the inline extent
235 241 		 * data into the respective page at the destination inode.
236 242 		 */
237 - 		ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
238 - 					  inline_data, size, datal, comp_type);
239 - 		goto out;
243 + 		goto copy_to_page;
240 244 	}
241 245 
246 + 	/*
247 + 	 * Release path before starting a new transaction so we don't hold locks
248 + 	 * that would confuse lockdep.
249 + 	 */
242 250 	btrfs_release_path(path);
243 251 	/*
244 252 	 * If we end up here it means were copy the inline extent into a leaf
···
278 282 out:
279 283 	if (!ret && !trans) {
280 284 		/*
281 - 		 * Release path before starting a new transaction so we don't
282 - 		 * hold locks that would confuse lockdep.
283 - 		 */
284 - 		btrfs_release_path(path);
285 - 		/*
286 285 		 * No transaction here means we copied the inline extent into a
287 286 		 * page of the destination inode.
288 287 		 *
···
297 306 		*trans_out = trans;
298 307 
299 308 	return ret;
309 + 
310 + copy_to_page:
311 + 	/*
312 + 	 * Release our path because we don't need it anymore and also because
313 + 	 * copy_inline_to_page() needs to reserve data and metadata, which may
314 + 	 * need to flush delalloc when we are low on available space and
315 + 	 * therefore cause a deadlock if writeback of an inline extent needs to
316 + 	 * write to the same leaf or an ordered extent completion needs to write
317 + 	 * to the same leaf.
318 + 	 */
319 + 	btrfs_release_path(path);
320 + 
321 + 	ret = copy_inline_to_page(BTRFS_I(dst), new_key->offset,
322 + 				  inline_data, size, datal, comp_type);
323 + 	goto out;
300 324 }
301 325 
302 326 /**
+29 -8
fs/btrfs/tree-log.c
···
1574 1574 	if (ret)
1575 1575 		goto out;
1576 1576 
1577 - 	btrfs_update_inode(trans, root, BTRFS_I(inode));
1577 + 	ret = btrfs_update_inode(trans, root, BTRFS_I(inode));
1578 + 	if (ret)
1579 + 		goto out;
1578 1580 	}
1579 1581 
1580 1582 	ref_ptr = (unsigned long)(ref_ptr + ref_struct_size) + namelen;
···
1751 1749 
1752 1750 	if (nlink != inode->i_nlink) {
1753 1751 		set_nlink(inode, nlink);
1754 - 		btrfs_update_inode(trans, root, BTRFS_I(inode));
1752 + 		ret = btrfs_update_inode(trans, root, BTRFS_I(inode));
1753 + 		if (ret)
1754 + 			goto out;
1755 1755 	}
1756 1756 	BTRFS_I(inode)->index_cnt = (u64)-1;
1757 1757 
···
1791 1787 			break;
1792 1788 
1793 1789 		if (ret == 1) {
1790 + 			ret = 0;
1794 1791 			if (path->slots[0] == 0)
1795 1792 				break;
1796 1793 			path->slots[0]--;
···
1804 1799 
1805 1800 		ret = btrfs_del_item(trans, root, path);
1806 1801 		if (ret)
1807 - 			goto out;
1802 + 			break;
1808 1803 
1809 1804 		btrfs_release_path(path);
1810 1805 		inode = read_one_inode(root, key.offset);
1811 - 		if (!inode)
1812 - 			return -EIO;
1806 + 		if (!inode) {
1807 + 			ret = -EIO;
1808 + 			break;
1809 + 		}
1813 1810 
1814 1811 		ret = fixup_inode_link_count(trans, root, inode);
1815 1812 		iput(inode);
1816 1813 		if (ret)
1817 - 			goto out;
1814 + 			break;
1818 1815 
1819 1816 		/*
1820 1817 		 * fixup on a directory may create new entries,
···
1825 1818 		 */
1826 1819 		key.offset = (u64)-1;
1827 1820 	}
1828 - 	ret = 0;
1829 - out:
1830 1821 	btrfs_release_path(path);
1831 1822 	return ret;
1832 1823 }
···
3302 3297 	 * begins and releases it only after writing its superblock.
3303 3298 	 */
3304 3299 	mutex_lock(&fs_info->tree_log_mutex);
3300 + 
3301 + 	/*
3302 + 	 * The previous transaction writeout phase could have failed, and thus
3303 + 	 * marked the fs in an error state. We must not commit here, as we
3304 + 	 * could have updated our generation in the super_for_commit and
3305 + 	 * writing the super here would result in transid mismatches. If there
3306 + 	 * is an error here just bail.
3307 + 	 */
3308 + 	if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
3309 + 		ret = -EIO;
3310 + 		btrfs_set_log_full_commit(trans);
3311 + 		btrfs_abort_transaction(trans, ret);
3312 + 		mutex_unlock(&fs_info->tree_log_mutex);
3313 + 		goto out_wake_log_root;
3314 + 	}
3315 + 
3305 3316 	btrfs_set_super_log_root(fs_info->super_for_commit, log_root_start);
3306 3317 	btrfs_set_super_log_root_level(fs_info->super_for_commit, log_root_level);
3307 3318 	ret = write_all_supers(fs_info, 1);
+18 -5
fs/btrfs/zoned.c
···
150 150 	return (u32)zone;
151 151 }
152 152 
153 + static inline sector_t zone_start_sector(u32 zone_number,
154 + 					 struct block_device *bdev)
155 + {
156 + 	return (sector_t)zone_number << ilog2(bdev_zone_sectors(bdev));
157 + }
158 + 
159 + static inline u64 zone_start_physical(u32 zone_number,
160 + 				      struct btrfs_zoned_device_info *zone_info)
161 + {
162 + 	return (u64)zone_number << zone_info->zone_size_shift;
163 + }
164 + 
153 165 /*
154 166  * Emulate blkdev_report_zones() for a non-zoned device. It slices up the block
155 167  * device into static sized chunks and fake a conventional zone on each of
···
417 405 		if (sb_zone + 1 >= zone_info->nr_zones)
418 406 			continue;
419 407 
420 - 		sector = sb_zone << (zone_info->zone_size_shift - SECTOR_SHIFT);
421 - 		ret = btrfs_get_dev_zones(device, sector << SECTOR_SHIFT,
408 + 		ret = btrfs_get_dev_zones(device,
409 + 					  zone_start_physical(sb_zone, zone_info),
422 410 					  &zone_info->sb_zones[sb_pos],
423 411 					  &nr_zones);
424 412 		if (ret)
···
733 721 	if (sb_zone + 1 >= nr_zones)
734 722 		return -ENOENT;
735 723 
736 - 	ret = blkdev_report_zones(bdev, sb_zone << zone_sectors_shift,
724 + 	ret = blkdev_report_zones(bdev, zone_start_sector(sb_zone, bdev),
737 725 				  BTRFS_NR_SB_LOG_ZONES, copy_zone_info_cb,
738 726 				  zones);
739 727 	if (ret < 0)
···
838 826 		return -ENOENT;
839 827 
840 828 	return blkdev_zone_mgmt(bdev, REQ_OP_ZONE_RESET,
841 - 				sb_zone << zone_sectors_shift,
829 + 				zone_start_sector(sb_zone, bdev),
842 830 				zone_sectors * BTRFS_NR_SB_LOG_ZONES, GFP_NOFS);
843 831 }
844 832 
···
890 878 		if (!(end <= sb_zone ||
891 879 		      sb_zone + BTRFS_NR_SB_LOG_ZONES <= begin)) {
892 880 			have_sb = true;
893 - 			pos = ((u64)sb_zone + BTRFS_NR_SB_LOG_ZONES) << shift;
881 + 			pos = zone_start_physical(
882 + 				sb_zone + BTRFS_NR_SB_LOG_ZONES, zinfo);
894 883 			break;
895 884 		}
896 885 
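The two helpers introduced in this hunk exist mainly to get the integer widths right: the zone number is widened to 64 bits *before* the shift, so zone numbers on large zoned devices cannot overflow a 32-bit intermediate. A userspace sketch of the same arithmetic (the shift is passed in directly here, whereas the real `zone_start_sector()` derives it from the block device via `ilog2(bdev_zone_sectors(bdev))`):

```c
#include <stdint.h>

/* Widen before shifting: with 256 MiB zones (zone_size_shift == 28),
 * zone 16 already starts at 4 GiB, past what a 32-bit intermediate
 * product can represent. */
static uint64_t zone_start_physical(uint32_t zone_number,
				    uint32_t zone_size_shift)
{
	return (uint64_t)zone_number << zone_size_shift;
}
```

Writing `zone_number << shift` without the cast would be a 32-bit shift on typical ABIs and silently truncate; centralizing the cast in one helper is the point of the cleanup.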
+1 -1
fs/coredump.c
···
519 519 	 * but then we need to teach dump_write() to restart and clear
520 520 	 * TIF_SIGPENDING.
521 521 	 */
522      - 	return signal_pending(current);
     522 + 	return fatal_signal_pending(current) || freezing(current);
523 523 }
524 524 
525 525 static void wait_for_dump_helpers(struct file *file)
+1 -1
fs/debugfs/file.c
···
893 893 
894 894 	copy[copy_len] = '\n';
895 895 
896      - 	ret = simple_read_from_buffer(user_buf, count, ppos, copy, copy_len);
     896 + 	ret = simple_read_from_buffer(user_buf, count, ppos, copy, len);
897 897 	kfree(copy);
898 898 
899 899 	return ret;
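The debugfs fix passes the full buffer length (`len`, which includes the `'\n'` appended just above) rather than `copy_len` as the last argument of `simple_read_from_buffer()`, which is the number of valid bytes in the source buffer. A userspace sketch of those semantics (simplified: no user/kernel copy, and `-EINVAL` is modeled as `-1`):

```c
#include <stddef.h>
#include <string.h>

/* Simplified model of simple_read_from_buffer(): copy up to count
 * bytes from from[*ppos .. available) into to, advancing *ppos and
 * returning the number of bytes copied (0 at end of buffer). */
static long read_from_buffer(char *to, size_t count, long *ppos,
			     const char *from, size_t available)
{
	long pos = *ppos;
	size_t n;

	if (pos < 0)
		return -1;
	if ((size_t)pos >= available || count == 0)
		return 0;
	n = count;
	if (n > available - (size_t)pos)
		n = available - (size_t)pos;
	memcpy(to, from + pos, n);
	*ppos = pos + (long)n;
	return (long)n;
}
```

Passing a too-short `available` (the pre-fix `copy_len`) would silently drop the trailing newline from the read, which is exactly what the one-character change repairs.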
+23 -20
fs/ext4/extents.c
···
3206 3206 		ext4_ext_mark_unwritten(ex2);
3207 3207 
3208 3208 	err = ext4_ext_insert_extent(handle, inode, ppath, &newex, flags);
3209 - 	if (err == -ENOSPC && (EXT4_EXT_MAY_ZEROOUT & split_flag)) {
3209 + 	if (err != -ENOSPC && err != -EDQUOT)
3210 + 		goto out;
3211 + 
3212 + 	if (EXT4_EXT_MAY_ZEROOUT & split_flag) {
3210 3213 		if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
3211 3214 			if (split_flag & EXT4_EXT_DATA_VALID1) {
3212 3215 				err = ext4_ext_zeroout(inode, ex2);
···
3235 3232 						      ext4_ext_pblock(&orig_ex));
3236 3233 		}
3237 3234 
3238 - 		if (err)
3239 - 			goto fix_extent_len;
3240 - 		/* update the extent length and mark as initialized */
3241 - 		ex->ee_len = cpu_to_le16(ee_len);
3242 - 		ext4_ext_try_to_merge(handle, inode, path, ex);
3243 - 		err = ext4_ext_dirty(handle, inode, path + path->p_depth);
3244 - 		if (err)
3245 - 			goto fix_extent_len;
3246 - 
3247 - 		/* update extent status tree */
3248 - 		err = ext4_zeroout_es(inode, &zero_ex);
3249 - 
3250 - 		goto out;
3251 - 	} else if (err)
3252 - 		goto fix_extent_len;
3253 - 
3254 - out:
3255 - 	ext4_ext_show_leaf(inode, path);
3256 - 	return err;
3235 + 		if (!err) {
3236 + 			/* update the extent length and mark as initialized */
3237 + 			ex->ee_len = cpu_to_le16(ee_len);
3238 + 			ext4_ext_try_to_merge(handle, inode, path, ex);
3239 + 			err = ext4_ext_dirty(handle, inode, path + path->p_depth);
3240 + 			if (!err)
3241 + 				/* update extent status tree */
3242 + 				err = ext4_zeroout_es(inode, &zero_ex);
3243 + 			/* If we failed at this point, we don't know in which
3244 + 			 * state the extent tree exactly is so don't try to fix
3245 + 			 * length of the original extent as it may do even more
3246 + 			 * damage.
3247 + 			 */
3248 + 			goto out;
3249 + 		}
3250 + 	}
3257 3251 
3258 3252 fix_extent_len:
3259 3253 	ex->ee_len = orig_ex.ee_len;
···
3259 3259 	 * and err is a non-zero error code.
3260 3260 	 */
3261 3261 	ext4_ext_dirty(handle, inode, path + path->p_depth);
3262 + 	return err;
3263 + out:
3264 + 	ext4_ext_show_leaf(inode, path);
3262 3265 	return err;
3263 3266 }
3264 3267 
+90 -80
fs/ext4/fast_commit.c
···
1288 1288 };
1289 1289 
1290 1290 static inline void tl_to_darg(struct dentry_info_args *darg,
1291 - 			      struct ext4_fc_tl *tl)
1291 + 			      struct ext4_fc_tl *tl, u8 *val)
1292 1292 {
1293 - 	struct ext4_fc_dentry_info *fcd;
1293 + 	struct ext4_fc_dentry_info fcd;
1294 1294 
1295 - 	fcd = (struct ext4_fc_dentry_info *)ext4_fc_tag_val(tl);
1295 + 	memcpy(&fcd, val, sizeof(fcd));
1296 1296 
1297 - 	darg->parent_ino = le32_to_cpu(fcd->fc_parent_ino);
1298 - 	darg->ino = le32_to_cpu(fcd->fc_ino);
1299 - 	darg->dname = fcd->fc_dname;
1300 - 	darg->dname_len = ext4_fc_tag_len(tl) -
1301 - 		sizeof(struct ext4_fc_dentry_info);
1297 + 	darg->parent_ino = le32_to_cpu(fcd.fc_parent_ino);
1298 + 	darg->ino = le32_to_cpu(fcd.fc_ino);
1299 + 	darg->dname = val + offsetof(struct ext4_fc_dentry_info, fc_dname);
1300 + 	darg->dname_len = le16_to_cpu(tl->fc_len) -
1301 + 		sizeof(struct ext4_fc_dentry_info);
1302 1302 }
1303 1303 
1304 1304 /* Unlink replay function */
1305 - static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl)
1305 + static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,
1306 + 				 u8 *val)
1306 1307 {
1307 1308 	struct inode *inode, *old_parent;
1308 1309 	struct qstr entry;
1309 1310 	struct dentry_info_args darg;
1310 1311 	int ret = 0;
1311 1312 
1312 - 	tl_to_darg(&darg, tl);
1313 + 	tl_to_darg(&darg, tl, val);
1313 1314 
1314 1315 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_UNLINK, darg.ino,
1315 1316 			darg.parent_ino, darg.dname_len);
···
1400 1399 }
1401 1400 
1402 1401 /* Link replay function */
1403 - static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl)
1402 + static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl,
1403 + 			       u8 *val)
1404 1404 {
1405 1405 	struct inode *inode;
1406 1406 	struct dentry_info_args darg;
1407 1407 	int ret = 0;
1408 1408 
1409 - 	tl_to_darg(&darg, tl);
1409 + 	tl_to_darg(&darg, tl, val);
1410 1410 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_LINK, darg.ino,
1411 1411 			darg.parent_ino, darg.dname_len);
1412 1412 
···
1452 1450 /*
1453 1451  * Inode replay function
1454 1452  */
1455 - static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl)
1453 + static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
1454 + 				u8 *val)
1456 1455 {
1457 - 	struct ext4_fc_inode *fc_inode;
1456 + 	struct ext4_fc_inode fc_inode;
1458 1457 	struct ext4_inode *raw_inode;
1459 1458 	struct ext4_inode *raw_fc_inode;
1460 1459 	struct inode *inode = NULL;
···
1463 1460 	int inode_len, ino, ret, tag = le16_to_cpu(tl->fc_tag);
1464 1461 	struct ext4_extent_header *eh;
1465 1462 
1466 - 	fc_inode = (struct ext4_fc_inode *)ext4_fc_tag_val(tl);
1463 + 	memcpy(&fc_inode, val, sizeof(fc_inode));
1467 1464 
1468 - 	ino = le32_to_cpu(fc_inode->fc_ino);
1465 + 	ino = le32_to_cpu(fc_inode.fc_ino);
1469 1466 	trace_ext4_fc_replay(sb, tag, ino, 0, 0);
1470 1467 
1471 1468 	inode = ext4_iget(sb, ino, EXT4_IGET_NORMAL);
···
1477 1474 
1478 1475 	ext4_fc_record_modified_inode(sb, ino);
1479 1476 
1480 - 	raw_fc_inode = (struct ext4_inode *)fc_inode->fc_raw_inode;
1477 + 	raw_fc_inode = (struct ext4_inode *)
1478 + 		(val + offsetof(struct ext4_fc_inode, fc_raw_inode));
1481 1479 	ret = ext4_get_fc_inode_loc(sb, ino, &iloc);
1482 1480 	if (ret)
1483 1481 		goto out;
1484 1482 
1485 - 	inode_len = ext4_fc_tag_len(tl) - sizeof(struct ext4_fc_inode);
1483 + 	inode_len = le16_to_cpu(tl->fc_len) - sizeof(struct ext4_fc_inode);
1486 1484 	raw_inode = ext4_raw_inode(&iloc);
1487 1485 
1488 1486 	memcpy(raw_inode, raw_fc_inode, offsetof(struct ext4_inode, i_block));
···
1551 1547  * inode for which we are trying to create a dentry here, should already have
1552 1548  * been replayed before we start here.
1553 1549  */
1554 - static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl)
1550 + static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,
1551 + 				 u8 *val)
1555 1552 {
1556 1553 	int ret = 0;
1557 1554 	struct inode *inode = NULL;
1558 1555 	struct inode *dir = NULL;
1559 1556 	struct dentry_info_args darg;
1560 1557 
1561 - 	tl_to_darg(&darg, tl);
1558 + 	tl_to_darg(&darg, tl, val);
1562 1559 
1563 1560 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_CREAT, darg.ino,
1564 1561 			darg.parent_ino, darg.dname_len);
···
1638 1633 
1639 1634 /* Replay add range tag */
1640 1635 static int ext4_fc_replay_add_range(struct super_block *sb,
1641 - 				    struct ext4_fc_tl *tl)
1636 + 				    struct ext4_fc_tl *tl, u8 *val)
1642 1637 {
1643 - 	struct ext4_fc_add_range *fc_add_ex;
1638 + 	struct ext4_fc_add_range fc_add_ex;
1644 1639 	struct ext4_extent newex, *ex;
1645 1640 	struct inode *inode;
1646 1641 	ext4_lblk_t start, cur;
···
1650 1645 	struct ext4_ext_path *path = NULL;
1651 1646 	int ret;
1652 1647 
1653 - 	fc_add_ex = (struct ext4_fc_add_range *)ext4_fc_tag_val(tl);
1654 - 	ex = (struct ext4_extent *)&fc_add_ex->fc_ex;
1648 + 	memcpy(&fc_add_ex, val, sizeof(fc_add_ex));
1649 + 	ex = (struct ext4_extent *)&fc_add_ex.fc_ex;
1655 1650 
1656 1651 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_ADD_RANGE,
1657 - 		le32_to_cpu(fc_add_ex->fc_ino), le32_to_cpu(ex->ee_block),
1652 + 		le32_to_cpu(fc_add_ex.fc_ino), le32_to_cpu(ex->ee_block),
1658 1653 		ext4_ext_get_actual_len(ex));
1659 1654 
1660 - 	inode = ext4_iget(sb, le32_to_cpu(fc_add_ex->fc_ino),
1661 - 				EXT4_IGET_NORMAL);
1655 + 	inode = ext4_iget(sb, le32_to_cpu(fc_add_ex.fc_ino), EXT4_IGET_NORMAL);
1662 1656 	if (IS_ERR(inode)) {
1663 1657 		jbd_debug(1, "Inode not found.");
1664 1658 		return 0;
···
1766 1762 
1767 1763 /* Replay DEL_RANGE tag */
1768 1764 static int
1769 - ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl)
1765 + ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
1766 + 			 u8 *val)
1770 1767 {
1771 1768 	struct inode *inode;
1772 - 	struct ext4_fc_del_range *lrange;
1769 + 	struct ext4_fc_del_range lrange;
1773 1770 	struct ext4_map_blocks map;
1774 1771 	ext4_lblk_t cur, remaining;
1775 1772 	int ret;
1776 1773 
1777 - 	lrange = (struct ext4_fc_del_range *)ext4_fc_tag_val(tl);
1778 - 	cur = le32_to_cpu(lrange->fc_lblk);
1779 - 	remaining = le32_to_cpu(lrange->fc_len);
1774 + 	memcpy(&lrange, val, sizeof(lrange));
1775 + 	cur = le32_to_cpu(lrange.fc_lblk);
1776 + 	remaining = le32_to_cpu(lrange.fc_len);
1780 1777 
1781 1778 	trace_ext4_fc_replay(sb, EXT4_FC_TAG_DEL_RANGE,
1782 - 			le32_to_cpu(lrange->fc_ino), cur, remaining);
1779 + 			le32_to_cpu(lrange.fc_ino), cur, remaining);
1783 1780 
1784 - 	inode = ext4_iget(sb, le32_to_cpu(lrange->fc_ino), EXT4_IGET_NORMAL);
1781 + 	inode = ext4_iget(sb, le32_to_cpu(lrange.fc_ino), EXT4_IGET_NORMAL);
1785 1782 	if (IS_ERR(inode)) {
1786 - 		jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange->fc_ino));
1783 + 		jbd_debug(1, "Inode %d not found", le32_to_cpu(lrange.fc_ino));
1787 1784 		return 0;
1788 1785 	}
1789 1786 
1790 1787 	ret = ext4_fc_record_modified_inode(sb, inode->i_ino);
1791 1788 
1792 1789 	jbd_debug(1, "DEL_RANGE, inode %ld, lblk %d, len %d\n",
1793 - 			inode->i_ino, le32_to_cpu(lrange->fc_lblk),
1794 - 			le32_to_cpu(lrange->fc_len));
1790 + 			inode->i_ino, le32_to_cpu(lrange.fc_lblk),
1791 + 			le32_to_cpu(lrange.fc_len));
1795 1792 	while (remaining > 0) {
1796 1793 		map.m_lblk = cur;
1797 1794 		map.m_len = remaining;
···
1813 1808 	}
1814 1809 
1815 1810 	ret = ext4_punch_hole(inode,
1816 - 		le32_to_cpu(lrange->fc_lblk) << sb->s_blocksize_bits,
1817 - 		le32_to_cpu(lrange->fc_len) <<  sb->s_blocksize_bits);
1811 + 		le32_to_cpu(lrange.fc_lblk) << sb->s_blocksize_bits,
1812 + 		le32_to_cpu(lrange.fc_len) <<  sb->s_blocksize_bits);
1818 1813 	if (ret)
1819 1814 		jbd_debug(1, "ext4_punch_hole returned %d", ret);
1820 1815 	ext4_ext_replay_shrink_inode(inode,
···
1930 1925 	struct ext4_sb_info *sbi = EXT4_SB(sb);
1931 1926 	struct ext4_fc_replay_state *state;
1932 1927 	int ret = JBD2_FC_REPLAY_CONTINUE;
1933 - 	struct ext4_fc_add_range *ext;
1934 - 	struct ext4_fc_tl *tl;
1935 - 	struct ext4_fc_tail *tail;
1936 - 	__u8 *start, *end;
1937 - 	struct ext4_fc_head *head;
1928 + 	struct ext4_fc_add_range ext;
1929 + 	struct ext4_fc_tl tl;
1930 + 	struct ext4_fc_tail tail;
1931 + 	__u8 *start, *end, *cur, *val;
1932 + 	struct ext4_fc_head head;
1938 1933 	struct ext4_extent *ex;
1939 1934 
1940 1935 	state = &sbi->s_fc_replay_state;
···
1961 1956 	}
1962 1957 
1963 1958 	state->fc_replay_expected_off++;
1964 - 	fc_for_each_tl(start, end, tl) {
1959 + 	for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
1960 + 		memcpy(&tl, cur, sizeof(tl));
1961 + 		val = cur + sizeof(tl);
1965 1962 		jbd_debug(3, "Scan phase, tag:%s, blk %lld\n",
1966 - 			  tag2str(le16_to_cpu(tl->fc_tag)), bh->b_blocknr);
1967 - 		switch (le16_to_cpu(tl->fc_tag)) {
1963 + 			  tag2str(le16_to_cpu(tl.fc_tag)), bh->b_blocknr);
1964 + 		switch (le16_to_cpu(tl.fc_tag)) {
1968 1965 		case EXT4_FC_TAG_ADD_RANGE:
1969 - 			ext = (struct ext4_fc_add_range *)ext4_fc_tag_val(tl);
1970 - 			ex = (struct ext4_extent *)&ext->fc_ex;
1966 + 			memcpy(&ext, val, sizeof(ext));
1967 + 			ex = (struct ext4_extent *)&ext.fc_ex;
1971 1968 			ret = ext4_fc_record_regions(sb,
1972 - 				le32_to_cpu(ext->fc_ino),
1969 + 				le32_to_cpu(ext.fc_ino),
1973 1970 				le32_to_cpu(ex->ee_block), ext4_ext_pblock(ex),
1974 1971 				ext4_ext_get_actual_len(ex));
1975 1972 			if (ret < 0)
···
1985 1978 		case EXT4_FC_TAG_INODE:
1986 1979 		case EXT4_FC_TAG_PAD:
1987 1980 			state->fc_cur_tag++;
1988 - 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,
1989 - 					sizeof(*tl) + ext4_fc_tag_len(tl));
1981 + 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
1982 + 					sizeof(tl) + le16_to_cpu(tl.fc_len));
1990 1983 			break;
1991 1984 		case EXT4_FC_TAG_TAIL:
1992 1985 			state->fc_cur_tag++;
1993 - 			tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);
1994 - 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,
1995 - 						sizeof(*tl) +
1986 + 			memcpy(&tail, val, sizeof(tail));
1987 + 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
1988 + 						sizeof(tl) +
1996 1989 						offsetof(struct ext4_fc_tail,
1997 1990 						fc_crc));
1998 - 			if (le32_to_cpu(tail->fc_tid) == expected_tid &&
1999 - 				le32_to_cpu(tail->fc_crc) == state->fc_crc) {
1991 + 			if (le32_to_cpu(tail.fc_tid) == expected_tid &&
1992 + 				le32_to_cpu(tail.fc_crc) == state->fc_crc) {
2000 1993 				state->fc_replay_num_tags = state->fc_cur_tag;
2001 1994 				state->fc_regions_valid =
2002 1995 					state->fc_regions_used;
···
2007 2000 			state->fc_crc = 0;
2008 2001 			break;
2009 2002 		case EXT4_FC_TAG_HEAD:
2010 - 			head = (struct ext4_fc_head *)ext4_fc_tag_val(tl);
2011 - 			if (le32_to_cpu(head->fc_features) &
2003 + 			memcpy(&head, val, sizeof(head));
2004 + 			if (le32_to_cpu(head.fc_features) &
2012 2005 				~EXT4_FC_SUPPORTED_FEATURES) {
2013 2006 				ret = -EOPNOTSUPP;
2014 2007 				break;
2015 2008 			}
2016 - 			if (le32_to_cpu(head->fc_tid) != expected_tid) {
2009 + 			if (le32_to_cpu(head.fc_tid) != expected_tid) {
2017 2010 				ret = JBD2_FC_REPLAY_STOP;
2018 2011 				break;
2019 2012 			}
2020 2013 			state->fc_cur_tag++;
2021 - 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, tl,
2022 - 					sizeof(*tl) + ext4_fc_tag_len(tl));
2014 + 			state->fc_crc = ext4_chksum(sbi, state->fc_crc, cur,
2015 + 					sizeof(tl) + le16_to_cpu(tl.fc_len));
2023 2016 			break;
2024 2017 		default:
2025 2018 			ret = state->fc_replay_num_tags ?
···
2043 2036 {
2044 2037 	struct super_block *sb = journal->j_private;
2045 2038 	struct ext4_sb_info *sbi = EXT4_SB(sb);
2046 - 	struct ext4_fc_tl *tl;
2047 - 	__u8 *start, *end;
2039 + 	struct ext4_fc_tl tl;
2040 + 	__u8 *start, *end, *cur, *val;
2048 2041 	int ret = JBD2_FC_REPLAY_CONTINUE;
2049 2042 	struct ext4_fc_replay_state *state = &sbi->s_fc_replay_state;
2050 - 	struct ext4_fc_tail *tail;
2043 + 	struct ext4_fc_tail tail;
2051 2044 
2052 2045 	if (pass == PASS_SCAN) {
2053 2046 		state->fc_current_pass = PASS_SCAN;
···
2074 2067 	start = (u8 *)bh->b_data;
2075 2068 	end = (__u8 *)bh->b_data + journal->j_blocksize - 1;
2076 2069 
2077 - 	fc_for_each_tl(start, end, tl) {
2070 + 	for (cur = start; cur < end; cur = cur + sizeof(tl) + le16_to_cpu(tl.fc_len)) {
2071 + 		memcpy(&tl, cur, sizeof(tl));
2072 + 		val = cur + sizeof(tl);
2073 + 
2078 2074 		if (state->fc_replay_num_tags == 0) {
2079 2075 			ret = JBD2_FC_REPLAY_STOP;
2080 2076 			ext4_fc_set_bitmaps_and_counters(sb);
2081 2077 			break;
2082 2078 		}
2083 2079 		jbd_debug(3, "Replay phase, tag:%s\n",
2084 - 				tag2str(le16_to_cpu(tl->fc_tag)));
2080 + 				tag2str(le16_to_cpu(tl.fc_tag)));
2085 2081 		state->fc_replay_num_tags--;
2086 - 		switch (le16_to_cpu(tl->fc_tag)) {
2082 + 		switch (le16_to_cpu(tl.fc_tag)) {
2087 2083 		case EXT4_FC_TAG_LINK:
2088 - 			ret = ext4_fc_replay_link(sb, tl);
2084 + 			ret = ext4_fc_replay_link(sb, &tl, val);
2089 2085 			break;
2090 2086 		case EXT4_FC_TAG_UNLINK:
2091 - 			ret = ext4_fc_replay_unlink(sb, tl);
2087 + 			ret = ext4_fc_replay_unlink(sb, &tl, val);
2092 2088 			break;
2093 2089 		case EXT4_FC_TAG_ADD_RANGE:
2094 - 			ret = ext4_fc_replay_add_range(sb, tl);
2090 + 			ret = ext4_fc_replay_add_range(sb, &tl, val);
2095 2091 			break;
2096 2092 		case EXT4_FC_TAG_CREAT:
2097 - 			ret = ext4_fc_replay_create(sb, tl);
2093 + 			ret = ext4_fc_replay_create(sb, &tl, val);
2098 2094 			break;
2099 2095 		case EXT4_FC_TAG_DEL_RANGE:
2100 - 			ret = ext4_fc_replay_del_range(sb, tl);
2096 + 			ret = ext4_fc_replay_del_range(sb, &tl, val);
2101 2097 			break;
2102 2098 		case EXT4_FC_TAG_INODE:
2103 - 			ret = ext4_fc_replay_inode(sb, tl);
2099 + 			ret = ext4_fc_replay_inode(sb, &tl, val);
2104 2100 			break;
2105 2101 		case EXT4_FC_TAG_PAD:
2106 2102 			trace_ext4_fc_replay(sb, EXT4_FC_TAG_PAD, 0,
2107 - 					     ext4_fc_tag_len(tl), 0);
2103 + 					     le16_to_cpu(tl.fc_len), 0);
2108 2104 			break;
2109 2105 		case EXT4_FC_TAG_TAIL:
2110 2106 			trace_ext4_fc_replay(sb, EXT4_FC_TAG_TAIL, 0,
2111 - 					     ext4_fc_tag_len(tl), 0);
2112 - 			tail = (struct ext4_fc_tail *)ext4_fc_tag_val(tl);
2113 - 			WARN_ON(le32_to_cpu(tail->fc_tid) != expected_tid);
2107 + 					     le16_to_cpu(tl.fc_len), 0);
2108 + 			memcpy(&tail, val, sizeof(tail));
2109 + 			WARN_ON(le32_to_cpu(tail.fc_tid) != expected_tid);
2114 2110 			break;
2115 2111 		case EXT4_FC_TAG_HEAD:
2116 2112 			break;
2117 2113 		default:
2118 - 			trace_ext4_fc_replay(sb, le16_to_cpu(tl->fc_tag), 0,
2119 - 					     ext4_fc_tag_len(tl), 0);
2114 + 			trace_ext4_fc_replay(sb, le16_to_cpu(tl.fc_tag), 0,
2115 + 					     le16_to_cpu(tl.fc_len), 0);
2120 2116 			ret = -ECANCELED;
2121 2117 			break;
2122 2118 		}
-19
fs/ext4/fast_commit.h
··· 153 153 #define region_last(__region) (((__region)->lblk) + ((__region)->len) - 1) 154 154 #endif 155 155 156 - #define fc_for_each_tl(__start, __end, __tl) \ 157 - for (tl = (struct ext4_fc_tl *)(__start); \ 158 - (__u8 *)tl < (__u8 *)(__end); \ 159 - tl = (struct ext4_fc_tl *)((__u8 *)tl + \ 160 - sizeof(struct ext4_fc_tl) + \ 161 - + le16_to_cpu(tl->fc_len))) 162 - 163 156 static inline const char *tag2str(__u16 tag) 164 157 { 165 158 switch (tag) { ··· 177 184 default: 178 185 return "ERROR"; 179 186 } 180 - } 181 - 182 - /* Get length of a particular tlv */ 183 - static inline int ext4_fc_tag_len(struct ext4_fc_tl *tl) 184 - { 185 - return le16_to_cpu(tl->fc_len); 186 - } 187 - 188 - /* Get a pointer to "value" of a tlv */ 189 - static inline __u8 *ext4_fc_tag_val(struct ext4_fc_tl *tl) 190 - { 191 - return (__u8 *)tl + sizeof(*tl); 192 187 } 193 188 194 189 #endif /* __FAST_COMMIT_H__ */
+4 -2
fs/ext4/ialloc.c
··· 322 322 if (is_directory) { 323 323 count = ext4_used_dirs_count(sb, gdp) - 1; 324 324 ext4_used_dirs_set(sb, gdp, count); 325 - percpu_counter_dec(&sbi->s_dirs_counter); 325 + if (percpu_counter_initialized(&sbi->s_dirs_counter)) 326 + percpu_counter_dec(&sbi->s_dirs_counter); 326 327 } 327 328 ext4_inode_bitmap_csum_set(sb, block_group, gdp, bitmap_bh, 328 329 EXT4_INODES_PER_GROUP(sb) / 8); 329 330 ext4_group_desc_csum_set(sb, block_group, gdp); 330 331 ext4_unlock_group(sb, block_group); 331 332 332 - percpu_counter_inc(&sbi->s_freeinodes_counter); 333 + if (percpu_counter_initialized(&sbi->s_freeinodes_counter)) 334 + percpu_counter_inc(&sbi->s_freeinodes_counter); 333 335 if (sbi->s_log_groups_per_flex) { 334 336 struct flex_groups *fg; 335 337
+1 -1
fs/ext4/mballoc.c
··· 3217 3217 */ 3218 3218 if (sbi->s_es->s_log_groups_per_flex >= 32) { 3219 3219 ext4_msg(sb, KERN_ERR, "too many log groups per flexible block group"); 3220 - goto err_freesgi; 3220 + goto err_freebuddy; 3221 3221 } 3222 3222 sbi->s_mb_prefetch = min_t(uint, 1 << sbi->s_es->s_log_groups_per_flex, 3223 3223 BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
+4 -2
fs/ext4/namei.c
··· 1376 1376 struct dx_hash_info *hinfo = &name->hinfo; 1377 1377 int len; 1378 1378 1379 - if (!IS_CASEFOLDED(dir) || !dir->i_sb->s_encoding) { 1379 + if (!IS_CASEFOLDED(dir) || !dir->i_sb->s_encoding || 1380 + (IS_ENCRYPTED(dir) && !fscrypt_has_encryption_key(dir))) { 1380 1381 cf_name->name = NULL; 1381 1382 return 0; 1382 1383 } ··· 1428 1427 #endif 1429 1428 1430 1429 #ifdef CONFIG_UNICODE 1431 - if (parent->i_sb->s_encoding && IS_CASEFOLDED(parent)) { 1430 + if (parent->i_sb->s_encoding && IS_CASEFOLDED(parent) && 1431 + (!IS_ENCRYPTED(parent) || fscrypt_has_encryption_key(parent))) { 1432 1432 if (fname->cf_name.name) { 1433 1433 struct qstr cf = {.name = fname->cf_name.name, 1434 1434 .len = fname->cf_name.len};
+9 -2
fs/ext4/super.c
··· 4462 4462 } 4463 4463 4464 4464 if (sb->s_blocksize != blocksize) { 4465 + /* 4466 + * bh must be released before kill_bdev(), otherwise 4467 + * it won't be freed and its page also. kill_bdev() 4468 + * is called by sb_set_blocksize(). 4469 + */ 4470 + brelse(bh); 4465 4471 /* Validate the filesystem blocksize */ 4466 4472 if (!sb_set_blocksize(sb, blocksize)) { 4467 4473 ext4_msg(sb, KERN_ERR, "bad block size %d", 4468 4474 blocksize); 4475 + bh = NULL; 4469 4476 goto failed_mount; 4470 4477 } 4471 4478 4472 - brelse(bh); 4473 4479 logical_sb_block = sb_block * EXT4_MIN_BLOCK_SIZE; 4474 4480 offset = do_div(logical_sb_block, blocksize); 4475 4481 bh = ext4_sb_bread_unmovable(sb, logical_sb_block); ··· 5208 5202 kfree(get_qf_name(sb, sbi, i)); 5209 5203 #endif 5210 5204 fscrypt_free_dummy_policy(&sbi->s_dummy_enc_policy); 5211 - ext4_blkdev_remove(sbi); 5205 + /* ext4_blkdev_remove() calls kill_bdev(), release bh before it. */ 5212 5206 brelse(bh); 5207 + ext4_blkdev_remove(sbi); 5213 5208 out_fail: 5214 5209 sb->s_fs_info = NULL; 5215 5210 kfree(sbi->s_blockgroup_lock);
+4
fs/ext4/sysfs.c
··· 315 315 #endif 316 316 EXT4_ATTR_FEATURE(metadata_csum_seed); 317 317 EXT4_ATTR_FEATURE(fast_commit); 318 + #if defined(CONFIG_UNICODE) && defined(CONFIG_FS_ENCRYPTION) 318 319 EXT4_ATTR_FEATURE(encrypted_casefold); 320 + #endif 319 321 320 322 static struct attribute *ext4_feat_attrs[] = { 321 323 ATTR_LIST(lazy_itable_init), ··· 335 333 #endif 336 334 ATTR_LIST(metadata_csum_seed), 337 335 ATTR_LIST(fast_commit), 336 + #if defined(CONFIG_UNICODE) && defined(CONFIG_FS_ENCRYPTION) 338 337 ATTR_LIST(encrypted_casefold), 338 + #endif 339 339 NULL, 340 340 }; 341 341 ATTRIBUTE_GROUPS(ext4_feat);
+4 -1
fs/gfs2/file.c
··· 911 911 current->backing_dev_info = inode_to_bdi(inode); 912 912 buffered = iomap_file_buffered_write(iocb, from, &gfs2_iomap_ops); 913 913 current->backing_dev_info = NULL; 914 - if (unlikely(buffered <= 0)) 914 + if (unlikely(buffered <= 0)) { 915 + if (!ret) 916 + ret = buffered; 915 917 goto out_unlock; 918 + } 916 919 917 920 /* 918 921 * We need to ensure that the page cache pages are written to
+24 -4
fs/gfs2/glock.c
··· 582 582 spin_unlock(&gl->gl_lockref.lock); 583 583 } 584 584 585 + static bool is_system_glock(struct gfs2_glock *gl) 586 + { 587 + struct gfs2_sbd *sdp = gl->gl_name.ln_sbd; 588 + struct gfs2_inode *m_ip = GFS2_I(sdp->sd_statfs_inode); 589 + 590 + if (gl == m_ip->i_gl) 591 + return true; 592 + return false; 593 + } 594 + 585 595 /** 586 596 * do_xmote - Calls the DLM to change the state of a lock 587 597 * @gl: The lock state ··· 681 671 * to see sd_log_error and withdraw, and in the meantime, requeue the 682 672 * work for later. 683 673 * 674 + * We make a special exception for some system glocks, such as the 675 + * system statfs inode glock, which needs to be granted before the 676 + * gfs2_quotad daemon can exit, and that exit needs to finish before 677 + * we can unmount the withdrawn file system. 678 + * 684 679 * However, if we're just unlocking the lock (say, for unmount, when 685 680 * gfs2_gl_hash_clear calls clear_glock) and recovery is complete 686 681 * then it's okay to tell dlm to unlock it. 
687 682 */ 688 683 if (unlikely(sdp->sd_log_error && !gfs2_withdrawn(sdp))) 689 684 gfs2_withdraw_delayed(sdp); 690 - if (glock_blocked_by_withdraw(gl)) { 691 - if (target != LM_ST_UNLOCKED || 692 - test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags)) { 685 + if (glock_blocked_by_withdraw(gl) && 686 + (target != LM_ST_UNLOCKED || 687 + test_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags))) { 688 + if (!is_system_glock(gl)) { 693 689 gfs2_glock_queue_work(gl, GL_GLOCK_DFT_HOLD); 694 690 goto out; 691 + } else { 692 + clear_bit(GLF_INVALIDATE_IN_PROGRESS, &gl->gl_flags); 695 693 } 696 694 } 697 695 ··· 1484 1466 glock_blocked_by_withdraw(gl) && 1485 1467 gh->gh_gl != sdp->sd_jinode_gl) { 1486 1468 sdp->sd_glock_dqs_held++; 1469 + spin_unlock(&gl->gl_lockref.lock); 1487 1470 might_sleep(); 1488 1471 wait_on_bit(&sdp->sd_flags, SDF_WITHDRAW_RECOVERY, 1489 1472 TASK_UNINTERRUPTIBLE); 1473 + spin_lock(&gl->gl_lockref.lock); 1490 1474 } 1491 1475 if (gh->gh_flags & GL_NOCACHE) 1492 1476 handle_callback(gl, LM_ST_UNLOCKED, 0, false); ··· 1795 1775 while(!list_empty(list)) { 1796 1776 gl = list_first_entry(list, struct gfs2_glock, gl_lru); 1797 1777 list_del_init(&gl->gl_lru); 1778 + clear_bit(GLF_LRU, &gl->gl_flags); 1798 1779 if (!spin_trylock(&gl->gl_lockref.lock)) { 1799 1780 add_back_to_lru: 1800 1781 list_add(&gl->gl_lru, &lru_list); ··· 1841 1820 if (!test_bit(GLF_LOCK, &gl->gl_flags)) { 1842 1821 list_move(&gl->gl_lru, &dispose); 1843 1822 atomic_dec(&lru_count); 1844 - clear_bit(GLF_LRU, &gl->gl_flags); 1845 1823 freed++; 1846 1824 continue; 1847 1825 }
+1 -1
fs/gfs2/glops.c
··· 396 396 struct timespec64 atime; 397 397 u16 height, depth; 398 398 umode_t mode = be32_to_cpu(str->di_mode); 399 - bool is_new = ip->i_inode.i_flags & I_NEW; 399 + bool is_new = ip->i_inode.i_state & I_NEW; 400 400 401 401 if (unlikely(ip->i_no_addr != be64_to_cpu(str->di_num.no_addr))) 402 402 goto corrupt;
+3 -3
fs/gfs2/log.c
··· 926 926 } 927 927 928 928 /** 929 - * ail_drain - drain the ail lists after a withdraw 929 + * gfs2_ail_drain - drain the ail lists after a withdraw 930 930 * @sdp: Pointer to GFS2 superblock 931 931 */ 932 - static void ail_drain(struct gfs2_sbd *sdp) 932 + void gfs2_ail_drain(struct gfs2_sbd *sdp) 933 933 { 934 934 struct gfs2_trans *tr; 935 935 ··· 956 956 list_del(&tr->tr_list); 957 957 gfs2_trans_free(sdp, tr); 958 958 } 959 + gfs2_drain_revokes(sdp); 959 960 spin_unlock(&sdp->sd_ail_lock); 960 961 } 961 962 ··· 1163 1162 if (tr && list_empty(&tr->tr_list)) 1164 1163 list_add(&tr->tr_list, &sdp->sd_ail1_list); 1165 1164 spin_unlock(&sdp->sd_ail_lock); 1166 - ail_drain(sdp); /* frees all transactions */ 1167 1165 tr = NULL; 1168 1166 goto out_end; 1169 1167 }
+1
fs/gfs2/log.h
··· 93 93 extern void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd); 94 94 extern void gfs2_glock_remove_revoke(struct gfs2_glock *gl); 95 95 extern void gfs2_flush_revokes(struct gfs2_sbd *sdp); 96 + extern void gfs2_ail_drain(struct gfs2_sbd *sdp); 96 97 97 98 #endif /* __LOG_DOT_H__ */
+6 -1
fs/gfs2/lops.c
··· 885 885 gfs2_log_write_page(sdp, page); 886 886 } 887 887 888 - static void revoke_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_trans *tr) 888 + void gfs2_drain_revokes(struct gfs2_sbd *sdp) 889 889 { 890 890 struct list_head *head = &sdp->sd_log_revokes; 891 891 struct gfs2_bufdata *bd; ··· 898 898 gfs2_glock_remove_revoke(gl); 899 899 kmem_cache_free(gfs2_bufdata_cachep, bd); 900 900 } 901 + } 902 + 903 + static void revoke_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_trans *tr) 904 + { 905 + gfs2_drain_revokes(sdp); 901 906 } 902 907 903 908 static void revoke_lo_before_scan(struct gfs2_jdesc *jd,
+1
fs/gfs2/lops.h
··· 20 20 extern void gfs2_pin(struct gfs2_sbd *sdp, struct buffer_head *bh); 21 21 extern int gfs2_find_jhead(struct gfs2_jdesc *jd, 22 22 struct gfs2_log_header_host *head, bool keep_cache); 23 + extern void gfs2_drain_revokes(struct gfs2_sbd *sdp); 23 24 static inline unsigned int buf_limit(struct gfs2_sbd *sdp) 24 25 { 25 26 return sdp->sd_ldptrs;
+1
fs/gfs2/util.c
··· 131 131 if (test_bit(SDF_NORECOVERY, &sdp->sd_flags) || !sdp->sd_jdesc) 132 132 return; 133 133 134 + gfs2_ail_drain(sdp); /* frees all transactions */ 134 135 inode = sdp->sd_jdesc->jd_inode; 135 136 ip = GFS2_I(inode); 136 137 i_gl = ip->i_gl;
+30 -13
fs/io_uring.c
··· 783 783 task_work_func_t func; 784 784 }; 785 785 786 + enum { 787 + IORING_RSRC_FILE = 0, 788 + IORING_RSRC_BUFFER = 1, 789 + }; 790 + 786 791 /* 787 792 * NOTE! Each of the iocb union members has the file pointer 788 793 * as the first entry in their struct definition. So you can ··· 8233 8228 { 8234 8229 int i, ret; 8235 8230 8231 + imu->acct_pages = 0; 8236 8232 for (i = 0; i < nr_pages; i++) { 8237 8233 if (!PageCompound(pages[i])) { 8238 8234 imu->acct_pages++; ··· 9676 9670 IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS | 9677 9671 IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL | 9678 9672 IORING_FEAT_POLL_32BITS | IORING_FEAT_SQPOLL_NONFIXED | 9679 - IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS; 9673 + IORING_FEAT_EXT_ARG | IORING_FEAT_NATIVE_WORKERS | 9674 + IORING_FEAT_RSRC_TAGS; 9680 9675 9681 9676 if (copy_to_user(params, p, sizeof(*p))) { 9682 9677 ret = -EFAULT; ··· 9917 9910 } 9918 9911 9919 9912 static int io_register_rsrc_update(struct io_ring_ctx *ctx, void __user *arg, 9920 - unsigned size) 9913 + unsigned size, unsigned type) 9921 9914 { 9922 9915 struct io_uring_rsrc_update2 up; 9923 9916 ··· 9925 9918 return -EINVAL; 9926 9919 if (copy_from_user(&up, arg, sizeof(up))) 9927 9920 return -EFAULT; 9928 - if (!up.nr) 9921 + if (!up.nr || up.resv) 9929 9922 return -EINVAL; 9930 - return __io_register_rsrc_update(ctx, up.type, &up, up.nr); 9923 + return __io_register_rsrc_update(ctx, type, &up, up.nr); 9931 9924 } 9932 9925 9933 9926 static int io_register_rsrc(struct io_ring_ctx *ctx, void __user *arg, 9934 - unsigned int size) 9927 + unsigned int size, unsigned int type) 9935 9928 { 9936 9929 struct io_uring_rsrc_register rr; 9937 9930 ··· 9942 9935 memset(&rr, 0, sizeof(rr)); 9943 9936 if (copy_from_user(&rr, arg, size)) 9944 9937 return -EFAULT; 9945 - if (!rr.nr) 9938 + if (!rr.nr || rr.resv || rr.resv2) 9946 9939 return -EINVAL; 9947 9940 9948 - switch (rr.type) { 9941 + switch (type) { 9949 9942 case IORING_RSRC_FILE: 9950 
9943 return io_sqe_files_register(ctx, u64_to_user_ptr(rr.data), 9951 9944 rr.nr, u64_to_user_ptr(rr.tags)); ··· 9967 9960 case IORING_REGISTER_PROBE: 9968 9961 case IORING_REGISTER_PERSONALITY: 9969 9962 case IORING_UNREGISTER_PERSONALITY: 9970 - case IORING_REGISTER_RSRC: 9971 - case IORING_REGISTER_RSRC_UPDATE: 9963 + case IORING_REGISTER_FILES2: 9964 + case IORING_REGISTER_FILES_UPDATE2: 9965 + case IORING_REGISTER_BUFFERS2: 9966 + case IORING_REGISTER_BUFFERS_UPDATE: 9972 9967 return false; 9973 9968 default: 9974 9969 return true; ··· 10096 10087 case IORING_REGISTER_RESTRICTIONS: 10097 10088 ret = io_register_restrictions(ctx, arg, nr_args); 10098 10089 break; 10099 - case IORING_REGISTER_RSRC: 10100 - ret = io_register_rsrc(ctx, arg, nr_args); 10090 + case IORING_REGISTER_FILES2: 10091 + ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_FILE); 10101 10092 break; 10102 - case IORING_REGISTER_RSRC_UPDATE: 10103 - ret = io_register_rsrc_update(ctx, arg, nr_args); 10093 + case IORING_REGISTER_FILES_UPDATE2: 10094 + ret = io_register_rsrc_update(ctx, arg, nr_args, 10095 + IORING_RSRC_FILE); 10096 + break; 10097 + case IORING_REGISTER_BUFFERS2: 10098 + ret = io_register_rsrc(ctx, arg, nr_args, IORING_RSRC_BUFFER); 10099 + break; 10100 + case IORING_REGISTER_BUFFERS_UPDATE: 10101 + ret = io_register_rsrc_update(ctx, arg, nr_args, 10102 + IORING_RSRC_BUFFER); 10104 10103 break; 10105 10104 default: 10106 10105 ret = -EINVAL;
+1 -1
fs/nfs/client.c
··· 406 406 407 407 if (cl_init->hostname == NULL) { 408 408 WARN_ON(1); 409 - return NULL; 409 + return ERR_PTR(-EINVAL); 410 410 } 411 411 412 412 /* see if the client already exists */
+1
fs/nfs/nfs4_fs.h
··· 205 205 struct inode *inode; 206 206 nfs4_stateid *stateid; 207 207 long timeout; 208 + unsigned char task_is_privileged : 1; 208 209 unsigned char delay : 1, 209 210 recovering : 1, 210 211 retry : 1;
+1 -1
fs/nfs/nfs4client.c
··· 435 435 */ 436 436 nfs_mark_client_ready(clp, -EPERM); 437 437 } 438 - nfs_put_client(clp); 439 438 clear_bit(NFS_CS_TSM_POSSIBLE, &clp->cl_flags); 439 + nfs_put_client(clp); 440 440 return old; 441 441 442 442 error:
+30 -7
fs/nfs/nfs4proc.c
··· 589 589 goto out_retry; 590 590 } 591 591 if (exception->recovering) { 592 + if (exception->task_is_privileged) 593 + return -EDEADLOCK; 592 594 ret = nfs4_wait_clnt_recover(clp); 593 595 if (test_bit(NFS_MIG_FAILED, &server->mig_status)) 594 596 return -EIO; ··· 616 614 goto out_retry; 617 615 } 618 616 if (exception->recovering) { 617 + if (exception->task_is_privileged) 618 + return -EDEADLOCK; 619 619 rpc_sleep_on(&clp->cl_rpcwaitq, task, NULL); 620 620 if (test_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) == 0) 621 621 rpc_wake_up_queued_task(&clp->cl_rpcwaitq, task); ··· 3882 3878 server->caps |= NFS_CAP_HARDLINKS; 3883 3879 if (res.has_symlinks != 0) 3884 3880 server->caps |= NFS_CAP_SYMLINKS; 3881 + #ifdef CONFIG_NFS_V4_SECURITY_LABEL 3882 + if (res.attr_bitmask[2] & FATTR4_WORD2_SECURITY_LABEL) 3883 + server->caps |= NFS_CAP_SECURITY_LABEL; 3884 + #endif 3885 3885 if (!(res.attr_bitmask[0] & FATTR4_WORD0_FILEID)) 3886 3886 server->fattr_valid &= ~NFS_ATTR_FATTR_FILEID; 3887 3887 if (!(res.attr_bitmask[1] & FATTR4_WORD1_MODE)) ··· 3906 3898 server->fattr_valid &= ~NFS_ATTR_FATTR_CTIME; 3907 3899 if (!(res.attr_bitmask[1] & FATTR4_WORD1_TIME_MODIFY)) 3908 3900 server->fattr_valid &= ~NFS_ATTR_FATTR_MTIME; 3909 - #ifdef CONFIG_NFS_V4_SECURITY_LABEL 3910 - if (!(res.attr_bitmask[2] & FATTR4_WORD2_SECURITY_LABEL)) 3911 - server->fattr_valid &= ~NFS_ATTR_FATTR_V4_SECURITY_LABEL; 3912 - #endif 3913 3901 memcpy(server->attr_bitmask_nl, res.attr_bitmask, 3914 3902 sizeof(server->attr_bitmask)); 3915 3903 server->attr_bitmask_nl[2] &= ~FATTR4_WORD2_SECURITY_LABEL; ··· 5972 5968 do { 5973 5969 err = __nfs4_proc_set_acl(inode, buf, buflen); 5974 5970 trace_nfs4_set_acl(inode, err); 5971 + if (err == -NFS4ERR_BADOWNER || err == -NFS4ERR_BADNAME) { 5972 + /* 5973 + * no need to retry since the kernel 5974 + * isn't involved in encoding the ACEs. 
5975 + */ 5976 + err = -EINVAL; 5977 + break; 5978 + } 5975 5979 err = nfs4_handle_exception(NFS_SERVER(inode), err, 5976 5980 &exception); 5977 5981 } while (exception.retry); ··· 6421 6409 struct nfs4_exception exception = { 6422 6410 .inode = data->inode, 6423 6411 .stateid = &data->stateid, 6412 + .task_is_privileged = data->args.seq_args.sa_privileged, 6424 6413 }; 6425 6414 6426 6415 if (!nfs4_sequence_done(task, &data->res.seq_res)) ··· 6545 6532 data = kzalloc(sizeof(*data), GFP_NOFS); 6546 6533 if (data == NULL) 6547 6534 return -ENOMEM; 6548 - nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1, 0); 6549 6535 6550 6536 nfs4_state_protect(server->nfs_client, 6551 6537 NFS_SP4_MACH_CRED_CLEANUP, ··· 6575 6563 } 6576 6564 } 6577 6565 6566 + if (!data->inode) 6567 + nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1, 6568 + 1); 6569 + else 6570 + nfs4_init_sequence(&data->args.seq_args, &data->res.seq_res, 1, 6571 + 0); 6578 6572 task_setup_data.callback_data = data; 6579 6573 msg.rpc_argp = &data->args; 6580 6574 msg.rpc_resp = &data->res; ··· 9658 9640 &task_setup_data.rpc_client, &msg); 9659 9641 9660 9642 dprintk("--> %s\n", __func__); 9643 + lrp->inode = nfs_igrab_and_active(lrp->args.inode); 9661 9644 if (!sync) { 9662 - lrp->inode = nfs_igrab_and_active(lrp->args.inode); 9663 9645 if (!lrp->inode) { 9664 9646 nfs4_layoutreturn_release(lrp); 9665 9647 return -EAGAIN; 9666 9648 } 9667 9649 task_setup_data.flags |= RPC_TASK_ASYNC; 9668 9650 } 9669 - nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 0); 9651 + if (!lrp->inode) 9652 + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 9653 + 1); 9654 + else 9655 + nfs4_init_sequence(&lrp->args.seq_args, &lrp->res.seq_res, 1, 9656 + 0); 9670 9657 task = rpc_run_task(&task_setup_data); 9671 9658 if (IS_ERR(task)) 9672 9659 return PTR_ERR(task);
-4
fs/nfs/nfstrace.h
··· 430 430 { O_NOATIME, "O_NOATIME" }, \ 431 431 { O_CLOEXEC, "O_CLOEXEC" }) 432 432 433 - TRACE_DEFINE_ENUM(FMODE_READ); 434 - TRACE_DEFINE_ENUM(FMODE_WRITE); 435 - TRACE_DEFINE_ENUM(FMODE_EXEC); 436 - 437 433 #define show_fmode_flags(mode) \ 438 434 __print_flags(mode, "|", \ 439 435 { ((__force unsigned long)FMODE_READ), "READ" }, \
+24 -6
fs/notify/fanotify/fanotify_user.c
··· 424 424 * events generated by the listener process itself, without disclosing 425 425 * the pids of other processes. 426 426 */ 427 - if (!capable(CAP_SYS_ADMIN) && 427 + if (FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) && 428 428 task_tgid(current) != event->pid) 429 429 metadata.pid = 0; 430 430 431 - if (path && path->mnt && path->dentry) { 431 + /* 432 + * For now, fid mode is required for an unprivileged listener and 433 + * fid mode does not report fd in events. Keep this check anyway 434 + * for safety in case fid mode requirement is relaxed in the future 435 + * to allow unprivileged listener to get events with no fd and no fid. 436 + */ 437 + if (!FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) && 438 + path && path->mnt && path->dentry) { 432 439 fd = create_fd(group, path, &f); 433 440 if (fd < 0) 434 441 return fd; ··· 1047 1040 int f_flags, fd; 1048 1041 unsigned int fid_mode = flags & FANOTIFY_FID_BITS; 1049 1042 unsigned int class = flags & FANOTIFY_CLASS_BITS; 1043 + unsigned int internal_flags = 0; 1050 1044 1051 1045 pr_debug("%s: flags=%x event_f_flags=%x\n", 1052 1046 __func__, flags, event_f_flags); ··· 1061 1053 */ 1062 1054 if ((flags & FANOTIFY_ADMIN_INIT_FLAGS) || !fid_mode) 1063 1055 return -EPERM; 1056 + 1057 + /* 1058 + * Setting the internal flag FANOTIFY_UNPRIV on the group 1059 + * prevents setting mount/filesystem marks on this group and 1060 + * prevents reporting pid and open fd in events. 
1061 + */ 1062 + internal_flags |= FANOTIFY_UNPRIV; 1064 1063 } 1065 1064 1066 1065 #ifdef CONFIG_AUDITSYSCALL ··· 1120 1105 goto out_destroy_group; 1121 1106 } 1122 1107 1123 - group->fanotify_data.flags = flags; 1108 + group->fanotify_data.flags = flags | internal_flags; 1124 1109 group->memcg = get_mem_cgroup_from_mm(current->mm); 1125 1110 1126 1111 group->fanotify_data.merge_hash = fanotify_alloc_merge_hash(); ··· 1320 1305 group = f.file->private_data; 1321 1306 1322 1307 /* 1323 - * An unprivileged user is not allowed to watch a mount point nor 1324 - * a filesystem. 1308 + * An unprivileged user is not allowed to setup mount nor filesystem 1309 + * marks. This also includes setting up such marks by a group that 1310 + * was initialized by an unprivileged user. 1325 1311 */ 1326 1312 ret = -EPERM; 1327 - if (!capable(CAP_SYS_ADMIN) && 1313 + if ((!capable(CAP_SYS_ADMIN) || 1314 + FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV)) && 1328 1315 mark_type != FAN_MARK_INODE) 1329 1316 goto fput_and_out; 1330 1317 ··· 1477 1460 max_marks = clamp(max_marks, FANOTIFY_OLD_DEFAULT_MAX_MARKS, 1478 1461 FANOTIFY_DEFAULT_MAX_USER_MARKS); 1479 1462 1463 + BUILD_BUG_ON(FANOTIFY_INIT_FLAGS & FANOTIFY_INTERNAL_GROUP_FLAGS); 1480 1464 BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 10); 1481 1465 BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 9); 1482 1466
+1 -1
fs/notify/fdinfo.c
··· 144 144 struct fsnotify_group *group = f->private_data; 145 145 146 146 seq_printf(m, "fanotify flags:%x event-flags:%x\n", 147 - group->fanotify_data.flags, 147 + group->fanotify_data.flags & FANOTIFY_INIT_FLAGS, 148 148 group->fanotify_data.f_flags); 149 149 150 150 show_fdinfo(m, f, fanotify_fdinfo);
+50 -5
fs/ocfs2/file.c
··· 1856 1856 } 1857 1857 1858 1858 /* 1859 + * zero out partial blocks of one cluster. 1860 + * 1861 + * start: file offset where zero starts, will be made upper block aligned. 1862 + * len: it will be trimmed to the end of current cluster if "start + len" 1863 + * is bigger than it. 1864 + */ 1865 + static int ocfs2_zeroout_partial_cluster(struct inode *inode, 1866 + u64 start, u64 len) 1867 + { 1868 + int ret; 1869 + u64 start_block, end_block, nr_blocks; 1870 + u64 p_block, offset; 1871 + u32 cluster, p_cluster, nr_clusters; 1872 + struct super_block *sb = inode->i_sb; 1873 + u64 end = ocfs2_align_bytes_to_clusters(sb, start); 1874 + 1875 + if (start + len < end) 1876 + end = start + len; 1877 + 1878 + start_block = ocfs2_blocks_for_bytes(sb, start); 1879 + end_block = ocfs2_blocks_for_bytes(sb, end); 1880 + nr_blocks = end_block - start_block; 1881 + if (!nr_blocks) 1882 + return 0; 1883 + 1884 + cluster = ocfs2_bytes_to_clusters(sb, start); 1885 + ret = ocfs2_get_clusters(inode, cluster, &p_cluster, 1886 + &nr_clusters, NULL); 1887 + if (ret) 1888 + return ret; 1889 + if (!p_cluster) 1890 + return 0; 1891 + 1892 + offset = start_block - ocfs2_clusters_to_blocks(sb, cluster); 1893 + p_block = ocfs2_clusters_to_blocks(sb, p_cluster) + offset; 1894 + return sb_issue_zeroout(sb, p_block, nr_blocks, GFP_NOFS); 1895 + } 1896 + 1897 + /* 1859 1898 * Parts of this function taken from xfs_change_file_space() 1860 1899 */ 1861 1900 static int __ocfs2_change_file_space(struct file *file, struct inode *inode, ··· 1904 1865 { 1905 1866 int ret; 1906 1867 s64 llen; 1907 - loff_t size; 1868 + loff_t size, orig_isize; 1908 1869 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 1909 1870 struct buffer_head *di_bh = NULL; 1910 1871 handle_t *handle; ··· 1935 1896 goto out_inode_unlock; 1936 1897 } 1937 1898 1899 + orig_isize = i_size_read(inode); 1938 1900 switch (sr->l_whence) { 1939 1901 case 0: /*SEEK_SET*/ 1940 1902 break; ··· 1943 1903 sr->l_start += f_pos; 1944 1904 break;
1945 1905 case 2: /*SEEK_END*/ 1946 - sr->l_start += i_size_read(inode); 1906 + sr->l_start += orig_isize; 1947 1907 break; 1948 1908 default: 1949 1909 ret = -EINVAL; ··· 1997 1957 default: 1998 1958 ret = -EINVAL; 1999 1959 } 1960 + 1961 + /* zeroout eof blocks in the cluster. */ 1962 + if (!ret && change_size && orig_isize < size) { 1963 + ret = ocfs2_zeroout_partial_cluster(inode, orig_isize, 1964 + size - orig_isize); 1965 + if (!ret) 1966 + i_size_write(inode, size); 1967 + } 2000 1968 up_write(&OCFS2_I(inode)->ip_alloc_sem); 2001 1969 if (ret) { 2002 1970 mlog_errno(ret); ··· 2020 1972 mlog_errno(ret); 2021 1973 goto out_inode_unlock; 2022 1974 } 2023 - 2024 - if (change_size && i_size_read(inode) < size) 2025 - i_size_write(inode, size); 2026 1975 2027 1976 inode->i_ctime = inode->i_mtime = current_time(inode); 2028 1977 ret = ocfs2_mark_inode_dirty(handle, inode, di_bh);
+8 -1
fs/proc/base.c
··· 2674 2674 } 2675 2675 2676 2676 #ifdef CONFIG_SECURITY 2677 + static int proc_pid_attr_open(struct inode *inode, struct file *file) 2678 + { 2679 + return __mem_open(inode, file, PTRACE_MODE_READ_FSCREDS); 2680 + } 2681 + 2677 2682 static ssize_t proc_pid_attr_read(struct file * file, char __user * buf, 2678 2683 size_t count, loff_t *ppos) 2679 2684 { ··· 2709 2704 int rv; 2710 2705 2711 2706 /* A task may only write when it was the opener. */ 2712 - if (file->f_cred != current_real_cred()) 2707 + if (file->private_data != current->mm) 2713 2708 return -EPERM; 2714 2709 2715 2710 rcu_read_lock(); ··· 2759 2754 } 2760 2755 2761 2756 static const struct file_operations proc_pid_attr_operations = { 2757 + .open = proc_pid_attr_open, 2762 2758 .read = proc_pid_attr_read, 2763 2759 .write = proc_pid_attr_write, 2764 2760 .llseek = generic_file_llseek, 2761 + .release = mem_release, 2765 2762 }; 2766 2763 2767 2764 #define LSM_DIR_OPS(LSM) \
+1
include/asm-generic/vmlinux.lds.h
··· 960 960 #ifdef CONFIG_AMD_MEM_ENCRYPT 961 961 #define PERCPU_DECRYPTED_SECTION \ 962 962 . = ALIGN(PAGE_SIZE); \ 963 + *(.data..decrypted) \ 963 964 *(.data..percpu..decrypted) \ 964 965 . = ALIGN(PAGE_SIZE); 965 966 #else
+76 -13
include/dt-bindings/usb/pd.h
··· 106 106 * <20:16> :: Reserved, Shall be set to zero 107 107 * <15:0> :: USB-IF assigned VID for this cable vendor 108 108 */ 109 + 110 + /* PD Rev2.0 definition */ 111 + #define IDH_PTYPE_UNDEF 0 112 + 109 113 /* SOP Product Type (UFP) */ 110 114 #define IDH_PTYPE_NOT_UFP 0 111 115 #define IDH_PTYPE_HUB 1 ··· 167 163 #define UFP_VDO_VER1_2 2 168 164 169 165 /* Device Capability */ 170 - #define DEV_USB2_CAPABLE BIT(0) 171 - #define DEV_USB2_BILLBOARD BIT(1) 172 - #define DEV_USB3_CAPABLE BIT(2) 173 - #define DEV_USB4_CAPABLE BIT(3) 166 + #define DEV_USB2_CAPABLE (1 << 0) 167 + #define DEV_USB2_BILLBOARD (1 << 1) 168 + #define DEV_USB3_CAPABLE (1 << 2) 169 + #define DEV_USB4_CAPABLE (1 << 3) 174 170 175 171 /* Connector Type */ 176 172 #define UFP_RECEPTACLE 2 ··· 195 191 196 192 /* Alternate Modes */ 197 193 #define UFP_ALTMODE_NOT_SUPP 0 198 - #define UFP_ALTMODE_TBT3 BIT(0) 199 - #define UFP_ALTMODE_RECFG BIT(1) 200 - #define UFP_ALTMODE_NO_RECFG BIT(2) 194 + #define UFP_ALTMODE_TBT3 (1 << 0) 195 + #define UFP_ALTMODE_RECFG (1 << 1) 196 + #define UFP_ALTMODE_NO_RECFG (1 << 2) 201 197 202 198 /* USB Highest Speed */ 203 199 #define UFP_USB2_ONLY 0 ··· 221 217 * <4:0> :: Port number 222 218 */ 223 219 #define DFP_VDO_VER1_1 1 224 - #define HOST_USB2_CAPABLE BIT(0) 225 - #define HOST_USB3_CAPABLE BIT(1) 226 - #define HOST_USB4_CAPABLE BIT(2) 220 + #define HOST_USB2_CAPABLE (1 << 0) 221 + #define HOST_USB3_CAPABLE (1 << 1) 222 + #define HOST_USB4_CAPABLE (1 << 2) 227 223 #define DFP_RECEPTACLE 2 228 224 #define DFP_CAPTIVE 3 229 225 ··· 232 228 | ((pnum) & 0x1f)) 233 229 234 230 /* 235 - * Passive Cable VDO 231 + * Cable VDO (for both Passive and Active Cable VDO in PD Rev2.0) 232 + * --------- 233 + * <31:28> :: Cable HW version 234 + * <27:24> :: Cable FW version 235 + * <23:20> :: Reserved, Shall be set to zero 236 + * <19:18> :: type-C to Type-A/B/C/Captive (00b == A, 01 == B, 10 == C, 11 == Captive) 237 + * <17> :: Reserved, Shall be set to zero 238 + * 
<16:13> :: cable latency (0001 == <10ns(~1m length)) 239 + * <12:11> :: cable termination type (11b == both ends active VCONN req) 240 + * <10> :: SSTX1 Directionality support (0b == fixed, 1b == cfgable) 241 + * <9> :: SSTX2 Directionality support 242 + * <8> :: SSRX1 Directionality support 243 + * <7> :: SSRX2 Directionality support 244 + * <6:5> :: Vbus current handling capability (01b == 3A, 10b == 5A) 245 + * <4> :: Vbus through cable (0b == no, 1b == yes) 246 + * <3> :: SOP" controller present? (0b == no, 1b == yes) 247 + * <2:0> :: USB SS Signaling support 248 + * 249 + * Passive Cable VDO (PD Rev3.0+) 236 250 * --------- 237 251 * <31:28> :: Cable HW version 238 252 * <27:24> :: Cable FW version ··· 266 244 * <4:3> :: Reserved, Shall be set to zero 267 245 * <2:0> :: USB highest speed 268 246 * 269 - * Active Cable VDO 1 247 + * Active Cable VDO 1 (PD Rev3.0+) 270 248 * --------- 271 249 * <31:28> :: Cable HW version 272 250 * <27:24> :: Cable FW version ··· 288 266 #define CABLE_VDO_VER1_0 0 289 267 #define CABLE_VDO_VER1_3 3 290 268 291 - /* Connector Type */ 269 + /* Connector Type (_ATYPE and _BTYPE are for PD Rev2.0 only) */ 270 + #define CABLE_ATYPE 0 271 + #define CABLE_BTYPE 1 292 272 #define CABLE_CTYPE 2 293 273 #define CABLE_CAPTIVE 3 294 274 ··· 327 303 #define CABLE_CURR_3A 1 328 304 #define CABLE_CURR_5A 2 329 305 306 + /* USB SuperSpeed Signaling Support (PD Rev2.0) */ 307 + #define CABLE_USBSS_U2_ONLY 0 308 + #define CABLE_USBSS_U31_GEN1 1 309 + #define CABLE_USBSS_U31_GEN2 2 310 + 330 311 /* USB Highest Speed */ 331 312 #define CABLE_USB2_ONLY 0 332 313 #define CABLE_USB32_GEN1 1 333 314 #define CABLE_USB32_4_GEN2 2 334 315 #define CABLE_USB4_GEN3 3 335 316 317 + #define VDO_CABLE(hw, fw, cbl, lat, term, tx1d, tx2d, rx1d, rx2d, cur, vps, sopp, usbss) \ 318 + (((hw) & 0x7) << 28 | ((fw) & 0x7) << 24 | ((cbl) & 0x3) << 18 \ 319 + | ((lat) & 0x7) << 13 | ((term) & 0x3) << 11 | (tx1d) << 10 \ 320 + | (tx2d) << 9 | (rx1d) << 8 | (rx2d) << 7 | ((cur) & 0x3) << 5 \
321 + | (vps) << 4 | (sopp) << 3 | ((usbss) & 0x7)) 336 322 #define VDO_PCABLE(hw, fw, ver, conn, lat, term, vbm, cur, spd) \ 337 323 (((hw) & 0xf) << 28 | ((fw) & 0xf) << 24 | ((ver) & 0x7) << 21 \ 338 324 | ((conn) & 0x3) << 18 | ((lat) & 0xf) << 13 | ((term) & 0x3) << 11 \ ··· 406 372 | (trans) << 11 | (phy) << 10 | (ele) << 9 | (u4) << 8 \ 407 373 | ((hops) & 0x3) << 6 | (u2) << 5 | (u32) << 4 | (lane) << 3 \ 408 374 | (iso) << 2 | (gen)) 375 + 376 + /* 377 + * AMA VDO (PD Rev2.0) 378 + * --------- 379 + * <31:28> :: Cable HW version 380 + * <27:24> :: Cable FW version 381 + * <23:12> :: Reserved, Shall be set to zero 382 + * <11> :: SSTX1 Directionality support (0b == fixed, 1b == cfgable) 383 + * <10> :: SSTX2 Directionality support 384 + * <9> :: SSRX1 Directionality support 385 + * <8> :: SSRX2 Directionality support 386 + * <7:5> :: Vconn power 387 + * <4> :: Vconn power required 388 + * <3> :: Vbus power required 389 + * <2:0> :: USB SS Signaling support 390 + */ 391 + #define VDO_AMA(hw, fw, tx1d, tx2d, rx1d, rx2d, vcpwr, vcr, vbr, usbss) \ 392 + (((hw) & 0x7) << 28 | ((fw) & 0x7) << 24 \ 393 + | (tx1d) << 11 | (tx2d) << 10 | (rx1d) << 9 | (rx2d) << 8 \ 394 + | ((vcpwr) & 0x7) << 5 | (vcr) << 4 | (vbr) << 3 \ 395 + | ((usbss) & 0x7)) 396 + 397 + #define PD_VDO_AMA_VCONN_REQ(vdo) (((vdo) >> 4) & 1) 398 + #define PD_VDO_AMA_VBUS_REQ(vdo) (((vdo) >> 3) & 1) 399 + 400 + #define AMA_USBSS_U2_ONLY 0 401 + #define AMA_USBSS_U31_GEN1 1 402 + #define AMA_USBSS_U31_GEN2 2 403 + #define AMA_USBSS_BBONLY 3 409 404 410 405 /* 411 406 * VPD VDO
+1
include/linux/avf/virtchnl.h
··· 830 830 831 831 struct virtchnl_proto_hdrs { 832 832 u8 tunnel_level; 833 + u8 pad[3]; 833 834 /** 834 835 * specify where protocol header start from. 835 836 * 0 - from the outer layer
+1
include/linux/compiler_attributes.h
··· 199 199 * must end with any of these keywords: 200 200 * break; 201 201 * fallthrough; 202 + * continue; 202 203 * goto <label>; 203 204 * return [expression]; 204 205 *
+2 -1
include/linux/entry-kvm.h
··· 3 3 #define __LINUX_ENTRYKVM_H 4 4 5 5 #include <linux/entry-common.h> 6 + #include <linux/tick.h> 6 7 7 8 /* Transfer to guest mode work */ 8 9 #ifdef CONFIG_KVM_XFER_TO_GUEST_WORK ··· 58 57 static inline void xfer_to_guest_mode_prepare(void) 59 58 { 60 59 lockdep_assert_irqs_disabled(); 61 - rcu_nocb_flush_deferred_wakeup(); 60 + tick_nohz_user_enter_prepare(); 62 61 } 63 62 64 63 /**
+4
include/linux/fanotify.h
··· 51 51 #define FANOTIFY_INIT_FLAGS (FANOTIFY_ADMIN_INIT_FLAGS | \ 52 52 FANOTIFY_USER_INIT_FLAGS) 53 53 54 + /* Internal group flags */ 55 + #define FANOTIFY_UNPRIV 0x80000000 56 + #define FANOTIFY_INTERNAL_GROUP_FLAGS (FANOTIFY_UNPRIV) 57 + 54 58 #define FANOTIFY_MARK_TYPE_BITS (FAN_MARK_INODE | FAN_MARK_MOUNT | \ 55 59 FAN_MARK_FILESYSTEM) 56 60
+3
include/linux/fb.h
··· 659 659 /* drivers/video/fb_defio.c */ 660 660 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma); 661 661 extern void fb_deferred_io_init(struct fb_info *info); 662 + extern void fb_deferred_io_open(struct fb_info *info, 663 + struct inode *inode, 664 + struct file *file); 662 665 extern void fb_deferred_io_cleanup(struct fb_info *info); 663 666 extern int fb_deferred_io_fsync(struct file *file, loff_t start, 664 667 loff_t end, int datasync);
+1 -2
include/linux/hid.h
··· 1167 1167 */ 1168 1168 static inline u32 hid_report_len(struct hid_report *report) 1169 1169 { 1170 - /* equivalent to DIV_ROUND_UP(report->size, 8) + !!(report->id > 0) */ 1171 - return ((report->size - 1) >> 3) + 1 + (report->id > 0); 1170 + return DIV_ROUND_UP(report->size, 8) + (report->id > 0); 1172 1171 } 1173 1172 1174 1173 int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
+24 -6
include/linux/host1x.h
··· 332 332 int host1x_device_init(struct host1x_device *device); 333 333 int host1x_device_exit(struct host1x_device *device); 334 334 335 - int __host1x_client_register(struct host1x_client *client, 336 - struct lock_class_key *key); 337 - #define host1x_client_register(class) \ 338 - ({ \ 339 - static struct lock_class_key __key; \ 340 - __host1x_client_register(class, &__key); \ 335 + void __host1x_client_init(struct host1x_client *client, struct lock_class_key *key); 336 + void host1x_client_exit(struct host1x_client *client); 337 + 338 + #define host1x_client_init(client) \ 339 + ({ \ 340 + static struct lock_class_key __key; \ 341 + __host1x_client_init(client, &__key); \ 342 + }) 343 + 344 + int __host1x_client_register(struct host1x_client *client); 345 + 346 + /* 347 + * Note that this wrapper calls __host1x_client_init() for compatibility 348 + * with existing callers. Callers that want to separately initialize and 349 + * register a host1x client must first initialize using either of the 350 + * __host1x_client_init() or host1x_client_init() functions and then use 351 + * the low-level __host1x_client_register() function to avoid the client 352 + * getting reinitialized. 353 + */ 354 + #define host1x_client_register(client) \ 355 + ({ \ 356 + static struct lock_class_key __key; \ 357 + __host1x_client_init(client, &__key); \ 358 + __host1x_client_register(client); \ 341 359 }) 342 360 343 361 int host1x_client_unregister(struct host1x_client *client);
+9 -1
include/linux/kvm_host.h
··· 1185 1185 static inline unsigned long 1186 1186 __gfn_to_hva_memslot(const struct kvm_memory_slot *slot, gfn_t gfn) 1187 1187 { 1188 - return slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE; 1188 + /* 1189 + * The index was checked originally in search_memslots. To avoid 1190 + * that a malicious guest builds a Spectre gadget out of e.g. page 1191 + * table walks, do not let the processor speculate loads outside 1192 + * the guest's registered memslots. 1193 + */ 1194 + unsigned long offset = gfn - slot->base_gfn; 1195 + offset = array_index_nospec(offset, slot->npages); 1196 + return slot->userspace_addr + offset * PAGE_SIZE; 1189 1197 } 1190 1198 1191 1199 static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
+1 -3
include/linux/mfd/rohm-bd70528.h
··· 26 26 struct mutex rtc_timer_lock; 27 27 }; 28 28 29 - #define BD70528_BUCK_VOLTS 17 30 - #define BD70528_BUCK_VOLTS 17 31 - #define BD70528_BUCK_VOLTS 17 29 + #define BD70528_BUCK_VOLTS 0x10 32 30 #define BD70528_LDO_VOLTS 0x20 33 31 34 32 #define BD70528_REG_BUCK1_EN 0x0F
+5 -5
include/linux/mfd/rohm-bd71828.h
··· 26 26 BD71828_REGULATOR_AMOUNT, 27 27 }; 28 28 29 - #define BD71828_BUCK1267_VOLTS 0xEF 30 - #define BD71828_BUCK3_VOLTS 0x10 31 - #define BD71828_BUCK4_VOLTS 0x20 32 - #define BD71828_BUCK5_VOLTS 0x10 33 - #define BD71828_LDO_VOLTS 0x32 29 + #define BD71828_BUCK1267_VOLTS 0x100 30 + #define BD71828_BUCK3_VOLTS 0x20 31 + #define BD71828_BUCK4_VOLTS 0x40 32 + #define BD71828_BUCK5_VOLTS 0x20 33 + #define BD71828_LDO_VOLTS 0x40 34 34 /* LDO6 is fixed 1.8V voltage */ 35 35 #define BD71828_LDO_6_VOLTAGE 1800000 36 36
+1
include/linux/mlx4/device.h
··· 630 630 bool wol_port[MLX4_MAX_PORTS + 1]; 631 631 struct mlx4_rate_limit_caps rl_caps; 632 632 u32 health_buffer_addrs; 633 + bool map_clock_to_user; 633 634 }; 634 635 635 636 struct mlx4_buf_list {
+2
include/linux/mlx5/mlx5_ifc.h
··· 1289 1289 1290 1290 #define MLX5_FC_BULK_NUM_FCS(fc_enum) (MLX5_FC_BULK_SIZE_FACTOR * (fc_enum)) 1291 1291 1292 + #define MLX5_FT_MAX_MULTIPATH_LEVEL 63 1293 + 1292 1294 enum { 1293 1295 MLX5_STEERING_FORMAT_CONNECTX_5 = 0, 1294 1296 MLX5_STEERING_FORMAT_CONNECTX_6DX = 1,
+20 -7
include/linux/mm_types.h
··· 445 445 */ 446 446 atomic_t has_pinned; 447 447 448 - /** 449 - * @write_protect_seq: Locked when any thread is write 450 - * protecting pages mapped by this mm to enforce a later COW, 451 - * for instance during page table copying for fork(). 452 - */ 453 - seqcount_t write_protect_seq; 454 - 455 448 #ifdef CONFIG_MMU 456 449 atomic_long_t pgtables_bytes; /* PTE page table pages */ 457 450 #endif ··· 453 460 spinlock_t page_table_lock; /* Protects page tables and some 454 461 * counters 455 462 */ 463 + /* 464 + * With some kernel config, the current mmap_lock's offset 465 + * inside 'mm_struct' is at 0x120, which is very optimal, as 466 + * its two hot fields 'count' and 'owner' sit in 2 different 467 + * cachelines, and when mmap_lock is highly contended, both 468 + * of the 2 fields will be accessed frequently, current layout 469 + * will help to reduce cache bouncing. 470 + * 471 + * So please be careful with adding new fields before 472 + * mmap_lock, which can easily push the 2 fields into one 473 + * cacheline. 474 + */ 456 475 struct rw_semaphore mmap_lock; 457 476 458 477 struct list_head mmlist; /* List of maybe swapped mm's. These ··· 485 480 unsigned long stack_vm; /* VM_STACK */ 486 481 unsigned long def_flags; 487 482 483 + /** 484 + * @write_protect_seq: Locked when any thread is write 485 + * protecting pages mapped by this mm to enforce a later COW, 486 + * for instance during page table copying for fork(). 487 + */ 488 + seqcount_t write_protect_seq; 489 + 488 490 spinlock_t arg_lock; /* protect the below fields */ 491 + 489 492 unsigned long start_code, end_code, start_data, end_data; 490 493 unsigned long start_brk, brk, start_stack; 491 494 unsigned long arg_start, arg_end, env_start, env_end;
+2
include/linux/pci.h
··· 2344 2344 struct device_node; 2345 2345 struct irq_domain; 2346 2346 struct irq_domain *pci_host_bridge_of_msi_domain(struct pci_bus *bus); 2347 + bool pci_host_of_has_msi_map(struct device *dev); 2347 2348 2348 2349 /* Arch may override this (weak) */ 2349 2350 struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus); ··· 2352 2351 #else /* CONFIG_OF */ 2353 2352 static inline struct irq_domain * 2354 2353 pci_host_bridge_of_msi_domain(struct pci_bus *bus) { return NULL; } 2354 + static inline bool pci_host_of_has_msi_map(struct device *dev) { return false; } 2355 2355 #endif /* CONFIG_OF */ 2356 2356 2357 2357 static inline struct device_node *
+8
include/linux/pgtable.h
··· 432 432 * To be differentiate with macro pte_mkyoung, this macro is used on platforms 433 433 * where software maintains page access bit. 434 434 */ 435 + #ifndef pte_sw_mkyoung 436 + static inline pte_t pte_sw_mkyoung(pte_t pte) 437 + { 438 + return pte; 439 + } 440 + #define pte_sw_mkyoung pte_sw_mkyoung 441 + #endif 442 + 435 443 #ifndef pte_savedwrite 436 444 #define pte_savedwrite pte_write 437 445 #endif
+1
include/linux/platform_data/ti-sysc.h
··· 50 50 s8 emufree_shift; 51 51 }; 52 52 53 + #define SYSC_QUIRK_REINIT_ON_RESUME BIT(27) 53 54 #define SYSC_QUIRK_GPMC_DEBUG BIT(26) 54 55 #define SYSC_MODULE_QUIRK_ENA_RESETDONE BIT(25) 55 56 #define SYSC_MODULE_QUIRK_PRUSS BIT(24)
+2
include/linux/rtsx_pci.h
··· 1109 1109 }; 1110 1110 1111 1111 enum PDEV_STAT {PDEV_STAT_IDLE, PDEV_STAT_RUN}; 1112 + enum ASPM_MODE {ASPM_MODE_CFG, ASPM_MODE_REG}; 1112 1113 1113 1114 #define ASPM_L1_1_EN BIT(0) 1114 1115 #define ASPM_L1_2_EN BIT(1) ··· 1235 1234 u8 card_drive_sel; 1236 1235 #define ASPM_L1_EN 0x02 1237 1236 u8 aspm_en; 1237 + enum ASPM_MODE aspm_mode; 1238 1238 bool aspm_enabled; 1239 1239 1240 1240 #define PCR_MS_PMOS (1 << 0)
+8
include/linux/sched.h
··· 350 350 * Only for tasks we track a moving average of the past instantaneous 351 351 * estimated utilization. This allows to absorb sporadic drops in utilization 352 352 * of an otherwise almost periodic task. 353 + * 354 + * The UTIL_AVG_UNCHANGED flag is used to synchronize util_est with util_avg 355 + * updates. When a task is dequeued, its util_est should not be updated if its 356 + * util_avg has not been updated in the meantime. 357 + * This information is mapped into the MSB bit of util_est.enqueued at dequeue 358 + * time. Since max value of util_est.enqueued for a task is 1024 (PELT util_avg 359 + * for a task) it is safe to use MSB. 353 360 */ 354 361 struct util_est { 355 362 unsigned int enqueued; 356 363 unsigned int ewma; 357 364 #define UTIL_EST_WEIGHT_SHIFT 2 365 + #define UTIL_AVG_UNCHANGED 0x80000000 358 366 } __attribute__((__aligned__(sizeof(u64)))); 359 367 360 368 /*
+7
include/linux/tick.h
··· 11 11 #include <linux/context_tracking_state.h> 12 12 #include <linux/cpumask.h> 13 13 #include <linux/sched.h> 14 + #include <linux/rcupdate.h> 14 15 15 16 #ifdef CONFIG_GENERIC_CLOCKEVENTS 16 17 extern void __init tick_init(void); ··· 299 298 { 300 299 if (tick_nohz_full_enabled()) 301 300 __tick_nohz_task_switch(); 301 + } 302 + 303 + static inline void tick_nohz_user_enter_prepare(void) 304 + { 305 + if (tick_nohz_full_cpu(smp_processor_id())) 306 + rcu_nocb_flush_deferred_wakeup(); 302 307 } 303 308 304 309 #endif
+1 -1
include/linux/usb/pd.h
··· 460 460 #define PD_T_RECEIVER_RESPONSE 15 /* 15ms max */ 461 461 #define PD_T_SOURCE_ACTIVITY 45 462 462 #define PD_T_SINK_ACTIVITY 135 463 - #define PD_T_SINK_WAIT_CAP 240 463 + #define PD_T_SINK_WAIT_CAP 310 /* 310 - 620 ms */ 464 464 #define PD_T_PS_TRANSITION 500 465 465 #define PD_T_SRC_TRANSITION 35 466 466 #define PD_T_DRP_SNK 40
-4
include/linux/usb/pd_ext_sdb.h
··· 24 24 #define USB_PD_EXT_SDB_EVENT_OVP BIT(3) 25 25 #define USB_PD_EXT_SDB_EVENT_CF_CV_MODE BIT(4) 26 26 27 - #define USB_PD_EXT_SDB_PPS_EVENTS (USB_PD_EXT_SDB_EVENT_OCP | \ 28 - USB_PD_EXT_SDB_EVENT_OTP | \ 29 - USB_PD_EXT_SDB_EVENT_OVP) 30 - 31 27 #endif /* __LINUX_USB_PD_EXT_SDB_H */
+1 -1
include/net/caif/caif_dev.h
··· 119 119 * The link_support layer is used to add any Link Layer specific 120 120 * framing. 121 121 */ 122 - void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev, 122 + int caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev, 123 123 struct cflayer *link_support, int head_room, 124 124 struct cflayer **layer, int (**rcv_func)( 125 125 struct sk_buff *, struct net_device *,
+1 -1
include/net/caif/cfcnfg.h
··· 62 62 * @fcs: Specify if checksum is used in CAIF Framing Layer. 63 63 * @head_room: Head space needed by link specific protocol. 64 64 */ 65 - void 65 + int 66 66 cfcnfg_add_phy_layer(struct cfcnfg *cnfg, 67 67 struct net_device *dev, struct cflayer *phy_layer, 68 68 enum cfcnfg_phy_preference pref,
+1
include/net/caif/cfserl.h
··· 9 9 #include <net/caif/caif_layer.h> 10 10 11 11 struct cflayer *cfserl_create(int instance, bool use_stx); 12 + void cfserl_release(struct cflayer *layer); 12 13 #endif
-6
include/net/netfilter/nf_tables.h
··· 1506 1506 1507 1507 struct nft_trans_table { 1508 1508 bool update; 1509 - u8 state; 1510 - u32 flags; 1511 1509 }; 1512 1510 1513 1511 #define nft_trans_table_update(trans) \ 1514 1512 (((struct nft_trans_table *)trans->data)->update) 1515 - #define nft_trans_table_state(trans) \ 1516 - (((struct nft_trans_table *)trans->data)->state) 1517 - #define nft_trans_table_flags(trans) \ 1518 - (((struct nft_trans_table *)trans->data)->flags) 1519 1513 1520 1514 struct nft_trans_elem { 1521 1515 struct nft_set *set;
+9 -1
include/net/tls.h
··· 193 193 (sizeof(struct tls_offload_context_tx) + TLS_DRIVER_STATE_SIZE_TX) 194 194 195 195 enum tls_context_flags { 196 - TLS_RX_SYNC_RUNNING = 0, 196 + /* tls_device_down was called after the netdev went down, device state 197 + * was released, and kTLS works in software, even though rx_conf is 198 + * still TLS_HW (needed for transition). 199 + */ 200 + TLS_RX_DEV_DEGRADED = 0, 197 201 /* Unlike RX where resync is driven entirely by the core in TX only 198 202 * the driver knows when things went out of sync, so we need the flag 199 203 * to be atomic. ··· 270 266 271 267 /* cache cold stuff */ 272 268 struct proto *sk_proto; 269 + struct sock *sk; 273 270 274 271 void (*sk_destruct)(struct sock *sk); 275 272 ··· 453 448 struct sk_buff * 454 449 tls_validate_xmit_skb(struct sock *sk, struct net_device *dev, 455 450 struct sk_buff *skb); 451 + struct sk_buff * 452 + tls_validate_xmit_skb_sw(struct sock *sk, struct net_device *dev, 453 + struct sk_buff *skb); 456 454 457 455 static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk) 458 456 {
+1
include/uapi/linux/input-event-codes.h
··· 611 611 #define KEY_VOICECOMMAND 0x246 /* Listening Voice Command */ 612 612 #define KEY_ASSISTANT 0x247 /* AL Context-aware desktop assistant */ 613 613 #define KEY_KBD_LAYOUT_NEXT 0x248 /* AC Next Keyboard Layout Select */ 614 + #define KEY_EMOJI_PICKER 0x249 /* Show/hide emoji picker (HUTRR101) */ 614 615 615 616 #define KEY_BRIGHTNESS_MIN 0x250 /* Set Brightness to Minimum */ 616 617 #define KEY_BRIGHTNESS_MAX 0x251 /* Set Brightness to Maximum */
+10 -9
include/uapi/linux/io_uring.h
··· 280 280 #define IORING_FEAT_SQPOLL_NONFIXED (1U << 7) 281 281 #define IORING_FEAT_EXT_ARG (1U << 8) 282 282 #define IORING_FEAT_NATIVE_WORKERS (1U << 9) 283 + #define IORING_FEAT_RSRC_TAGS (1U << 10) 283 284 284 285 /* 285 286 * io_uring_register(2) opcodes and arguments ··· 299 298 IORING_UNREGISTER_PERSONALITY = 10, 300 299 IORING_REGISTER_RESTRICTIONS = 11, 301 300 IORING_REGISTER_ENABLE_RINGS = 12, 302 - IORING_REGISTER_RSRC = 13, 303 - IORING_REGISTER_RSRC_UPDATE = 14, 301 + 302 + /* extended with tagging */ 303 + IORING_REGISTER_FILES2 = 13, 304 + IORING_REGISTER_FILES_UPDATE2 = 14, 305 + IORING_REGISTER_BUFFERS2 = 15, 306 + IORING_REGISTER_BUFFERS_UPDATE = 16, 304 307 305 308 /* this goes last */ 306 309 IORING_REGISTER_LAST ··· 317 312 __aligned_u64 /* __s32 * */ fds; 318 313 }; 319 314 320 - enum { 321 - IORING_RSRC_FILE = 0, 322 - IORING_RSRC_BUFFER = 1, 323 - }; 324 - 325 315 struct io_uring_rsrc_register { 326 - __u32 type; 327 316 __u32 nr; 317 + __u32 resv; 318 + __u64 resv2; 328 319 __aligned_u64 data; 329 320 __aligned_u64 tags; 330 321 }; ··· 336 335 __u32 resv; 337 336 __aligned_u64 data; 338 337 __aligned_u64 tags; 339 - __u32 type; 340 338 __u32 nr; 339 + __u32 resv2; 341 340 }; 342 341 343 342 /* Skip updating fd indexes set to this value in the fd table */
+1 -1
include/uapi/linux/virtio_ids.h
··· 54 54 #define VIRTIO_ID_SOUND 25 /* virtio sound */ 55 55 #define VIRTIO_ID_FS 26 /* virtio filesystem */ 56 56 #define VIRTIO_ID_PMEM 27 /* virtio pmem */ 57 - #define VIRTIO_ID_BT 28 /* virtio bluetooth */ 58 57 #define VIRTIO_ID_MAC80211_HWSIM 29 /* virtio mac80211-hwsim */ 58 + #define VIRTIO_ID_BT 40 /* virtio bluetooth */ 59 59 60 60 #endif /* _LINUX_VIRTIO_IDS_H */
+1 -1
init/main.c
··· 1537 1537 */ 1538 1538 set_mems_allowed(node_states[N_MEMORY]); 1539 1539 1540 - cad_pid = task_pid(current); 1540 + cad_pid = get_pid(task_pid(current)); 1541 1541 1542 1542 smp_prepare_cpus(setup_max_cpus); 1543 1543
+5 -2
kernel/bpf/helpers.c
··· 14 14 #include <linux/jiffies.h> 15 15 #include <linux/pid_namespace.h> 16 16 #include <linux/proc_ns.h> 17 + #include <linux/security.h> 17 18 18 19 #include "../../lib/kstrtox.h" 19 20 ··· 1070 1069 case BPF_FUNC_probe_read_user: 1071 1070 return &bpf_probe_read_user_proto; 1072 1071 case BPF_FUNC_probe_read_kernel: 1073 - return &bpf_probe_read_kernel_proto; 1072 + return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1073 + NULL : &bpf_probe_read_kernel_proto; 1074 1074 case BPF_FUNC_probe_read_user_str: 1075 1075 return &bpf_probe_read_user_str_proto; 1076 1076 case BPF_FUNC_probe_read_kernel_str: 1077 - return &bpf_probe_read_kernel_str_proto; 1077 + return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1078 + NULL : &bpf_probe_read_kernel_str_proto; 1078 1079 case BPF_FUNC_snprintf_btf: 1079 1080 return &bpf_snprintf_btf_proto; 1080 1081 case BPF_FUNC_snprintf:
+4
kernel/cgroup/cgroup-v1.c
··· 820 820 struct cgroup *cgrp = kn->priv; 821 821 int ret; 822 822 823 + /* do not accept '\n' to prevent making /proc/<pid>/cgroup unparsable */ 824 + if (strchr(new_name_str, '\n')) 825 + return -EINVAL; 826 + 823 827 if (kernfs_type(kn) != KERNFS_DIR) 824 828 return -ENOTDIR; 825 829 if (kn->parent != new_parent)
+3 -2
kernel/entry/common.c
··· 5 5 #include <linux/highmem.h> 6 6 #include <linux/livepatch.h> 7 7 #include <linux/audit.h> 8 + #include <linux/tick.h> 8 9 9 10 #include "common.h" 10 11 ··· 187 186 local_irq_disable_exit_to_user(); 188 187 189 188 /* Check if any of the above work has queued a deferred wakeup */ 190 - rcu_nocb_flush_deferred_wakeup(); 189 + tick_nohz_user_enter_prepare(); 191 190 192 191 ti_work = READ_ONCE(current_thread_info()->flags); 193 192 } ··· 203 202 lockdep_assert_irqs_disabled(); 204 203 205 204 /* Flush pending rcuog wakeup before the last need_resched() check */ 206 - rcu_nocb_flush_deferred_wakeup(); 205 + tick_nohz_user_enter_prepare(); 207 206 208 207 if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK)) 209 208 ti_work = exit_to_user_mode_loop(regs, ti_work);
+2
kernel/events/core.c
··· 4609 4609 cpuctx = per_cpu_ptr(pmu->pmu_cpu_context, cpu); 4610 4610 ctx = &cpuctx->ctx; 4611 4611 get_ctx(ctx); 4612 + raw_spin_lock_irqsave(&ctx->lock, flags); 4612 4613 ++ctx->pin_count; 4614 + raw_spin_unlock_irqrestore(&ctx->lock, flags); 4613 4615 4614 4616 return ctx; 4615 4617 }
-3
kernel/irq_work.c
··· 70 70 if (!irq_work_claim(work)) 71 71 return false; 72 72 73 - /*record irq_work call stack in order to print it in KASAN reports*/ 74 - kasan_record_aux_stack(work); 75 - 76 73 /* Queue the entry and raise the IPI if needed. */ 77 74 preempt_disable(); 78 75 __irq_work_queue_local(work);
+2 -1
kernel/sched/debug.c
··· 885 885 #define __PS(S, F) SEQ_printf(m, "%-45s:%21Ld\n", S, (long long)(F)) 886 886 #define __P(F) __PS(#F, F) 887 887 #define P(F) __PS(#F, p->F) 888 + #define PM(F, M) __PS(#F, p->F & (M)) 888 889 #define __PSN(S, F) SEQ_printf(m, "%-45s:%14Ld.%06ld\n", S, SPLIT_NS((long long)(F))) 889 890 #define __PN(F) __PSN(#F, F) 890 891 #define PN(F) __PSN(#F, p->F) ··· 1012 1011 P(se.avg.util_avg); 1013 1012 P(se.avg.last_update_time); 1014 1013 P(se.avg.util_est.ewma); 1015 - P(se.avg.util_est.enqueued); 1014 + PM(se.avg.util_est.enqueued, ~UTIL_AVG_UNCHANGED); 1016 1015 #endif 1017 1016 #ifdef CONFIG_UCLAMP_TASK 1018 1017 __PS("uclamp.min", p->uclamp_req[UCLAMP_MIN].value);
+17 -11
kernel/sched/fair.c
··· 3499 3499 static inline void 3500 3500 update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq *gcfs_rq) 3501 3501 { 3502 - long delta_avg, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum; 3502 + long delta, running_sum, runnable_sum = gcfs_rq->prop_runnable_sum; 3503 3503 unsigned long load_avg; 3504 3504 u64 load_sum = 0; 3505 - s64 delta_sum; 3506 3505 u32 divider; 3507 3506 3508 3507 if (!runnable_sum) ··· 3548 3549 load_sum = (s64)se_weight(se) * runnable_sum; 3549 3550 load_avg = div_s64(load_sum, divider); 3550 3551 3551 - delta_sum = load_sum - (s64)se_weight(se) * se->avg.load_sum; 3552 - delta_avg = load_avg - se->avg.load_avg; 3552 + delta = load_avg - se->avg.load_avg; 3553 3553 3554 3554 se->avg.load_sum = runnable_sum; 3555 3555 se->avg.load_avg = load_avg; 3556 - add_positive(&cfs_rq->avg.load_avg, delta_avg); 3557 - add_positive(&cfs_rq->avg.load_sum, delta_sum); 3556 + 3557 + add_positive(&cfs_rq->avg.load_avg, delta); 3558 + cfs_rq->avg.load_sum = cfs_rq->avg.load_avg * divider; 3558 3559 } 3559 3560 3560 3561 static inline void add_tg_cfs_propagate(struct cfs_rq *cfs_rq, long runnable_sum) ··· 3765 3766 */ 3766 3767 static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) 3767 3768 { 3769 + /* 3770 + * cfs_rq->avg.period_contrib can be used for both cfs_rq and se. 3771 + * See ___update_load_avg() for details. 3772 + */ 3773 + u32 divider = get_pelt_divider(&cfs_rq->avg); 3774 + 3768 3775 dequeue_load_avg(cfs_rq, se); 3769 3776 sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg); 3770 - sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum); 3777 + cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * divider; 3771 3778 sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg); 3772 - sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum); 3779 + cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * divider; 3773 3780 3774 3781 add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum); 3775 3782 ··· 3907 3902 { 3908 3903 struct util_est ue = READ_ONCE(p->se.avg.util_est); 3909 3904 3910 - return (max(ue.ewma, ue.enqueued) | UTIL_AVG_UNCHANGED); 3905 + return max(ue.ewma, (ue.enqueued & ~UTIL_AVG_UNCHANGED)); 3911 3906 } 3912 3907 3913 3908 static inline unsigned long task_util_est(struct task_struct *p) ··· 4007 4002 * Reset EWMA on utilization increases, the moving average is used only 4008 4003 * to smooth utilization decreases. 4009 4004 */ 4010 - ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED); 4005 + ue.enqueued = task_util(p); 4011 4006 if (sched_feat(UTIL_EST_FASTUP)) { 4012 4007 if (ue.ewma < ue.enqueued) { 4013 4008 ue.ewma = ue.enqueued; ··· 4056 4051 ue.ewma += last_ewma_diff; 4057 4052 ue.ewma >>= UTIL_EST_WEIGHT_SHIFT; 4058 4053 done: 4054 + ue.enqueued |= UTIL_AVG_UNCHANGED; 4059 4055 WRITE_ONCE(p->se.avg.util_est, ue); 4060 4056 4061 4057 trace_sched_util_est_se_tp(&p->se); ··· 8036 8030 /* Propagate pending load changes to the parent, if any: */ 8037 8031 se = cfs_rq->tg->se[cpu]; 8038 8032 if (se && !skip_blocked_update(se)) 8039 - update_load_avg(cfs_rq_of(se), se, 0); 8033 + update_load_avg(cfs_rq_of(se), se, UPDATE_TG); 8040 8034 8041 8035 /* 8042 8036 * There can be a lot of idle CPU cgroups. Don't let fully
+1 -10
kernel/sched/pelt.h
··· 42 42 return LOAD_AVG_MAX - 1024 + avg->period_contrib; 43 43 } 44 44 45 - /* 46 - * When a task is dequeued, its estimated utilization should not be update if 47 - * its util_avg has not been updated at least once. 48 - * This flag is used to synchronize util_avg updates with util_est updates. 49 - * We map this information into the LSB bit of the utilization saved at 50 - * dequeue time (i.e. util_est.dequeued). 51 - */ 52 - #define UTIL_AVG_UNCHANGED 0x1 53 - 54 45 static inline void cfs_se_util_change(struct sched_avg *avg) 55 46 { 56 47 unsigned int enqueued; ··· 49 58 if (!sched_feat(UTIL_EST)) 50 59 return; 51 60 52 - /* Avoid store if the flag has been already set */ 61 + /* Avoid store if the flag has been already reset */ 53 62 enqueued = avg->util_est.enqueued; 54 63 if (!(enqueued & UTIL_AVG_UNCHANGED)) 55 64 return;
+1
kernel/time/tick-sched.c
··· 230 230 231 231 #ifdef CONFIG_NO_HZ_FULL 232 232 cpumask_var_t tick_nohz_full_mask; 233 + EXPORT_SYMBOL_GPL(tick_nohz_full_mask); 233 234 bool tick_nohz_full_running; 234 235 EXPORT_SYMBOL_GPL(tick_nohz_full_running); 235 236 static atomic_t tick_dep_mask;
+12 -20
kernel/trace/bpf_trace.c
··· 215 215 static __always_inline int 216 216 bpf_probe_read_kernel_common(void *dst, u32 size, const void *unsafe_ptr) 217 217 { 218 - int ret = security_locked_down(LOCKDOWN_BPF_READ); 218 + int ret; 219 219 220 - if (unlikely(ret < 0)) 221 - goto fail; 222 220 ret = copy_from_kernel_nofault(dst, unsafe_ptr, size); 223 221 if (unlikely(ret < 0)) 224 - goto fail; 225 - return ret; 226 - fail: 227 - memset(dst, 0, size); 222 + memset(dst, 0, size); 228 223 return ret; 229 224 } 230 225 ··· 241 246 static __always_inline int 242 247 bpf_probe_read_kernel_str_common(void *dst, u32 size, const void *unsafe_ptr) 243 248 { 244 - int ret = security_locked_down(LOCKDOWN_BPF_READ); 245 - 246 - if (unlikely(ret < 0)) 247 - goto fail; 249 + int ret; 248 250 249 251 /* 250 252 * The strncpy_from_kernel_nofault() call will likely not fill the ··· 254 262 */ 255 263 ret = strncpy_from_kernel_nofault(dst, unsafe_ptr, size); 256 264 if (unlikely(ret < 0)) 257 - goto fail; 258 - 259 - return ret; 260 - fail: 261 - memset(dst, 0, size); 265 + memset(dst, 0, size); 262 266 return ret; 263 267 } 264 268 ··· 999 1011 case BPF_FUNC_probe_read_user: 1000 1012 return &bpf_probe_read_user_proto; 1001 1013 case BPF_FUNC_probe_read_kernel: 1002 - return &bpf_probe_read_kernel_proto; 1014 + return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1015 + NULL : &bpf_probe_read_kernel_proto; 1003 1016 case BPF_FUNC_probe_read_user_str: 1004 1017 return &bpf_probe_read_user_str_proto; 1005 1018 case BPF_FUNC_probe_read_kernel_str: 1006 - return &bpf_probe_read_kernel_str_proto; 1019 + return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1020 + NULL : &bpf_probe_read_kernel_str_proto; 1007 1021 #ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE 1008 1022 case BPF_FUNC_probe_read: 1009 - return &bpf_probe_read_compat_proto; 1023 + return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1024 + NULL : &bpf_probe_read_compat_proto; 1010 1025 case BPF_FUNC_probe_read_str: 1011 - return &bpf_probe_read_compat_str_proto; 1026 + return security_locked_down(LOCKDOWN_BPF_READ) < 0 ? 1027 + NULL : &bpf_probe_read_compat_str_proto; 1012 1028 #endif 1013 1029 #ifdef CONFIG_CGROUPS 1014 1030 case BPF_FUNC_get_current_cgroup_id:
+7 -1
kernel/trace/ftrace.c
··· 1967 1967 1968 1968 static void print_ip_ins(const char *fmt, const unsigned char *p) 1969 1969 { 1970 + char ins[MCOUNT_INSN_SIZE]; 1970 1971 int i; 1972 + 1973 + if (copy_from_kernel_nofault(ins, p, MCOUNT_INSN_SIZE)) { 1974 + printk(KERN_CONT "%s[FAULT] %px\n", fmt, p); 1975 + return; 1976 + } 1971 1977 1972 1978 printk(KERN_CONT "%s", fmt); 1973 1979 1974 1980 for (i = 0; i < MCOUNT_INSN_SIZE; i++) 1975 - printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]); 1981 + printk(KERN_CONT "%s%02x", i ? ":" : "", ins[i]); 1976 1982 } 1977 1983 1978 1984 enum ftrace_bug_type ftrace_bug_type;
+1 -1
kernel/trace/trace.c
··· 2736 2736 (entry = this_cpu_read(trace_buffered_event))) { 2737 2737 /* Try to use the per cpu buffer first */ 2738 2738 val = this_cpu_inc_return(trace_buffered_event_cnt); 2739 - if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) { 2739 + if ((len < (PAGE_SIZE - sizeof(*entry) - sizeof(entry->array[0]))) && val == 1) { 2740 2740 trace_event_setup(entry, type, trace_ctx); 2741 2741 entry->array[0] = len; 2742 2742 return entry;
+1 -1
lib/crc64.c
··· 37 37 /** 38 38 * crc64_be - Calculate bitwise big-endian ECMA-182 CRC64 39 39 * @crc: seed value for computation. 0 or (u64)~0 for a new CRC calculation, 40 - or the previous crc64 value if computing incrementally. 40 + * or the previous crc64 value if computing incrementally. 41 41 * @p: pointer to buffer over which CRC64 is run 42 42 * @len: length of buffer @p 43 43 */
+2 -2
mm/debug_vm_pgtable.c
··· 192 192 193 193 pr_debug("Validating PMD advanced\n"); 194 194 /* Align the address wrt HPAGE_PMD_SIZE */ 195 - vaddr = (vaddr & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE; 195 + vaddr &= HPAGE_PMD_MASK; 196 196 197 197 pgtable_trans_huge_deposit(mm, pmdp, pgtable); 198 198 ··· 330 330 331 331 pr_debug("Validating PUD advanced\n"); 332 332 /* Align the address wrt HPAGE_PUD_SIZE */ 333 - vaddr = (vaddr & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE; 333 + vaddr &= HPAGE_PUD_MASK; 334 334 335 335 set_pud_at(mm, vaddr, pudp, pud); 336 336 pudp_set_wrprotect(mm, vaddr, pudp);
+14 -4
mm/hugetlb.c
··· 1793 1793 SetPageHWPoison(page); 1794 1794 ClearPageHWPoison(head); 1795 1795 } 1796 - remove_hugetlb_page(h, page, false); 1796 + remove_hugetlb_page(h, head, false); 1797 1797 h->max_huge_pages--; 1798 1798 spin_unlock_irq(&hugetlb_lock); 1799 1799 update_and_free_page(h, head); ··· 4889 4889 if (!page) 4890 4890 goto out; 4891 4891 } else if (!*pagep) { 4892 - ret = -ENOMEM; 4893 - page = alloc_huge_page(dst_vma, dst_addr, 0); 4894 - if (IS_ERR(page)) 4892 + /* If a page already exists, then it's UFFDIO_COPY for 4893 + * a non-missing case. Return -EEXIST. 4894 + */ 4895 + if (vm_shared && 4896 + hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) { 4897 + ret = -EEXIST; 4895 4898 goto out; 4899 + } 4900 + 4901 + page = alloc_huge_page(dst_vma, dst_addr, 0); 4902 + if (IS_ERR(page)) { 4903 + ret = -ENOMEM; 4904 + goto out; 4905 + } 4896 4906 4897 4907 ret = copy_huge_page_from_user(page, 4898 4908 (const void __user *) src_addr,
+2 -2
mm/kasan/init.c
··· 220 220 /** 221 221 * kasan_populate_early_shadow - populate shadow memory region with 222 222 * kasan_early_shadow_page 223 - * @shadow_start - start of the memory range to populate 224 - * @shadow_end - end of the memory range to populate 223 + * @shadow_start: start of the memory range to populate 224 + * @shadow_end: end of the memory range to populate 225 225 */ 226 226 int __ref kasan_populate_early_shadow(const void *shadow_start, 227 227 const void *shadow_end)
+3 -3
mm/kfence/core.c
··· 627 627 * During low activity with no allocations we might wait a 628 628 * while; let's avoid the hung task warning. 629 629 */ 630 - wait_event_timeout(allocation_wait, atomic_read(&kfence_allocation_gate), 631 - sysctl_hung_task_timeout_secs * HZ / 2); 630 + wait_event_idle_timeout(allocation_wait, atomic_read(&kfence_allocation_gate), 631 + sysctl_hung_task_timeout_secs * HZ / 2); 632 632 } else { 633 - wait_event(allocation_wait, atomic_read(&kfence_allocation_gate)); 633 + wait_event_idle(allocation_wait, atomic_read(&kfence_allocation_gate)); 634 634 } 635 635 636 636 /* Disable static key and reset timer. */
+4
mm/memory.c
··· 2939 2939 } 2940 2940 flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte)); 2941 2941 entry = mk_pte(new_page, vma->vm_page_prot); 2942 + entry = pte_sw_mkyoung(entry); 2942 2943 entry = maybe_mkwrite(pte_mkdirty(entry), vma); 2943 2944 2944 2945 /* ··· 3603 3602 __SetPageUptodate(page); 3604 3603 3605 3604 entry = mk_pte(page, vma->vm_page_prot); 3605 + entry = pte_sw_mkyoung(entry); 3606 3606 if (vma->vm_flags & VM_WRITE) 3607 3607 entry = pte_mkwrite(pte_mkdirty(entry)); 3608 3608 ··· 3788 3786 3789 3787 if (prefault && arch_wants_old_prefaulted_pte()) 3790 3788 entry = pte_mkold(entry); 3789 + else 3790 + entry = pte_sw_mkyoung(entry); 3791 3791 3792 3792 if (write) 3793 3793 entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+2
mm/page_alloc.c
··· 9158 9158 del_page_from_free_list(page_head, zone, page_order); 9159 9159 break_down_buddy_pages(zone, page_head, page, 0, 9160 9160 page_order, migratetype); 9161 + if (!is_migrate_isolate(migratetype)) 9162 + __mod_zone_freepage_state(zone, -1, migratetype); 9161 9163 ret = true; 9162 9164 break; 9163 9165 }
+6 -1
net/bluetooth/hci_core.c
··· 1610 1610 } else { 1611 1611 /* Init failed, cleanup */ 1612 1612 flush_work(&hdev->tx_work); 1613 - flush_work(&hdev->cmd_work); 1613 + 1614 + /* Since hci_rx_work() can queue new cmd_work, 1615 + * it must be flushed first to avoid an unexpected call of 1616 + * hci_cmd_work() 1617 + */ 1614 1618 flush_work(&hdev->rx_work); 1619 + flush_work(&hdev->cmd_work); 1615 1620 1616 1621 skb_queue_purge(&hdev->cmd_q); 1617 1622 skb_queue_purge(&hdev->rx_q);
+2 -2
net/bluetooth/hci_sock.c
··· 762 762 /* Detach sockets from device */ 763 763 read_lock(&hci_sk_list.lock); 764 764 sk_for_each(sk, &hci_sk_list.head) { 765 - bh_lock_sock_nested(sk); 765 + lock_sock(sk); 766 766 if (hci_pi(sk)->hdev == hdev) { 767 767 hci_pi(sk)->hdev = NULL; 768 768 sk->sk_err = EPIPE; ··· 771 771 772 772 hci_dev_put(hdev); 773 773 } 774 - bh_unlock_sock(sk); 774 + release_sock(sk); 775 775 } 776 776 read_unlock(&hci_sk_list.lock); 777 777 }
+9 -4
net/caif/caif_dev.c
··· 308 308 caifd_put(caifd); 309 309 } 310 310 311 - void caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev, 311 + int caif_enroll_dev(struct net_device *dev, struct caif_dev_common *caifdev, 312 312 struct cflayer *link_support, int head_room, 313 313 struct cflayer **layer, 314 314 int (**rcv_func)(struct sk_buff *, struct net_device *, ··· 319 319 enum cfcnfg_phy_preference pref; 320 320 struct cfcnfg *cfg = get_cfcnfg(dev_net(dev)); 321 321 struct caif_device_entry_list *caifdevs; 322 + int res; 322 323 323 324 caifdevs = caif_device_list(dev_net(dev)); 324 325 caifd = caif_device_alloc(dev); 325 326 if (!caifd) 326 - return; 327 + return -ENOMEM; 327 328 *layer = &caifd->layer; 328 329 spin_lock_init(&caifd->flow_lock); 329 330 ··· 345 344 strlcpy(caifd->layer.name, dev->name, 346 345 sizeof(caifd->layer.name)); 347 346 caifd->layer.transmit = transmit; 348 - cfcnfg_add_phy_layer(cfg, 347 + res = cfcnfg_add_phy_layer(cfg, 349 348 dev, 350 349 &caifd->layer, 351 350 pref, ··· 355 354 mutex_unlock(&caifdevs->lock); 356 355 if (rcv_func) 357 356 *rcv_func = receive; 357 + return res; 358 358 } 359 359 EXPORT_SYMBOL(caif_enroll_dev); 360 360 ··· 370 368 struct cflayer *layer, *link_support; 371 369 int head_room = 0; 372 370 struct caif_device_entry_list *caifdevs; 371 + int res; 373 372 374 373 cfg = get_cfcnfg(dev_net(dev)); 375 374 caifdevs = caif_device_list(dev_net(dev)); ··· 396 393 break; 397 394 } 398 395 } 399 - caif_enroll_dev(dev, caifdev, link_support, head_room, 396 + res = caif_enroll_dev(dev, caifdev, link_support, head_room, 400 397 &layer, NULL); 398 + if (res) 399 + cfserl_release(link_support); 401 400 caifdev->flowctrl = dev_flowctrl; 402 401 break; 403 402
+13 -1
net/caif/caif_usb.c
··· 115 115 return (struct cflayer *) this; 116 116 } 117 117 118 + static void cfusbl_release(struct cflayer *layer) 119 + { 120 + kfree(layer); 121 + } 122 + 118 123 static struct packet_type caif_usb_type __read_mostly = { 119 124 .type = cpu_to_be16(ETH_P_802_EX1), 120 125 }; ··· 132 127 struct cflayer *layer, *link_support; 133 128 struct usbnet *usbnet; 134 129 struct usb_device *usbdev; 130 + int res; 135 131 136 132 /* Check whether we have a NCM device, and find its VID/PID. */ 137 133 if (!(dev->dev.parent && dev->dev.parent->driver && ··· 175 169 if (dev->num_tx_queues > 1) 176 170 pr_warn("USB device uses more than one tx queue\n"); 177 171 178 - caif_enroll_dev(dev, &common, link_support, CFUSB_MAX_HEADLEN, 172 + res = caif_enroll_dev(dev, &common, link_support, CFUSB_MAX_HEADLEN, 179 173 &layer, &caif_usb_type.func); 174 + if (res) 175 + goto err; 176 + 180 177 if (!pack_added) 181 178 dev_add_pack(&caif_usb_type); 182 179 pack_added = true; ··· 187 178 strlcpy(layer->name, dev->name, sizeof(layer->name)); 188 179 189 180 return 0; 181 + err: 182 + cfusbl_release(link_support); 183 + return res; 190 184 } 191 185 192 186 static struct notifier_block caif_device_notifier = {
+11 -5
net/caif/cfcnfg.c
··· 450 450 rcu_read_unlock(); 451 451 } 452 452 453 - void 453 + int 454 454 cfcnfg_add_phy_layer(struct cfcnfg *cnfg, 455 455 struct net_device *dev, struct cflayer *phy_layer, 456 456 enum cfcnfg_phy_preference pref, ··· 459 459 { 460 460 struct cflayer *frml; 461 461 struct cfcnfg_phyinfo *phyinfo = NULL; 462 - int i; 462 + int i, res = 0; 463 463 u8 phyid; 464 464 465 465 mutex_lock(&cnfg->lock); ··· 473 473 goto got_phyid; 474 474 } 475 475 pr_warn("Too many CAIF Link Layers (max 6)\n"); 476 + res = -EEXIST; 476 477 goto out; 477 478 478 479 got_phyid: 479 480 phyinfo = kzalloc(sizeof(struct cfcnfg_phyinfo), GFP_ATOMIC); 480 - if (!phyinfo) 481 + if (!phyinfo) { 482 + res = -ENOMEM; 481 483 goto out_err; 484 + } 482 485 483 486 phy_layer->id = phyid; 484 487 phyinfo->pref = pref; ··· 495 492 496 493 frml = cffrml_create(phyid, fcs); 497 494 498 - if (!frml) 495 + if (!frml) { 496 + res = -ENOMEM; 499 497 goto out_err; 498 + } 500 499 phyinfo->frm_layer = frml; 501 500 layer_set_up(frml, cnfg->mux); 502 501 ··· 516 511 list_add_rcu(&phyinfo->node, &cnfg->phys); 517 512 out: 518 513 mutex_unlock(&cnfg->lock); 519 - return; 514 + return res; 520 515 521 516 out_err: 522 517 kfree(phyinfo); 523 518 mutex_unlock(&cnfg->lock); 519 + return res; 524 520 } 525 521 EXPORT_SYMBOL(cfcnfg_add_phy_layer); 526 522
+5
net/caif/cfserl.c
··· 31 31 static void cfserl_ctrlcmd(struct cflayer *layr, enum caif_ctrlcmd ctrl, 32 32 int phyid); 33 33 34 + void cfserl_release(struct cflayer *layer) 35 + { 36 + kfree(layer); 37 + } 38 + 34 39 struct cflayer *cfserl_create(int instance, bool use_stx) 35 40 { 36 41 struct cfserl *this = kzalloc(sizeof(struct cfserl), GFP_ATOMIC);
+1 -1
net/compat.c
··· 177 177 if (kcmlen > stackbuf_size) 178 178 kcmsg_base = kcmsg = sock_kmalloc(sk, kcmlen, GFP_KERNEL); 179 179 if (kcmsg == NULL) 180 - return -ENOBUFS; 180 + return -ENOMEM; 181 181 182 182 /* Now copy them over neatly. */ 183 183 memset(kcmsg, 0, kcmlen);
+2 -2
net/core/devlink.c
··· 705 705 case DEVLINK_PORT_FLAVOUR_PHYSICAL: 706 706 case DEVLINK_PORT_FLAVOUR_CPU: 707 707 case DEVLINK_PORT_FLAVOUR_DSA: 708 - case DEVLINK_PORT_FLAVOUR_VIRTUAL: 709 708 if (nla_put_u32(msg, DEVLINK_ATTR_PORT_NUMBER, 710 709 attrs->phys.port_number)) 711 710 return -EMSGSIZE; ··· 8630 8631 8631 8632 switch (attrs->flavour) { 8632 8633 case DEVLINK_PORT_FLAVOUR_PHYSICAL: 8633 - case DEVLINK_PORT_FLAVOUR_VIRTUAL: 8634 8634 if (!attrs->split) 8635 8635 n = snprintf(name, len, "p%u", attrs->phys.port_number); 8636 8636 else ··· 8677 8679 n = snprintf(name, len, "pf%usf%u", attrs->pci_sf.pf, 8678 8680 attrs->pci_sf.sf); 8679 8681 break; 8682 + case DEVLINK_PORT_FLAVOUR_VIRTUAL: 8683 + return -EOPNOTSUPP; 8680 8684 } 8681 8685 8682 8686 if (n >= len)
+1 -1
net/core/fib_rules.c
··· 1168 1168 { 1169 1169 struct net *net; 1170 1170 struct sk_buff *skb; 1171 - int err = -ENOBUFS; 1171 + int err = -ENOMEM; 1172 1172 1173 1173 net = ops->fro_net; 1174 1174 skb = nlmsg_new(fib_rule_nlmsg_size(ops, rule), GFP_KERNEL);
+3 -1
net/core/rtnetlink.c
··· 4842 4842 if (err < 0) 4843 4843 goto errout; 4844 4844 4845 - if (!skb->len) 4845 + if (!skb->len) { 4846 + err = -EINVAL; 4846 4847 goto errout; 4848 + } 4847 4849 4848 4850 rtnl_notify(skb, net, 0, RTNLGRP_LINK, NULL, GFP_ATOMIC); 4849 4851 return 0;
+12 -4
net/core/sock.c
··· 815 815 } 816 816 EXPORT_SYMBOL(sock_set_rcvbuf); 817 817 818 + static void __sock_set_mark(struct sock *sk, u32 val) 819 + { 820 + if (val != sk->sk_mark) { 821 + sk->sk_mark = val; 822 + sk_dst_reset(sk); 823 + } 824 + } 825 + 818 826 void sock_set_mark(struct sock *sk, u32 val) 819 827 { 820 828 lock_sock(sk); 821 - sk->sk_mark = val; 829 + __sock_set_mark(sk, val); 822 830 release_sock(sk); 823 831 } 824 832 EXPORT_SYMBOL(sock_set_mark); ··· 1134 1126 case SO_MARK: 1135 1127 if (!ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) { 1136 1128 ret = -EPERM; 1137 - } else if (val != sk->sk_mark) { 1138 - sk->sk_mark = val; 1139 - sk_dst_reset(sk); 1129 + break; 1140 1130 } 1131 + 1132 + __sock_set_mark(sk, val); 1141 1133 break; 1142 1134 1143 1135 case SO_RXQ_OVFL:
+1 -1
net/dsa/tag_8021q.c
··· 64 64 #define DSA_8021Q_SUBVLAN_HI_SHIFT 9 65 65 #define DSA_8021Q_SUBVLAN_HI_MASK GENMASK(9, 9) 66 66 #define DSA_8021Q_SUBVLAN_LO_SHIFT 4 67 - #define DSA_8021Q_SUBVLAN_LO_MASK GENMASK(4, 3) 67 + #define DSA_8021Q_SUBVLAN_LO_MASK GENMASK(5, 4) 68 68 #define DSA_8021Q_SUBVLAN_HI(x) (((x) & GENMASK(2, 2)) >> 2) 69 69 #define DSA_8021Q_SUBVLAN_LO(x) ((x) & GENMASK(1, 0)) 70 70 #define DSA_8021Q_SUBVLAN(x) \
+6 -4
net/ieee802154/nl-mac.c
··· 680 680 nla_put_u8(msg, IEEE802154_ATTR_LLSEC_SECLEVEL, params.out_level) || 681 681 nla_put_u32(msg, IEEE802154_ATTR_LLSEC_FRAME_COUNTER, 682 682 be32_to_cpu(params.frame_counter)) || 683 - ieee802154_llsec_fill_key_id(msg, &params.out_key)) 683 + ieee802154_llsec_fill_key_id(msg, &params.out_key)) { 684 + rc = -ENOBUFS; 684 685 goto out_free; 686 + } 685 687 686 688 dev_put(dev); 687 689 ··· 1186 1184 { 1187 1185 struct ieee802154_llsec_device *dpos; 1188 1186 struct ieee802154_llsec_device_key *kpos; 1189 - int rc = 0, idx = 0, idx2; 1187 + int idx = 0, idx2; 1190 1188 1191 1189 list_for_each_entry(dpos, &data->table->devices, list) { 1192 1190 if (idx++ < data->s_idx) ··· 1202 1200 data->nlmsg_seq, 1203 1201 dpos->hwaddr, kpos, 1204 1202 data->dev)) { 1205 - return rc = -EMSGSIZE; 1203 + return -EMSGSIZE; 1206 1204 } 1207 1205 1208 1206 data->s_idx2++; ··· 1211 1209 data->s_idx++; 1212 1210 } 1213 1211 1214 - return rc; 1212 + return 0; 1215 1213 } 1216 1214 1217 1215 int ieee802154_llsec_dump_devkeys(struct sk_buff *skb,
+3 -1
net/ieee802154/nl-phy.c
··· 241 241 } 242 242 243 243 if (nla_put_string(msg, IEEE802154_ATTR_PHY_NAME, wpan_phy_name(phy)) || 244 - nla_put_string(msg, IEEE802154_ATTR_DEV_NAME, dev->name)) 244 + nla_put_string(msg, IEEE802154_ATTR_DEV_NAME, dev->name)) { 245 + rc = -EMSGSIZE; 245 246 goto nla_put_failure; 247 + } 246 248 dev_put(dev); 247 249 248 250 wpan_phy_put(phy);
+5 -4
net/ieee802154/nl802154.c
··· 1298 1298 if (!nla || nla_parse_nested_deprecated(attrs, NL802154_DEV_ADDR_ATTR_MAX, nla, nl802154_dev_addr_policy, NULL)) 1299 1299 return -EINVAL; 1300 1300 1301 - if (!attrs[NL802154_DEV_ADDR_ATTR_PAN_ID] || 1302 - !attrs[NL802154_DEV_ADDR_ATTR_MODE] || 1303 - !(attrs[NL802154_DEV_ADDR_ATTR_SHORT] || 1304 - attrs[NL802154_DEV_ADDR_ATTR_EXTENDED])) 1301 + if (!attrs[NL802154_DEV_ADDR_ATTR_PAN_ID] || !attrs[NL802154_DEV_ADDR_ATTR_MODE]) 1305 1302 return -EINVAL; 1306 1303 1307 1304 addr->pan_id = nla_get_le16(attrs[NL802154_DEV_ADDR_ATTR_PAN_ID]); 1308 1305 addr->mode = nla_get_u32(attrs[NL802154_DEV_ADDR_ATTR_MODE]); 1309 1306 switch (addr->mode) { 1310 1307 case NL802154_DEV_ADDR_SHORT: 1308 + if (!attrs[NL802154_DEV_ADDR_ATTR_SHORT]) 1309 + return -EINVAL; 1311 1310 addr->short_addr = nla_get_le16(attrs[NL802154_DEV_ADDR_ATTR_SHORT]); 1312 1311 break; 1313 1312 case NL802154_DEV_ADDR_EXTENDED: 1313 + if (!attrs[NL802154_DEV_ADDR_ATTR_EXTENDED]) 1314 + return -EINVAL; 1314 1315 addr->extended_addr = nla_get_le64(attrs[NL802154_DEV_ADDR_ATTR_EXTENDED]); 1315 1316 break; 1316 1317 default:
+8 -5
net/ipv4/ipconfig.c
··· 886 886 887 887 888 888 /* 889 - * Copy BOOTP-supplied string if not already set. 889 + * Copy BOOTP-supplied string 890 890 */ 891 891 static int __init ic_bootp_string(char *dest, char *src, int len, int max) 892 892 { ··· 935 935 } 936 936 break; 937 937 case 12: /* Host name */ 938 - ic_bootp_string(utsname()->nodename, ext+1, *ext, 939 - __NEW_UTS_LEN); 940 - ic_host_name_set = 1; 938 + if (!ic_host_name_set) { 939 + ic_bootp_string(utsname()->nodename, ext+1, *ext, 940 + __NEW_UTS_LEN); 941 + ic_host_name_set = 1; 942 + } 941 943 break; 942 944 case 15: /* Domain name (DNS) */ 943 - ic_bootp_string(ic_domain, ext+1, *ext, sizeof(ic_domain)); 945 + if (!ic_domain[0]) 946 + ic_bootp_string(ic_domain, ext+1, *ext, sizeof(ic_domain)); 944 947 break; 945 948 case 17: /* Root path */ 946 949 if (!root_server_path[0])
+6 -2
net/ipv6/route.c
··· 3673 3673 if (nh) { 3674 3674 if (rt->fib6_src.plen) { 3675 3675 NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing"); 3676 - goto out; 3676 + goto out_free; 3677 3677 } 3678 3678 if (!nexthop_get(nh)) { 3679 3679 NL_SET_ERR_MSG(extack, "Nexthop has been deleted"); 3680 - goto out; 3680 + goto out_free; 3681 3681 } 3682 3682 rt->nh = nh; 3683 3683 fib6_nh = nexthop_fib6_nh(rt->nh); ··· 3713 3713 return rt; 3714 3714 out: 3715 3715 fib6_info_release(rt); 3716 + return ERR_PTR(err); 3717 + out_free: 3718 + ip_fib_metrics_put(rt->fib6_metrics); 3719 + kfree(rt); 3716 3720 return ERR_PTR(err); 3717 3721 } 3718 3722
+3
net/ipv6/sit.c
··· 271 271 if (ipip6_tunnel_create(dev) < 0) 272 272 goto failed_free; 273 273 274 + if (!parms->name[0]) 275 + strcpy(parms->name, dev->name); 276 + 274 277 return nt; 275 278 276 279 failed_free:
+5
net/kcm/kcmsock.c
··· 1066 1066 goto partial_message; 1067 1067 } 1068 1068 1069 + if (skb_has_frag_list(head)) { 1070 + kfree_skb_list(skb_shinfo(head)->frag_list); 1071 + skb_shinfo(head)->frag_list = NULL; 1072 + } 1073 + 1069 1074 if (head != kcm->seq_skb) 1070 1075 kfree_skb(head); 1071 1076
+15 -1
net/mptcp/protocol.c
··· 947 947 { 948 948 struct mptcp_sock *msk = mptcp_sk(sk); 949 949 950 + #ifdef CONFIG_LOCKDEP 951 + WARN_ON_ONCE(!lockdep_is_held(&sk->sk_lock.slock)); 952 + #endif 953 + 950 954 if (!msk->wmem_reserved) 951 955 return; 952 956 ··· 1089 1085 1090 1086 static void __mptcp_clean_una_wakeup(struct sock *sk) 1091 1087 { 1088 + #ifdef CONFIG_LOCKDEP 1089 + WARN_ON_ONCE(!lockdep_is_held(&sk->sk_lock.slock)); 1090 + #endif 1092 1091 __mptcp_clean_una(sk); 1093 1092 mptcp_write_space(sk); 1093 + } 1094 + 1095 + static void mptcp_clean_una_wakeup(struct sock *sk) 1096 + { 1097 + mptcp_data_lock(sk); 1098 + __mptcp_clean_una_wakeup(sk); 1099 + mptcp_data_unlock(sk); 1094 1100 } 1095 1101 1096 1102 static void mptcp_enter_memory_pressure(struct sock *sk) ··· 2313 2299 struct sock *ssk; 2314 2300 int ret; 2315 2301 2316 - __mptcp_clean_una_wakeup(sk); 2302 + mptcp_clean_una_wakeup(sk); 2317 2303 dfrag = mptcp_rtx_head(sk); 2318 2304 if (!dfrag) { 2319 2305 if (mptcp_data_fin_enabled(msk)) {
+40 -39
net/mptcp/subflow.c
··· 630 630 631 631 /* if the sk is MP_CAPABLE, we try to fetch the client key */ 632 632 if (subflow_req->mp_capable) { 633 - if (TCP_SKB_CB(skb)->seq != subflow_req->ssn_offset + 1) { 634 - /* here we can receive and accept an in-window, 635 - * out-of-order pkt, which will not carry the MP_CAPABLE 636 - * opt even on mptcp enabled paths 637 - */ 638 - goto create_msk; 639 - } 640 - 633 + /* we can receive and accept an in-window, out-of-order pkt, 634 + * which may not carry the MP_CAPABLE opt even on mptcp enabled 635 + * paths: always try to extract the peer key, and fallback 636 + * for packets missing it. 637 + * Even OoO DSS packets arriving legitimately after dropped or 638 + * reordered MPC will cause fallback, but we don't have other 639 + * options. 640 + */ 641 641 mptcp_get_options(skb, &mp_opt); 642 642 if (!mp_opt.mp_capable) { 643 643 fallback = true; 644 644 goto create_child; 645 645 } 646 646 647 - create_msk: 648 647 new_msk = mptcp_sk_clone(listener->conn, &mp_opt, req); 649 648 if (!new_msk) 650 649 fallback = true; ··· 1011 1012 1012 1013 status = get_mapping_status(ssk, msk); 1013 1014 trace_subflow_check_data_avail(status, skb_peek(&ssk->sk_receive_queue)); 1014 - if (status == MAPPING_INVALID) { 1015 - ssk->sk_err = EBADMSG; 1016 - goto fatal; 1017 - } 1018 - if (status == MAPPING_DUMMY) { 1019 - __mptcp_do_fallback(msk); 1020 - skb = skb_peek(&ssk->sk_receive_queue); 1021 - subflow->map_valid = 1; 1022 - subflow->map_seq = READ_ONCE(msk->ack_seq); 1023 - subflow->map_data_len = skb->len; 1024 - subflow->map_subflow_seq = tcp_sk(ssk)->copied_seq - 1025 - subflow->ssn_offset; 1026 - subflow->data_avail = MPTCP_SUBFLOW_DATA_AVAIL; 1027 - return true; 1028 - } 1015 + if (unlikely(status == MAPPING_INVALID)) 1016 + goto fallback; 1017 + 1018 + if (unlikely(status == MAPPING_DUMMY)) 1019 + goto fallback; 1029 1020 1030 1021 if (status != MAPPING_OK) 1031 1022 goto no_data; ··· 1028 1039 * MP_CAPABLE-based mapping 1029 1040 */ 1030 1041
if (unlikely(!READ_ONCE(msk->can_ack))) { 1031 - if (!subflow->mpc_map) { 1032 - ssk->sk_err = EBADMSG; 1033 - goto fatal; 1034 - } 1042 - if (!subflow->mpc_map) 1043 + goto fallback; 1035 1044 WRITE_ONCE(msk->remote_key, subflow->remote_key); 1036 1045 WRITE_ONCE(msk->ack_seq, subflow->map_seq); 1037 1046 WRITE_ONCE(msk->can_ack, true); ··· 1057 1070 no_data: 1058 1071 subflow_sched_work_if_closed(msk, ssk); 1059 1072 return false; 1060 - fatal: 1061 - /* fatal protocol error, close the socket */ 1062 - /* This barrier is coupled with smp_rmb() in tcp_poll() */ 1063 - smp_wmb(); 1064 - ssk->sk_error_report(ssk); 1065 - tcp_set_state(ssk, TCP_CLOSE); 1066 - subflow->reset_transient = 0; 1067 - subflow->reset_reason = MPTCP_RST_EMPTCP; 1068 - tcp_send_active_reset(ssk, GFP_ATOMIC); 1069 - subflow->data_avail = 0; 1070 - return false; 1073 + 1074 + fallback: 1075 + /* RFC 8684 section 3.7. */ 1076 + if (subflow->mp_join || subflow->fully_established) { 1077 + /* fatal protocol error, close the socket. 1078 + * subflow_error_report() will introduce the appropriate barriers 1079 + */ 1080 + ssk->sk_err = EBADMSG; 1081 + ssk->sk_error_report(ssk); 1082 + tcp_set_state(ssk, TCP_CLOSE); 1083 + subflow->reset_transient = 0; 1084 + subflow->reset_reason = MPTCP_RST_EMPTCP; 1085 + tcp_send_active_reset(ssk, GFP_ATOMIC); 1086 + subflow->data_avail = 0; 1087 + return false; 1088 + } 1089 + 1090 + __mptcp_do_fallback(msk); 1091 + skb = skb_peek(&ssk->sk_receive_queue); 1092 + subflow->map_valid = 1; 1093 + subflow->map_seq = READ_ONCE(msk->ack_seq); 1094 + subflow->map_data_len = skb->len; 1095 + subflow->map_subflow_seq = tcp_sk(ssk)->copied_seq - subflow->ssn_offset; 1096 + subflow->data_avail = MPTCP_SUBFLOW_DATA_AVAIL; 1097 + return true; 1098 1099 1072 1100 bool mptcp_subflow_data_available(struct sock *sk)
+1 -1
net/netfilter/ipvs/ip_vs_ctl.c
··· 1367 1367 ip_vs_addr_copy(svc->af, &svc->addr, &u->addr); 1368 1368 svc->port = u->port; 1369 1369 svc->fwmark = u->fwmark; 1370 - svc->flags = u->flags; 1370 + svc->flags = u->flags & ~IP_VS_SVC_F_HASHED; 1371 1371 svc->timeout = u->timeout * HZ; 1372 1372 svc->netmask = u->netmask; 1373 1373 svc->ipvs = ipvs;
+1 -1
net/netfilter/nf_conntrack_proto.c
··· 664 664 665 665 #if IS_ENABLED(CONFIG_IPV6) 666 666 cleanup_sockopt: 667 - nf_unregister_sockopt(&so_getorigdst6); 667 + nf_unregister_sockopt(&so_getorigdst); 668 668 #endif 669 669 return ret; 670 670 }
+58 -28
net/netfilter/nf_tables_api.c
··· 736 736 goto nla_put_failure; 737 737 738 738 if (nla_put_string(skb, NFTA_TABLE_NAME, table->name) || 739 - nla_put_be32(skb, NFTA_TABLE_FLAGS, htonl(table->flags)) || 739 + nla_put_be32(skb, NFTA_TABLE_FLAGS, 740 + htonl(table->flags & NFT_TABLE_F_MASK)) || 740 741 nla_put_be32(skb, NFTA_TABLE_USE, htonl(table->use)) || 741 742 nla_put_be64(skb, NFTA_TABLE_HANDLE, cpu_to_be64(table->handle), 742 743 NFTA_TABLE_PAD)) ··· 948 947 949 948 static void nf_tables_table_disable(struct net *net, struct nft_table *table) 950 949 { 950 + table->flags &= ~NFT_TABLE_F_DORMANT; 951 951 nft_table_disable(net, table, 0); 952 + table->flags |= NFT_TABLE_F_DORMANT; 952 953 } 953 954 954 - enum { 955 - NFT_TABLE_STATE_UNCHANGED = 0, 956 - NFT_TABLE_STATE_DORMANT, 957 - NFT_TABLE_STATE_WAKEUP 958 - }; 955 + #define __NFT_TABLE_F_INTERNAL (NFT_TABLE_F_MASK + 1) 956 + #define __NFT_TABLE_F_WAS_DORMANT (__NFT_TABLE_F_INTERNAL << 0) 957 + #define __NFT_TABLE_F_WAS_AWAKEN (__NFT_TABLE_F_INTERNAL << 1) 958 + #define __NFT_TABLE_F_UPDATE (__NFT_TABLE_F_WAS_DORMANT | \ 959 + __NFT_TABLE_F_WAS_AWAKEN) 959 960 960 961 static int nf_tables_updtable(struct nft_ctx *ctx) 961 962 { 962 963 struct nft_trans *trans; 963 964 u32 flags; 964 - int ret = 0; 965 + int ret; 965 966 966 967 if (!ctx->nla[NFTA_TABLE_FLAGS]) 967 968 return 0; ··· 988 985 989 986 if ((flags & NFT_TABLE_F_DORMANT) && 990 987 !(ctx->table->flags & NFT_TABLE_F_DORMANT)) { 991 - nft_trans_table_state(trans) = NFT_TABLE_STATE_DORMANT; 988 + ctx->table->flags |= NFT_TABLE_F_DORMANT; 989 + if (!(ctx->table->flags & __NFT_TABLE_F_UPDATE)) 990 + ctx->table->flags |= __NFT_TABLE_F_WAS_AWAKEN; 992 991 } else if (!(flags & NFT_TABLE_F_DORMANT) && 993 992 ctx->table->flags & NFT_TABLE_F_DORMANT) { 994 - ret = nf_tables_table_enable(ctx->net, ctx->table); 995 - if (ret >= 0) 996 - nft_trans_table_state(trans) = NFT_TABLE_STATE_WAKEUP; 997 - } 998 - if (ret < 0) 999 - goto err; 993 + ctx->table->flags &= ~NFT_TABLE_F_DORMANT; 994 + if 
(!(ctx->table->flags & __NFT_TABLE_F_UPDATE)) { 995 + ret = nf_tables_table_enable(ctx->net, ctx->table); 996 + if (ret < 0) 997 + goto err_register_hooks; 1000 998 1001 - nft_trans_table_flags(trans) = flags; 999 + ctx->table->flags |= __NFT_TABLE_F_WAS_DORMANT; 1000 + } 1001 + } 1002 + 1002 1003 nft_trans_table_update(trans) = true; 1003 1004 nft_trans_commit_list_add_tail(ctx->net, trans); 1005 + 1004 1006 return 0; 1005 - err: 1007 + 1008 + err_register_hooks: 1006 1009 nft_trans_destroy(trans); 1007 1010 return ret; 1008 1011 } ··· 1914 1905 static int nft_chain_parse_hook(struct net *net, 1915 1906 const struct nlattr * const nla[], 1916 1907 struct nft_chain_hook *hook, u8 family, 1917 - bool autoload) 1908 + struct netlink_ext_ack *extack, bool autoload) 1918 1909 { 1919 1910 struct nftables_pernet *nft_net = nft_pernet(net); 1920 1911 struct nlattr *ha[NFTA_HOOK_MAX + 1]; ··· 1944 1935 if (nla[NFTA_CHAIN_TYPE]) { 1945 1936 type = nf_tables_chain_type_lookup(net, nla[NFTA_CHAIN_TYPE], 1946 1937 family, autoload); 1947 - if (IS_ERR(type)) 1938 + if (IS_ERR(type)) { 1939 + NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_TYPE]); 1948 1940 return PTR_ERR(type); 1941 + } 1949 1942 } 1950 1943 if (hook->num >= NFT_MAX_HOOKS || !(type->hook_mask & (1 << hook->num))) 1951 1944 return -EOPNOTSUPP; ··· 1956 1945 hook->priority <= NF_IP_PRI_CONNTRACK) 1957 1946 return -EOPNOTSUPP; 1958 1947 1959 - if (!try_module_get(type->owner)) 1948 + if (!try_module_get(type->owner)) { 1949 + if (nla[NFTA_CHAIN_TYPE]) 1950 + NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_TYPE]); 1960 1951 return -ENOENT; 1952 + } 1961 1953 1962 1954 hook->type = type; 1963 1955 ··· 2071 2057 static u64 chain_id; 2072 2058 2073 2059 static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask, 2074 - u8 policy, u32 flags) 2060 + u8 policy, u32 flags, 2061 + struct netlink_ext_ack *extack) 2075 2062 { 2076 2063 const struct nlattr * const *nla = ctx->nla; 2077 2064 struct nft_table *table = ctx->table; 
··· 2094 2079 if (flags & NFT_CHAIN_BINDING) 2095 2080 return -EOPNOTSUPP; 2096 2081 2097 - err = nft_chain_parse_hook(net, nla, &hook, family, true); 2082 + err = nft_chain_parse_hook(net, nla, &hook, family, extack, 2083 + true); 2098 2084 if (err < 0) 2099 2085 return err; 2100 2086 ··· 2250 2234 return -EEXIST; 2251 2235 } 2252 2236 err = nft_chain_parse_hook(ctx->net, nla, &hook, ctx->family, 2253 - false); 2237 + extack, false); 2254 2238 if (err < 0) 2255 2239 return err; 2256 2240 ··· 2463 2447 extack); 2464 2448 } 2465 2449 2466 - return nf_tables_addchain(&ctx, family, genmask, policy, flags); 2450 + return nf_tables_addchain(&ctx, family, genmask, policy, flags, extack); 2467 2451 } 2468 2452 2469 2453 static int nf_tables_delchain(struct sk_buff *skb, const struct nfnl_info *info, ··· 3344 3328 if (n == NFT_RULE_MAXEXPRS) 3345 3329 goto err1; 3346 3330 err = nf_tables_expr_parse(&ctx, tmp, &expr_info[n]); 3347 - if (err < 0) 3331 + if (err < 0) { 3332 + NL_SET_BAD_ATTR(extack, tmp); 3348 3333 goto err1; 3334 + } 3349 3335 size += expr_info[n].ops->size; 3350 3336 n++; 3351 3337 } ··· 8565 8547 switch (trans->msg_type) { 8566 8548 case NFT_MSG_NEWTABLE: 8567 8549 if (nft_trans_table_update(trans)) { 8568 - if (nft_trans_table_state(trans) == NFT_TABLE_STATE_DORMANT) 8550 + if (!(trans->ctx.table->flags & __NFT_TABLE_F_UPDATE)) { 8551 + nft_trans_destroy(trans); 8552 + break; 8553 + } 8554 + if (trans->ctx.table->flags & NFT_TABLE_F_DORMANT) 8569 8555 nf_tables_table_disable(net, trans->ctx.table); 8570 8556 8571 - trans->ctx.table->flags = nft_trans_table_flags(trans); 8557 + trans->ctx.table->flags &= ~__NFT_TABLE_F_UPDATE; 8572 8558 } else { 8573 8559 nft_clear(net, trans->ctx.table); 8574 8560 } ··· 8790 8768 switch (trans->msg_type) { 8791 8769 case NFT_MSG_NEWTABLE: 8792 8770 if (nft_trans_table_update(trans)) { 8793 - if (nft_trans_table_state(trans) == NFT_TABLE_STATE_WAKEUP) 8771 + if (!(trans->ctx.table->flags & __NFT_TABLE_F_UPDATE)) { 8772 + 
nft_trans_destroy(trans); 8773 + break; 8774 + } 8775 + if (trans->ctx.table->flags & __NFT_TABLE_F_WAS_DORMANT) { 8794 8776 nf_tables_table_disable(net, trans->ctx.table); 8795 - 8777 + trans->ctx.table->flags |= NFT_TABLE_F_DORMANT; 8778 + } else if (trans->ctx.table->flags & __NFT_TABLE_F_WAS_AWAKEN) { 8779 + trans->ctx.table->flags &= ~NFT_TABLE_F_DORMANT; 8780 + } 8781 + trans->ctx.table->flags &= ~__NFT_TABLE_F_UPDATE; 8796 8782 nft_trans_destroy(trans); 8797 8783 } else { 8798 8784 list_del_rcu(&trans->ctx.table->list);
+1 -1
net/netfilter/nft_ct.c
··· 1217 1217 struct nf_conn *ct; 1218 1218 1219 1219 ct = nf_ct_get(pkt->skb, &ctinfo); 1220 - if (!ct || ctinfo == IP_CT_UNTRACKED) { 1220 + if (!ct || nf_ct_is_confirmed(ct) || nf_ct_is_template(ct)) { 1221 1221 regs->verdict.code = NFT_BREAK; 1222 1222 return; 1223 1223 }
+2
net/nfc/llcp_sock.c
··· 110 110 if (!llcp_sock->service_name) { 111 111 nfc_llcp_local_put(llcp_sock->local); 112 112 llcp_sock->local = NULL; 113 + llcp_sock->dev = NULL; 113 114 ret = -ENOMEM; 114 115 goto put_dev; 115 116 } ··· 120 119 llcp_sock->local = NULL; 121 120 kfree(llcp_sock->service_name); 122 121 llcp_sock->service_name = NULL; 122 + llcp_sock->dev = NULL; 123 123 ret = -EADDRINUSE; 124 124 goto put_dev; 125 125 }
+4 -6
net/sched/act_ct.c
··· 984 984 */ 985 985 cached = tcf_ct_skb_nfct_cached(net, skb, p->zone, force); 986 986 if (!cached) { 987 - if (!commit && tcf_ct_flow_table_lookup(p, skb, family)) { 987 + if (tcf_ct_flow_table_lookup(p, skb, family)) { 988 988 skip_add = true; 989 989 goto do_nat; 990 990 } ··· 1022 1022 * even if the connection is already confirmed. 1023 1023 */ 1024 1024 nf_conntrack_confirm(skb); 1025 - } else if (!skip_add) { 1026 - tcf_ct_flow_table_process_conn(p->ct_ft, ct, ctinfo); 1027 1025 } 1026 + 1027 + if (!skip_add) 1028 + tcf_ct_flow_table_process_conn(p->ct_ft, ct, ctinfo); 1028 1029 1029 1030 out_push: 1030 1031 skb_push_rcsum(skb, nh_ofs); ··· 1202 1201 NULL, TCA_CT_UNSPEC, 1203 1202 sizeof(p->zone)); 1204 1203 } 1205 - 1206 - if (p->zone == NF_CT_DEFAULT_ZONE_ID) 1207 - return 0; 1208 1204 1209 1205 nf_ct_zone_init(&zone, p->zone, NF_CT_DEFAULT_ZONE_DIR, 0); 1210 1206 tmpl = nf_ct_tmpl_alloc(net, &zone, GFP_KERNEL);
+4 -4
net/sched/sch_htb.c
··· 1488 1488 struct Qdisc *old_q; 1489 1489 1490 1490 /* One ref for cl->leaf.q, the other for dev_queue->qdisc. */ 1491 - qdisc_refcount_inc(new_q); 1491 + if (new_q) 1492 + qdisc_refcount_inc(new_q); 1492 1493 old_q = htb_graft_helper(dev_queue, new_q); 1493 1494 WARN_ON(!(old_q->flags & TCQ_F_BUILTIN)); 1494 1495 } ··· 1676 1675 cl->parent->common.classid, 1677 1676 NULL); 1678 1677 if (q->offload) { 1679 - if (new_q) { 1678 + if (new_q) 1680 1679 htb_set_lockdep_class_child(new_q); 1681 - htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1682 - } 1680 + htb_parent_to_leaf_offload(sch, dev_queue, new_q); 1683 1681 } 1684 1682 } 1685 1683
+50 -12
net/tls/tls_device.c
··· 50 50 static DECLARE_WORK(tls_device_gc_work, tls_device_gc_task); 51 51 static LIST_HEAD(tls_device_gc_list); 52 52 static LIST_HEAD(tls_device_list); 53 + static LIST_HEAD(tls_device_down_list); 53 54 static DEFINE_SPINLOCK(tls_device_lock); 54 55 55 56 static void tls_device_free_ctx(struct tls_context *ctx) ··· 681 680 struct tls_offload_context_rx *rx_ctx = tls_offload_ctx_rx(tls_ctx); 682 681 struct net_device *netdev; 683 682 684 - if (WARN_ON(test_and_set_bit(TLS_RX_SYNC_RUNNING, &tls_ctx->flags))) 685 - return; 686 - 687 683 trace_tls_device_rx_resync_send(sk, seq, rcd_sn, rx_ctx->resync_type); 684 + rcu_read_lock(); 688 685 netdev = READ_ONCE(tls_ctx->netdev); 689 686 if (netdev) 690 687 netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq, rcd_sn, 691 688 TLS_OFFLOAD_CTX_DIR_RX); 692 - clear_bit_unlock(TLS_RX_SYNC_RUNNING, &tls_ctx->flags); 689 + rcu_read_unlock(); 693 690 TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXDEVICERESYNC); 694 691 } 695 692 ··· 759 760 u32 req_seq; 760 761 761 762 if (tls_ctx->rx_conf != TLS_HW) 763 + return; 764 + if (unlikely(test_bit(TLS_RX_DEV_DEGRADED, &tls_ctx->flags))) 762 765 return; 763 766 764 767 prot = &tls_ctx->prot_info; ··· 963 962 is_encrypted, is_decrypted); 964 963 965 964 ctx->sw.decrypted |= is_decrypted; 965 + 966 + if (unlikely(test_bit(TLS_RX_DEV_DEGRADED, &tls_ctx->flags))) { 967 + if (likely(is_encrypted || is_decrypted)) 968 + return 0; 969 + 970 + /* After tls_device_down disables the offload, the next SKB will 971 + * likely have initial fragments decrypted, and final ones not 972 + * decrypted. We need to reencrypt that single SKB. 973 + */ 974 + return tls_device_reencrypt(sk, skb); 975 + } 966 976 967 977 /* Return immediately if the record is either entirely plaintext or 968 978 * entirely ciphertext. 
Otherwise handle reencrypt partially decrypted ··· 1304 1292 spin_unlock_irqrestore(&tls_device_lock, flags); 1305 1293 1306 1294 list_for_each_entry_safe(ctx, tmp, &list, list) { 1295 + /* Stop offloaded TX and switch to the fallback. 1296 + * tls_is_sk_tx_device_offloaded will return false. 1297 + */ 1298 + WRITE_ONCE(ctx->sk->sk_validate_xmit_skb, tls_validate_xmit_skb_sw); 1299 + 1300 + /* Stop the RX and TX resync. 1301 + * tls_dev_resync must not be called after tls_dev_del. 1302 + */ 1303 + WRITE_ONCE(ctx->netdev, NULL); 1304 + 1305 + /* Start skipping the RX resync logic completely. */ 1306 + set_bit(TLS_RX_DEV_DEGRADED, &ctx->flags); 1307 + 1308 + /* Sync with inflight packets. After this point: 1309 + * TX: no non-encrypted packets will be passed to the driver. 1310 + * RX: resync requests from the driver will be ignored. 1311 + */ 1312 + synchronize_net(); 1313 + 1314 + /* Release the offload context on the driver side. */ 1307 1315 if (ctx->tx_conf == TLS_HW) 1308 1316 netdev->tlsdev_ops->tls_dev_del(netdev, ctx, 1309 1317 TLS_OFFLOAD_CTX_DIR_TX); ··· 1331 1299 !test_bit(TLS_RX_DEV_CLOSED, &ctx->flags)) 1332 1300 netdev->tlsdev_ops->tls_dev_del(netdev, ctx, 1333 1301 TLS_OFFLOAD_CTX_DIR_RX); 1334 - WRITE_ONCE(ctx->netdev, NULL); 1335 - smp_mb__before_atomic(); /* pairs with test_and_set_bit() */ 1336 - while (test_bit(TLS_RX_SYNC_RUNNING, &ctx->flags)) 1337 - usleep_range(10, 200); 1338 - dev_put(netdev); 1339 - list_del_init(&ctx->list); 1340 1302 1341 - if (refcount_dec_and_test(&ctx->refcount)) 1342 - tls_device_free_ctx(ctx); 1303 + dev_put(netdev); 1304 + 1305 + /* Move the context to a separate list for two reasons: 1306 + * 1. When the context is deallocated, list_del is called. 1307 + * 2. It's no longer an offloaded context, so we don't want to 1308 + * run offload-specific code on this context. 
1309 + */ 1310 + spin_lock_irqsave(&tls_device_lock, flags); 1311 + list_move_tail(&ctx->list, &tls_device_down_list); 1312 + spin_unlock_irqrestore(&tls_device_lock, flags); 1313 + 1314 + /* Device contexts for RX and TX will be freed on sk_destruct 1315 + * by tls_device_free_ctx. rx_conf and tx_conf stay in TLS_HW. 1316 + */ 1343 1317 } 1344 1318 1345 1319 up_write(&device_offload_lock);
+7
net/tls/tls_device_fallback.c
··· 431 431 } 432 432 EXPORT_SYMBOL_GPL(tls_validate_xmit_skb); 433 433 434 + struct sk_buff *tls_validate_xmit_skb_sw(struct sock *sk, 435 + struct net_device *dev, 436 + struct sk_buff *skb) 437 + { 438 + return tls_sw_fallback(sk, skb); 439 + } 440 + 434 441 struct sk_buff *tls_encrypt_skb(struct sk_buff *skb) 435 442 { 436 443 return tls_sw_fallback(skb->sk, skb);
+1
net/tls/tls_main.c
··· 636 636 mutex_init(&ctx->tx_lock); 637 637 rcu_assign_pointer(icsk->icsk_ulp_data, ctx); 638 638 ctx->sk_proto = READ_ONCE(sk->sk_prot); 639 + ctx->sk = sk; 639 640 return ctx; 640 641 } 641 642
+1 -1
net/x25/af_x25.c
··· 536 536 if (protocol) 537 537 goto out; 538 538 539 - rc = -ENOBUFS; 539 + rc = -ENOMEM; 540 540 if ((sk = x25_alloc_socket(net, kern)) == NULL) 541 541 goto out; 542 542
+9 -4
samples/vfio-mdev/mdpy-fb.c
··· 117 117 if (format != DRM_FORMAT_XRGB8888) { 118 118 pci_err(pdev, "format mismatch (0x%x != 0x%x)\n", 119 119 format, DRM_FORMAT_XRGB8888); 120 - return -EINVAL; 120 + ret = -EINVAL; 121 + goto err_release_regions; 121 122 } 122 123 if (width < 100 || width > 10000) { 123 124 pci_err(pdev, "width (%d) out of range\n", width); 124 - return -EINVAL; 125 + ret = -EINVAL; 126 + goto err_release_regions; 125 127 } 126 128 if (height < 100 || height > 10000) { 127 129 pci_err(pdev, "height (%d) out of range\n", height); 128 - return -EINVAL; 130 + ret = -EINVAL; 131 + goto err_release_regions; 129 132 } 130 133 pci_info(pdev, "mdpy found: %dx%d framebuffer\n", 131 134 width, height); 132 135 133 136 info = framebuffer_alloc(sizeof(struct mdpy_fb_par), &pdev->dev); 134 - if (!info) 137 + if (!info) { 138 + ret = -ENOMEM; 135 139 goto err_release_regions; 140 + } 136 141 pci_set_drvdata(pdev, info); 137 142 par = info->par; 138 143
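The mdpy-fb hunk converts each early `return -EINVAL` after `pci_request_regions()` into a `goto err_release_regions`, because a direct return leaked the claimed regions. A stand-alone sketch of that goto-unwind pattern (the counter and names are illustrative only):

```c
#include <assert.h>

/* Stand-in resource counter: incremented by the "request", decremented
 * by the shared error label, so leaks are observable. */
static int demo_regions_held;

static int demo_probe(int width)
{
	int ret = 0;

	demo_regions_held++;		/* stands in for pci_request_regions() */

	if (width < 100 || width > 10000) {
		ret = -22;		/* -EINVAL */
		goto err_release_regions; /* the fix: don't return directly */
	}

	return 0;			/* success keeps the regions claimed */

err_release_regions:
	demo_regions_held--;		/* stands in for pci_release_regions() */
	return ret;
}
```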
+1 -1
scripts/Makefile.modfinal
··· 59 59 quiet_cmd_btf_ko = BTF [M] $@ 60 60 cmd_btf_ko = \ 61 61 if [ -f vmlinux ]; then \ 62 - LLVM_OBJCOPY=$(OBJCOPY) $(PAHOLE) -J --btf_base vmlinux $@; \ 62 + LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \ 63 63 else \ 64 64 printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \ 65 65 fi;
+26 -7
sound/core/control_led.c
··· 17 17 #define MAX_LED (((SNDRV_CTL_ELEM_ACCESS_MIC_LED - SNDRV_CTL_ELEM_ACCESS_SPK_LED) \ 18 18 >> SNDRV_CTL_ELEM_ACCESS_LED_SHIFT) + 1) 19 19 20 + #define to_led_card_dev(_dev) \ 21 + container_of(_dev, struct snd_ctl_led_card, dev) 22 + 20 23 enum snd_ctl_led_mode { 21 24 MODE_FOLLOW_MUTE = 0, 22 25 MODE_FOLLOW_ROUTE, ··· 374 371 snd_ctl_led_refresh(); 375 372 } 376 373 374 + static void snd_ctl_led_card_release(struct device *dev) 375 + { 376 + struct snd_ctl_led_card *led_card = to_led_card_dev(dev); 377 + 378 + kfree(led_card); 379 + } 380 + 381 + static void snd_ctl_led_release(struct device *dev) 382 + { 383 + } 384 + 385 + static void snd_ctl_led_dev_release(struct device *dev) 386 + { 387 + } 388 + 377 389 /* 378 390 * sysfs 379 391 */ ··· 681 663 led_card->number = card->number; 682 664 led_card->led = led; 683 665 device_initialize(&led_card->dev); 666 + led_card->dev.release = snd_ctl_led_card_release; 684 667 if (dev_set_name(&led_card->dev, "card%d", card->number) < 0) 685 668 goto cerr; 686 669 led_card->dev.parent = &led->dev; ··· 700 681 put_device(&led_card->dev); 701 682 cerr2: 702 683 printk(KERN_ERR "snd_ctl_led: unable to add card%d", card->number); 703 - kfree(led_card); 704 684 } 705 685 } 706 686 ··· 718 700 snprintf(link_name, sizeof(link_name), "led-%s", led->name); 719 701 sysfs_remove_link(&card->ctl_dev.kobj, link_name); 720 702 sysfs_remove_link(&led_card->dev.kobj, "card"); 721 - device_del(&led_card->dev); 722 - kfree(led_card); 703 + device_unregister(&led_card->dev); 723 704 led->cards[card->number] = NULL; 724 705 } 725 706 } ··· 740 723 741 724 device_initialize(&snd_ctl_led_dev); 742 725 snd_ctl_led_dev.class = sound_class; 726 + snd_ctl_led_dev.release = snd_ctl_led_dev_release; 743 727 dev_set_name(&snd_ctl_led_dev, "ctl-led"); 744 728 if (device_add(&snd_ctl_led_dev)) { 745 729 put_device(&snd_ctl_led_dev); ··· 751 733 INIT_LIST_HEAD(&led->controls); 752 734 device_initialize(&led->dev); 753 735 led->dev.parent = 
&snd_ctl_led_dev; 736 + led->dev.release = snd_ctl_led_release; 754 737 led->dev.groups = snd_ctl_led_dev_attr_groups; 755 738 dev_set_name(&led->dev, led->name); 756 739 if (device_add(&led->dev)) { 757 740 put_device(&led->dev); 758 741 for (; group > 0; group--) { 759 742 led = &snd_ctl_leds[group - 1]; 760 - device_del(&led->dev); 743 + device_unregister(&led->dev); 761 744 } 762 - device_del(&snd_ctl_led_dev); 745 + device_unregister(&snd_ctl_led_dev); 763 746 return -ENOMEM; 764 747 } 765 748 } ··· 786 767 } 787 768 for (group = 0; group < MAX_LED; group++) { 788 769 led = &snd_ctl_leds[group]; 789 - device_del(&led->dev); 770 + device_unregister(&led->dev); 790 771 } 791 - device_del(&snd_ctl_led_dev); 772 + device_unregister(&snd_ctl_led_dev); 792 773 snd_ctl_led_clean(NULL); 793 774 } 794 775
+9 -1
sound/core/seq/seq_timer.c
··· 297 297 return err; 298 298 } 299 299 spin_lock_irq(&tmr->lock); 300 - tmr->timeri = t; 300 + if (tmr->timeri) 301 + err = -EBUSY; 302 + else 303 + tmr->timeri = t; 301 304 spin_unlock_irq(&tmr->lock); 305 + if (err < 0) { 306 + snd_timer_close(t); 307 + snd_timer_instance_free(t); 308 + return err; 309 + } 302 310 return 0; 303 311 } 304 312
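The seq_timer fix is a classic check-then-claim under a lock: if a second opener races in, it must see the slot already taken and back out with `-EBUSY` instead of overwriting (and leaking) the first instance. A lock-free sketch of just the claim logic, with the kernel's `spin_lock_irq()` pairing noted in comments (names illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct demo_tmr {
	void *timeri;	/* the single-owner slot */
};

static int demo_timer_open(struct demo_tmr *tmr, void *inst)
{
	int err = 0;

	/* kernel: spin_lock_irq(&tmr->lock) around check and claim */
	if (tmr->timeri)
		err = -16;		/* -EBUSY: someone got here first */
	else
		tmr->timeri = inst;	/* claim the slot */
	/* kernel: spin_unlock_irq(&tmr->lock) */

	return err;	/* on error the caller closes and frees inst */
}
```

Note the patch also does the cleanup (`snd_timer_close()` / `snd_timer_instance_free()`) outside the lock, which is why the error is stashed and checked after the unlock.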
+2 -1
sound/core/timer.c
··· 520 520 return; 521 521 if (timer->hw.flags & SNDRV_TIMER_HW_SLAVE) 522 522 return; 523 + event += 10; /* convert to SNDRV_TIMER_EVENT_MXXX */ 523 524 list_for_each_entry(ts, &ti->slave_active_head, active_list) 524 525 if (ts->ccallback) 525 - ts->ccallback(ts, event + 100, &tstamp, resolution); 526 + ts->ccallback(ts, event, &tstamp, resolution); 526 527 } 527 528 528 529 /* start/continue a master timer */
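The timer.c change works because, in the ALSA uapi, each master event `SNDRV_TIMER_EVENT_MXXX` is defined a fixed 10 above its non-master counterpart, so the old `event + 100` produced nonexistent event numbers. A sketch mirroring that enum layout (the concrete values here are illustrative, not copied from the uapi header):

```c
#include <assert.h>

/* Illustrative mirror of the ALSA layout: master events sit exactly
 * 10 above their slave counterparts. */
enum demo_timer_event {
	DEMO_TIMER_EVENT_START = 2,
	DEMO_TIMER_EVENT_MSTART = DEMO_TIMER_EVENT_START + 10,
};

static int demo_to_master_event(int event)
{
	return event + 10;	/* the fix: matches the enum spacing */
}
```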
+1 -1
sound/firewire/amdtp-stream.c
··· 804 804 static inline void cancel_stream(struct amdtp_stream *s) 805 805 { 806 806 s->packet_index = -1; 807 - if (current_work() == &s->period_work) 807 + if (in_interrupt()) 808 808 amdtp_stream_pcm_abort(s); 809 809 WRITE_ONCE(s->pcm_buffer_pointer, SNDRV_PCM_POS_XRUN); 810 810 }
+4
sound/hda/intel-dsp-config.c
··· 331 331 .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 332 332 .device = 0x51c8, 333 333 }, 334 + { 335 + .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 336 + .device = 0x51cc, 337 + }, 334 338 #endif 335 339 336 340 };
+5
sound/pci/hda/hda_codec.c
··· 2917 2917 #ifdef CONFIG_PM_SLEEP 2918 2918 static int hda_codec_pm_prepare(struct device *dev) 2919 2919 { 2920 + dev->power.power_state = PMSG_SUSPEND; 2920 2921 return pm_runtime_suspended(dev); 2921 2922 } 2922 2923 2923 2924 static void hda_codec_pm_complete(struct device *dev) 2924 2925 { 2925 2926 struct hda_codec *codec = dev_to_hda_codec(dev); 2927 + 2928 + /* If no other pm-functions are called between prepare() and complete() */ 2929 + if (dev->power.power_state.event == PM_EVENT_SUSPEND) 2930 + dev->power.power_state = PMSG_RESUME; 2926 2931 2927 2932 if (pm_runtime_suspended(dev) && (codec->jackpoll_interval || 2928 2933 hda_codec_need_resume(codec) || codec->forced_resume))
+1
sound/pci/hda/hda_generic.c
··· 3520 3520 static const struct snd_kcontrol_new cap_sw_temp = { 3521 3521 .iface = SNDRV_CTL_ELEM_IFACE_MIXER, 3522 3522 .name = "Capture Switch", 3523 + .access = SNDRV_CTL_ELEM_ACCESS_READWRITE, 3523 3524 .info = cap_sw_info, 3524 3525 .get = cap_sw_get, 3525 3526 .put = cap_sw_put,
+3
sound/pci/hda/hda_intel.c
··· 2485 2485 /* Alderlake-P */ 2486 2486 { PCI_DEVICE(0x8086, 0x51c8), 2487 2487 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2488 + /* Alderlake-M */ 2489 + { PCI_DEVICE(0x8086, 0x51cc), 2490 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2488 2491 /* Elkhart Lake */ 2489 2492 { PCI_DEVICE(0x8086, 0x4b55), 2490 2493 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE},
+3 -4
sound/pci/hda/patch_cirrus.c
··· 2206 2206 break; 2207 2207 case HDA_FIXUP_ACT_PROBE: 2208 2208 2209 - /* Set initial volume on Bullseye to -26 dB */ 2210 - if (codec->fixup_id == CS8409_BULLSEYE) 2211 - snd_hda_codec_amp_init_stereo(codec, CS8409_CS42L42_DMIC_ADC_PIN_NID, 2212 - HDA_INPUT, 0, 0xff, 0x19); 2209 + /* Set initial DMIC volume to -26 dB */ 2210 + snd_hda_codec_amp_init_stereo(codec, CS8409_CS42L42_DMIC_ADC_PIN_NID, 2211 + HDA_INPUT, 0, 0xff, 0x19); 2213 2212 snd_hda_gen_add_kctl(&spec->gen, 2214 2213 NULL, &cs8409_cs42l42_hp_volume_mixer); 2215 2214 snd_hda_gen_add_kctl(&spec->gen,
+17
sound/pci/hda/patch_realtek.c
··· 6568 6568 ALC285_FIXUP_HP_SPECTRE_X360, 6569 6569 ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, 6570 6570 ALC623_FIXUP_LENOVO_THINKSTATION_P340, 6571 + ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, 6571 6572 }; 6572 6573 6573 6574 static const struct hda_fixup alc269_fixups[] = { ··· 8147 8146 .chained = true, 8148 8147 .chain_id = ALC283_FIXUP_HEADSET_MIC, 8149 8148 }, 8149 + [ALC255_FIXUP_ACER_HEADPHONE_AND_MIC] = { 8150 + .type = HDA_FIXUP_PINS, 8151 + .v.pins = (const struct hda_pintbl[]) { 8152 + { 0x21, 0x03211030 }, /* Change the Headphone location to Left */ 8153 + { } 8154 + }, 8155 + .chained = true, 8156 + .chain_id = ALC255_FIXUP_XIAOMI_HEADSET_MIC 8157 + }, 8150 8158 }; 8151 8159 8152 8160 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 8192 8182 SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC), 8193 8183 SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC), 8194 8184 SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), 8185 + SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC), 8195 8186 SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), 8196 8187 SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS), 8197 8188 SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X), ··· 8314 8303 SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE), 8315 8304 SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE), 8316 8305 SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), 8306 + SND_PCI_QUIRK(0x103c, 0x841c, "HP Pavilion 15-CK0xx", ALC269_FIXUP_HP_MUTE_LED_MIC3), 8317 8307 SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), 8318 8308 SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN), 8319 8309 SND_PCI_QUIRK(0x103c, 0x84e7, "HP 
Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 8320 8310 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 8321 8311 SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED), 8322 8312 SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO), 8313 + SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8314 + SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8323 8315 SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED), 8324 8316 SND_PCI_QUIRK(0x103c, 0x8729, "HP", ALC285_FIXUP_HP_GPIO_LED), 8325 8317 SND_PCI_QUIRK(0x103c, 0x8730, "HP ProBook 445 G7", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), ··· 8341 8327 SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), 8342 8328 SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), 8343 8329 SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8330 + SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8344 8331 SND_PCI_QUIRK(0x103c, 0x884c, "HP EliteBook 840 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED), 8345 8332 SND_PCI_QUIRK(0x103c, 0x886d, "HP ZBook Fury 17.3 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8346 8333 SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8347 8334 SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8335 + SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED), 8348 8336 SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED), 8349 8337 SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST), 8350 8338 SND_PCI_QUIRK(0x1043, 
0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), ··· 8752 8736 {.id = ALC285_FIXUP_HP_SPECTRE_X360, .name = "alc285-hp-spectre-x360"}, 8753 8737 {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"}, 8754 8738 {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"}, 8739 + {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"}, 8755 8740 {} 8756 8741 }; 8757 8742 #define ALC225_STANDARD_PINS \
+21 -5
sound/soc/codecs/rt5659.c
··· 2433 2433 return 0; 2434 2434 } 2435 2435 2436 - static const struct snd_soc_dapm_widget rt5659_dapm_widgets[] = { 2436 + static const struct snd_soc_dapm_widget rt5659_particular_dapm_widgets[] = { 2437 2437 SND_SOC_DAPM_SUPPLY("LDO2", RT5659_PWR_ANLG_3, RT5659_PWR_LDO2_BIT, 0, 2438 2438 NULL, 0), 2439 - SND_SOC_DAPM_SUPPLY("PLL", RT5659_PWR_ANLG_3, RT5659_PWR_PLL_BIT, 0, 2440 - NULL, 0), 2439 + SND_SOC_DAPM_SUPPLY("MICBIAS1", RT5659_PWR_ANLG_2, RT5659_PWR_MB1_BIT, 2440 + 0, NULL, 0), 2441 2441 SND_SOC_DAPM_SUPPLY("Mic Det Power", RT5659_PWR_VOL, 2442 2442 RT5659_PWR_MIC_DET_BIT, 0, NULL, 0), 2443 + }; 2444 + 2445 + static const struct snd_soc_dapm_widget rt5659_dapm_widgets[] = { 2446 + SND_SOC_DAPM_SUPPLY("PLL", RT5659_PWR_ANLG_3, RT5659_PWR_PLL_BIT, 0, 2447 + NULL, 0), 2443 2448 SND_SOC_DAPM_SUPPLY("Mono Vref", RT5659_PWR_ANLG_1, 2444 2449 RT5659_PWR_VREF3_BIT, 0, NULL, 0), 2445 2450 ··· 2469 2464 RT5659_ADC_MONO_R_ASRC_SFT, 0, NULL, 0), 2470 2465 2471 2466 /* Input Side */ 2472 - SND_SOC_DAPM_SUPPLY("MICBIAS1", RT5659_PWR_ANLG_2, RT5659_PWR_MB1_BIT, 2473 - 0, NULL, 0), 2474 2467 SND_SOC_DAPM_SUPPLY("MICBIAS2", RT5659_PWR_ANLG_2, RT5659_PWR_MB2_BIT, 2475 2468 0, NULL, 0), 2476 2469 SND_SOC_DAPM_SUPPLY("MICBIAS3", RT5659_PWR_ANLG_2, RT5659_PWR_MB3_BIT, ··· 3663 3660 3664 3661 static int rt5659_probe(struct snd_soc_component *component) 3665 3662 { 3663 + struct snd_soc_dapm_context *dapm = 3664 + snd_soc_component_get_dapm(component); 3666 3665 struct rt5659_priv *rt5659 = snd_soc_component_get_drvdata(component); 3667 3666 3668 3667 rt5659->component = component; 3668 + 3669 + switch (rt5659->pdata.jd_src) { 3670 + case RT5659_JD_HDA_HEADER: 3671 + break; 3672 + 3673 + default: 3674 + snd_soc_dapm_new_controls(dapm, 3675 + rt5659_particular_dapm_widgets, 3676 + ARRAY_SIZE(rt5659_particular_dapm_widgets)); 3677 + break; 3678 + } 3669 3679 3670 3680 return 0; 3671 3681 }
+2 -1
sound/soc/codecs/rt5682-sdw.c
··· 462 462 463 463 regmap_update_bits(rt5682->regmap, RT5682_CBJ_CTRL_2, 464 464 RT5682_EXT_JD_SRC, RT5682_EXT_JD_SRC_MANUAL); 465 - regmap_write(rt5682->regmap, RT5682_CBJ_CTRL_1, 0xd042); 465 + regmap_write(rt5682->regmap, RT5682_CBJ_CTRL_1, 0xd142); 466 + regmap_update_bits(rt5682->regmap, RT5682_CBJ_CTRL_5, 0x0700, 0x0600); 466 467 regmap_update_bits(rt5682->regmap, RT5682_CBJ_CTRL_3, 467 468 RT5682_CBJ_IN_BUF_EN, RT5682_CBJ_IN_BUF_EN); 468 469 regmap_update_bits(rt5682->regmap, RT5682_SAR_IL_CMD_1,
+7 -7
sound/soc/codecs/tas2562.h
··· 57 57 #define TAS2562_TDM_CFG0_RAMPRATE_MASK BIT(5) 58 58 #define TAS2562_TDM_CFG0_RAMPRATE_44_1 BIT(5) 59 59 #define TAS2562_TDM_CFG0_SAMPRATE_MASK GENMASK(3, 1) 60 - #define TAS2562_TDM_CFG0_SAMPRATE_7305_8KHZ 0x0 61 - #define TAS2562_TDM_CFG0_SAMPRATE_14_7_16KHZ 0x1 62 - #define TAS2562_TDM_CFG0_SAMPRATE_22_05_24KHZ 0x2 63 - #define TAS2562_TDM_CFG0_SAMPRATE_29_4_32KHZ 0x3 64 - #define TAS2562_TDM_CFG0_SAMPRATE_44_1_48KHZ 0x4 65 - #define TAS2562_TDM_CFG0_SAMPRATE_88_2_96KHZ 0x5 66 - #define TAS2562_TDM_CFG0_SAMPRATE_176_4_192KHZ 0x6 60 + #define TAS2562_TDM_CFG0_SAMPRATE_7305_8KHZ (0x0 << 1) 61 + #define TAS2562_TDM_CFG0_SAMPRATE_14_7_16KHZ (0x1 << 1) 62 + #define TAS2562_TDM_CFG0_SAMPRATE_22_05_24KHZ (0x2 << 1) 63 + #define TAS2562_TDM_CFG0_SAMPRATE_29_4_32KHZ (0x3 << 1) 64 + #define TAS2562_TDM_CFG0_SAMPRATE_44_1_48KHZ (0x4 << 1) 65 + #define TAS2562_TDM_CFG0_SAMPRATE_88_2_96KHZ (0x5 << 1) 66 + #define TAS2562_TDM_CFG0_SAMPRATE_176_4_192KHZ (0x6 << 1) 67 67 68 68 #define TAS2562_TDM_CFG2_RIGHT_JUSTIFY BIT(6) 69 69
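The tas2562.h hunk adds `(x << 1)` because the sample-rate field occupies bits 3:1 (`GENMASK(3, 1)`) of TDM_CFG0; the raw selector has to be shifted into place before it is masked into the register. Notably the unshifted `0x4` still fits under the mask, so the bug silently programmed selector `0x2` (22.05/24 kHz) instead of `0x4` (44.1/48 kHz). A sketch with a regmap-style field write (macro names illustrative):

```c
#include <assert.h>

#define DEMO_SAMPRATE_MASK	0x0e		/* GENMASK(3, 1) */
#define DEMO_SAMPRATE_44_1_48K	(0x4 << 1)	/* selector shifted into the field */

/* Minimal stand-in for regmap_update_bits(): replace the masked field. */
static unsigned int demo_update_bits(unsigned int reg,
				     unsigned int mask, unsigned int val)
{
	return (reg & ~mask) | (val & mask);
}
```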
+1
sound/soc/fsl/fsl-asoc-card.c
··· 744 744 /* Initialize sound card */ 745 745 priv->pdev = pdev; 746 746 priv->card.dev = &pdev->dev; 747 + priv->card.owner = THIS_MODULE; 747 748 ret = snd_soc_of_parse_card_name(&priv->card, "model"); 748 749 if (ret) { 749 750 snprintf(priv->name, sizeof(priv->name), "%s-audio",
+79
sound/soc/qcom/lpass-cpu.c
··· 93 93 struct snd_soc_dai *dai) 94 94 { 95 95 struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai); 96 + struct lpaif_i2sctl *i2sctl = drvdata->i2sctl; 97 + unsigned int id = dai->driver->id; 96 98 97 99 clk_disable_unprepare(drvdata->mi2s_osr_clk[dai->driver->id]); 100 + /* 101 + * Ensure LRCLK is disabled even in device node validation. 102 + * Will not impact if disabled in lpass_cpu_daiops_trigger() 103 + * suspend. 104 + */ 105 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 106 + regmap_fields_write(i2sctl->spken, id, LPAIF_I2SCTL_SPKEN_DISABLE); 107 + else 108 + regmap_fields_write(i2sctl->micen, id, LPAIF_I2SCTL_MICEN_DISABLE); 109 + 110 + /* 111 + * BCLK may not be enabled if lpass_cpu_daiops_prepare is called before 112 + * lpass_cpu_daiops_shutdown. It's paired with the clk_enable in 113 + * lpass_cpu_daiops_prepare. 114 + */ 115 + if (drvdata->mi2s_was_prepared[dai->driver->id]) { 116 + drvdata->mi2s_was_prepared[dai->driver->id] = false; 117 + clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]); 118 + } 119 + 98 120 clk_unprepare(drvdata->mi2s_bit_clk[dai->driver->id]); 99 121 } 100 122 ··· 297 275 case SNDRV_PCM_TRIGGER_START: 298 276 case SNDRV_PCM_TRIGGER_RESUME: 299 277 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 278 + /* 279 + * Ensure lpass BCLK/LRCLK is enabled during 280 + * device resume as lpass_cpu_daiops_prepare() is not called 281 + * after the device resumes. We don't check mi2s_was_prepared before 282 + * enable/disable BCLK in trigger events because: 283 + * 1. These trigger events are paired, so the BCLK 284 + * enable_count is balanced. 285 + * 2. the BCLK can be shared (ex: headset and headset mic), 286 + * we need to increase the enable_count so that we don't 287 + * turn off the shared BCLK while other devices are using 288 + * it. 
289 + */ 300 290 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 301 291 ret = regmap_fields_write(i2sctl->spken, id, 302 292 LPAIF_I2SCTL_SPKEN_ENABLE); ··· 330 296 case SNDRV_PCM_TRIGGER_STOP: 331 297 case SNDRV_PCM_TRIGGER_SUSPEND: 332 298 case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 299 + /* 300 + * To ensure lpass BCLK/LRCLK is disabled during 301 + * device suspend. 302 + */ 333 303 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 334 304 ret = regmap_fields_write(i2sctl->spken, id, 335 305 LPAIF_I2SCTL_SPKEN_DISABLE); ··· 353 315 return ret; 354 316 } 355 317 318 + static int lpass_cpu_daiops_prepare(struct snd_pcm_substream *substream, 319 + struct snd_soc_dai *dai) 320 + { 321 + struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai); 322 + struct lpaif_i2sctl *i2sctl = drvdata->i2sctl; 323 + unsigned int id = dai->driver->id; 324 + int ret; 325 + 326 + /* 327 + * Ensure lpass BCLK/LRCLK is enabled bit before playback/capture 328 + * data flow starts. This allows other codec to have some delay before 329 + * the data flow. 330 + * (ex: to drop start up pop noise before capture starts). 331 + */ 332 + if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) 333 + ret = regmap_fields_write(i2sctl->spken, id, LPAIF_I2SCTL_SPKEN_ENABLE); 334 + else 335 + ret = regmap_fields_write(i2sctl->micen, id, LPAIF_I2SCTL_MICEN_ENABLE); 336 + 337 + if (ret) { 338 + dev_err(dai->dev, "error writing to i2sctl reg: %d\n", ret); 339 + return ret; 340 + } 341 + 342 + /* 343 + * Check mi2s_was_prepared before enabling BCLK as lpass_cpu_daiops_prepare can 344 + * be called multiple times. It's paired with the clk_disable in 345 + * lpass_cpu_daiops_shutdown. 
346 + */ 347 + if (!drvdata->mi2s_was_prepared[dai->driver->id]) { 348 + ret = clk_enable(drvdata->mi2s_bit_clk[id]); 349 + if (ret) { 350 + dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret); 351 + return ret; 352 + } 353 + drvdata->mi2s_was_prepared[dai->driver->id] = true; 354 + } 355 + return 0; 356 + } 357 + 356 358 const struct snd_soc_dai_ops asoc_qcom_lpass_cpu_dai_ops = { 357 359 .set_sysclk = lpass_cpu_daiops_set_sysclk, 358 360 .startup = lpass_cpu_daiops_startup, 359 361 .shutdown = lpass_cpu_daiops_shutdown, 360 362 .hw_params = lpass_cpu_daiops_hw_params, 361 363 .trigger = lpass_cpu_daiops_trigger, 364 + .prepare = lpass_cpu_daiops_prepare, 362 365 }; 363 366 EXPORT_SYMBOL_GPL(asoc_qcom_lpass_cpu_dai_ops); 364 367
+4
sound/soc/qcom/lpass.h
··· 67 67 /* MI2S SD lines to use for playback/capture */ 68 68 unsigned int mi2s_playback_sd_mode[LPASS_MAX_MI2S_PORTS]; 69 69 unsigned int mi2s_capture_sd_mode[LPASS_MAX_MI2S_PORTS]; 70 + 71 + /* The state of MI2S prepare dai_ops was called */ 72 + bool mi2s_was_prepared[LPASS_MAX_MI2S_PORTS]; 73 + 70 74 int hdmi_port_enable; 71 75 72 76 /* low-power audio interface (LPAIF) registers */
+2
sound/soc/soc-core.c
··· 2225 2225 return NULL; 2226 2226 2227 2227 name = devm_kstrdup(dev, devname, GFP_KERNEL); 2228 + if (!name) 2229 + return NULL; 2228 2230 2229 2231 /* are we a "%s.%d" name (platform and SPI components) */ 2230 2232 found = strstr(name, dev->driver->name);
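The soc-core one-liner guards a `devm_kstrdup()` failure: without the check, the very next statement handed a possibly-NULL pointer to `strstr()`. A plain-C stand-in for that guard (function name is hypothetical; `name` plays the role of the dup'd string):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Return the driver-name suffix within name, or NULL on failure.
 * The NULL check is the fix: never call strstr(NULL, ...). */
static const char *demo_find_driver(const char *name, const char *drvname)
{
	if (!name)	/* devm_kstrdup() may have returned NULL */
		return NULL;

	return strstr(name, drvname);	/* original code continued here */
}
```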
+3 -3
sound/soc/soc-topology.c
··· 1901 1901 * @src: older version of pcm as a source 1902 1902 * @pcm: latest version of pcm created from the source 1903 1903 * 1904 - * Support from vesion 4. User should free the returned pcm manually. 1904 + * Support from version 4. User should free the returned pcm manually. 1905 1905 */ 1906 1906 static int pcm_new_ver(struct soc_tplg *tplg, 1907 1907 struct snd_soc_tplg_pcm *src, ··· 2089 2089 * @src: old version of phyical link config as a source 2090 2090 * @link: latest version of physical link config created from the source 2091 2091 * 2092 - * Support from vesion 4. User need free the returned link config manually. 2092 + * Support from version 4. User need free the returned link config manually. 2093 2093 */ 2094 2094 static int link_new_ver(struct soc_tplg *tplg, 2095 2095 struct snd_soc_tplg_link_config *src, ··· 2400 2400 * @src: old version of manifest as a source 2401 2401 * @manifest: latest version of manifest created from the source 2402 2402 * 2403 - * Support from vesion 4. Users need free the returned manifest manually. 2403 + * Support from version 4. Users need free the returned manifest manually. 2404 2404 */ 2405 2405 static int manifest_new_ver(struct soc_tplg *tplg, 2406 2406 struct snd_soc_tplg_manifest *src,
+1
sound/soc/sof/pm.c
··· 256 256 257 257 /* reset FW state */ 258 258 sdev->fw_state = SOF_FW_BOOT_NOT_STARTED; 259 + sdev->enabled_cores_mask = 0; 259 260 260 261 return ret; 261 262 }
+40
tools/arch/mips/include/uapi/asm/perf_regs.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef _ASM_MIPS_PERF_REGS_H 3 + #define _ASM_MIPS_PERF_REGS_H 4 + 5 + enum perf_event_mips_regs { 6 + PERF_REG_MIPS_PC, 7 + PERF_REG_MIPS_R1, 8 + PERF_REG_MIPS_R2, 9 + PERF_REG_MIPS_R3, 10 + PERF_REG_MIPS_R4, 11 + PERF_REG_MIPS_R5, 12 + PERF_REG_MIPS_R6, 13 + PERF_REG_MIPS_R7, 14 + PERF_REG_MIPS_R8, 15 + PERF_REG_MIPS_R9, 16 + PERF_REG_MIPS_R10, 17 + PERF_REG_MIPS_R11, 18 + PERF_REG_MIPS_R12, 19 + PERF_REG_MIPS_R13, 20 + PERF_REG_MIPS_R14, 21 + PERF_REG_MIPS_R15, 22 + PERF_REG_MIPS_R16, 23 + PERF_REG_MIPS_R17, 24 + PERF_REG_MIPS_R18, 25 + PERF_REG_MIPS_R19, 26 + PERF_REG_MIPS_R20, 27 + PERF_REG_MIPS_R21, 28 + PERF_REG_MIPS_R22, 29 + PERF_REG_MIPS_R23, 30 + PERF_REG_MIPS_R24, 31 + PERF_REG_MIPS_R25, 32 + PERF_REG_MIPS_R26, 33 + PERF_REG_MIPS_R27, 34 + PERF_REG_MIPS_R28, 35 + PERF_REG_MIPS_R29, 36 + PERF_REG_MIPS_R30, 37 + PERF_REG_MIPS_R31, 38 + PERF_REG_MIPS_MAX = PERF_REG_MIPS_R31 + 1, 39 + }; 40 + #endif /* _ASM_MIPS_PERF_REGS_H */
+2 -5
tools/arch/x86/include/asm/disabled-features.h
··· 56 56 # define DISABLE_PTI (1 << (X86_FEATURE_PTI & 31)) 57 57 #endif 58 58 59 - #ifdef CONFIG_IOMMU_SUPPORT 60 - # define DISABLE_ENQCMD 0 61 - #else 62 - # define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31)) 63 - #endif 59 + /* Force disable because it's broken beyond repair */ 60 + #define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31)) 64 61 65 62 #ifdef CONFIG_X86_SGX 66 63 # define DISABLE_SGX 0
+4
tools/bootconfig/include/linux/bootconfig.h
··· 4 4 5 5 #include "../../../../include/linux/bootconfig.h" 6 6 7 + #ifndef fallthrough 8 + # define fallthrough 9 + #endif 10 + 7 11 #endif
+1
tools/bootconfig/main.c
··· 399 399 } 400 400 /* TODO: Ensure the @path is initramfs/initrd image */ 401 401 if (fstat(fd, &stat) < 0) { 402 + ret = -errno; 402 403 pr_err("Failed to get the size of %s\n", path); 403 404 goto out; 404 405 }
+4
tools/objtool/arch/x86/decode.c
··· 747 747 748 748 list_for_each_entry(insn, &file->retpoline_call_list, call_node) { 749 749 750 + if (insn->type != INSN_JUMP_DYNAMIC && 751 + insn->type != INSN_CALL_DYNAMIC) 752 + continue; 753 + 750 754 if (!strcmp(insn->sec->name, ".text.__x86.indirect_thunk")) 751 755 continue; 752 756
+24 -1
tools/objtool/elf.c
··· 717 717 718 718 struct symbol *elf_create_undef_symbol(struct elf *elf, const char *name) 719 719 { 720 - struct section *symtab; 720 + struct section *symtab, *symtab_shndx; 721 721 struct symbol *sym; 722 722 Elf_Data *data; 723 723 Elf_Scn *s; ··· 768 768 769 769 symtab->len += data->d_size; 770 770 symtab->changed = true; 771 + 772 + symtab_shndx = find_section_by_name(elf, ".symtab_shndx"); 773 + if (symtab_shndx) { 774 + s = elf_getscn(elf->elf, symtab_shndx->idx); 775 + if (!s) { 776 + WARN_ELF("elf_getscn"); 777 + return NULL; 778 + } 779 + 780 + data = elf_newdata(s); 781 + if (!data) { 782 + WARN_ELF("elf_newdata"); 783 + return NULL; 784 + } 785 + 786 + data->d_buf = &sym->sym.st_size; /* conveniently 0 */ 787 + data->d_size = sizeof(Elf32_Word); 788 + data->d_align = 4; 789 + data->d_type = ELF_T_WORD; 790 + 791 + symtab_shndx->len += 4; 792 + symtab_shndx->changed = true; 793 + } 771 794 772 795 sym->sec = find_section_by_index(elf, 0); 773 796
-1
tools/perf/Makefile.config
··· 90 90 ifeq ($(ARCH),mips) 91 91 NO_PERF_REGS := 0 92 92 CFLAGS += -I$(OUTPUT)arch/mips/include/generated 93 - CFLAGS += -I../../arch/mips/include/uapi -I../../arch/mips/include/generated/uapi 94 93 LIBUNWIND_LIBS = -lunwind -lunwind-mips 95 94 endif 96 95
+6
tools/perf/builtin-record.c
··· 2714 2714 rec->no_buildid = true; 2715 2715 } 2716 2716 2717 + if (rec->opts.record_cgroup && !perf_can_record_cgroup()) { 2718 + pr_err("Kernel has no cgroup sampling support.\n"); 2719 + err = -EINVAL; 2720 + goto out_opts; 2721 + } 2722 + 2717 2723 if (rec->opts.kcore) 2718 2724 rec->data.is_dir = true; 2719 2725
+1
tools/perf/check-headers.sh
··· 39 39 arch/x86/tools/gen-insn-attr-x86.awk 40 40 arch/arm/include/uapi/asm/perf_regs.h 41 41 arch/arm64/include/uapi/asm/perf_regs.h 42 + arch/mips/include/uapi/asm/perf_regs.h 42 43 arch/powerpc/include/uapi/asm/perf_regs.h 43 44 arch/s390/include/uapi/asm/perf_regs.h 44 45 arch/x86/include/uapi/asm/perf_regs.h
+1 -1
tools/perf/tests/attr/base-record
··· 16 16 exclusive=0 17 17 exclude_user=0 18 18 exclude_kernel=0|1 19 - exclude_hv=0 19 + exclude_hv=0|1 20 20 exclude_idle=0 21 21 mmap=1 22 22 comm=1
+4 -2
tools/perf/util/bpf_counter.c
··· 521 521 522 522 evsel->bperf_leader_link_fd = bpf_link_get_fd_by_id(entry.link_id); 523 523 if (evsel->bperf_leader_link_fd < 0 && 524 - bperf_reload_leader_program(evsel, attr_map_fd, &entry)) 524 + bperf_reload_leader_program(evsel, attr_map_fd, &entry)) { 525 + err = -1; 525 526 goto out; 526 - 527 + } 527 528 /* 528 529 * The bpf_link holds reference to the leader program, and the 529 530 * leader program holds reference to the maps. Therefore, if ··· 551 550 /* Step 2: load the follower skeleton */ 552 551 evsel->follower_skel = bperf_follower_bpf__open(); 553 552 if (!evsel->follower_skel) { 553 + err = -1; 554 554 pr_err("Failed to open follower skeleton\n"); 555 555 goto out; 556 556 }
+6 -2
tools/perf/util/dwarf-aux.c
··· 975 975 if ((tag == DW_TAG_formal_parameter || 976 976 tag == DW_TAG_variable) && 977 977 die_compare_name(die_mem, fvp->name) && 978 - /* Does the DIE have location information or external instance? */ 978 + /* 979 + * Does the DIE have location information or const value 980 + * or external instance? 981 + */ 979 982 (dwarf_attr(die_mem, DW_AT_external, &attr) || 980 - dwarf_attr(die_mem, DW_AT_location, &attr))) 983 + dwarf_attr(die_mem, DW_AT_location, &attr) || 984 + dwarf_attr(die_mem, DW_AT_const_value, &attr))) 981 985 return DIE_FIND_CB_END; 982 986 if (dwarf_haspc(die_mem, fvp->addr)) 983 987 return DIE_FIND_CB_CONTINUE;
+1
tools/perf/util/env.c
··· 144 144 node = rb_entry(next, struct bpf_prog_info_node, rb_node); 145 145 next = rb_next(&node->rb_node); 146 146 rb_erase(&node->rb_node, root); 147 + free(node->info_linear); 147 148 free(node); 148 149 } 149 150
+1
tools/perf/util/evsel.c
··· 428 428 evsel->auto_merge_stats = orig->auto_merge_stats; 429 429 evsel->collect_stat = orig->collect_stat; 430 430 evsel->weak_group = orig->weak_group; 431 + evsel->use_config_name = orig->use_config_name; 431 432 432 433 if (evsel__copy_config_terms(evsel, orig) < 0) 433 434 goto out_err;
+2 -2
tools/perf/util/evsel.h
··· 83 83 bool collect_stat; 84 84 bool weak_group; 85 85 bool bpf_counter; 86 + bool use_config_name; 86 87 int bpf_fd; 87 88 struct bpf_object *bpf_obj; 89 + struct list_head config_terms; 88 90 }; 89 91 90 92 /* ··· 118 116 bool merged_stat; 119 117 bool reset_group; 120 118 bool errored; 121 - bool use_config_name; 122 119 struct hashmap *per_pkg_mask; 123 120 struct evsel *leader; 124 - struct list_head config_terms; 125 121 int err; 126 122 int cpu_iter; 127 123 struct {
+10
tools/perf/util/perf_api_probe.c
··· 103 103 evsel->core.attr.build_id = 1; 104 104 } 105 105 106 + static void perf_probe_cgroup(struct evsel *evsel) 107 + { 108 + evsel->core.attr.cgroup = 1; 109 + } 110 + 106 111 bool perf_can_sample_identifier(void) 107 112 { 108 113 return perf_probe_api(perf_probe_sample_identifier); ··· 186 181 bool perf_can_record_build_id(void) 187 182 { 188 183 return perf_probe_api(perf_probe_build_id); 184 + } 185 + 186 + bool perf_can_record_cgroup(void) 187 + { 188 + return perf_probe_api(perf_probe_cgroup); 189 189 }
+1
tools/perf/util/perf_api_probe.h
··· 12 12 bool perf_can_record_text_poke_events(void); 13 13 bool perf_can_sample_identifier(void); 14 14 bool perf_can_record_build_id(void); 15 + bool perf_can_record_cgroup(void); 15 16 16 17 #endif // __PERF_API_PROBE_H
+3
tools/perf/util/probe-finder.c
··· 190 190 immediate_value_is_supported()) { 191 191 Dwarf_Sword snum; 192 192 193 + if (!tvar) 194 + return 0; 195 + 193 196 dwarf_formsdata(&attr, &snum); 194 197 ret = asprintf(&tvar->value, "\\%ld", (long)snum); 195 198
+1
tools/perf/util/session.c
··· 1723 1723 if (event->header.size < hdr_sz || event->header.size > buf_sz) 1724 1724 return -1; 1725 1725 1726 + buf += hdr_sz; 1726 1727 rest = event->header.size - hdr_sz; 1727 1728 1728 1729 if (readn(fd, buf, rest) != (ssize_t)rest)
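The session.c fix is an off-by-a-header bug: the event header had already been copied to the start of `buf`, but the follow-up read of the payload reused `buf` from offset 0 and clobbered it. Advancing `buf` by `hdr_sz` first keeps header and payload contiguous. A sketch with `memcpy` standing in for `readn()` (names illustrative):

```c
#include <assert.h>
#include <string.h>

static void demo_read_event(const unsigned char *fd_data, size_t hdr_sz,
			    size_t total, unsigned char *buf)
{
	memcpy(buf, fd_data, hdr_sz);	/* header was already fetched */
	buf += hdr_sz;			/* the fix: skip past the header */
	memcpy(buf, fd_data + hdr_sz, total - hdr_sz);	/* rest of event */
}
```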
+3 -5
tools/perf/util/stat-display.c
··· 541 541 char *config; 542 542 int ret = 0; 543 543 544 - if (counter->uniquified_name || 544 + if (counter->uniquified_name || counter->use_config_name || 545 545 !counter->pmu_name || !strncmp(counter->name, counter->pmu_name, 546 546 strlen(counter->pmu_name))) 547 547 return; ··· 555 555 } 556 556 } else { 557 557 if (perf_pmu__has_hybrid()) { 558 - if (!counter->use_config_name) { 559 - ret = asprintf(&new_name, "%s/%s/", 560 - counter->pmu_name, counter->name); 561 - } 558 + ret = asprintf(&new_name, "%s/%s/", 559 + counter->pmu_name, counter->name); 562 560 } else { 563 561 ret = asprintf(&new_name, "%s [%s]", 564 562 counter->name, counter->pmu_name);
+1
tools/perf/util/symbol-elf.c
··· 2412 2412 2413 2413 list_for_each_entry_safe(pos, tmp, sdt_notes, note_list) { 2414 2414 list_del_init(&pos->note_list); 2415 + zfree(&pos->args); 2415 2416 zfree(&pos->name); 2416 2417 zfree(&pos->provider); 2417 2418 free(pos);
+6 -4
tools/testing/selftests/kvm/include/kvm_util.h
··· 43 43 VM_MODE_P40V48_4K, 44 44 VM_MODE_P40V48_64K, 45 45 VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */ 46 + VM_MODE_P47V64_4K, 46 47 NUM_VM_MODES, 47 48 }; 48 49 ··· 61 60 62 61 #elif defined(__s390x__) 63 62 64 - #define VM_MODE_DEFAULT VM_MODE_P52V48_4K 63 + #define VM_MODE_DEFAULT VM_MODE_P47V64_4K 65 64 #define MIN_PAGE_SHIFT 12U 66 65 #define ptes_per_page(page_size) ((page_size) / 16) 67 66 ··· 286 285 uint32_t num_percpu_pages, void *guest_code, 287 286 uint32_t vcpuids[]); 288 287 289 - /* Like vm_create_default_with_vcpus, but accepts mode as a parameter */ 288 + /* Like vm_create_default_with_vcpus, but accepts mode and slot0 memory as parameters */ 290 289 struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus, 291 - uint64_t extra_mem_pages, uint32_t num_percpu_pages, 292 - void *guest_code, uint32_t vcpuids[]); 290 + uint64_t slot0_mem_pages, uint64_t extra_mem_pages, 291 + uint32_t num_percpu_pages, void *guest_code, 292 + uint32_t vcpuids[]); 293 293 294 294 /* 295 295 * Adds a vCPU with reasonable defaults (e.g. a stack)

+1 -1
tools/testing/selftests/kvm/kvm_page_table_test.c
··· 268 268 269 269 /* Create a VM with enough guest pages */ 270 270 guest_num_pages = test_mem_size / guest_page_size; 271 - vm = vm_create_with_vcpus(mode, nr_vcpus, 271 + vm = vm_create_with_vcpus(mode, nr_vcpus, DEFAULT_GUEST_PHY_PAGES, 272 272 guest_num_pages, 0, guest_code, NULL); 273 273 274 274 /* Align down GPA of the testing memslot */
+43 -9
tools/testing/selftests/kvm/lib/kvm_util.c
··· 175 175 [VM_MODE_P40V48_4K] = "PA-bits:40, VA-bits:48, 4K pages", 176 176 [VM_MODE_P40V48_64K] = "PA-bits:40, VA-bits:48, 64K pages", 177 177 [VM_MODE_PXXV48_4K] = "PA-bits:ANY, VA-bits:48, 4K pages", 178 + [VM_MODE_P47V64_4K] = "PA-bits:47, VA-bits:64, 4K pages", 178 179 }; 179 180 _Static_assert(sizeof(strings)/sizeof(char *) == NUM_VM_MODES, 180 181 "Missing new mode strings?"); ··· 193 192 { 40, 48, 0x1000, 12 }, 194 193 { 40, 48, 0x10000, 16 }, 195 194 { 0, 0, 0x1000, 12 }, 195 + { 47, 64, 0x1000, 12 }, 196 196 }; 197 197 _Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES, 198 198 "Missing new mode params?"); ··· 279 277 TEST_FAIL("VM_MODE_PXXV48_4K not supported on non-x86 platforms"); 280 278 #endif 281 279 break; 280 + case VM_MODE_P47V64_4K: 281 + vm->pgtable_levels = 5; 282 + break; 282 283 default: 283 284 TEST_FAIL("Unknown guest mode, mode: 0x%x", mode); 284 285 } ··· 313 308 return vm; 314 309 } 315 310 311 + /* 312 + * VM Create with customized parameters 313 + * 314 + * Input Args: 315 + * mode - VM Mode (e.g. VM_MODE_P52V48_4K) 316 + * nr_vcpus - VCPU count 317 + * slot0_mem_pages - Slot0 physical memory size 318 + * extra_mem_pages - Non-slot0 physical memory total size 319 + * num_percpu_pages - Per-cpu physical memory pages 320 + * guest_code - Guest entry point 321 + * vcpuids - VCPU IDs 322 + * 323 + * Output Args: None 324 + * 325 + * Return: 326 + * Pointer to opaque structure that describes the created VM. 327 + * 328 + * Creates a VM with the mode specified by mode (e.g. VM_MODE_P52V48_4K), 329 + * with a customized slot0 memory size, currently at least 512 pages. 330 + * extra_mem_pages is only used to calculate the maximum page table size; 331 + * no memory is actually allocated for non-slot0 memory in this function. 
332 + */ 316 333 struct kvm_vm *vm_create_with_vcpus(enum vm_guest_mode mode, uint32_t nr_vcpus, 317 - uint64_t extra_mem_pages, uint32_t num_percpu_pages, 318 - void *guest_code, uint32_t vcpuids[]) 334 + uint64_t slot0_mem_pages, uint64_t extra_mem_pages, 335 + uint32_t num_percpu_pages, void *guest_code, 336 + uint32_t vcpuids[]) 319 337 { 338 + uint64_t vcpu_pages, extra_pg_pages, pages; 339 + struct kvm_vm *vm; 340 + int i; 341 + 342 + /* Force slot0 memory size to be no smaller than DEFAULT_GUEST_PHY_PAGES */ 343 + if (slot0_mem_pages < DEFAULT_GUEST_PHY_PAGES) 344 + slot0_mem_pages = DEFAULT_GUEST_PHY_PAGES; 345 + 320 346 /* The maximum page table size for a memory region will be when the 321 347 * smallest pages are used. Considering each page contains x page 322 348 * table descriptors, the total extra size for page tables (for extra 323 349 * N pages) will be: N/x+N/x^2+N/x^3+... which is definitely smaller 324 350 * than N/x*2. 325 351 */ 326 - uint64_t vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus; 327 - uint64_t extra_pg_pages = (extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2; 328 - uint64_t pages = DEFAULT_GUEST_PHY_PAGES + extra_mem_pages + vcpu_pages + extra_pg_pages; 329 - struct kvm_vm *vm; 330 - int i; 352 + vcpu_pages = (DEFAULT_STACK_PGS + num_percpu_pages) * nr_vcpus; 353 + extra_pg_pages = (slot0_mem_pages + extra_mem_pages + vcpu_pages) / PTES_PER_MIN_PAGE * 2; 354 + pages = slot0_mem_pages + vcpu_pages + extra_pg_pages; 331 355 332 356 TEST_ASSERT(nr_vcpus <= kvm_check_cap(KVM_CAP_MAX_VCPUS), 333 357 "nr_vcpus = %d too large for host, max-vcpus = %d", ··· 388 354 uint32_t num_percpu_pages, void *guest_code, 389 355 uint32_t vcpuids[]) 390 356 { 391 - return vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, extra_mem_pages, 392 - num_percpu_pages, guest_code, vcpuids); 357 + return vm_create_with_vcpus(VM_MODE_DEFAULT, nr_vcpus, DEFAULT_GUEST_PHY_PAGES, 358 + extra_mem_pages, num_percpu_pages, guest_code, vcpuids); 393 359 }
394 360 395 361 struct kvm_vm *vm_create_default(uint32_t vcpuid, uint64_t extra_mem_pages,
+1 -1
tools/testing/selftests/kvm/lib/perf_test_util.c
··· 69 69 TEST_ASSERT(vcpu_memory_bytes % perf_test_args.guest_page_size == 0, 70 70 "Guest memory size is not guest page size aligned."); 71 71 72 - vm = vm_create_with_vcpus(mode, vcpus, 72 + vm = vm_create_with_vcpus(mode, vcpus, DEFAULT_GUEST_PHY_PAGES, 73 73 (vcpus * vcpu_memory_bytes) / perf_test_args.guest_page_size, 74 74 0, guest_code, NULL); 75 75
+1 -1
tools/testing/selftests/kvm/memslot_perf_test.c
··· 267 267 data->hva_slots = malloc(sizeof(*data->hva_slots) * data->nslots); 268 268 TEST_ASSERT(data->hva_slots, "malloc() fail"); 269 269 270 - data->vm = vm_create_default(VCPU_ID, 1024, guest_code); 270 + data->vm = vm_create_default(VCPU_ID, mempages, guest_code); 271 271 272 272 pr_info_v("Adding slots 1..%i, each slot with %"PRIu64" pages + %"PRIu64" extra pages last\n", 273 273 max_mem_slots - 1, data->pages_per_slot, rempages);
+9 -4
tools/testing/selftests/net/mptcp/mptcp_connect.sh
··· 501 501 local stat_ackrx_now_l=$(get_mib_counter "${listener_ns}" "MPTcpExtMPCapableACKRX") 502 502 local stat_cookietx_now=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesSent") 503 503 local stat_cookierx_now=$(get_mib_counter "${listener_ns}" "TcpExtSyncookiesRecv") 504 + local stat_ooo_now=$(get_mib_counter "${listener_ns}" "TcpExtTCPOFOQueue") 504 505 505 506 expect_synrx=$((stat_synrx_last_l)) 506 507 expect_ackrx=$((stat_ackrx_last_l)) ··· 519 518 "${stat_synrx_now_l}" "${expect_synrx}" 1>&2 520 519 retc=1 521 520 fi 522 - if [ ${stat_ackrx_now_l} -lt ${expect_ackrx} ]; then 523 - printf "[ FAIL ] lower MPC ACK rx (%d) than expected (%d)\n" \ 524 - "${stat_ackrx_now_l}" "${expect_ackrx}" 1>&2 525 - rets=1 521 + if [ ${stat_ackrx_now_l} -lt ${expect_ackrx} ]; then 522 + if [ ${stat_ooo_now} -eq 0 ]; then 523 + printf "[ FAIL ] lower MPC ACK rx (%d) than expected (%d)\n" \ 524 + "${stat_ackrx_now_l}" "${expect_ackrx}" 1>&2 525 + rets=1 526 + else 527 + printf "[ Note ] fallback due to TCP OoO\n" 528 + fi 526 529 fi 527 530 528 531 if [ $retc -eq 0 ] && [ $rets -eq 0 ]; then
+1
tools/testing/selftests/proc/.gitignore
··· 10 10 /proc-self-map-files-002 11 11 /proc-self-syscall 12 12 /proc-self-wchan 13 + /proc-subset-pid 13 14 /proc-uptime-001 14 15 /proc-uptime-002 15 16 /read
+1
tools/testing/selftests/wireguard/netns.sh
··· 363 363 ip1 -4 route add default dev wg0 table 51820 364 364 ip1 -4 rule add not fwmark 51820 table 51820 365 365 ip1 -4 rule add table main suppress_prefixlength 0 366 + n1 bash -c 'printf 0 > /proc/sys/net/ipv4/conf/vethc/rp_filter' 366 367 # Flood the pings instead of sending just one, to trigger routing table reference counting bugs. 367 368 n1 ping -W 1 -c 100 -f 192.168.99.7 368 369 n1 ping -W 1 -c 100 -f abab::1111
-1
tools/testing/selftests/wireguard/qemu/kernel.config
··· 19 19 CONFIG_NETFILTER_XT_NAT=y 20 20 CONFIG_NETFILTER_XT_MATCH_LENGTH=y 21 21 CONFIG_NETFILTER_XT_MARK=y 22 - CONFIG_NF_CONNTRACK_IPV4=y 23 22 CONFIG_NF_NAT_IPV4=y 24 23 CONFIG_IP_NF_IPTABLES=y 25 24 CONFIG_IP_NF_FILTER=y