Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+1626 -1830
+3
.mailmap
···
  Ben Widawsky <bwidawsk@kernel.org> <ben@bwidawsk.net>
  Ben Widawsky <bwidawsk@kernel.org> <ben.widawsky@intel.com>
  Ben Widawsky <bwidawsk@kernel.org> <benjamin.widawsky@intel.com>
+ Bjorn Andersson <andersson@kernel.org> <bjorn@kryo.se>
+ Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@linaro.org>
+ Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@sonymobile.com>
  Björn Steinbrink <B.Steinbrink@gmx.de>
  Björn Töpel <bjorn@kernel.org> <bjorn.topel@gmail.com>
  Björn Töpel <bjorn@kernel.org> <bjorn.topel@intel.com>
+2 -2
Documentation/devicetree/bindings/dma/moxa,moxart-dma.txt
···
  Use specific request line passing from dma
  For example, MMC request line is 5

- sdhci: sdhci@98e00000 {
-     compatible = "moxa,moxart-sdhci";
+ mmc: mmc@98e00000 {
+     compatible = "moxa,moxart-mmc";
      reg = <0x98e00000 0x5C>;
      interrupts = <5 0>;
      clocks = <&clk_apb>;
+1 -1
Documentation/devicetree/bindings/memory-controllers/fsl/imx8m-ddrc.yaml
···
  title: i.MX8M DDR Controller

  maintainers:
-   - Leonard Crestez <leonard.crestez@nxp.com>
+   - Peng Fan <peng.fan@nxp.com>

  description:
    The DDRC block is integrated in i.MX8M for interfacing with DDR based
+1
Documentation/devicetree/bindings/opp/opp-v2-kryo-cpu.yaml
···
  patternProperties:
    '^opp-?[0-9]+$':
      type: object
+     additionalProperties: false

      properties:
        opp-hz: true
+1
Documentation/devicetree/bindings/opp/opp-v2-qcom-level.yaml
···
  patternProperties:
    '^opp-?[0-9]+$':
      type: object
+     additionalProperties: false

      properties:
        opp-level: true
+1 -1
Documentation/i2c/dev-interface.rst
···
  You do not need to pass the address byte; instead, set it through
  ioctl I2C_SLAVE before you try to access the device.

- You can do SMBus level transactions (see documentation file smbus-protocol
+ You can do SMBus level transactions (see documentation file smbus-protocol.rst
  for details) through the following functions::

    __s32 i2c_smbus_write_quick(int file, __u8 value);
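The dev-interface text patched above describes binding an address with the I2C_SLAVE ioctl and then issuing SMBus-level transactions. A minimal userspace sketch of that pattern (not part of this merge), assuming a hypothetical device at address 0x50 on bus 1 and the i2c_smbus_* helpers from i2c-tools' libi2c (link with -li2c):

    /* Sketch only: bus number and 0x50 address are example values. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/i2c-dev.h>      /* I2C_SLAVE */
    #include <i2c/smbus.h>          /* i2c_smbus_* helpers (libi2c) */

    int main(void)
    {
        int file = open("/dev/i2c-1", O_RDWR);
        if (file < 0)
            return 1;

        /* Set the target address once; later calls omit the address byte. */
        if (ioctl(file, I2C_SLAVE, 0x50) < 0)
            return 1;

        __s32 res = i2c_smbus_read_byte_data(file, 0x00); /* register 0 */
        if (res < 0)
            fprintf(stderr, "SMBus read failed\n");
        else
            printf("reg 0x00 = 0x%02x\n", res);

        close(file);
        return 0;
    }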
+3 -3
Documentation/i2c/slave-interface.rst
···
  ===========

  I2C slave backends behave like standard I2C clients. So, you can instantiate
- them as described in the document 'instantiating-devices'. The only difference
- is that i2c slave backends have their own address space. So, you have to add
- 0x1000 to the address you would originally request. An example for
+ them as described in the document instantiating-devices.rst. The only
+ difference is that i2c slave backends have their own address space. So, you
+ have to add 0x1000 to the address you would originally request. An example for
  instantiating the slave-eeprom driver from userspace at the 7 bit address 0x64
  on bus 1::

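The 0x1000 offset the slave-interface document describes is a plain addition on the requested address; for the document's own example of a backend at 7-bit address 0x64, a one-line sketch:

    /* Slave backend address space: requested address plus 0x1000. */
    unsigned short backend_addr = 0x1000 + 0x64;    /* == 0x1064 */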
+2 -2
Documentation/i2c/writing-clients.rst
···
  contains for each message the client address, the number of bytes of the
  message and the message data itself.

- You can read the file ``i2c-protocol`` for more information about the
+ You can read the file i2c-protocol.rst for more information about the
  actual I2C protocol.

···
  value, except for block transactions, which return the number of values
  read. The block buffers need not be longer than 32 bytes.

- You can read the file ``smbus-protocol`` for more information about the
+ You can read the file smbus-protocol.rst for more information about the
  actual SMBus protocol.

+12 -11
MAINTAINERS
···
  F: include/trace/events/afs.h

  AGPGART DRIVER
- M: David Airlie <airlied@linux.ie>
+ M: David Airlie <airlied@redhat.com>
+ L: dri-devel@lists.freedesktop.org
  S: Maintained
  T: git git://anongit.freedesktop.org/drm/drm
  F: drivers/char/agp/
···

  AMD MP2 I2C DRIVER
  M: Elie Morisse <syniurge@gmail.com>
- M: Nehal Shah <nehal-bakulchandra.shah@amd.com>
  M: Shyam Sundar S K <shyam-sundar.s-k@amd.com>
  L: linux-i2c@vger.kernel.org
  S: Maintained
···

  ARM/QUALCOMM SUPPORT
  M: Andy Gross <agross@kernel.org>
- M: Bjorn Andersson <bjorn.andersson@linaro.org>
+ M: Bjorn Andersson <andersson@kernel.org>
  R: Konrad Dybcio <konrad.dybcio@somainline.org>
  L: linux-arm-msm@vger.kernel.org
  S: Maintained
···
  F: include/linux/blk-cgroup.h

  CONTROL GROUP - CPUSET
+ M: Waiman Long <longman@redhat.com>
  M: Zefan Li <lizefan.x@bytedance.com>
  L: cgroups@vger.kernel.org
  S: Maintained
···
  F: drivers/gpu/drm/panel/panel-widechips-ws2401.c

  DRM DRIVERS
- M: David Airlie <airlied@linux.ie>
+ M: David Airlie <airlied@gmail.com>
  M: Daniel Vetter <daniel@ffwll.ch>
  L: dri-devel@lists.freedesktop.org
  S: Maintained
···

  HARDWARE SPINLOCK CORE
  M: Ohad Ben-Cohen <ohad@wizery.com>
- M: Bjorn Andersson <bjorn.andersson@linaro.org>
+ M: Bjorn Andersson <andersson@kernel.org>
  R: Baolin Wang <baolin.wang7@gmail.com>
  L: linux-remoteproc@vger.kernel.org
  S: Maintained
···
  F: drivers/pinctrl/pinctrl-at91*

  PIN CONTROLLER - QUALCOMM
- M: Bjorn Andersson <bjorn.andersson@linaro.org>
+ M: Bjorn Andersson <andersson@kernel.org>
  L: linux-arm-msm@vger.kernel.org
  S: Maintained
  F: Documentation/devicetree/bindings/pinctrl/qcom,*.txt
···
  F: drivers/media/platform/qcom/camss/

  QUALCOMM CLOCK DRIVERS
- M: Bjorn Andersson <bjorn.andersson@linaro.org>
+ M: Bjorn Andersson <andersson@kernel.org>
  L: linux-arm-msm@vger.kernel.org
  S: Supported
  T: git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
···
  F: fs/reiserfs/

  REMOTE PROCESSOR (REMOTEPROC) SUBSYSTEM
- M: Bjorn Andersson <bjorn.andersson@linaro.org>
+ M: Bjorn Andersson <andersson@kernel.org>
  M: Mathieu Poirier <mathieu.poirier@linaro.org>
  L: linux-remoteproc@vger.kernel.org
  S: Maintained
···
  F: include/linux/remoteproc/

  REMOTE PROCESSOR MESSAGING (RPMSG) SUBSYSTEM
- M: Bjorn Andersson <bjorn.andersson@linaro.org>
+ M: Bjorn Andersson <andersson@kernel.org>
  M: Mathieu Poirier <mathieu.poirier@linaro.org>
  L: linux-remoteproc@vger.kernel.org
  S: Maintained
···
  F: drivers/net/team/
  F: include/linux/if_team.h
  F: include/uapi/linux/if_team.h
- F: tools/testing/selftests/net/team/
+ F: tools/testing/selftests/drivers/net/team/

  TECHNOLOGIC SYSTEMS TS-5500 PLATFORM SUPPORT
  M: "Savoir-faire Linux Inc." <kernel@savoirfairelinux.com>
···
  F: include/uapi/linux/virtio_gpio.h

  VIRTIO GPU DRIVER
- M: David Airlie <airlied@linux.ie>
+ M: David Airlie <airlied@redhat.com>
  M: Gerd Hoffmann <kraxel@redhat.com>
  R: Gurchetan Singh <gurchetansingh@chromium.org>
  R: Chia-I Wu <olvaffe@gmail.com>
+1 -1
Makefile
···
  VERSION = 6
  PATCHLEVEL = 0
  SUBLEVEL = 0
- EXTRAVERSION = -rc6
+ EXTRAVERSION = -rc7
  NAME = Hurr durr I'ma ninja sloth

  # *DOCUMENTATION*
+1 -2
arch/arm/boot/dts/am33xx-l4.dtsi
···
  mmc1: mmc@0 {
      compatible = "ti,am335-sdhci";
      ti,needs-special-reset;
-     dmas = <&edma_xbar 24 0 0
-             &edma_xbar 25 0 0>;
+     dmas = <&edma 24 0>, <&edma 25 0>;
      dma-names = "tx", "rx";
      interrupts = <64>;
      reg = <0x0 0x1000>;
+4
arch/arm/boot/dts/am5748.dtsi
···
      status = "disabled";
  };

+ &usb4_tm {
+     status = "disabled";
+ };
+
  &atl_tm {
      status = "disabled";
  };
+1
arch/arm/boot/dts/integratorap-im-pd1.dts
···
  /* 640x480 16bpp @ 25.175MHz is 36827428 bytes/s */
  max-memory-bandwidth = <40000000>;
  memory-region = <&impd1_ram>;
+ dma-ranges;

  port@0 {
      #address-cells = <1>;
+5 -4
arch/arm/boot/dts/integratorap.dts
···

  pci: pciv3@62000000 {
      compatible = "arm,integrator-ap-pci", "v3,v360epc-pci";
+     device_type = "pci";
      #interrupt-cells = <1>;
      #size-cells = <2>;
      #address-cells = <3>;
···
  lm0: bus@c0000000 {
      compatible = "simple-bus";
      ranges = <0x00000000 0xc0000000 0x10000000>;
-     dma-ranges = <0x00000000 0x80000000 0x10000000>;
+     dma-ranges = <0x00000000 0xc0000000 0x10000000>;
      reg = <0xc0000000 0x10000000>;
      #address-cells = <1>;
      #size-cells = <1>;
···
  lm1: bus@d0000000 {
      compatible = "simple-bus";
      ranges = <0x00000000 0xd0000000 0x10000000>;
-     dma-ranges = <0x00000000 0x80000000 0x10000000>;
+     dma-ranges = <0x00000000 0xd0000000 0x10000000>;
      reg = <0xd0000000 0x10000000>;
      #address-cells = <1>;
      #size-cells = <1>;
···
  lm2: bus@e0000000 {
      compatible = "simple-bus";
      ranges = <0x00000000 0xe0000000 0x10000000>;
-     dma-ranges = <0x00000000 0x80000000 0x10000000>;
+     dma-ranges = <0x00000000 0xe0000000 0x10000000>;
      reg = <0xe0000000 0x10000000>;
      #address-cells = <1>;
      #size-cells = <1>;
···
  lm3: bus@f0000000 {
      compatible = "simple-bus";
      ranges = <0x00000000 0xf0000000 0x10000000>;
-     dma-ranges = <0x00000000 0x80000000 0x10000000>;
+     dma-ranges = <0x00000000 0xf0000000 0x10000000>;
      reg = <0xf0000000 0x10000000>;
      #address-cells = <1>;
      #size-cells = <1>;
+2 -2
arch/arm/boot/dts/lan966x.dtsi
···

  phy0: ethernet-phy@1 {
      reg = <1>;
-     interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
+     interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
      status = "disabled";
  };

  phy1: ethernet-phy@2 {
      reg = <2>;
-     interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
+     interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
      status = "disabled";
  };
  };
+1 -1
arch/arm/boot/dts/moxart-uc7112lx.dts
···
      clocks = <&ref12>;
  };

- &sdhci {
+ &mmc {
      status = "okay";
  };

+2 -2
arch/arm/boot/dts/moxart.dtsi
···
      clock-names = "PCLK";
  };

- sdhci: sdhci@98e00000 {
-     compatible = "moxa,moxart-sdhci";
+ mmc: mmc@98e00000 {
+     compatible = "moxa,moxart-mmc";
      reg = <0x98e00000 0x5C>;
      interrupts = <5 IRQ_TYPE_LEVEL_HIGH>;
      clocks = <&clk_apb>;
+2 -2
arch/arm/mach-sunplus/Kconfig
···
      select ARM_PSCI
      select PINCTRL
      select PINCTRL_SPPCTL
-     select SERIAL_SUNPLUS
-     select SERIAL_SUNPLUS_CONSOLE
+     select SERIAL_SUNPLUS if TTY
+     select SERIAL_SUNPLUS_CONSOLE if TTY
      help
        Support for Sunplus SP7021 SoC. It is based on ARM 4-core
        Cortex-A7 with various peripherals (e.g.: I2C, SPI, SDIO,
+5 -5
arch/arm64/boot/dts/freescale/imx8mm-mx8menlo.dts
···
   * CPLD_reset is RESET_SOFT in schematic
   */
  gpio-line-names =
-     "CPLD_D[1]", "CPLD_int", "CPLD_reset", "",
-     "", "CPLD_D[0]", "", "",
-     "", "", "", "CPLD_D[2]",
-     "CPLD_D[3]", "CPLD_D[4]", "CPLD_D[5]", "CPLD_D[6]",
-     "CPLD_D[7]", "", "", "",
+     "CPLD_D[6]", "CPLD_int", "CPLD_reset", "",
+     "", "CPLD_D[7]", "", "",
+     "", "", "", "CPLD_D[5]",
+     "CPLD_D[4]", "CPLD_D[3]", "CPLD_D[2]", "CPLD_D[1]",
+     "CPLD_D[0]", "", "", "",
      "", "", "", "",
      "", "", "", "KBD_intK",
      "", "", "", "";
-1
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml-mba8mx.dts
···

  /dts-v1/;

- #include <dt-bindings/phy/phy-imx8-pcie.h>
  #include "imx8mm-tqma8mqml.dtsi"
  #include "mba8mx.dtsi"

+1
arch/arm64/boot/dts/freescale/imx8mm-tqma8mqml.dtsi
···
   * Copyright 2020-2021 TQ-Systems GmbH
   */

+ #include <dt-bindings/phy/phy-imx8-pcie.h>
  #include "imx8mm.dtsi"

  / {
+5 -5
arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
···
      nxp,dvs-standby-voltage = <850000>;
      regulator-always-on;
      regulator-boot-on;
-     regulator-max-microvolt = <950000>;
-     regulator-min-microvolt = <850000>;
+     regulator-max-microvolt = <1050000>;
+     regulator-min-microvolt = <805000>;
      regulator-name = "On-module +VDD_ARM (BUCK2)";
      regulator-ramp-delay = <3125>;
  };
···
  reg_vdd_dram: BUCK3 {
      regulator-always-on;
      regulator-boot-on;
-     regulator-max-microvolt = <950000>;
-     regulator-min-microvolt = <850000>;
+     regulator-max-microvolt = <1000000>;
+     regulator-min-microvolt = <805000>;
      regulator-name = "On-module +VDD_GPU_VPU_DDR (BUCK3)";
  };
···
  reg_vdd_snvs: LDO2 {
      regulator-always-on;
      regulator-boot-on;
-     regulator-max-microvolt = <900000>;
+     regulator-max-microvolt = <800000>;
      regulator-min-microvolt = <800000>;
      regulator-name = "On-module +V0.8_SNVS (LDO2)";
  };
-1
arch/arm64/boot/dts/freescale/imx8mn.dtsi
···
      <&clk IMX8MN_CLK_GPU_SHADER>,
      <&clk IMX8MN_CLK_GPU_BUS_ROOT>,
      <&clk IMX8MN_CLK_GPU_AHB>;
- resets = <&src IMX8MQ_RESET_GPU_RESET>;
  };

  pgc_dispmix: power-domain@3 {
+8 -2
arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
···
  switch-1 {
      label = "S12";
      linux,code = <BTN_0>;
-     gpios = <&gpio5 26 GPIO_ACTIVE_LOW>;
+     gpios = <&gpio5 27 GPIO_ACTIVE_LOW>;
  };

  switch-2 {
      label = "S13";
      linux,code = <BTN_1>;
-     gpios = <&gpio5 27 GPIO_ACTIVE_LOW>;
+     gpios = <&gpio5 26 GPIO_ACTIVE_LOW>;
  };
  };

···
  &pcf85063 {
      /* RTC_EVENT# is connected on MBa8MPxL */
+     pinctrl-names = "default";
+     pinctrl-0 = <&pinctrl_pcf85063>;
      interrupt-parent = <&gpio4>;
      interrupts = <28 IRQ_TYPE_EDGE_FALLING>;
  };
···
  pinctrl_lvdsdisplay: lvdsdisplaygrp {
      fsl,pins = <MX8MP_IOMUXC_SAI5_RXC__GPIO3_IO20 0x10>; /* Power enable */
+ };
+
+ pinctrl_pcf85063: pcf85063grp {
+     fsl,pins = <MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x80>;
  };

  /* LVDS Backlight */
+8 -4
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
···
      pinctrl-names = "default";
      pinctrl-0 = <&pinctrl_reg_can>;
      regulator-name = "can2_stby";
-     gpio = <&gpio3 19 GPIO_ACTIVE_HIGH>;
-     enable-active-high;
+     gpio = <&gpio3 19 GPIO_ACTIVE_LOW>;
      regulator-min-microvolt = <3300000>;
      regulator-max-microvolt = <3300000>;
  };
···
  lan1: port@0 {
      reg = <0>;
      label = "lan1";
+     phy-mode = "internal";
      local-mac-address = [00 00 00 00 00 00];
  };

  lan2: port@1 {
      reg = <1>;
      label = "lan2";
+     phy-mode = "internal";
      local-mac-address = [00 00 00 00 00 00];
  };

  lan3: port@2 {
      reg = <2>;
      label = "lan3";
+     phy-mode = "internal";
      local-mac-address = [00 00 00 00 00 00];
  };

  lan4: port@3 {
      reg = <3>;
      label = "lan4";
+     phy-mode = "internal";
      local-mac-address = [00 00 00 00 00 00];
  };

  lan5: port@4 {
      reg = <4>;
      label = "lan5";
+     phy-mode = "internal";
      local-mac-address = [00 00 00 00 00 00];
  };

- port@6 {
-     reg = <6>;
+ port@5 {
+     reg = <5>;
      label = "cpu";
      ethernet = <&fec>;
      phy-mode = "rgmii-id";
+3
arch/arm64/boot/dts/freescale/imx8ulp.dtsi
···
      compatible = "fsl,imx8ulp-pcc3";
      reg = <0x292d0000 0x10000>;
      #clock-cells = <1>;
+     #reset-cells = <1>;
  };

  tpm5: tpm@29340000 {
···
      compatible = "fsl,imx8ulp-pcc4";
      reg = <0x29800000 0x10000>;
      #clock-cells = <1>;
+     #reset-cells = <1>;
  };

  lpi2c6: i2c@29840000 {
···
      compatible = "fsl,imx8ulp-pcc5";
      reg = <0x2da70000 0x10000>;
      #clock-cells = <1>;
+     #reset-cells = <1>;
  };
  };

+2 -1
arch/arm64/boot/dts/qcom/sc7280.dtsi
···
      <&gem_noc MASTER_APPSS_PROC 0 &cnoc2 SLAVE_USB3_0 0>;
  interconnect-names = "usb-ddr", "apps-usb";

+ wakeup-source;
+
  usb_1_dwc3: usb@a600000 {
      compatible = "snps,dwc3";
      reg = <0 0x0a600000 0 0xe000>;
···
      phys = <&usb_1_hsphy>, <&usb_1_ssphy>;
      phy-names = "usb2-phy", "usb3-phy";
      maximum-speed = "super-speed";
-     wakeup-source;
  };
  };

+2 -2
arch/arm64/boot/dts/qcom/sc8280xp-lenovo-thinkpad-x13s.dts
···
  };

  &remoteproc_adsp {
-     firmware-name = "qcom/sc8280xp/qcadsp8280.mbn";
+     firmware-name = "qcom/sc8280xp/LENOVO/21BX/qcadsp8280.mbn";

      status = "okay";
  };

  &remoteproc_nsp0 {
-     firmware-name = "qcom/sc8280xp/qccdsp8280.mbn";
+     firmware-name = "qcom/sc8280xp/LENOVO/21BX/qccdsp8280.mbn";

      status = "okay";
  };
+8 -16
arch/arm64/boot/dts/qcom/sm8150.dtsi
···
  compute-cb@1 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <1>;
-     iommus = <&apps_smmu 0x1401 0x2040>,
-              <&apps_smmu 0x1421 0x0>,
-              <&apps_smmu 0x2001 0x420>,
-              <&apps_smmu 0x2041 0x0>;
+     iommus = <&apps_smmu 0x1001 0x0460>;
  };

  compute-cb@2 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <2>;
-     iommus = <&apps_smmu 0x2 0x3440>,
-              <&apps_smmu 0x22 0x3400>;
+     iommus = <&apps_smmu 0x1002 0x0460>;
  };

  compute-cb@3 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <3>;
-     iommus = <&apps_smmu 0x3 0x3440>,
-              <&apps_smmu 0x1423 0x0>,
-              <&apps_smmu 0x2023 0x0>;
+     iommus = <&apps_smmu 0x1003 0x0460>;
  };

  compute-cb@4 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <4>;
-     iommus = <&apps_smmu 0x4 0x3440>,
-              <&apps_smmu 0x24 0x3400>;
+     iommus = <&apps_smmu 0x1004 0x0460>;
  };

  compute-cb@5 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <5>;
-     iommus = <&apps_smmu 0x5 0x3440>,
-              <&apps_smmu 0x25 0x3400>;
+     iommus = <&apps_smmu 0x1005 0x0460>;
  };

  compute-cb@6 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <6>;
-     iommus = <&apps_smmu 0x6 0x3460>;
+     iommus = <&apps_smmu 0x1006 0x0460>;
  };

  compute-cb@7 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <7>;
-     iommus = <&apps_smmu 0x7 0x3460>;
+     iommus = <&apps_smmu 0x1007 0x0460>;
  };

  compute-cb@8 {
      compatible = "qcom,fastrpc-compute-cb";
      reg = <8>;
-     iommus = <&apps_smmu 0x8 0x3460>;
+     iommus = <&apps_smmu 0x1008 0x0460>;
  };

  /* note: secure cb9 in downstream */
+1 -1
arch/arm64/boot/dts/qcom/sm8350.dtsi
···

  ufs_mem_phy: phy@1d87000 {
      compatible = "qcom,sm8350-qmp-ufs-phy";
-     reg = <0 0x01d87000 0 0xe10>;
+     reg = <0 0x01d87000 0 0x1c4>;
      #address-cells = <2>;
      #size-cells = <2>;
      ranges;
+2 -2
arch/arm64/boot/dts/rockchip/px30-engicam-px30-core.dtsi
···
  /*
   * Copyright (c) 2020 Fuzhou Rockchip Electronics Co., Ltd
   * Copyright (c) 2020 Engicam srl
-  * Copyright (c) 2020 Amarula Solutons
-  * Copyright (c) 2020 Amarula Solutons(India)
+  * Copyright (c) 2020 Amarula Solutions
+  * Copyright (c) 2020 Amarula Solutions(India)
   */

  #include <dt-bindings/gpio/gpio.h>
+5
arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
···
  };
  };
  };
+
+ &wlan_host_wake_l {
+     /* Kevin has an external pull up, but Bob does not. */
+     rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+9
arch/arm64/boot/dts/rockchip/rk3399-gru-chromebook.dtsi
···
  &edp {
      status = "okay";

+     /*
+      * eDP PHY/clk don't sync reliably at anything other than 24 MHz. Only
+      * set this here, because rk3399-gru.dtsi ensures we can generate this
+      * off GPLL=600MHz, whereas some other RK3399 boards may not.
+      */
+     assigned-clocks = <&cru PCLK_EDP>;
+     assigned-clock-rates = <24000000>;
+
      ports {
          edp_out: port@1 {
              reg = <1>;
···
  };

  wlan_host_wake_l: wlan-host-wake-l {
+     /* Kevin has an external pull up, but Bob does not */
      rockchip,pins = <0 RK_PB0 RK_FUNC_GPIO &pcfg_pull_none>;
  };
  };
-1
arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
···
  vcc5v0_host: vcc5v0-host-regulator {
      compatible = "regulator-fixed";
      gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
-     enable-active-low;
      pinctrl-names = "default";
      pinctrl-0 = <&vcc5v0_host_en>;
      regulator-name = "vcc5v0_host";
-1
arch/arm64/boot/dts/rockchip/rk3566-quartz64-a.dts
···

  vcc3v3_sd: vcc3v3_sd {
      compatible = "regulator-fixed";
-     enable-active-low;
      gpio = <&gpio0 RK_PA5 GPIO_ACTIVE_LOW>;
      pinctrl-names = "default";
      pinctrl-0 = <&vcc_sd_h>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3566-quartz64-b.dts
···
      disable-wp;
      pinctrl-names = "default";
      pinctrl-0 = <&sdmmc0_bus4 &sdmmc0_clk &sdmmc0_cmd &sdmmc0_det>;
-     sd-uhs-sdr104;
+     sd-uhs-sdr50;
      vmmc-supply = <&vcc3v3_sd>;
      vqmmc-supply = <&vccio_sd>;
      status = "okay";
+1 -1
arch/arm64/boot/dts/rockchip/rk3568-bpi-r2-pro.dts
···
  };

  &usb_host0_xhci {
-     extcon = <&usb2phy0>;
+     dr_mode = "host";
      status = "okay";
  };

+1 -1
arch/arm64/boot/dts/rockchip/rk3568-evb1-v10.dts
···
  };

  &usb2phy0_otg {
-     vbus-supply = <&vcc5v0_usb_otg>;
+     phy-supply = <&vcc5v0_usb_otg>;
      status = "okay";
  };

+1 -1
arch/arm64/boot/dts/rockchip/rk3568-rock-3a.dts
···
  };

  &usb2phy0_otg {
-     vbus-supply = <&vcc5v0_usb_otg>;
+     phy-supply = <&vcc5v0_usb_otg>;
      status = "okay";
  };

+1
arch/arm64/configs/defconfig
···
  CONFIG_ARCH_MEDIATEK=y
  CONFIG_ARCH_MESON=y
  CONFIG_ARCH_MVEBU=y
+ CONFIG_ARCH_NXP=y
  CONFIG_ARCH_MXC=y
  CONFIG_ARCH_NPCM=y
  CONFIG_ARCH_QCOM=y
+1 -1
arch/arm64/kernel/topology.c
···
  for_each_cpu(cpu, cpus) {
      if (!freq_counters_valid(cpu) ||
          freq_inv_set_max_ratio(cpu,
-                                cpufreq_get_hw_max_freq(cpu) * 1000,
+                                cpufreq_get_hw_max_freq(cpu) * 1000ULL,
                                 arch_timer_get_rate()))
          return;
  }
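The ULL suffix added above forces the multiply into 64 bits. A self-contained sketch of the failure mode it avoids, assuming (as the fix implies) the hardware max frequency is a 32-bit kHz value: without the suffix the product is computed in 32 bits and wraps for any max frequency above roughly 4.29 GHz.

    #include <stdio.h>

    int main(void)
    {
        unsigned int khz = 5000000;               /* hypothetical 5 GHz CPU, in kHz */

        unsigned long long wrong = khz * 1000;    /* 32-bit multiply, wraps mod 2^32 */
        unsigned long long right = khz * 1000ULL; /* operand promoted to 64 bits */

        printf("%llu\n", wrong);                  /* prints 705032704 */
        printf("%llu\n", right);                  /* prints 5000000000 */
        return 0;
    }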
+1 -1
arch/arm64/kvm/arm.c
···
   * at, which would end badly once inaccessible.
   */
  kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
- kmemleak_free_part(__va(hyp_mem_base), hyp_mem_size);
+ kmemleak_free_part_phys(hyp_mem_base, hyp_mem_size);
  return pkvm_drop_host_privileges();
  }

+18 -14
arch/arm64/mm/mmu.c
···
  }
  BUG_ON(p4d_bad(p4d));

- /*
-  * No need for locking during early boot. And it doesn't work as
-  * expected with KASLR enabled.
-  */
- if (system_state != SYSTEM_BOOTING)
-     mutex_lock(&fixmap_lock);
  pudp = pud_set_fixmap_offset(p4dp, addr);
  do {
      pud_t old_pud = READ_ONCE(*pudp);
···
  } while (pudp++, addr = next, addr != end);

  pud_clear_fixmap();
- if (system_state != SYSTEM_BOOTING)
-     mutex_unlock(&fixmap_lock);
  }

- static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
-                                  unsigned long virt, phys_addr_t size,
-                                  pgprot_t prot,
-                                  phys_addr_t (*pgtable_alloc)(int),
-                                  int flags)
+ static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
+                                         unsigned long virt, phys_addr_t size,
+                                         pgprot_t prot,
+                                         phys_addr_t (*pgtable_alloc)(int),
+                                         int flags)
  {
  unsigned long addr, end, next;
  pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
···
  } while (pgdp++, addr = next, addr != end);
  }

+ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+                                  unsigned long virt, phys_addr_t size,
+                                  pgprot_t prot,
+                                  phys_addr_t (*pgtable_alloc)(int),
+                                  int flags)
+ {
+     mutex_lock(&fixmap_lock);
+     __create_pgd_mapping_locked(pgdir, phys, virt, size, prot,
+                                 pgtable_alloc, flags);
+     mutex_unlock(&fixmap_lock);
+ }
+
  #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
- extern __alias(__create_pgd_mapping)
+ extern __alias(__create_pgd_mapping_locked)
  void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
                               phys_addr_t size, pgprot_t prot,
                               phys_addr_t (*pgtable_alloc)(int), int flags);
-2
arch/loongarch/include/asm/loongson.h
···
  #include <asm/addrspace.h>
  #include <asm/bootinfo.h>

- extern const struct plat_smp_ops loongson3_smp_ops;
-
  #define LOONGSON_REG(x) \
      (*(volatile u32 *)((char *)TO_UNCACHE(LOONGSON_REG_BASE) + (x)))

+2
arch/loongarch/kernel/head.S
···

  __REF

+ .align 12
+
  SYM_CODE_START(kernel_entry)			# kernel entry point

  /* Config direct window and set PG */
+2 -13
arch/loongarch/kernel/traps.c
···

  asmlinkage void noinstr do_ri(struct pt_regs *regs)
  {
-     int status = -1;
+     int status = SIGILL;
      unsigned int opcode = 0;
      unsigned int __user *era = (unsigned int __user *)exception_era(regs);
-     unsigned long old_era = regs->csr_era;
-     unsigned long old_ra = regs->regs[1];
      irqentry_state_t state = irqentry_enter(regs);

      local_irq_enable();
···

      die_if_kernel("Reserved instruction in kernel code", regs);

-     compute_return_era(regs);
-
      if (unlikely(get_user(opcode, era) < 0)) {
          status = SIGSEGV;
          current->thread.error_code = 1;
      }

-     if (status < 0)
-         status = SIGILL;
-
-     if (unlikely(status > 0)) {
-         regs->csr_era = old_era; /* Undo skip-over. */
-         regs->regs[1] = old_ra;
-         force_sig(status);
-     }
+     force_sig(status);

  out:
      local_irq_disable();
-9
arch/powerpc/mm/book3s64/radix_pgtable.c
···
  pmd = *pmdp;
  pmd_clear(pmdp);

- /*
-  * pmdp collapse_flush need to ensure that there are no parallel gup
-  * walk after this call. This is needed so that we can have stable
-  * page ref count when collapsing a page. We don't allow a collapse page
-  * if we have gup taken on the page. We can ensure that by sending IPI
-  * because gup walk happens with IRQ disabled.
-  */
- serialize_against_pte_lookup(vma->vm_mm);
-
  radix__flush_tlb_collapsed_pmd(vma->vm_mm, address);

  return pmd;
+1
arch/riscv/Kconfig
···
  config RISCV_ISA_SVPBMT
      bool "SVPBMT extension support"
      depends on 64BIT && MMU
+     depends on !XIP_KERNEL
      select RISCV_ALTERNATIVE
      default y
      help
+2 -2
arch/riscv/Kconfig.erratas
···

  config ERRATA_THEAD_PBMT
      bool "Apply T-Head memory type errata"
-     depends on ERRATA_THEAD && 64BIT
+     depends on ERRATA_THEAD && 64BIT && MMU
      select RISCV_ALTERNATIVE_EARLY
      default y
      help
···

  config ERRATA_THEAD_CMO
      bool "Apply T-Head cache management errata"
-     depends on ERRATA_THEAD
+     depends on ERRATA_THEAD && MMU
      select RISCV_DMA_NONCOHERENT
      default y
      help
+1
arch/riscv/errata/thead/errata.c
···
  if (stage == RISCV_ALTERNATIVES_EARLY_BOOT)
      return false;

+ riscv_cbom_block_size = L1_CACHE_BYTES;
  riscv_noncoherent_supported();
  return true;
  #else
+5
arch/riscv/include/asm/cacheflush.h
···

  #endif /* CONFIG_SMP */

+ /*
+  * The T-Head CMO errata internally probe the CBOM block size, but otherwise
+  * don't depend on Zicbom.
+  */
+ extern unsigned int riscv_cbom_block_size;
  #ifdef CONFIG_RISCV_ISA_ZICBOM
  void riscv_init_cbom_blocksize(void);
  #else
+1 -1
arch/riscv/kernel/setup.c
···
  setup_smp();
  #endif

- riscv_fill_hwcap();
  riscv_init_cbom_blocksize();
+ riscv_fill_hwcap();
  apply_boot_alternatives();
  }

+2
arch/riscv/kernel/signal.c
···
  if (restore_altstack(&frame->uc.uc_stack))
      goto badframe;

+ regs->cause = -1UL;
+
  return regs->a0;

  badframe:
+13 -10
arch/riscv/mm/dma-noncoherent.c
···
  #include <linux/of_device.h>
  #include <asm/cacheflush.h>

- static unsigned int riscv_cbom_block_size = L1_CACHE_BYTES;
+ unsigned int riscv_cbom_block_size;
  static bool noncoherent_supported;

  void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
···
  void riscv_init_cbom_blocksize(void)
  {
      struct device_node *node;
+     unsigned long cbom_hartid;
+     u32 val, probed_block_size;
      int ret;
-     u32 val;

+     probed_block_size = 0;
      for_each_of_cpu_node(node) {
          unsigned long hartid;
-         int cbom_hartid;

          ret = riscv_of_processor_hartid(node, &hartid);
          if (ret)
              continue;

-         if (hartid < 0)
-             continue;
-
          /* set block-size for cbom extension if available */
···
          if (ret)
              continue;

-         if (!riscv_cbom_block_size) {
-             riscv_cbom_block_size = val;
+         if (!probed_block_size) {
+             probed_block_size = val;
              cbom_hartid = hartid;
          } else {
-             if (riscv_cbom_block_size != val)
-                 pr_warn("cbom-block-size mismatched between harts %d and %lu\n",
+             if (probed_block_size != val)
+                 pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
                          cbom_hartid, hartid);
          }
      }
+
+     if (probed_block_size)
+         riscv_cbom_block_size = probed_block_size;
  }
  #endif

  void riscv_noncoherent_supported(void)
  {
+     WARN(!riscv_cbom_block_size,
+          "Non-coherent DMA support enabled without a block size\n");
      noncoherent_supported = true;
  }
+13 -3
arch/s390/kvm/gaccess.c
···
  PROT_TYPE_ALC = 2,
  PROT_TYPE_DAT = 3,
  PROT_TYPE_IEP = 4,
+ /* Dummy value for passing an initialized value when code != PGM_PROTECTION */
+ PROT_NONE,
  };

  static int trans_exc_ending(struct kvm_vcpu *vcpu, int code, unsigned long gva, u8 ar,
···

  switch (code) {
  case PGM_PROTECTION:
      switch (prot) {
+     case PROT_NONE:
+         /* We should never get here, acts like termination */
+         WARN_ON_ONCE(1);
+         break;
      case PROT_TYPE_IEP:
          tec->b61 = 1;
          fallthrough;
···
      return rc;
  } else {
      gpa = kvm_s390_real_to_abs(vcpu, ga);
-     if (kvm_is_error_gpa(vcpu->kvm, gpa))
+     if (kvm_is_error_gpa(vcpu->kvm, gpa)) {
          rc = PGM_ADDRESSING;
+         prot = PROT_NONE;
+     }
  }
  if (rc)
      return trans_exc(vcpu, rc, ga, ar, mode, prot);
···
  if (rc == PGM_PROTECTION && try_storage_prot_override)
      rc = access_guest_page_with_key(vcpu->kvm, mode, gpas[idx],
                                      data, fragment_len, PAGE_SPO_ACC);
- if (rc == PGM_PROTECTION)
-     prot = PROT_TYPE_KEYC;
  if (rc)
      break;
  len -= fragment_len;
···
  if (rc > 0) {
      bool terminate = (mode == GACC_STORE) && (idx > 0);

+     if (rc == PGM_PROTECTION)
+         prot = PROT_TYPE_KEYC;
+     else
+         prot = PROT_NONE;
      rc = trans_exc_ending(vcpu, rc, ga, ar, mode, prot, terminate);
  }
  out_unlock:
+1 -1
arch/s390/kvm/interrupt.c
···
  if (gaite->count == 0)
      return;
  if (gaite->aisb != 0)
-     set_bit_inv(gaite->aisbo, (unsigned long *)gaite->aisb);
+     set_bit_inv(gaite->aisbo, phys_to_virt(gaite->aisb));

  kvm = kvm_s390_pci_si_to_kvm(aift, si);
  if (!kvm)
+2 -2
arch/s390/kvm/kvm-s390.c
···
      goto out;
  }

- if (kvm_s390_pci_interp_allowed()) {
+ if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM)) {
      rc = kvm_s390_pci_init();
      if (rc) {
          pr_err("Unable to allocate AIFT for PCI\n");
···
  void kvm_arch_exit(void)
  {
      kvm_s390_gib_destroy();
-     if (kvm_s390_pci_interp_allowed())
+     if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM))
          kvm_s390_pci_exit();
      debug_unregister(kvm_s390_dbf);
      debug_unregister(kvm_s390_dbf_uv);
+14 -6
arch/s390/kvm/pci.c
···
  if (!zpci_aipb)
      return -ENOMEM;

- aift->sbv = airq_iv_create(ZPCI_NR_DEVICES, AIRQ_IV_ALLOC, 0);
+ aift->sbv = airq_iv_create(ZPCI_NR_DEVICES, AIRQ_IV_ALLOC, NULL);
  if (!aift->sbv) {
      rc = -ENOMEM;
      goto free_aipb;
···
      rc = -ENOMEM;
      goto free_sbv;
  }
- aift->gait = (struct zpci_gaite *)page_to_phys(page);
+ aift->gait = (struct zpci_gaite *)page_to_virt(page);

  zpci_aipb->aipb.faisb = virt_to_phys(aift->sbv->vector);
  zpci_aipb->aipb.gait = virt_to_phys(aift->gait);
···
  gaite->gisc = 0;
  gaite->aisbo = 0;
  gaite->gisa = 0;
- aift->kzdev[zdev->aisb] = 0;
+ aift->kzdev[zdev->aisb] = NULL;
  /* Clear zdev info */
  airq_iv_free_bit(aift->sbv, zdev->aisb);
  airq_iv_release(zdev->aibv);
···

  int kvm_s390_pci_init(void)
  {
+     zpci_kvm_hook.kvm_register = kvm_s390_pci_register_kvm;
+     zpci_kvm_hook.kvm_unregister = kvm_s390_pci_unregister_kvm;
+
+     if (!kvm_s390_pci_interp_allowed())
+         return 0;
+
      aift = kzalloc(sizeof(struct zpci_aift), GFP_KERNEL);
      if (!aift)
          return -ENOMEM;

      spin_lock_init(&aift->gait_lock);
      mutex_init(&aift->aift_lock);
-     zpci_kvm_hook.kvm_register = kvm_s390_pci_register_kvm;
-     zpci_kvm_hook.kvm_unregister = kvm_s390_pci_unregister_kvm;

      return 0;
  }

  void kvm_s390_pci_exit(void)
  {
-     mutex_destroy(&aift->aift_lock);
      zpci_kvm_hook.kvm_register = NULL;
      zpci_kvm_hook.kvm_unregister = NULL;
+
+     if (!kvm_s390_pci_interp_allowed())
+         return;
+
+     mutex_destroy(&aift->aift_lock);

      kfree(aift);
  }
+3 -3
arch/s390/kvm/pci.h
···
  static inline struct kvm *kvm_s390_pci_si_to_kvm(struct zpci_aift *aift,
                                                   unsigned long si)
  {
-     if (!IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM) || aift->kzdev == 0 ||
-         aift->kzdev[si] == 0)
-         return 0;
+     if (!IS_ENABLED(CONFIG_VFIO_PCI_ZDEV_KVM) || !aift->kzdev ||
+         !aift->kzdev[si])
+         return NULL;
      return aift->kzdev[si]->kvm;
  };

+3
arch/x86/include/asm/intel-family.h
···
  #define INTEL_FAM6_RAPTORLAKE_P	0xBA
  #define INTEL_FAM6_RAPTORLAKE_S	0xBF

+ #define INTEL_FAM6_METEORLAKE		0xAC
+ #define INTEL_FAM6_METEORLAKE_L	0xAA
+
  /* "Small Core" Processors (Atom) */

  #define INTEL_FAM6_ATOM_BONNELL	0x1C /* Diamondville, Pineview */
+1
arch/x86/include/asm/kvm_host.h
···
  struct fpu_guest guest_fpu;

  u64 xcr0;
+ u64 guest_supported_xcr0;

  struct kvm_pio_request pio;
  void *pio_data;
+4 -1
arch/x86/kernel/cpu/sgx/encl.c
···
  }

  va_page = sgx_encl_grow(encl, false);
- if (IS_ERR(va_page))
+ if (IS_ERR(va_page)) {
+     if (PTR_ERR(va_page) == -EBUSY)
+         vmret = VM_FAULT_NOPAGE;
      goto err_out_epc;
+ }

  if (va_page)
      list_add(&va_page->list, &encl->va_pages);
+9 -6
arch/x86/kernel/cpu/sgx/main.c
···
   * Reset post-kexec EPC pages to the uninitialized state. The pages are removed
   * from the input list, and made available for the page allocator. SECS pages
   * prepending their children in the input list are left intact.
+  *
+  * Return 0 when sanitization was successful or kthread was stopped, and the
+  * number of unsanitized pages otherwise.
   */
- static void __sgx_sanitize_pages(struct list_head *dirty_page_list)
+ static unsigned long __sgx_sanitize_pages(struct list_head *dirty_page_list)
  {
+     unsigned long left_dirty = 0;
      struct sgx_epc_page *page;
      LIST_HEAD(dirty);
      int ret;
···
  /* dirty_page_list is thread-local, no need for a lock: */
  while (!list_empty(dirty_page_list)) {
      if (kthread_should_stop())
-         return;
+         return 0;

      page = list_first_entry(dirty_page_list, struct sgx_epc_page, list);
···
      } else {
          /* The page is not yet clean - move to the dirty list. */
          list_move_tail(&page->list, &dirty);
+         left_dirty++;
      }

      cond_resched();
  }

  list_splice(&dirty, dirty_page_list);
+ return left_dirty;
  }

  static bool sgx_reclaimer_age(struct sgx_epc_page *epc_page)
···
   * required for SECS pages, whose child pages blocked EREMOVE.
   */
  __sgx_sanitize_pages(&sgx_dirty_page_list);
- __sgx_sanitize_pages(&sgx_dirty_page_list);
-
- /* sanity check: */
- WARN_ON(!list_empty(&sgx_dirty_page_list));
+ WARN_ON(__sgx_sanitize_pages(&sgx_dirty_page_list));

  while (!kthread_should_stop()) {
      if (try_to_freeze())
+8 -3
arch/x86/kvm/cpuid.c
···
  {
      struct kvm_lapic *apic = vcpu->arch.apic;
      struct kvm_cpuid_entry2 *best;
-     u64 guest_supported_xcr0;

      best = kvm_find_cpuid_entry(vcpu, 1);
      if (best && apic) {
···
          kvm_apic_set_version(vcpu);
      }

-     guest_supported_xcr0 =
+     vcpu->arch.guest_supported_xcr0 =
          cpuid_get_supported_xcr0(vcpu->arch.cpuid_entries, vcpu->arch.cpuid_nent);

-     vcpu->arch.guest_fpu.fpstate->user_xfeatures = guest_supported_xcr0;
+     /*
+      * FP+SSE can always be saved/restored via KVM_{G,S}ET_XSAVE, even if
+      * XSAVE/XCRO are not exposed to the guest, and even if XSAVE isn't
+      * supported by the host.
+      */
+     vcpu->arch.guest_fpu.fpstate->user_xfeatures = vcpu->arch.guest_supported_xcr0 |
+                                                    XFEATURE_MASK_FPSSE;

      kvm_update_pv_runtime(vcpu);

+3
arch/x86/kvm/emulate.c
···
  {
      u32 eax, ecx, edx;

+     if (!(ctxt->ops->get_cr(ctxt, 4) & X86_CR4_OSXSAVE))
+         return emulate_ud(ctxt);
+
      eax = reg_read(ctxt, VCPU_REGS_RAX);
      edx = reg_read(ctxt, VCPU_REGS_RDX);
      ecx = reg_read(ctxt, VCPU_REGS_RCX);
+2
arch/x86/kvm/mmu/mmu.c
···
  rmap_head = gfn_to_rmap(gfn, sp->role.level, slot);
  rmap_count = pte_list_add(cache, spte, rmap_head);

+ if (rmap_count > kvm->stat.max_mmu_rmap_size)
+     kvm->stat.max_mmu_rmap_size = rmap_count;
  if (rmap_count > RMAP_RECYCLE_THRESHOLD) {
      kvm_zap_all_rmap_sptes(kvm, rmap_head);
      kvm_flush_remote_tlbs_with_address(
+3 -7
arch/x86/kvm/x86.c
···
  }
  EXPORT_SYMBOL_GPL(kvm_load_host_xsave_state);

- static inline u64 kvm_guest_supported_xcr0(struct kvm_vcpu *vcpu)
- {
-     return vcpu->arch.guest_fpu.fpstate->user_xfeatures;
- }
-
  #ifdef CONFIG_X86_64
  static inline u64 kvm_guest_supported_xfd(struct kvm_vcpu *vcpu)
  {
-     return kvm_guest_supported_xcr0(vcpu) & XFEATURE_MASK_USER_DYNAMIC;
+     return vcpu->arch.guest_supported_xcr0 & XFEATURE_MASK_USER_DYNAMIC;
  }
  #endif

···
   * saving. However, xcr0 bit 0 is always set, even if the
   * emulated CPU does not support XSAVE (see kvm_vcpu_reset()).
   */
- valid_bits = kvm_guest_supported_xcr0(vcpu) | XFEATURE_MASK_FP;
+ valid_bits = vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FP;
  if (xcr0 & ~valid_bits)
      return 1;

···

  int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
  {
+     /* Note, #UD due to CR4.OSXSAVE=0 has priority over the intercept. */
      if (static_call(kvm_x86_get_cpl)(vcpu) != 0 ||
          __kvm_set_xcr(vcpu, kvm_rcx_read(vcpu), kvm_read_edx_eax(vcpu))) {
          kvm_inject_gp(vcpu, 0);
+1 -1
arch/x86/lib/usercopy.c
···
   * called from other contexts.
   */
  pagefault_disable();
- ret = __copy_from_user_inatomic(to, from, n);
+ ret = raw_copy_from_user(to, from, n);
  pagefault_enable();

  return ret;
+3
arch/x86/mm/Makefile
···
  KCOV_INSTRUMENT_mem_encrypt.o		:= n
  KCOV_INSTRUMENT_mem_encrypt_amd.o	:= n
  KCOV_INSTRUMENT_mem_encrypt_identity.o	:= n
+ KCOV_INSTRUMENT_pgprot.o		:= n

  KASAN_SANITIZE_mem_encrypt.o		:= n
  KASAN_SANITIZE_mem_encrypt_amd.o	:= n
  KASAN_SANITIZE_mem_encrypt_identity.o	:= n
+ KASAN_SANITIZE_pgprot.o		:= n

  # Disable KCSAN entirely, because otherwise we get warnings that some functions
  # reference __initdata sections.
···
  CFLAGS_REMOVE_mem_encrypt.o		= -pg
  CFLAGS_REMOVE_mem_encrypt_amd.o	= -pg
  CFLAGS_REMOVE_mem_encrypt_identity.o	= -pg
+ CFLAGS_REMOVE_pgprot.o		= -pg
  endif

  obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o mmap.o \
+2 -1
block/genhd.c
···
   * Prevent new I/O from crossing bio_queue_enter().
   */
  blk_queue_start_drain(q);
- blk_mq_freeze_queue_wait(q);

  if (!(disk->flags & GENHD_FL_HIDDEN)) {
      sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");
···
  sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
  pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
  device_del(disk_to_dev(disk));
+
+ blk_mq_freeze_queue_wait(q);

  blk_throtl_cancel_bios(disk->queue);

+1 -1
certs/Kconfig
···
      bool "Provide system-wide ring of trusted keys"
      depends on KEYS
      depends on ASYMMETRIC_KEY_TYPE
-     depends on X509_CERTIFICATE_PARSER
+     depends on X509_CERTIFICATE_PARSER = y
      help
        Provide a system keyring to which trusted keys can be added. Keys in
        the keyring are considered to be trusted. Keys may be added at will
+20 -3
drivers/acpi/processor_idle.c
···
  /* No delay is needed if we are in guest */
  if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
      return;
+ /*
+  * Modern (>=Nehalem) Intel systems use ACPI via intel_idle,
+  * not this code.  Assume that any Intel systems using this
+  * are ancient and may need the dummy wait.  This also assumes
+  * that the motivating chipset issue was Intel-only.
+  */
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+     return;
  #endif
- /* Dummy wait op - must do something useless after P_LVL2 read
-    because chipsets cannot guarantee that STPCLK# signal
-    gets asserted in time to freeze execution properly. */
+ /*
+  * Dummy wait op - must do something useless after P_LVL2 read
+  * because chipsets cannot guarantee that STPCLK# signal gets
+  * asserted in time to freeze execution properly
+  *
+  * This workaround has been in place since the original ACPI
+  * implementation was merged, circa 2002.
+  *
+  * If a profile is pointing to this instruction, please first
+  * consider moving your system to a more modern idle
+  * mechanism.
+  */
  inl(acpi_gbl_FADT.xpm_timer_block.address);
  }

+4
drivers/ata/libata-core.c
···
  { "PIONEER DVD-RW DVR-212D",	NULL,	ATA_HORKAGE_NOSETXFER },
  { "PIONEER DVD-RW DVR-216D",	NULL,	ATA_HORKAGE_NOSETXFER },

+ /* These specific Pioneer models have LPM issues */
+ { "PIONEER BD-RW BDR-207M",	NULL,	ATA_HORKAGE_NOLPM },
+ { "PIONEER BD-RW BDR-205",	NULL,	ATA_HORKAGE_NOLPM },
+
  /* Crucial BX100 SSD 500GB has broken LPM support */
  { "CT500BX100SSD1",		NULL,	ATA_HORKAGE_NOLPM },

+12 -12
drivers/ata/libata-sata.c
···
  EXPORT_SYMBOL_GPL(dev_attr_sw_activity);

  /**
-  * __ata_change_queue_depth - helper for ata_scsi_change_queue_depth
-  * @ap: ATA port to which the device change the queue depth
+  * ata_change_queue_depth - Set a device maximum queue depth
+  * @ap: ATA port of the target device
+  * @dev: target ATA device
   * @sdev: SCSI device to configure queue depth for
   * @queue_depth: new queue depth
   *
-  * libsas and libata have different approaches for associating a sdev to
-  * its ata_port.
+  * Helper to set a device maximum queue depth, usable with both libsas
+  * and libata.
   *
   */
- int __ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev,
-                              int queue_depth)
+ int ata_change_queue_depth(struct ata_port *ap, struct ata_device *dev,
+                            struct scsi_device *sdev, int queue_depth)
  {
-     struct ata_device *dev;
      unsigned long flags;

-     if (queue_depth < 1 || queue_depth == sdev->queue_depth)
+     if (!dev || !ata_dev_enabled(dev))
          return sdev->queue_depth;

-     dev = ata_scsi_find_dev(ap, sdev);
-     if (!dev || !ata_dev_enabled(dev))
+     if (queue_depth < 1 || queue_depth == sdev->queue_depth)
          return sdev->queue_depth;

      /* NCQ enabled? */
···

      return scsi_change_queue_depth(sdev, queue_depth);
  }
- EXPORT_SYMBOL_GPL(__ata_change_queue_depth);
+ EXPORT_SYMBOL_GPL(ata_change_queue_depth);

  /**
   * ata_scsi_change_queue_depth - SCSI callback for queue depth config
···
  {
      struct ata_port *ap = ata_shost_to_port(sdev->host);

-     return __ata_change_queue_depth(ap, sdev, queue_depth);
+     return ata_change_queue_depth(ap, ata_scsi_find_dev(ap, sdev),
+                                   sdev, queue_depth);
  }
  EXPORT_SYMBOL_GPL(ata_scsi_change_queue_depth);

+4 -6
drivers/ata/libata-scsi.c
···
  int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
  {
      struct request_queue *q = sdev->request_queue;
+     int depth = 1;

      if (!ata_id_has_unload(dev->id))
          dev->flags |= ATA_DFLAG_NO_UNLOAD;
···
      if (dev->flags & ATA_DFLAG_AN)
          set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);

-     if (dev->flags & ATA_DFLAG_NCQ) {
-         int depth;
-
+     if (dev->flags & ATA_DFLAG_NCQ)
          depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id));
-         depth = min(ATA_MAX_QUEUE, depth);
-         scsi_change_queue_depth(sdev, depth);
-     }
+     depth = min(ATA_MAX_QUEUE, depth);
+     scsi_change_queue_depth(sdev, depth);

      if (dev->flags & ATA_DFLAG_TRUSTED)
          sdev->security_supported = 1;
+1 -1
drivers/base/core.c
···
  }
  early_param("fw_devlink", fw_devlink_setup);

- static bool fw_devlink_strict = true;
+ static bool fw_devlink_strict;
  static int __init fw_devlink_strict_setup(char *arg)
  {
      return strtobool(arg, &fw_devlink_strict);
+3 -3
drivers/counter/104-quad-8.c
···
      return -EINVAL;
  }

+ /* Enable IRQ line */
+ irq_enabled |= BIT(event_node->channel);
+
  /* Skip configuration if it is the same as previously set */
  if (priv->irq_trigger[event_node->channel] == next_irq_trigger)
      continue;
···
            priv->irq_trigger[event_node->channel] << 3;
  iowrite8(QUAD8_CTR_IOR | ior_cfg,
           &priv->reg->channel[event_node->channel].control);
-
- /* Enable IRQ line */
- irq_enabled |= BIT(event_node->channel);
  }

  iowrite8(irq_enabled, &priv->reg->index_interrupt);
+1
drivers/dax/hmem/device.c
···
      .start = r->start,
      .end = r->end,
      .flags = IORESOURCE_MEM,
+     .desc = IORES_DESC_SOFT_RESERVED,
  };
  struct platform_device *pdev;
  struct memregion_info info;
+5 -1
drivers/firmware/arm_scmi/clock.c
···
  static const struct scmi_clock_info *
  scmi_clock_info_get(const struct scmi_protocol_handle *ph, u32 clk_id)
  {
+     struct scmi_clock_info *clk;
      struct clock_info *ci = ph->get_priv(ph);
-     struct scmi_clock_info *clk = ci->clk + clk_id;

+     if (clk_id >= ci->num_clocks)
+         return NULL;
+
+     clk = ci->clk + clk_id;
      if (!clk->name[0])
          return NULL;

+1
drivers/firmware/arm_scmi/optee.c
···
   * @channel_id: OP-TEE channel ID used for this transport
   * @tee_session: TEE session identifier
   * @caps: OP-TEE SCMI channel capabilities
+  * @rx_len: Response size
   * @mu: Mutex protection on channel access
   * @cinfo: SCMI channel information
   * @shmem: Virtual base address of the shared memory
+7 -3
drivers/firmware/arm_scmi/reset.c
···
  struct scmi_xfer *t;
  struct scmi_msg_reset_domain_reset *dom;
  struct scmi_reset_info *pi = ph->get_priv(ph);
- struct reset_dom_info *rdom = pi->dom_info + domain;
+ struct reset_dom_info *rdom;

- if (rdom->async_reset)
+ if (domain >= pi->num_domains)
+     return -EINVAL;
+
+ rdom = pi->dom_info + domain;
+ if (rdom->async_reset && flags & AUTONOMOUS_RESET)
      flags |= ASYNCHRONOUS_RESET;

  ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t);
···
  dom->flags = cpu_to_le32(flags);
  dom->reset_state = cpu_to_le32(state);

- if (rdom->async_reset)
+ if (flags & ASYNCHRONOUS_RESET)
      ret = ph->xops->do_xfer_with_response(ph, t);
  else
      ret = ph->xops->do_xfer(ph, t);
+20 -26
drivers/firmware/arm_scmi/scmi_pm_domain.c
···
  #include <linux/err.h>
  #include <linux/io.h>
  #include <linux/module.h>
- #include <linux/pm_clock.h>
  #include <linux/pm_domain.h>
  #include <linux/scmi_protocol.h>

···
  static int scmi_pd_power_off(struct generic_pm_domain *domain)
  {
      return scmi_pd_power(domain, false);
- }
-
- static int scmi_pd_attach_dev(struct generic_pm_domain *pd, struct device *dev)
- {
-     int ret;
-
-     ret = pm_clk_create(dev);
-     if (ret)
-         return ret;
-
-     ret = of_pm_clk_add_clks(dev);
-     if (ret >= 0)
-         return 0;
-
-     pm_clk_destroy(dev);
-     return ret;
- }
-
- static void scmi_pd_detach_dev(struct generic_pm_domain *pd, struct device *dev)
- {
-     pm_clk_destroy(dev);
  }

  static int scmi_pm_domain_probe(struct scmi_device *sdev)
···
  scmi_pd->genpd.name = scmi_pd->name;
  scmi_pd->genpd.power_off = scmi_pd_power_off;
  scmi_pd->genpd.power_on = scmi_pd_power_on;
- scmi_pd->genpd.attach_dev = scmi_pd_attach_dev;
- scmi_pd->genpd.detach_dev = scmi_pd_detach_dev;
- scmi_pd->genpd.flags = GENPD_FLAG_PM_CLK |
-                        GENPD_FLAG_ACTIVE_WAKEUP;

  pm_genpd_init(&scmi_pd->genpd, NULL,
                state == SCMI_POWER_STATE_GENERIC_OFF);
···
  scmi_pd_data->domains = domains;
  scmi_pd_data->num_domains = num_domains;

+ dev_set_drvdata(dev, scmi_pd_data);
+
  return of_genpd_add_provider_onecell(np, scmi_pd_data);
+ }
+
+ static void scmi_pm_domain_remove(struct scmi_device *sdev)
+ {
+     int i;
+     struct genpd_onecell_data *scmi_pd_data;
+     struct device *dev = &sdev->dev;
+     struct device_node *np = dev->of_node;
+
+     of_genpd_del_provider(np);
+
+     scmi_pd_data = dev_get_drvdata(dev);
+     for (i = 0; i < scmi_pd_data->num_domains; i++) {
+         if (!scmi_pd_data->domains[i])
+             continue;
+         pm_genpd_remove(scmi_pd_data->domains[i]);
+     }
  }

  static const struct scmi_device_id scmi_id_table[] = {
···
  static struct scmi_driver scmi_power_domain_driver = {
      .name = "scmi-power-domain",
      .probe = scmi_pm_domain_probe,
+     .remove = scmi_pm_domain_remove,
      .id_table = scmi_id_table,
  };
  module_scmi_driver(scmi_power_domain_driver);
+21 -4
drivers/firmware/arm_scmi/sensors.c
···
  {
      int ret;
      struct scmi_xfer *t;
+     struct sensors_info *si = ph->get_priv(ph);
+
+     if (sensor_id >= si->num_sensors)
+         return -EINVAL;

      ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_GET,
                                    sizeof(__le32), sizeof(__le32), &t);
···
      put_unaligned_le32(sensor_id, t->tx.buf);
      ret = ph->xops->do_xfer(ph, t);
      if (!ret) {
-         struct sensors_info *si = ph->get_priv(ph);
          struct scmi_sensor_info *s = si->sensors + sensor_id;

          *sensor_config = get_unaligned_le64(t->rx.buf);
···
      int ret;
      struct scmi_xfer *t;
      struct scmi_msg_sensor_config_set *msg;
+     struct sensors_info *si = ph->get_priv(ph);
+
+     if (sensor_id >= si->num_sensors)
+         return -EINVAL;

      ret = ph->xops->xfer_get_init(ph, SENSOR_CONFIG_SET,
                                    sizeof(*msg), 0, &t);
···
      ret = ph->xops->do_xfer(ph, t);
      if (!ret) {
-         struct sensors_info *si = ph->get_priv(ph);
          struct scmi_sensor_info *s = si->sensors + sensor_id;

          s->sensor_config = sensor_config;
···
      int ret;
      struct scmi_xfer *t;
      struct scmi_msg_sensor_reading_get *sensor;
+     struct scmi_sensor_info *s;
      struct sensors_info *si = ph->get_priv(ph);
-     struct scmi_sensor_info *s = si->sensors + sensor_id;

+     if (sensor_id >= si->num_sensors)
+         return -EINVAL;

      ret = ph->xops->xfer_get_init(ph, SENSOR_READING_GET,
                                    sizeof(*sensor), 0, &t);
···
      sensor = t->tx.buf;
      sensor->id = cpu_to_le32(sensor_id);
+     s = si->sensors + sensor_id;
      if (s->async) {
          sensor->flags = cpu_to_le32(SENSOR_READ_ASYNC);
          ret = ph->xops->do_xfer_with_response(ph, t);
···
      int ret;
      struct scmi_xfer *t;
      struct scmi_msg_sensor_reading_get *sensor;
+     struct scmi_sensor_info *s;
      struct sensors_info *si = ph->get_priv(ph);
-     struct scmi_sensor_info *s = si->sensors + sensor_id;

+     if (sensor_id >= si->num_sensors)
+         return -EINVAL;
+
+     s = si->sensors + sensor_id;
      if (!count || !readings ||
          (!s->num_axis && count > 1) || (s->num_axis && count > s->num_axis))
          return -EINVAL;
···
  scmi_sensor_info_get(const struct scmi_protocol_handle *ph, u32 sensor_id)
  {
      struct sensors_info *si = ph->get_priv(ph);
+
+     if (sensor_id >= si->num_sensors)
+         return NULL;

      return si->sensors + sensor_id;
  }
+4 -4
drivers/fpga/intel-m10-bmc-sec-update.c
···
  stride = regmap_get_reg_stride(sec->m10bmc->regmap);
  num_bits = FLASH_COUNT_SIZE * 8;

- flash_buf = kmalloc(FLASH_COUNT_SIZE, GFP_KERNEL);
- if (!flash_buf)
-     return -ENOMEM;
-
  if (FLASH_COUNT_SIZE % stride) {
      dev_err(sec->dev,
              "FLASH_COUNT_SIZE (0x%x) not aligned to stride (0x%x)\n",
···
      WARN_ON_ONCE(1);
      return -EINVAL;
  }
+
+ flash_buf = kmalloc(FLASH_COUNT_SIZE, GFP_KERNEL);
+ if (!flash_buf)
+     return -ENOMEM;

  ret = regmap_bulk_read(sec->m10bmc->regmap, STAGING_FLASH_COUNT,
                         flash_buf, FLASH_COUNT_SIZE / stride);
+10 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
···
  #include <linux/pm_runtime.h>
  #include <drm/drm_crtc_helper.h>
  #include <drm/drm_damage_helper.h>
+ #include <drm/drm_drv.h>
  #include <drm/drm_edid.h>
  #include <drm/drm_gem_framebuffer_helper.h>
  #include <drm/drm_fb_helper.h>
···
  }

  static const struct drm_framebuffer_funcs amdgpu_fb_funcs = {
+     .destroy = drm_gem_fb_destroy,
+     .create_handle = drm_gem_fb_create_handle,
+ };
+
+ static const struct drm_framebuffer_funcs amdgpu_fb_funcs_atomic = {
      .destroy = drm_gem_fb_destroy,
      .create_handle = drm_gem_fb_create_handle,
      .dirty = drm_atomic_helper_dirtyfb,
···
  if (ret)
      goto err;

- ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
+ if (drm_drv_uses_atomic_modeset(dev))
+     ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs_atomic);
+ else
+     ret = drm_framebuffer_init(dev, &rfb->base, &amdgpu_fb_funcs);
  if (ret)
      goto err;

+3
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
··· 181 181 for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++) { 182 182 if (adev->ip_versions[SDMA0_HWIP][0] < IP_VERSION(6, 0, 0)) 183 183 adev->mes.sdma_hqd_mask[i] = i ? 0 : 0x3fc; 184 + /* zero sdma_hqd_mask for non-existent engine */ 185 + else if (adev->sdma.num_instances == 1) 186 + adev->mes.sdma_hqd_mask[i] = i ? 0 : 0xfc; 184 187 else 185 188 adev->mes.sdma_hqd_mask[i] = 0xfc; 186 189 }
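Per the new comment, the extra branch zeroes the second pipe's HQD mask when only one SDMA instance exists, so MES cannot schedule queues on an engine that is not there; note the single-instance mask is also 0xfc rather than 0x3fc. A sketch of the mask setup (values illustrative):

    #include <stdio.h>

    #define MAX_SDMA_PIPES 2

    int main(void)
    {
        unsigned int mask[MAX_SDMA_PIPES];
        int num_instances = 1, i;

        for (i = 0; i < MAX_SDMA_PIPES; i++)  /* zero the pipe with no engine */
            mask[i] = (num_instances == 1 && i) ? 0 : 0xfc;

        printf("pipe0 0x%x pipe1 0x%x\n", mask[0], mask[1]);
        return 0;
    }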
+1 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 2484 2484 /* Intentionally setting invalid PTE flag 2485 2485 * combination to force a no-retry-fault 2486 2486 */ 2487 - flags = AMDGPU_PTE_EXECUTABLE | AMDGPU_PDE_PTE | 2488 - AMDGPU_PTE_TF; 2487 + flags = AMDGPU_PTE_SNOOPED | AMDGPU_PTE_PRT; 2489 2488 value = 0; 2490 2489 } else if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_NEVER) { 2491 2490 /* Redirect the access to the dummy page */
+5 -2
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 1103 1103 *flags |= AMDGPU_PDE_BFS(0x9); 1104 1104 1105 1105 } else if (level == AMDGPU_VM_PDB0) { 1106 - if (*flags & AMDGPU_PDE_PTE) 1106 + if (*flags & AMDGPU_PDE_PTE) { 1107 1107 *flags &= ~AMDGPU_PDE_PTE; 1108 - else 1108 + if (!(*flags & AMDGPU_PTE_VALID)) 1109 + *addr |= 1 << PAGE_SHIFT; 1110 + } else { 1109 1111 *flags |= AMDGPU_PTE_TF; 1112 + } 1110 1113 } 1111 1114 } 1112 1115
+10 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4759 4759 plane_info->visible = true; 4760 4760 plane_info->stereo_format = PLANE_STEREO_FORMAT_NONE; 4761 4761 4762 - plane_info->layer_index = 0; 4762 + plane_info->layer_index = plane_state->normalized_zpos; 4763 4763 4764 4764 ret = fill_plane_color_attributes(plane_state, plane_info->format, 4765 4765 &plane_info->color_space); ··· 4827 4827 dc_plane_state->global_alpha = plane_info.global_alpha; 4828 4828 dc_plane_state->global_alpha_value = plane_info.global_alpha_value; 4829 4829 dc_plane_state->dcc = plane_info.dcc; 4830 - dc_plane_state->layer_index = plane_info.layer_index; // Always returns 0 4830 + dc_plane_state->layer_index = plane_info.layer_index; 4831 4831 dc_plane_state->flip_int_enabled = true; 4832 4832 4833 4833 /* ··· 9484 9484 } 9485 9485 } 9486 9486 } 9487 + 9488 + /* 9489 + * DC consults the zpos (layer_index in DC terminology) to determine the 9490 + * hw plane on which to enable the hw cursor (see 9491 + * `dcn10_can_pipe_disable_cursor`). By now, all modified planes are in 9492 + * atomic state, so call drm helper to normalize zpos. 9493 + */ 9494 + drm_atomic_normalize_zpos(dev, state); 9487 9495 9488 9496 /* Remove exiting planes if they are modified */ 9489 9497 for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
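The amdgpu_dm change stops hardcoding layer_index to 0: it now carries the plane's normalized_zpos, and the atomic-check path calls drm_atomic_normalize_zpos() so every modified plane has a dense 0..n-1 zpos before DC decides which hw plane hosts the cursor. A standalone sketch of the normalization idea (the drm helper also resolves duplicate zpos values; this ignores ties):

    #include <stdio.h>
    #include <stdlib.h>

    struct plane { int zpos; int normalized; };

    static int by_zpos(const void *a, const void *b)
    {
        return ((const struct plane *)a)->zpos -
               ((const struct plane *)b)->zpos;
    }

    int main(void)
    {
        struct plane p[] = { { .zpos = 7 }, { .zpos = 0 }, { .zpos = 255 } };
        int i;

        qsort(p, 3, sizeof(p[0]), by_zpos);     /* bottom to top */
        for (i = 0; i < 3; i++) {
            p[i].normalized = i;                /* dense 0..n-1 */
            printf("zpos %3d -> %d\n", p[i].zpos, p[i].normalized);
        }
        return 0;
    }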
+6 -5
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
··· 99 99 return display_count; 100 100 } 101 101 102 - static void dcn31_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable) 102 + static void dcn31_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable) 103 103 { 104 104 struct dc *dc = clk_mgr_base->ctx->dc; 105 105 int i; ··· 110 110 if (pipe->top_pipe || pipe->prev_odm_pipe) 111 111 continue; 112 112 if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) { 113 - if (disable) 113 + if (disable) { 114 114 pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg); 115 - else 115 + reset_sync_context_for_pipe(dc, context, i); 116 + } else 116 117 pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg); 117 118 } 118 119 } ··· 212 211 } 213 212 214 213 if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { 215 - dcn31_disable_otg_wa(clk_mgr_base, true); 214 + dcn31_disable_otg_wa(clk_mgr_base, context, true); 216 215 217 216 clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; 218 217 dcn31_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); 219 - dcn31_disable_otg_wa(clk_mgr_base, false); 218 + dcn31_disable_otg_wa(clk_mgr_base, context, false); 220 219 221 220 update_dispclk = true; 222 221 }
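The same two-line change lands in all four DCN3.1x clock managers (this file and the DCN3.14/3.15/3.16 ones below): when the DISPCLK workaround force-disables a CRTC, the pipe's OTG sync grouping is now reset in the same step via the reset_sync_context_for_pipe() helper added in dc_resource.c further down, rather than being left pointing at a stale master. A sketch of the call shape (stand-in stubs, not the dc types):

    #include <stdbool.h>
    #include <stdio.h>

    static void immediate_disable_crtc(int i) { printf("disable otg %d\n", i); }
    static void reset_sync_context_for_pipe(int i) { printf("reset sync %d\n", i); }
    static void enable_crtc(int i) { printf("enable otg %d\n", i); }

    static void disable_otg_wa(bool disable, int pipe)
    {
        if (disable) {
            immediate_disable_crtc(pipe);
            reset_sync_context_for_pipe(pipe);  /* new: drop stale grouping */
        } else {
            enable_crtc(pipe);
        }
    }

    int main(void)
    {
        disable_otg_wa(true, 0);    /* around the DISPCLK change */
        disable_otg_wa(false, 0);
        return 0;
    }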
+7 -7
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
··· 119 119 return display_count; 120 120 } 121 121 122 - static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable) 122 + static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable) 123 123 { 124 124 struct dc *dc = clk_mgr_base->ctx->dc; 125 125 int i; ··· 129 129 130 130 if (pipe->top_pipe || pipe->prev_odm_pipe) 131 131 continue; 132 - if (pipe->stream && (pipe->stream->dpms_off || pipe->plane_state == NULL || 133 - dc_is_virtual_signal(pipe->stream->signal))) { 134 - if (disable) 132 + if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) { 133 + if (disable) { 135 134 pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg); 136 - else 135 + reset_sync_context_for_pipe(dc, context, i); 136 + } else 137 137 pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg); 138 138 } 139 139 } ··· 233 233 } 234 234 235 235 if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { 236 - dcn314_disable_otg_wa(clk_mgr_base, true); 236 + dcn314_disable_otg_wa(clk_mgr_base, context, true); 237 237 238 238 clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; 239 239 dcn314_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); 240 - dcn314_disable_otg_wa(clk_mgr_base, false); 240 + dcn314_disable_otg_wa(clk_mgr_base, context, false); 241 241 242 242 update_dispclk = true; 243 243 }
+21 -15
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
··· 46 46 #define TO_CLK_MGR_DCN315(clk_mgr)\ 47 47 container_of(clk_mgr, struct clk_mgr_dcn315, base) 48 48 49 + #define UNSUPPORTED_DCFCLK 10000000 50 + #define MIN_DPP_DISP_CLK 100000 51 + 49 52 static int dcn315_get_active_display_cnt_wa( 50 53 struct dc *dc, 51 54 struct dc_state *context) ··· 82 79 return display_count; 83 80 } 84 81 85 - static void dcn315_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable) 82 + static void dcn315_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable) 86 83 { 87 84 struct dc *dc = clk_mgr_base->ctx->dc; 88 85 int i; ··· 94 91 continue; 95 92 if (pipe->stream && (pipe->stream->dpms_off || pipe->plane_state == NULL || 96 93 dc_is_virtual_signal(pipe->stream->signal))) { 97 - if (disable) 94 + if (disable) { 98 95 pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg); 99 - else 96 + reset_sync_context_for_pipe(dc, context, i); 97 + } else 100 98 pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg); 101 99 } 102 100 } ··· 150 146 } 151 147 } 152 148 149 + /* Lock pstate by requesting unsupported dcfclk if change is unsupported */ 150 + if (!new_clocks->p_state_change_support) 151 + new_clocks->dcfclk_khz = UNSUPPORTED_DCFCLK; 153 152 if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) { 154 153 clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz; 155 154 dcn315_smu_set_hard_min_dcfclk(clk_mgr, clk_mgr_base->clks.dcfclk_khz); ··· 166 159 167 160 // workaround: Limit dppclk to 100Mhz to avoid lower eDP panel switch to plus 4K monitor underflow. 168 161 if (!IS_DIAG_DC(dc->ctx->dce_environment)) { 169 - if (new_clocks->dppclk_khz < 100000) 170 - new_clocks->dppclk_khz = 100000; 171 - if (new_clocks->dispclk_khz < 100000) 172 - new_clocks->dispclk_khz = 100000; 162 + if (new_clocks->dppclk_khz < MIN_DPP_DISP_CLK) 163 + new_clocks->dppclk_khz = MIN_DPP_DISP_CLK; 164 + if (new_clocks->dispclk_khz < MIN_DPP_DISP_CLK) 165 + new_clocks->dispclk_khz = MIN_DPP_DISP_CLK; 173 166 } 174 167 175 168 if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr->base.clks.dppclk_khz)) { ··· 182 175 if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { 183 176 /* No need to apply the w/a if we haven't taken over from bios yet */ 184 177 if (clk_mgr_base->clks.dispclk_khz) 185 - dcn315_disable_otg_wa(clk_mgr_base, true); 178 + dcn315_disable_otg_wa(clk_mgr_base, context, true); 186 179 187 180 clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; 188 181 dcn315_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); 189 182 if (clk_mgr_base->clks.dispclk_khz) 190 - dcn315_disable_otg_wa(clk_mgr_base, false); 183 + dcn315_disable_otg_wa(clk_mgr_base, context, false); 191 184 192 185 update_dispclk = true; 193 186 } ··· 282 275 { 283 276 .wm_inst = WM_A, 284 277 .wm_type = WM_TYPE_PSTATE_CHG, 285 - .pstate_latency_us = 64.0, 278 + .pstate_latency_us = 129.0, 286 279 .sr_exit_time_us = 11.5, 287 280 .sr_enter_plus_exit_time_us = 14.5, 288 281 .valid = true, ··· 290 283 { 291 284 .wm_inst = WM_B, 292 285 .wm_type = WM_TYPE_PSTATE_CHG, 293 - .pstate_latency_us = 64.0, 286 + .pstate_latency_us = 129.0, 294 287 .sr_exit_time_us = 11.5, 295 288 .sr_enter_plus_exit_time_us = 14.5, 296 289 .valid = true, ··· 298 291 { 299 292 .wm_inst = WM_C, 300 293 .wm_type = WM_TYPE_PSTATE_CHG, 301 - .pstate_latency_us = 64.0, 294 + .pstate_latency_us = 129.0, 302 295 .sr_exit_time_us = 11.5, 303 296 .sr_enter_plus_exit_time_us = 14.5, 
304 297 .valid = true, ··· 306 299 { 307 300 .wm_inst = WM_D, 308 301 .wm_type = WM_TYPE_PSTATE_CHG, 309 - .pstate_latency_us = 64.0, 302 + .pstate_latency_us = 129.0, 310 303 .sr_exit_time_us = 11.5, 311 304 .sr_enter_plus_exit_time_us = 14.5, 312 305 .valid = true, ··· 563 556 ASSERT(bw_params->clk_table.entries[i-1].dcfclk_mhz); 564 557 bw_params->vram_type = bios_info->memory_type; 565 558 bw_params->num_channels = bios_info->ma_channel_number; 566 - if (!bw_params->num_channels) 567 - bw_params->num_channels = 2; 559 + bw_params->dram_channel_width_bytes = bios_info->memory_type == 0x22 ? 8 : 4; 568 560 569 561 for (i = 0; i < WM_SET_COUNT; i++) { 570 562 bw_params->wm_table.entries[i].wm_inst = i;
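Besides threading the new context parameter through the workaround, the DCN3.15 hunks make three behavioral changes: an unreachable hard-min DCFCLK (UNSUPPORTED_DCFCLK, 10 GHz in kHz) is requested to lock p-state when a change is unsupported, the 64 us p-state latency in the watermark tables rises to 129 us, and the DRAM channel width is now derived from the reported memory type instead of defaulting num_channels to 2. The magic 100000 floor also becomes MIN_DPP_DISP_CLK; a sketch of that clamp (values in kHz, illustrative):

    #include <stdio.h>

    #define MIN_DPP_DISP_CLK 100000

    static int clamp_clk_khz(int khz)
    {
        return khz < MIN_DPP_DISP_CLK ? MIN_DPP_DISP_CLK : khz;
    }

    int main(void)
    {
        printf("%d\n", clamp_clk_khz(48000));   /* raised to 100000 */
        printf("%d\n", clamp_clk_khz(600000));  /* unchanged */
        return 0;
    }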
+6 -5
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
··· 112 112 return display_count; 113 113 } 114 114 115 - static void dcn316_disable_otg_wa(struct clk_mgr *clk_mgr_base, bool disable) 115 + static void dcn316_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable) 116 116 { 117 117 struct dc *dc = clk_mgr_base->ctx->dc; 118 118 int i; ··· 124 124 continue; 125 125 if (pipe->stream && (pipe->stream->dpms_off || pipe->plane_state == NULL || 126 126 dc_is_virtual_signal(pipe->stream->signal))) { 127 - if (disable) 127 + if (disable) { 128 128 pipe->stream_res.tg->funcs->immediate_disable_crtc(pipe->stream_res.tg); 129 - else 129 + reset_sync_context_for_pipe(dc, context, i); 130 + } else 130 131 pipe->stream_res.tg->funcs->enable_crtc(pipe->stream_res.tg); 131 132 } 132 133 } ··· 222 221 } 223 222 224 223 if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) { 225 - dcn316_disable_otg_wa(clk_mgr_base, true); 224 + dcn316_disable_otg_wa(clk_mgr_base, context, true); 226 225 227 226 clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz; 228 227 dcn316_smu_set_dispclk(clk_mgr, clk_mgr_base->clks.dispclk_khz); 229 - dcn316_disable_otg_wa(clk_mgr_base, false); 228 + dcn316_disable_otg_wa(clk_mgr_base, context, false); 230 229 231 230 update_dispclk = true; 232 231 }
+17
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 3584 3584 } 3585 3585 } 3586 3586 3587 + void reset_sync_context_for_pipe(const struct dc *dc, 3588 + struct dc_state *context, 3589 + uint8_t pipe_idx) 3590 + { 3591 + int i; 3592 + struct pipe_ctx *pipe_ctx_reset; 3593 + 3594 + /* reset the otg sync context for the pipe and its slave pipes if any */ 3595 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 3596 + pipe_ctx_reset = &context->res_ctx.pipe_ctx[i]; 3597 + 3598 + if (((GET_PIPE_SYNCD_FROM_PIPE(pipe_ctx_reset) == pipe_idx) && 3599 + IS_PIPE_SYNCD_VALID(pipe_ctx_reset)) || (i == pipe_idx)) 3600 + SET_PIPE_SYNCD_TO_PIPE(pipe_ctx_reset, i); 3601 + } 3602 + } 3603 + 3587 3604 uint8_t resource_transmitter_to_phy_idx(const struct dc *dc, enum transmitter transmitter) 3588 3605 { 3589 3606 /* TODO - get transmitter to phy idx mapping from DMUB */
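reset_sync_context_for_pipe() walks every pipe and, for the target pipe and any pipe whose recorded master is the target, points the sync ID back at the pipe itself, i.e. it dissolves the OTG sync group the disabled CRTC anchored. A standalone sketch of that semantics (plain array in place of the GET/SET_PIPE_SYNCD macros):

    #include <stdio.h>

    #define PIPE_COUNT 4

    static int syncd[PIPE_COUNT];   /* syncd[i] = master pipe of pipe i */

    static void reset_sync_context_for_pipe(int idx)
    {
        int i;

        for (i = 0; i < PIPE_COUNT; i++)
            if (syncd[i] == idx || i == idx)
                syncd[i] = i;       /* pipe becomes its own master */
    }

    int main(void)
    {
        int i;

        /* pipes 1 and 3 synced to master pipe 0; pipe 2 independent */
        syncd[0] = 0; syncd[1] = 0; syncd[2] = 2; syncd[3] = 0;

        reset_sync_context_for_pipe(0);
        for (i = 0; i < PIPE_COUNT; i++)
            printf("pipe %d -> master %d\n", i, syncd[i]);
        return 0;
    }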
+4 -2
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 2164 2164 continue; 2165 2165 if (pipe_ctx->stream->signal != SIGNAL_TYPE_HDMI_TYPE_A) 2166 2166 continue; 2167 - if (pipe_ctx->stream_res.audio != NULL) { 2167 + if (pipe_ctx->stream_res.audio != NULL && 2168 + pipe_ctx->stream_res.audio->enabled == false) { 2168 2169 struct audio_output audio_output; 2169 2170 2170 2171 build_audio_output(context, pipe_ctx, &audio_output); ··· 2205 2204 if (!dc_is_dp_signal(pipe_ctx->stream->signal)) 2206 2205 continue; 2207 2206 2208 - if (pipe_ctx->stream_res.audio != NULL) { 2207 + if (pipe_ctx->stream_res.audio != NULL && 2208 + pipe_ctx->stream_res.audio->enabled == false) { 2209 2209 struct audio_output audio_output; 2210 2210 2211 2211 build_audio_output(context, pipe_ctx, &audio_output);
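Both dce110 loops now skip audio instances whose enabled flag is already set, so bringing up a new stream cannot reprogram an audio back end that is live on another one. A sketch of the idempotence guard (types illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    struct audio { bool enabled; };

    static void enable_audio(struct audio *a, int idx)
    {
        if (a == NULL || a->enabled)    /* leave live instances alone */
            return;
        printf("programming audio %d\n", idx);
        a->enabled = true;
    }

    int main(void)
    {
        struct audio a = { .enabled = false };

        enable_audio(&a, 0);    /* programs once */
        enable_audio(&a, 0);    /* second call is a no-op */
        return 0;
    }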
-220
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dsc.h
··· 445 445 type DSCRM_DSC_FORWARD_EN; \ 446 446 type DSCRM_DSC_OPP_PIPE_SOURCE 447 447 448 - #define DSC_REG_LIST_DCN314(id) \ 449 - SRI(DSC_TOP_CONTROL, DSC_TOP, id),\ 450 - SRI(DSC_DEBUG_CONTROL, DSC_TOP, id),\ 451 - SRI(DSCC_CONFIG0, DSCC, id),\ 452 - SRI(DSCC_CONFIG1, DSCC, id),\ 453 - SRI(DSCC_STATUS, DSCC, id),\ 454 - SRI(DSCC_INTERRUPT_CONTROL_STATUS, DSCC, id),\ 455 - SRI(DSCC_PPS_CONFIG0, DSCC, id),\ 456 - SRI(DSCC_PPS_CONFIG1, DSCC, id),\ 457 - SRI(DSCC_PPS_CONFIG2, DSCC, id),\ 458 - SRI(DSCC_PPS_CONFIG3, DSCC, id),\ 459 - SRI(DSCC_PPS_CONFIG4, DSCC, id),\ 460 - SRI(DSCC_PPS_CONFIG5, DSCC, id),\ 461 - SRI(DSCC_PPS_CONFIG6, DSCC, id),\ 462 - SRI(DSCC_PPS_CONFIG7, DSCC, id),\ 463 - SRI(DSCC_PPS_CONFIG8, DSCC, id),\ 464 - SRI(DSCC_PPS_CONFIG9, DSCC, id),\ 465 - SRI(DSCC_PPS_CONFIG10, DSCC, id),\ 466 - SRI(DSCC_PPS_CONFIG11, DSCC, id),\ 467 - SRI(DSCC_PPS_CONFIG12, DSCC, id),\ 468 - SRI(DSCC_PPS_CONFIG13, DSCC, id),\ 469 - SRI(DSCC_PPS_CONFIG14, DSCC, id),\ 470 - SRI(DSCC_PPS_CONFIG15, DSCC, id),\ 471 - SRI(DSCC_PPS_CONFIG16, DSCC, id),\ 472 - SRI(DSCC_PPS_CONFIG17, DSCC, id),\ 473 - SRI(DSCC_PPS_CONFIG18, DSCC, id),\ 474 - SRI(DSCC_PPS_CONFIG19, DSCC, id),\ 475 - SRI(DSCC_PPS_CONFIG20, DSCC, id),\ 476 - SRI(DSCC_PPS_CONFIG21, DSCC, id),\ 477 - SRI(DSCC_PPS_CONFIG22, DSCC, id),\ 478 - SRI(DSCC_MEM_POWER_CONTROL, DSCC, id),\ 479 - SRI(DSCC_R_Y_SQUARED_ERROR_LOWER, DSCC, id),\ 480 - SRI(DSCC_R_Y_SQUARED_ERROR_UPPER, DSCC, id),\ 481 - SRI(DSCC_G_CB_SQUARED_ERROR_LOWER, DSCC, id),\ 482 - SRI(DSCC_G_CB_SQUARED_ERROR_UPPER, DSCC, id),\ 483 - SRI(DSCC_B_CR_SQUARED_ERROR_LOWER, DSCC, id),\ 484 - SRI(DSCC_B_CR_SQUARED_ERROR_UPPER, DSCC, id),\ 485 - SRI(DSCC_MAX_ABS_ERROR0, DSCC, id),\ 486 - SRI(DSCC_MAX_ABS_ERROR1, DSCC, id),\ 487 - SRI(DSCC_RATE_BUFFER0_MAX_FULLNESS_LEVEL, DSCC, id),\ 488 - SRI(DSCC_RATE_BUFFER1_MAX_FULLNESS_LEVEL, DSCC, id),\ 489 - SRI(DSCC_RATE_BUFFER2_MAX_FULLNESS_LEVEL, DSCC, id),\ 490 - SRI(DSCC_RATE_BUFFER3_MAX_FULLNESS_LEVEL, DSCC, id),\ 491 - SRI(DSCC_RATE_CONTROL_BUFFER0_MAX_FULLNESS_LEVEL, DSCC, id),\ 492 - SRI(DSCC_RATE_CONTROL_BUFFER1_MAX_FULLNESS_LEVEL, DSCC, id),\ 493 - SRI(DSCC_RATE_CONTROL_BUFFER2_MAX_FULLNESS_LEVEL, DSCC, id),\ 494 - SRI(DSCC_RATE_CONTROL_BUFFER3_MAX_FULLNESS_LEVEL, DSCC, id),\ 495 - SRI(DSCCIF_CONFIG0, DSCCIF, id),\ 496 - SRI(DSCCIF_CONFIG1, DSCCIF, id),\ 497 - SRI(DSCRM_DSC_FORWARD_CONFIG, DSCRM, id) 498 - 499 - #define DSC_REG_LIST_SH_MASK_DCN314(mask_sh)\ 500 - DSC_SF(DSC_TOP0_DSC_TOP_CONTROL, DSC_CLOCK_EN, mask_sh), \ 501 - DSC_SF(DSC_TOP0_DSC_TOP_CONTROL, DSC_DISPCLK_R_GATE_DIS, mask_sh), \ 502 - DSC_SF(DSC_TOP0_DSC_TOP_CONTROL, DSC_DSCCLK_R_GATE_DIS, mask_sh), \ 503 - DSC_SF(DSC_TOP0_DSC_DEBUG_CONTROL, DSC_DBG_EN, mask_sh), \ 504 - DSC_SF(DSC_TOP0_DSC_DEBUG_CONTROL, DSC_TEST_CLOCK_MUX_SEL, mask_sh), \ 505 - DSC_SF(DSCC0_DSCC_CONFIG0, NUMBER_OF_SLICES_PER_LINE, mask_sh), \ 506 - DSC_SF(DSCC0_DSCC_CONFIG0, ALTERNATE_ICH_ENCODING_EN, mask_sh), \ 507 - DSC_SF(DSCC0_DSCC_CONFIG0, NUMBER_OF_SLICES_IN_VERTICAL_DIRECTION, mask_sh), \ 508 - DSC_SF(DSCC0_DSCC_CONFIG1, DSCC_RATE_CONTROL_BUFFER_MODEL_SIZE, mask_sh), \ 509 - /*DSC_SF(DSCC0_DSCC_CONFIG1, DSCC_DISABLE_ICH, mask_sh),*/ \ 510 - DSC_SF(DSCC0_DSCC_STATUS, DSCC_DOUBLE_BUFFER_REG_UPDATE_PENDING, mask_sh), \ 511 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_OVERFLOW_OCCURRED, mask_sh), \ 512 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_OVERFLOW_OCCURRED, mask_sh), \ 513 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_OVERFLOW_OCCURRED, 
mask_sh), \ 514 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_OVERFLOW_OCCURRED, mask_sh), \ 515 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_UNDERFLOW_OCCURRED, mask_sh), \ 516 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_UNDERFLOW_OCCURRED, mask_sh), \ 517 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_UNDERFLOW_OCCURRED, mask_sh), \ 518 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_UNDERFLOW_OCCURRED, mask_sh), \ 519 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL0_OVERFLOW_OCCURRED, mask_sh), \ 520 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL1_OVERFLOW_OCCURRED, mask_sh), \ 521 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL2_OVERFLOW_OCCURRED, mask_sh), \ 522 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL3_OVERFLOW_OCCURRED, mask_sh), \ 523 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 524 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 525 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 526 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 527 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER0_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \ 528 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER1_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \ 529 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER2_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \ 530 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_BUFFER3_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \ 531 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL0_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 532 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL1_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 533 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL2_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 534 - DSC_SF(DSCC0_DSCC_INTERRUPT_CONTROL_STATUS, DSCC_RATE_CONTROL_BUFFER_MODEL3_OVERFLOW_OCCURRED_INT_EN, mask_sh), \ 535 - DSC_SF(DSCC0_DSCC_PPS_CONFIG0, DSC_VERSION_MINOR, mask_sh), \ 536 - DSC_SF(DSCC0_DSCC_PPS_CONFIG0, DSC_VERSION_MAJOR, mask_sh), \ 537 - DSC_SF(DSCC0_DSCC_PPS_CONFIG0, PPS_IDENTIFIER, mask_sh), \ 538 - DSC_SF(DSCC0_DSCC_PPS_CONFIG0, LINEBUF_DEPTH, mask_sh), \ 539 - DSC2_SF(DSCC0, DSCC_PPS_CONFIG0__BITS_PER_COMPONENT, mask_sh), \ 540 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, BITS_PER_PIXEL, mask_sh), \ 541 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, VBR_ENABLE, mask_sh), \ 542 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, SIMPLE_422, mask_sh), \ 543 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, CONVERT_RGB, mask_sh), \ 544 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, BLOCK_PRED_ENABLE, mask_sh), \ 545 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, NATIVE_422, mask_sh), \ 546 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, NATIVE_420, mask_sh), \ 547 - DSC_SF(DSCC0_DSCC_PPS_CONFIG1, CHUNK_SIZE, mask_sh), \ 548 - DSC_SF(DSCC0_DSCC_PPS_CONFIG2, PIC_WIDTH, mask_sh), \ 549 - DSC_SF(DSCC0_DSCC_PPS_CONFIG2, PIC_HEIGHT, mask_sh), \ 550 - DSC_SF(DSCC0_DSCC_PPS_CONFIG3, SLICE_WIDTH, mask_sh), \ 551 - DSC_SF(DSCC0_DSCC_PPS_CONFIG3, SLICE_HEIGHT, mask_sh), \ 552 - DSC_SF(DSCC0_DSCC_PPS_CONFIG4, INITIAL_XMIT_DELAY, mask_sh), \ 553 - DSC_SF(DSCC0_DSCC_PPS_CONFIG4, INITIAL_DEC_DELAY, mask_sh), \ 554 - DSC_SF(DSCC0_DSCC_PPS_CONFIG5, 
INITIAL_SCALE_VALUE, mask_sh), \ 555 - DSC_SF(DSCC0_DSCC_PPS_CONFIG5, SCALE_INCREMENT_INTERVAL, mask_sh), \ 556 - DSC_SF(DSCC0_DSCC_PPS_CONFIG6, SCALE_DECREMENT_INTERVAL, mask_sh), \ 557 - DSC_SF(DSCC0_DSCC_PPS_CONFIG6, FIRST_LINE_BPG_OFFSET, mask_sh), \ 558 - DSC_SF(DSCC0_DSCC_PPS_CONFIG6, SECOND_LINE_BPG_OFFSET, mask_sh), \ 559 - DSC_SF(DSCC0_DSCC_PPS_CONFIG7, NFL_BPG_OFFSET, mask_sh), \ 560 - DSC_SF(DSCC0_DSCC_PPS_CONFIG7, SLICE_BPG_OFFSET, mask_sh), \ 561 - DSC_SF(DSCC0_DSCC_PPS_CONFIG8, NSL_BPG_OFFSET, mask_sh), \ 562 - DSC_SF(DSCC0_DSCC_PPS_CONFIG8, SECOND_LINE_OFFSET_ADJ, mask_sh), \ 563 - DSC_SF(DSCC0_DSCC_PPS_CONFIG9, INITIAL_OFFSET, mask_sh), \ 564 - DSC_SF(DSCC0_DSCC_PPS_CONFIG9, FINAL_OFFSET, mask_sh), \ 565 - DSC_SF(DSCC0_DSCC_PPS_CONFIG10, FLATNESS_MIN_QP, mask_sh), \ 566 - DSC_SF(DSCC0_DSCC_PPS_CONFIG10, FLATNESS_MAX_QP, mask_sh), \ 567 - DSC_SF(DSCC0_DSCC_PPS_CONFIG10, RC_MODEL_SIZE, mask_sh), \ 568 - DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_EDGE_FACTOR, mask_sh), \ 569 - DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_QUANT_INCR_LIMIT0, mask_sh), \ 570 - DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_QUANT_INCR_LIMIT1, mask_sh), \ 571 - DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_TGT_OFFSET_LO, mask_sh), \ 572 - DSC_SF(DSCC0_DSCC_PPS_CONFIG11, RC_TGT_OFFSET_HI, mask_sh), \ 573 - DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH0, mask_sh), \ 574 - DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH1, mask_sh), \ 575 - DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH2, mask_sh), \ 576 - DSC_SF(DSCC0_DSCC_PPS_CONFIG12, RC_BUF_THRESH3, mask_sh), \ 577 - DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH4, mask_sh), \ 578 - DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH5, mask_sh), \ 579 - DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH6, mask_sh), \ 580 - DSC_SF(DSCC0_DSCC_PPS_CONFIG13, RC_BUF_THRESH7, mask_sh), \ 581 - DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH8, mask_sh), \ 582 - DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH9, mask_sh), \ 583 - DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH10, mask_sh), \ 584 - DSC_SF(DSCC0_DSCC_PPS_CONFIG14, RC_BUF_THRESH11, mask_sh), \ 585 - DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RC_BUF_THRESH12, mask_sh), \ 586 - DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RC_BUF_THRESH13, mask_sh), \ 587 - DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RANGE_MIN_QP0, mask_sh), \ 588 - DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RANGE_MAX_QP0, mask_sh), \ 589 - DSC_SF(DSCC0_DSCC_PPS_CONFIG15, RANGE_BPG_OFFSET0, mask_sh), \ 590 - DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MIN_QP1, mask_sh), \ 591 - DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MAX_QP1, mask_sh), \ 592 - DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_BPG_OFFSET1, mask_sh), \ 593 - DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MIN_QP2, mask_sh), \ 594 - DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_MAX_QP2, mask_sh), \ 595 - DSC_SF(DSCC0_DSCC_PPS_CONFIG16, RANGE_BPG_OFFSET2, mask_sh), \ 596 - DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MIN_QP3, mask_sh), \ 597 - DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MAX_QP3, mask_sh), \ 598 - DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_BPG_OFFSET3, mask_sh), \ 599 - DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MIN_QP4, mask_sh), \ 600 - DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_MAX_QP4, mask_sh), \ 601 - DSC_SF(DSCC0_DSCC_PPS_CONFIG17, RANGE_BPG_OFFSET4, mask_sh), \ 602 - DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MIN_QP5, mask_sh), \ 603 - DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MAX_QP5, mask_sh), \ 604 - DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_BPG_OFFSET5, mask_sh), \ 605 - DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MIN_QP6, mask_sh), \ 606 - DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_MAX_QP6, mask_sh), \ 607 - 
DSC_SF(DSCC0_DSCC_PPS_CONFIG18, RANGE_BPG_OFFSET6, mask_sh), \ 608 - DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MIN_QP7, mask_sh), \ 609 - DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MAX_QP7, mask_sh), \ 610 - DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_BPG_OFFSET7, mask_sh), \ 611 - DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MIN_QP8, mask_sh), \ 612 - DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_MAX_QP8, mask_sh), \ 613 - DSC_SF(DSCC0_DSCC_PPS_CONFIG19, RANGE_BPG_OFFSET8, mask_sh), \ 614 - DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MIN_QP9, mask_sh), \ 615 - DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MAX_QP9, mask_sh), \ 616 - DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_BPG_OFFSET9, mask_sh), \ 617 - DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MIN_QP10, mask_sh), \ 618 - DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_MAX_QP10, mask_sh), \ 619 - DSC_SF(DSCC0_DSCC_PPS_CONFIG20, RANGE_BPG_OFFSET10, mask_sh), \ 620 - DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MIN_QP11, mask_sh), \ 621 - DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MAX_QP11, mask_sh), \ 622 - DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_BPG_OFFSET11, mask_sh), \ 623 - DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MIN_QP12, mask_sh), \ 624 - DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_MAX_QP12, mask_sh), \ 625 - DSC_SF(DSCC0_DSCC_PPS_CONFIG21, RANGE_BPG_OFFSET12, mask_sh), \ 626 - DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MIN_QP13, mask_sh), \ 627 - DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MAX_QP13, mask_sh), \ 628 - DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_BPG_OFFSET13, mask_sh), \ 629 - DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MIN_QP14, mask_sh), \ 630 - DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_MAX_QP14, mask_sh), \ 631 - DSC_SF(DSCC0_DSCC_PPS_CONFIG22, RANGE_BPG_OFFSET14, mask_sh), \ 632 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_DEFAULT_MEM_LOW_POWER_STATE, mask_sh), \ 633 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_MEM_PWR_FORCE, mask_sh), \ 634 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_MEM_PWR_DIS, mask_sh), \ 635 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_MEM_PWR_STATE, mask_sh), \ 636 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_NATIVE_422_MEM_PWR_FORCE, mask_sh), \ 637 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_NATIVE_422_MEM_PWR_DIS, mask_sh), \ 638 - DSC_SF(DSCC0_DSCC_MEM_POWER_CONTROL, DSCC_NATIVE_422_MEM_PWR_STATE, mask_sh), \ 639 - DSC_SF(DSCC0_DSCC_R_Y_SQUARED_ERROR_LOWER, DSCC_R_Y_SQUARED_ERROR_LOWER, mask_sh), \ 640 - DSC_SF(DSCC0_DSCC_R_Y_SQUARED_ERROR_UPPER, DSCC_R_Y_SQUARED_ERROR_UPPER, mask_sh), \ 641 - DSC_SF(DSCC0_DSCC_G_CB_SQUARED_ERROR_LOWER, DSCC_G_CB_SQUARED_ERROR_LOWER, mask_sh), \ 642 - DSC_SF(DSCC0_DSCC_G_CB_SQUARED_ERROR_UPPER, DSCC_G_CB_SQUARED_ERROR_UPPER, mask_sh), \ 643 - DSC_SF(DSCC0_DSCC_B_CR_SQUARED_ERROR_LOWER, DSCC_B_CR_SQUARED_ERROR_LOWER, mask_sh), \ 644 - DSC_SF(DSCC0_DSCC_B_CR_SQUARED_ERROR_UPPER, DSCC_B_CR_SQUARED_ERROR_UPPER, mask_sh), \ 645 - DSC_SF(DSCC0_DSCC_MAX_ABS_ERROR0, DSCC_R_Y_MAX_ABS_ERROR, mask_sh), \ 646 - DSC_SF(DSCC0_DSCC_MAX_ABS_ERROR0, DSCC_G_CB_MAX_ABS_ERROR, mask_sh), \ 647 - DSC_SF(DSCC0_DSCC_MAX_ABS_ERROR1, DSCC_B_CR_MAX_ABS_ERROR, mask_sh), \ 648 - DSC_SF(DSCC0_DSCC_RATE_BUFFER0_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER0_MAX_FULLNESS_LEVEL, mask_sh), \ 649 - DSC_SF(DSCC0_DSCC_RATE_BUFFER1_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER1_MAX_FULLNESS_LEVEL, mask_sh), \ 650 - DSC_SF(DSCC0_DSCC_RATE_BUFFER2_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER2_MAX_FULLNESS_LEVEL, mask_sh), \ 651 - DSC_SF(DSCC0_DSCC_RATE_BUFFER3_MAX_FULLNESS_LEVEL, DSCC_RATE_BUFFER3_MAX_FULLNESS_LEVEL, mask_sh), \ 652 - DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER0_MAX_FULLNESS_LEVEL, 
DSCC_RATE_CONTROL_BUFFER0_MAX_FULLNESS_LEVEL, mask_sh), \ 653 - DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER1_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER1_MAX_FULLNESS_LEVEL, mask_sh), \ 654 - DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER2_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER2_MAX_FULLNESS_LEVEL, mask_sh), \ 655 - DSC_SF(DSCC0_DSCC_RATE_CONTROL_BUFFER3_MAX_FULLNESS_LEVEL, DSCC_RATE_CONTROL_BUFFER3_MAX_FULLNESS_LEVEL, mask_sh), \ 656 - DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_INTERFACE_UNDERFLOW_RECOVERY_EN, mask_sh), \ 657 - DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_INTERFACE_UNDERFLOW_OCCURRED_INT_EN, mask_sh), \ 658 - DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_INTERFACE_UNDERFLOW_OCCURRED_STATUS, mask_sh), \ 659 - DSC_SF(DSCCIF0_DSCCIF_CONFIG0, INPUT_PIXEL_FORMAT, mask_sh), \ 660 - DSC2_SF(DSCCIF0, DSCCIF_CONFIG0__BITS_PER_COMPONENT, mask_sh), \ 661 - DSC_SF(DSCCIF0_DSCCIF_CONFIG0, DOUBLE_BUFFER_REG_UPDATE_PENDING, mask_sh), \ 662 - DSC_SF(DSCCIF0_DSCCIF_CONFIG1, PIC_WIDTH, mask_sh), \ 663 - DSC_SF(DSCCIF0_DSCCIF_CONFIG1, PIC_HEIGHT, mask_sh), \ 664 - DSC_SF(DSCRM0_DSCRM_DSC_FORWARD_CONFIG, DSCRM_DSC_FORWARD_EN, mask_sh), \ 665 - DSC_SF(DSCRM0_DSCRM_DSC_FORWARD_CONFIG, DSCRM_DSC_OPP_PIPE_SOURCE, mask_sh) 666 - 667 - 668 448 struct dcn20_dsc_registers { 669 449 uint32_t DSC_TOP_CONTROL; 670 450 uint32_t DSC_DEBUG_CONTROL;
+1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 1565 1565 /* Any updates are handled in dc interface, just need 1566 1566 * to apply existing for plane enable / opp change */ 1567 1567 if (pipe_ctx->update_flags.bits.enable || pipe_ctx->update_flags.bits.opp_changed 1568 + || pipe_ctx->update_flags.bits.plane_changed 1568 1569 || pipe_ctx->stream->update_flags.bits.gamut_remap 1569 1570 || pipe_ctx->stream->update_flags.bits.out_csc) { 1570 1571 /* dpp/cm gamut remap*/
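The widened condition reprograms the dpp/cm gamut remap and output CSC when the plane changed, not only on enable or OPP change, presumably so a plane swap cannot leave stale color state behind. A sketch of the dirty-bits test (flag layout illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    struct update_flags {
        unsigned enable        : 1;
        unsigned opp_changed   : 1;
        unsigned plane_changed : 1;     /* newly consulted */
    };

    static bool needs_gamut_setup(struct update_flags f)
    {
        return f.enable || f.opp_changed || f.plane_changed;
    }

    int main(void)
    {
        struct update_flags f = { .plane_changed = 1 };

        printf("%d\n", needs_gamut_setup(f));   /* 1 */
        return 0;
    }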
+2 -14
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.c
··· 343 343 { 344 344 struct dc_stream_state *stream = pipe_ctx->stream; 345 345 unsigned int odm_combine_factor = 0; 346 - struct dc *dc = pipe_ctx->stream->ctx->dc; 347 346 bool two_pix_per_container = false; 348 347 349 348 two_pix_per_container = optc2_is_two_pixels_per_containter(&stream->timing); ··· 363 364 } else { 364 365 *k1_div = PIXEL_RATE_DIV_BY_1; 365 366 *k2_div = PIXEL_RATE_DIV_BY_4; 366 - if ((odm_combine_factor == 2) || dc->debug.enable_dp_dig_pixel_rate_div_policy) 367 + if (odm_combine_factor == 2) 367 368 *k2_div = PIXEL_RATE_DIV_BY_2; 368 369 } 369 370 } ··· 383 384 return; 384 385 385 386 odm_combine_factor = get_odm_config(pipe_ctx, NULL); 386 - if (optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing) || odm_combine_factor > 1 387 - || dcn314_is_dp_dig_pixel_rate_div_policy(pipe_ctx)) 387 + if (optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing) || odm_combine_factor > 1) 388 388 pix_per_cycle = 2; 389 389 390 390 if (pipe_ctx->stream_res.stream_enc->funcs->set_input_mode) 391 391 pipe_ctx->stream_res.stream_enc->funcs->set_input_mode(pipe_ctx->stream_res.stream_enc, 392 392 pix_per_cycle); 393 - } 394 - 395 - bool dcn314_is_dp_dig_pixel_rate_div_policy(struct pipe_ctx *pipe_ctx) 396 - { 397 - struct dc *dc = pipe_ctx->stream->ctx->dc; 398 - 399 - if (dc_is_dp_signal(pipe_ctx->stream->signal) && !is_dp_128b_132b_signal(pipe_ctx) && 400 - dc->debug.enable_dp_dig_pixel_rate_div_policy) 401 - return true; 402 - return false; 403 393 }
-2
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_hwseq.h
··· 41 41 42 42 void dcn314_set_pixels_per_cycle(struct pipe_ctx *pipe_ctx); 43 43 44 - bool dcn314_is_dp_dig_pixel_rate_div_policy(struct pipe_ctx *pipe_ctx); 45 - 46 44 #endif /* __DC_HWSS_DCN314_H__ */
-1
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_init.c
··· 146 146 .setup_hpo_hw_control = dcn31_setup_hpo_hw_control, 147 147 .calculate_dccg_k1_k2_values = dcn314_calculate_dccg_k1_k2_values, 148 148 .set_pixels_per_cycle = dcn314_set_pixels_per_cycle, 149 - .is_dp_dig_pixel_rate_div_policy = dcn314_is_dp_dig_pixel_rate_div_policy, 150 149 }; 151 150 152 151 void dcn314_hw_sequencer_construct(struct dc *dc)
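The three dcn314 hunks above remove the enable_dp_dig_pixel_rate_div_policy debug override end to end: the hook is dropped from the function table, its implementation is deleted, and the k2 divider and pixels-per-cycle choices fall back to the ODM combine factor alone. A sketch of the selection as it stands after the change (enum values illustrative):

    #include <stdio.h>

    enum { DIV_BY_1 = 1, DIV_BY_2 = 2, DIV_BY_4 = 4 };

    static int pick_k2(int odm_combine_factor)
    {
        int k2 = DIV_BY_4;

        if (odm_combine_factor == 2)    /* only ODM doubles the rate now */
            k2 = DIV_BY_2;
        return k2;
    }

    int main(void)
    {
        printf("odm=1 -> k2=%d\n", pick_k2(1));
        printf("odm=2 -> k2=%d\n", pick_k2(2));
        return 0;
    }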
+7 -4
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
··· 87 87 #define DCHUBBUB_DEBUG_CTRL_0__DET_DEPTH__SHIFT 0x10 88 88 #define DCHUBBUB_DEBUG_CTRL_0__DET_DEPTH_MASK 0x01FF0000L 89 89 90 + #define DSCC0_DSCC_CONFIG0__ICH_RESET_AT_END_OF_LINE__SHIFT 0x0 91 + #define DSCC0_DSCC_CONFIG0__ICH_RESET_AT_END_OF_LINE_MASK 0x0000000FL 92 + 90 93 #include "reg_helper.h" 91 94 #include "dce/dmub_abm.h" 92 95 #include "dce/dmub_psr.h" ··· 582 579 583 580 #define dsc_regsDCN314(id)\ 584 581 [id] = {\ 585 - DSC_REG_LIST_DCN314(id)\ 582 + DSC_REG_LIST_DCN20(id)\ 586 583 } 587 584 588 585 static const struct dcn20_dsc_registers dsc_regs[] = { ··· 593 590 }; 594 591 595 592 static const struct dcn20_dsc_shift dsc_shift = { 596 - DSC_REG_LIST_SH_MASK_DCN314(__SHIFT) 593 + DSC_REG_LIST_SH_MASK_DCN20(__SHIFT) 597 594 }; 598 595 599 596 static const struct dcn20_dsc_mask dsc_mask = { 600 - DSC_REG_LIST_SH_MASK_DCN314(_MASK) 597 + DSC_REG_LIST_SH_MASK_DCN20(_MASK) 601 598 }; 602 599 603 600 static const struct dcn30_mpc_registers mpc_regs = { ··· 847 844 .num_ddc = 5, 848 845 .num_vmid = 16, 849 846 .num_mpc_3dlut = 2, 850 - .num_dsc = 4, 847 + .num_dsc = 3, 851 848 }; 852 849 853 850 static const struct dc_plane_cap plane_cap = {
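This pairs with the large removal in dcn20_dsc.h above: DCN3.14 goes back to the generic DSC_REG_LIST_DCN20 register and shift/mask lists, locally defining the ICH_RESET_AT_END_OF_LINE shift/mask that the DCN2.0 set references (presumably absent from the DCN3.14 headers), and num_dsc drops from 4 to the 3 engines the part has. A sketch of what one such shift/mask pair encodes (using the values added here):

    #include <stdint.h>
    #include <stdio.h>

    #define ICH_RESET_AT_END_OF_LINE__SHIFT 0x0
    #define ICH_RESET_AT_END_OF_LINE_MASK   0x0000000Fu

    /* read-modify-write of one packed register field */
    static uint32_t set_field(uint32_t reg, uint32_t val)
    {
        return (reg & ~ICH_RESET_AT_END_OF_LINE_MASK) |
               ((val << ICH_RESET_AT_END_OF_LINE__SHIFT) &
                ICH_RESET_AT_END_OF_LINE_MASK);
    }

    int main(void)
    {
        printf("0x%08x\n", (unsigned)set_field(0, 0xb));  /* 0x0000000b */
        return 0;
    }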
+6 -1
drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
··· 291 291 .do_urgent_latency_adjustment = false, 292 292 .urgent_latency_adjustment_fabric_clock_component_us = 0, 293 293 .urgent_latency_adjustment_fabric_clock_reference_mhz = 0, 294 + .num_chans = 4, 294 295 }; 295 296 296 297 struct _vcs_dpi_ip_params_st dcn3_16_ip = { ··· 681 680 682 681 dcn3_15_ip.max_num_otg = dc->res_pool->res_cap->num_timing_generator; 683 682 dcn3_15_ip.max_num_dpp = dc->res_pool->pipe_count; 684 - dcn3_15_soc.num_chans = bw_params->num_channels; 683 + 684 + if (bw_params->num_channels > 0) 685 + dcn3_15_soc.num_chans = bw_params->num_channels; 686 + if (bw_params->dram_channel_width_bytes > 0) 687 + dcn3_15_soc.dram_channel_width_bytes = bw_params->dram_channel_width_bytes; 685 688 686 689 ASSERT(clk_table->num_entries); 687 690
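Two guards land in the DCN3.1 FPU tables: the DCN3.15 SoC defaults gain num_chans = 4, and the bw_params overrides for channel count and DRAM channel width now apply only when the reported value is non-zero, so zeroed BIOS data cannot wipe the defaults. A sketch of the guarded override:

    #include <stdio.h>

    struct soc_params { int num_chans; int chan_width_bytes; };

    static void apply_bw_params(struct soc_params *s, int chans, int width)
    {
        if (chans > 0)                  /* keep the default on 0 */
            s->num_chans = chans;
        if (width > 0)
            s->chan_width_bytes = width;
    }

    int main(void)
    {
        struct soc_params s = { .num_chans = 4, .chan_width_bytes = 4 };

        apply_bw_params(&s, 0, 8);      /* bogus chans ignored, width taken */
        printf("%d %d\n", s.num_chans, s.chan_width_bytes);  /* 4 8 */
        return 0;
    }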
+99 -321
drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
··· 265 265 266 266 static void CalculateFlipSchedule( 267 267 struct display_mode_lib *mode_lib, 268 + unsigned int k, 268 269 double HostVMInefficiencyFactor, 269 270 double UrgentExtraLatency, 270 271 double UrgentLatency, 271 - unsigned int GPUVMMaxPageTableLevels, 272 - bool HostVMEnable, 273 - unsigned int HostVMMaxNonCachedPageTableLevels, 274 - bool GPUVMEnable, 275 - double HostVMMinPageSize, 276 272 double PDEAndMetaPTEBytesPerFrame, 277 273 double MetaRowBytes, 278 - double DPTEBytesPerRow, 279 - double BandwidthAvailableForImmediateFlip, 280 - unsigned int TotImmediateFlipBytes, 281 - enum source_format_class SourcePixelFormat, 282 - double LineTime, 283 - double VRatio, 284 - double VRatioChroma, 285 - double Tno_bw, 286 - bool DCCEnable, 287 - unsigned int dpte_row_height, 288 - unsigned int meta_row_height, 289 - unsigned int dpte_row_height_chroma, 290 - unsigned int meta_row_height_chroma, 291 - double *DestinationLinesToRequestVMInImmediateFlip, 292 - double *DestinationLinesToRequestRowInImmediateFlip, 293 - double *final_flip_bw, 294 - bool *ImmediateFlipSupportedForPipe); 274 + double DPTEBytesPerRow); 295 275 static double CalculateWriteBackDelay( 296 276 enum source_format_class WritebackPixelFormat, 297 277 double WritebackHRatio, ··· 305 325 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 306 326 struct display_mode_lib *mode_lib, 307 327 unsigned int PrefetchMode, 308 - unsigned int NumberOfActivePlanes, 309 - unsigned int MaxLineBufferLines, 310 - unsigned int LineBufferSize, 311 - unsigned int WritebackInterfaceBufferSize, 312 328 double DCFCLK, 313 329 double ReturnBW, 314 - bool SynchronizedVBlank, 315 - unsigned int dpte_group_bytes[], 316 - unsigned int MetaChunkSize, 317 330 double UrgentLatency, 318 331 double ExtraLatency, 319 - double WritebackLatency, 320 - double WritebackChunkSize, 321 332 double SOCCLK, 322 - double DRAMClockChangeLatency, 323 - double SRExitTime, 324 - double SREnterPlusExitTime, 325 - double SRExitZ8Time, 326 - double SREnterPlusExitZ8Time, 327 333 double DCFCLKDeepSleep, 328 334 unsigned int DETBufferSizeY[], 329 335 unsigned int DETBufferSizeC[], 330 336 unsigned int SwathHeightY[], 331 337 unsigned int SwathHeightC[], 332 - unsigned int LBBitPerPixel[], 333 338 double SwathWidthY[], 334 339 double SwathWidthC[], 335 - double HRatio[], 336 - double HRatioChroma[], 337 - unsigned int vtaps[], 338 - unsigned int VTAPsChroma[], 339 - double VRatio[], 340 - double VRatioChroma[], 341 - unsigned int HTotal[], 342 - double PixelClock[], 343 - unsigned int BlendingAndTiming[], 344 340 unsigned int DPPPerPlane[], 345 341 double BytePerPixelDETY[], 346 342 double BytePerPixelDETC[], 347 - double DSTXAfterScaler[], 348 - double DSTYAfterScaler[], 349 - bool WritebackEnable[], 350 - enum source_format_class WritebackPixelFormat[], 351 - double WritebackDestinationWidth[], 352 - double WritebackDestinationHeight[], 353 - double WritebackSourceHeight[], 354 343 bool UnboundedRequestEnabled, 355 344 unsigned int CompressedBufferSizeInkByte, 356 345 enum clock_change_support *DRAMClockChangeSupport, 357 - double *UrgentWatermark, 358 - double *WritebackUrgentWatermark, 359 - double *DRAMClockChangeWatermark, 360 - double *WritebackDRAMClockChangeWatermark, 361 346 double *StutterExitWatermark, 362 347 double *StutterEnterPlusExitWatermark, 363 348 double *Z8StutterExitWatermark, 364 - double *Z8StutterEnterPlusExitWatermark, 365 - double *MinActiveDRAMClockChangeLatencySupported); 349 + double *Z8StutterEnterPlusExitWatermark); 366 
350 367 351 static void CalculateDCFCLKDeepSleep( 368 352 struct display_mode_lib *mode_lib, ··· 2872 2928 for (k = 0; k < v->NumberOfActivePlanes; ++k) { 2873 2929 CalculateFlipSchedule( 2874 2930 mode_lib, 2931 + k, 2875 2932 HostVMInefficiencyFactor, 2876 2933 v->UrgentExtraLatency, 2877 2934 v->UrgentLatency, 2878 - v->GPUVMMaxPageTableLevels, 2879 - v->HostVMEnable, 2880 - v->HostVMMaxNonCachedPageTableLevels, 2881 - v->GPUVMEnable, 2882 - v->HostVMMinPageSize, 2883 2935 v->PDEAndMetaPTEBytesFrame[k], 2884 2936 v->MetaRowByte[k], 2885 - v->PixelPTEBytesPerRow[k], 2886 - v->BandwidthAvailableForImmediateFlip, 2887 - v->TotImmediateFlipBytes, 2888 - v->SourcePixelFormat[k], 2889 - v->HTotal[k] / v->PixelClock[k], 2890 - v->VRatio[k], 2891 - v->VRatioChroma[k], 2892 - v->Tno_bw[k], 2893 - v->DCCEnable[k], 2894 - v->dpte_row_height[k], 2895 - v->meta_row_height[k], 2896 - v->dpte_row_height_chroma[k], 2897 - v->meta_row_height_chroma[k], 2898 - &v->DestinationLinesToRequestVMInImmediateFlip[k], 2899 - &v->DestinationLinesToRequestRowInImmediateFlip[k], 2900 - &v->final_flip_bw[k], 2901 - &v->ImmediateFlipSupportedForPipe[k]); 2937 + v->PixelPTEBytesPerRow[k]); 2902 2938 } 2903 2939 2904 2940 v->total_dcn_read_bw_with_flip = 0.0; ··· 2965 3041 CalculateWatermarksAndDRAMSpeedChangeSupport( 2966 3042 mode_lib, 2967 3043 PrefetchMode, 2968 - v->NumberOfActivePlanes, 2969 - v->MaxLineBufferLines, 2970 - v->LineBufferSize, 2971 - v->WritebackInterfaceBufferSize, 2972 3044 v->DCFCLK, 2973 3045 v->ReturnBW, 2974 - v->SynchronizedVBlank, 2975 - v->dpte_group_bytes, 2976 - v->MetaChunkSize, 2977 3046 v->UrgentLatency, 2978 3047 v->UrgentExtraLatency, 2979 - v->WritebackLatency, 2980 - v->WritebackChunkSize, 2981 3048 v->SOCCLK, 2982 - v->DRAMClockChangeLatency, 2983 - v->SRExitTime, 2984 - v->SREnterPlusExitTime, 2985 - v->SRExitZ8Time, 2986 - v->SREnterPlusExitZ8Time, 2987 3049 v->DCFCLKDeepSleep, 2988 3050 v->DETBufferSizeY, 2989 3051 v->DETBufferSizeC, 2990 3052 v->SwathHeightY, 2991 3053 v->SwathHeightC, 2992 - v->LBBitPerPixel, 2993 3054 v->SwathWidthY, 2994 3055 v->SwathWidthC, 2995 - v->HRatio, 2996 - v->HRatioChroma, 2997 - v->vtaps, 2998 - v->VTAPsChroma, 2999 - v->VRatio, 3000 - v->VRatioChroma, 3001 - v->HTotal, 3002 - v->PixelClock, 3003 - v->BlendingAndTiming, 3004 3056 v->DPPPerPlane, 3005 3057 v->BytePerPixelDETY, 3006 3058 v->BytePerPixelDETC, 3007 - v->DSTXAfterScaler, 3008 - v->DSTYAfterScaler, 3009 - v->WritebackEnable, 3010 - v->WritebackPixelFormat, 3011 - v->WritebackDestinationWidth, 3012 - v->WritebackDestinationHeight, 3013 - v->WritebackSourceHeight, 3014 3059 v->UnboundedRequestEnabled, 3015 3060 v->CompressedBufferSizeInkByte, 3016 3061 &DRAMClockChangeSupport, 3017 - &v->UrgentWatermark, 3018 - &v->WritebackUrgentWatermark, 3019 - &v->DRAMClockChangeWatermark, 3020 - &v->WritebackDRAMClockChangeWatermark, 3021 3062 &v->StutterExitWatermark, 3022 3063 &v->StutterEnterPlusExitWatermark, 3023 3064 &v->Z8StutterExitWatermark, 3024 - &v->Z8StutterEnterPlusExitWatermark, 3025 - &v->MinActiveDRAMClockChangeLatencySupported); 3065 + &v->Z8StutterEnterPlusExitWatermark); 3026 3066 3027 3067 for (k = 0; k < v->NumberOfActivePlanes; ++k) { 3028 3068 if (v->WritebackEnable[k] == true) { ··· 3598 3710 3599 3711 static void CalculateFlipSchedule( 3600 3712 struct display_mode_lib *mode_lib, 3713 + unsigned int k, 3601 3714 double HostVMInefficiencyFactor, 3602 3715 double UrgentExtraLatency, 3603 3716 double UrgentLatency, 3604 - unsigned int GPUVMMaxPageTableLevels, 3605 - bool 
HostVMEnable, 3606 - unsigned int HostVMMaxNonCachedPageTableLevels, 3607 - bool GPUVMEnable, 3608 - double HostVMMinPageSize, 3609 3717 double PDEAndMetaPTEBytesPerFrame, 3610 3718 double MetaRowBytes, 3611 - double DPTEBytesPerRow, 3612 - double BandwidthAvailableForImmediateFlip, 3613 - unsigned int TotImmediateFlipBytes, 3614 - enum source_format_class SourcePixelFormat, 3615 - double LineTime, 3616 - double VRatio, 3617 - double VRatioChroma, 3618 - double Tno_bw, 3619 - bool DCCEnable, 3620 - unsigned int dpte_row_height, 3621 - unsigned int meta_row_height, 3622 - unsigned int dpte_row_height_chroma, 3623 - unsigned int meta_row_height_chroma, 3624 - double *DestinationLinesToRequestVMInImmediateFlip, 3625 - double *DestinationLinesToRequestRowInImmediateFlip, 3626 - double *final_flip_bw, 3627 - bool *ImmediateFlipSupportedForPipe) 3719 + double DPTEBytesPerRow) 3628 3720 { 3721 + struct vba_vars_st *v = &mode_lib->vba; 3629 3722 double min_row_time = 0.0; 3630 3723 unsigned int HostVMDynamicLevelsTrips; 3631 3724 double TimeForFetchingMetaPTEImmediateFlip; 3632 3725 double TimeForFetchingRowInVBlankImmediateFlip; 3633 3726 double ImmediateFlipBW; 3727 + double LineTime = v->HTotal[k] / v->PixelClock[k]; 3634 3728 3635 - if (GPUVMEnable == true && HostVMEnable == true) { 3636 - HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels; 3729 + if (v->GPUVMEnable == true && v->HostVMEnable == true) { 3730 + HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels; 3637 3731 } else { 3638 3732 HostVMDynamicLevelsTrips = 0; 3639 3733 } 3640 3734 3641 - if (GPUVMEnable == true || DCCEnable == true) { 3642 - ImmediateFlipBW = (PDEAndMetaPTEBytesPerFrame + MetaRowBytes + DPTEBytesPerRow) * BandwidthAvailableForImmediateFlip / TotImmediateFlipBytes; 3735 + if (v->GPUVMEnable == true || v->DCCEnable[k] == true) { 3736 + ImmediateFlipBW = (PDEAndMetaPTEBytesPerFrame + MetaRowBytes + DPTEBytesPerRow) * v->BandwidthAvailableForImmediateFlip / v->TotImmediateFlipBytes; 3643 3737 } 3644 3738 3645 - if (GPUVMEnable == true) { 3739 + if (v->GPUVMEnable == true) { 3646 3740 TimeForFetchingMetaPTEImmediateFlip = dml_max3( 3647 - Tno_bw + PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / ImmediateFlipBW, 3648 - UrgentExtraLatency + UrgentLatency * (GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1), 3741 + v->Tno_bw[k] + PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / ImmediateFlipBW, 3742 + UrgentExtraLatency + UrgentLatency * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1), 3649 3743 LineTime / 4.0); 3650 3744 } else { 3651 3745 TimeForFetchingMetaPTEImmediateFlip = 0; 3652 3746 } 3653 3747 3654 - *DestinationLinesToRequestVMInImmediateFlip = dml_ceil(4.0 * (TimeForFetchingMetaPTEImmediateFlip / LineTime), 1) / 4.0; 3655 - if ((GPUVMEnable == true || DCCEnable == true)) { 3748 + v->DestinationLinesToRequestVMInImmediateFlip[k] = dml_ceil(4.0 * (TimeForFetchingMetaPTEImmediateFlip / LineTime), 1) / 4.0; 3749 + if ((v->GPUVMEnable == true || v->DCCEnable[k] == true)) { 3656 3750 TimeForFetchingRowInVBlankImmediateFlip = dml_max3( 3657 3751 (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / ImmediateFlipBW, 3658 3752 UrgentLatency * (HostVMDynamicLevelsTrips + 1), ··· 3643 3773 TimeForFetchingRowInVBlankImmediateFlip = 0; 3644 3774 } 3645 3775 3646 - *DestinationLinesToRequestRowInImmediateFlip = dml_ceil(4.0 * (TimeForFetchingRowInVBlankImmediateFlip / LineTime), 1) / 4.0; 3776 + v->DestinationLinesToRequestRowInImmediateFlip[k] = 
dml_ceil(4.0 * (TimeForFetchingRowInVBlankImmediateFlip / LineTime), 1) / 4.0; 3647 3777 3648 - if (GPUVMEnable == true) { 3649 - *final_flip_bw = dml_max( 3650 - PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / (*DestinationLinesToRequestVMInImmediateFlip * LineTime), 3651 - (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (*DestinationLinesToRequestRowInImmediateFlip * LineTime)); 3652 - } else if ((GPUVMEnable == true || DCCEnable == true)) { 3653 - *final_flip_bw = (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (*DestinationLinesToRequestRowInImmediateFlip * LineTime); 3778 + if (v->GPUVMEnable == true) { 3779 + v->final_flip_bw[k] = dml_max( 3780 + PDEAndMetaPTEBytesPerFrame * HostVMInefficiencyFactor / (v->DestinationLinesToRequestVMInImmediateFlip[k] * LineTime), 3781 + (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (v->DestinationLinesToRequestRowInImmediateFlip[k] * LineTime)); 3782 + } else if ((v->GPUVMEnable == true || v->DCCEnable[k] == true)) { 3783 + v->final_flip_bw[k] = (MetaRowBytes + DPTEBytesPerRow * HostVMInefficiencyFactor) / (v->DestinationLinesToRequestRowInImmediateFlip[k] * LineTime); 3654 3784 } else { 3655 - *final_flip_bw = 0; 3785 + v->final_flip_bw[k] = 0; 3656 3786 } 3657 3787 3658 - if (SourcePixelFormat == dm_420_8 || SourcePixelFormat == dm_420_10 || SourcePixelFormat == dm_rgbe_alpha) { 3659 - if (GPUVMEnable == true && DCCEnable != true) { 3660 - min_row_time = dml_min(dpte_row_height * LineTime / VRatio, dpte_row_height_chroma * LineTime / VRatioChroma); 3661 - } else if (GPUVMEnable != true && DCCEnable == true) { 3662 - min_row_time = dml_min(meta_row_height * LineTime / VRatio, meta_row_height_chroma * LineTime / VRatioChroma); 3788 + if (v->SourcePixelFormat[k] == dm_420_8 || v->SourcePixelFormat[k] == dm_420_10 || v->SourcePixelFormat[k] == dm_rgbe_alpha) { 3789 + if (v->GPUVMEnable == true && v->DCCEnable[k] != true) { 3790 + min_row_time = dml_min(v->dpte_row_height[k] * LineTime / v->VRatio[k], v->dpte_row_height_chroma[k] * LineTime / v->VRatioChroma[k]); 3791 + } else if (v->GPUVMEnable != true && v->DCCEnable[k] == true) { 3792 + min_row_time = dml_min(v->meta_row_height[k] * LineTime / v->VRatio[k], v->meta_row_height_chroma[k] * LineTime / v->VRatioChroma[k]); 3663 3793 } else { 3664 3794 min_row_time = dml_min4( 3665 - dpte_row_height * LineTime / VRatio, 3666 - meta_row_height * LineTime / VRatio, 3667 - dpte_row_height_chroma * LineTime / VRatioChroma, 3668 - meta_row_height_chroma * LineTime / VRatioChroma); 3795 + v->dpte_row_height[k] * LineTime / v->VRatio[k], 3796 + v->meta_row_height[k] * LineTime / v->VRatio[k], 3797 + v->dpte_row_height_chroma[k] * LineTime / v->VRatioChroma[k], 3798 + v->meta_row_height_chroma[k] * LineTime / v->VRatioChroma[k]); 3669 3799 } 3670 3800 } else { 3671 - if (GPUVMEnable == true && DCCEnable != true) { 3672 - min_row_time = dpte_row_height * LineTime / VRatio; 3673 - } else if (GPUVMEnable != true && DCCEnable == true) { 3674 - min_row_time = meta_row_height * LineTime / VRatio; 3801 + if (v->GPUVMEnable == true && v->DCCEnable[k] != true) { 3802 + min_row_time = v->dpte_row_height[k] * LineTime / v->VRatio[k]; 3803 + } else if (v->GPUVMEnable != true && v->DCCEnable[k] == true) { 3804 + min_row_time = v->meta_row_height[k] * LineTime / v->VRatio[k]; 3675 3805 } else { 3676 - min_row_time = dml_min(dpte_row_height * LineTime / VRatio, meta_row_height * LineTime / VRatio); 3806 + min_row_time = dml_min(v->dpte_row_height[k] * LineTime / v->VRatio[k], 
v->meta_row_height[k] * LineTime / v->VRatio[k]); 3677 3807 } 3678 3808 } 3679 3809 3680 - if (*DestinationLinesToRequestVMInImmediateFlip >= 32 || *DestinationLinesToRequestRowInImmediateFlip >= 16 3810 + if (v->DestinationLinesToRequestVMInImmediateFlip[k] >= 32 || v->DestinationLinesToRequestRowInImmediateFlip[k] >= 16 3681 3811 || TimeForFetchingMetaPTEImmediateFlip + 2 * TimeForFetchingRowInVBlankImmediateFlip > min_row_time) { 3682 - *ImmediateFlipSupportedForPipe = false; 3812 + v->ImmediateFlipSupportedForPipe[k] = false; 3683 3813 } else { 3684 - *ImmediateFlipSupportedForPipe = true; 3814 + v->ImmediateFlipSupportedForPipe[k] = true; 3685 3815 } 3686 3816 3687 3817 #ifdef __DML_VBA_DEBUG__ 3688 - dml_print("DML::%s: DestinationLinesToRequestVMInImmediateFlip = %f\n", __func__, *DestinationLinesToRequestVMInImmediateFlip); 3689 - dml_print("DML::%s: DestinationLinesToRequestRowInImmediateFlip = %f\n", __func__, *DestinationLinesToRequestRowInImmediateFlip); 3818 + dml_print("DML::%s: DestinationLinesToRequestVMInImmediateFlip = %f\n", __func__, v->DestinationLinesToRequestVMInImmediateFlip[k]); 3819 + dml_print("DML::%s: DestinationLinesToRequestRowInImmediateFlip = %f\n", __func__, v->DestinationLinesToRequestRowInImmediateFlip[k]); 3690 3820 dml_print("DML::%s: TimeForFetchingMetaPTEImmediateFlip = %f\n", __func__, TimeForFetchingMetaPTEImmediateFlip); 3691 3821 dml_print("DML::%s: TimeForFetchingRowInVBlankImmediateFlip = %f\n", __func__, TimeForFetchingRowInVBlankImmediateFlip); 3692 3822 dml_print("DML::%s: min_row_time = %f\n", __func__, min_row_time); 3693 - dml_print("DML::%s: ImmediateFlipSupportedForPipe = %d\n", __func__, *ImmediateFlipSupportedForPipe); 3823 + dml_print("DML::%s: ImmediateFlipSupportedForPipe = %d\n", __func__, v->ImmediateFlipSupportedForPipe[k]); 3694 3824 #endif 3695 3825 3696 3826 } ··· 5282 5412 for (k = 0; k < v->NumberOfActivePlanes; k++) { 5283 5413 CalculateFlipSchedule( 5284 5414 mode_lib, 5415 + k, 5285 5416 HostVMInefficiencyFactor, 5286 5417 v->ExtraLatency, 5287 5418 v->UrgLatency[i], 5288 - v->GPUVMMaxPageTableLevels, 5289 - v->HostVMEnable, 5290 - v->HostVMMaxNonCachedPageTableLevels, 5291 - v->GPUVMEnable, 5292 - v->HostVMMinPageSize, 5293 5419 v->PDEAndMetaPTEBytesPerFrame[i][j][k], 5294 5420 v->MetaRowBytes[i][j][k], 5295 - v->DPTEBytesPerRow[i][j][k], 5296 - v->BandwidthAvailableForImmediateFlip, 5297 - v->TotImmediateFlipBytes, 5298 - v->SourcePixelFormat[k], 5299 - v->HTotal[k] / v->PixelClock[k], 5300 - v->VRatio[k], 5301 - v->VRatioChroma[k], 5302 - v->Tno_bw[k], 5303 - v->DCCEnable[k], 5304 - v->dpte_row_height[k], 5305 - v->meta_row_height[k], 5306 - v->dpte_row_height_chroma[k], 5307 - v->meta_row_height_chroma[k], 5308 - &v->DestinationLinesToRequestVMInImmediateFlip[k], 5309 - &v->DestinationLinesToRequestRowInImmediateFlip[k], 5310 - &v->final_flip_bw[k], 5311 - &v->ImmediateFlipSupportedForPipe[k]); 5421 + v->DPTEBytesPerRow[i][j][k]); 5312 5422 } 5313 5423 v->total_dcn_read_bw_with_flip = 0.0; 5314 5424 for (k = 0; k < v->NumberOfActivePlanes; k++) { ··· 5346 5496 CalculateWatermarksAndDRAMSpeedChangeSupport( 5347 5497 mode_lib, 5348 5498 v->PrefetchModePerState[i][j], 5349 - v->NumberOfActivePlanes, 5350 - v->MaxLineBufferLines, 5351 - v->LineBufferSize, 5352 - v->WritebackInterfaceBufferSize, 5353 5499 v->DCFCLKState[i][j], 5354 5500 v->ReturnBWPerState[i][j], 5355 - v->SynchronizedVBlank, 5356 - v->dpte_group_bytes, 5357 - v->MetaChunkSize, 5358 5501 v->UrgLatency[i], 5359 5502 v->ExtraLatency, 5360 - 
v->WritebackLatency, 5361 - v->WritebackChunkSize, 5362 5503 v->SOCCLKPerState[i], 5363 - v->DRAMClockChangeLatency, 5364 - v->SRExitTime, 5365 - v->SREnterPlusExitTime, 5366 - v->SRExitZ8Time, 5367 - v->SREnterPlusExitZ8Time, 5368 5504 v->ProjectedDCFCLKDeepSleep[i][j], 5369 5505 v->DETBufferSizeYThisState, 5370 5506 v->DETBufferSizeCThisState, 5371 5507 v->SwathHeightYThisState, 5372 5508 v->SwathHeightCThisState, 5373 - v->LBBitPerPixel, 5374 5509 v->SwathWidthYThisState, 5375 5510 v->SwathWidthCThisState, 5376 - v->HRatio, 5377 - v->HRatioChroma, 5378 - v->vtaps, 5379 - v->VTAPsChroma, 5380 - v->VRatio, 5381 - v->VRatioChroma, 5382 - v->HTotal, 5383 - v->PixelClock, 5384 - v->BlendingAndTiming, 5385 5511 v->NoOfDPPThisState, 5386 5512 v->BytePerPixelInDETY, 5387 5513 v->BytePerPixelInDETC, 5388 - v->DSTXAfterScaler, 5389 - v->DSTYAfterScaler, 5390 - v->WritebackEnable, 5391 - v->WritebackPixelFormat, 5392 - v->WritebackDestinationWidth, 5393 - v->WritebackDestinationHeight, 5394 - v->WritebackSourceHeight, 5395 5514 UnboundedRequestEnabledThisState, 5396 5515 CompressedBufferSizeInkByteThisState, 5397 5516 &v->DRAMClockChangeSupport[i][j], 5398 - &v->UrgentWatermark, 5399 - &v->WritebackUrgentWatermark, 5400 - &v->DRAMClockChangeWatermark, 5401 - &v->WritebackDRAMClockChangeWatermark, 5402 5517 &dummy, 5403 5518 &dummy, 5404 5519 &dummy, 5405 - &dummy, 5406 - &v->MinActiveDRAMClockChangeLatencySupported); 5520 + &dummy); 5407 5521 } 5408 5522 } 5409 5523 ··· 5493 5679 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 5494 5680 struct display_mode_lib *mode_lib, 5495 5681 unsigned int PrefetchMode, 5496 - unsigned int NumberOfActivePlanes, 5497 - unsigned int MaxLineBufferLines, 5498 - unsigned int LineBufferSize, 5499 - unsigned int WritebackInterfaceBufferSize, 5500 5682 double DCFCLK, 5501 5683 double ReturnBW, 5502 - bool SynchronizedVBlank, 5503 - unsigned int dpte_group_bytes[], 5504 - unsigned int MetaChunkSize, 5505 5684 double UrgentLatency, 5506 5685 double ExtraLatency, 5507 - double WritebackLatency, 5508 - double WritebackChunkSize, 5509 5686 double SOCCLK, 5510 - double DRAMClockChangeLatency, 5511 - double SRExitTime, 5512 - double SREnterPlusExitTime, 5513 - double SRExitZ8Time, 5514 - double SREnterPlusExitZ8Time, 5515 5687 double DCFCLKDeepSleep, 5516 5688 unsigned int DETBufferSizeY[], 5517 5689 unsigned int DETBufferSizeC[], 5518 5690 unsigned int SwathHeightY[], 5519 5691 unsigned int SwathHeightC[], 5520 - unsigned int LBBitPerPixel[], 5521 5692 double SwathWidthY[], 5522 5693 double SwathWidthC[], 5523 - double HRatio[], 5524 - double HRatioChroma[], 5525 - unsigned int vtaps[], 5526 - unsigned int VTAPsChroma[], 5527 - double VRatio[], 5528 - double VRatioChroma[], 5529 - unsigned int HTotal[], 5530 - double PixelClock[], 5531 - unsigned int BlendingAndTiming[], 5532 5694 unsigned int DPPPerPlane[], 5533 5695 double BytePerPixelDETY[], 5534 5696 double BytePerPixelDETC[], 5535 - double DSTXAfterScaler[], 5536 - double DSTYAfterScaler[], 5537 - bool WritebackEnable[], 5538 - enum source_format_class WritebackPixelFormat[], 5539 - double WritebackDestinationWidth[], 5540 - double WritebackDestinationHeight[], 5541 - double WritebackSourceHeight[], 5542 5697 bool UnboundedRequestEnabled, 5543 5698 unsigned int CompressedBufferSizeInkByte, 5544 5699 enum clock_change_support *DRAMClockChangeSupport, 5545 - double *UrgentWatermark, 5546 - double *WritebackUrgentWatermark, 5547 - double *DRAMClockChangeWatermark, 5548 - double 
*WritebackDRAMClockChangeWatermark, 5549 5700 double *StutterExitWatermark, 5550 5701 double *StutterEnterPlusExitWatermark, 5551 5702 double *Z8StutterExitWatermark, 5552 - double *Z8StutterEnterPlusExitWatermark, 5553 - double *MinActiveDRAMClockChangeLatencySupported) 5703 + double *Z8StutterEnterPlusExitWatermark) 5554 5704 { 5555 5705 struct vba_vars_st *v = &mode_lib->vba; 5556 5706 double EffectiveLBLatencyHidingY; ··· 5534 5756 double TotalPixelBW = 0.0; 5535 5757 int k, j; 5536 5758 5537 - *UrgentWatermark = UrgentLatency + ExtraLatency; 5759 + v->UrgentWatermark = UrgentLatency + ExtraLatency; 5538 5760 5539 5761 #ifdef __DML_VBA_DEBUG__ 5540 5762 dml_print("DML::%s: UrgentLatency = %f\n", __func__, UrgentLatency); 5541 5763 dml_print("DML::%s: ExtraLatency = %f\n", __func__, ExtraLatency); 5542 - dml_print("DML::%s: UrgentWatermark = %f\n", __func__, *UrgentWatermark); 5764 + dml_print("DML::%s: UrgentWatermark = %f\n", __func__, v->UrgentWatermark); 5543 5765 #endif 5544 5766 5545 - *DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark; 5767 + v->DRAMClockChangeWatermark = v->DRAMClockChangeLatency + v->UrgentWatermark; 5546 5768 5547 5769 #ifdef __DML_VBA_DEBUG__ 5548 - dml_print("DML::%s: DRAMClockChangeLatency = %f\n", __func__, DRAMClockChangeLatency); 5549 - dml_print("DML::%s: DRAMClockChangeWatermark = %f\n", __func__, *DRAMClockChangeWatermark); 5770 + dml_print("DML::%s: v->DRAMClockChangeLatency = %f\n", __func__, v->DRAMClockChangeLatency); 5771 + dml_print("DML::%s: DRAMClockChangeWatermark = %f\n", __func__, v->DRAMClockChangeWatermark); 5550 5772 #endif 5551 5773 5552 5774 v->TotalActiveWriteback = 0; 5553 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5554 - if (WritebackEnable[k] == true) { 5775 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5776 + if (v->WritebackEnable[k] == true) { 5555 5777 v->TotalActiveWriteback = v->TotalActiveWriteback + 1; 5556 5778 } 5557 5779 } 5558 5780 5559 5781 if (v->TotalActiveWriteback <= 1) { 5560 - *WritebackUrgentWatermark = WritebackLatency; 5782 + v->WritebackUrgentWatermark = v->WritebackLatency; 5561 5783 } else { 5562 - *WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5784 + v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5563 5785 } 5564 5786 5565 5787 if (v->TotalActiveWriteback <= 1) { 5566 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency; 5788 + v->WritebackDRAMClockChangeWatermark = v->DRAMClockChangeLatency + v->WritebackLatency; 5567 5789 } else { 5568 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5790 + v->WritebackDRAMClockChangeWatermark = v->DRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5569 5791 } 5570 5792 5571 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5793 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5572 5794 TotalPixelBW = TotalPixelBW 5573 - + DPPPerPlane[k] * (SwathWidthY[k] * BytePerPixelDETY[k] * VRatio[k] + SwathWidthC[k] * BytePerPixelDETC[k] * VRatioChroma[k]) 5574 - / (HTotal[k] / PixelClock[k]); 5795 + + DPPPerPlane[k] * (SwathWidthY[k] * BytePerPixelDETY[k] * v->VRatio[k] + SwathWidthC[k] * BytePerPixelDETC[k] * v->VRatioChroma[k]) 5796 + / (v->HTotal[k] / v->PixelClock[k]); 5575 5797 } 5576 5798 5577 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5799 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5578 5800 
double EffectiveDETBufferSizeY = DETBufferSizeY[k]; 5579 5801 5580 5802 v->LBLatencyHidingSourceLinesY = dml_min( 5581 - (double) MaxLineBufferLines, 5582 - dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1); 5803 + (double) v->MaxLineBufferLines, 5804 + dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1); 5583 5805 5584 5806 v->LBLatencyHidingSourceLinesC = dml_min( 5585 - (double) MaxLineBufferLines, 5586 - dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1); 5807 + (double) v->MaxLineBufferLines, 5808 + dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1); 5587 5809 5588 - EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]); 5810 + EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]); 5589 5811 5590 - EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]); 5812 + EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]); 5591 5813 5592 5814 if (UnboundedRequestEnabled) { 5593 5815 EffectiveDETBufferSizeY = EffectiveDETBufferSizeY 5594 - + CompressedBufferSizeInkByte * 1024 * SwathWidthY[k] * BytePerPixelDETY[k] * VRatio[k] / (HTotal[k] / PixelClock[k]) / TotalPixelBW; 5816 + + CompressedBufferSizeInkByte * 1024 * SwathWidthY[k] * BytePerPixelDETY[k] * v->VRatio[k] / (v->HTotal[k] / v->PixelClock[k]) / TotalPixelBW; 5595 5817 } 5596 5818 5597 5819 LinesInDETY[k] = (double) EffectiveDETBufferSizeY / BytePerPixelDETY[k] / SwathWidthY[k]; 5598 5820 LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]); 5599 - FullDETBufferingTimeY = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5821 + FullDETBufferingTimeY = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5600 5822 if (BytePerPixelDETC[k] > 0) { 5601 5823 LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5602 5824 LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]); 5603 - FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k]; 5825 + FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k]; 5604 5826 } else { 5605 5827 LinesInDETC = 0; 5606 5828 FullDETBufferingTimeC = 999999; 5607 5829 } 5608 5830 5609 5831 ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY 5610 - - ((double) DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) * HTotal[k] / PixelClock[k] - *UrgentWatermark - *DRAMClockChangeWatermark; 5832 + - ((double) v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) * v->HTotal[k] / v->PixelClock[k] - v->UrgentWatermark - v->DRAMClockChangeWatermark; 5611 5833 5612 - if (NumberOfActivePlanes > 1) { 5834 + if (v->NumberOfActivePlanes > 1) { 5613 5835 ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY 5614 - - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k]; 5836 + - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k]; 5615 5837 } 5616 5838 5617 5839 
if (BytePerPixelDETC[k] > 0) { 5618 5840 ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC 5619 - - ((double) DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) * HTotal[k] / PixelClock[k] - *UrgentWatermark - *DRAMClockChangeWatermark; 5841 + - ((double) v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) * v->HTotal[k] / v->PixelClock[k] - v->UrgentWatermark - v->DRAMClockChangeWatermark; 5620 5842 5621 - if (NumberOfActivePlanes > 1) { 5843 + if (v->NumberOfActivePlanes > 1) { 5622 5844 ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC 5623 - - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k]; 5845 + - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k]; 5624 5846 } 5625 5847 v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5626 5848 } else { 5627 5849 v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5628 5850 } 5629 5851 5630 - if (WritebackEnable[k] == true) { 5631 - WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024 5632 - / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4); 5633 - if (WritebackPixelFormat[k] == dm_444_64) { 5852 + if (v->WritebackEnable[k] == true) { 5853 + WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024 5854 + / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4); 5855 + if (v->WritebackPixelFormat[k] == dm_444_64) { 5634 5856 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2; 5635 5857 } 5636 5858 WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark; ··· 5640 5862 5641 5863 v->MinActiveDRAMClockChangeMargin = 999999; 5642 5864 PlaneWithMinActiveDRAMClockChangeMargin = 0; 5643 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5865 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5644 5866 if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) { 5645 5867 v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k]; 5646 - if (BlendingAndTiming[k] == k) { 5868 + if (v->BlendingAndTiming[k] == k) { 5647 5869 PlaneWithMinActiveDRAMClockChangeMargin = k; 5648 5870 } else { 5649 - for (j = 0; j < NumberOfActivePlanes; ++j) { 5650 - if (BlendingAndTiming[k] == j) { 5871 + for (j = 0; j < v->NumberOfActivePlanes; ++j) { 5872 + if (v->BlendingAndTiming[k] == j) { 5651 5873 PlaneWithMinActiveDRAMClockChangeMargin = j; 5652 5874 } 5653 5875 } ··· 5655 5877 } 5656 5878 } 5657 5879 5658 - *MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency; 5880 + v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->DRAMClockChangeLatency ; 5659 5881 5660 5882 SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999; 5661 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5662 - if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) 5883 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5884 + if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && 
!(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) 5663 5885 && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5664 5886 SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k]; 5665 5887 } ··· 5667 5889 5668 5890 v->TotalNumberOfActiveOTG = 0; 5669 5891 5670 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5671 - if (BlendingAndTiming[k] == k) { 5892 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5893 + if (v->BlendingAndTiming[k] == k) { 5672 5894 v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1; 5673 5895 } 5674 5896 } 5675 5897 5676 5898 if (v->MinActiveDRAMClockChangeMargin > 0 && PrefetchMode == 0) { 5677 5899 *DRAMClockChangeSupport = dm_dram_clock_change_vactive; 5678 - } else if ((SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 5900 + } else if ((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 5679 5901 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0) { 5680 5902 *DRAMClockChangeSupport = dm_dram_clock_change_vblank; 5681 5903 } else { 5682 5904 *DRAMClockChangeSupport = dm_dram_clock_change_unsupported; 5683 5905 } 5684 5906 5685 - *StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5686 - *StutterEnterPlusExitWatermark = (SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep); 5687 - *Z8StutterExitWatermark = SRExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep; 5688 - *Z8StutterEnterPlusExitWatermark = SREnterPlusExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep; 5907 + *StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5908 + *StutterEnterPlusExitWatermark = (v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep); 5909 + *Z8StutterExitWatermark = v->SRExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep; 5910 + *Z8StutterEnterPlusExitWatermark = v->SREnterPlusExitZ8Time + ExtraLatency + 10 / DCFCLKDeepSleep; 5689 5911 5690 5912 #ifdef __DML_VBA_DEBUG__ 5691 5913 dml_print("DML::%s: StutterExitWatermark = %f\n", __func__, *StutterExitWatermark);
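
The hunk above folds the long per-plane argument list into the shared vba_vars_st state (v->...); the watermark arithmetic itself is unchanged and is just a chain of latency sums. A minimal standalone sketch of that arithmetic, with illustrative numbers only (the real inputs come from the SoC bounding-box tables, not these values)::

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative inputs only (us and MHz). */
        double UrgentLatency = 4.0, ExtraLatency = 1.2;
        double DRAMClockChangeLatency = 11.72;
        double WritebackLatency = 12.0, WritebackChunkSize = 2.0; /* KB */
        double SOCCLK = 1200.0;
        double SRExitTime = 5.2, DCFCLKDeepSleep = 8.0;

        double UrgentWatermark = UrgentLatency + ExtraLatency;
        double DRAMClockChangeWatermark = DRAMClockChangeLatency + UrgentWatermark;
        /* The chunk-drain term only applies with more than one writeback. */
        double WritebackUrgentWatermark =
            WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK;
        double StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep;

        printf("urgent %.2f, dram-clk %.2f, wb-urgent %.2f, stutter-exit %.2f us\n",
               UrgentWatermark, DRAMClockChangeWatermark,
               WritebackUrgentWatermark, StutterExitWatermark);
        return 0;
    }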
+45 -1
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
··· 244 244 } 245 245 246 246 /** 247 + * Finds dummy_latency_index when MCLK switching using firmware based 248 + * vblank stretch is enabled. This function will iterate through the 249 + * table of dummy pstate latencies until the lowest value that allows 250 + * dm_allow_self_refresh_and_mclk_switch to happen is found 251 + */ 252 + int dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(struct dc *dc, 253 + struct dc_state *context, 254 + display_e2e_pipe_params_st *pipes, 255 + int pipe_cnt, 256 + int vlevel) 257 + { 258 + const int max_latency_table_entries = 4; 259 + const struct vba_vars_st *vba = &context->bw_ctx.dml.vba; 260 + int dummy_latency_index = 0; 261 + 262 + dc_assert_fp_enabled(); 263 + 264 + while (dummy_latency_index < max_latency_table_entries) { 265 + context->bw_ctx.dml.soc.dram_clock_change_latency_us = 266 + dc->clk_mgr->bw_params->dummy_pstate_table[dummy_latency_index].dummy_pstate_latency_us; 267 + dcn32_internal_validate_bw(dc, context, pipes, &pipe_cnt, &vlevel, false); 268 + 269 + if (vlevel < context->bw_ctx.dml.vba.soc.num_states && 270 + vba->DRAMClockChangeSupport[vlevel][vba->maxMpcComb] != dm_dram_clock_change_unsupported) 271 + break; 272 + 273 + dummy_latency_index++; 274 + } 275 + 276 + if (dummy_latency_index == max_latency_table_entries) { 277 + ASSERT(dummy_latency_index != max_latency_table_entries); 278 + /* If the execution gets here, it means dummy p_states are 279 + * not possible. This should never happen and would mean 280 + * something is severely wrong. 281 + * Here we reset dummy_latency_index to 3, because it is 282 + * better to have underflows than system crashes. 283 + */ 284 + dummy_latency_index = max_latency_table_entries - 1; 285 + } 286 + 287 + return dummy_latency_index; 288 + } 289 + 290 + /** 247 291 * dcn32_helper_populate_phantom_dlg_params - Get DLG params for phantom pipes 248 292 * and populate pipe_ctx with those params. 249 293 * ··· 1690 1646 dcn30_can_support_mclk_switch_using_fw_based_vblank_stretch(dc, context); 1691 1647 1692 1648 if (context->bw_ctx.bw.dcn.clk.fw_based_mclk_switching) { 1693 - dummy_latency_index = dcn30_find_dummy_latency_index_for_fw_based_mclk_switch(dc, 1649 + dummy_latency_index = dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(dc, 1694 1650 context, pipes, pipe_cnt, vlevel); 1695 1651 1696 1652 /* After calling dcn30_find_dummy_latency_index_for_fw_based_mclk_switch
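
The search loop added above is a bounded escalation: try each dummy p-state latency in turn until bandwidth validation reports a supported DRAM clock change, and clamp to the last table entry rather than run off the end. A self-contained sketch of the same pattern; mode_supported_with() and the latency table below are invented stand-ins, not driver values::

    #include <stdio.h>

    #define NUM_DUMMY_LATENCIES 4

    /* Stand-in for the bandwidth validation the driver reruns per entry. */
    static int mode_supported_with(double latency_us)
    {
        return latency_us >= 30.0; /* pretend only the larger values pass */
    }

    int main(void)
    {
        const double dummy_pstate_latency_us[NUM_DUMMY_LATENCIES] = {
            10.0, 20.0, 30.0, 38.0 }; /* invented table */
        int i;

        for (i = 0; i < NUM_DUMMY_LATENCIES; i++)
            if (mode_supported_with(dummy_pstate_latency_us[i]))
                break;

        /* Clamp instead of indexing out of bounds: an underflow beats a
         * crash, as the driver comment puts it. */
        if (i == NUM_DUMMY_LATENCIES)
            i = NUM_DUMMY_LATENCIES - 1;

        printf("dummy latency index: %d\n", i);
        return 0;
    }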
+6
drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.h
··· 71 71 72 72 void dcn32_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params); 73 73 74 + int dcn32_find_dummy_latency_index_for_fw_based_mclk_switch(struct dc *dc, 75 + struct dc_state *context, 76 + display_e2e_pipe_params_st *pipes, 77 + int pipe_cnt, 78 + int vlevel); 79 + 74 80 #endif
+2
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_32.c
··· 1992 1992 dml32_CalculateODMMode( 1993 1993 mode_lib->vba.MaximumPixelsPerLinePerDSCUnit, 1994 1994 mode_lib->vba.HActive[k], 1995 + mode_lib->vba.OutputFormat[k], 1995 1996 mode_lib->vba.Output[k], 1996 1997 mode_lib->vba.ODMUse[k], 1997 1998 mode_lib->vba.MaxDispclk[i], ··· 2014 2013 dml32_CalculateODMMode( 2015 2014 mode_lib->vba.MaximumPixelsPerLinePerDSCUnit, 2016 2015 mode_lib->vba.HActive[k], 2016 + mode_lib->vba.OutputFormat[k], 2017 2017 mode_lib->vba.Output[k], 2018 2018 mode_lib->vba.ODMUse[k], 2019 2019 mode_lib->vba.MaxDispclk[i],
+26
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.c
··· 27 27 #include "display_mode_vba_32.h" 28 28 #include "../display_mode_lib.h" 29 29 30 + #define DCN32_MAX_FMT_420_BUFFER_WIDTH 4096 31 + 30 32 unsigned int dml32_dscceComputeDelay( 31 33 unsigned int bpc, 32 34 double BPP, ··· 1184 1182 void dml32_CalculateODMMode( 1185 1183 unsigned int MaximumPixelsPerLinePerDSCUnit, 1186 1184 unsigned int HActive, 1185 + enum output_format_class OutFormat, 1187 1186 enum output_encoder_class Output, 1188 1187 enum odm_combine_policy ODMUse, 1189 1188 double StateDispclk, ··· 1255 1252 *NumberOfDPP = 1; 1256 1253 else 1257 1254 *TotalAvailablePipesSupport = false; 1255 + } 1256 + if (OutFormat == dm_420 && HActive > DCN32_MAX_FMT_420_BUFFER_WIDTH && 1257 + ODMUse != dm_odm_combine_policy_4to1) { 1258 + if (HActive > DCN32_MAX_FMT_420_BUFFER_WIDTH * 4) { 1259 + *ODMMode = dm_odm_combine_mode_disabled; 1260 + *NumberOfDPP = 0; 1261 + *TotalAvailablePipesSupport = false; 1262 + } else if (HActive > DCN32_MAX_FMT_420_BUFFER_WIDTH * 2 || 1263 + *ODMMode == dm_odm_combine_mode_4to1) { 1264 + *ODMMode = dm_odm_combine_mode_4to1; 1265 + *RequiredDISPCLKPerSurface = SurfaceRequiredDISPCLKWithODMCombineFourToOne; 1266 + *NumberOfDPP = 4; 1267 + } else { 1268 + *ODMMode = dm_odm_combine_mode_2to1; 1269 + *RequiredDISPCLKPerSurface = SurfaceRequiredDISPCLKWithODMCombineTwoToOne; 1270 + *NumberOfDPP = 2; 1271 + } 1272 + } 1273 + if (Output == dm_hdmi && OutFormat == dm_420 && 1274 + HActive > DCN32_MAX_FMT_420_BUFFER_WIDTH) { 1275 + *ODMMode = dm_odm_combine_mode_disabled; 1276 + *NumberOfDPP = 0; 1277 + *TotalAvailablePipesSupport = false; 1258 1278 } 1259 1279 } 1260 1280
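
The new 4:2:0 clamp above follows from the per-pipe buffer width: one pipe can rasterize at most DCN32_MAX_FMT_420_BUFFER_WIDTH (4096) pixels of a 420 stream, two combined pipes twice that, four pipes four times that, and anything wider is unsupported (HDMI additionally cannot use ODM combine for wide 420 at all, hence its separate reject branch). A standalone, simplified sketch of the width-to-pipe-count decision with made-up test widths::

    #include <stdio.h>

    #define MAX_FMT_420_BUFFER_WIDTH 4096

    /* Pipes that ODM must combine for a 4:2:0 stream of this width,
     * or 0 when it cannot be supported; simplified from the hunk. */
    static int odm_pipes_for_420(unsigned int hactive)
    {
        if (hactive <= MAX_FMT_420_BUFFER_WIDTH)
            return 1;                          /* fits one pipe's buffer */
        if (hactive <= MAX_FMT_420_BUFFER_WIDTH * 2)
            return 2;                          /* ODM combine 2:1 */
        if (hactive <= MAX_FMT_420_BUFFER_WIDTH * 4)
            return 4;                          /* ODM combine 4:1 */
        return 0;                              /* wider than four buffers */
    }

    int main(void)
    {
        unsigned int w[] = { 3840, 5120, 7680, 16400 }; /* sample widths */
        for (int i = 0; i < 4; i++)
            printf("%5u -> %d pipe(s)\n", w[i], odm_pipes_for_420(w[i]));
        return 0;
    }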
+1
drivers/gpu/drm/amd/display/dc/dml/dcn32/display_mode_vba_util_32.h
··· 216 216 void dml32_CalculateODMMode( 217 217 unsigned int MaximumPixelsPerLinePerDSCUnit, 218 218 unsigned int HActive, 219 + enum output_format_class OutFormat, 219 220 enum output_encoder_class Output, 220 221 enum odm_combine_policy ODMUse, 221 222 double StateDispclk,
+4
drivers/gpu/drm/amd/display/dc/inc/resource.h
··· 219 219 struct dc_state *context, 220 220 uint8_t disabled_master_pipe_idx); 221 221 222 + void reset_sync_context_for_pipe(const struct dc *dc, 223 + struct dc_state *context, 224 + uint8_t pipe_idx); 225 + 222 226 uint8_t resource_transmitter_to_phy_idx(const struct dc *dc, enum transmitter transmitter); 223 227 224 228 const struct link_hwss *get_link_hwss(const struct dc_link *link,
+2 -42
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 209 209 if (!adev->scpm_enabled) 210 210 return 0; 211 211 212 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7)) 212 + if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7)) || 213 + (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0))) 213 214 return 0; 214 215 215 216 /* override pptable_id from driver parameter */ ··· 219 218 dev_info(adev->dev, "override pptable id %d\n", pptable_id); 220 219 } else { 221 220 pptable_id = smu->smu_table.boot_values.pp_table_id; 222 - 223 - /* 224 - * Temporary solution for SMU V13.0.0 with SCPM enabled: 225 - * - use vbios carried pptable when pptable_id is 3664, 3715 or 3795 226 - * - use 36831 soft pptable when pptable_id is 3683 227 - */ 228 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) { 229 - switch (pptable_id) { 230 - case 3664: 231 - case 3715: 232 - case 3795: 233 - pptable_id = 0; 234 - break; 235 - case 3683: 236 - pptable_id = 36831; 237 - break; 238 - default: 239 - dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id); 240 - return -EINVAL; 241 - } 242 - } 243 221 } 244 222 245 223 /* "pptable_id == 0" means vbios carries the pptable. */ ··· 451 471 } else { 452 472 pptable_id = smu->smu_table.boot_values.pp_table_id; 453 473 454 - /* 455 - * Temporary solution for SMU V13.0.0 with SCPM disabled: 456 - * - use 3664, 3683 or 3715 on request 457 - * - use 3664 when pptable_id is 0 458 - * TODO: drop these when the pptable carried in vbios is ready. 459 - */ 460 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) { 461 - switch (pptable_id) { 462 - case 0: 463 - pptable_id = 3664; 464 - break; 465 - case 3664: 466 - case 3683: 467 - case 3715: 468 - break; 469 - default: 470 - dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id); 471 - return -EINVAL; 472 - } 473 - } 474 474 } 475 475 476 476 /* force using vbios pptable in sriov mode */
+3 -50
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 410 410 { 411 411 struct smu_table_context *smu_table = &smu->smu_table; 412 412 struct amdgpu_device *adev = smu->adev; 413 - uint32_t pptable_id; 414 413 int ret = 0; 415 414 416 - /* 417 - * With SCPM enabled, the pptable used will be signed. It cannot 418 - * be used directly by driver. To get the raw pptable, we need to 419 - * rely on the combo pptable(and its revelant SMU message). 420 - */ 421 - if (adev->scpm_enabled) { 422 - ret = smu_v13_0_0_get_pptable_from_pmfw(smu, 423 - &smu_table->power_play_table, 424 - &smu_table->power_play_table_size); 425 - } else { 426 - /* override pptable_id from driver parameter */ 427 - if (amdgpu_smu_pptable_id >= 0) { 428 - pptable_id = amdgpu_smu_pptable_id; 429 - dev_info(adev->dev, "override pptable id %d\n", pptable_id); 430 - } else { 431 - pptable_id = smu_table->boot_values.pp_table_id; 432 - } 433 - 434 - /* 435 - * Temporary solution for SMU V13.0.0 with SCPM disabled: 436 - * - use vbios carried pptable when pptable_id is 3664, 3715 or 3795 437 - * - use soft pptable when pptable_id is 3683 438 - */ 439 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) { 440 - switch (pptable_id) { 441 - case 3664: 442 - case 3715: 443 - case 3795: 444 - pptable_id = 0; 445 - break; 446 - case 3683: 447 - break; 448 - default: 449 - dev_err(adev->dev, "Unsupported pptable id %d\n", pptable_id); 450 - return -EINVAL; 451 - } 452 - } 453 - 454 - /* force using vbios pptable in sriov mode */ 455 - if ((amdgpu_sriov_vf(adev) || !pptable_id) && (amdgpu_emu_mode != 1)) 456 - ret = smu_v13_0_0_get_pptable_from_pmfw(smu, 457 - &smu_table->power_play_table, 458 - &smu_table->power_play_table_size); 459 - else 460 - ret = smu_v13_0_get_pptable_from_firmware(smu, 461 - &smu_table->power_play_table, 462 - &smu_table->power_play_table_size, 463 - pptable_id); 464 - } 415 + ret = smu_v13_0_0_get_pptable_from_pmfw(smu, 416 + &smu_table->power_play_table, 417 + &smu_table->power_play_table_size); 465 418 if (ret) 466 419 return ret; 467 420
+1
drivers/gpu/drm/hisilicon/hibmc/Kconfig
··· 2 2 config DRM_HISI_HIBMC 3 3 tristate "DRM Support for Hisilicon Hibmc" 4 4 depends on DRM && PCI && (ARM64 || COMPILE_TEST) 5 + depends on MMU 5 6 select DRM_KMS_HELPER 6 7 select DRM_VRAM_HELPER 7 8 select DRM_TTM
+4 -4
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 1269 1269 trace_i915_context_free(ctx); 1270 1270 GEM_BUG_ON(!i915_gem_context_is_closed(ctx)); 1271 1271 1272 + spin_lock(&ctx->i915->gem.contexts.lock); 1273 + list_del(&ctx->link); 1274 + spin_unlock(&ctx->i915->gem.contexts.lock); 1275 + 1272 1276 if (ctx->syncobj) 1273 1277 drm_syncobj_put(ctx->syncobj); 1274 1278 ··· 1524 1520 lut_close(ctx); 1525 1521 1526 1522 ctx->file_priv = ERR_PTR(-EBADF); 1527 - 1528 - spin_lock(&ctx->i915->gem.contexts.lock); 1529 - list_del(&ctx->link); 1530 - spin_unlock(&ctx->i915->gem.contexts.lock); 1531 1523 1532 1524 client = ctx->client; 1533 1525 if (client) {
+2 -1
drivers/gpu/drm/i915/i915_gem.c
··· 1191 1191 1192 1192 intel_uc_cleanup_firmwares(&to_gt(dev_priv)->uc); 1193 1193 1194 - i915_gem_drain_freed_objects(dev_priv); 1194 + /* Flush any outstanding work, including i915_gem_context.release_work. */ 1195 + i915_gem_drain_workqueue(dev_priv); 1195 1196 1196 1197 drm_WARN_ON(&dev_priv->drm, !list_empty(&dev_priv->gem.contexts.list)); 1197 1198 }
+1 -1
drivers/gpu/drm/mediatek/mtk_drm_ddp_comp.c
··· 157 157 { 158 158 struct mtk_ddp_comp_dev *priv = dev_get_drvdata(dev); 159 159 160 - mtk_ddp_write(cmdq_pkt, h << 16 | w, &priv->cmdq_reg, priv->regs, DISP_REG_DITHER_SIZE); 160 + mtk_ddp_write(cmdq_pkt, w << 16 | h, &priv->cmdq_reg, priv->regs, DISP_REG_DITHER_SIZE); 161 161 mtk_ddp_write(cmdq_pkt, DITHER_RELAY_MODE, &priv->cmdq_reg, priv->regs, 162 162 DISP_REG_DITHER_CFG); 163 163 mtk_dither_set_common(priv->regs, &priv->cmdq_reg, bpc, DISP_REG_DITHER_CFG,
+13 -11
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 685 685 if (--dsi->refcount != 0) 686 686 return; 687 687 688 + /* 689 + * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since 690 + * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(), 691 + * which needs irq for vblank, and mtk_dsi_stop() will disable irq. 692 + * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(), 693 + * after dsi is fully set. 694 + */ 695 + mtk_dsi_stop(dsi); 696 + 697 + mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500); 688 698 mtk_dsi_reset_engine(dsi); 689 699 mtk_dsi_lane0_ulp_mode_enter(dsi); 690 700 mtk_dsi_clk_ulp_mode_enter(dsi); ··· 744 734 { 745 735 if (!dsi->enabled) 746 736 return; 747 - 748 - /* 749 - * mtk_dsi_stop() and mtk_dsi_start() is asymmetric, since 750 - * mtk_dsi_stop() should be called after mtk_drm_crtc_atomic_disable(), 751 - * which needs irq for vblank, and mtk_dsi_stop() will disable irq. 752 - * mtk_dsi_start() needs to be called in mtk_output_dsi_enable(), 753 - * after dsi is fully set. 754 - */ 755 - mtk_dsi_stop(dsi); 756 - 757 - mtk_dsi_switch_to_cmd_mode(dsi, VM_DONE_INT_FLAG, 500); 758 737 759 738 dsi->enabled = false; 760 739 } ··· 807 808 808 809 static const struct drm_bridge_funcs mtk_dsi_bridge_funcs = { 809 810 .attach = mtk_dsi_bridge_attach, 811 + .atomic_destroy_state = drm_atomic_helper_bridge_destroy_state, 810 812 .atomic_disable = mtk_dsi_bridge_atomic_disable, 813 + .atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state, 811 814 .atomic_enable = mtk_dsi_bridge_atomic_enable, 812 815 .atomic_pre_enable = mtk_dsi_bridge_atomic_pre_enable, 813 816 .atomic_post_disable = mtk_dsi_bridge_atomic_post_disable, 817 + .atomic_reset = drm_atomic_helper_bridge_reset, 814 818 .mode_set = mtk_dsi_bridge_mode_set, 815 819 }; 816 820
+5 -1
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 262 262 if (ret) 263 263 return ret; 264 264 265 - drm_fbdev_generic_setup(dev, 0); 265 + /* 266 + * FIXME: A 24-bit color depth does not work with 24 bpp on 267 + * G200ER. Force 32 bpp. 268 + */ 269 + drm_fbdev_generic_setup(dev, 32); 266 270 267 271 return 0; 268 272 }
+1 -1
drivers/gpu/drm/panel/panel-simple.c
··· 2257 2257 .enable = 200, 2258 2258 .disable = 20, 2259 2259 }, 2260 - .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 2260 + .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 2261 2261 .connector_type = DRM_MODE_CONNECTOR_LVDS, 2262 2262 }; 2263 2263
+1 -1
drivers/i2c/busses/i2c-imx.c
··· 1583 1583 if (i2c_imx->dma) 1584 1584 i2c_imx_dma_free(i2c_imx); 1585 1585 1586 - if (ret == 0) { 1586 + if (ret >= 0) { 1587 1587 /* setup chip registers to defaults */ 1588 1588 imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IADR); 1589 1589 imx_i2c_write_reg(0, i2c_imx, IMX_I2C_IFDR);
+27 -41
drivers/i2c/busses/i2c-mlxbf.c
··· 6 6 */ 7 7 8 8 #include <linux/acpi.h> 9 + #include <linux/bitfield.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/err.h> 11 12 #include <linux/interrupt.h> ··· 64 63 */ 65 64 #define MLXBF_I2C_TYU_PLL_OUT_FREQ (400 * 1000 * 1000) 66 65 /* Reference clock for Bluefield - 156 MHz. */ 67 - #define MLXBF_I2C_PLL_IN_FREQ (156 * 1000 * 1000) 66 + #define MLXBF_I2C_PLL_IN_FREQ 156250000ULL 68 67 69 68 /* Constant used to determine the PLL frequency. */ 70 - #define MLNXBF_I2C_COREPLL_CONST 16384 69 + #define MLNXBF_I2C_COREPLL_CONST 16384ULL 70 + 71 + #define MLXBF_I2C_FREQUENCY_1GHZ 1000000000ULL 71 72 72 73 /* PLL registers. */ 73 - #define MLXBF_I2C_CORE_PLL_REG0 0x0 74 74 #define MLXBF_I2C_CORE_PLL_REG1 0x4 75 75 #define MLXBF_I2C_CORE_PLL_REG2 0x8 76 76 ··· 183 181 #define MLXBF_I2C_COREPLL_FREQ MLXBF_I2C_TYU_PLL_OUT_FREQ 184 182 185 183 /* Core PLL TYU configuration. */ 186 - #define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK GENMASK(12, 0) 187 - #define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK GENMASK(3, 0) 188 - #define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK GENMASK(5, 0) 189 - 190 - #define MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT 3 191 - #define MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT 16 192 - #define MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT 20 184 + #define MLXBF_I2C_COREPLL_CORE_F_TYU_MASK GENMASK(15, 3) 185 + #define MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK GENMASK(19, 16) 186 + #define MLXBF_I2C_COREPLL_CORE_R_TYU_MASK GENMASK(25, 20) 193 187 194 188 /* Core PLL YU configuration. */ 195 189 #define MLXBF_I2C_COREPLL_CORE_F_YU_MASK GENMASK(25, 0) 196 190 #define MLXBF_I2C_COREPLL_CORE_OD_YU_MASK GENMASK(3, 0) 197 - #define MLXBF_I2C_COREPLL_CORE_R_YU_MASK GENMASK(5, 0) 191 + #define MLXBF_I2C_COREPLL_CORE_R_YU_MASK GENMASK(31, 26) 198 192 199 - #define MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT 0 200 - #define MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT 1 201 - #define MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT 26 202 193 203 194 /* Core PLL frequency. */ 204 195 static u64 mlxbf_i2c_corepll_frequency; ··· 474 479 #define MLXBF_I2C_MASK_8 GENMASK(7, 0) 475 480 #define MLXBF_I2C_MASK_16 GENMASK(15, 0) 476 481 477 - #define MLXBF_I2C_FREQUENCY_1GHZ 1000000000 478 - 479 482 /* 480 483 * Function to poll a set of bits at a specific address; it checks whether 481 484 * the bits are equal to zero when eq_zero is set to 'true', and not equal ··· 662 669 /* Clear status bits. */ 663 670 writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_STATUS); 664 671 /* Set the cause data. */ 665 - writel(~0x0, priv->smbus->io + MLXBF_I2C_CAUSE_OR_CLEAR); 672 + writel(~0x0, priv->mst_cause->io + MLXBF_I2C_CAUSE_OR_CLEAR); 666 673 /* Zero PEC byte. */ 667 674 writel(0x0, priv->smbus->io + MLXBF_I2C_SMBUS_MASTER_PEC); 668 675 /* Zero byte count. 
*/ ··· 731 738 if (flags & MLXBF_I2C_F_WRITE) { 732 739 write_en = 1; 733 740 write_len += operation->length; 741 + if (data_idx + operation->length > 742 + MLXBF_I2C_MASTER_DATA_DESC_SIZE) 743 + return -ENOBUFS; 734 744 memcpy(data_desc + data_idx, 735 745 operation->buffer, operation->length); 736 746 data_idx += operation->length; ··· 1403 1407 return 0; 1404 1408 } 1405 1409 1406 - static u64 mlxbf_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res) 1410 + static u64 mlxbf_i2c_calculate_freq_from_tyu(struct mlxbf_i2c_resource *corepll_res) 1407 1411 { 1408 - u64 core_frequency, pad_frequency; 1412 + u64 core_frequency; 1409 1413 u8 core_od, core_r; 1410 1414 u32 corepll_val; 1411 1415 u16 core_f; 1412 1416 1413 - pad_frequency = MLXBF_I2C_PLL_IN_FREQ; 1414 - 1415 1417 corepll_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1); 1416 1418 1417 1419 /* Get Core PLL configuration bits. */ 1418 - core_f = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_F_TYU_SHIFT) & 1419 - MLXBF_I2C_COREPLL_CORE_F_TYU_MASK; 1420 - core_od = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_OD_TYU_SHIFT) & 1421 - MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK; 1422 - core_r = rol32(corepll_val, MLXBF_I2C_COREPLL_CORE_R_TYU_SHIFT) & 1423 - MLXBF_I2C_COREPLL_CORE_R_TYU_MASK; 1420 + core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_TYU_MASK, corepll_val); 1421 + core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_TYU_MASK, corepll_val); 1422 + core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_TYU_MASK, corepll_val); 1424 1423 1425 1424 /* 1426 1425 * Compute PLL output frequency as follow: ··· 1427 1436 * Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency 1428 1437 * and PadFrequency, respectively. 1429 1438 */ 1430 - core_frequency = pad_frequency * (++core_f); 1439 + core_frequency = MLXBF_I2C_PLL_IN_FREQ * (++core_f); 1431 1440 core_frequency /= (++core_r) * (++core_od); 1432 1441 1433 1442 return core_frequency; 1434 1443 } 1435 1444 1436 - static u64 mlxbf_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res) 1445 + static u64 mlxbf_i2c_calculate_freq_from_yu(struct mlxbf_i2c_resource *corepll_res) 1437 1446 { 1438 1447 u32 corepll_reg1_val, corepll_reg2_val; 1439 - u64 corepll_frequency, pad_frequency; 1448 + u64 corepll_frequency; 1440 1449 u8 core_od, core_r; 1441 1450 u32 core_f; 1442 - 1443 - pad_frequency = MLXBF_I2C_PLL_IN_FREQ; 1444 1451 1445 1452 corepll_reg1_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG1); 1446 1453 corepll_reg2_val = readl(corepll_res->io + MLXBF_I2C_CORE_PLL_REG2); 1447 1454 1448 1455 /* Get Core PLL configuration bits */ 1449 - core_f = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_F_YU_SHIFT) & 1450 - MLXBF_I2C_COREPLL_CORE_F_YU_MASK; 1451 - core_r = rol32(corepll_reg1_val, MLXBF_I2C_COREPLL_CORE_R_YU_SHIFT) & 1452 - MLXBF_I2C_COREPLL_CORE_R_YU_MASK; 1453 - core_od = rol32(corepll_reg2_val, MLXBF_I2C_COREPLL_CORE_OD_YU_SHIFT) & 1454 - MLXBF_I2C_COREPLL_CORE_OD_YU_MASK; 1456 + core_f = FIELD_GET(MLXBF_I2C_COREPLL_CORE_F_YU_MASK, corepll_reg1_val); 1457 + core_r = FIELD_GET(MLXBF_I2C_COREPLL_CORE_R_YU_MASK, corepll_reg1_val); 1458 + core_od = FIELD_GET(MLXBF_I2C_COREPLL_CORE_OD_YU_MASK, corepll_reg2_val); 1455 1459 1456 1460 /* 1457 1461 * Compute PLL output frequency as follow: ··· 1458 1472 * Where PLL_OUT_FREQ and PLL_IN_FREQ refer to CoreFrequency 1459 1473 * and PadFrequency, respectively. 
1460 1474 */ 1461 - corepll_frequency = (pad_frequency * core_f) / MLNXBF_I2C_COREPLL_CONST; 1475 + corepll_frequency = (MLXBF_I2C_PLL_IN_FREQ * core_f) / MLNXBF_I2C_COREPLL_CONST; 1462 1476 corepll_frequency /= (++core_r) * (++core_od); 1463 1477 1464 1478 return corepll_frequency; ··· 2166 2180 [1] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_1], 2167 2181 [2] = &mlxbf_i2c_gpio_res[MLXBF_I2C_CHIP_TYPE_1] 2168 2182 }, 2169 - .calculate_freq = mlxbf_calculate_freq_from_tyu 2183 + .calculate_freq = mlxbf_i2c_calculate_freq_from_tyu 2170 2184 }, 2171 2185 [MLXBF_I2C_CHIP_TYPE_2] = { 2172 2186 .type = MLXBF_I2C_CHIP_TYPE_2, 2173 2187 .shared_res = { 2174 2188 [0] = &mlxbf_i2c_corepll_res[MLXBF_I2C_CHIP_TYPE_2] 2175 2189 }, 2176 - .calculate_freq = mlxbf_calculate_freq_from_yu 2190 + .calculate_freq = mlxbf_i2c_calculate_freq_from_yu 2177 2191 } 2178 2192 }; 2179 2193
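
Two things change in the hunk above: the shift-plus-mask pairs become single GENMASK()/FIELD_GET() definitions that describe each whole field in place, and the PLL math uses the exact 156.25 MHz pad clock. A userspace re-implementation of the extraction plus the TYU output formula, PLL_OUT = PLL_IN * (core_f + 1) / ((core_r + 1) * (core_od + 1)); the register value is invented and the two macros are simplified stand-ins for the kernel helpers::

    #include <stdio.h>
    #include <stdint.h>

    /* Simplified stand-ins for the kernel's GENMASK()/FIELD_GET(). */
    #define GENMASK(h, l)  ((~0u << (l)) & (~0u >> (31 - (h))))
    #define FIELD_GET(mask, reg) (((reg) & (mask)) / ((mask) & ~((mask) << 1)))

    #define COREPLL_CORE_F_TYU_MASK   GENMASK(15, 3)
    #define COREPLL_CORE_OD_TYU_MASK  GENMASK(19, 16)
    #define COREPLL_CORE_R_TYU_MASK   GENMASK(25, 20)

    int main(void)
    {
        uint32_t corepll = 0x001000c8;     /* invented register value */
        uint64_t pll_in = 156250000ULL;    /* 156.25 MHz pad clock */

        uint32_t core_f  = FIELD_GET(COREPLL_CORE_F_TYU_MASK, corepll);
        uint32_t core_od = FIELD_GET(COREPLL_CORE_OD_TYU_MASK, corepll);
        uint32_t core_r  = FIELD_GET(COREPLL_CORE_R_TYU_MASK, corepll);

        uint64_t out = pll_in * (core_f + 1) / ((core_r + 1) * (core_od + 1));
        printf("core_f=%u core_od=%u core_r=%u -> %llu Hz\n",
               core_f, core_od, core_r, (unsigned long long)out);
        return 0;
    }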
+3 -2
drivers/i2c/i2c-mux.c
··· 243 243 int (*deselect)(struct i2c_mux_core *, u32)) 244 244 { 245 245 struct i2c_mux_core *muxc; 246 + size_t mux_size; 246 247 247 - muxc = devm_kzalloc(dev, struct_size(muxc, adapter, max_adapters) 248 - + sizeof_priv, GFP_KERNEL); 248 + mux_size = struct_size(muxc, adapter, max_adapters); 249 + muxc = devm_kzalloc(dev, size_add(mux_size, sizeof_priv), GFP_KERNEL); 249 250 if (!muxc) 250 251 return NULL; 251 252 if (sizeof_priv)
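
struct_size() already saturates internally, but the open-coded "+ sizeof_priv" after it could still wrap; size_add() keeps the saturation, so a huge sizeof_priv makes the allocation fail cleanly instead of succeeding undersized. A small standalone demonstration with a deliberately overflowing addend (the real helper lives in <linux/overflow.h>; this local version only mimics it)::

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Local mimic of the kernel's saturating size_add(): overflow yields
     * SIZE_MAX, which no allocator will ever satisfy. */
    static size_t size_add(size_t a, size_t b)
    {
        size_t sum = a + b;
        return sum < a ? SIZE_MAX : sum;
    }

    int main(void)
    {
        size_t mux_size = 4096;              /* struct_size(...) result */
        size_t sizeof_priv = SIZE_MAX - 100; /* hostile caller input */

        printf("plain add : %zu (wrapped)\n", mux_size + sizeof_priv);
        printf("size_add(): %zu (saturated)\n", size_add(mux_size, sizeof_priv));
        return 0;
    }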
+3
drivers/input/keyboard/iqs62x-keys.c
··· 77 77 if (ret) { 78 78 dev_err(&pdev->dev, "Failed to read switch code: %d\n", 79 79 ret); 80 + fwnode_handle_put(child); 80 81 return ret; 81 82 } 82 83 iqs62x_keys->switches[i].code = val; ··· 91 90 iqs62x_keys->switches[i].flag = (i == IQS62X_SW_HALL_N ? 92 91 IQS62X_EVENT_HALL_N_T : 93 92 IQS62X_EVENT_HALL_S_T); 93 + 94 + fwnode_handle_put(child); 94 95 } 95 96 96 97 return 0;
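
Both added calls plug the same leak: a get-style fwnode child iterator hands back a counted reference each iteration, so leaving the loop early without dropping it pins the node forever. The shape of the fix, as a hedged fragment rather than a standalone program ("linux,code" stands in for whatever property the loop actually reads)::

    device_for_each_child_node(dev, child) {
        ret = fwnode_property_read_u32(child, "linux,code", &val);
        if (ret) {
            fwnode_handle_put(child);  /* drop the reference on early exit */
            return ret;
        }
        /* a normal step to the next child drops the reference itself */
    }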
+1 -1
drivers/input/keyboard/snvs_pwrkey.c
··· 20 20 #include <linux/mfd/syscon.h> 21 21 #include <linux/regmap.h> 22 22 23 - #define SNVS_HPVIDR1_REG 0xF8 23 + #define SNVS_HPVIDR1_REG 0xBF8 24 24 #define SNVS_LPSR_REG 0x4C /* LP Status Register */ 25 25 #define SNVS_LPCR_REG 0x38 /* LP Control Register */ 26 26 #define SNVS_HPSR_REG 0x14
-1
drivers/input/mouse/synaptics.c
··· 186 186 "LEN2044", /* L470 */ 187 187 "LEN2054", /* E480 */ 188 188 "LEN2055", /* E580 */ 189 - "LEN2064", /* T14 Gen 1 AMD / P14s Gen 1 AMD */ 190 189 "LEN2068", /* T14 Gen 1 */ 191 190 "SYN3052", /* HP EliteBook 840 G4 */ 192 191 "SYN3221", /* HP 15-ay000 */
+1 -1
drivers/input/touchscreen/melfas_mip4.c
··· 1453 1453 "ce", GPIOD_OUT_LOW); 1454 1454 if (IS_ERR(ts->gpio_ce)) { 1455 1455 error = PTR_ERR(ts->gpio_ce); 1456 - if (error != EPROBE_DEFER) 1456 + if (error != -EPROBE_DEFER) 1457 1457 dev_err(&client->dev, 1458 1458 "Failed to get gpio: %d\n", error); 1459 1459 return error;
+7 -1
drivers/irqchip/Kconfig
··· 561 561 select GENERIC_IRQ_CHIP 562 562 select IRQ_DOMAIN 563 563 select GENERIC_IRQ_EFFECTIVE_AFF_MASK 564 + select LOONGSON_LIOINTC 565 + select LOONGSON_EIOINTC 566 + select LOONGSON_PCH_PIC 567 + select LOONGSON_PCH_MSI 568 + select LOONGSON_PCH_LPC 564 569 help 565 570 Support for the LoongArch CPU Interrupt Controller. For details of 566 571 irq chip hierarchy on LoongArch platforms please read the document ··· 628 623 629 624 config LOONGSON_PCH_LPC 630 625 bool "Loongson PCH LPC Controller" 626 + depends on LOONGARCH 631 627 depends on MACH_LOONGSON64 632 - default (MACH_LOONGSON64 && LOONGARCH) 628 + default MACH_LOONGSON64 633 629 select IRQ_DOMAIN_HIERARCHY 634 630 help 635 631 Support for the Loongson PCH LPC Controller.
+8 -6
drivers/irqchip/irq-gic-v3-its.c
··· 1574 1574 const struct cpumask *aff_mask) 1575 1575 { 1576 1576 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1577 - cpumask_var_t tmpmask; 1577 + static DEFINE_RAW_SPINLOCK(tmpmask_lock); 1578 + static struct cpumask __tmpmask; 1579 + struct cpumask *tmpmask; 1580 + unsigned long flags; 1578 1581 int cpu, node; 1579 - 1580 - if (!alloc_cpumask_var(&tmpmask, GFP_ATOMIC)) 1581 - return -ENOMEM; 1582 - 1583 1582 node = its_dev->its->numa_node; 1583 + tmpmask = &__tmpmask; 1584 + 1585 + raw_spin_lock_irqsave(&tmpmask_lock, flags); 1584 1586 1585 1587 if (!irqd_affinity_is_managed(d)) { 1586 1588 /* First try the NUMA node */ ··· 1636 1634 cpu = cpumask_pick_least_loaded(d, tmpmask); 1637 1635 } 1638 1636 out: 1639 - free_cpumask_var(tmpmask); 1637 + raw_spin_unlock_irqrestore(&tmpmask_lock, flags); 1640 1638 1641 1639 pr_debug("IRQ%d -> %*pbl CPU%d\n", d->irq, cpumask_pr_args(aff_mask), cpu); 1642 1640 return cpu;
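
The allocation being removed sat in an interrupt-affinity path where a GFP_ATOMIC failure had no good recovery and where allocating at all is unfriendly under PREEMPT_RT; the replacement is one static scratch cpumask serialized by a raw spinlock. As a non-standalone fragment, the pattern is::

    static DEFINE_RAW_SPINLOCK(tmpmask_lock);
    static struct cpumask __tmpmask;

        raw_spin_lock_irqsave(&tmpmask_lock, flags);
        /* ... build the candidate CPU set in __tmpmask ... */
        cpu = cpumask_pick_least_loaded(d, &__tmpmask);
        raw_spin_unlock_irqrestore(&tmpmask_lock, flags);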
+1 -1
drivers/irqchip/irq-stm32-exti.c
··· 716 716 717 717 irq_domain_set_hwirq_and_chip(dm, virq, hwirq, chip, chip_data); 718 718 719 - if (!host_data->drv_data || !host_data->drv_data->desc_irqs) 719 + if (!host_data->drv_data->desc_irqs) 720 720 return -EINVAL; 721 721 722 722 desc_irq = host_data->drv_data->desc_irqs[hwirq];
+1 -1
drivers/media/usb/b2c2/flexcop-usb.c
··· 511 511 512 512 if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1) 513 513 return -ENODEV; 514 - if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[1].desc)) 514 + if (!usb_endpoint_is_isoc_in(&fc_usb->uintf->cur_altsetting->endpoint[0].desc)) 515 515 return -ENODEV; 516 516 517 517 switch (fc_usb->udev->speed) {
+2 -1
drivers/mmc/core/sd.c
··· 870 870 * the CCS bit is set as well. We deliberately deviate from the spec in 871 871 * regards to this, which allows UHS-I to be supported for SDSC cards. 872 872 */ 873 - if (!mmc_host_is_spi(host) && rocr && (*rocr & SD_ROCR_S18A)) { 873 + if (!mmc_host_is_spi(host) && (ocr & SD_OCR_S18R) && 874 + rocr && (*rocr & SD_ROCR_S18A)) { 874 875 err = mmc_set_uhs_voltage(host, pocr); 875 876 if (err == -EAGAIN) { 876 877 retries--;
+1 -1
drivers/mmc/host/mmc_hsq.c
··· 34 34 spin_lock_irqsave(&hsq->lock, flags); 35 35 36 36 /* Make sure we are not already running a request now */ 37 - if (hsq->mrq) { 37 + if (hsq->mrq || hsq->recovery_halt) { 38 38 spin_unlock_irqrestore(&hsq->lock, flags); 39 39 return; 40 40 }
+3 -14
drivers/mmc/host/moxart-mmc.c
··· 111 111 #define CLK_DIV_MASK 0x7f 112 112 113 113 /* REG_BUS_WIDTH */ 114 - #define BUS_WIDTH_8 BIT(2) 115 - #define BUS_WIDTH_4 BIT(1) 114 + #define BUS_WIDTH_4_SUPPORT BIT(3) 115 + #define BUS_WIDTH_4 BIT(2) 116 116 #define BUS_WIDTH_1 BIT(0) 117 117 118 118 #define MMC_VDD_360 23 ··· 524 524 case MMC_BUS_WIDTH_4: 525 525 writel(BUS_WIDTH_4, host->base + REG_BUS_WIDTH); 526 526 break; 527 - case MMC_BUS_WIDTH_8: 528 - writel(BUS_WIDTH_8, host->base + REG_BUS_WIDTH); 529 - break; 530 527 default: 531 528 writel(BUS_WIDTH_1, host->base + REG_BUS_WIDTH); 532 529 break; ··· 648 651 dmaengine_slave_config(host->dma_chan_rx, &cfg); 649 652 } 650 653 651 - switch ((readl(host->base + REG_BUS_WIDTH) >> 3) & 3) { 652 - case 1: 654 + if (readl(host->base + REG_BUS_WIDTH) & BUS_WIDTH_4_SUPPORT) 653 655 mmc->caps |= MMC_CAP_4_BIT_DATA; 654 - break; 655 - case 2: 656 - mmc->caps |= MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA; 657 - break; 658 - default: 659 - break; 660 - } 661 656 662 657 writel(0, host->base + REG_INTERRUPT_MASK); 663 658
+2 -2
drivers/mmc/host/sdhci.c
··· 3928 3928 3929 3929 if (intmask & (SDHCI_INT_INDEX | SDHCI_INT_END_BIT | SDHCI_INT_CRC)) { 3930 3930 *cmd_error = -EILSEQ; 3931 - if (!mmc_op_tuning(host->cmd->opcode)) 3931 + if (!mmc_op_tuning(SDHCI_GET_CMD(sdhci_readw(host, SDHCI_COMMAND)))) 3932 3932 sdhci_err_stats_inc(host, CMD_CRC); 3933 3933 } else if (intmask & SDHCI_INT_TIMEOUT) { 3934 3934 *cmd_error = -ETIMEDOUT; ··· 3938 3938 3939 3939 if (intmask & (SDHCI_INT_DATA_END_BIT | SDHCI_INT_DATA_CRC)) { 3940 3940 *data_error = -EILSEQ; 3941 - if (!mmc_op_tuning(host->cmd->opcode)) 3941 + if (!mmc_op_tuning(SDHCI_GET_CMD(sdhci_readw(host, SDHCI_COMMAND)))) 3942 3942 sdhci_err_stats_inc(host, DAT_CRC); 3943 3943 } else if (intmask & SDHCI_INT_DATA_TIMEOUT) { 3944 3944 *data_error = -ETIMEDOUT;
+15 -2
drivers/net/can/c_can/c_can.h
··· 235 235 return ring->tail & (ring->obj_num - 1); 236 236 } 237 237 238 - static inline u8 c_can_get_tx_free(const struct c_can_tx_ring *ring) 238 + static inline u8 c_can_get_tx_free(const struct c_can_priv *priv, 239 + const struct c_can_tx_ring *ring) 239 240 { 240 - return ring->obj_num - (ring->head - ring->tail); 241 + u8 head = c_can_get_tx_head(ring); 242 + u8 tail = c_can_get_tx_tail(ring); 243 + 244 + if (priv->type == BOSCH_D_CAN) 245 + return ring->obj_num - (ring->head - ring->tail); 246 + 247 + /* This is not a FIFO. C/D_CAN sends out the buffers 248 + * prioritized. The lowest buffer number wins. 249 + */ 250 + if (head < tail) 251 + return 0; 252 + 253 + return ring->obj_num - head; 241 254 } 242 255 243 256 #endif /* C_CAN_H */
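
The head < tail special case above encodes the hardware difference the comment describes: D_CAN drains its message objects as a FIFO, while C_CAN always transmits the lowest-numbered pending buffer first, so frames may only be queued behind the current head and the ring must drain completely before wrapping. A standalone model of the accounting, with 16 message objects and arbitrary head/tail counters::

    #include <stdio.h>

    #define OBJ_NUM 16u

    /* Model of c_can_get_tx_free() for both controller flavours. */
    static unsigned int tx_free(int is_d_can, unsigned int head, unsigned int tail)
    {
        unsigned int h = head & (OBJ_NUM - 1);
        unsigned int t = tail & (OBJ_NUM - 1);

        if (is_d_can)
            return OBJ_NUM - (head - tail);  /* plain FIFO accounting */
        if (h < t)
            return 0;                        /* wait for a full drain */
        return OBJ_NUM - h;
    }

    int main(void)
    {
        printf("D_CAN head=20 tail=10 -> %u free\n", tx_free(1, 20, 10));
        printf("C_CAN head=20 tail=10 -> %u free\n", tx_free(0, 20, 10));
        return 0;
    }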
+5 -6
drivers/net/can/c_can/c_can_main.c
··· 429 429 static bool c_can_tx_busy(const struct c_can_priv *priv, 430 430 const struct c_can_tx_ring *tx_ring) 431 431 { 432 - if (c_can_get_tx_free(tx_ring) > 0) 432 + if (c_can_get_tx_free(priv, tx_ring) > 0) 433 433 return false; 434 434 435 435 netif_stop_queue(priv->dev); ··· 437 437 /* Memory barrier before checking tx_free (head and tail) */ 438 438 smp_mb(); 439 439 440 - if (c_can_get_tx_free(tx_ring) == 0) { 440 + if (c_can_get_tx_free(priv, tx_ring) == 0) { 441 441 netdev_dbg(priv->dev, 442 442 "Stopping tx-queue (tx_head=0x%08x, tx_tail=0x%08x, len=%d).\n", 443 443 tx_ring->head, tx_ring->tail, ··· 465 465 466 466 idx = c_can_get_tx_head(tx_ring); 467 467 tx_ring->head++; 468 - if (c_can_get_tx_free(tx_ring) == 0) 468 + if (c_can_get_tx_free(priv, tx_ring) == 0) 469 469 netif_stop_queue(dev); 470 470 471 471 if (idx < c_can_get_tx_tail(tx_ring)) ··· 748 748 return; 749 749 750 750 tx_ring->tail += pkts; 751 - if (c_can_get_tx_free(tx_ring)) { 751 + if (c_can_get_tx_free(priv, tx_ring)) { 752 752 /* Make sure that anybody stopping the queue after 753 753 * this sees the new tx_ring->tail. 754 754 */ ··· 760 760 stats->tx_packets += pkts; 761 761 762 762 tail = c_can_get_tx_tail(tx_ring); 763 - 764 - if (tail == 0) { 763 + if (priv->type == BOSCH_D_CAN && tail == 0) { 765 764 u8 head = c_can_get_tx_head(tx_ring); 766 765 767 766 /* Start transmission for all cached messages */
+13 -6
drivers/net/dsa/mt7530.c
··· 506 506 static int 507 507 mt7531_pad_setup(struct dsa_switch *ds, phy_interface_t interface) 508 508 { 509 - struct mt7530_priv *priv = ds->priv; 509 + return 0; 510 + } 511 + 512 + static void 513 + mt7531_pll_setup(struct mt7530_priv *priv) 514 + { 510 515 u32 top_sig; 511 516 u32 hwstrap; 512 517 u32 xtal; 513 518 u32 val; 514 519 515 520 if (mt7531_dual_sgmii_supported(priv)) 516 - return 0; 521 + return; 517 522 518 523 val = mt7530_read(priv, MT7531_CREV); 519 524 top_sig = mt7530_read(priv, MT7531_TOP_SIG_SR); ··· 597 592 val |= EN_COREPLL; 598 593 mt7530_write(priv, MT7531_PLLGP_EN, val); 599 594 usleep_range(25, 35); 600 - 601 - return 0; 602 595 } 603 596 604 597 static void ··· 2329 2326 return -ENODEV; 2330 2327 } 2331 2328 2329 + /* all MACs must be forced link-down before sw reset */ 2330 + for (i = 0; i < MT7530_NUM_PORTS; i++) 2331 + mt7530_write(priv, MT7530_PMCR_P(i), MT7531_FORCE_LNK); 2332 + 2332 2333 /* Reset the switch through internal reset */ 2333 2334 mt7530_write(priv, MT7530_SYS_CTRL, 2334 2335 SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST | 2335 2336 SYS_CTRL_REG_RST); 2337 + 2338 + mt7531_pll_setup(priv); 2336 2339 2337 2340 if (mt7531_dual_sgmii_supported(priv)) { 2338 2341 priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII; ··· 2885 2876 break; 2886 2877 case 6: 2887 2878 interface = PHY_INTERFACE_MODE_2500BASEX; 2888 - 2889 - mt7531_pad_setup(ds, interface); 2890 2879 2891 2880 priv->p6_interface = interface; 2892 2881 break;
+4
drivers/net/ethernet/cadence/macb_main.c
··· 5131 5131 if (!(bp->wol & MACB_WOL_ENABLED)) { 5132 5132 rtnl_lock(); 5133 5133 phylink_stop(bp->phylink); 5134 + phy_exit(bp->sgmii_phy); 5134 5135 rtnl_unlock(); 5135 5136 spin_lock_irqsave(&bp->lock, flags); 5136 5137 macb_reset_hw(bp); ··· 5221 5220 macb_set_rx_mode(netdev); 5222 5221 macb_restore_features(bp); 5223 5222 rtnl_lock(); 5223 + if (!device_may_wakeup(&bp->dev->dev)) 5224 + phy_init(bp->sgmii_phy); 5225 + 5224 5226 phylink_start(bp->phylink); 5225 5227 rtnl_unlock(); 5226 5228
+19 -9
drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
··· 14 14 #include "cudbg_entity.h" 15 15 #include "cudbg_lib.h" 16 16 #include "cudbg_zlib.h" 17 + #include "cxgb4_tc_mqprio.h" 17 18 18 19 static const u32 t6_tp_pio_array[][IREG_NUM_ELEM] = { 19 20 {0x7e40, 0x7e44, 0x020, 28}, /* t6_tp_pio_regs_20_to_3b */ ··· 3459 3458 for (i = 0; i < utxq->ntxq; i++) 3460 3459 QDESC_GET_TXQ(&utxq->uldtxq[i].q, 3461 3460 cudbg_uld_txq_to_qtype(j), 3462 - out_unlock); 3461 + out_unlock_uld); 3463 3462 } 3464 3463 } 3465 3464 ··· 3476 3475 for (i = 0; i < urxq->nrxq; i++) 3477 3476 QDESC_GET_RXQ(&urxq->uldrxq[i].rspq, 3478 3477 cudbg_uld_rxq_to_qtype(j), 3479 - out_unlock); 3478 + out_unlock_uld); 3480 3479 } 3481 3480 3482 3481 /* ULD FLQ */ ··· 3488 3487 for (i = 0; i < urxq->nrxq; i++) 3489 3488 QDESC_GET_FLQ(&urxq->uldrxq[i].fl, 3490 3489 cudbg_uld_flq_to_qtype(j), 3491 - out_unlock); 3490 + out_unlock_uld); 3492 3491 } 3493 3492 3494 3493 /* ULD CIQ */ ··· 3501 3500 for (i = 0; i < urxq->nciq; i++) 3502 3501 QDESC_GET_RXQ(&urxq->uldrxq[base + i].rspq, 3503 3502 cudbg_uld_ciq_to_qtype(j), 3504 - out_unlock); 3503 + out_unlock_uld); 3505 3504 } 3506 3505 } 3506 + mutex_unlock(&uld_mutex); 3507 3507 3508 + if (!padap->tc_mqprio) 3509 + goto out; 3510 + 3511 + mutex_lock(&padap->tc_mqprio->mqprio_mutex); 3508 3512 /* ETHOFLD TXQ */ 3509 3513 if (s->eohw_txq) 3510 3514 for (i = 0; i < s->eoqsets; i++) 3511 3515 QDESC_GET_TXQ(&s->eohw_txq[i].q, 3512 - CUDBG_QTYPE_ETHOFLD_TXQ, out); 3516 + CUDBG_QTYPE_ETHOFLD_TXQ, out_unlock_mqprio); 3513 3517 3514 3518 /* ETHOFLD RXQ and FLQ */ 3515 3519 if (s->eohw_rxq) { 3516 3520 for (i = 0; i < s->eoqsets; i++) 3517 3521 QDESC_GET_RXQ(&s->eohw_rxq[i].rspq, 3518 - CUDBG_QTYPE_ETHOFLD_RXQ, out); 3522 + CUDBG_QTYPE_ETHOFLD_RXQ, out_unlock_mqprio); 3519 3523 3520 3524 for (i = 0; i < s->eoqsets; i++) 3521 3525 QDESC_GET_FLQ(&s->eohw_rxq[i].fl, 3522 - CUDBG_QTYPE_ETHOFLD_FLQ, out); 3526 + CUDBG_QTYPE_ETHOFLD_FLQ, out_unlock_mqprio); 3523 3527 } 3524 3528 3525 - out_unlock: 3526 - mutex_unlock(&uld_mutex); 3529 + out_unlock_mqprio: 3530 + mutex_unlock(&padap->tc_mqprio->mqprio_mutex); 3527 3531 3528 3532 out: 3529 3533 qdesc_info->qdesc_entry_size = sizeof(*qdesc_entry); ··· 3565 3559 #undef QDESC_GET 3566 3560 3567 3561 return rc; 3562 + 3563 + out_unlock_uld: 3564 + mutex_unlock(&uld_mutex); 3565 + goto out; 3568 3566 } 3569 3567 3570 3568 int cudbg_collect_flash(struct cudbg_init *pdbg_init,
+1 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1467 1467 bool wd; 1468 1468 1469 1469 if (tx_ring->xsk_pool) 1470 - wd = ice_xmit_zc(tx_ring, ICE_DESC_UNUSED(tx_ring), budget); 1470 + wd = ice_xmit_zc(tx_ring); 1471 1471 else if (ice_ring_is_xdp(tx_ring)) 1472 1472 wd = true; 1473 1473 else
+69 -96
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 392 392 goto failure; 393 393 } 394 394 395 - if (!is_power_of_2(vsi->rx_rings[qid]->count) || 396 - !is_power_of_2(vsi->tx_rings[qid]->count)) { 397 - netdev_err(vsi->netdev, "Please align ring sizes to power of 2\n"); 398 - pool_failure = -EINVAL; 399 - goto failure; 400 - } 401 - 402 395 if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi); 403 396 404 397 if (if_running) { ··· 527 534 bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count) 528 535 { 529 536 u16 rx_thresh = ICE_RING_QUARTER(rx_ring); 530 - u16 batched, leftover, i, tail_bumps; 537 + u16 leftover, i, tail_bumps; 531 538 532 - batched = ALIGN_DOWN(count, rx_thresh); 533 - tail_bumps = batched / rx_thresh; 534 - leftover = count & (rx_thresh - 1); 539 + tail_bumps = count / rx_thresh; 540 + leftover = count - (tail_bumps * rx_thresh); 535 541 536 542 for (i = 0; i < tail_bumps; i++) 537 543 if (!__ice_alloc_rx_bufs_zc(rx_ring, rx_thresh)) ··· 780 788 } 781 789 782 790 /** 783 - * ice_clean_xdp_irq_zc - Reclaim resources after transmit completes on XDP ring 784 - * @xdp_ring: XDP ring to clean 785 - * @napi_budget: amount of descriptors that NAPI allows us to clean 786 - * 787 - * Returns count of cleaned descriptors 791 + * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ 792 + * @xdp_ring: XDP Tx ring 788 793 */ 789 - static u16 ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring, int napi_budget) 794 + static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring) 790 795 { 791 - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 792 - int budget = napi_budget / tx_thresh; 793 - u16 next_dd = xdp_ring->next_dd; 794 - u16 ntc, cleared_dds = 0; 796 + u16 ntc = xdp_ring->next_to_clean; 797 + struct ice_tx_desc *tx_desc; 798 + u16 cnt = xdp_ring->count; 799 + struct ice_tx_buf *tx_buf; 800 + u16 xsk_frames = 0; 801 + u16 last_rs; 802 + int i; 795 803 796 - do { 797 - struct ice_tx_desc *next_dd_desc; 798 - u16 desc_cnt = xdp_ring->count; 799 - struct ice_tx_buf *tx_buf; 800 - u32 xsk_frames; 801 - u16 i; 804 + last_rs = xdp_ring->next_to_use ? 
xdp_ring->next_to_use - 1 : cnt - 1; 805 + tx_desc = ICE_TX_DESC(xdp_ring, last_rs); 806 + if ((tx_desc->cmd_type_offset_bsz & 807 + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) { 808 + if (last_rs >= ntc) 809 + xsk_frames = last_rs - ntc + 1; 810 + else 811 + xsk_frames = last_rs + cnt - ntc + 1; 812 + } 802 813 803 - next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd); 804 - if (!(next_dd_desc->cmd_type_offset_bsz & 805 - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) 806 - break; 814 + if (!xsk_frames) 815 + return; 807 816 808 - cleared_dds++; 809 - xsk_frames = 0; 810 - if (likely(!xdp_ring->xdp_tx_active)) { 811 - xsk_frames = tx_thresh; 812 - goto skip; 817 + if (likely(!xdp_ring->xdp_tx_active)) 818 + goto skip; 819 + 820 + ntc = xdp_ring->next_to_clean; 821 + for (i = 0; i < xsk_frames; i++) { 822 + tx_buf = &xdp_ring->tx_buf[ntc]; 823 + 824 + if (tx_buf->raw_buf) { 825 + ice_clean_xdp_tx_buf(xdp_ring, tx_buf); 826 + tx_buf->raw_buf = NULL; 827 + } else { 828 + xsk_frames++; 813 829 } 814 830 815 - ntc = xdp_ring->next_to_clean; 816 - 817 - for (i = 0; i < tx_thresh; i++) { 818 - tx_buf = &xdp_ring->tx_buf[ntc]; 819 - 820 - if (tx_buf->raw_buf) { 821 - ice_clean_xdp_tx_buf(xdp_ring, tx_buf); 822 - tx_buf->raw_buf = NULL; 823 - } else { 824 - xsk_frames++; 825 - } 826 - 827 - ntc++; 828 - if (ntc >= xdp_ring->count) 829 - ntc = 0; 830 - } 831 + ntc++; 832 + if (ntc >= xdp_ring->count) 833 + ntc = 0; 834 + } 831 835 skip: 832 - xdp_ring->next_to_clean += tx_thresh; 833 - if (xdp_ring->next_to_clean >= desc_cnt) 834 - xdp_ring->next_to_clean -= desc_cnt; 835 - if (xsk_frames) 836 - xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); 837 - next_dd_desc->cmd_type_offset_bsz = 0; 838 - next_dd = next_dd + tx_thresh; 839 - if (next_dd >= desc_cnt) 840 - next_dd = tx_thresh - 1; 841 - } while (--budget); 842 - 843 - xdp_ring->next_dd = next_dd; 844 - 845 - return cleared_dds * tx_thresh; 836 + tx_desc->cmd_type_offset_bsz = 0; 837 + xdp_ring->next_to_clean += xsk_frames; 838 + if (xdp_ring->next_to_clean >= cnt) 839 + xdp_ring->next_to_clean -= cnt; 840 + if (xsk_frames) 841 + xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); 846 842 } 847 843 848 844 /** ··· 865 885 static void ice_xmit_pkt_batch(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs, 866 886 unsigned int *total_bytes) 867 887 { 868 - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 869 888 u16 ntu = xdp_ring->next_to_use; 870 889 struct ice_tx_desc *tx_desc; 871 890 u32 i; ··· 884 905 } 885 906 886 907 xdp_ring->next_to_use = ntu; 887 - 888 - if (xdp_ring->next_to_use > xdp_ring->next_rs) { 889 - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 890 - tx_desc->cmd_type_offset_bsz |= 891 - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 892 - xdp_ring->next_rs += tx_thresh; 893 - } 894 908 } 895 909 896 910 /** ··· 896 924 static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *descs, 897 925 u32 nb_pkts, unsigned int *total_bytes) 898 926 { 899 - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 900 927 u32 batched, leftover, i; 901 928 902 929 batched = ALIGN_DOWN(nb_pkts, PKTS_PER_BATCH); ··· 904 933 ice_xmit_pkt_batch(xdp_ring, &descs[i], total_bytes); 905 934 for (; i < batched + leftover; i++) 906 935 ice_xmit_pkt(xdp_ring, &descs[i], total_bytes); 936 + } 907 937 908 - if (xdp_ring->next_to_use > xdp_ring->next_rs) { 909 - struct ice_tx_desc *tx_desc; 938 + /** 939 + * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU) 940 + * @xdp_ring: XDP ring to produce the HW Tx descriptors 
on 941 + */ 942 + static void ice_set_rs_bit(struct ice_tx_ring *xdp_ring) 943 + { 944 + u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1; 945 + struct ice_tx_desc *tx_desc; 910 946 911 - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 912 - tx_desc->cmd_type_offset_bsz |= 913 - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 914 - xdp_ring->next_rs += tx_thresh; 915 - } 947 + tx_desc = ICE_TX_DESC(xdp_ring, ntu); 948 + tx_desc->cmd_type_offset_bsz |= 949 + cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 916 950 } 917 951 918 952 /** 919 953 * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring 920 954 * @xdp_ring: XDP ring to produce the HW Tx descriptors on 921 - * @budget: number of free descriptors on HW Tx ring that can be used 922 - * @napi_budget: amount of descriptors that NAPI allows us to clean 923 955 * 924 956 * Returns true if there is no more work that needs to be done, false otherwise 925 957 */ 926 - bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget) 958 + bool ice_xmit_zc(struct ice_tx_ring *xdp_ring) 927 959 { 928 960 struct xdp_desc *descs = xdp_ring->xsk_pool->tx_descs; 929 - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); 930 961 u32 nb_pkts, nb_processed = 0; 931 962 unsigned int total_bytes = 0; 963 + int budget; 932 964 933 - if (budget < tx_thresh) 934 - budget += ice_clean_xdp_irq_zc(xdp_ring, napi_budget); 965 + ice_clean_xdp_irq_zc(xdp_ring); 966 + 967 + budget = ICE_DESC_UNUSED(xdp_ring); 968 + budget = min_t(u16, budget, ICE_RING_QUARTER(xdp_ring)); 935 969 936 970 nb_pkts = xsk_tx_peek_release_desc_batch(xdp_ring->xsk_pool, budget); 937 971 if (!nb_pkts) 938 972 return true; 939 973 940 974 if (xdp_ring->next_to_use + nb_pkts >= xdp_ring->count) { 941 - struct ice_tx_desc *tx_desc; 942 - 943 975 nb_processed = xdp_ring->count - xdp_ring->next_to_use; 944 976 ice_fill_tx_hw_ring(xdp_ring, descs, nb_processed, &total_bytes); 945 - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); 946 - tx_desc->cmd_type_offset_bsz |= 947 - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); 948 - xdp_ring->next_rs = tx_thresh - 1; 949 977 xdp_ring->next_to_use = 0; 950 978 } 951 979 952 980 ice_fill_tx_hw_ring(xdp_ring, &descs[nb_processed], nb_pkts - nb_processed, 953 981 &total_bytes); 954 982 983 + ice_set_rs_bit(xdp_ring); 955 984 ice_xdp_ring_update_tail(xdp_ring); 956 985 ice_update_tx_ring_stats(xdp_ring, nb_pkts, total_bytes); 957 986 ··· 1029 1058 */ 1030 1059 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring) 1031 1060 { 1032 - u16 count_mask = rx_ring->count - 1; 1033 1061 u16 ntc = rx_ring->next_to_clean; 1034 1062 u16 ntu = rx_ring->next_to_use; 1035 1063 1036 - for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) { 1064 + while (ntc != ntu) { 1037 1065 struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc); 1038 1066 1039 1067 xsk_buff_free(xdp); 1068 + ntc++; 1069 + if (ntc >= rx_ring->count) 1070 + ntc = 0; 1040 1071 } 1041 1072 } 1042 1073
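
With the per-quarter RS stamping gone, completion cleaning now reads the descriptor one behind next_to_use (the last one that received the RS bit) and, if hardware marked it done, counts every descriptor from next_to_clean up to it, minding the ring wrap. The counting rule in isolation, with hypothetical ring positions::

    #include <stdio.h>

    /* Completed-frame count between next_to_clean (ntc) and the last
     * RS-stamped descriptor (last_rs) on a ring of cnt entries. */
    static unsigned int frames_done(unsigned int cnt, unsigned int last_rs,
                                    unsigned int ntc)
    {
        if (last_rs >= ntc)
            return last_rs - ntc + 1;
        return last_rs + cnt - ntc + 1;  /* the span wraps past the end */
    }

    int main(void)
    {
        printf("no wrap: %u\n", frames_done(512, 300, 100)); /* 201 */
        printf("wrap   : %u\n", frames_done(512, 40, 480));  /* 73 */
        return 0;
    }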
+2 -5
drivers/net/ethernet/intel/ice/ice_xsk.h
··· 26 26 bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi); 27 27 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring); 28 28 void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring); 29 - bool ice_xmit_zc(struct ice_tx_ring *xdp_ring, u32 budget, int napi_budget); 29 + bool ice_xmit_zc(struct ice_tx_ring *xdp_ring); 30 30 int ice_realloc_zc_buf(struct ice_vsi *vsi, bool zc); 31 31 #else 32 - static inline bool 33 - ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring, 34 - u32 __always_unused budget, 35 - int __always_unused napi_budget) 32 + static inline bool ice_xmit_zc(struct ice_tx_ring __always_unused *xdp_ring) 36 33 { 37 34 return false; 38 35 }
+2 -2
drivers/net/ethernet/marvell/mvpp2/mvpp2_debugfs.c
··· 700 700 701 701 void mvpp2_dbgfs_init(struct mvpp2 *priv, const char *name) 702 702 { 703 - static struct dentry *mvpp2_root; 704 - struct dentry *mvpp2_dir; 703 + struct dentry *mvpp2_dir, *mvpp2_root; 705 704 int ret, i; 706 705 706 + mvpp2_root = debugfs_lookup(MVPP2_DRIVER_NAME, NULL); 707 707 if (!mvpp2_root) 708 708 mvpp2_root = debugfs_create_dir(MVPP2_DRIVER_NAME, NULL); 709 709
+2 -2
drivers/net/ethernet/mediatek/mtk_eth_soc.h
··· 315 315 #define MTK_RXD5_PPE_CPU_REASON GENMASK(22, 18) 316 316 #define MTK_RXD5_SRC_PORT GENMASK(29, 26) 317 317 318 - #define RX_DMA_GET_SPORT(x) (((x) >> 19) & 0xf) 319 - #define RX_DMA_GET_SPORT_V2(x) (((x) >> 26) & 0x7) 318 + #define RX_DMA_GET_SPORT(x) (((x) >> 19) & 0x7) 319 + #define RX_DMA_GET_SPORT_V2(x) (((x) >> 26) & 0xf) 320 320 321 321 /* PDMA V2 descriptor rxd3 */ 322 322 #define RX_DMA_VTAG_V2 BIT(0)
+2 -2
drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_mdio.c
··· 246 246 } 247 247 248 248 priv->clk_io = devm_ioremap(dev, res->start, resource_size(res)); 249 - if (IS_ERR(priv->clk_io)) 250 - return PTR_ERR(priv->clk_io); 249 + if (!priv->clk_io) 250 + return -ENOMEM; 251 251 252 252 mlxbf_gige_mdio_cfg(priv); 253 253
+7
drivers/net/ethernet/mscc/ocelot.c
··· 289 289 if (!(vlan->portmask & BIT(port))) 290 290 continue; 291 291 292 + /* Ignore the VLAN added by ocelot_add_vlan_unaware_pvid(), 293 + * because this is never active in hardware at the same time as 294 + * the bridge VLANs, which only matter in VLAN-aware mode. 295 + */ 296 + if (vlan->vid >= OCELOT_RSV_VLAN_RANGE_START) 297 + continue; 298 + 292 299 if (vlan->untagged & BIT(port)) 293 300 num_untagged++; 294 301 }
+1 -1
drivers/net/ethernet/sfc/ef10.c
··· 4213 4213 .ev_test_generate = efx_ef10_ev_test_generate, 4214 4214 .filter_table_probe = efx_ef10_filter_table_probe, 4215 4215 .filter_table_restore = efx_mcdi_filter_table_restore, 4216 - .filter_table_remove = efx_mcdi_filter_table_remove, 4216 + .filter_table_remove = efx_ef10_filter_table_remove, 4217 4217 .filter_update_rx_scatter = efx_mcdi_update_rx_scatter, 4218 4218 .filter_insert = efx_mcdi_filter_insert, 4219 4219 .filter_remove_safe = efx_mcdi_filter_remove_safe,
+13 -10
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 3801 3801 3802 3802 stmmac_reset_queues_param(priv); 3803 3803 3804 + if (priv->plat->serdes_powerup) { 3805 + ret = priv->plat->serdes_powerup(dev, priv->plat->bsp_priv); 3806 + if (ret < 0) { 3807 + netdev_err(priv->dev, "%s: Serdes powerup failed\n", 3808 + __func__); 3809 + goto init_error; 3810 + } 3811 + } 3812 + 3804 3813 ret = stmmac_hw_setup(dev, true); 3805 3814 if (ret < 0) { 3806 3815 netdev_err(priv->dev, "%s: Hw setup failed\n", __func__); ··· 3912 3903 3913 3904 /* Disable the MAC Rx/Tx */ 3914 3905 stmmac_mac_set(priv, priv->ioaddr, false); 3906 + 3907 + /* Powerdown Serdes if there is */ 3908 + if (priv->plat->serdes_powerdown) 3909 + priv->plat->serdes_powerdown(dev, priv->plat->bsp_priv); 3915 3910 3916 3911 netif_carrier_off(dev); 3917 3912 ··· 7288 7275 goto error_netdev_register; 7289 7276 } 7290 7277 7291 - if (priv->plat->serdes_powerup) { 7292 - ret = priv->plat->serdes_powerup(ndev, 7293 - priv->plat->bsp_priv); 7294 - 7295 - if (ret < 0) 7296 - goto error_serdes_powerup; 7297 - } 7298 - 7299 7278 #ifdef CONFIG_DEBUG_FS 7300 7279 stmmac_init_fs(ndev); 7301 7280 #endif ··· 7302 7297 7303 7298 return ret; 7304 7299 7305 - error_serdes_powerup: 7306 - unregister_netdev(ndev); 7307 7300 error_netdev_register: 7308 7301 phylink_destroy(priv->phylink); 7309 7302 error_xpcs_setup:
+1
drivers/net/hippi/rrunner.c
··· 213 213 pci_iounmap(pdev, rrpriv->regs); 214 214 if (pdev) 215 215 pci_release_regions(pdev); 216 + pci_disable_device(pdev); 216 217 out2: 217 218 free_netdev(dev); 218 219 out3:
+6 -4
drivers/net/phy/phy_device.c
··· 316 316 317 317 phydev->suspended_by_mdio_bus = 0; 318 318 319 - /* If we manged to get here with the PHY state machine in a state neither 320 - * PHY_HALTED nor PHY_READY this is an indication that something went wrong 321 - * and we should most likely be using MAC managed PM and we are not. 319 + /* If we managed to get here with the PHY state machine in a state 320 + * neither PHY_HALTED, PHY_READY nor PHY_UP, this is an indication 321 + * that something went wrong and we should most likely be using 322 + * MAC managed PM, but we are not. 322 323 */ 323 - WARN_ON(phydev->state != PHY_HALTED && phydev->state != PHY_READY); 324 + WARN_ON(phydev->state != PHY_HALTED && phydev->state != PHY_READY && 325 + phydev->state != PHY_UP); 324 326 325 327 ret = phy_init_hw(phydev); 326 328 if (ret < 0)
+6 -3
drivers/net/tun.c
··· 2828 2828 rcu_assign_pointer(tfile->tun, tun); 2829 2829 } 2830 2830 2831 - netif_carrier_on(tun->dev); 2831 + if (ifr->ifr_flags & IFF_NO_CARRIER) 2832 + netif_carrier_off(tun->dev); 2833 + else 2834 + netif_carrier_on(tun->dev); 2832 2835 2833 2836 /* Make sure persistent devices do not get stuck in 2834 2837 * xoff state. ··· 3059 3056 * This is needed because we never checked for invalid flags on 3060 3057 * TUNSETIFF. 3061 3058 */ 3062 - return put_user(IFF_TUN | IFF_TAP | TUN_FEATURES, 3063 - (unsigned int __user*)argp); 3059 + return put_user(IFF_TUN | IFF_TAP | IFF_NO_CARRIER | 3060 + TUN_FEATURES, (unsigned int __user*)argp); 3064 3061 } else if (cmd == TUNSETQUEUE) { 3065 3062 return tun_set_queue(file, &ifr); 3066 3063 } else if (cmd == SIOCGSKNS) {
+1
drivers/net/usb/qmi_wwan.c
··· 1402 1402 {QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */ 1403 1403 {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */ 1404 1404 {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */ 1405 + {QMI_FIXED_INTF(0x413c, 0x81c2, 8)}, /* Dell Wireless 5811e */ 1405 1406 {QMI_FIXED_INTF(0x413c, 0x81cc, 8)}, /* Dell Wireless 5816e */ 1406 1407 {QMI_FIXED_INTF(0x413c, 0x81d7, 0)}, /* Dell Wireless 5821e */ 1407 1408 {QMI_FIXED_INTF(0x413c, 0x81d7, 1)}, /* Dell Wireless 5821e preproduction config */
+6 -1
drivers/net/usb/usbnet.c
··· 1598 1598 struct usbnet *dev; 1599 1599 struct usb_device *xdev; 1600 1600 struct net_device *net; 1601 + struct urb *urb; 1601 1602 1602 1603 dev = usb_get_intfdata(intf); 1603 1604 usb_set_intfdata(intf, NULL); ··· 1615 1614 net = dev->net; 1616 1615 unregister_netdev (net); 1617 1616 1618 - usb_scuttle_anchored_urbs(&dev->deferred); 1617 + while ((urb = usb_get_from_anchor(&dev->deferred))) { 1618 + dev_kfree_skb(urb->context); 1619 + kfree(urb->sg); 1620 + usb_free_urb(urb); 1621 + } 1619 1622 1620 1623 if (dev->driver_info->unbind) 1621 1624 dev->driver_info->unbind(dev, intf);
-2
drivers/nvdimm/namespace_devs.c
··· 1712 1712 res->flags = IORESOURCE_MEM; 1713 1713 1714 1714 for (i = 0; i < nd_region->ndr_mappings; i++) { 1715 - uuid_t uuid; 1716 - 1717 1715 nsl_get_uuid(ndd, nd_label, &uuid); 1718 1716 if (has_uuid_at_pos(nd_region, &uuid, cookie, i)) 1719 1717 continue;
+3 -3
drivers/nvdimm/pmem.c
··· 45 45 return to_nd_region(to_dev(pmem)->parent); 46 46 } 47 47 48 - static phys_addr_t to_phys(struct pmem_device *pmem, phys_addr_t offset) 48 + static phys_addr_t pmem_to_phys(struct pmem_device *pmem, phys_addr_t offset) 49 49 { 50 50 return pmem->phys_addr + offset; 51 51 } ··· 63 63 static void pmem_mkpage_present(struct pmem_device *pmem, phys_addr_t offset, 64 64 unsigned int len) 65 65 { 66 - phys_addr_t phys = to_phys(pmem, offset); 66 + phys_addr_t phys = pmem_to_phys(pmem, offset); 67 67 unsigned long pfn_start, pfn_end, pfn; 68 68 69 69 /* only pmem in the linear map supports HWPoison */ ··· 97 97 static long __pmem_clear_poison(struct pmem_device *pmem, 98 98 phys_addr_t offset, unsigned int len) 99 99 { 100 - phys_addr_t phys = to_phys(pmem, offset); 100 + phys_addr_t phys = pmem_to_phys(pmem, offset); 101 101 long cleared = nvdimm_clear_poison(to_dev(pmem), phys, len); 102 102 103 103 if (cleared > 0) {
+1 -1
drivers/opp/core.c
··· 873 873 } 874 874 } 875 875 876 - return ret; 876 + return 0; 877 877 } 878 878 EXPORT_SYMBOL_GPL(dev_pm_opp_config_clks_simple); 879 879
+1 -1
drivers/perf/arm-cmn.c
··· 36 36 #define CMN_CI_CHILD_COUNT GENMASK_ULL(15, 0) 37 37 #define CMN_CI_CHILD_PTR_OFFSET GENMASK_ULL(31, 16) 38 38 39 - #define CMN_CHILD_NODE_ADDR GENMASK(27, 0) 39 + #define CMN_CHILD_NODE_ADDR GENMASK(29, 0) 40 40 #define CMN_CHILD_NODE_EXTERNAL BIT(31) 41 41 42 42 #define CMN_MAX_DIMENSION 12
+17 -70
drivers/phy/marvell/phy-mvebu-a3700-comphy.c
··· 274 274 int submode; 275 275 bool invert_tx; 276 276 bool invert_rx; 277 - bool needs_reset; 278 277 }; 279 278 280 279 struct gbe_phy_init_data_fix { ··· 1096 1097 0x0, PU_PLL_BIT | PU_RX_BIT | PU_TX_BIT); 1097 1098 } 1098 1099 1099 - static int mvebu_a3700_comphy_reset(struct phy *phy) 1100 + static void mvebu_a3700_comphy_usb3_power_off(struct mvebu_a3700_comphy_lane *lane) 1100 1101 { 1101 - struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy); 1102 - u16 mask, data; 1103 - 1104 - dev_dbg(lane->dev, "resetting lane %d\n", lane->id); 1105 - 1106 - /* COMPHY reset for internal logic */ 1107 - comphy_lane_reg_set(lane, COMPHY_SFT_RESET, 1108 - SFT_RST_NO_REG, SFT_RST_NO_REG); 1109 - 1110 - /* COMPHY register reset (cleared automatically) */ 1111 - comphy_lane_reg_set(lane, COMPHY_SFT_RESET, SFT_RST, SFT_RST); 1112 - 1113 - /* PIPE soft and register reset */ 1114 - data = PIPE_SOFT_RESET | PIPE_REG_RESET; 1115 - mask = data; 1116 - comphy_lane_reg_set(lane, COMPHY_PIPE_RST_CLK_CTRL, data, mask); 1117 - 1118 - /* Release PIPE register reset */ 1119 - comphy_lane_reg_set(lane, COMPHY_PIPE_RST_CLK_CTRL, 1120 - 0x0, PIPE_REG_RESET); 1121 - 1122 - /* Reset SB configuration register (only for lanes 0 and 1) */ 1123 - if (lane->id == 0 || lane->id == 1) { 1124 - u32 mask, data; 1125 - 1126 - data = PIN_RESET_CORE_BIT | PIN_RESET_COMPHY_BIT | 1127 - PIN_PU_PLL_BIT | PIN_PU_RX_BIT | PIN_PU_TX_BIT; 1128 - mask = data | PIN_PU_IVREF_BIT | PIN_TX_IDLE_BIT; 1129 - comphy_periph_reg_set(lane, COMPHY_PHY_CFG1, data, mask); 1130 - } 1131 - 1132 - return 0; 1102 + /* 1103 + * The USB3 MAC sets the USB3 PHY to low state, so we do not 1104 + * need to power off USB3 PHY again. 1105 + */ 1133 1106 } 1134 1107 1135 1108 static bool mvebu_a3700_comphy_check_mode(int lane, ··· 1142 1171 (lane->mode != mode || lane->submode != submode)) 1143 1172 return -EBUSY; 1144 1173 1145 - /* If changing mode, ensure reset is called */ 1146 - if (lane->mode != PHY_MODE_INVALID && lane->mode != mode) 1147 - lane->needs_reset = true; 1148 - 1149 1174 /* Just remember the mode, ->power_on() will do the real setup */ 1150 1175 lane->mode = mode; 1151 1176 lane->submode = submode; ··· 1152 1185 static int mvebu_a3700_comphy_power_on(struct phy *phy) 1153 1186 { 1154 1187 struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy); 1155 - int ret; 1156 1188 1157 1189 if (!mvebu_a3700_comphy_check_mode(lane->id, lane->mode, 1158 1190 lane->submode)) { 1159 1191 dev_err(lane->dev, "invalid COMPHY mode\n"); 1160 1192 return -EINVAL; 1161 - } 1162 - 1163 - if (lane->needs_reset) { 1164 - ret = mvebu_a3700_comphy_reset(phy); 1165 - if (ret) 1166 - return ret; 1167 - 1168 - lane->needs_reset = false; 1169 1193 } 1170 1194 1171 1195 switch (lane->mode) { ··· 1182 1224 { 1183 1225 struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy); 1184 1226 1185 - switch (lane->mode) { 1186 - case PHY_MODE_USB_HOST_SS: 1187 - /* 1188 - * The USB3 MAC sets the USB3 PHY to low state, so we do not 1189 - * need to power off USB3 PHY again. 
1190 - */ 1191 - break; 1192 - 1193 - case PHY_MODE_SATA: 1194 - mvebu_a3700_comphy_sata_power_off(lane); 1195 - break; 1196 - 1197 - case PHY_MODE_ETHERNET: 1227 + switch (lane->id) { 1228 + case 0: 1229 + mvebu_a3700_comphy_usb3_power_off(lane); 1198 1230 mvebu_a3700_comphy_ethernet_power_off(lane); 1199 - break; 1200 - 1201 - case PHY_MODE_PCIE: 1231 + return 0; 1232 + case 1: 1202 1233 mvebu_a3700_comphy_pcie_power_off(lane); 1203 - break; 1204 - 1234 + mvebu_a3700_comphy_ethernet_power_off(lane); 1235 + return 0; 1236 + case 2: 1237 + mvebu_a3700_comphy_usb3_power_off(lane); 1238 + mvebu_a3700_comphy_sata_power_off(lane); 1239 + return 0; 1205 1240 default: 1206 1241 dev_err(lane->dev, "invalid COMPHY mode\n"); 1207 1242 return -EINVAL; 1208 1243 } 1209 - 1210 - return 0; 1211 1244 } 1212 1245 1213 1246 static const struct phy_ops mvebu_a3700_comphy_ops = { 1214 1247 .power_on = mvebu_a3700_comphy_power_on, 1215 1248 .power_off = mvebu_a3700_comphy_power_off, 1216 - .reset = mvebu_a3700_comphy_reset, 1217 1249 .set_mode = mvebu_a3700_comphy_set_mode, 1218 1250 .owner = THIS_MODULE, 1219 1251 }; ··· 1341 1393 * To avoid relying on the bootloader/firmware configuration, 1342 1394 * power off all comphys. 1343 1395 */ 1344 - mvebu_a3700_comphy_reset(phy); 1345 - lane->needs_reset = false; 1396 + mvebu_a3700_comphy_power_off(phy); 1346 1397 } 1347 1398 1348 1399 provider = devm_of_phy_provider_register(&pdev->dev,
+1
drivers/reset/reset-imx7.c
··· 329 329 break; 330 330 331 331 case IMX8MP_RESET_PCIE_CTRL_APPS_EN: 332 + case IMX8MP_RESET_PCIEPHY_PERST: 332 333 value = assert ? 0 : bit; 333 334 break; 334 335 }
+17 -5
drivers/reset/reset-microchip-sparx5.c
··· 33 33 .reg_stride = 4, 34 34 }; 35 35 36 - static int sparx5_switch_reset(struct reset_controller_dev *rcdev, 37 - unsigned long id) 36 + static int sparx5_switch_reset(struct mchp_reset_context *ctx) 38 37 { 39 - struct mchp_reset_context *ctx = 40 - container_of(rcdev, struct mchp_reset_context, rcdev); 41 38 u32 val; 42 39 43 40 /* Make sure the core is PROTECTED from reset */ ··· 51 54 1, 100); 52 55 } 53 56 57 + static int sparx5_reset_noop(struct reset_controller_dev *rcdev, 58 + unsigned long id) 59 + { 60 + return 0; 61 + } 62 + 54 63 static const struct reset_control_ops sparx5_reset_ops = { 55 - .reset = sparx5_switch_reset, 64 + .reset = sparx5_reset_noop, 56 65 }; 57 66 58 67 static int mchp_sparx5_map_syscon(struct platform_device *pdev, char *name, ··· 125 122 ctx->rcdev.of_node = dn; 126 123 ctx->props = device_get_match_data(&pdev->dev); 127 124 125 + /* Issue the reset very early, our actual reset callback is a noop. */ 126 + err = sparx5_switch_reset(ctx); 127 + if (err) 128 + return err; 129 + 128 130 return devm_reset_controller_register(&pdev->dev, &ctx->rcdev); 129 131 } 130 132 ··· 171 163 return platform_driver_register(&mchp_sparx5_reset_driver); 172 164 } 173 165 166 + /* 167 + * Because this is a global reset, keep this postcore_initcall() to issue the 168 + * reset as early as possible during the kernel startup. 169 + */ 174 170 postcore_initcall(mchp_sparx5_reset_init); 175 171 176 172 MODULE_DESCRIPTION("Microchip Sparx5 switch reset driver");
+1 -1
drivers/reset/reset-npcm.c
··· 291 291 iprst2 |= ipsrst2_bits; 292 292 iprst3 |= (ipsrst3_bits | NPCM_IPSRST3_USBPHY1 | 293 293 NPCM_IPSRST3_USBPHY2); 294 - iprst2 |= ipsrst4_bits; 294 + iprst4 |= ipsrst4_bits; 295 295 296 296 writel(iprst1, rc->base + NPCM_IPSRST1); 297 297 writel(iprst2, rc->base + NPCM_IPSRST2);
+7 -2
drivers/s390/block/dasd_alias.c
··· 675 675 struct dasd_device *dasd_alias_get_start_dev(struct dasd_device *base_device) 676 676 { 677 677 struct dasd_eckd_private *alias_priv, *private = base_device->private; 678 - struct alias_pav_group *group = private->pavgroup; 679 678 struct alias_lcu *lcu = private->lcu; 680 679 struct dasd_device *alias_device; 680 + struct alias_pav_group *group; 681 681 unsigned long flags; 682 682 683 - if (!group || !lcu) 683 + if (!lcu) 684 684 return NULL; 685 685 if (lcu->pav == NO_PAV || 686 686 lcu->flags & (NEED_UAC_UPDATE | UPDATE_PENDING)) ··· 697 697 } 698 698 699 699 spin_lock_irqsave(&lcu->lock, flags); 700 + group = private->pavgroup; 701 + if (!group) { 702 + spin_unlock_irqrestore(&lcu->lock, flags); 703 + return NULL; 704 + } 700 705 alias_device = group->next; 701 706 if (!alias_device) { 702 707 if (list_empty(&group->aliaslist)) {
+30
drivers/s390/crypto/vfio_ap_ops.c
··· 984 984 goto done; 985 985 } 986 986 987 + if (test_bit_inv(apid, matrix_mdev->matrix.apm)) { 988 + ret = count; 989 + goto done; 990 + } 991 + 987 992 set_bit_inv(apid, matrix_mdev->matrix.apm); 988 993 989 994 ret = vfio_ap_mdev_validate_masks(matrix_mdev); ··· 1114 1109 goto done; 1115 1110 } 1116 1111 1112 + if (!test_bit_inv(apid, matrix_mdev->matrix.apm)) { 1113 + ret = count; 1114 + goto done; 1115 + } 1116 + 1117 1117 clear_bit_inv((unsigned long)apid, matrix_mdev->matrix.apm); 1118 1118 vfio_ap_mdev_hot_unplug_adapter(matrix_mdev, apid); 1119 1119 ret = count; ··· 1190 1180 1191 1181 if (apqi > matrix_mdev->matrix.aqm_max) { 1192 1182 ret = -ENODEV; 1183 + goto done; 1184 + } 1185 + 1186 + if (test_bit_inv(apqi, matrix_mdev->matrix.aqm)) { 1187 + ret = count; 1193 1188 goto done; 1194 1189 } 1195 1190 ··· 1301 1286 goto done; 1302 1287 } 1303 1288 1289 + if (!test_bit_inv(apqi, matrix_mdev->matrix.aqm)) { 1290 + ret = count; 1291 + goto done; 1292 + } 1293 + 1304 1294 clear_bit_inv((unsigned long)apqi, matrix_mdev->matrix.aqm); 1305 1295 vfio_ap_mdev_hot_unplug_domain(matrix_mdev, apqi); 1306 1296 ret = count; ··· 1346 1326 1347 1327 if (id > matrix_mdev->matrix.adm_max) { 1348 1328 ret = -ENODEV; 1329 + goto done; 1330 + } 1331 + 1332 + if (test_bit_inv(id, matrix_mdev->matrix.adm)) { 1333 + ret = count; 1349 1334 goto done; 1350 1335 } 1351 1336 ··· 1400 1375 1401 1376 if (domid > matrix_mdev->matrix.adm_max) { 1402 1377 ret = -ENODEV; 1378 + goto done; 1379 + } 1380 + 1381 + if (!test_bit_inv(domid, matrix_mdev->matrix.adm)) { 1382 + ret = count; 1403 1383 goto done; 1404 1384 } 1405 1385
+2 -1
drivers/scsi/libsas/sas_scsi_host.c
··· 872 872 struct domain_device *dev = sdev_to_domain_dev(sdev); 873 873 874 874 if (dev_is_sata(dev)) 875 - return __ata_change_queue_depth(dev->sata_dev.ap, sdev, depth); 875 + return ata_change_queue_depth(dev->sata_dev.ap, 876 + sas_to_ata_dev(dev), sdev, depth); 876 877 877 878 if (!sdev->tagged_supported) 878 879 depth = 1;
+1 -1
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 2993 2993 2994 2994 if (ioc->is_mcpu_endpoint || 2995 2995 sizeof(dma_addr_t) == 4 || ioc->use_32bit_dma || 2996 - dma_get_required_mask(&pdev->dev) <= 32) 2996 + dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32)) 2997 2997 ioc->dma_mask = 32; 2998 2998 /* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */ 2999 2999 else if (ioc->hba_mpi_version_belonged > MPI2_VERSION)
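dma_get_required_mask() returns a mask such as 0xffffffff, not a width in bits, so comparing it against the integer 32 made the test meaningless. A hedged sketch of the corrected decision (the helper name is illustrative):

    #include <linux/dma-mapping.h>

    static int demo_pick_dma_width(struct device *dev)
    {
            /* DMA_BIT_MASK(32) == 0xffffffff: everything the platform
             * needs addressed is reachable with 32-bit DMA. */
            if (dma_get_required_mask(dev) <= DMA_BIT_MASK(32))
                    return 32;
            return 64;
    }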
-5
drivers/scsi/qedf/qedf_main.c
··· 3686 3686 err1: 3687 3687 scsi_host_put(lport->host); 3688 3688 err0: 3689 - if (qedf) { 3690 - QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, "Probe done.\n"); 3691 - 3692 - clear_bit(QEDF_PROBING, &qedf->flags); 3693 - } 3694 3689 return rc; 3695 3690 } 3696 3691
+3 -1
drivers/scsi/qla2xxx/qla_target.c
··· 2151 2151 2152 2152 abort_cmd = ha->tgt.tgt_ops->find_cmd_by_tag(sess, 2153 2153 le32_to_cpu(abts->exchange_addr_to_abort)); 2154 - if (!abort_cmd) 2154 + if (!abort_cmd) { 2155 + mempool_free(mcmd, qla_tgt_mgmt_cmd_mempool); 2155 2156 return -EIO; 2157 + } 2156 2158 mcmd->unpacked_lun = abort_cmd->se_cmd.orig_fe_lun; 2157 2159 2158 2160 if (abort_cmd->qpair) {
-1
drivers/soc/bcm/brcmstb/biuctrl.c
··· 288 288 if (BRCM_ID(family_id) == 0x7260 && BRCM_REV(family_id) == 0) 289 289 cpubiuctrl_regs = b53_cpubiuctrl_no_wb_regs; 290 290 out: 291 - of_node_put(np); 292 291 return ret; 293 292 } 294 293
+10 -13
drivers/soc/sunxi/sunxi_sram.c
··· 78 78 79 79 static struct sunxi_sram_desc sun50i_a64_sram_c = { 80 80 .data = SUNXI_SRAM_DATA("C", 0x4, 24, 1, 81 - SUNXI_SRAM_MAP(0, 1, "cpu"), 82 - SUNXI_SRAM_MAP(1, 0, "de2")), 81 + SUNXI_SRAM_MAP(1, 0, "cpu"), 82 + SUNXI_SRAM_MAP(0, 1, "de2")), 83 83 }; 84 84 85 85 static const struct of_device_id sunxi_sram_dt_ids[] = { ··· 254 254 writel(val | ((device << sram_data->offset) & mask), 255 255 base + sram_data->reg); 256 256 257 + sram_desc->claimed = true; 257 258 spin_unlock(&sram_lock); 258 259 259 260 return 0; ··· 330 329 .writeable_reg = sunxi_sram_regmap_accessible_reg, 331 330 }; 332 331 333 - static int sunxi_sram_probe(struct platform_device *pdev) 332 + static int __init sunxi_sram_probe(struct platform_device *pdev) 334 333 { 335 - struct dentry *d; 336 334 struct regmap *emac_clock; 337 335 const struct sunxi_sramc_variant *variant; 336 + struct device *dev = &pdev->dev; 338 337 339 338 sram_dev = &pdev->dev; 340 339 ··· 346 345 if (IS_ERR(base)) 347 346 return PTR_ERR(base); 348 347 349 - of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 350 - 351 - d = debugfs_create_file("sram", S_IRUGO, NULL, NULL, 352 - &sunxi_sram_fops); 353 - if (!d) 354 - return -ENOMEM; 355 - 356 348 if (variant->num_emac_clocks > 0) { 357 349 emac_clock = devm_regmap_init_mmio(&pdev->dev, base, 358 350 &sunxi_sram_emac_clock_regmap); ··· 353 359 if (IS_ERR(emac_clock)) 354 360 return PTR_ERR(emac_clock); 355 361 } 362 + 363 + of_platform_populate(dev->of_node, NULL, NULL, dev); 364 + 365 + debugfs_create_file("sram", 0444, NULL, NULL, &sunxi_sram_fops); 356 366 357 367 return 0; 358 368 } ··· 407 409 .name = "sunxi-sram", 408 410 .of_match_table = sunxi_sram_dt_match, 409 411 }, 410 - .probe = sunxi_sram_probe, 411 412 }; 412 - module_platform_driver(sunxi_sram_driver); 413 + builtin_platform_driver_probe(sunxi_sram_driver, sunxi_sram_probe); 413 414 414 415 MODULE_AUTHOR("Maxime Ripard <maxime.ripard@free-electrons.com>"); 415 416 MODULE_DESCRIPTION("Allwinner sunXi SRAM Controller Driver");
+1
drivers/thunderbolt/icm.c
··· 2529 2529 tb->cm_ops = &icm_icl_ops; 2530 2530 break; 2531 2531 2532 + case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI: 2532 2533 case PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI: 2533 2534 icm->is_supported = icm_tgl_is_supported; 2534 2535 icm->get_mode = icm_ar_get_mode;
+1
drivers/thunderbolt/nhi.h
··· 55 55 * need for the PCI quirk anymore as we will use ICM also on Apple 56 56 * hardware. 57 57 */ 58 + #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_2C_NHI 0x1134 58 59 #define PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_4C_NHI 0x1137 59 60 #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_NHI 0x157d 60 61 #define PCI_DEVICE_ID_INTEL_WIN_RIDGE_2C_BRIDGE 0x157e
+1
drivers/tty/serial/8250/8250_omap.c
··· 1334 1334 up.port.throttle = omap_8250_throttle; 1335 1335 up.port.unthrottle = omap_8250_unthrottle; 1336 1336 up.port.rs485_config = serial8250_em485_config; 1337 + up.port.rs485_supported = serial8250_em485_supported; 1337 1338 up.rs485_start_tx = serial8250_em485_start_tx; 1338 1339 up.rs485_stop_tx = serial8250_em485_stop_tx; 1339 1340 up.port.has_sysrq = IS_ENABLED(CONFIG_SERIAL_8250_CONSOLE);
+5 -4
drivers/tty/serial/fsl_lpuart.c
··· 2724 2724 lpuart_reg.cons = LPUART_CONSOLE; 2725 2725 handler = lpuart_int; 2726 2726 } 2727 - ret = uart_add_one_port(&lpuart_reg, &sport->port); 2728 - if (ret) 2729 - goto failed_attach_port; 2730 2727 2731 2728 ret = lpuart_global_reset(sport); 2732 2729 if (ret) 2733 2730 goto failed_reset; 2731 + 2732 + ret = uart_add_one_port(&lpuart_reg, &sport->port); 2733 + if (ret) 2734 + goto failed_attach_port; 2734 2735 2735 2736 ret = uart_get_rs485_mode(&sport->port); 2736 2737 if (ret) ··· 2748 2747 2749 2748 failed_irq_request: 2750 2749 failed_get_rs485: 2751 - failed_reset: 2752 2750 uart_remove_one_port(&lpuart_reg, &sport->port); 2753 2751 failed_attach_port: 2752 + failed_reset: 2754 2753 lpuart_disable_clks(sport); 2755 2754 return ret; 2756 2755 }
+2 -3
drivers/tty/serial/serial-tegra.c
··· 525 525 count = tup->tx_bytes_requested - state.residue; 526 526 async_tx_ack(tup->tx_dma_desc); 527 527 spin_lock_irqsave(&tup->uport.lock, flags); 528 - xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1); 528 + uart_xmit_advance(&tup->uport, count); 529 529 tup->tx_in_progress = 0; 530 530 if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS) 531 531 uart_write_wakeup(&tup->uport); ··· 613 613 static void tegra_uart_stop_tx(struct uart_port *u) 614 614 { 615 615 struct tegra_uart_port *tup = to_tegra_uport(u); 616 - struct circ_buf *xmit = &tup->uport.state->xmit; 617 616 struct dma_tx_state state; 618 617 unsigned int count; 619 618 ··· 623 624 dmaengine_tx_status(tup->tx_dma_chan, tup->tx_cookie, &state); 624 625 count = tup->tx_bytes_requested - state.residue; 625 626 async_tx_ack(tup->tx_dma_desc); 626 - xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1); 627 + uart_xmit_advance(&tup->uport, count); 627 628 tup->tx_in_progress = 0; 628 629 } 629 630
+1 -1
drivers/tty/serial/sifive.c
··· 945 945 return PTR_ERR(base); 946 946 } 947 947 948 - clk = devm_clk_get(&pdev->dev, NULL); 948 + clk = devm_clk_get_enabled(&pdev->dev, NULL); 949 949 if (IS_ERR(clk)) { 950 950 dev_err(&pdev->dev, "unable to find controller clock\n"); 951 951 return PTR_ERR(clk);
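devm_clk_get_enabled() folds clock lookup, prepare/enable, and an automatic disable/unprepare on driver detach into one call, so the driver can no longer leave the clock off or leave it running after unbind. Roughly, and with this being a sketch rather than the helper's exact implementation:

    #include <linux/clk.h>
    #include <linux/err.h>

    static struct clk *demo_clk_get_enabled(struct device *dev, const char *id)
    {
            struct clk *clk = devm_clk_get(dev, id);
            int ret;

            if (IS_ERR(clk))
                    return clk;
            ret = clk_prepare_enable(clk);
            if (ret)
                    return ERR_PTR(ret);
            /* ...plus a devm action that runs clk_disable_unprepare()
             * automatically when the device is unbound. */
            return clk;
    }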
+1 -1
drivers/tty/serial/tegra-tcu.c
··· 101 101 break; 102 102 103 103 tegra_tcu_write(tcu, &xmit->buf[xmit->tail], count); 104 - xmit->tail = (xmit->tail + count) & (UART_XMIT_SIZE - 1); 104 + uart_xmit_advance(port, count); 105 105 } 106 106 107 107 uart_write_wakeup(port);
+1 -1
drivers/usb/core/hub.c
··· 6039 6039 * 6040 6040 * Return: The same as for usb_reset_and_verify_device(). 6041 6041 * However, if a reset is already in progress (for instance, if a 6042 - * driver doesn't have pre_ or post_reset() callbacks, and while 6042 + * driver doesn't have pre_reset() or post_reset() callbacks, and while 6043 6043 * being unbound or re-bound during the ongoing reset its disconnect() 6044 6044 * or probe() routine tries to perform a second, nested reset), the 6045 6045 * routine returns -EINPROGRESS.
+7 -6
drivers/usb/dwc3/core.c
··· 1752 1752 1753 1753 dwc3_get_properties(dwc); 1754 1754 1755 - if (!dwc->sysdev_is_parent) { 1756 - ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64)); 1757 - if (ret) 1758 - return ret; 1759 - } 1760 - 1761 1755 dwc->reset = devm_reset_control_array_get_optional_shared(dev); 1762 1756 if (IS_ERR(dwc->reset)) 1763 1757 return PTR_ERR(dwc->reset); ··· 1816 1822 1817 1823 platform_set_drvdata(pdev, dwc); 1818 1824 dwc3_cache_hwparams(dwc); 1825 + 1826 + if (!dwc->sysdev_is_parent && 1827 + DWC3_GHWPARAMS0_AWIDTH(dwc->hwparams.hwparams0) == 64) { 1828 + ret = dma_set_mask_and_coherent(dwc->sysdev, DMA_BIT_MASK(64)); 1829 + if (ret) 1830 + goto disable_clks; 1831 + } 1819 1832 1820 1833 spin_lock_init(&dwc->lock); 1821 1834 mutex_init(&dwc->mutex);
+6
drivers/usb/serial/option.c
··· 256 256 #define QUECTEL_PRODUCT_EM060K 0x030b 257 257 #define QUECTEL_PRODUCT_EM12 0x0512 258 258 #define QUECTEL_PRODUCT_RM500Q 0x0800 259 + #define QUECTEL_PRODUCT_RM520N 0x0801 259 260 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 260 261 #define QUECTEL_PRODUCT_EC200T 0x6026 261 262 #define QUECTEL_PRODUCT_RM500K 0x7001 ··· 1139 1138 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0xff, 0xff), 1140 1139 .driver_info = NUMEP2 }, 1141 1140 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EG95, 0xff, 0, 0) }, 1141 + { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0203, 0xff), /* BG95-M3 */ 1142 + .driver_info = ZLP }, 1142 1143 { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_BG96), 1143 1144 .driver_info = RSVD(4) }, 1144 1145 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff), ··· 1162 1159 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, 1163 1160 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), 1164 1161 .driver_info = ZLP }, 1162 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) }, 1163 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) }, 1164 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) }, 1165 1165 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, 1166 1166 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, 1167 1167 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
+1
drivers/usb/typec/Kconfig
··· 56 56 tristate "Analogix ANX7411 Type-C DRP Port controller driver" 57 57 depends on I2C 58 58 depends on USB_ROLE_SWITCH 59 + depends on POWER_SUPPLY 59 60 help 60 61 Say Y or M here if your system has Analogix ANX7411 Type-C DRP Port 61 62 controller driver.
+6 -3
drivers/xen/xenbus/xenbus_client.c
··· 382 382 unsigned long ring_size = nr_pages * XEN_PAGE_SIZE; 383 383 grant_ref_t gref_head; 384 384 unsigned int i; 385 + void *addr; 385 386 int ret; 386 387 387 - *vaddr = alloc_pages_exact(ring_size, gfp | __GFP_ZERO); 388 + addr = *vaddr = alloc_pages_exact(ring_size, gfp | __GFP_ZERO); 388 389 if (!*vaddr) { 389 390 ret = -ENOMEM; 390 391 goto err; ··· 402 401 unsigned long gfn; 403 402 404 403 if (is_vmalloc_addr(*vaddr)) 405 - gfn = pfn_to_gfn(vmalloc_to_pfn(vaddr[i])); 404 + gfn = pfn_to_gfn(vmalloc_to_pfn(addr)); 406 405 else 407 - gfn = virt_to_gfn(vaddr[i]); 406 + gfn = virt_to_gfn(addr); 408 407 409 408 grefs[i] = gnttab_claim_grant_reference(&gref_head); 410 409 gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id, 411 410 gfn, 0); 411 + 412 + addr += XEN_PAGE_SIZE; 412 413 } 413 414 414 415 return 0;
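The bug fixed above is a classic pointer-arithmetic slip: vaddr points at a single void * out-parameter, so vaddr[i] strides through pointer-sized slots of memory that was never an array, while the fix walks a byte cursor one Xen page at a time. A standalone userspace illustration of the corrected stride (names mirror the hunk, but the program itself is hypothetical):

    #include <stdio.h>
    #include <stdlib.h>

    #define XEN_PAGE_SIZE 4096

    /* Mirrors the out-parameter shape of the xenbus helper: *vaddr
     * receives the base of the multi-page ring allocation. */
    static void demo_grant_ring(void **vaddr, unsigned int nr_pages)
    {
            char *addr = *vaddr = malloc((size_t)nr_pages * XEN_PAGE_SIZE);

            if (!addr)
                    return;
            for (unsigned int i = 0; i < nr_pages; i++) {
                    /* old code used vaddr[i], indexing past the caller's
                     * single pointer; the fix advances a byte cursor. */
                    printf("page %u granted at offset %td\n", i,
                           addr - (char *)*vaddr);
                    addr += XEN_PAGE_SIZE;
            }
    }

    int main(void)
    {
            void *ring;

            demo_grant_ring(&ring, 4);
            free(ring);
            return 0;
    }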
+3
fs/dax.c
··· 1445 1445 loff_t done = 0; 1446 1446 int ret; 1447 1447 1448 + if (!iomi.len) 1449 + return 0; 1450 + 1448 1451 if (iov_iter_rw(iter) == WRITE) { 1449 1452 lockdep_assert_held_write(&iomi.inode->i_rwsem); 1450 1453 iomi.flags |= IOMAP_WRITE;
+5 -5
fs/ext4/ext4.h
··· 167 167 #define EXT4_MB_CR0_OPTIMIZED 0x8000 168 168 /* Avg fragment size rb tree lookup succeeded at least once for cr = 1 */ 169 169 #define EXT4_MB_CR1_OPTIMIZED 0x00010000 170 - /* Perform linear traversal for one group */ 171 - #define EXT4_MB_SEARCH_NEXT_LINEAR 0x00020000 172 170 struct ext4_allocation_request { 173 171 /* target inode for block we're allocating */ 174 172 struct inode *inode; ··· 1598 1600 struct list_head s_discard_list; 1599 1601 struct work_struct s_discard_work; 1600 1602 atomic_t s_retry_alloc_pending; 1601 - struct rb_root s_mb_avg_fragment_size_root; 1602 - rwlock_t s_mb_rb_lock; 1603 + struct list_head *s_mb_avg_fragment_size; 1604 + rwlock_t *s_mb_avg_fragment_size_locks; 1603 1605 struct list_head *s_mb_largest_free_orders; 1604 1606 rwlock_t *s_mb_largest_free_orders_locks; 1605 1607 ··· 3411 3413 ext4_grpblk_t bb_first_free; /* first free block */ 3412 3414 ext4_grpblk_t bb_free; /* total free blocks */ 3413 3415 ext4_grpblk_t bb_fragments; /* nr of freespace fragments */ 3416 + int bb_avg_fragment_size_order; /* order of average 3417 + fragment in BG */ 3414 3418 ext4_grpblk_t bb_largest_free_order;/* order of largest frag in BG */ 3415 3419 ext4_group_t bb_group; /* Group number */ 3416 3420 struct list_head bb_prealloc_list; ··· 3420 3420 void *bb_bitmap; 3421 3421 #endif 3422 3422 struct rw_semaphore alloc_sem; 3423 - struct rb_node bb_avg_fragment_size_rb; 3423 + struct list_head bb_avg_fragment_size_node; 3424 3424 struct list_head bb_largest_free_order_node; 3425 3425 ext4_grpblk_t bb_counters[]; /* Nr of free power-of-two-block 3426 3426 * regions, index is order.
+4
fs/ext4/extents.c
··· 460 460 error_msg = "invalid eh_entries"; 461 461 goto corrupted; 462 462 } 463 + if (unlikely((eh->eh_entries == 0) && (depth > 0))) { 464 + error_msg = "eh_entries is 0 but eh_depth is > 0"; 465 + goto corrupted; 466 + } 463 467 if (!ext4_valid_extent_entries(inode, eh, lblk, &pblk, depth)) { 464 468 error_msg = "invalid extent entries"; 465 469 goto corrupted;
+1 -1
fs/ext4/ialloc.c
··· 510 510 goto fallback; 511 511 } 512 512 513 - max_dirs = ndirs / ngroups + inodes_per_group / 16; 513 + max_dirs = ndirs / ngroups + inodes_per_group*flex_size / 16; 514 514 min_inodes = avefreei - inodes_per_group*flex_size / 4; 515 515 if (min_inodes < 1) 516 516 min_inodes = 1;
+143 -174
fs/ext4/mballoc.c
··· 140 140 * number of buddy bitmap orders possible) number of lists. Group-infos are 141 141 * placed in appropriate lists. 142 142 * 143 - * 2) Average fragment size rb tree (sbi->s_mb_avg_fragment_size_root) 143 + * 2) Average fragment size lists (sbi->s_mb_avg_fragment_size) 144 144 * 145 - * Locking: sbi->s_mb_rb_lock (rwlock) 145 + * Locking: sbi->s_mb_avg_fragment_size_locks(array of rw locks) 146 146 * 147 - * This is a red black tree consisting of group infos and the tree is sorted 148 - * by average fragment sizes (which is calculated as ext4_group_info->bb_free 149 - * / ext4_group_info->bb_fragments). 147 + * This is an array of lists where in the i-th list there are groups with 148 + * average fragment size >= 2^i and < 2^(i+1). The average fragment size 149 + * is computed as ext4_group_info->bb_free / ext4_group_info->bb_fragments. 150 + * Note that we don't bother with a special list for completely empty groups 151 + * so we only have MB_NUM_ORDERS(sb) lists. 150 152 * 151 153 * When "mb_optimize_scan" mount option is set, mballoc consults the above data 152 154 * structures to decide the order in which groups are to be traversed for ··· 162 160 * 163 161 * At CR = 1, we only consider groups where average fragment size > request 164 162 * size. So, we lookup a group which has average fragment size just above or 165 - * equal to request size using our rb tree (data structure 2) in O(log N) time. 163 + * equal to request size using our average fragment size group lists (data 164 + * structure 2) in O(1) time. 166 165 * 167 166 * If "mb_optimize_scan" mount option is not set, mballoc traverses groups in 168 167 * linear order which requires O(N) search time for each CR 0 and CR 1 phase. ··· 805 802 } 806 803 } 807 804 808 - static void ext4_mb_rb_insert(struct rb_root *root, struct rb_node *new, 809 - int (*cmp)(struct rb_node *, struct rb_node *)) 805 + static int mb_avg_fragment_size_order(struct super_block *sb, ext4_grpblk_t len) 810 806 { 811 - struct rb_node **iter = &root->rb_node, *parent = NULL; 807 + int order; 812 808 813 - while (*iter) { 814 - parent = *iter; 815 - if (cmp(new, *iter) > 0) 816 - iter = &((*iter)->rb_left); 817 - else 818 - iter = &((*iter)->rb_right); 819 - } 820 - 821 - rb_link_node(new, parent, iter); 822 - rb_insert_color(new, root); 809 + /* 810 + * We don't bother with a special lists groups with only 1 block free 811 + * extents and for completely empty groups. 812 + */ 813 + order = fls(len) - 2; 814 + if (order < 0) 815 + return 0; 816 + if (order == MB_NUM_ORDERS(sb)) 817 + order--; 818 + return order; 823 819 } 824 820 825 - static int 826 - ext4_mb_avg_fragment_size_cmp(struct rb_node *rb1, struct rb_node *rb2) 827 - { 828 - struct ext4_group_info *grp1 = rb_entry(rb1, 829 - struct ext4_group_info, 830 - bb_avg_fragment_size_rb); 831 - struct ext4_group_info *grp2 = rb_entry(rb2, 832 - struct ext4_group_info, 833 - bb_avg_fragment_size_rb); 834 - int num_frags_1, num_frags_2; 835 - 836 - num_frags_1 = grp1->bb_fragments ? 837 - grp1->bb_free / grp1->bb_fragments : 0; 838 - num_frags_2 = grp2->bb_fragments ? 839 - grp2->bb_free / grp2->bb_fragments : 0; 840 - 841 - return (num_frags_2 - num_frags_1); 842 - } 843 - 844 - /* 845 - * Reinsert grpinfo into the avg_fragment_size tree with new average 846 - * fragment size. 
847 - */ 821 + /* Move group to appropriate avg_fragment_size list */ 848 822 static void 849 823 mb_update_avg_fragment_size(struct super_block *sb, struct ext4_group_info *grp) 850 824 { 851 825 struct ext4_sb_info *sbi = EXT4_SB(sb); 826 + int new_order; 852 827 853 828 if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || grp->bb_free == 0) 854 829 return; 855 830 856 - write_lock(&sbi->s_mb_rb_lock); 857 - if (!RB_EMPTY_NODE(&grp->bb_avg_fragment_size_rb)) { 858 - rb_erase(&grp->bb_avg_fragment_size_rb, 859 - &sbi->s_mb_avg_fragment_size_root); 860 - RB_CLEAR_NODE(&grp->bb_avg_fragment_size_rb); 861 - } 831 + new_order = mb_avg_fragment_size_order(sb, 832 + grp->bb_free / grp->bb_fragments); 833 + if (new_order == grp->bb_avg_fragment_size_order) 834 + return; 862 835 863 - ext4_mb_rb_insert(&sbi->s_mb_avg_fragment_size_root, 864 - &grp->bb_avg_fragment_size_rb, 865 - ext4_mb_avg_fragment_size_cmp); 866 - write_unlock(&sbi->s_mb_rb_lock); 836 + if (grp->bb_avg_fragment_size_order != -1) { 837 + write_lock(&sbi->s_mb_avg_fragment_size_locks[ 838 + grp->bb_avg_fragment_size_order]); 839 + list_del(&grp->bb_avg_fragment_size_node); 840 + write_unlock(&sbi->s_mb_avg_fragment_size_locks[ 841 + grp->bb_avg_fragment_size_order]); 842 + } 843 + grp->bb_avg_fragment_size_order = new_order; 844 + write_lock(&sbi->s_mb_avg_fragment_size_locks[ 845 + grp->bb_avg_fragment_size_order]); 846 + list_add_tail(&grp->bb_avg_fragment_size_node, 847 + &sbi->s_mb_avg_fragment_size[grp->bb_avg_fragment_size_order]); 848 + write_unlock(&sbi->s_mb_avg_fragment_size_locks[ 849 + grp->bb_avg_fragment_size_order]); 867 850 } 868 851 869 852 /* ··· 898 909 *new_cr = 1; 899 910 } else { 900 911 *group = grp->bb_group; 901 - ac->ac_last_optimal_group = *group; 902 912 ac->ac_flags |= EXT4_MB_CR0_OPTIMIZED; 903 913 } 904 914 } 905 915 906 916 /* 907 - * Choose next group by traversing average fragment size tree. Updates *new_cr 908 - * if cr lvel needs an update. Sets EXT4_MB_SEARCH_NEXT_LINEAR to indicate that 909 - * the linear search should continue for one iteration since there's lock 910 - * contention on the rb tree lock. 917 + * Choose next group by traversing average fragment size list of suitable 918 + * order. Updates *new_cr if cr level needs an update. 911 919 */ 912 920 static void ext4_mb_choose_next_group_cr1(struct ext4_allocation_context *ac, 913 921 int *new_cr, ext4_group_t *group, ext4_group_t ngroups) 914 922 { 915 923 struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb); 916 - int avg_fragment_size, best_so_far; 917 - struct rb_node *node, *found; 918 - struct ext4_group_info *grp; 919 - 920 - /* 921 - * If there is contention on the lock, instead of waiting for the lock 922 - * to become available, just continue searching lineraly. We'll resume 923 - * our rb tree search later starting at ac->ac_last_optimal_group. 
924 - */ 925 - if (!read_trylock(&sbi->s_mb_rb_lock)) { 926 - ac->ac_flags |= EXT4_MB_SEARCH_NEXT_LINEAR; 927 - return; 928 - } 924 + struct ext4_group_info *grp = NULL, *iter; 925 + int i; 929 926 930 927 if (unlikely(ac->ac_flags & EXT4_MB_CR1_OPTIMIZED)) { 931 928 if (sbi->s_mb_stats) 932 929 atomic_inc(&sbi->s_bal_cr1_bad_suggestions); 933 - /* We have found something at CR 1 in the past */ 934 - grp = ext4_get_group_info(ac->ac_sb, ac->ac_last_optimal_group); 935 - for (found = rb_next(&grp->bb_avg_fragment_size_rb); found != NULL; 936 - found = rb_next(found)) { 937 - grp = rb_entry(found, struct ext4_group_info, 938 - bb_avg_fragment_size_rb); 930 + } 931 + 932 + for (i = mb_avg_fragment_size_order(ac->ac_sb, ac->ac_g_ex.fe_len); 933 + i < MB_NUM_ORDERS(ac->ac_sb); i++) { 934 + if (list_empty(&sbi->s_mb_avg_fragment_size[i])) 935 + continue; 936 + read_lock(&sbi->s_mb_avg_fragment_size_locks[i]); 937 + if (list_empty(&sbi->s_mb_avg_fragment_size[i])) { 938 + read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]); 939 + continue; 940 + } 941 + list_for_each_entry(iter, &sbi->s_mb_avg_fragment_size[i], 942 + bb_avg_fragment_size_node) { 939 943 if (sbi->s_mb_stats) 940 944 atomic64_inc(&sbi->s_bal_cX_groups_considered[1]); 941 - if (likely(ext4_mb_good_group(ac, grp->bb_group, 1))) 945 + if (likely(ext4_mb_good_group(ac, iter->bb_group, 1))) { 946 + grp = iter; 942 947 break; 943 - } 944 - goto done; 945 - } 946 - 947 - node = sbi->s_mb_avg_fragment_size_root.rb_node; 948 - best_so_far = 0; 949 - found = NULL; 950 - 951 - while (node) { 952 - grp = rb_entry(node, struct ext4_group_info, 953 - bb_avg_fragment_size_rb); 954 - avg_fragment_size = 0; 955 - if (ext4_mb_good_group(ac, grp->bb_group, 1)) { 956 - avg_fragment_size = grp->bb_fragments ? 957 - grp->bb_free / grp->bb_fragments : 0; 958 - if (!best_so_far || avg_fragment_size < best_so_far) { 959 - best_so_far = avg_fragment_size; 960 - found = node; 961 948 } 962 949 } 963 - if (avg_fragment_size > ac->ac_g_ex.fe_len) 964 - node = node->rb_right; 965 - else 966 - node = node->rb_left; 950 + read_unlock(&sbi->s_mb_avg_fragment_size_locks[i]); 951 + if (grp) 952 + break; 967 953 } 968 954 969 - done: 970 - if (found) { 971 - grp = rb_entry(found, struct ext4_group_info, 972 - bb_avg_fragment_size_rb); 955 + if (grp) { 973 956 *group = grp->bb_group; 974 957 ac->ac_flags |= EXT4_MB_CR1_OPTIMIZED; 975 958 } else { 976 959 *new_cr = 2; 977 960 } 978 - 979 - read_unlock(&sbi->s_mb_rb_lock); 980 - ac->ac_last_optimal_group = *group; 981 961 } 982 962 983 963 static inline int should_optimize_scan(struct ext4_allocation_context *ac) ··· 972 1014 973 1015 if (ac->ac_groups_linear_remaining) { 974 1016 ac->ac_groups_linear_remaining--; 975 - goto inc_and_return; 976 - } 977 - 978 - if (ac->ac_flags & EXT4_MB_SEARCH_NEXT_LINEAR) { 979 - ac->ac_flags &= ~EXT4_MB_SEARCH_NEXT_LINEAR; 980 1017 goto inc_and_return; 981 1018 } 982 1019 ··· 1002 1049 { 1003 1050 *new_cr = ac->ac_criteria; 1004 1051 1005 - if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining) 1052 + if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining) { 1053 + *group = next_linear_group(ac, *group, ngroups); 1006 1054 return; 1055 + } 1007 1056 1008 1057 if (*new_cr == 0) { 1009 1058 ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups); ··· 1030 1075 struct ext4_sb_info *sbi = EXT4_SB(sb); 1031 1076 int i; 1032 1077 1033 - if (test_opt2(sb, MB_OPTIMIZE_SCAN) && grp->bb_largest_free_order >= 0) { 1078 + for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) 1079 + if (grp->bb_counters[i] > 0) 1080 + break;
1081 + /* No need to move between order lists? */ 1082 + if (!test_opt2(sb, MB_OPTIMIZE_SCAN) || 1083 + i == grp->bb_largest_free_order) { 1084 + grp->bb_largest_free_order = i; 1085 + return; 1086 + } 1087 + 1088 + if (grp->bb_largest_free_order >= 0) { 1034 1089 write_lock(&sbi->s_mb_largest_free_orders_locks[ 1035 1090 grp->bb_largest_free_order]); 1036 1091 list_del_init(&grp->bb_largest_free_order_node); 1037 1092 write_unlock(&sbi->s_mb_largest_free_orders_locks[ 1038 1093 grp->bb_largest_free_order]); 1039 1094 } 1040 - grp->bb_largest_free_order = -1; /* uninit */ 1041 - 1042 - for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--) { 1043 - if (grp->bb_counters[i] > 0) { 1044 - grp->bb_largest_free_order = i; 1045 - break; 1046 - } 1047 - } 1048 - if (test_opt2(sb, MB_OPTIMIZE_SCAN) && 1049 - grp->bb_largest_free_order >= 0 && grp->bb_free) { 1095 + grp->bb_largest_free_order = i; 1096 + if (grp->bb_largest_free_order >= 0 && grp->bb_free) { 1050 1097 write_lock(&sbi->s_mb_largest_free_orders_locks[ 1051 1098 grp->bb_largest_free_order]); 1052 1099 list_add_tail(&grp->bb_largest_free_order_node, ··· 1105 1148 EXT4_GROUP_INFO_BBITMAP_CORRUPT); 1106 1149 } 1107 1150 mb_set_largest_free_order(sb, grp); 1151 + mb_update_avg_fragment_size(sb, grp); 1108 1152 1109 1153 clear_bit(EXT4_GROUP_INFO_NEED_INIT_BIT, &(grp->bb_state)); 1110 1154 1111 1155 period = get_cycles() - period; 1112 1156 atomic_inc(&sbi->s_mb_buddies_generated); 1113 1157 atomic64_add(period, &sbi->s_mb_generation_time); 1114 - mb_update_avg_fragment_size(sb, grp); 1115 1158 } 1116 1159 1117 1160 /* The buddy information is attached the buddy cache inode ··· 2593 2636 ext4_mb_regular_allocator(struct ext4_allocation_context *ac) 2594 2637 { 2595 2638 ext4_group_t prefetch_grp = 0, ngroups, group, i; 2596 - int cr = -1; 2639 + int cr = -1, new_cr; 2597 2640 int err = 0, first_err = 0; 2598 2641 unsigned int nr = 0, prefetch_ios = 0; 2599 2642 struct ext4_sb_info *sbi; ··· 2664 2707 * from the goal value specified 2665 2708 */ 2666 2709 group = ac->ac_g_ex.fe_group; 2667 - ac->ac_last_optimal_group = group; 2668 2710 ac->ac_groups_linear_remaining = sbi->s_mb_max_linear_groups; 2669 2711 prefetch_grp = group; 2670 2712 2671 - for (i = 0; i < ngroups; group = next_linear_group(ac, group, ngroups), 2672 - i++) { 2673 - int ret = 0, new_cr; 2713 + for (i = 0, new_cr = cr; i < ngroups; i++, 2714 + ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups)) { 2715 + int ret = 0; 2674 2716 2675 2717 cond_resched(); 2676 - 2677 - ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups); 2678 2718 if (new_cr != cr) { 2679 2719 cr = new_cr; 2680 2720 goto repeat; ··· 2945 2991 struct super_block *sb = pde_data(file_inode(seq->file)); 2946 2992 unsigned long position; 2947 2993 2948 - read_lock(&EXT4_SB(sb)->s_mb_rb_lock); 2949 - 2950 - if (*pos < 0 || *pos >= MB_NUM_ORDERS(sb) + 1) 2994 + if (*pos < 0 || *pos >= 2*MB_NUM_ORDERS(sb)) 2951 2995 return NULL; 2952 2996 position = *pos + 1; 2953 2997 return (void *) ((unsigned long) position); ··· 2957 3005 unsigned long position; 2958 3006 2959 3007 ++*pos; 2960 - if (*pos < 0 || *pos >= MB_NUM_ORDERS(sb) + 1) 3008 + if (*pos < 0 || *pos >= 2*MB_NUM_ORDERS(sb)) 2961 3009 return NULL; 2962 3010 position = *pos + 1; 2963 3011 return (void *) ((unsigned long) position); ··· 2969 3017 struct ext4_sb_info *sbi = EXT4_SB(sb); 2970 3018 unsigned long position = ((unsigned long) v); 2971 3019 struct ext4_group_info *grp; 2972 - struct rb_node *n; 2973 - unsigned int count, min, max;
3020 + unsigned int count; 2974 3021 2975 3022 position--; 2976 3023 if (position >= MB_NUM_ORDERS(sb)) { 2977 - seq_puts(seq, "fragment_size_tree:\n"); 2978 - n = rb_first(&sbi->s_mb_avg_fragment_size_root); 2979 - if (!n) { 2980 - seq_puts(seq, "\ttree_min: 0\n\ttree_max: 0\n\ttree_nodes: 0\n"); 2981 - return 0; 2982 - } 2983 - grp = rb_entry(n, struct ext4_group_info, bb_avg_fragment_size_rb); 2984 - min = grp->bb_fragments ? grp->bb_free / grp->bb_fragments : 0; 2985 - count = 1; 2986 - while (rb_next(n)) { 2987 - count++; 2988 - n = rb_next(n); 2989 - } 2990 - grp = rb_entry(n, struct ext4_group_info, bb_avg_fragment_size_rb); 2991 - max = grp->bb_fragments ? grp->bb_free / grp->bb_fragments : 0; 3024 + position -= MB_NUM_ORDERS(sb); 3025 + if (position == 0) 3026 + seq_puts(seq, "avg_fragment_size_lists:\n"); 2992 3027 2993 - seq_printf(seq, "\ttree_min: %u\n\ttree_max: %u\n\ttree_nodes: %u\n", 2994 - min, max, count); 3028 + count = 0; 3029 + read_lock(&sbi->s_mb_avg_fragment_size_locks[position]); 3030 + list_for_each_entry(grp, &sbi->s_mb_avg_fragment_size[position], 3031 + bb_avg_fragment_size_node) 3032 + count++; 3033 + read_unlock(&sbi->s_mb_avg_fragment_size_locks[position]); 3034 + seq_printf(seq, "\tlist_order_%u_groups: %u\n", 3035 + (unsigned int)position, count); 2995 3036 return 0; 2996 3037 } ··· 2994 3049 seq_puts(seq, "max_free_order_lists:\n"); 2995 3050 } 2996 3051 count = 0; 3052 + read_lock(&sbi->s_mb_largest_free_orders_locks[position]); 2997 3053 list_for_each_entry(grp, &sbi->s_mb_largest_free_orders[position], 2998 3054 bb_largest_free_order_node) 2999 3055 count++; 3056 + read_unlock(&sbi->s_mb_largest_free_orders_locks[position]); 3000 3057 seq_printf(seq, "\tlist_order_%u_groups: %u\n", 3001 3058 (unsigned int)position, count); ··· 3006 3059 } 3007 3060 3008 3061 static void ext4_mb_seq_structs_summary_stop(struct seq_file *seq, void *v) 3009 - __releases(&EXT4_SB(sb)->s_mb_rb_lock) 3010 3062 { 3011 - struct super_block *sb = pde_data(file_inode(seq->file)); 3012 - 3013 - read_unlock(&EXT4_SB(sb)->s_mb_rb_lock); 3014 3063 } 3015 3064 3016 3065 const struct seq_operations ext4_mb_seq_structs_summary_ops = { ··· 3119 3176 init_rwsem(&meta_group_info[i]->alloc_sem); 3120 3177 meta_group_info[i]->bb_free_root = RB_ROOT; 3121 3178 INIT_LIST_HEAD(&meta_group_info[i]->bb_largest_free_order_node); 3122 - RB_CLEAR_NODE(&meta_group_info[i]->bb_avg_fragment_size_rb); 3179 + INIT_LIST_HEAD(&meta_group_info[i]->bb_avg_fragment_size_node); 3123 3180 meta_group_info[i]->bb_largest_free_order = -1; /* uninit */ 3181 + meta_group_info[i]->bb_avg_fragment_size_order = -1; /* uninit */ 3124 3182 meta_group_info[i]->bb_group = group; 3125 3183 3126 3184 mb_group_bb_bitmap_alloc(sb, meta_group_info[i], group); ··· 3370 3426 i++; 3371 3427 } while (i < MB_NUM_ORDERS(sb)); 3372 3428 3373 - sbi->s_mb_avg_fragment_size_root = RB_ROOT; 3429 + sbi->s_mb_avg_fragment_size = 3430 + kmalloc_array(MB_NUM_ORDERS(sb), sizeof(struct list_head), 3431 + GFP_KERNEL); 3432 + if (!sbi->s_mb_avg_fragment_size) { 3433 + ret = -ENOMEM; 3434 + goto out; 3435 + } 3436 + sbi->s_mb_avg_fragment_size_locks = 3437 + kmalloc_array(MB_NUM_ORDERS(sb), sizeof(rwlock_t), 3438 + GFP_KERNEL); 3439 + if (!sbi->s_mb_avg_fragment_size_locks) { 3440 + ret = -ENOMEM; 3441 + goto out; 3442 + } 3443 + for (i = 0; i < MB_NUM_ORDERS(sb); i++) { 3444 + INIT_LIST_HEAD(&sbi->s_mb_avg_fragment_size[i]); 3445 + rwlock_init(&sbi->s_mb_avg_fragment_size_locks[i]); 3446 + }
3374 3447 sbi->s_mb_largest_free_orders = 3375 3448 kmalloc_array(MB_NUM_ORDERS(sb), sizeof(struct list_head), 3376 3449 GFP_KERNEL); ··· 3406 3445 INIT_LIST_HEAD(&sbi->s_mb_largest_free_orders[i]); 3407 3446 rwlock_init(&sbi->s_mb_largest_free_orders_locks[i]); 3408 3447 } 3409 - rwlock_init(&sbi->s_mb_rb_lock); 3410 3448 3411 3449 spin_lock_init(&sbi->s_md_lock); 3412 3450 sbi->s_mb_free_pending = 0; ··· 3476 3516 free_percpu(sbi->s_locality_groups); 3477 3517 sbi->s_locality_groups = NULL; 3478 3518 out: 3519 + kfree(sbi->s_mb_avg_fragment_size); 3520 + kfree(sbi->s_mb_avg_fragment_size_locks); 3479 3521 kfree(sbi->s_mb_largest_free_orders); 3480 3522 kfree(sbi->s_mb_largest_free_orders_locks); 3481 3523 kfree(sbi->s_mb_offsets); ··· 3544 3582 kvfree(group_info); 3545 3583 rcu_read_unlock(); 3546 3584 } 3585 + kfree(sbi->s_mb_avg_fragment_size); 3586 + kfree(sbi->s_mb_avg_fragment_size_locks); 3547 3587 kfree(sbi->s_mb_largest_free_orders); 3548 3588 kfree(sbi->s_mb_largest_free_orders_locks); 3549 3589 kfree(sbi->s_mb_offsets); ··· 5157 5193 struct ext4_sb_info *sbi = EXT4_SB(ac->ac_sb); 5158 5194 int bsbits = ac->ac_sb->s_blocksize_bits; 5159 5195 loff_t size, isize; 5196 + bool inode_pa_eligible, group_pa_eligible; 5160 5197 5161 5198 if (!(ac->ac_flags & EXT4_MB_HINT_DATA)) 5162 5199 return; ··· 5165 5200 if (unlikely(ac->ac_flags & EXT4_MB_HINT_GOAL_ONLY)) 5166 5201 return; 5167 5202 5203 + group_pa_eligible = sbi->s_mb_group_prealloc > 0; 5204 + inode_pa_eligible = true; 5168 5205 size = ac->ac_o_ex.fe_logical + EXT4_C2B(sbi, ac->ac_o_ex.fe_len); 5169 5206 isize = (i_size_read(ac->ac_inode) + ac->ac_sb->s_blocksize - 1) 5170 5207 >> bsbits; 5171 5208 5209 + /* No point in using inode preallocation for closed files */ 5172 5210 if ((size == isize) && !ext4_fs_is_busy(sbi) && 5173 - !inode_is_open_for_write(ac->ac_inode)) { 5174 - ac->ac_flags |= EXT4_MB_HINT_NOPREALLOC; 5175 - return; 5176 - } 5211 + !inode_is_open_for_write(ac->ac_inode)) 5212 + inode_pa_eligible = false; 5177 5213 5178 - if (sbi->s_mb_group_prealloc <= 0) { 5179 - ac->ac_flags |= EXT4_MB_STREAM_ALLOC; 5180 - return; 5181 - } 5182 - 5183 - /* don't use group allocation for large files */ 5184 5214 size = max(size, isize); 5185 - if (size > sbi->s_mb_stream_request) { 5186 - ac->ac_flags |= EXT4_MB_STREAM_ALLOC; 5215 + /* Don't use group allocation for large files */ 5216 + if (size > sbi->s_mb_stream_request) 5217 + group_pa_eligible = false; 5218 + 5219 + if (!group_pa_eligible) { 5220 + if (inode_pa_eligible) 5221 + ac->ac_flags |= EXT4_MB_STREAM_ALLOC; 5222 + else 5223 + ac->ac_flags |= EXT4_MB_HINT_NOPREALLOC; 5187 5224 return; 5188 5225 } ··· 5532 5565 ext4_fsblk_t block = 0; 5533 5566 unsigned int inquota = 0; 5534 5567 unsigned int reserv_clstrs = 0; 5568 + int retries = 0; 5535 5569 u64 seq; 5536 5570 5537 5571 might_sleep(); ··· 5635 5667 ar->len = ac->ac_b_ex.fe_len; 5636 5668 } 5637 5669 } else { 5638 - if (ext4_mb_discard_preallocations_should_retry(sb, ac, &seq)) 5670 + if (++retries < 3 && 5671 + ext4_mb_discard_preallocations_should_retry(sb, ac, &seq)) 5639 5672 goto repeat; 5640 5673 /* 5641 5674 * If block allocation fails then the pa allocated above
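The heart of the mballoc rework above is replacing the fragment-size rb-tree, and the single rwlock serializing it, with an array of per-order lists: a group whose average free extent is len blocks goes on list fls(len) - 2, so the CR1 scan picks a list in O(1) and walks only plausible groups. A standalone demo of that bucketing (kernel fls() is emulated with __builtin_clz(), and MB_NUM_ORDERS is pinned at 14, the value a 4K-block filesystem would see, purely as an assumption for the demo):

    #include <stdio.h>

    #define MB_NUM_ORDERS 14        /* assumed: blocksize_bits + 2 for 4K blocks */

    static int fls_(unsigned int x) /* kernel-style fls(): fls_(1) == 1 */
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    static int avg_fragment_size_order(int len)
    {
            int order = fls_(len) - 2;

            if (order < 0)
                    return 0;       /* 1-block fragments share list 0 */
            if (order == MB_NUM_ORDERS)
                    order--;        /* clamp completely free groups */
            return order;
    }

    int main(void)
    {
            const int lens[] = { 1, 3, 4, 8, 100, 4096, 32768 };

            for (unsigned int i = 0; i < sizeof(lens) / sizeof(lens[0]); i++)
                    printf("avg fragment %5d blocks -> list %d\n",
                           lens[i], avg_fragment_size_order(lens[i]));
            return 0;
    }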
-1
fs/ext4/mballoc.h
··· 178 178 /* copy of the best found extent taken before preallocation efforts */ 179 179 struct ext4_free_extent ac_f_ex; 180 180 181 - ext4_group_t ac_last_optimal_group; 182 181 __u32 ac_groups_considered; 183 182 __u32 ac_flags; /* allocation hints */ 184 183 __u16 ac_groups_scanned;
+2 -1
fs/ntfs/super.c
··· 2092 2092 // TODO: Initialize security. 2093 2093 /* Get the extended system files' directory inode. */ 2094 2094 vol->extend_ino = ntfs_iget(sb, FILE_Extend); 2095 - if (IS_ERR(vol->extend_ino) || is_bad_inode(vol->extend_ino)) { 2095 + if (IS_ERR(vol->extend_ino) || is_bad_inode(vol->extend_ino) || 2096 + !S_ISDIR(vol->extend_ino->i_mode)) { 2096 2097 if (!IS_ERR(vol->extend_ino)) 2097 2098 iput(vol->extend_ino); 2098 2099 ntfs_error(sb, "Failed to load $Extend.");
+3 -3
fs/xfs/xfs_notify_failure.c
··· 175 175 u64 ddev_start; 176 176 u64 ddev_end; 177 177 178 - if (!(mp->m_sb.sb_flags & SB_BORN)) { 178 + if (!(mp->m_super->s_flags & SB_BORN)) { 179 179 xfs_warn(mp, "filesystem is not ready for notify_failure()!"); 180 180 return -EIO; 181 181 } 182 182 183 183 if (mp->m_rtdev_targp && mp->m_rtdev_targp->bt_daxdev == dax_dev) { 184 - xfs_warn(mp, 184 + xfs_debug(mp, 185 185 "notify_failure() not supported on realtime device!"); 186 186 return -EOPNOTSUPP; 187 187 } ··· 194 194 } 195 195 196 196 if (!xfs_has_rmapbt(mp)) { 197 - xfs_warn(mp, "notify_failure() needs rmapbt enabled!"); 197 + xfs_debug(mp, "notify_failure() needs rmapbt enabled!"); 198 198 return -EOPNOTSUPP; 199 199 } 200 200
+1 -2
include/asm-generic/vmlinux.lds.h
··· 543 543 */ 544 544 #ifdef CONFIG_CFI_CLANG 545 545 #define TEXT_CFI_JT \ 546 - . = ALIGN(PMD_SIZE); \ 546 + ALIGN_FUNCTION(); \ 547 547 __cfi_jt_start = .; \ 548 548 *(.text..L.cfi.jumptable .text..L.cfi.jumptable.*) \ 549 - . = ALIGN(PMD_SIZE); \ 550 549 __cfi_jt_end = .; 551 550 #else 552 551 #define TEXT_CFI_JT
+3 -2
include/linux/cpumask.h
··· 1127 1127 * cover a worst-case of every other cpu being on one of two nodes for a 1128 1128 * very large NR_CPUS. 1129 1129 * 1130 - * Use PAGE_SIZE as a minimum for smaller configurations. 1130 + * Use PAGE_SIZE as a minimum for smaller configurations while avoiding 1131 + * unsigned comparison to -1. 1131 1132 */ 1132 - #define CPUMAP_FILE_MAX_BYTES ((((NR_CPUS * 9)/32 - 1) > PAGE_SIZE) \ 1133 + #define CPUMAP_FILE_MAX_BYTES (((NR_CPUS * 9)/32 > PAGE_SIZE) \ 1133 1134 ? (NR_CPUS * 9)/32 - 1 : PAGE_SIZE) 1134 1135 #define CPULIST_FILE_MAX_BYTES (((NR_CPUS * 7)/2 > PAGE_SIZE) ? (NR_CPUS * 7)/2 : PAGE_SIZE) 1135 1136
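The subtraction in the old macro is the whole bug: for a small NR_CPUS, (NR_CPUS * 9)/32 - 1 evaluates to -1, and once that int meets the unsigned PAGE_SIZE in the comparison it is converted to a huge value, so the wrong arm of the conditional wins. A standalone demo of the conversion pitfall (constants are illustrative):

    #include <stdio.h>

    int main(void)
    {
            unsigned long page_size = 4096;     /* PAGE_SIZE is unsigned long */
            int nr_cpus = 1;                    /* a small config */
            int bytes = (nr_cpus * 9) / 32 - 1; /* == -1 */

            /* -1 converts to ULONG_MAX in the comparison, so this fires: */
            if (bytes > page_size)
                    printf("old macro picks %d, not the page size\n", bytes);

            /* the fixed macro compares before subtracting: */
            if ((nr_cpus * 9) / 32 > page_size)
                    printf("unreachable for small configs\n");
            else
                    printf("fixed macro falls back to %lu\n", page_size);
            return 0;
    }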
+2 -2
include/linux/libata.h
··· 1136 1136 extern void ata_scsi_slave_destroy(struct scsi_device *sdev); 1137 1137 extern int ata_scsi_change_queue_depth(struct scsi_device *sdev, 1138 1138 int queue_depth); 1139 - extern int __ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev, 1140 - int queue_depth); 1139 + extern int ata_change_queue_depth(struct ata_port *ap, struct ata_device *dev, 1140 + struct scsi_device *sdev, int queue_depth); 1141 1141 extern struct ata_device *ata_dev_pair(struct ata_device *adev); 1142 1142 extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev); 1143 1143 extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
-45
include/linux/memcontrol.h
··· 1788 1788 rcu_read_unlock(); 1789 1789 } 1790 1790 1791 - /** 1792 - * get_mem_cgroup_from_obj - get a memcg associated with passed kernel object. 1793 - * @p: pointer to object from which memcg should be extracted. It can be NULL. 1794 - * 1795 - * Retrieves the memory group into which the memory of the pointed kernel 1796 - * object is accounted. If memcg is found, its reference is taken. 1797 - * If a passed kernel object is uncharged, or if proper memcg cannot be found, 1798 - * as well as if mem_cgroup is disabled, NULL is returned. 1799 - * 1800 - * Return: valid memcg pointer with taken reference or NULL. 1801 - */ 1802 - static inline struct mem_cgroup *get_mem_cgroup_from_obj(void *p) 1803 - { 1804 - struct mem_cgroup *memcg; 1805 - 1806 - rcu_read_lock(); 1807 - do { 1808 - memcg = mem_cgroup_from_obj(p); 1809 - } while (memcg && !css_tryget(&memcg->css)); 1810 - rcu_read_unlock(); 1811 - return memcg; 1812 - } 1813 - 1814 - /** 1815 - * mem_cgroup_or_root - always returns a pointer to a valid memory cgroup. 1816 - * @memcg: pointer to a valid memory cgroup or NULL. 1817 - * 1818 - * If passed argument is not NULL, returns it without any additional checks 1819 - * and changes. Otherwise, root_mem_cgroup is returned. 1820 - * 1821 - * NOTE: root_mem_cgroup can be NULL during early boot. 1822 - */ 1823 - static inline struct mem_cgroup *mem_cgroup_or_root(struct mem_cgroup *memcg) 1824 - { 1825 - return memcg ? memcg : root_mem_cgroup; 1826 - } 1827 1791 #else 1828 1792 static inline bool mem_cgroup_kmem_disabled(void) 1829 1793 { ··· 1844 1880 { 1845 1881 } 1846 1882 1847 - static inline struct mem_cgroup *get_mem_cgroup_from_obj(void *p) 1848 - { 1849 - return NULL; 1850 - } 1851 - 1852 - static inline struct mem_cgroup *mem_cgroup_or_root(struct mem_cgroup *memcg) 1853 - { 1854 - return NULL; 1855 - } 1856 1883 #endif /* CONFIG_MEMCG_KMEM */ 1857 1884 1858 1885 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_ZSWAP)
+5
include/linux/memremap.h
··· 139 139 }; 140 140 }; 141 141 142 + static inline bool pgmap_has_memory_failure(struct dev_pagemap *pgmap) 143 + { 144 + return pgmap->ops && pgmap->ops->memory_failure; 145 + } 146 + 142 147 static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap) 143 148 { 144 149 if (pgmap->flags & PGMAP_ALTMAP_VALID)
+2 -2
include/linux/scmi_protocol.h
··· 84 84 struct scmi_clk_proto_ops { 85 85 int (*count_get)(const struct scmi_protocol_handle *ph); 86 86 87 - const struct scmi_clock_info *(*info_get) 87 + const struct scmi_clock_info __must_check *(*info_get) 88 88 (const struct scmi_protocol_handle *ph, u32 clk_id); 89 89 int (*rate_get)(const struct scmi_protocol_handle *ph, u32 clk_id, 90 90 u64 *rate); ··· 466 466 */ 467 467 struct scmi_sensor_proto_ops { 468 468 int (*count_get)(const struct scmi_protocol_handle *ph); 469 - const struct scmi_sensor_info *(*info_get) 469 + const struct scmi_sensor_info __must_check *(*info_get) 470 470 (const struct scmi_protocol_handle *ph, u32 sensor_id); 471 471 int (*trip_point_config)(const struct scmi_protocol_handle *ph, 472 472 u32 sensor_id, u8 trip_id, u64 trip_value);
+17
include/linux/serial_core.h
··· 624 624 /* number of characters left in xmit buffer before we ask for more */ 625 625 #define WAKEUP_CHARS 256 626 626 627 + /** 628 + * uart_xmit_advance - Advance xmit buffer and account Tx'ed chars 629 + * @up: uart_port structure describing the port 630 + * @chars: number of characters sent 631 + * 632 + * This function advances the tail of circular xmit buffer by the number of 633 + * @chars transmitted and handles accounting of transmitted bytes (into 634 + * @up's icount.tx). 635 + */ 636 + static inline void uart_xmit_advance(struct uart_port *up, unsigned int chars) 637 + { 638 + struct circ_buf *xmit = &up->state->xmit; 639 + 640 + xmit->tail = (xmit->tail + chars) & (UART_XMIT_SIZE - 1); 641 + up->icount.tx += chars; 642 + } 643 + 627 644 struct module; 628 645 struct tty_driver; 629 646
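The helper exists because several drivers (see the serial-tegra and tegra-tcu hunks above) advanced xmit->tail by hand and forgot the icount.tx accounting. A hedged sketch of a driver Tx path using it; demo_hw_write() stands in for the device FIFO write and is not a real API:

    #include <linux/circ_buf.h>
    #include <linux/serial_core.h>

    static void demo_hw_write(struct uart_port *port, const char *buf,
                              unsigned int count)
    {
            /* push 'count' bytes from 'buf' into the hardware FIFO here */
    }

    static void demo_tx_chars(struct uart_port *port)
    {
            struct circ_buf *xmit = &port->state->xmit;

            while (!uart_circ_empty(xmit)) {
                    unsigned int count = CIRC_CNT_TO_END(xmit->head,
                                                         xmit->tail,
                                                         UART_XMIT_SIZE);

                    demo_hw_write(port, &xmit->buf[xmit->tail], count);
                    /* advances tail *and* bumps port->icount.tx */
                    uart_xmit_advance(port, count);
            }

            if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
                    uart_write_wakeup(port);
    }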
+15 -15
include/trace/events/scmi.h
··· 27 27 __entry->val2 = val2; 28 28 ), 29 29 30 - TP_printk("[0x%02X]:[0x%02X]:[%08X]:%u:%u", 31 - __entry->protocol_id, __entry->msg_id, 32 - __entry->res_id, __entry->val1, __entry->val2) 30 + TP_printk("pt=%02X msg_id=%02X res_id:%u vals=%u:%u", 31 + __entry->protocol_id, __entry->msg_id, 32 + __entry->res_id, __entry->val1, __entry->val2) 33 33 ); 34 34 35 35 TRACE_EVENT(scmi_xfer_begin, ··· 53 53 __entry->poll = poll; 54 54 ), 55 55 56 - TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u poll=%u", 57 - __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 58 - __entry->seq, __entry->poll) 56 + TP_printk("pt=%02X msg_id=%02X seq=%04X transfer_id=%X poll=%u", 57 + __entry->protocol_id, __entry->msg_id, __entry->seq, 58 + __entry->transfer_id, __entry->poll) 59 59 ); 60 60 61 61 TRACE_EVENT(scmi_xfer_response_wait, ··· 81 81 __entry->poll = poll; 82 82 ), 83 83 84 - TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u tmo_ms=%u poll=%u", 85 - __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 86 - __entry->seq, __entry->timeout, __entry->poll) 84 + TP_printk("pt=%02X msg_id=%02X seq=%04X transfer_id=%X tmo_ms=%u poll=%u", 85 + __entry->protocol_id, __entry->msg_id, __entry->seq, 86 + __entry->transfer_id, __entry->timeout, __entry->poll) 87 87 ); 88 88 89 89 TRACE_EVENT(scmi_xfer_end, ··· 107 107 __entry->status = status; 108 108 ), 109 109 110 - TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u status=%d", 111 - __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 112 - __entry->seq, __entry->status) 110 + TP_printk("pt=%02X msg_id=%02X seq=%04X transfer_id=%X s=%d", 111 + __entry->protocol_id, __entry->msg_id, __entry->seq, 112 + __entry->transfer_id, __entry->status) 113 113 ); 114 114 115 115 TRACE_EVENT(scmi_rx_done, ··· 133 133 __entry->msg_type = msg_type; 134 134 ), 135 135 136 - TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u msg_type=%u", 137 - __entry->transfer_id, __entry->msg_id, __entry->protocol_id, 138 - __entry->seq, __entry->msg_type) 136 + TP_printk("pt=%02X msg_id=%02X seq=%04X transfer_id=%X msg_type=%u", 137 + __entry->protocol_id, __entry->msg_id, __entry->seq, 138 + __entry->transfer_id, __entry->msg_type) 139 139 ); 140 140 141 141 TRACE_EVENT(scmi_msg_dump,
+2
include/uapi/linux/if_tun.h
··· 67 67 #define IFF_TAP 0x0002 68 68 #define IFF_NAPI 0x0010 69 69 #define IFF_NAPI_FRAGS 0x0020 70 + /* Used in TUNSETIFF to bring up tun/tap without carrier */ 71 + #define IFF_NO_CARRIER 0x0040 70 72 #define IFF_NO_PI 0x1000 71 73 /* This flag has no real effect */ 72 74 #define IFF_ONE_QUEUE 0x2000
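The new flag is passed from userspace in the TUNSETIFF ioctl alongside the existing flags. A hedged userspace sketch, assuming a kernel with IFF_NO_CARRIER support (the helper name is made up)::

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/if.h>
    #include <linux/if_tun.h>

    /* Create a tap device whose carrier starts off. */
    int open_tap_carrier_off(const char *name)
    {
            struct ifreq ifr = { 0 };
            int fd = open("/dev/net/tun", O_RDWR);

            if (fd < 0)
                    return -1;

            ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NO_CARRIER;
            strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
            if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }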
+3
io_uring/io_uring.c
··· 2648 2648 io_kill_timeouts(ctx, NULL, true); 2649 2649 /* if we failed setting up the ctx, we might not have any rings */ 2650 2650 io_iopoll_try_reap_events(ctx); 2651 + /* drop cached put refs after potentially doing completions */ 2652 + if (current->io_uring) 2653 + io_uring_drop_tctx_refs(current); 2651 2654 } 2652 2655 2653 2656 INIT_WORK(&ctx->exit_work, io_ring_exit_work);
+4 -1
kernel/cgroup/cgroup.c
··· 6049 6049 if (!kn) 6050 6050 goto out; 6051 6051 6052 + if (kernfs_type(kn) != KERNFS_DIR) 6053 + goto put; 6054 + 6052 6055 rcu_read_lock(); 6053 6056 6054 6057 cgrp = rcu_dereference(*(void __rcu __force **)&kn->priv); ··· 6059 6056 cgrp = NULL; 6060 6057 6061 6058 rcu_read_unlock(); 6062 - 6059 + put: 6063 6060 kernfs_put(kn); 6064 6061 out: 6065 6062 return cgrp;
+2 -4
kernel/workqueue.c
··· 3066 3066 if (WARN_ON(!work->func)) 3067 3067 return false; 3068 3068 3069 - if (!from_cancel) { 3070 - lock_map_acquire(&work->lockdep_map); 3071 - lock_map_release(&work->lockdep_map); 3072 - } 3069 + lock_map_acquire(&work->lockdep_map); 3070 + lock_map_release(&work->lockdep_map); 3073 3071 3074 3072 if (start_flush_work(work, &barr, from_cancel)) { 3075 3073 wait_for_completion(&barr.done);
+3 -1
lib/Kconfig.debug
··· 264 264 config DEBUG_INFO_DWARF4 265 265 bool "Generate DWARF Version 4 debuginfo" 266 266 select DEBUG_INFO 267 + depends on !CC_IS_CLANG || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502))) 267 268 help 268 - Generate DWARF v4 debug info. This requires gcc 4.5+ and gdb 7.0+. 269 + Generate DWARF v4 debug info. This requires gcc 4.5+, binutils 2.35.2 270 + if using clang without clang's integrated assembler, and gdb 7.0+. 269 271 270 272 If you have consumers of DWARF debug info that are not ready for 271 273 newer revisions of DWARF, you may wish to choose this or have your
+14 -5
mm/damon/dbgfs.c
··· 884 884 struct dentry *root, *dir, **new_dirs; 885 885 struct damon_ctx **new_ctxs; 886 886 int i, j; 887 + int ret = 0; 887 888 888 889 if (damon_nr_running_ctxs()) 889 890 return -EBUSY; ··· 899 898 900 899 new_dirs = kmalloc_array(dbgfs_nr_ctxs - 1, sizeof(*dbgfs_dirs), 901 900 GFP_KERNEL); 902 - if (!new_dirs) 903 - return -ENOMEM; 901 + if (!new_dirs) { 902 + ret = -ENOMEM; 903 + goto out_dput; 904 + } 904 905 905 906 new_ctxs = kmalloc_array(dbgfs_nr_ctxs - 1, sizeof(*dbgfs_ctxs), 906 907 GFP_KERNEL); 907 908 if (!new_ctxs) { 908 - kfree(new_dirs); 909 - return -ENOMEM; 909 + ret = -ENOMEM; 910 + goto out_new_dirs; 910 911 } 911 912 912 913 for (i = 0, j = 0; i < dbgfs_nr_ctxs; i++) { ··· 928 925 dbgfs_ctxs = new_ctxs; 929 926 dbgfs_nr_ctxs--; 930 927 931 - return 0; 928 + goto out_dput; 929 + 930 + out_new_dirs: 931 + kfree(new_dirs); 932 + out_dput: 933 + dput(dir); 934 + return ret; 932 935 } 933 936 934 937 static ssize_t dbgfs_rm_context_write(struct file *file,
+3
mm/frontswap.c
··· 125 125 * p->frontswap set to something valid to work properly. 126 126 */ 127 127 frontswap_map_set(sis, map); 128 + 129 + if (!frontswap_enabled()) 130 + return; 128 131 frontswap_ops->init(type); 129 132 } 130 133
+28 -6
mm/gup.c
··· 2345 2345 } 2346 2346 2347 2347 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL 2348 - static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, 2349 - unsigned int flags, struct page **pages, int *nr) 2348 + /* 2349 + * Fast-gup relies on pte change detection to avoid concurrent pgtable 2350 + * operations. 2351 + * 2352 + * To pin the page, fast-gup needs to do below in order: 2353 + * (1) pin the page (by prefetching pte), then (2) check pte not changed. 2354 + * 2355 + * For the rest of pgtable operations where pgtable updates can be racy 2356 + * with fast-gup, we need to do (1) clear pte, then (2) check whether page 2357 + * is pinned. 2358 + * 2359 + * Above will work for all pte-level operations, including THP split. 2360 + * 2361 + * For THP collapse, it's a bit more complicated because fast-gup may be 2362 + * walking a pgtable page that is being freed (pte is still valid but pmd 2363 + * can be cleared already). To avoid race in such condition, we need to 2364 + * also check pmd here to make sure pmd doesn't change (corresponds to 2365 + * pmdp_collapse_flush() in the THP collapse code path). 2366 + */ 2367 + static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr, 2368 + unsigned long end, unsigned int flags, 2369 + struct page **pages, int *nr) 2350 2370 { 2351 2371 struct dev_pagemap *pgmap = NULL; 2352 2372 int nr_start = *nr, ret = 0; ··· 2412 2392 goto pte_unmap; 2413 2393 } 2414 2394 2415 - if (unlikely(pte_val(pte) != pte_val(*ptep))) { 2395 + if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) || 2396 + unlikely(pte_val(pte) != pte_val(*ptep))) { 2416 2397 gup_put_folio(folio, 1, flags); 2417 2398 goto pte_unmap; 2418 2399 } ··· 2460 2439 * get_user_pages_fast_only implementation that can pin pages. Thus it's still 2461 2440 * useful to have gup_huge_pmd even if we can't operate on ptes. 2462 2441 */ 2463 - static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, 2464 - unsigned int flags, struct page **pages, int *nr) 2442 + static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr, 2443 + unsigned long end, unsigned int flags, 2444 + struct page **pages, int *nr) 2465 2445 { 2466 2446 return 0; 2467 2447 } ··· 2786 2764 if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr, 2787 2765 PMD_SHIFT, next, flags, pages, nr)) 2788 2766 return 0; 2789 - } else if (!gup_pte_range(pmd, addr, next, flags, pages, nr)) 2767 + } else if (!gup_pte_range(pmd, pmdp, addr, next, flags, pages, nr)) 2790 2768 return 0; 2791 2769 } while (pmdp++, addr = next, addr != end); 2792 2770
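The comment above describes both halves of the protocol; the fast-gup half is the hunk itself, while the pgtable-updater half looks roughly like the following sketch (illustrative only, not code from this patch)::

    /* (1) clear the pte (and flush), (2) only then check for pins */
    static bool example_try_unmap_pte(struct vm_area_struct *vma,
                                      unsigned long addr, pte_t *ptep,
                                      struct page *page)
    {
            pte_t pte = ptep_clear_flush(vma, addr, ptep);

            if (page_maybe_dma_pinned(page)) {
                    /* raced with fast-gup: restore the pte and back off */
                    set_pte_at(vma->vm_mm, addr, ptep, pte);
                    return false;
            }
            return true;
    }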
+2 -4
mm/huge_memory.c
··· 2894 2894 max_zone_pfn = zone_end_pfn(zone); 2895 2895 for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++) { 2896 2896 int nr_pages; 2897 - if (!pfn_valid(pfn)) 2898 - continue; 2899 2897 2900 - page = pfn_to_page(pfn); 2901 - if (!get_page_unless_zero(page)) 2898 + page = pfn_to_online_page(pfn); 2899 + if (!page || !get_page_unless_zero(page)) 2902 2900 continue; 2903 2901 2904 2902 if (zone != page_zone(page))
+8 -6
mm/hugetlb.c
··· 3420 3420 { 3421 3421 int i, nid = page_to_nid(page); 3422 3422 struct hstate *target_hstate; 3423 + struct page *subpage; 3423 3424 int rc = 0; 3424 3425 3425 3426 target_hstate = size_to_hstate(PAGE_SIZE << h->demote_order); ··· 3454 3453 mutex_lock(&target_hstate->resize_lock); 3455 3454 for (i = 0; i < pages_per_huge_page(h); 3456 3455 i += pages_per_huge_page(target_hstate)) { 3456 + subpage = nth_page(page, i); 3457 3457 if (hstate_is_gigantic(target_hstate)) 3458 - prep_compound_gigantic_page_for_demote(page + i, 3458 + prep_compound_gigantic_page_for_demote(subpage, 3459 3459 target_hstate->order); 3460 3460 else 3461 - prep_compound_page(page + i, target_hstate->order); 3462 - set_page_private(page + i, 0); 3463 - set_page_refcounted(page + i); 3464 - prep_new_huge_page(target_hstate, page + i, nid); 3465 - put_page(page + i); 3461 + prep_compound_page(subpage, target_hstate->order); 3462 + set_page_private(subpage, 0); 3463 + set_page_refcounted(subpage); 3464 + prep_new_huge_page(target_hstate, subpage, nid); 3465 + put_page(subpage); 3466 3466 } 3467 3467 mutex_unlock(&target_hstate->resize_lock); 3468 3468
+6 -4
mm/khugepaged.c
··· 1083 1083 1084 1084 pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */ 1085 1085 /* 1086 - * After this gup_fast can't run anymore. This also removes 1087 - * any huge TLB entry from the CPU so we won't allow 1088 - * huge and small TLB entries for the same virtual address 1089 - * to avoid the risk of CPU bugs in that area. 1086 + * This removes any huge TLB entry from the CPU so we won't allow 1087 + * huge and small TLB entries for the same virtual address to 1088 + * avoid the risk of CPU bugs in that area. 1089 + * 1090 + * Parallel fast GUP is fine since fast GUP will back off when 1091 + * it detects PMD is changed. 1090 1092 */ 1091 1093 _pmd = pmdp_collapse_flush(vma, address, pmd); 1092 1094 spin_unlock(pmd_ptl);
+5 -2
mm/madvise.c
··· 451 451 continue; 452 452 } 453 453 454 - /* Do not interfere with other mappings of this page */ 455 - if (page_mapcount(page) != 1) 454 + /* 455 + * Do not interfere with other mappings of this page, and 456 + * skip non-LRU pages. 457 + */ 458 + if (!PageLRU(page) || page_mapcount(page) != 1) 456 459 continue; 457 460 458 461 VM_BUG_ON_PAGE(PageTransCompound(page), page);
+16 -11
mm/memory-failure.c
··· 345 345 * not much we can do. We just print a message and ignore otherwise. 346 346 */ 347 347 348 + #define FSDAX_INVALID_PGOFF ULONG_MAX 349 + 348 350 /* 349 351 * Schedule a process for later kill. 350 352 * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM. 351 353 * 352 - * Notice: @fsdax_pgoff is used only when @p is a fsdax page. 353 - * In other cases, such as anonymous and file-backend page, the address to be 354 - * killed can be caculated by @p itself. 354 + * Note: @fsdax_pgoff is used only when @p is a fsdax page and a 355 + * filesystem with a memory failure handler has claimed the 356 + * memory_failure event. In all other cases, page->index and 357 + * page->mapping are sufficient for mapping the page back to its 358 + * corresponding user virtual address. 355 359 */ 356 360 static void add_to_kill(struct task_struct *tsk, struct page *p, 357 361 pgoff_t fsdax_pgoff, struct vm_area_struct *vma, ··· 371 367 372 368 tk->addr = page_address_in_vma(p, vma); 373 369 if (is_zone_device_page(p)) { 374 - /* 375 - * Since page->mapping is not used for fsdax, we need 376 - * calculate the address based on the vma. 377 - */ 378 - if (p->pgmap->type == MEMORY_DEVICE_FS_DAX) 370 + if (fsdax_pgoff != FSDAX_INVALID_PGOFF) 379 371 tk->addr = vma_pgoff_address(fsdax_pgoff, 1, vma); 380 372 tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr); 381 373 } else ··· 523 523 if (!page_mapped_in_vma(page, vma)) 524 524 continue; 525 525 if (vma->vm_mm == t->mm) 526 - add_to_kill(t, page, 0, vma, to_kill); 526 + add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma, 527 + to_kill); 527 528 } 528 529 } 529 530 read_unlock(&tasklist_lock); ··· 560 559 * to be informed of all such data corruptions. 561 560 */ 562 561 if (vma->vm_mm == t->mm) 563 - add_to_kill(t, page, 0, vma, to_kill); 562 + add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma, 563 + to_kill); 564 564 } 565 565 } 566 566 read_unlock(&tasklist_lock); ··· 744 742 .pfn = pfn, 745 743 }; 746 744 priv.tk.tsk = p; 745 + 746 + if (!p->mm) 747 + return -EFAULT; 747 748 748 749 mmap_read_lock(p->mm); 749 750 ret = walk_page_range(p->mm, 0, TASK_SIZE, &hwp_walk_ops, ··· 1933 1928 * Call driver's implementation to handle the memory failure, otherwise 1934 1929 * fall back to generic handler. 1935 1930 */ 1936 - if (pgmap->ops->memory_failure) { 1931 + if (pgmap_has_memory_failure(pgmap)) { 1937 1932 rc = pgmap->ops->memory_failure(pgmap, pfn, 1, flags); 1938 1933 /* 1939 1934 * Fall back to generic handler too if operation is not
+13 -7
mm/memory.c
··· 4386 4386 4387 4387 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, 4388 4388 vmf->address, &vmf->ptl); 4389 - ret = 0; 4390 - /* Re-check under ptl */ 4391 - if (likely(!vmf_pte_changed(vmf))) 4392 - do_set_pte(vmf, page, vmf->address); 4393 - else 4394 - ret = VM_FAULT_NOPAGE; 4395 4389 4396 - update_mmu_tlb(vma, vmf->address, vmf->pte); 4390 + /* Re-check under ptl */ 4391 + if (likely(!vmf_pte_changed(vmf))) { 4392 + do_set_pte(vmf, page, vmf->address); 4393 + 4394 + /* no need to invalidate: a not-present page won't be cached */ 4395 + update_mmu_cache(vma, vmf->address, vmf->pte); 4396 + 4397 + ret = 0; 4398 + } else { 4399 + update_mmu_tlb(vma, vmf->address, vmf->pte); 4400 + ret = VM_FAULT_NOPAGE; 4401 + } 4402 + 4397 4403 pte_unmap_unlock(vmf->pte, vmf->ptl); 4398 4404 return ret; 4399 4405 }
+11 -5
mm/migrate_device.c
··· 7 7 #include <linux/export.h> 8 8 #include <linux/memremap.h> 9 9 #include <linux/migrate.h> 10 + #include <linux/mm.h> 10 11 #include <linux/mm_inline.h> 11 12 #include <linux/mmu_notifier.h> 12 13 #include <linux/oom.h> ··· 194 193 bool anon_exclusive; 195 194 pte_t swp_pte; 196 195 196 + flush_cache_page(vma, addr, pte_pfn(*ptep)); 197 197 anon_exclusive = PageAnon(page) && PageAnonExclusive(page); 198 198 if (anon_exclusive) { 199 - flush_cache_page(vma, addr, pte_pfn(*ptep)); 200 - ptep_clear_flush(vma, addr, ptep); 199 + pte = ptep_clear_flush(vma, addr, ptep); 201 200 202 201 if (page_try_share_anon_rmap(page)) { 203 202 set_pte_at(mm, addr, ptep, pte); ··· 207 206 goto next; 208 207 } 209 208 } else { 210 - ptep_get_and_clear(mm, addr, ptep); 209 + pte = ptep_get_and_clear(mm, addr, ptep); 211 210 } 212 211 213 212 migrate->cpages++; 213 + 214 + /* Set the dirty flag on the folio now the pte is gone. */ 215 + if (pte_dirty(pte)) 216 + folio_mark_dirty(page_folio(page)); 214 217 215 218 /* Setup special migration page table entry */ 216 219 if (mpfn & MIGRATE_PFN_WRITE) ··· 259 254 migrate->dst[migrate->npages] = 0; 260 255 migrate->src[migrate->npages++] = mpfn; 261 256 } 262 - arch_leave_lazy_mmu_mode(); 263 - pte_unmap_unlock(ptep - 1, ptl); 264 257 265 258 /* Only flush the TLB if we actually modified any entries */ 266 259 if (unmapped) 267 260 flush_tlb_range(walk->vma, start, end); 261 + 262 + arch_leave_lazy_mmu_mode(); 263 + pte_unmap_unlock(ptep - 1, ptl); 268 264 269 265 return 0; 270 266 }
+55 -10
mm/page_alloc.c
··· 4708 4708 EXPORT_SYMBOL_GPL(fs_reclaim_release); 4709 4709 #endif 4710 4710 4711 + /* 4712 + * Zonelists may change due to hotplug during allocation. Detect when zonelists 4713 + * have been rebuilt so allocation retries. Reader side does not lock and 4714 + * retries the allocation if zonelist changes. Writer side is protected by the 4715 + * embedded spin_lock. 4716 + */ 4717 + static DEFINE_SEQLOCK(zonelist_update_seq); 4718 + 4719 + static unsigned int zonelist_iter_begin(void) 4720 + { 4721 + if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE)) 4722 + return read_seqbegin(&zonelist_update_seq); 4723 + 4724 + return 0; 4725 + } 4726 + 4727 + static unsigned int check_retry_zonelist(unsigned int seq) 4728 + { 4729 + if (IS_ENABLED(CONFIG_MEMORY_HOTREMOVE)) 4730 + return read_seqretry(&zonelist_update_seq, seq); 4731 + 4732 + return seq; 4733 + } 4734 + 4711 4735 /* Perform direct synchronous page reclaim */ 4712 4736 static unsigned long 4713 4737 __perform_reclaim(gfp_t gfp_mask, unsigned int order, ··· 5025 5001 int compaction_retries; 5026 5002 int no_progress_loops; 5027 5003 unsigned int cpuset_mems_cookie; 5004 + unsigned int zonelist_iter_cookie; 5028 5005 int reserve_flags; 5029 5006 5030 5007 /* ··· 5036 5011 (__GFP_ATOMIC|__GFP_DIRECT_RECLAIM))) 5037 5012 gfp_mask &= ~__GFP_ATOMIC; 5038 5013 5039 - retry_cpuset: 5014 + restart: 5040 5015 compaction_retries = 0; 5041 5016 no_progress_loops = 0; 5042 5017 compact_priority = DEF_COMPACT_PRIORITY; 5043 5018 cpuset_mems_cookie = read_mems_allowed_begin(); 5019 + zonelist_iter_cookie = zonelist_iter_begin(); 5044 5020 5045 5021 /* 5046 5022 * The fast path uses conservative alloc_flags to succeed only until ··· 5213 5187 goto retry; 5214 5188 5215 5189 5216 - /* Deal with possible cpuset update races before we start OOM killing */ 5217 - if (check_retry_cpuset(cpuset_mems_cookie, ac)) 5218 - goto retry_cpuset; 5190 + /* 5191 + * Deal with possible cpuset update races or zonelist updates to avoid 5192 + * an unnecessary OOM kill. 5193 + */ 5194 + if (check_retry_cpuset(cpuset_mems_cookie, ac) || 5195 + check_retry_zonelist(zonelist_iter_cookie)) 5196 + goto restart; 5219 5197 5220 5198 /* Reclaim has failed us, start killing things */ 5221 5199 page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress); ··· 5239 5209 } 5240 5210 5241 5211 nopage: 5242 - /* Deal with possible cpuset update races before we fail */ 5243 - if (check_retry_cpuset(cpuset_mems_cookie, ac)) 5244 - goto retry_cpuset; 5212 + /* 5213 + * Deal with possible cpuset update races or zonelist updates to avoid 5214 + * an unnecessary OOM kill. 5215 + */ 5216 + if (check_retry_cpuset(cpuset_mems_cookie, ac) || 5217 + check_retry_zonelist(zonelist_iter_cookie)) 5218 + goto restart; 5245 5219 5246 5220 /* 5247 5221 * Make sure that __GFP_NOFAIL request doesn't leak out and make sure ··· 5740 5706 /* reset page count bias and offset to start of new frag */ 5741 5707 nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; 5742 5708 offset = size - fragsz; 5709 + if (unlikely(offset < 0)) { 5710 + /* 5711 + * The caller is trying to allocate a fragment 5712 + * with fragsz > PAGE_SIZE but the cache isn't big 5713 + * enough to satisfy the request, this may 5714 + * happen in low memory conditions. 5715 + * We don't release the cache page because 5716 + * it could make memory pressure worse 5717 + * so we simply return NULL here. 5718 + */ 5719 + return NULL; 5720 + } 5721 + } 5743 5721 5744 5722 nc->pagecnt_bias--; ··· 6560 6514 int nid; 6561 6515 int __maybe_unused cpu; 6562 6516 pg_data_t *self = data; 6563 - static DEFINE_SPINLOCK(lock); 6564 6517 6565 - spin_lock(&lock); 6518 + write_seqlock(&zonelist_update_seq); 6566 6519 6567 6520 #ifdef CONFIG_NUMA 6568 6521 memset(node_load, 0, sizeof(node_load)); ··· 6598 6553 #endif 6599 6554 } 6600 6555 6601 - spin_unlock(&lock); 6556 + write_sequnlock(&zonelist_update_seq); 6602 6557 } 6603 6558 6604 6559 static noinline void __init
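For readers not familiar with seqlocks, the pattern the hunk above instantiates is the standard retry loop: readers never block and simply redo the walk if the sequence count moved, while only the writer takes the embedded lock. A generic sketch, with illustrative names::

    #include <linux/seqlock.h>

    static DEFINE_SEQLOCK(example_seq);
    static int example_shared;

    static int example_read(void)
    {
            unsigned int seq;
            int val;

            do {
                    seq = read_seqbegin(&example_seq);
                    val = example_shared;      /* read shared state */
            } while (read_seqretry(&example_seq, seq));

            return val;
    }

    static void example_write(int val)
    {
            write_seqlock(&example_seq);
            example_shared = val;              /* rebuild shared state */
            write_sequnlock(&example_seq);
    }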
+14 -11
mm/page_isolation.c
··· 288 288 * @isolate_before: isolate the pageblock before the boundary_pfn 289 289 * @skip_isolation: the flag to skip the pageblock isolation in second 290 290 * isolate_single_pageblock() 291 + * @migratetype: migrate type to set in error recovery. 291 292 * 292 293 * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one 293 294 * pageblock. When not all pageblocks within a page are isolated at the same ··· 303 302 * the in-use page then splitting the free page. 304 303 */ 305 304 static int isolate_single_pageblock(unsigned long boundary_pfn, int flags, 306 - gfp_t gfp_flags, bool isolate_before, bool skip_isolation) 305 + gfp_t gfp_flags, bool isolate_before, bool skip_isolation, 306 + int migratetype) 307 307 { 308 - unsigned char saved_mt; 309 308 unsigned long start_pfn; 310 309 unsigned long isolate_pageblock; 311 310 unsigned long pfn; ··· 329 328 start_pfn = max(ALIGN_DOWN(isolate_pageblock, MAX_ORDER_NR_PAGES), 330 329 zone->zone_start_pfn); 331 330 332 - saved_mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock)); 331 + if (skip_isolation) { 332 + int mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock)); 333 333 334 - if (skip_isolation) 335 - VM_BUG_ON(!is_migrate_isolate(saved_mt)); 336 - else { 337 - ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt, flags, 338 - isolate_pageblock, isolate_pageblock + pageblock_nr_pages); 334 + VM_BUG_ON(!is_migrate_isolate(mt)); 335 + } else { 336 + ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype, 337 + flags, isolate_pageblock, isolate_pageblock + pageblock_nr_pages); 339 338 340 339 if (ret) 341 340 return ret; ··· 476 475 failed: 477 476 /* restore the original migratetype */ 478 477 if (!skip_isolation) 479 - unset_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt); 478 + unset_migratetype_isolate(pfn_to_page(isolate_pageblock), migratetype); 480 479 return -EBUSY; 481 480 } 482 481 ··· 538 537 bool skip_isolation = false; 539 538 540 539 /* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */ 541 - ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false, skip_isolation); 540 + ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false, 541 + skip_isolation, migratetype); 542 542 if (ret) 543 543 return ret; 544 544 ··· 547 545 skip_isolation = true; 548 546 549 547 /* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */ 550 - ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true, skip_isolation); 548 + ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true, 549 + skip_isolation, migratetype); 551 550 if (ret) { 552 551 unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype); 553 552 return ret;
+1 -1
mm/secretmem.c
··· 285 285 286 286 secretmem_mnt = kern_mount(&secretmem_fs); 287 287 if (IS_ERR(secretmem_mnt)) 288 - ret = PTR_ERR(secretmem_mnt); 288 + return PTR_ERR(secretmem_mnt); 289 289 290 290 /* prevent secretmem mappings from ever getting PROT_EXEC */ 291 291 secretmem_mnt->mnt_flags |= MNT_NOEXEC;
+4 -1
mm/slab_common.c
··· 475 475 void kmem_cache_destroy(struct kmem_cache *s) 476 476 { 477 477 int refcnt; 478 + bool rcu_set; 478 479 479 480 if (unlikely(!s) || !kasan_check_byte(s)) 480 481 return; 481 482 482 483 cpus_read_lock(); 483 484 mutex_lock(&slab_mutex); 485 + 486 + rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU; 484 487 485 488 refcnt = --s->refcount; 486 489 if (refcnt) ··· 495 492 out_unlock: 496 493 mutex_unlock(&slab_mutex); 497 494 cpus_read_unlock(); 498 - if (!refcnt && !(s->flags & SLAB_TYPESAFE_BY_RCU)) 495 + if (!refcnt && !rcu_set) 499 496 kmem_cache_release(s); 500 497 } 501 498 EXPORT_SYMBOL(kmem_cache_destroy);
+16 -2
mm/slub.c
··· 310 310 */ 311 311 static nodemask_t slab_nodes; 312 312 313 + /* 314 + * Workqueue used for flush_cpu_slab(). 315 + */ 316 + static struct workqueue_struct *flushwq; 317 + 313 318 /******************************************************************** 314 319 * Core slab cache functions 315 320 *******************************************************************/ ··· 2735 2730 INIT_WORK(&sfw->work, flush_cpu_slab); 2736 2731 sfw->skip = false; 2737 2732 sfw->s = s; 2738 - schedule_work_on(cpu, &sfw->work); 2733 + queue_work_on(cpu, flushwq, &sfw->work); 2739 2734 } 2740 2735 2741 2736 for_each_online_cpu(cpu) { ··· 4863 4858 4864 4859 void __init kmem_cache_init_late(void) 4865 4860 { 4861 + flushwq = alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0); 4862 + WARN_ON(!flushwq); 4866 4863 } 4867 4864 4868 4865 struct kmem_cache * ··· 4933 4926 /* Honor the call site pointer we received. */ 4934 4927 trace_kmalloc(caller, ret, s, size, s->size, gfpflags); 4935 4928 4929 + ret = kasan_kmalloc(s, ret, size, gfpflags); 4930 + 4936 4931 return ret; 4937 4932 } 4938 4933 EXPORT_SYMBOL(__kmalloc_track_caller); ··· 4965 4956 4966 4957 /* Honor the call site pointer we received. */ 4967 4958 trace_kmalloc_node(caller, ret, s, size, s->size, gfpflags, node); 4959 + 4960 + ret = kasan_kmalloc(s, ret, size, gfpflags); 4968 4961 4969 4962 return ret; 4970 4963 } ··· 5901 5890 char *name = kmalloc(ID_STR_LENGTH, GFP_KERNEL); 5902 5891 char *p = name; 5903 5892 5904 - BUG_ON(!name); 5893 + if (!name) 5894 + return ERR_PTR(-ENOMEM); 5905 5895 5906 5896 *p++ = ':'; 5907 5897 /* ··· 5960 5948 * for the symlinks. 5961 5949 */ 5962 5950 name = create_unique_id(s); 5951 + if (IS_ERR(name)) 5952 + return PTR_ERR(name); 5963 5953 } 5964 5954 5965 5955 s->kobj.kset = kset;
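The flushwq change follows a common kernel pattern: work that must make progress during reclaim gets its own WQ_MEM_RECLAIM workqueue, whose guaranteed rescuer thread avoids the deadlocks possible on the shared system workqueue. A minimal sketch of the pattern, with illustrative names::

    #include <linux/init.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *example_wq;

    static void example_work_fn(struct work_struct *work)
    {
            /* work that may have to run while the system reclaims memory */
    }
    static DECLARE_WORK(example_work, example_work_fn);

    static int __init example_init(void)
    {
            /* WQ_MEM_RECLAIM guarantees a rescuer thread for forward progress */
            example_wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0);
            if (!example_wq)
                    return -ENOMEM;

            queue_work(example_wq, &example_work);
            return 0;
    }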
+1 -1
mm/swap_state.c
··· 151 151 152 152 for (i = 0; i < nr; i++) { 153 153 void *entry = xas_store(&xas, shadow); 154 - VM_BUG_ON_FOLIO(entry != folio, folio); 154 + VM_BUG_ON_PAGE(entry != folio, entry); 155 155 set_page_private(folio_page(folio, i), 0); 156 156 xas_next(&xas); 157 157 }
+2 -2
mm/vmscan.c
··· 2550 2550 } 2551 2551 2552 2552 if (unlikely(buffer_heads_over_limit)) { 2553 - if (folio_get_private(folio) && folio_trylock(folio)) { 2554 - if (folio_get_private(folio)) 2553 + if (folio_test_private(folio) && folio_trylock(folio)) { 2554 + if (folio_test_private(folio)) 2555 2555 filemap_release_folio(folio, 0); 2556 2556 folio_unlock(folio); 2557 2557 }
-7
net/core/net_namespace.c
··· 18 18 #include <linux/user_namespace.h> 19 19 #include <linux/net_namespace.h> 20 20 #include <linux/sched/task.h> 21 - #include <linux/sched/mm.h> 22 21 #include <linux/uidgid.h> 23 22 #include <linux/cookie.h> 24 23 ··· 1143 1144 * setup_net() and cleanup_net() are not possible. 1144 1145 */ 1145 1146 for_each_net(net) { 1146 - struct mem_cgroup *old, *memcg; 1147 - 1148 - memcg = mem_cgroup_or_root(get_mem_cgroup_from_obj(net)); 1149 - old = set_active_memcg(memcg); 1150 1147 error = ops_init(ops, net); 1151 - set_active_memcg(old); 1152 - mem_cgroup_put(memcg); 1153 1148 if (error) 1154 1149 goto out_undo; 1155 1150 list_add_tail(&net->exit_list, &net_exit_list);
+6 -3
net/mac80211/mlme.c
··· 4064 4064 4065 4065 if (!(link->u.mgd.conn_flags & IEEE80211_CONN_DISABLE_HE) && 4066 4066 (!elems->he_cap || !elems->he_operation)) { 4067 - mutex_unlock(&sdata->local->sta_mtx); 4068 4067 sdata_info(sdata, 4069 4068 "HE AP is missing HE capability/operation\n"); 4070 4069 ret = false; ··· 5634 5635 5635 5636 mutex_lock(&local->sta_mtx); 5636 5637 sta = sta_info_get(sdata, sdata->vif.cfg.ap_addr); 5637 - if (WARN_ON(!sta)) 5638 + if (WARN_ON(!sta)) { 5639 + mutex_unlock(&local->sta_mtx); 5638 5640 goto free; 5641 + } 5639 5642 link_sta = rcu_dereference_protected(sta->link[link->link_id], 5640 5643 lockdep_is_held(&local->sta_mtx)); 5641 - if (WARN_ON(!link_sta)) 5644 + if (WARN_ON(!link_sta)) { 5645 + mutex_unlock(&local->sta_mtx); 5642 5646 goto free; 5647 + } 5643 5648 5644 5649 changed |= ieee80211_recalc_twt_req(link, link_sta, elems); 5645 5650
+4 -2
net/mac80211/rc80211_minstrel_ht.c
··· 10 10 #include <linux/random.h> 11 11 #include <linux/moduleparam.h> 12 12 #include <linux/ieee80211.h> 13 + #include <linux/minmax.h> 13 14 #include <net/mac80211.h> 14 15 #include "rate.h" 15 16 #include "sta_info.h" ··· 1551 1550 { 1552 1551 struct ieee80211_sta_rates *rates; 1553 1552 int i = 0; 1553 + int max_rates = min_t(int, mp->hw->max_rates, IEEE80211_TX_RATE_TABLE_SIZE); 1554 1554 1555 1555 rates = kzalloc(sizeof(*rates), GFP_ATOMIC); 1556 1556 if (!rates) ··· 1561 1559 minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_tp_rate[0]); 1562 1560 1563 1561 /* Fill up remaining, keep one entry for max_probe_rate */ 1564 - for (; i < (mp->hw->max_rates - 1); i++) 1562 + for (; i < (max_rates - 1); i++) 1565 1563 minstrel_ht_set_rate(mp, mi, rates, i, mi->max_tp_rate[i]); 1566 1564 1567 - if (i < mp->hw->max_rates) 1565 + if (i < max_rates) 1568 1566 minstrel_ht_set_rate(mp, mi, rates, i++, mi->max_prob_rate); 1569 1567 1570 1568 if (i < IEEE80211_TX_RATE_TABLE_SIZE)
+1 -1
net/mac80211/status.c
··· 729 729 730 730 if (!sdata) { 731 731 skb->dev = NULL; 732 - } else { 732 + } else if (!dropped) { 733 733 unsigned int hdr_size = 734 734 ieee80211_hdrlen(hdr->frame_control); 735 735
+4
net/mac80211/tx.c
··· 5917 5917 skb_reset_network_header(skb); 5918 5918 skb_reset_mac_header(skb); 5919 5919 5920 + if (local->hw.queues < IEEE80211_NUM_ACS) 5921 + goto start_xmit; 5922 + 5920 5923 /* update QoS header to prioritize control port frames if possible, 5921 5924 * priorization also happens for control port frames send over 5922 5925 * AF_PACKET ··· 5947 5944 } 5948 5945 rcu_read_unlock(); 5949 5946 5947 + start_xmit: 5950 5948 /* mutex lock is only needed for incrementing the cookie counter */ 5951 5949 mutex_lock(&local->mtx); 5952 5950
+2 -2
net/mac80211/util.c
··· 301 301 local_bh_disable(); 302 302 spin_lock(&fq->lock); 303 303 304 + sdata->vif.txqs_stopped[ac] = false; 305 + 304 306 if (!test_bit(SDATA_STATE_RUNNING, &sdata->state)) 305 307 goto out; 306 308 307 309 if (sdata->vif.type == NL80211_IFTYPE_AP) 308 310 ps = &sdata->bss->ps; 309 - 310 - sdata->vif.txqs_stopped[ac] = false; 311 311 312 312 list_for_each_entry_rcu(sta, &local->sta_list, list) { 313 313 if (sdata != sta->sdata)
+13 -3
net/mptcp/protocol.c
··· 2685 2685 dfrag_clear(sk, dfrag); 2686 2686 } 2687 2687 2688 - static void mptcp_cancel_work(struct sock *sk) 2688 + void mptcp_cancel_work(struct sock *sk) 2689 2689 { 2690 2690 struct mptcp_sock *msk = mptcp_sk(sk); 2691 2691 ··· 2825 2825 sock_put(sk); 2826 2826 } 2827 2827 2828 - static void mptcp_close(struct sock *sk, long timeout) 2828 + bool __mptcp_close(struct sock *sk, long timeout) 2829 2829 { 2830 2830 struct mptcp_subflow_context *subflow; 2831 2831 struct mptcp_sock *msk = mptcp_sk(sk); 2832 2832 bool do_cancel_work = false; 2833 2833 2834 - lock_sock(sk); 2835 2834 sk->sk_shutdown = SHUTDOWN_MASK; 2836 2835 2837 2836 if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) { ··· 2872 2873 } else { 2873 2874 mptcp_reset_timeout(msk, 0); 2874 2875 } 2876 + 2877 + return do_cancel_work; 2878 + } 2879 + 2880 + static void mptcp_close(struct sock *sk, long timeout) 2881 + { 2882 + bool do_cancel_work; 2883 + 2884 + lock_sock(sk); 2885 + 2886 + do_cancel_work = __mptcp_close(sk, timeout); 2875 2887 release_sock(sk); 2876 2888 if (do_cancel_work) 2877 2889 mptcp_cancel_work(sk);
+2
net/mptcp/protocol.h
··· 614 614 void mptcp_subflow_queue_clean(struct sock *ssk); 615 615 void mptcp_sock_graft(struct sock *sk, struct socket *parent); 616 616 struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk); 617 + bool __mptcp_close(struct sock *sk, long timeout); 618 + void mptcp_cancel_work(struct sock *sk); 617 619 618 620 bool mptcp_addresses_equal(const struct mptcp_addr_info *a, 619 621 const struct mptcp_addr_info *b, bool use_port);
+7 -26
net/mptcp/subflow.c
··· 602 602 return !crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN); 603 603 } 604 604 605 - static void mptcp_sock_destruct(struct sock *sk) 606 - { 607 - /* if new mptcp socket isn't accepted, it is free'd 608 - * from the tcp listener sockets request queue, linked 609 - * from req->sk. The tcp socket is released. 610 - * This calls the ULP release function which will 611 - * also remove the mptcp socket, via 612 - * sock_put(ctx->conn). 613 - * 614 - * Problem is that the mptcp socket will be in 615 - * ESTABLISHED state and will not have the SOCK_DEAD flag. 616 - * Both result in warnings from inet_sock_destruct. 617 - */ 618 - if ((1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_CLOSE_WAIT)) { 619 - sk->sk_state = TCP_CLOSE; 620 - WARN_ON_ONCE(sk->sk_socket); 621 - sock_orphan(sk); 622 - } 623 - 624 - /* We don't need to clear msk->subflow, as it's still NULL at this point */ 625 - mptcp_destroy_common(mptcp_sk(sk), 0); 626 - inet_sock_destruct(sk); 627 - } 628 - 629 605 static void mptcp_force_close(struct sock *sk) 630 606 { 631 607 /* the msk is not yet exposed to user-space */ ··· 744 768 /* new mpc subflow takes ownership of the newly 745 769 * created mptcp socket 746 770 */ 747 - new_msk->sk_destruct = mptcp_sock_destruct; 748 771 mptcp_sk(new_msk)->setsockopt_seq = ctx->setsockopt_seq; 749 772 mptcp_pm_new_connection(mptcp_sk(new_msk), child, 1); 750 773 mptcp_token_accept(subflow_req, mptcp_sk(new_msk)); ··· 1738 1763 1739 1764 for (msk = head; msk; msk = next) { 1740 1765 struct sock *sk = (struct sock *)msk; 1741 - bool slow; 1766 + bool slow, do_cancel_work; 1742 1767 1768 + sock_hold(sk); 1743 1769 slow = lock_sock_fast_nested(sk); 1744 1770 next = msk->dl_next; 1745 1771 msk->first = NULL; 1746 1772 msk->dl_next = NULL; 1773 + 1774 + do_cancel_work = __mptcp_close(sk, 0); 1747 1775 unlock_sock_fast(sk, slow); 1776 + if (do_cancel_work) 1777 + mptcp_cancel_work(sk); 1778 + sock_put(sk); 1748 1779 } 1749 1780 1750 1781 /* we are still under the listener msk socket lock */
+4 -1
net/sched/act_ct.c
··· 1390 1390 1391 1391 err = tcf_ct_flow_table_get(net, params); 1392 1392 if (err) 1393 - goto cleanup; 1393 + goto cleanup_params; 1394 1394 1395 1395 spin_lock_bh(&c->tcf_lock); 1396 1396 goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch); ··· 1405 1405 1406 1406 return res; 1407 1407 1408 + cleanup_params: 1409 + if (params->tmpl) 1410 + nf_ct_put(params->tmpl); 1408 1411 cleanup: 1409 1412 if (goto_ch) 1410 1413 tcf_chain_put_by_act(goto_ch);
+2 -2
net/wireless/util.c
··· 1361 1361 25599, /* 4.166666... */ 1362 1362 17067, /* 2.777777... */ 1363 1363 12801, /* 2.083333... */ 1364 - 11769, /* 1.851851... */ 1364 + 11377, /* 1.851725... */ 1365 1365 10239, /* 1.666666... */ 1366 1366 8532, /* 1.388888... */ 1367 1367 7680, /* 1.250000... */ ··· 1444 1444 25599, /* 4.166666... */ 1445 1445 17067, /* 2.777777... */ 1446 1446 12801, /* 2.083333... */ 1447 - 11769, /* 1.851851... */ 1447 + 11377, /* 1.851725... */ 1448 1448 10239, /* 1.666666... */ 1449 1449 8532, /* 1.388888... */ 1450 1450 7680, /* 1.250000... */
+10 -11
scripts/Makefile.debug
··· 1 1 DEBUG_CFLAGS := 2 + debug-flags-y := -g 2 3 3 4 ifdef CONFIG_DEBUG_INFO_SPLIT 4 5 DEBUG_CFLAGS += -gsplit-dwarf 5 - else 6 - DEBUG_CFLAGS += -g 7 6 endif 8 7 9 - ifndef CONFIG_AS_IS_LLVM 10 - KBUILD_AFLAGS += -Wa,-gdwarf-2 8 + debug-flags-$(CONFIG_DEBUG_INFO_DWARF4) += -gdwarf-4 9 + debug-flags-$(CONFIG_DEBUG_INFO_DWARF5) += -gdwarf-5 10 + ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_AS_IS_GNU),yy) 11 + # Clang does not pass -g or -gdwarf-* option down to GAS. 12 + # Add -Wa, prefix to explicitly specify the flags. 13 + KBUILD_AFLAGS += $(addprefix -Wa$(comma), $(debug-flags-y)) 11 14 endif 12 - 13 - ifndef CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT 14 - dwarf-version-$(CONFIG_DEBUG_INFO_DWARF4) := 4 15 - dwarf-version-$(CONFIG_DEBUG_INFO_DWARF5) := 5 16 - DEBUG_CFLAGS += -gdwarf-$(dwarf-version-y) 17 - endif 15 + DEBUG_CFLAGS += $(debug-flags-y) 16 + KBUILD_AFLAGS += $(debug-flags-y) 18 17 19 18 ifdef CONFIG_DEBUG_INFO_REDUCED 20 19 DEBUG_CFLAGS += -fno-var-tracking ··· 28 29 KBUILD_LDFLAGS += --compress-debug-sections=zlib 29 30 endif 30 31 31 - KBUILD_CFLAGS += $(DEBUG_CFLAGS) 32 + KBUILD_CFLAGS += $(DEBUG_CFLAGS) 32 33 export DEBUG_CFLAGS
-1
scripts/clang-tools/run-clang-tools.py
··· 12 12 import argparse 13 13 import json 14 14 import multiprocessing 15 - import os 16 15 import subprocess 17 16 import sys 18 17
-1
scripts/kconfig/lkc.h
··· 98 98 bool menu_is_visible(struct menu *menu); 99 99 bool menu_has_prompt(struct menu *menu); 100 100 const char *menu_get_prompt(struct menu *menu); 101 - struct menu *menu_get_root_menu(struct menu *menu); 102 101 struct menu *menu_get_parent_menu(struct menu *menu); 103 102 bool menu_has_help(struct menu *menu); 104 103 const char *menu_get_help(struct menu *menu);
-5
scripts/kconfig/menu.c
··· 661 661 return NULL; 662 662 } 663 663 664 - struct menu *menu_get_root_menu(struct menu *menu) 665 - { 666 - return &rootmenu; 667 - } 668 - 669 664 struct menu *menu_get_parent_menu(struct menu *menu) 670 665 { 671 666 enum prop_type type;
+10
sound/hda/intel-dsp-config.c
··· 450 450 .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 451 451 .device = 0x51cb, 452 452 }, 453 + /* RaptorLake-M */ 454 + { 455 + .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 456 + .device = 0x51ce, 457 + }, 458 + /* RaptorLake-PX */ 459 + { 460 + .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 461 + .device = 0x51cf, 462 + }, 453 463 #endif 454 464 455 465 };
+14 -3
sound/soc/codecs/nau8824.c
··· 901 901 NAU8824_IRQ_KEY_RELEASE_DIS | 902 902 NAU8824_IRQ_KEY_SHORT_PRESS_DIS, 0); 903 903 904 - nau8824_sema_release(nau8824); 904 + if (nau8824->resume_lock) { 905 + nau8824_sema_release(nau8824); 906 + nau8824->resume_lock = false; 907 + } 905 908 } 906 909 907 910 static void nau8824_setup_auto_irq(struct nau8824 *nau8824) ··· 969 966 /* release semaphore held after resume, 970 967 * and cancel jack detection 971 968 */ 972 - nau8824_sema_release(nau8824); 969 + if (nau8824->resume_lock) { 970 + nau8824_sema_release(nau8824); 971 + nau8824->resume_lock = false; 972 + } 973 973 cancel_work_sync(&nau8824->jdet_work); 974 974 } else if (active_irq & NAU8824_KEY_SHORT_PRESS_IRQ) { 975 975 int key_status, button_pressed; ··· 1530 1524 static int __maybe_unused nau8824_resume(struct snd_soc_component *component) 1531 1525 { 1532 1526 struct nau8824 *nau8824 = snd_soc_component_get_drvdata(component); 1527 + int ret; 1533 1528 1534 1529 regcache_cache_only(nau8824->regmap, false); 1535 1530 regcache_sync(nau8824->regmap); ··· 1538 1531 /* Hold semaphore to postpone playback happening 1539 1532 * until jack detection done. 1540 1533 */ 1541 - nau8824_sema_acquire(nau8824, 0); 1534 + nau8824->resume_lock = true; 1535 + ret = nau8824_sema_acquire(nau8824, 0); 1536 + if (ret) 1537 + nau8824->resume_lock = false; 1542 1538 enable_irq(nau8824->irq); 1543 1539 } 1544 1540 ··· 1950 1940 nau8824->regmap = devm_regmap_init_i2c(i2c, &nau8824_regmap_config); 1951 1941 if (IS_ERR(nau8824->regmap)) 1952 1942 return PTR_ERR(nau8824->regmap); 1943 + nau8824->resume_lock = false; 1953 1944 nau8824->dev = dev; 1954 1945 nau8824->irq = i2c->irq; 1955 1946 sema_init(&nau8824->jd_sem, 1);
+1
sound/soc/codecs/nau8824.h
··· 436 436 struct semaphore jd_sem; 437 437 int fs; 438 438 int irq; 439 + int resume_lock; 439 440 int micbias_voltage; 440 441 int vref_impedance; 441 442 int jkdet_polarity;
+45 -19
sound/soc/codecs/rt5640.c
··· 2494 2494 2495 2495 /* Select JD-source */ 2496 2496 snd_soc_component_update_bits(component, RT5640_JD_CTRL, 2497 - RT5640_JD_MASK, rt5640->jd_src); 2497 + RT5640_JD_MASK, rt5640->jd_src << RT5640_JD_SFT); 2498 2498 2499 2499 /* Selecting GPIO01 as an interrupt */ 2500 2500 snd_soc_component_update_bits(component, RT5640_GPIO_CTRL1, ··· 2504 2504 snd_soc_component_update_bits(component, RT5640_GPIO_CTRL3, 2505 2505 RT5640_GP1_PF_MASK, RT5640_GP1_PF_OUT); 2506 2506 2507 - /* Enabling jd2 in general control 1 */ 2508 2507 snd_soc_component_write(component, RT5640_DUMMY1, 0x3f41); 2509 - 2510 - /* Enabling jd2 in general control 2 */ 2511 - snd_soc_component_write(component, RT5640_DUMMY2, 0x4001); 2512 2508 2513 2509 rt5640_set_ovcd_params(component); 2514 2510 ··· 2514 2518 * pin 0/1 instead of it being stuck to 1. So we invert the JD polarity 2515 2519 * on systems where the hardware does not already do this. 2516 2520 */ 2517 - if (rt5640->jd_inverted) 2518 - snd_soc_component_write(component, RT5640_IRQ_CTRL1, 2519 - RT5640_IRQ_JD_NOR); 2520 - else 2521 - snd_soc_component_write(component, RT5640_IRQ_CTRL1, 2522 - RT5640_IRQ_JD_NOR | RT5640_JD_P_INV); 2521 + if (rt5640->jd_inverted) { 2522 + if (rt5640->jd_src == RT5640_JD_SRC_JD1_IN4P) 2523 + snd_soc_component_write(component, RT5640_IRQ_CTRL1, 2524 + RT5640_IRQ_JD_NOR); 2525 + else if (rt5640->jd_src == RT5640_JD_SRC_JD2_IN4N) 2526 + snd_soc_component_update_bits(component, RT5640_DUMMY2, 2527 + RT5640_IRQ_JD2_MASK | RT5640_JD2_MASK, 2528 + RT5640_IRQ_JD2_NOR | RT5640_JD2_EN); 2529 + } else { 2530 + if (rt5640->jd_src == RT5640_JD_SRC_JD1_IN4P) 2531 + snd_soc_component_write(component, RT5640_IRQ_CTRL1, 2532 + RT5640_IRQ_JD_NOR | RT5640_JD_P_INV); 2533 + else if (rt5640->jd_src == RT5640_JD_SRC_JD2_IN4N) 2534 + snd_soc_component_update_bits(component, RT5640_DUMMY2, 2535 + RT5640_IRQ_JD2_MASK | RT5640_JD2_P_MASK | 2536 + RT5640_JD2_MASK, 2537 + RT5640_IRQ_JD2_NOR | RT5640_JD2_P_INV | 2538 + RT5640_JD2_EN); 2539 + } 2523 2540 2524 2541 rt5640->jack = jack; 2525 2542 if (rt5640->jack->status & SND_JACK_MICROPHONE) { ··· 2734 2725 2735 2726 if (device_property_read_u32(component->dev, 2736 2727 "realtek,jack-detect-source", &val) == 0) { 2737 - if (val <= RT5640_JD_SRC_GPIO4) 2738 - rt5640->jd_src = val << RT5640_JD_SFT; 2739 - else if (val == RT5640_JD_SRC_HDA_HEADER) 2740 - rt5640->jd_src = RT5640_JD_SRC_HDA_HEADER; 2728 + if (val <= RT5640_JD_SRC_HDA_HEADER) 2729 + rt5640->jd_src = val; 2741 2730 else 2742 2731 dev_warn(component->dev, "Warning: Invalid jack-detect-source value: %d, leaving jack-detect disabled\n", 2743 2732 val); ··· 2816 2809 regcache_sync(rt5640->regmap); 2817 2810 2818 2811 if (rt5640->jack) { 2819 - if (rt5640->jd_src == RT5640_JD_SRC_HDA_HEADER) 2812 + if (rt5640->jd_src == RT5640_JD_SRC_HDA_HEADER) { 2820 2813 snd_soc_component_update_bits(component, 2821 2814 RT5640_DUMMY2, 0x1100, 0x1100); 2822 - else 2823 - snd_soc_component_write(component, RT5640_DUMMY2, 2824 - 0x4001); 2815 + } else { 2816 + if (rt5640->jd_inverted) { 2817 + if (rt5640->jd_src == RT5640_JD_SRC_JD2_IN4N) 2818 + snd_soc_component_update_bits( 2819 + component, RT5640_DUMMY2, 2820 + RT5640_IRQ_JD2_MASK | 2821 + RT5640_JD2_MASK, 2822 + RT5640_IRQ_JD2_NOR | 2823 + RT5640_JD2_EN); 2824 + 2825 + } else { 2826 + if (rt5640->jd_src == RT5640_JD_SRC_JD2_IN4N) 2827 + snd_soc_component_update_bits( 2828 + component, RT5640_DUMMY2, 2829 + RT5640_IRQ_JD2_MASK | 2830 + RT5640_JD2_P_MASK | 2831 + RT5640_JD2_MASK, 2832 + RT5640_IRQ_JD2_NOR | 2833 + RT5640_JD2_P_INV | 2834 + RT5640_JD2_EN); 2835 + } 2836 + } 2825 2837 2826 2838 queue_delayed_work(system_long_wq, &rt5640->jack_work, 0); 2827 2839 }
+14
sound/soc/codecs/rt5640.h
··· 1984 1984 #define RT5640_M_MONO_ADC_R_SFT 12 1985 1985 #define RT5640_MCLK_DET (0x1 << 11) 1986 1986 1987 + /* General Control 1 (0xfb) */ 1988 + #define RT5640_IRQ_JD2_MASK (0x1 << 12) 1989 + #define RT5640_IRQ_JD2_SFT 12 1990 + #define RT5640_IRQ_JD2_BP (0x0 << 12) 1991 + #define RT5640_IRQ_JD2_NOR (0x1 << 12) 1992 + #define RT5640_JD2_P_MASK (0x1 << 10) 1993 + #define RT5640_JD2_P_SFT 10 1994 + #define RT5640_JD2_P_NOR (0x0 << 10) 1995 + #define RT5640_JD2_P_INV (0x1 << 10) 1996 + #define RT5640_JD2_MASK (0x1 << 8) 1997 + #define RT5640_JD2_SFT 8 1998 + #define RT5640_JD2_DIS (0x0 << 8) 1999 + #define RT5640_JD2_EN (0x1 << 8) 2000 + 1987 2001 /* Codec Private Register definition */ 1988 2002 1989 2003 /* MIC Over current threshold scale factor (0x15) */
+3
sound/soc/codecs/tas2770.c
··· 495 495 }, 496 496 }; 497 497 498 + static const struct regmap_config tas2770_i2c_regmap; 499 + 498 500 static int tas2770_codec_probe(struct snd_soc_component *component) 499 501 { 500 502 struct tas2770_priv *tas2770 = ··· 510 508 } 511 509 512 510 tas2770_reset(tas2770); 511 + regmap_reinit_cache(tas2770->regmap, &tas2770_i2c_regmap); 513 512 514 513 return 0; 515 514 }
+4
sound/soc/fsl/imx-card.c
··· 698 698 of_node_put(cpu); 699 699 of_node_put(codec); 700 700 of_node_put(platform); 701 + 702 + cpu = NULL; 703 + codec = NULL; 704 + platform = NULL; 701 705 } 702 706 703 707 return 0;
+10
sound/soc/intel/boards/sof_sdw.c
··· 270 270 .callback = sof_sdw_quirk_cb, 271 271 .matches = { 272 272 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), 273 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0AFF") 274 + }, 275 + .driver_data = (void *)(SOF_SDW_TGL_HDMI | 276 + RT711_JD2 | 277 + SOF_SDW_FOUR_SPK), 278 + }, 279 + { 280 + .callback = sof_sdw_quirk_cb, 281 + .matches = { 282 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), 273 283 DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0B00") 274 284 }, 275 285 .driver_data = (void *)(SOF_SDW_TGL_HDMI |
+1 -20
tools/include/linux/gfp.h
··· 3 3 #define _TOOLS_INCLUDE_LINUX_GFP_H 4 4 5 5 #include <linux/types.h> 6 - 7 - #define __GFP_BITS_SHIFT 26 8 - #define __GFP_BITS_MASK ((gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) 9 - 10 - #define __GFP_HIGH 0x20u 11 - #define __GFP_IO 0x40u 12 - #define __GFP_FS 0x80u 13 - #define __GFP_NOWARN 0x200u 14 - #define __GFP_ZERO 0x8000u 15 - #define __GFP_ATOMIC 0x80000u 16 - #define __GFP_ACCOUNT 0x100000u 17 - #define __GFP_DIRECT_RECLAIM 0x400000u 18 - #define __GFP_KSWAPD_RECLAIM 0x2000000u 19 - 20 - #define __GFP_RECLAIM (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM) 21 - 22 - #define GFP_ZONEMASK 0x0fu 23 - #define GFP_ATOMIC (__GFP_HIGH | __GFP_ATOMIC | __GFP_KSWAPD_RECLAIM) 24 - #define GFP_KERNEL (__GFP_RECLAIM | __GFP_IO | __GFP_FS) 25 - #define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM) 6 + #include <linux/gfp_types.h> 26 7 27 8 static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags) 28 9 {
+1
tools/include/linux/gfp_types.h
··· 1 + #include "../../../include/linux/gfp_types.h"
-77
tools/testing/nvdimm/test/ndtest.c
··· 134 134 }, 135 135 }; 136 136 137 - static struct ndtest_mapping region2_mapping[] = { 138 - { 139 - .dimm = 0, 140 - .position = 0, 141 - .start = 0, 142 - .size = DIMM_SIZE, 143 - }, 144 - }; 145 - 146 - static struct ndtest_mapping region3_mapping[] = { 147 - { 148 - .dimm = 1, 149 - .start = 0, 150 - .size = DIMM_SIZE, 151 - } 152 - }; 153 - 154 - static struct ndtest_mapping region4_mapping[] = { 155 - { 156 - .dimm = 2, 157 - .start = 0, 158 - .size = DIMM_SIZE, 159 - } 160 - }; 161 - 162 - static struct ndtest_mapping region5_mapping[] = { 163 - { 164 - .dimm = 3, 165 - .start = 0, 166 - .size = DIMM_SIZE, 167 - } 168 - }; 169 - 170 137 static struct ndtest_region bus0_regions[] = { 171 138 { 172 139 .type = ND_DEVICE_NAMESPACE_PMEM, ··· 148 181 .mapping = region1_mapping, 149 182 .size = DIMM_SIZE * 2, 150 183 .range_index = 2, 151 - }, 152 - { 153 - .type = ND_DEVICE_NAMESPACE_BLK, 154 - .num_mappings = ARRAY_SIZE(region2_mapping), 155 - .mapping = region2_mapping, 156 - .size = DIMM_SIZE, 157 - .range_index = 3, 158 - }, 159 - { 160 - .type = ND_DEVICE_NAMESPACE_BLK, 161 - .num_mappings = ARRAY_SIZE(region3_mapping), 162 - .mapping = region3_mapping, 163 - .size = DIMM_SIZE, 164 - .range_index = 4, 165 - }, 166 - { 167 - .type = ND_DEVICE_NAMESPACE_BLK, 168 - .num_mappings = ARRAY_SIZE(region4_mapping), 169 - .mapping = region4_mapping, 170 - .size = DIMM_SIZE, 171 - .range_index = 5, 172 - }, 173 - { 174 - .type = ND_DEVICE_NAMESPACE_BLK, 175 - .num_mappings = ARRAY_SIZE(region5_mapping), 176 - .mapping = region5_mapping, 177 - .size = DIMM_SIZE, 178 - .range_index = 6, 179 184 }, 180 185 }; 181 186 ··· 440 501 nd_set->altcookie = nd_set->cookie1; 441 502 ndr_desc->nd_set = nd_set; 442 503 443 - if (region->type == ND_DEVICE_NAMESPACE_BLK) { 444 - mappings[0].start = 0; 445 - mappings[0].size = DIMM_SIZE; 446 - mappings[0].nvdimm = p->config->dimms[ndimm].nvdimm; 447 - 448 - ndr_desc->mapping = &mappings[0]; 449 - ndr_desc->num_mappings = 1; 450 - ndr_desc->num_lanes = 1; 451 - ndbr_desc.enable = ndtest_blk_region_enable; 452 - ndbr_desc.do_io = ndtest_blk_do_io; 453 - region->region = nvdimm_blk_region_create(p->bus, ndr_desc); 454 - 455 - goto done; 456 - } 457 - 458 504 for (i = 0; i < region->num_mappings; i++) { 459 505 ndimm = region->mapping[i].dimm; 460 506 mappings[i].start = region->mapping[i].start; ··· 451 527 ndr_desc->num_mappings = region->num_mappings; 452 528 region->region = nvdimm_pmem_region_create(p->bus, ndr_desc); 453 529 454 - done: 455 530 if (!region->region) { 456 531 dev_err(&p->pdev.dev, "Error registering region %pR\n", 457 532 ndr_desc->res);
+1 -1
tools/testing/selftests/kvm/rseq_test.c
··· 227 227 ucall_init(vm, NULL); 228 228 229 229 pthread_create(&migration_thread, NULL, migration_worker, 230 - (void *)(unsigned long)gettid()); 230 + (void *)(unsigned long)syscall(SYS_gettid)); 231 231 232 232 for (i = 0; !done; i++) { 233 233 vcpu_run(vcpu);
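This fix works because glibc only gained a gettid() wrapper in version 2.30, so older toolchains fail to link the bare call; invoking the raw syscall is portable either way. A small sketch of the idiom::

    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Kernel TID of the calling thread, without relying on a libc wrapper */
    static pid_t example_gettid(void)
    {
            return (pid_t)syscall(SYS_gettid);
    }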
+10 -9
tools/testing/selftests/landlock/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # First run: make -C ../../../.. headers_install 2 4 3 5 CFLAGS += -Wall -O2 $(KHDR_INCLUDES) 6 + LDLIBS += -lcap 7 + 8 + LOCAL_HDRS += common.h 4 9 5 10 src_test := $(wildcard *_test.c) 6 11 ··· 13 8 14 9 TEST_GEN_PROGS_EXTENDED := true 15 10 16 - OVERRIDE_TARGETS := 1 17 - top_srcdir := ../../../.. 11 + # Static linking for short targets: 12 + $(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static 13 + 18 14 include ../lib.mk 19 15 20 - khdr_dir = $(top_srcdir)/usr/include 21 - 22 - $(OUTPUT)/true: true.c 23 - $(LINK.c) $< $(LDLIBS) -o $@ -static 24 - 25 - $(OUTPUT)/%_test: %_test.c $(khdr_dir)/linux/landlock.h ../kselftest_harness.h common.h 26 - $(LINK.c) $< $(LDLIBS) -o $@ -lcap -I$(khdr_dir) 16 + # Static linking for targets with $(OUTPUT)/ prefix: 17 + $(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
+4
tools/testing/selftests/lib.mk
··· 42 42 selfdir = $(realpath $(dir $(filter %/lib.mk,$(MAKEFILE_LIST)))) 43 43 top_srcdir = $(selfdir)/../../.. 44 44 45 + ifeq ($(KHDR_INCLUDES),) 46 + KHDR_INCLUDES := -isystem $(top_srcdir)/usr/include 47 + endif 48 + 45 49 # The following are built by lib.mk common compile rules. 46 50 # TEST_CUSTOM_PROGS should be used by tests that require 47 51 # custom build rule and prevent common build rule use.
+1 -1
tools/testing/selftests/net/reuseport_bpf.c
··· 328 328 if (bind(fd1, addr, sockaddr_size())) 329 329 error(1, errno, "failed to bind recv socket 1"); 330 330 331 - if (!bind(fd2, addr, sockaddr_size()) && errno != EADDRINUSE) 331 + if (!bind(fd2, addr, sockaddr_size()) || errno != EADDRINUSE) 332 332 error(1, errno, "bind socket 2 should fail with EADDRINUSE"); 333 333 334 334 free(addr);