Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ASoC: codecs: add support for ES8389

Merge series from Zhang Yi <zhangyi@everest-semi.com>:

This series adds a driver for the Everest Semiconductor ES8389 audio codec.

Mark Brown dd4eb861 723059ee

+4262 -1549
+1 -1
.clippy.toml
··· 7 7 disallowed-macros = [ 8 8 # The `clippy::dbg_macro` lint only works with `std::dbg!`, thus we simulate 9 9 # it here, see: https://github.com/rust-lang/rust-clippy/issues/11303. 10 - { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool" }, 10 + { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool", allow-invalid = true }, 11 11 ]
+4
.mailmap
··· 447 447 Luca Weiss <luca@lucaweiss.eu> <luca@z3ntu.xyz> 448 448 Lukasz Luba <lukasz.luba@arm.com> <l.luba@partner.samsung.com> 449 449 Luo Jie <quic_luoj@quicinc.com> <luoj@codeaurora.org> 450 + Lance Yang <lance.yang@linux.dev> <ioworker0@gmail.com> 451 + Lance Yang <lance.yang@linux.dev> <mingzhe.yang@ly.com> 450 452 Maciej W. Rozycki <macro@mips.com> <macro@imgtec.com> 451 453 Maciej W. Rozycki <macro@orcam.me.uk> <macro@linux-mips.org> 452 454 Maharaja Kennadyrajan <quic_mkenna@quicinc.com> <mkenna@codeaurora.org> ··· 485 483 Matthieu Baerts <matttbe@kernel.org> <matthieu.baerts@tessares.net> 486 484 Matthieu CASTET <castet.matthieu@free.fr> 487 485 Matti Vaittinen <mazziesaccount@gmail.com> <matti.vaittinen@fi.rohmeurope.com> 486 + Mattijs Korpershoek <mkorpershoek@kernel.org> <mkorpershoek@baylibre.com> 488 487 Matt Ranostay <matt@ranostay.sg> <matt.ranostay@konsulko.com> 489 488 Matt Ranostay <matt@ranostay.sg> <matt@ranostay.consulting> 490 489 Matt Ranostay <matt@ranostay.sg> Matthew Ranostay <mranostay@embeddedalley.com> ··· 752 749 Tycho Andersen <tycho@tycho.pizza> <tycho@tycho.ws> 753 750 Tzung-Bi Shih <tzungbi@kernel.org> <tzungbi@google.com> 754 751 Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de> 752 + Uwe Kleine-König <u.kleine-koenig@baylibre.com> <ukleinek@baylibre.com> 755 753 Uwe Kleine-König <u.kleine-koenig@pengutronix.de> 756 754 Uwe Kleine-König <ukleinek@strlen.de> 757 755 Uwe Kleine-König <ukl@pengutronix.de>
+1 -1
Documentation/devicetree/bindings/input/mediatek,mt6779-keypad.yaml
··· 7 7 title: Mediatek's Keypad Controller 8 8 9 9 maintainers: 10 - - Mattijs Korpershoek <mkorpershoek@baylibre.com> 10 + - Mattijs Korpershoek <mkorpershoek@kernel.org> 11 11 12 12 allOf: 13 13 - $ref: /schemas/input/matrix-keymap.yaml#
+90 -7
Documentation/devicetree/bindings/net/ethernet-controller.yaml
··· 74 74 - rev-rmii 75 75 - moca 76 76 77 - # RX and TX delays are added by the MAC when required 77 + # RX and TX delays are provided by the PCB. See below 78 78 - rgmii 79 79 80 - # RGMII with internal RX and TX delays provided by the PHY, 81 - # the MAC should not add the RX or TX delays in this case 80 + # RX and TX delays are not provided by the PCB. This is the most 81 + # frequent case. See below 82 82 - rgmii-id 83 83 84 - # RGMII with internal RX delay provided by the PHY, the MAC 85 - # should not add an RX delay in this case 84 + # TX delay is provided by the PCB. See below 86 85 - rgmii-rxid 87 86 88 - # RGMII with internal TX delay provided by the PHY, the MAC 89 - # should not add an TX delay in this case 87 + # RX delay is provided by the PCB. See below 90 88 - rgmii-txid 91 89 - rtbi 92 90 - smii ··· 284 286 285 287 additionalProperties: true 286 288 289 + # Informative 290 + # =========== 291 + # 292 + # 'phy-modes' & 'phy-connection-type' properties 'rgmii', 'rgmii-id', 293 + # 'rgmii-rxid', and 'rgmii-txid' are frequently used wrongly by 294 + # developers. This informative section clarifies their usage. 295 + # 296 + # The RGMII specification requires a 2ns delay between the data and 297 + # clock signals on the RGMII bus. How this delay is implemented is not 298 + # specified. 299 + # 300 + # One option is to make the clock traces on the PCB longer than the 301 + # data traces. A sufficient difference in length can provide the 2ns 302 + # delay. If both the RX and TX delays are implemented in this manner, 303 + # 'rgmii' should be used, indicating that the PCB adds the delays. 304 + # 305 + # If the PCB does not add these delays via extra long traces, 306 + # 'rgmii-id' should be used. Here, 'id' refers to 'internal delay', 307 + # where either the MAC or PHY adds the delay.
308 + # 309 + # If only one of the two delays is implemented via extra long clock 310 + # lines, either 'rgmii-rxid' or 'rgmii-txid' should be used, 311 + # indicating the MAC or PHY should implement one of the delays 312 + # internally, while the PCB implements the other delay. 313 + # 314 + # Device Tree describes hardware, and in this case, it describes the 315 + # PCB between the MAC and the PHY, and whether the PCB implements delays or 316 + # not. 317 + # 318 + # In practice, very few PCBs make use of extra long clock lines. Hence 319 + # any RGMII phy mode other than 'rgmii-id' is probably wrong, and is 320 + # unlikely to be accepted during review without details provided in 321 + # the commit description and comments in the .dts file. 322 + # 323 + # When the PCB does not implement the delays, the MAC or PHY must. As 324 + # such, this is software configuration, and so not described in Device 325 + # Tree. 326 + # 327 + # The following describes how Linux implements the configuration of 328 + # the MAC and PHY to add these delays when the PCB does not. As stated 329 + # above, developers often get this wrong, and the aim of this section 330 + # is to reduce the frequency of these errors by Linux developers. Other 331 + # users of the Device Tree may implement it differently, and still be 332 + # consistent with both the normative and informative description 333 + # above. 334 + # 335 + # By default in Linux, when using phylib/phylink, the MAC is expected 336 + # to read the 'phy-mode' from Device Tree, not implement any delays, 337 + # and pass the value to the PHY. The PHY will then implement delays as 338 + # specified by the 'phy-mode'. The PHY should always be reconfigured 339 + # to implement the needed delays, replacing any setting performed by 340 + # strapping or the bootloader, etc. 341 + # 342 + # Experience to date is that all PHYs which implement RGMII also 343 + # implement the ability to add or not add the needed delays.
Hence 344 + # this default is expected to work in all cases. Ignoring this default 345 + # is likely to be questioned by Reviewers, and require a strong argument 346 + # to be accepted. 347 + # 348 + # There are a small number of cases where the MAC has hard-coded 349 + # delays which cannot be disabled. The 'phy-mode' only describes the 350 + # PCB. The inability to disable the delays in the MAC does not change 351 + # the meaning of 'phy-mode'. It does however mean that a 'phy-mode' of 352 + # 'rgmii' is now invalid; it cannot be supported, since both the PCB 353 + # and the MAC and PHY adding delays cannot result in a functional 354 + # link. Thus the MAC should report a fatal error for any modes which 355 + # cannot be supported. When the MAC implements the delay, it must 356 + # ensure that the PHY does not also implement the same delay. So it 357 + # must modify the phy-mode it passes to the PHY, removing the delay it 358 + # has added. Failure to remove the delay will result in a 359 + # non-functioning link. 360 + # 361 + # Sometimes there is a need to fine tune the delays. Often the MAC or 362 + # PHY can perform this fine tuning. In the MAC node, the Device Tree 363 + # properties 'rx-internal-delay-ps' and 'tx-internal-delay-ps' should 364 + # be used to indicate fine tuning performed by the MAC. The values 365 + # expected here are small. A value of 2000ps, i.e. 2ns, and a phy-mode 366 + # of 'rgmii' will not be accepted by Reviewers. 367 + # 368 + # If the PHY is to perform fine tuning, the properties 369 + # 'rx-internal-delay-ps' and 'tx-internal-delay-ps' in the PHY node 370 + # should be used. When the PHY is implementing delays, e.g. 'rgmii-id', 371 + # these properties should have a value near to 2000ps. If the PCB is 372 + # implementing delays, e.g. 'rgmii', a small value can be used to fine 373 + # tune the delay added by the PCB. 287 374 ...
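The rule spelled out in the informative text above — a MAC with hard-coded delays must strip those delays from the phy-mode it hands to the PHY — can be sketched in C. This is an illustrative helper only; the function name and flags below are hypothetical, not phylib's actual API.

```c
#include <string.h>

/*
 * Hypothetical helper (not the phylib API): given the 'phy-mode' from
 * Device Tree and which delays the MAC hard-codes, compute the mode to
 * pass on to the PHY so the same delay is not added twice.
 */
static const char *phy_mode_for_phy(const char *dt_mode,
				    int mac_adds_rx, int mac_adds_tx)
{
	/* Which delays the PHY would add for the Device Tree mode. */
	int rx = !strcmp(dt_mode, "rgmii-id") || !strcmp(dt_mode, "rgmii-rxid");
	int tx = !strcmp(dt_mode, "rgmii-id") || !strcmp(dt_mode, "rgmii-txid");

	/* A delay the MAC already adds must not also come from the PHY. */
	rx = rx && !mac_adds_rx;
	tx = tx && !mac_adds_tx;

	if (rx && tx)
		return "rgmii-id";
	if (rx)
		return "rgmii-rxid";
	if (tx)
		return "rgmii-txid";
	return "rgmii";
}
```

For example, with 'rgmii-id' in Device Tree and a MAC that hard-codes the RX delay, the PHY should be asked only for the TX delay, i.e. 'rgmii-txid'.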
+50
Documentation/devicetree/bindings/sound/everest,es8389.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/sound/everest,es8389.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Everest ES8389 audio CODEC 8 + 9 + maintainers: 10 + - Michael Zhang <zhangyi@everest-semi.com> 11 + 12 + allOf: 13 + - $ref: dai-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: everest,es8389 18 + 19 + reg: 20 + maxItems: 1 21 + 22 + clocks: 23 + items: 24 + - description: clock for master clock (MCLK) 25 + 26 + clock-names: 27 + items: 28 + - const: mclk 29 + 30 + "#sound-dai-cells": 31 + const: 0 32 + 33 + required: 34 + - compatible 35 + - reg 36 + - "#sound-dai-cells" 37 + 38 + additionalProperties: false 39 + 40 + examples: 41 + - | 42 + i2c { 43 + #address-cells = <1>; 44 + #size-cells = <0>; 45 + es8389: codec@10 { 46 + compatible = "everest,es8389"; 47 + reg = <0x10>; 48 + #sound-dai-cells = <0>; 49 + }; 50 + };
+65 -6
MAINTAINERS
··· 2519 2519 F: arch/arm/boot/dts/nxp/imx/ 2520 2520 F: arch/arm/boot/dts/nxp/mxs/ 2521 2521 F: arch/arm64/boot/dts/freescale/ 2522 + X: Documentation/devicetree/bindings/media/i2c/ 2522 2523 X: arch/arm64/boot/dts/freescale/fsl-* 2523 2524 X: arch/arm64/boot/dts/freescale/qoriq-* 2524 2525 X: drivers/media/i2c/ ··· 8727 8726 R: Yue Hu <zbestahu@gmail.com> 8728 8727 R: Jeffle Xu <jefflexu@linux.alibaba.com> 8729 8728 R: Sandeep Dhavale <dhavale@google.com> 8729 + R: Hongbo Li <lihongbo22@huawei.com> 8730 8730 L: linux-erofs@lists.ozlabs.org 8731 8731 S: Maintained 8732 8732 W: https://erofs.docs.kernel.org ··· 11237 11235 F: drivers/i2c/busses/i2c-cht-wc.c 11238 11236 11239 11237 I2C/SMBUS ISMT DRIVER 11240 - M: Seth Heasley <seth.heasley@intel.com> 11241 11238 M: Neil Horman <nhorman@tuxdriver.com> 11242 11239 L: linux-i2c@vger.kernel.org 11243 11240 F: Documentation/i2c/busses/i2c-ismt.rst ··· 15072 15071 F: drivers/media/platform/mediatek/jpeg/ 15073 15072 15074 15073 MEDIATEK KEYPAD DRIVER 15075 - M: Mattijs Korpershoek <mkorpershoek@baylibre.com> 15074 + M: Mattijs Korpershoek <mkorpershoek@kernel.org> 15076 15075 S: Supported 15077 15076 F: Documentation/devicetree/bindings/input/mediatek,mt6779-keypad.yaml 15078 15077 F: drivers/input/keyboard/mt6779-keypad.c ··· 15495 15494 F: include/linux/gfp.h 15496 15495 F: include/linux/gfp_types.h 15497 15496 F: include/linux/memfd.h 15498 - F: include/linux/memory.h 15499 15497 F: include/linux/memory_hotplug.h 15500 15498 F: include/linux/memory-tiers.h 15501 15499 F: include/linux/mempolicy.h 15502 15500 F: include/linux/mempool.h 15503 15501 F: include/linux/memremap.h 15504 - F: include/linux/mm.h 15505 - F: include/linux/mm_*.h 15506 15502 F: include/linux/mmzone.h 15507 15503 F: include/linux/mmu_notifier.h 15508 15504 F: include/linux/pagewalk.h 15509 - F: include/linux/rmap.h 15510 15505 F: include/trace/events/ksm.h 15511 15506 F: mm/ 15512 15507 F: tools/mm/ 15513 15508 F: tools/testing/selftests/mm/ 15514 
15509 N: include/linux/page[-_]* 15510 + 15511 + MEMORY MANAGEMENT - CORE 15512 + M: Andrew Morton <akpm@linux-foundation.org> 15513 + M: David Hildenbrand <david@redhat.com> 15514 + R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15515 + R: Liam R. Howlett <Liam.Howlett@oracle.com> 15516 + R: Vlastimil Babka <vbabka@suse.cz> 15517 + R: Mike Rapoport <rppt@kernel.org> 15518 + R: Suren Baghdasaryan <surenb@google.com> 15519 + R: Michal Hocko <mhocko@suse.com> 15520 + L: linux-mm@kvack.org 15521 + S: Maintained 15522 + W: http://www.linux-mm.org 15523 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 15524 + F: include/linux/memory.h 15525 + F: include/linux/mm.h 15526 + F: include/linux/mm_*.h 15527 + F: include/linux/mmdebug.h 15528 + F: include/linux/pagewalk.h 15529 + F: mm/Kconfig 15530 + F: mm/debug.c 15531 + F: mm/init-mm.c 15532 + F: mm/memory.c 15533 + F: mm/pagewalk.c 15534 + F: mm/util.c 15515 15535 15516 15536 MEMORY MANAGEMENT - EXECMEM 15517 15537 M: Andrew Morton <akpm@linux-foundation.org> ··· 15567 15545 F: include/linux/gfp.h 15568 15546 F: include/linux/compaction.h 15569 15547 15548 + MEMORY MANAGEMENT - RMAP (REVERSE MAPPING) 15549 + M: Andrew Morton <akpm@linux-foundation.org> 15550 + M: David Hildenbrand <david@redhat.com> 15551 + M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15552 + R: Rik van Riel <riel@surriel.com> 15553 + R: Liam R. 
Howlett <Liam.Howlett@oracle.com> 15554 + R: Vlastimil Babka <vbabka@suse.cz> 15555 + R: Harry Yoo <harry.yoo@oracle.com> 15556 + L: linux-mm@kvack.org 15557 + S: Maintained 15558 + F: include/linux/rmap.h 15559 + F: mm/rmap.c 15560 + 15570 15561 MEMORY MANAGEMENT - SECRETMEM 15571 15562 M: Andrew Morton <akpm@linux-foundation.org> 15572 15563 M: Mike Rapoport <rppt@kernel.org> ··· 15587 15552 S: Maintained 15588 15553 F: include/linux/secretmem.h 15589 15554 F: mm/secretmem.c 15555 + 15556 + MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE) 15557 + M: Andrew Morton <akpm@linux-foundation.org> 15558 + M: David Hildenbrand <david@redhat.com> 15559 + R: Zi Yan <ziy@nvidia.com> 15560 + R: Baolin Wang <baolin.wang@linux.alibaba.com> 15561 + R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> 15562 + R: Liam R. Howlett <Liam.Howlett@oracle.com> 15563 + R: Nico Pache <npache@redhat.com> 15564 + R: Ryan Roberts <ryan.roberts@arm.com> 15565 + R: Dev Jain <dev.jain@arm.com> 15566 + L: linux-mm@kvack.org 15567 + S: Maintained 15568 + W: http://www.linux-mm.org 15569 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 15570 + F: Documentation/admin-guide/mm/transhuge.rst 15571 + F: include/linux/huge_mm.h 15572 + F: include/linux/khugepaged.h 15573 + F: include/trace/events/huge_memory.h 15574 + F: mm/huge_memory.c 15575 + F: mm/khugepaged.c 15576 + F: tools/testing/selftests/mm/khugepaged.c 15577 + F: tools/testing/selftests/mm/split_huge_page_test.c 15578 + F: tools/testing/selftests/mm/transhuge-stress.c 15590 15579 15591 15580 MEMORY MANAGEMENT - USERFAULTFD 15592 15581 M: Andrew Morton <akpm@linux-foundation.org>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 15 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc6 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+3
arch/arm/boot/dts/nxp/imx/imx6ul-imx6ull-opos6ul.dtsi
··· 40 40 reg = <1>; 41 41 interrupt-parent = <&gpio4>; 42 42 interrupts = <16 IRQ_TYPE_LEVEL_LOW>; 43 + micrel,led-mode = <1>; 44 + clocks = <&clks IMX6UL_CLK_ENET_REF>; 45 + clock-names = "rmii-ref"; 43 46 status = "okay"; 44 47 }; 45 48 };
+11 -11
arch/arm64/boot/dts/arm/morello.dtsi
··· 44 44 next-level-cache = <&l2_0>; 45 45 clocks = <&scmi_dvfs 0>; 46 46 47 - l2_0: l2-cache-0 { 47 + l2_0: l2-cache { 48 48 compatible = "cache"; 49 49 cache-level = <2>; 50 50 /* 8 ways set associative */ ··· 53 53 cache-sets = <2048>; 54 54 cache-unified; 55 55 next-level-cache = <&l3_0>; 56 - 57 - l3_0: l3-cache { 58 - compatible = "cache"; 59 - cache-level = <3>; 60 - cache-size = <0x100000>; 61 - cache-unified; 62 - }; 63 56 }; 64 57 }; 65 58 ··· 71 78 next-level-cache = <&l2_1>; 72 79 clocks = <&scmi_dvfs 0>; 73 80 74 - l2_1: l2-cache-1 { 81 + l2_1: l2-cache { 75 82 compatible = "cache"; 76 83 cache-level = <2>; 77 84 /* 8 ways set associative */ ··· 98 105 next-level-cache = <&l2_2>; 99 106 clocks = <&scmi_dvfs 1>; 100 107 101 - l2_2: l2-cache-2 { 108 + l2_2: l2-cache { 102 109 compatible = "cache"; 103 110 cache-level = <2>; 104 111 /* 8 ways set associative */ ··· 125 132 next-level-cache = <&l2_3>; 126 133 clocks = <&scmi_dvfs 1>; 127 134 128 - l2_3: l2-cache-3 { 135 + l2_3: l2-cache { 129 136 compatible = "cache"; 130 137 cache-level = <2>; 131 138 /* 8 ways set associative */ ··· 135 142 cache-unified; 136 143 next-level-cache = <&l3_0>; 137 144 }; 145 + }; 146 + 147 + l3_0: l3-cache { 148 + compatible = "cache"; 149 + cache-level = <3>; 150 + cache-size = <0x100000>; 151 + cache-unified; 138 152 }; 139 153 }; 140 154
+20 -5
arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
··· 144 144 startup-delay-us = <20000>; 145 145 }; 146 146 147 + reg_usdhc2_vqmmc: regulator-usdhc2-vqmmc { 148 + compatible = "regulator-gpio"; 149 + pinctrl-names = "default"; 150 + pinctrl-0 = <&pinctrl_usdhc2_vsel>; 151 + gpios = <&gpio1 4 GPIO_ACTIVE_HIGH>; 152 + regulator-max-microvolt = <3300000>; 153 + regulator-min-microvolt = <1800000>; 154 + states = <1800000 0x1>, 155 + <3300000 0x0>; 156 + regulator-name = "PMIC_USDHC_VSELECT"; 157 + vin-supply = <&reg_nvcc_sd>; 158 + }; 159 + 147 160 reserved-memory { 148 161 #address-cells = <2>; 149 162 #size-cells = <2>; ··· 282 269 "SODIMM_19", 283 270 "", 284 271 "", 285 - "", 272 + "PMIC_USDHC_VSELECT", 286 273 "", 287 274 "", 288 275 "", ··· 798 785 pinctrl-2 = <&pinctrl_usdhc2_200mhz>, <&pinctrl_usdhc2_cd>; 799 786 pinctrl-3 = <&pinctrl_usdhc2_sleep>, <&pinctrl_usdhc2_cd_sleep>; 800 787 vmmc-supply = <&reg_usdhc2_vmmc>; 788 + vqmmc-supply = <&reg_usdhc2_vqmmc>; 801 789 }; 802 790 803 791 &wdog1 { ··· 1220 1206 <MX8MM_IOMUXC_NAND_CLE_GPIO3_IO5 0x6>; /* SODIMM 76 */ 1221 1207 }; 1222 1208 1209 + pinctrl_usdhc2_vsel: usdhc2vselgrp { 1210 + fsl,pins = 1211 + <MX8MM_IOMUXC_GPIO1_IO04_GPIO1_IO4 0x10>; /* PMIC_USDHC_VSELECT */ 1212 + }; 1213 + 1223 1214 /* 1224 1215 * Note: Due to ERR050080 we use discrete external on-module resistors pulling-up to the 1225 1216 * on-module +V3.3_1.8_SD (LDO5) rail and explicitly disable the internal pull-ups here. 
1226 1217 */ 1227 1218 pinctrl_usdhc2: usdhc2grp { 1228 1219 fsl,pins = 1229 - <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x10>, 1230 1220 <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x90>, /* SODIMM 78 */ 1231 1221 <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x90>, /* SODIMM 74 */ 1232 1222 <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x90>, /* SODIMM 80 */ ··· 1241 1223 1242 1224 pinctrl_usdhc2_100mhz: usdhc2-100mhzgrp { 1243 1225 fsl,pins = 1244 - <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x10>, 1245 1226 <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x94>, 1246 1227 <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x94>, 1247 1228 <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x94>, ··· 1251 1234 1252 1235 pinctrl_usdhc2_200mhz: usdhc2-200mhzgrp { 1253 1236 fsl,pins = 1254 - <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x10>, 1255 1237 <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x96>, 1256 1238 <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x96>, 1257 1239 <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x96>, ··· 1262 1246 /* Avoid backfeeding with removed card power */ 1263 1247 pinctrl_usdhc2_sleep: usdhc2slpgrp { 1264 1248 fsl,pins = 1265 - <MX8MM_IOMUXC_GPIO1_IO04_USDHC2_VSELECT 0x0>, 1266 1249 <MX8MM_IOMUXC_SD2_CLK_USDHC2_CLK 0x0>, 1267 1250 <MX8MM_IOMUXC_SD2_CMD_USDHC2_CMD 0x0>, 1268 1251 <MX8MM_IOMUXC_SD2_DATA0_USDHC2_DATA0 0x0>,
+26
arch/arm64/boot/dts/freescale/imx8mp-nominal.dtsi
··· 24 24 fsl,operating-mode = "nominal"; 25 25 }; 26 26 27 + &gpu2d { 28 + assigned-clocks = <&clk IMX8MP_CLK_GPU2D_CORE>; 29 + assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>; 30 + assigned-clock-rates = <800000000>; 31 + }; 32 + 33 + &gpu3d { 34 + assigned-clocks = <&clk IMX8MP_CLK_GPU3D_CORE>, 35 + <&clk IMX8MP_CLK_GPU3D_SHADER_CORE>; 36 + assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>, 37 + <&clk IMX8MP_SYS_PLL1_800M>; 38 + assigned-clock-rates = <800000000>, <800000000>; 39 + }; 40 + 27 41 &pgc_hdmimix { 28 42 assigned-clocks = <&clk IMX8MP_CLK_HDMI_AXI>, 29 43 <&clk IMX8MP_CLK_HDMI_APB>; ··· 58 44 assigned-clock-parents = <&clk IMX8MP_SYS_PLL3_OUT>, 59 45 <&clk IMX8MP_SYS_PLL3_OUT>; 60 46 assigned-clock-rates = <600000000>, <300000000>; 47 + }; 48 + 49 + &pgc_mlmix { 50 + assigned-clocks = <&clk IMX8MP_CLK_ML_CORE>, 51 + <&clk IMX8MP_CLK_ML_AXI>, 52 + <&clk IMX8MP_CLK_ML_AHB>; 53 + assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>, 54 + <&clk IMX8MP_SYS_PLL1_800M>, 55 + <&clk IMX8MP_SYS_PLL1_800M>; 56 + assigned-clock-rates = <800000000>, 57 + <800000000>, 58 + <300000000>; 61 59 }; 62 60 63 61 &media_blk_ctrl {
+4 -4
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 1626 1626 reg = <0 0x4c300000 0 0x10000>, 1627 1627 <0 0x60100000 0 0xfe00000>, 1628 1628 <0 0x4c360000 0 0x10000>, 1629 - <0 0x4c340000 0 0x2000>; 1629 + <0 0x4c340000 0 0x4000>; 1630 1630 reg-names = "dbi", "config", "atu", "app"; 1631 1631 ranges = <0x81000000 0x0 0x00000000 0x0 0x6ff00000 0 0x00100000>, 1632 1632 <0x82000000 0x0 0x10000000 0x9 0x10000000 0 0x10000000>; ··· 1673 1673 reg = <0 0x4c300000 0 0x10000>, 1674 1674 <0 0x4c360000 0 0x1000>, 1675 1675 <0 0x4c320000 0 0x1000>, 1676 - <0 0x4c340000 0 0x2000>, 1676 + <0 0x4c340000 0 0x4000>, 1677 1677 <0 0x4c370000 0 0x10000>, 1678 1678 <0x9 0 1 0>; 1679 1679 reg-names = "dbi","atu", "dbi2", "app", "dma", "addr_space"; ··· 1700 1700 reg = <0 0x4c380000 0 0x10000>, 1701 1701 <8 0x80100000 0 0xfe00000>, 1702 1702 <0 0x4c3e0000 0 0x10000>, 1703 - <0 0x4c3c0000 0 0x2000>; 1703 + <0 0x4c3c0000 0 0x4000>; 1704 1704 reg-names = "dbi", "config", "atu", "app"; 1705 1705 ranges = <0x81000000 0 0x00000000 0x8 0x8ff00000 0 0x00100000>, 1706 1706 <0x82000000 0 0x10000000 0xa 0x10000000 0 0x10000000>; ··· 1749 1749 reg = <0 0x4c380000 0 0x10000>, 1750 1750 <0 0x4c3e0000 0 0x1000>, 1751 1751 <0 0x4c3a0000 0 0x1000>, 1752 - <0 0x4c3c0000 0 0x2000>, 1752 + <0 0x4c3c0000 0 0x4000>, 1753 1753 <0 0x4c3f0000 0 0x10000>, 1754 1754 <0xa 0 1 0>; 1755 1755 reg-names = "dbi", "atu", "dbi2", "app", "dma", "addr_space";
+4 -4
arch/arm64/boot/dts/st/stm32mp211.dtsi
··· 116 116 }; 117 117 118 118 intc: interrupt-controller@4ac10000 { 119 - compatible = "arm,cortex-a7-gic"; 119 + compatible = "arm,gic-400"; 120 120 reg = <0x4ac10000 0x0 0x1000>, 121 - <0x4ac20000 0x0 0x2000>, 122 - <0x4ac40000 0x0 0x2000>, 123 - <0x4ac60000 0x0 0x2000>; 121 + <0x4ac20000 0x0 0x20000>, 122 + <0x4ac40000 0x0 0x20000>, 123 + <0x4ac60000 0x0 0x20000>; 124 124 #interrupt-cells = <3>; 125 125 interrupt-controller; 126 126 };
+4 -5
arch/arm64/boot/dts/st/stm32mp231.dtsi
··· 1201 1201 }; 1202 1202 1203 1203 intc: interrupt-controller@4ac10000 { 1204 - compatible = "arm,cortex-a7-gic"; 1204 + compatible = "arm,gic-400"; 1205 1205 reg = <0x4ac10000 0x1000>, 1206 - <0x4ac20000 0x2000>, 1207 - <0x4ac40000 0x2000>, 1208 - <0x4ac60000 0x2000>; 1206 + <0x4ac20000 0x20000>, 1207 + <0x4ac40000 0x20000>, 1208 + <0x4ac60000 0x20000>; 1209 1209 #interrupt-cells = <3>; 1210 - #address-cells = <1>; 1211 1210 interrupt-controller; 1212 1211 }; 1213 1212 };
+4 -5
arch/arm64/boot/dts/st/stm32mp251.dtsi
··· 115 115 }; 116 116 117 117 intc: interrupt-controller@4ac00000 { 118 - compatible = "arm,cortex-a7-gic"; 118 + compatible = "arm,gic-400"; 119 119 #interrupt-cells = <3>; 120 - #address-cells = <1>; 121 120 interrupt-controller; 122 121 reg = <0x0 0x4ac10000 0x0 0x1000>, 123 - <0x0 0x4ac20000 0x0 0x2000>, 124 - <0x0 0x4ac40000 0x0 0x2000>, 125 - <0x0 0x4ac60000 0x0 0x2000>; 122 + <0x0 0x4ac20000 0x0 0x20000>, 123 + <0x0 0x4ac40000 0x0 0x20000>, 124 + <0x0 0x4ac60000 0x0 0x20000>; 126 125 }; 127 126 128 127 psci {
+1 -1
arch/arm64/include/asm/el2_setup.h
··· 52 52 mrs x0, id_aa64mmfr1_el1 53 53 ubfx x0, x0, #ID_AA64MMFR1_EL1_HCX_SHIFT, #4 54 54 cbz x0, .Lskip_hcrx_\@ 55 - mov_q x0, HCRX_HOST_FLAGS 55 + mov_q x0, (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En | HCRX_EL2_EnFPM) 56 56 57 57 /* Enable GCS if supported */ 58 58 mrs_s x1, SYS_ID_AA64PFR1_EL1
+1 -2
arch/arm64/include/asm/kvm_arm.h
··· 100 100 HCR_FMO | HCR_IMO | HCR_PTW | HCR_TID3 | HCR_TID1) 101 101 #define HCR_HOST_NVHE_FLAGS (HCR_RW | HCR_API | HCR_APK | HCR_ATA) 102 102 #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC) 103 - #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H) 103 + #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H | HCR_AMO | HCR_IMO | HCR_FMO) 104 104 105 - #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En | HCRX_EL2_EnFPM) 106 105 #define MPAMHCR_HOST_FLAGS 0 107 106 108 107 /* TCR_EL2 Registers bits */
+13
arch/arm64/include/asm/vdso/gettimeofday.h
··· 99 99 return res; 100 100 } 101 101 102 + #if IS_ENABLED(CONFIG_CC_IS_GCC) && IS_ENABLED(CONFIG_PAGE_SIZE_64KB) 103 + static __always_inline const struct vdso_time_data *__arch_get_vdso_u_time_data(void) 104 + { 105 + const struct vdso_time_data *ret = &vdso_u_time_data; 106 + 107 + /* Work around invalid absolute relocations */ 108 + OPTIMIZER_HIDE_VAR(ret); 109 + 110 + return ret; 111 + } 112 + #define __arch_get_vdso_u_time_data __arch_get_vdso_u_time_data 113 + #endif /* IS_ENABLED(CONFIG_CC_IS_GCC) && IS_ENABLED(CONFIG_PAGE_SIZE_64KB) */ 114 + 102 115 #endif /* !__ASSEMBLY__ */ 103 116 104 117 #endif /* __ASM_VDSO_GETTIMEOFDAY_H */
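The OPTIMIZER_HIDE_VAR() trick used in this vDSO workaround can be reproduced outside the kernel. Below is a minimal sketch: the macro mirrors the kernel's empty-asm definition, while the surrounding names (`vdso_time_data_stub`, `get_time_data`) are illustrative stand-ins, not the real vDSO symbols.

```c
/*
 * Minimal version of the kernel's OPTIMIZER_HIDE_VAR(): an empty asm
 * statement that forces the compiler to treat the variable as opaque,
 * so it cannot fold the address back into an absolute relocation.
 */
#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "+r" (var))

static const int vdso_time_data_stub = 42;  /* stand-in for vdso_u_time_data */

static const int *get_time_data(void)
{
	const int *ret = &vdso_time_data_stub;

	OPTIMIZER_HIDE_VAR(ret);  /* the compiler no longer "knows" ret */
	return ret;
}
```

The returned pointer is unchanged at run time; only the compiler's ability to reason about its value is defeated, which is exactly what the patch relies on to avoid the invalid absolute relocations.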
+8 -1
arch/arm64/kernel/cpufeature.c
··· 114 114 115 115 DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS); 116 116 117 - bool arm64_use_ng_mappings = false; 117 + /* 118 + * arm64_use_ng_mappings must be placed in the .data section, otherwise it 119 + * ends up in the .bss section where it is initialized in early_map_kernel() 120 + * after the MMU (with the idmap) was enabled. create_init_idmap() - which 121 + * runs before early_map_kernel() and reads the variable via PTE_MAYBE_NG - 122 + * may end up generating incorrect idmap page table attributes. 123 + */ 124 + bool arm64_use_ng_mappings __read_mostly = false; 118 125 EXPORT_SYMBOL(arm64_use_ng_mappings); 119 126 120 127 DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;
+6 -7
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 235 235 236 236 static inline void __activate_traps_common(struct kvm_vcpu *vcpu) 237 237 { 238 + struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt); 239 + 238 240 /* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */ 239 241 write_sysreg(1 << 15, hstr_el2); 240 242 ··· 247 245 * EL1 instead of being trapped to EL2. 248 246 */ 249 247 if (system_supports_pmuv3()) { 250 - struct kvm_cpu_context *hctxt; 251 - 252 248 write_sysreg(0, pmselr_el0); 253 249 254 - hctxt = host_data_ptr(host_ctxt); 255 250 ctxt_sys_reg(hctxt, PMUSERENR_EL0) = read_sysreg(pmuserenr_el0); 256 251 write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0); 257 252 vcpu_set_flag(vcpu, PMUSERENR_ON_CPU); ··· 268 269 hcrx &= ~clr; 269 270 } 270 271 272 + ctxt_sys_reg(hctxt, HCRX_EL2) = read_sysreg_s(SYS_HCRX_EL2); 271 273 write_sysreg_s(hcrx, SYS_HCRX_EL2); 272 274 } 273 275 ··· 278 278 279 279 static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu) 280 280 { 281 + struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt); 282 + 281 283 write_sysreg(*host_data_ptr(host_debug_state.mdcr_el2), mdcr_el2); 282 284 283 285 write_sysreg(0, hstr_el2); 284 286 if (system_supports_pmuv3()) { 285 - struct kvm_cpu_context *hctxt; 286 - 287 - hctxt = host_data_ptr(host_ctxt); 288 287 write_sysreg(ctxt_sys_reg(hctxt, PMUSERENR_EL0), pmuserenr_el0); 289 288 vcpu_clear_flag(vcpu, PMUSERENR_ON_CPU); 290 289 } 291 290 292 291 if (cpus_have_final_cap(ARM64_HAS_HCX)) 293 - write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2); 292 + write_sysreg_s(ctxt_sys_reg(hctxt, HCRX_EL2), SYS_HCRX_EL2); 294 293 295 294 __deactivate_traps_hfgxtr(vcpu); 296 295 __deactivate_traps_mpam();
+1 -1
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 503 503 { 504 504 int ret; 505 505 506 - if (!addr_is_memory(addr)) 506 + if (!range_is_memory(addr, addr + size)) 507 507 return -EPERM; 508 508 509 509 ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
+21 -15
arch/arm64/kvm/hyp/vgic-v3-sr.c
··· 429 429 /* 430 430 * To check whether we have a MMIO-based (GICv2 compatible) 431 431 * CPU interface, we need to disable the system register 432 - * view. To do that safely, we have to prevent any interrupt 433 - * from firing (which would be deadly). 432 + * view. 434 433 * 435 - * Note that this only makes sense on VHE, as interrupts are 436 - * already masked for nVHE as part of the exception entry to 437 - * EL2. 438 - */ 439 - if (has_vhe()) 440 - flags = local_daif_save(); 441 - 442 - /* 443 434 * Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates 444 435 * that to be able to set ICC_SRE_EL1.SRE to 0, all the 445 436 * interrupt overrides must be set. You've got to love this. 437 + * 438 + * As we always run VHE with HCR_xMO set, no extra xMO 439 + * manipulation is required in that case. 440 + * 441 + * To safely disable SRE, we have to prevent any interrupt 442 + * from firing (which would be deadly). This only makes sense 443 + * on VHE, as interrupts are already masked for nVHE as part 444 + * of the exception entry to EL2. 446 445 */ 447 - sysreg_clear_set(hcr_el2, 0, HCR_AMO | HCR_FMO | HCR_IMO); 448 - isb(); 446 + if (has_vhe()) { 447 + flags = local_daif_save(); 448 + } else { 449 + sysreg_clear_set(hcr_el2, 0, HCR_AMO | HCR_FMO | HCR_IMO); 450 + isb(); 451 + } 452 + 449 453 write_gicreg(0, ICC_SRE_EL1); 450 454 isb(); 451 455 ··· 457 453 458 454 write_gicreg(sre, ICC_SRE_EL1); 459 455 isb(); 460 - sysreg_clear_set(hcr_el2, HCR_AMO | HCR_FMO | HCR_IMO, 0); 461 - isb(); 462 456 463 - if (has_vhe()) 457 + if (has_vhe()) { 464 458 local_daif_restore(flags); 459 + } else { 460 + sysreg_clear_set(hcr_el2, HCR_AMO | HCR_FMO | HCR_IMO, 0); 461 + isb(); 462 + } 465 463 466 464 val = (val & ICC_SRE_EL1_SRE) ? 0 : (1ULL << 63); 467 465 val |= read_gicreg(ICH_VTR_EL2);
+8 -5
arch/arm64/kvm/mmu.c
··· 1501 1501 return -EFAULT; 1502 1502 } 1503 1503 1504 + if (!is_protected_kvm_enabled()) 1505 + memcache = &vcpu->arch.mmu_page_cache; 1506 + else 1507 + memcache = &vcpu->arch.pkvm_memcache; 1508 + 1504 1509 /* 1505 1510 * Permission faults just need to update the existing leaf entry, 1506 1511 * and so normally don't require allocations from the memcache. The ··· 1515 1510 if (!fault_is_perm || (logging_active && write_fault)) { 1516 1511 int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu); 1517 1512 1518 - if (!is_protected_kvm_enabled()) { 1519 - memcache = &vcpu->arch.mmu_page_cache; 1513 + if (!is_protected_kvm_enabled()) 1520 1514 ret = kvm_mmu_topup_memory_cache(memcache, min_pages); 1521 - } else { 1522 - memcache = &vcpu->arch.pkvm_memcache; 1515 + else 1523 1516 ret = topup_hyp_memcache(memcache, min_pages); 1524 - } 1517 + 1525 1518 if (ret) 1526 1519 return ret; 1527 1520 }
+6
arch/arm64/kvm/sys_regs.c
··· 1945 1945 if ((hw_val & mpam_mask) == (user_val & mpam_mask)) 1946 1946 user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK; 1947 1947 1948 + /* Fail the guest's request to disable the AA64 ISA at EL{0,1,2} */ 1949 + if (!FIELD_GET(ID_AA64PFR0_EL1_EL0, user_val) || 1950 + !FIELD_GET(ID_AA64PFR0_EL1_EL1, user_val) || 1951 + (vcpu_has_nv(vcpu) && !FIELD_GET(ID_AA64PFR0_EL1_EL2, user_val))) 1952 + return -EINVAL; 1953 + 1948 1954 return set_id_reg(vcpu, rd, user_val); 1949 1955 } 1950 1956
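The new check above rejects guest-written ID register values whose EL0/EL1 fields are zero. The field extraction can be sketched with a userspace FIELD_GET(). The masks below follow the Arm ARM layout of ID_AA64PFR0_EL1 (EL0 in bits [3:0], EL1 in bits [7:4]) but should be treated as illustrative, and -22 stands in for -EINVAL.

```c
#include <stdint.h>

/* Userspace stand-in for the kernel's FIELD_GET(): extract a contiguous
 * bitfield identified by its mask. */
#define FIELD_GET(mask, val)  (((val) & (mask)) >> __builtin_ctzll(mask))

#define ID_AA64PFR0_EL1_EL0_MASK  0x000000000000000fULL  /* bits [3:0] */
#define ID_AA64PFR0_EL1_EL1_MASK  0x00000000000000f0ULL  /* bits [7:4] */

/* Mirror of the patch's logic (EL2 handling omitted): fail a guest
 * request that claims AArch64 is not implemented at EL0 or EL1. */
static int check_pfr0(uint64_t user_val)
{
	if (!FIELD_GET(ID_AA64PFR0_EL1_EL0_MASK, user_val) ||
	    !FIELD_GET(ID_AA64PFR0_EL1_EL1_MASK, user_val))
		return -22;  /* -EINVAL */
	return 0;
}
```

A value with both fields non-zero passes; zeroing either field is rejected before the register is committed.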
+2 -3
arch/mips/include/asm/idle.h
··· 6 6 #include <linux/linkage.h> 7 7 8 8 extern void (*cpu_wait)(void); 9 - extern void r4k_wait(void); 10 - extern asmlinkage void __r4k_wait(void); 9 + extern asmlinkage void r4k_wait(void); 11 10 extern void r4k_wait_irqoff(void); 12 11 13 - static inline int using_rollback_handler(void) 12 + static inline int using_skipover_handler(void) 14 13 { 15 14 return cpu_wait == r4k_wait; 16 15 }
+2 -1
arch/mips/include/asm/ptrace.h
··· 65 65 66 66 /* Query offset/name of register from its name/offset */ 67 67 extern int regs_query_register_offset(const char *name); 68 - #define MAX_REG_OFFSET (offsetof(struct pt_regs, __last)) 68 + #define MAX_REG_OFFSET \ 69 + (offsetof(struct pt_regs, __last) - sizeof(unsigned long)) 69 70 70 71 /** 71 72 * regs_get_register() - get register value from its offset
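The off-by-one being fixed here is easiest to see with a toy pt_regs: offsetof(__last) is the first byte past the register area, so the valid maximum offset is one unsigned long lower. The struct below is a simplified stand-in, not the real MIPS register layout.

```c
#include <stddef.h>

/* Simplified stand-in for the MIPS struct pt_regs. */
struct pt_regs {
	unsigned long regs[32];
	unsigned long cp0_epc;    /* last real register */
	unsigned long __last[0];  /* end-of-registers marker */
};

/* Old (buggy) bound: points one slot past cp0_epc. */
#define MAX_REG_OFFSET_OLD  (offsetof(struct pt_regs, __last))
/* Fixed bound: the offset of the last valid register. */
#define MAX_REG_OFFSET \
	(offsetof(struct pt_regs, __last) - sizeof(unsigned long))

static unsigned long regs_get_register(struct pt_regs *regs,
				       unsigned int offset)
{
	if (offset > MAX_REG_OFFSET)
		return 0;  /* out-of-range reads are rejected */
	return *(unsigned long *)((char *)regs + offset);
}
```

With the old bound, an offset equal to MAX_REG_OFFSET_OLD passed the check and read one word past the end of the register area; the fixed bound makes cp0_epc the last readable register.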
+41 -30
arch/mips/kernel/genex.S
··· 104 104 105 105 __FINIT 106 106 107 - .align 5 /* 32 byte rollback region */ 108 - LEAF(__r4k_wait) 109 - .set push 110 - .set noreorder 111 - /* start of rollback region */ 112 - LONG_L t0, TI_FLAGS($28) 113 - nop 114 - andi t0, _TIF_NEED_RESCHED 115 - bnez t0, 1f 116 - nop 117 - nop 118 - nop 119 - #ifdef CONFIG_CPU_MICROMIPS 120 - nop 121 - nop 122 - nop 123 - nop 124 - #endif 107 + .section .cpuidle.text,"ax" 108 + /* Align to 32 bytes for the maximum idle interrupt region size. */ 109 + .align 5 110 + LEAF(r4k_wait) 111 + /* Keep the ISA bit clear for calculations on local labels here. */ 112 + 0: .fill 0 113 + /* Start of idle interrupt region. */ 114 + local_irq_enable 115 + /* 116 + * If an interrupt lands here, before going idle on the next 117 + * instruction, we must *NOT* go idle since the interrupt could 118 + * have set TIF_NEED_RESCHED or caused a timer to need resched. 119 + * Fall through -- see skipover_handler below -- and have the 120 + * idle loop take care of things. 121 + */ 122 + 1: .fill 0 123 + /* The R2 EI/EHB sequence takes 8 bytes, otherwise pad up. */ 124 + .if 1b - 0b > 32 125 + .error "overlong idle interrupt region" 126 + .elseif 1b - 0b > 8 127 + .align 4 128 + .endif 129 + 2: .fill 0 130 + .equ r4k_wait_idle_size, 2b - 0b 131 + /* End of idle interrupt region; size has to be a power of 2. 
*/ 125 132 .set MIPS_ISA_ARCH_LEVEL_RAW 133 + r4k_wait_insn: 126 134 wait 127 - /* end of rollback region (the region size must be power of two) */ 128 - 1: 135 + r4k_wait_exit: 136 + .set mips0 137 + local_irq_disable 129 138 jr ra 130 - nop 131 - .set pop 132 - END(__r4k_wait) 139 + END(r4k_wait) 140 + .previous 133 141 134 - .macro BUILD_ROLLBACK_PROLOGUE handler 135 - FEXPORT(rollback_\handler) 142 + .macro BUILD_SKIPOVER_PROLOGUE handler 143 + FEXPORT(skipover_\handler) 136 144 .set push 137 145 .set noat 138 146 MFC0 k0, CP0_EPC 139 - PTR_LA k1, __r4k_wait 140 - ori k0, 0x1f /* 32 byte rollback region */ 141 - xori k0, 0x1f 147 + /* Subtract/add 2 to let the ISA bit propagate through the mask. */ 148 + PTR_LA k1, r4k_wait_insn - 2 149 + ori k0, r4k_wait_idle_size - 2 150 + .set noreorder 142 151 bne k0, k1, \handler 152 + PTR_ADDIU k0, r4k_wait_exit - r4k_wait_insn + 2 153 + .set reorder 143 154 MTC0 k0, CP0_EPC 144 155 .set pop 145 156 .endm 146 157 147 158 .align 5 148 - BUILD_ROLLBACK_PROLOGUE handle_int 159 + BUILD_SKIPOVER_PROLOGUE handle_int 149 160 NESTED(handle_int, PT_SIZE, sp) 150 161 .cfi_signal_frame 151 162 #ifdef CONFIG_TRACE_IRQFLAGS ··· 276 265 * This prototype is copied to ebase + n*IntCtl.VS and patched 277 266 * to invoke the handler 278 267 */ 279 - BUILD_ROLLBACK_PROLOGUE except_vec_vi 268 + BUILD_SKIPOVER_PROLOGUE except_vec_vi 280 269 NESTED(except_vec_vi, 0, sp) 281 270 SAVE_SOME docfi=1 282 271 SAVE_AT docfi=1
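The skipover prologue above works because the idle interrupt region is aligned to its power-of-two size, so a single OR with `size - 1` maps every address inside the region to the same canonical value (the assembly folds the MIPS16e/microMIPS ISA bit into this with the -2/+2 adjustments). A C sketch of the underlying mask test, assuming `base` is aligned to `size`; the helper name is illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True iff epc lies inside [base, base + size), where size is a power
 * of two and base is aligned to size: OR-ing with the mask maps every
 * address in the region to the same value, like the prologue's ori. */
static bool in_idle_region(uintptr_t epc, uintptr_t base, uintptr_t size)
{
	return (epc | (size - 1)) == (base | (size - 1));
}
```

When the test hits, the prologue advances EPC past the `wait` instruction so the interrupted CPU falls back into the idle loop instead of going idle with work pending.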
-7
arch/mips/kernel/idle.c
··· 35 35 write_c0_conf(cfg | R30XX_CONF_HALT); 36 36 } 37 37 38 - void __cpuidle r4k_wait(void) 39 - { 40 - raw_local_irq_enable(); 41 - __r4k_wait(); 42 - raw_local_irq_disable(); 43 - } 44 - 45 38 /* 46 39 * This variant is preferable as it allows testing need_resched and going to 47 40 * sleep depending on the outcome atomically. Unfortunately the "It is
+4
arch/mips/kernel/smp-cps.c
··· 332 332 mips_cps_cluster_bootcfg = kcalloc(nclusters, 333 333 sizeof(*mips_cps_cluster_bootcfg), 334 334 GFP_KERNEL); 335 + if (!mips_cps_cluster_bootcfg) 336 + goto err_out; 335 337 336 338 if (nclusters > 1) 337 339 mips_cm_update_property(); ··· 350 348 mips_cps_cluster_bootcfg[cl].core_power = 351 349 kcalloc(BITS_TO_LONGS(ncores), sizeof(unsigned long), 352 350 GFP_KERNEL); 351 + if (!mips_cps_cluster_bootcfg[cl].core_power) 352 + goto err_out; 353 353 354 354 /* Allocate VPE boot configuration structs */ 355 355 for (c = 0; c < ncores; c++) {
+5 -5
arch/mips/kernel/traps.c
··· 77 77 #include "access-helper.h" 78 78 79 79 extern void check_wait(void); 80 - extern asmlinkage void rollback_handle_int(void); 80 + extern asmlinkage void skipover_handle_int(void); 81 81 extern asmlinkage void handle_int(void); 82 82 extern asmlinkage void handle_adel(void); 83 83 extern asmlinkage void handle_ades(void); ··· 2066 2066 { 2067 2067 extern const u8 except_vec_vi[]; 2068 2068 extern const u8 except_vec_vi_ori[], except_vec_vi_end[]; 2069 - extern const u8 rollback_except_vec_vi[]; 2069 + extern const u8 skipover_except_vec_vi[]; 2070 2070 unsigned long handler; 2071 2071 unsigned long old_handler = vi_handlers[n]; 2072 2072 int srssets = current_cpu_data.srsets; ··· 2095 2095 change_c0_srsmap(0xf << n*4, 0 << n*4); 2096 2096 } 2097 2097 2098 - vec_start = using_rollback_handler() ? rollback_except_vec_vi : 2098 + vec_start = using_skipover_handler() ? skipover_except_vec_vi : 2099 2099 except_vec_vi; 2100 2100 #if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_BIG_ENDIAN) 2101 2101 ori_offset = except_vec_vi_ori - vec_start + 2; ··· 2426 2426 if (board_be_init) 2427 2427 board_be_init(); 2428 2428 2429 - set_except_vector(EXCCODE_INT, using_rollback_handler() ? 2430 - rollback_handle_int : handle_int); 2429 + set_except_vector(EXCCODE_INT, using_skipover_handler() ? 2430 + skipover_handle_int : handle_int); 2431 2431 set_except_vector(EXCCODE_MOD, handle_tlbm); 2432 2432 set_except_vector(EXCCODE_TLBL, handle_tlbl); 2433 2433 set_except_vector(EXCCODE_TLBS, handle_tlbs);
+6
arch/riscv/kernel/process.c
··· 275 275 unsigned long pmm; 276 276 u8 pmlen; 277 277 278 + if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM)) 279 + return -EINVAL; 280 + 278 281 if (is_compat_thread(ti)) 279 282 return -EINVAL; 280 283 ··· 332 329 { 333 330 struct thread_info *ti = task_thread_info(task); 334 331 long ret = 0; 332 + 333 + if (!riscv_has_extension_unlikely(RISCV_ISA_EXT_SUPM)) 334 + return -EINVAL; 335 335 336 336 if (is_compat_thread(ti)) 337 337 return -EINVAL;
+37 -27
arch/riscv/kernel/traps.c
··· 198 198 DO_ERROR_INFO(do_trap_load_fault, 199 199 SIGSEGV, SEGV_ACCERR, "load access fault"); 200 200 201 - asmlinkage __visible __trap_section void do_trap_load_misaligned(struct pt_regs *regs) 201 + enum misaligned_access_type { 202 + MISALIGNED_STORE, 203 + MISALIGNED_LOAD, 204 + }; 205 + static const struct { 206 + const char *type_str; 207 + int (*handler)(struct pt_regs *regs); 208 + } misaligned_handler[] = { 209 + [MISALIGNED_STORE] = { 210 + .type_str = "Oops - store (or AMO) address misaligned", 211 + .handler = handle_misaligned_store, 212 + }, 213 + [MISALIGNED_LOAD] = { 214 + .type_str = "Oops - load address misaligned", 215 + .handler = handle_misaligned_load, 216 + }, 217 + }; 218 + 219 + static void do_trap_misaligned(struct pt_regs *regs, enum misaligned_access_type type) 202 220 { 221 + irqentry_state_t state; 222 + 203 223 if (user_mode(regs)) { 204 224 irqentry_enter_from_user_mode(regs); 225 + local_irq_enable(); 226 + } else { 227 + state = irqentry_nmi_enter(regs); 228 + } 205 229 206 - if (handle_misaligned_load(regs)) 207 - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc, 208 - "Oops - load address misaligned"); 230 + if (misaligned_handler[type].handler(regs)) 231 + do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc, 232 + misaligned_handler[type].type_str); 209 233 234 + if (user_mode(regs)) { 235 + local_irq_disable(); 210 236 irqentry_exit_to_user_mode(regs); 211 237 } else { 212 - irqentry_state_t state = irqentry_nmi_enter(regs); 213 - 214 - if (handle_misaligned_load(regs)) 215 - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc, 216 - "Oops - load address misaligned"); 217 - 218 238 irqentry_nmi_exit(regs, state); 219 239 } 240 + } 241 + 242 + asmlinkage __visible __trap_section void do_trap_load_misaligned(struct pt_regs *regs) 243 + { 244 + do_trap_misaligned(regs, MISALIGNED_LOAD); 220 245 } 221 246 222 247 asmlinkage __visible __trap_section void do_trap_store_misaligned(struct pt_regs *regs) 223 248 { 224 - if 
(user_mode(regs)) { 225 - irqentry_enter_from_user_mode(regs); 226 - 227 - if (handle_misaligned_store(regs)) 228 - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc, 229 - "Oops - store (or AMO) address misaligned"); 230 - 231 - irqentry_exit_to_user_mode(regs); 232 - } else { 233 - irqentry_state_t state = irqentry_nmi_enter(regs); 234 - 235 - if (handle_misaligned_store(regs)) 236 - do_trap_error(regs, SIGBUS, BUS_ADRALN, regs->epc, 237 - "Oops - store (or AMO) address misaligned"); 238 - 239 - irqentry_nmi_exit(regs, state); 240 - } 249 + do_trap_misaligned(regs, MISALIGNED_STORE); 241 250 } 251 + 242 252 DO_ERROR_INFO(do_trap_store_fault, 243 253 SIGSEGV, SEGV_ACCERR, "store (or AMO) access fault"); 244 254 DO_ERROR_INFO(do_trap_ecall_s,
+18 -1
arch/riscv/kernel/traps_misaligned.c
··· 88 88 #define INSN_MATCH_C_FSWSP 0xe002 89 89 #define INSN_MASK_C_FSWSP 0xe003 90 90 91 + #define INSN_MATCH_C_LHU 0x8400 92 + #define INSN_MASK_C_LHU 0xfc43 93 + #define INSN_MATCH_C_LH 0x8440 94 + #define INSN_MASK_C_LH 0xfc43 95 + #define INSN_MATCH_C_SH 0x8c00 96 + #define INSN_MASK_C_SH 0xfc43 97 + 91 98 #define INSN_LEN(insn) ((((insn) & 0x3) < 0x3) ? 2 : 4) 92 99 93 100 #if defined(CONFIG_64BIT) ··· 275 268 int __ret; \ 276 269 \ 277 270 if (user_mode(regs)) { \ 278 - __ret = __get_user(insn, (type __user *) insn_addr); \ 271 + __ret = get_user(insn, (type __user *) insn_addr); \ 279 272 } else { \ 280 273 insn = *(type *)insn_addr; \ 281 274 __ret = 0; \ ··· 438 431 fp = 1; 439 432 len = 4; 440 433 #endif 434 + } else if ((insn & INSN_MASK_C_LHU) == INSN_MATCH_C_LHU) { 435 + len = 2; 436 + insn = RVC_RS2S(insn) << SH_RD; 437 + } else if ((insn & INSN_MASK_C_LH) == INSN_MATCH_C_LH) { 438 + len = 2; 439 + shift = 8 * (sizeof(ulong) - len); 440 + insn = RVC_RS2S(insn) << SH_RD; 441 441 } else { 442 442 regs->epc = epc; 443 443 return -1; ··· 544 530 len = 4; 545 531 val.data_ulong = GET_F32_RS2C(insn, regs); 546 532 #endif 533 + } else if ((insn & INSN_MASK_C_SH) == INSN_MATCH_C_SH) { 534 + len = 2; 535 + val.data_ulong = GET_RS2S(insn, regs); 547 536 } else { 548 537 regs->epc = epc; 549 538 return -1;
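In the hunk above, the C.LH path sets `shift = 8 * (sizeof(ulong) - len)` so the byte-assembled value can later be sign-extended by shifting it to the top of the register and arithmetically back down (C.LHU leaves `shift` at zero for zero extension). A sketch of that extension step for a 16-bit value; the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Sign-extend a 2-byte quantity into a long using the shift pair the
 * emulation code relies on; the left shift is done unsigned to avoid
 * signed-overflow UB, the right shift is the arithmetic one. */
static long sign_extend_halfword(uint16_t raw)
{
	const unsigned int len = 2;
	const unsigned int shift = 8 * (sizeof(long) - len);

	return (long)((unsigned long)raw << shift) >> shift;
}
```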
+2
arch/riscv/kvm/vcpu.c
··· 77 77 memcpy(cntx, reset_cntx, sizeof(*cntx)); 78 78 spin_unlock(&vcpu->arch.reset_cntx_lock); 79 79 80 + memset(&vcpu->arch.smstateen_csr, 0, sizeof(vcpu->arch.smstateen_csr)); 81 + 80 82 kvm_riscv_vcpu_fp_reset(vcpu); 81 83 82 84 kvm_riscv_vcpu_vector_reset(vcpu);
+19 -9
arch/s390/configs/debug_defconfig
··· 38 38 CONFIG_CHECKPOINT_RESTORE=y 39 39 CONFIG_SCHED_AUTOGROUP=y 40 40 CONFIG_EXPERT=y 41 - # CONFIG_SYSFS_SYSCALL is not set 42 41 CONFIG_PROFILING=y 43 42 CONFIG_KEXEC=y 44 43 CONFIG_KEXEC_FILE=y ··· 91 92 CONFIG_IOSCHED_BFQ=y 92 93 CONFIG_BINFMT_MISC=m 93 94 CONFIG_ZSWAP=y 94 - CONFIG_ZSMALLOC=y 95 95 CONFIG_ZSMALLOC_STAT=y 96 96 CONFIG_SLAB_BUCKETS=y 97 97 CONFIG_SLUB_STATS=y ··· 393 395 CONFIG_NET_CLS_FLOW=m 394 396 CONFIG_NET_CLS_CGROUP=y 395 397 CONFIG_NET_CLS_BPF=m 398 + CONFIG_NET_CLS_FLOWER=m 399 + CONFIG_NET_CLS_MATCHALL=m 400 + CONFIG_NET_EMATCH=y 396 401 CONFIG_NET_CLS_ACT=y 397 402 CONFIG_NET_ACT_POLICE=m 398 403 CONFIG_NET_ACT_GACT=m ··· 406 405 CONFIG_NET_ACT_SIMP=m 407 406 CONFIG_NET_ACT_SKBEDIT=m 408 407 CONFIG_NET_ACT_CSUM=m 408 + CONFIG_NET_ACT_VLAN=m 409 + CONFIG_NET_ACT_TUNNEL_KEY=m 410 + CONFIG_NET_ACT_CT=m 409 411 CONFIG_NET_ACT_GATE=m 410 412 CONFIG_NET_TC_SKB_EXT=y 411 413 CONFIG_DNS_RESOLVER=y ··· 632 628 CONFIG_VIRTIO_BALLOON=m 633 629 CONFIG_VIRTIO_MEM=m 634 630 CONFIG_VIRTIO_INPUT=y 631 + CONFIG_VDPA=m 632 + CONFIG_VDPA_SIM=m 633 + CONFIG_VDPA_SIM_NET=m 634 + CONFIG_VDPA_SIM_BLOCK=m 635 + CONFIG_VDPA_USER=m 636 + CONFIG_MLX5_VDPA_NET=m 637 + CONFIG_VP_VDPA=m 635 638 CONFIG_VHOST_NET=m 636 639 CONFIG_VHOST_VSOCK=m 640 + CONFIG_VHOST_VDPA=m 637 641 CONFIG_EXT4_FS=y 638 642 CONFIG_EXT4_FS_POSIX_ACL=y 639 643 CONFIG_EXT4_FS_SECURITY=y ··· 666 654 CONFIG_BCACHEFS_FS=y 667 655 CONFIG_BCACHEFS_QUOTA=y 668 656 CONFIG_BCACHEFS_POSIX_ACL=y 669 - CONFIG_FS_DAX=y 670 657 CONFIG_EXPORTFS_BLOCK_OPS=y 671 658 CONFIG_FS_ENCRYPTION=y 672 659 CONFIG_FS_VERITY=y ··· 735 724 CONFIG_DLM=m 736 725 CONFIG_UNICODE=y 737 726 CONFIG_PERSISTENT_KEYRINGS=y 727 + CONFIG_BIG_KEYS=y 738 728 CONFIG_ENCRYPTED_KEYS=m 739 729 CONFIG_KEY_NOTIFICATIONS=y 740 730 CONFIG_SECURITY=y 741 - CONFIG_HARDENED_USERCOPY=y 742 - CONFIG_FORTIFY_SOURCE=y 743 731 CONFIG_SECURITY_SELINUX=y 744 732 CONFIG_SECURITY_SELINUX_BOOTPARAM=y 745 733 CONFIG_SECURITY_LOCKDOWN_LSM=y ··· 751 741 
CONFIG_IMA_DEFAULT_HASH_SHA256=y 752 742 CONFIG_IMA_WRITE_POLICY=y 753 743 CONFIG_IMA_APPRAISE=y 744 + CONFIG_FORTIFY_SOURCE=y 745 + CONFIG_HARDENED_USERCOPY=y 754 746 CONFIG_BUG_ON_DATA_CORRUPTION=y 755 747 CONFIG_CRYPTO_USER=m 756 748 # CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set ··· 768 756 CONFIG_CRYPTO_ANUBIS=m 769 757 CONFIG_CRYPTO_ARIA=m 770 758 CONFIG_CRYPTO_BLOWFISH=m 771 - CONFIG_CRYPTO_CAMELLIA=m 772 759 CONFIG_CRYPTO_CAST5=m 773 760 CONFIG_CRYPTO_CAST6=m 774 761 CONFIG_CRYPTO_DES=m ··· 812 801 CONFIG_CRYPTO_GHASH_S390=m 813 802 CONFIG_CRYPTO_AES_S390=m 814 803 CONFIG_CRYPTO_DES_S390=m 815 - CONFIG_CRYPTO_CHACHA_S390=m 816 804 CONFIG_CRYPTO_HMAC_S390=m 817 805 CONFIG_ZCRYPT=m 818 806 CONFIG_PKEY=m ··· 822 812 CONFIG_CRYPTO_PAES_S390=m 823 813 CONFIG_CRYPTO_DEV_VIRTIO=m 824 814 CONFIG_SYSTEM_BLACKLIST_KEYRING=y 815 + CONFIG_CRYPTO_KRB5=m 816 + CONFIG_CRYPTO_KRB5_SELFTESTS=y 825 817 CONFIG_CORDIC=m 826 - CONFIG_CRYPTO_LIB_CURVE25519=m 827 - CONFIG_CRYPTO_LIB_CHACHA20POLY1305=m 828 818 CONFIG_RANDOM32_SELFTEST=y 829 819 CONFIG_XZ_DEC_MICROLZMA=y 830 820 CONFIG_DMA_CMA=y
+17 -7
arch/s390/configs/defconfig
··· 36 36 CONFIG_CHECKPOINT_RESTORE=y 37 37 CONFIG_SCHED_AUTOGROUP=y 38 38 CONFIG_EXPERT=y 39 - # CONFIG_SYSFS_SYSCALL is not set 40 39 CONFIG_PROFILING=y 41 40 CONFIG_KEXEC=y 42 41 CONFIG_KEXEC_FILE=y ··· 85 86 CONFIG_IOSCHED_BFQ=y 86 87 CONFIG_BINFMT_MISC=m 87 88 CONFIG_ZSWAP=y 88 - CONFIG_ZSMALLOC=y 89 89 CONFIG_ZSMALLOC_STAT=y 90 90 CONFIG_SLAB_BUCKETS=y 91 91 # CONFIG_COMPAT_BRK is not set ··· 383 385 CONFIG_NET_CLS_FLOW=m 384 386 CONFIG_NET_CLS_CGROUP=y 385 387 CONFIG_NET_CLS_BPF=m 388 + CONFIG_NET_CLS_FLOWER=m 389 + CONFIG_NET_CLS_MATCHALL=m 390 + CONFIG_NET_EMATCH=y 386 391 CONFIG_NET_CLS_ACT=y 387 392 CONFIG_NET_ACT_POLICE=m 388 393 CONFIG_NET_ACT_GACT=m ··· 396 395 CONFIG_NET_ACT_SIMP=m 397 396 CONFIG_NET_ACT_SKBEDIT=m 398 397 CONFIG_NET_ACT_CSUM=m 398 + CONFIG_NET_ACT_VLAN=m 399 + CONFIG_NET_ACT_TUNNEL_KEY=m 400 + CONFIG_NET_ACT_CT=m 399 401 CONFIG_NET_ACT_GATE=m 400 402 CONFIG_NET_TC_SKB_EXT=y 401 403 CONFIG_DNS_RESOLVER=y ··· 622 618 CONFIG_VIRTIO_BALLOON=m 623 619 CONFIG_VIRTIO_MEM=m 624 620 CONFIG_VIRTIO_INPUT=y 621 + CONFIG_VDPA=m 622 + CONFIG_VDPA_SIM=m 623 + CONFIG_VDPA_SIM_NET=m 624 + CONFIG_VDPA_SIM_BLOCK=m 625 + CONFIG_VDPA_USER=m 626 + CONFIG_MLX5_VDPA_NET=m 627 + CONFIG_VP_VDPA=m 625 628 CONFIG_VHOST_NET=m 626 629 CONFIG_VHOST_VSOCK=m 630 + CONFIG_VHOST_VDPA=m 627 631 CONFIG_EXT4_FS=y 628 632 CONFIG_EXT4_FS_POSIX_ACL=y 629 633 CONFIG_EXT4_FS_SECURITY=y ··· 653 641 CONFIG_BCACHEFS_FS=m 654 642 CONFIG_BCACHEFS_QUOTA=y 655 643 CONFIG_BCACHEFS_POSIX_ACL=y 656 - CONFIG_FS_DAX=y 657 644 CONFIG_EXPORTFS_BLOCK_OPS=y 658 645 CONFIG_FS_ENCRYPTION=y 659 646 CONFIG_FS_VERITY=y ··· 722 711 CONFIG_DLM=m 723 712 CONFIG_UNICODE=y 724 713 CONFIG_PERSISTENT_KEYRINGS=y 714 + CONFIG_BIG_KEYS=y 725 715 CONFIG_ENCRYPTED_KEYS=m 726 716 CONFIG_KEY_NOTIFICATIONS=y 727 717 CONFIG_SECURITY=y ··· 754 742 CONFIG_CRYPTO_ANUBIS=m 755 743 CONFIG_CRYPTO_ARIA=m 756 744 CONFIG_CRYPTO_BLOWFISH=m 757 - CONFIG_CRYPTO_CAMELLIA=m 758 745 CONFIG_CRYPTO_CAST5=m 759 746 
CONFIG_CRYPTO_CAST6=m 760 747 CONFIG_CRYPTO_DES=m ··· 799 788 CONFIG_CRYPTO_GHASH_S390=m 800 789 CONFIG_CRYPTO_AES_S390=m 801 790 CONFIG_CRYPTO_DES_S390=m 802 - CONFIG_CRYPTO_CHACHA_S390=m 803 791 CONFIG_CRYPTO_HMAC_S390=m 804 792 CONFIG_ZCRYPT=m 805 793 CONFIG_PKEY=m ··· 809 799 CONFIG_CRYPTO_PAES_S390=m 810 800 CONFIG_CRYPTO_DEV_VIRTIO=m 811 801 CONFIG_SYSTEM_BLACKLIST_KEYRING=y 802 + CONFIG_CRYPTO_KRB5=m 803 + CONFIG_CRYPTO_KRB5_SELFTESTS=y 812 804 CONFIG_CORDIC=m 813 805 CONFIG_PRIME_NUMBERS=m 814 - CONFIG_CRYPTO_LIB_CURVE25519=m 815 - CONFIG_CRYPTO_LIB_CHACHA20POLY1305=m 816 806 CONFIG_XZ_DEC_MICROLZMA=y 817 807 CONFIG_DMA_CMA=y 818 808 CONFIG_CMA_SIZE_MBYTES=0
-1
arch/s390/configs/zfcpdump_defconfig
··· 70 70 CONFIG_DEBUG_INFO_DWARF4=y 71 71 CONFIG_DEBUG_FS=y 72 72 CONFIG_PANIC_ON_OOPS=y 73 - # CONFIG_SCHED_DEBUG is not set 74 73 CONFIG_RCU_CPU_STALL_TIMEOUT=60 75 74 # CONFIG_RCU_TRACE is not set 76 75 # CONFIG_FTRACE is not set
+2 -1
arch/s390/kernel/entry.S
··· 602 602 stmg %r0,%r7,__PT_R0(%r11) 603 603 stmg %r8,%r9,__PT_PSW(%r11) 604 604 mvc __PT_R8(64,%r11),0(%r14) 605 - stg %r10,__PT_ORIG_GPR2(%r11) # store last break to orig_gpr2 605 + GET_LC %r2 606 + mvc __PT_ORIG_GPR2(8,%r11),__LC_PGM_LAST_BREAK(%r2) 606 607 xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15) 607 608 lgr %r2,%r11 # pass pointer to pt_regs 608 609 jg kernel_stack_invalid
+2
arch/s390/pci/pci_clp.c
··· 428 428 return; 429 429 } 430 430 zdev = zpci_create_device(entry->fid, entry->fh, entry->config_state); 431 + if (IS_ERR(zdev)) 432 + return; 431 433 list_add_tail(&zdev->entry, scan_list); 432 434 } 433 435
+2
arch/um/include/asm/uaccess.h
··· 55 55 goto err_label; \ 56 56 } \ 57 57 *((type *)dst) = get_unaligned((type *)(src)); \ 58 + barrier(); \ 58 59 current->thread.segv_continue = NULL; \ 59 60 } while (0) 60 61 ··· 67 66 if (__faulted) \ 68 67 goto err_label; \ 69 68 put_unaligned(*((type *)src), (type *)(dst)); \ 69 + barrier(); \ 70 70 current->thread.segv_continue = NULL; \ 71 71 } while (0) 72 72
+13 -13
arch/um/kernel/trap.c
··· 225 225 panic("Failed to sync kernel TLBs: %d", err); 226 226 goto out; 227 227 } 228 - else if (current->mm == NULL) { 229 - if (current->pagefault_disabled) { 230 - if (!mc) { 231 - show_regs(container_of(regs, struct pt_regs, regs)); 232 - panic("Segfault with pagefaults disabled but no mcontext"); 233 - } 234 - if (!current->thread.segv_continue) { 235 - show_regs(container_of(regs, struct pt_regs, regs)); 236 - panic("Segfault without recovery target"); 237 - } 238 - mc_set_rip(mc, current->thread.segv_continue); 239 - current->thread.segv_continue = NULL; 240 - goto out; 228 + else if (current->pagefault_disabled) { 229 + if (!mc) { 230 + show_regs(container_of(regs, struct pt_regs, regs)); 231 + panic("Segfault with pagefaults disabled but no mcontext"); 241 232 } 233 + if (!current->thread.segv_continue) { 234 + show_regs(container_of(regs, struct pt_regs, regs)); 235 + panic("Segfault without recovery target"); 236 + } 237 + mc_set_rip(mc, current->thread.segv_continue); 238 + current->thread.segv_continue = NULL; 239 + goto out; 240 + } 241 + else if (current->mm == NULL) { 242 242 show_regs(container_of(regs, struct pt_regs, regs)); 243 243 panic("Segfault with no mm"); 244 244 }
+1
arch/x86/Kconfig
··· 2368 2368 config CFI_AUTO_DEFAULT 2369 2369 bool "Attempt to use FineIBT by default at boot time" 2370 2370 depends on FINEIBT 2371 + depends on !RUST || RUSTC_VERSION >= 108800 2371 2372 default y 2372 2373 help 2373 2374 Attempt to use FineIBT by default at boot time. If enabled,
+2
arch/x86/include/asm/microcode.h
··· 17 17 void load_ucode_bsp(void); 18 18 void load_ucode_ap(void); 19 19 void microcode_bsp_resume(void); 20 + bool __init microcode_loader_disabled(void); 20 21 #else 21 22 static inline void load_ucode_bsp(void) { } 22 23 static inline void load_ucode_ap(void) { } 23 24 static inline void microcode_bsp_resume(void) { } 25 + static inline bool __init microcode_loader_disabled(void) { return false; } 24 26 #endif 25 27 26 28 extern unsigned long initrd_start_early;
+4 -2
arch/x86/kernel/cpu/microcode/amd.c
··· 1098 1098 1099 1099 static int __init save_microcode_in_initrd(void) 1100 1100 { 1101 - unsigned int cpuid_1_eax = native_cpuid_eax(1); 1102 1101 struct cpuinfo_x86 *c = &boot_cpu_data; 1103 1102 struct cont_desc desc = { 0 }; 1103 + unsigned int cpuid_1_eax; 1104 1104 enum ucode_state ret; 1105 1105 struct cpio_data cp; 1106 1106 1107 - if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) 1107 + if (microcode_loader_disabled() || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) 1108 1108 return 0; 1109 + 1110 + cpuid_1_eax = native_cpuid_eax(1); 1109 1111 1110 1112 if (!find_blobs_in_containers(&cp)) 1111 1113 return -EINVAL;
+35 -25
arch/x86/kernel/cpu/microcode/core.c
··· 41 41 42 42 #include "internal.h" 43 43 44 - static struct microcode_ops *microcode_ops; 45 - bool dis_ucode_ldr = true; 44 + static struct microcode_ops *microcode_ops; 45 + static bool dis_ucode_ldr = false; 46 46 47 47 bool force_minrev = IS_ENABLED(CONFIG_MICROCODE_LATE_FORCE_MINREV); 48 48 module_param(force_minrev, bool, S_IRUSR | S_IWUSR); ··· 84 84 u32 lvl, dummy, i; 85 85 u32 *levels; 86 86 87 + if (x86_cpuid_vendor() != X86_VENDOR_AMD) 88 + return false; 89 + 87 90 native_rdmsr(MSR_AMD64_PATCH_LEVEL, lvl, dummy); 88 91 89 92 levels = final_levels; ··· 98 95 return false; 99 96 } 100 97 101 - static bool __init check_loader_disabled_bsp(void) 98 + bool __init microcode_loader_disabled(void) 102 99 { 103 - static const char *__dis_opt_str = "dis_ucode_ldr"; 104 - const char *cmdline = boot_command_line; 105 - const char *option = __dis_opt_str; 106 - 107 - /* 108 - * CPUID(1).ECX[31]: reserved for hypervisor use. This is still not 109 - * completely accurate as xen pv guests don't see that CPUID bit set but 110 - * that's good enough as they don't land on the BSP path anyway. 111 - */ 112 - if (native_cpuid_ecx(1) & BIT(31)) 100 + if (dis_ucode_ldr) 113 101 return true; 114 102 115 - if (x86_cpuid_vendor() == X86_VENDOR_AMD) { 116 - if (amd_check_current_patch_level()) 117 - return true; 118 - } 119 - 120 - if (cmdline_find_option_bool(cmdline, option) <= 0) 121 - dis_ucode_ldr = false; 103 + /* 104 + * Disable when: 105 + * 106 + * 1) The CPU does not support CPUID. 107 + * 108 + * 2) Bit 31 in CPUID[1]:ECX is clear 109 + * The bit is reserved for hypervisor use. This is still not 110 + * completely accurate as XEN PV guests don't see that CPUID bit 111 + * set, but that's good enough as they don't land on the BSP 112 + * path anyway. 113 + * 114 + * 3) Certain AMD patch levels are not allowed to be 115 + * overwritten. 
116 + */ 117 + if (!have_cpuid_p() || 118 + native_cpuid_ecx(1) & BIT(31) || 119 + amd_check_current_patch_level()) 120 + dis_ucode_ldr = true; 122 121 123 122 return dis_ucode_ldr; 124 123 } ··· 130 125 unsigned int cpuid_1_eax; 131 126 bool intel = true; 132 127 133 - if (!have_cpuid_p()) 128 + if (cmdline_find_option_bool(boot_command_line, "dis_ucode_ldr") > 0) 129 + dis_ucode_ldr = true; 130 + 131 + if (microcode_loader_disabled()) 134 132 return; 135 133 136 134 cpuid_1_eax = native_cpuid_eax(1); ··· 154 146 return; 155 147 } 156 148 157 - if (check_loader_disabled_bsp()) 158 - return; 159 - 160 149 if (intel) 161 150 load_ucode_intel_bsp(&early_data); 162 151 else ··· 164 159 { 165 160 unsigned int cpuid_1_eax; 166 161 162 + /* 163 + * Can't use microcode_loader_disabled() here - .init section 164 + * hell. It doesn't have to either - the BSP variant must've 165 + * parsed cmdline already anyway. 166 + */ 167 167 if (dis_ucode_ldr) 168 168 return; 169 169 ··· 820 810 struct cpuinfo_x86 *c = &boot_cpu_data; 821 811 int error; 822 812 823 - if (dis_ucode_ldr) 813 + if (microcode_loader_disabled()) 824 814 return -EINVAL; 825 815 826 816 if (c->x86_vendor == X86_VENDOR_INTEL)
+1 -1
arch/x86/kernel/cpu/microcode/intel.c
··· 389 389 if (xchg(&ucode_patch_va, NULL) != UCODE_BSP_LOADED) 390 390 return 0; 391 391 392 - if (dis_ucode_ldr || boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) 392 + if (microcode_loader_disabled() || boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) 393 393 return 0; 394 394 395 395 uci.mc = get_microcode_blob(&uci, true);
-1
arch/x86/kernel/cpu/microcode/internal.h
··· 94 94 return x86_family(eax); 95 95 } 96 96 97 - extern bool dis_ucode_ldr; 98 97 extern bool force_minrev; 99 98 100 99 #ifdef CONFIG_CPU_SUP_AMD
-4
arch/x86/kernel/head32.c
··· 145 145 *ptr = (unsigned long)ptep + PAGE_OFFSET; 146 146 147 147 #ifdef CONFIG_MICROCODE_INITRD32 148 - /* Running on a hypervisor? */ 149 - if (native_cpuid_ecx(1) & BIT(31)) 150 - return; 151 - 152 148 params = (struct boot_params *)__pa_nodebug(&boot_params); 153 149 if (!params->hdr.ramdisk_size || !params->hdr.ramdisk_image) 154 150 return;
+9 -1
arch/x86/kernel/vmlinux.lds.S
··· 466 466 } 467 467 468 468 /* 469 - * The ASSERT() sink to . is intentional, for binutils 2.14 compatibility: 469 + * COMPILE_TEST kernels can be large - CONFIG_KASAN, for example, can cause 470 + * this. Let's assume that nobody will be running a COMPILE_TEST kernel and 471 + * let's assert that fuller build coverage is more valuable than being able to 472 + * run a COMPILE_TEST kernel. 473 + */ 474 + #ifndef CONFIG_COMPILE_TEST 475 + /* 476 + * The ASSERT() sink to . is intentional, for binutils 2.14 compatibility: 470 477 */ 471 478 . = ASSERT((_end - LOAD_OFFSET <= KERNEL_IMAGE_SIZE), 472 479 "kernel image bigger than KERNEL_IMAGE_SIZE"); 480 + #endif 473 481 474 482 /* needed for Clang - see arch/x86/entry/entry.S */ 475 483 PROVIDE(__ref_stack_chk_guard = __stack_chk_guard);
+3
arch/x86/kvm/mmu.h
··· 104 104 105 105 static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu) 106 106 { 107 + if (kvm_check_request(KVM_REQ_MMU_FREE_OBSOLETE_ROOTS, vcpu)) 108 + kvm_mmu_free_obsolete_roots(vcpu); 109 + 107 110 /* 108 111 * Checking root.hpa is sufficient even when KVM has mirror root. 109 112 * We can have either:
+64 -26
arch/x86/kvm/mmu/mmu.c
··· 5974 5974 __kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.root_mmu); 5975 5975 __kvm_mmu_free_obsolete_roots(vcpu->kvm, &vcpu->arch.guest_mmu); 5976 5976 } 5977 + EXPORT_SYMBOL_GPL(kvm_mmu_free_obsolete_roots); 5977 5978 5978 5979 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa, 5979 5980 int *bytes) ··· 7670 7669 } 7671 7670 7672 7671 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES 7673 - bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, 7674 - struct kvm_gfn_range *range) 7675 - { 7676 - /* 7677 - * Zap SPTEs even if the slot can't be mapped PRIVATE. KVM x86 only 7678 - * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM 7679 - * can simply ignore such slots. But if userspace is making memory 7680 - * PRIVATE, then KVM must prevent the guest from accessing the memory 7681 - * as shared. And if userspace is making memory SHARED and this point 7682 - * is reached, then at least one page within the range was previously 7683 - * PRIVATE, i.e. the slot's possible hugepage ranges are changing. 7684 - * Zapping SPTEs in this case ensures KVM will reassess whether or not 7685 - * a hugepage can be used for affected ranges. 7686 - */ 7687 - if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm))) 7688 - return false; 7689 - 7690 - /* Unmap the old attribute page. 
*/ 7691 - if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE) 7692 - range->attr_filter = KVM_FILTER_SHARED; 7693 - else 7694 - range->attr_filter = KVM_FILTER_PRIVATE; 7695 - 7696 - return kvm_unmap_gfn_range(kvm, range); 7697 - } 7698 - 7699 7672 static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, 7700 7673 int level) 7701 7674 { ··· 7687 7712 { 7688 7713 lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG; 7689 7714 } 7715 + 7716 + bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, 7717 + struct kvm_gfn_range *range) 7718 + { 7719 + struct kvm_memory_slot *slot = range->slot; 7720 + int level; 7721 + 7722 + /* 7723 + * Zap SPTEs even if the slot can't be mapped PRIVATE. KVM x86 only 7724 + * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM 7725 + * can simply ignore such slots. But if userspace is making memory 7726 + * PRIVATE, then KVM must prevent the guest from accessing the memory 7727 + * as shared. And if userspace is making memory SHARED and this point 7728 + * is reached, then at least one page within the range was previously 7729 + * PRIVATE, i.e. the slot's possible hugepage ranges are changing. 7730 + * Zapping SPTEs in this case ensures KVM will reassess whether or not 7731 + * a hugepage can be used for affected ranges. 7732 + */ 7733 + if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm))) 7734 + return false; 7735 + 7736 + if (WARN_ON_ONCE(range->end <= range->start)) 7737 + return false; 7738 + 7739 + /* 7740 + * If the head and tail pages of the range currently allow a hugepage, 7741 + * i.e. reside fully in the slot and don't have mixed attributes, then 7742 + * add each corresponding hugepage range to the ongoing invalidation, 7743 + * e.g. to prevent KVM from creating a hugepage in response to a fault 7744 + * for a gfn whose attributes aren't changing. 
Note, only the range 7745 + * of gfns whose attributes are being modified needs to be explicitly 7746 + * unmapped, as that will unmap any existing hugepages. 7747 + */ 7748 + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { 7749 + gfn_t start = gfn_round_for_level(range->start, level); 7750 + gfn_t end = gfn_round_for_level(range->end - 1, level); 7751 + gfn_t nr_pages = KVM_PAGES_PER_HPAGE(level); 7752 + 7753 + if ((start != range->start || start + nr_pages > range->end) && 7754 + start >= slot->base_gfn && 7755 + start + nr_pages <= slot->base_gfn + slot->npages && 7756 + !hugepage_test_mixed(slot, start, level)) 7757 + kvm_mmu_invalidate_range_add(kvm, start, start + nr_pages); 7758 + 7759 + if (end == start) 7760 + continue; 7761 + 7762 + if ((end + nr_pages) > range->end && 7763 + (end + nr_pages) <= (slot->base_gfn + slot->npages) && 7764 + !hugepage_test_mixed(slot, end, level)) 7765 + kvm_mmu_invalidate_range_add(kvm, end, end + nr_pages); 7766 + } 7767 + 7768 + /* Unmap the old attribute page. */ 7769 + if (range->arg.attributes & KVM_MEMORY_ATTRIBUTE_PRIVATE) 7770 + range->attr_filter = KVM_FILTER_SHARED; 7771 + else 7772 + range->attr_filter = KVM_FILTER_PRIVATE; 7773 + 7774 + return kvm_unmap_gfn_range(kvm, range); 7775 + } 7776 + 7777 + 7690 7778 7691 7779 static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot, 7692 7780 gfn_t gfn, int level, unsigned long attrs)
+1
arch/x86/kvm/smm.c
··· 131 131 132 132 kvm_mmu_reset_context(vcpu); 133 133 } 134 + EXPORT_SYMBOL_GPL(kvm_smm_changed); 134 135 135 136 void process_smi(struct kvm_vcpu *vcpu) 136 137 {
+19 -13
arch/x86/kvm/svm/sev.c
··· 3173 3173 kvfree(svm->sev_es.ghcb_sa); 3174 3174 } 3175 3175 3176 + static u64 kvm_ghcb_get_sw_exit_code(struct vmcb_control_area *control) 3177 + { 3178 + return (((u64)control->exit_code_hi) << 32) | control->exit_code; 3179 + } 3180 + 3176 3181 static void dump_ghcb(struct vcpu_svm *svm) 3177 3182 { 3178 - struct ghcb *ghcb = svm->sev_es.ghcb; 3183 + struct vmcb_control_area *control = &svm->vmcb->control; 3179 3184 unsigned int nbits; 3180 3185 3181 3186 /* Re-use the dump_invalid_vmcb module parameter */ ··· 3189 3184 return; 3190 3185 } 3191 3186 3192 - nbits = sizeof(ghcb->save.valid_bitmap) * 8; 3187 + nbits = sizeof(svm->sev_es.valid_bitmap) * 8; 3193 3188 3194 - pr_err("GHCB (GPA=%016llx):\n", svm->vmcb->control.ghcb_gpa); 3189 + /* 3190 + * Print KVM's snapshot of the GHCB values that were (unsuccessfully) 3191 + * used to handle the exit. If the guest has since modified the GHCB 3192 + * itself, dumping the raw GHCB won't help debug why KVM was unable to 3193 + * handle the VMGEXIT that KVM observed. 
3194 + */ 3195 + pr_err("GHCB (GPA=%016llx) snapshot:\n", svm->vmcb->control.ghcb_gpa); 3195 3196 pr_err("%-20s%016llx is_valid: %u\n", "sw_exit_code", 3196 - ghcb->save.sw_exit_code, ghcb_sw_exit_code_is_valid(ghcb)); 3197 + kvm_ghcb_get_sw_exit_code(control), kvm_ghcb_sw_exit_code_is_valid(svm)); 3197 3198 pr_err("%-20s%016llx is_valid: %u\n", "sw_exit_info_1", 3198 - ghcb->save.sw_exit_info_1, ghcb_sw_exit_info_1_is_valid(ghcb)); 3199 + control->exit_info_1, kvm_ghcb_sw_exit_info_1_is_valid(svm)); 3199 3200 pr_err("%-20s%016llx is_valid: %u\n", "sw_exit_info_2", 3200 - ghcb->save.sw_exit_info_2, ghcb_sw_exit_info_2_is_valid(ghcb)); 3201 + control->exit_info_2, kvm_ghcb_sw_exit_info_2_is_valid(svm)); 3201 3202 pr_err("%-20s%016llx is_valid: %u\n", "sw_scratch", 3202 - ghcb->save.sw_scratch, ghcb_sw_scratch_is_valid(ghcb)); 3203 - pr_err("%-20s%*pb\n", "valid_bitmap", nbits, ghcb->save.valid_bitmap); 3203 + svm->sev_es.sw_scratch, kvm_ghcb_sw_scratch_is_valid(svm)); 3204 + pr_err("%-20s%*pb\n", "valid_bitmap", nbits, svm->sev_es.valid_bitmap); 3204 3205 } 3205 3206 3206 3207 static void sev_es_sync_to_ghcb(struct vcpu_svm *svm) ··· 3275 3264 3276 3265 /* Clear the valid entries fields */ 3277 3266 memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap)); 3278 - } 3279 - 3280 - static u64 kvm_ghcb_get_sw_exit_code(struct vmcb_control_area *control) 3281 - { 3282 - return (((u64)control->exit_code_hi) << 32) | control->exit_code; 3283 3267 } 3284 3268 3285 3269 static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
+69 -6
arch/x86/kvm/svm/svm.c
··· 607 607 kvm_cpu_svm_disable(); 608 608 609 609 amd_pmu_disable_virt(); 610 - 611 - if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) 612 - msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT); 613 610 } 614 611 615 612 static int svm_enable_virtualization_cpu(void) ··· 683 686 684 687 rdmsr(MSR_TSC_AUX, sev_es_host_save_area(sd)->tsc_aux, msr_hi); 685 688 } 686 - 687 - if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) 688 - msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT); 689 689 690 690 return 0; 691 691 } ··· 1512 1518 __free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE)); 1513 1519 } 1514 1520 1521 + #ifdef CONFIG_CPU_MITIGATIONS 1522 + static DEFINE_SPINLOCK(srso_lock); 1523 + static atomic_t srso_nr_vms; 1524 + 1525 + static void svm_srso_clear_bp_spec_reduce(void *ign) 1526 + { 1527 + struct svm_cpu_data *sd = this_cpu_ptr(&svm_data); 1528 + 1529 + if (!sd->bp_spec_reduce_set) 1530 + return; 1531 + 1532 + msr_clear_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT); 1533 + sd->bp_spec_reduce_set = false; 1534 + } 1535 + 1536 + static void svm_srso_vm_destroy(void) 1537 + { 1538 + if (!cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) 1539 + return; 1540 + 1541 + if (atomic_dec_return(&srso_nr_vms)) 1542 + return; 1543 + 1544 + guard(spinlock)(&srso_lock); 1545 + 1546 + /* 1547 + * Verify a new VM didn't come along, acquire the lock, and increment 1548 + * the count before this task acquired the lock. 1549 + */ 1550 + if (atomic_read(&srso_nr_vms)) 1551 + return; 1552 + 1553 + on_each_cpu(svm_srso_clear_bp_spec_reduce, NULL, 1); 1554 + } 1555 + 1556 + static void svm_srso_vm_init(void) 1557 + { 1558 + if (!cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE)) 1559 + return; 1560 + 1561 + /* 1562 + * Acquire the lock on 0 => 1 transitions to ensure a potential 1 => 0 1563 + * transition, i.e. destroying the last VM, is fully complete, e.g. 
so 1564 + * that a delayed IPI doesn't clear BP_SPEC_REDUCE after a vCPU runs. 1565 + */ 1566 + if (atomic_inc_not_zero(&srso_nr_vms)) 1567 + return; 1568 + 1569 + guard(spinlock)(&srso_lock); 1570 + 1571 + atomic_inc(&srso_nr_vms); 1572 + } 1573 + #else 1574 + static void svm_srso_vm_init(void) { } 1575 + static void svm_srso_vm_destroy(void) { } 1576 + #endif 1577 + 1515 1578 static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu) 1516 1579 { 1517 1580 struct vcpu_svm *svm = to_svm(vcpu); ··· 1601 1550 (!boot_cpu_has(X86_FEATURE_V_TSC_AUX) || !sev_es_guest(vcpu->kvm))) 1602 1551 kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull); 1603 1552 1553 + if (cpu_feature_enabled(X86_FEATURE_SRSO_BP_SPEC_REDUCE) && 1554 + !sd->bp_spec_reduce_set) { 1555 + sd->bp_spec_reduce_set = true; 1556 + msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_BP_SPEC_REDUCE_BIT); 1557 + } 1604 1558 svm->guest_state_loaded = true; 1605 1559 } 1606 1560 ··· 2287 2231 */ 2288 2232 if (!sev_es_guest(vcpu->kvm)) { 2289 2233 clear_page(svm->vmcb); 2234 + #ifdef CONFIG_KVM_SMM 2235 + if (is_smm(vcpu)) 2236 + kvm_smm_changed(vcpu, false); 2237 + #endif 2290 2238 kvm_vcpu_reset(vcpu, true); 2291 2239 } 2292 2240 ··· 5096 5036 { 5097 5037 avic_vm_destroy(kvm); 5098 5038 sev_vm_destroy(kvm); 5039 + 5040 + svm_srso_vm_destroy(); 5099 5041 } 5100 5042 5101 5043 static int svm_vm_init(struct kvm *kvm) ··· 5123 5061 return ret; 5124 5062 } 5125 5063 5064 + svm_srso_vm_init(); 5126 5065 return 0; 5127 5066 } 5128 5067
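The SRSO hunks above implement a "take the lock only on the 0 => 1 and 1 => 0 transitions" refcount scheme, so steady-state VM creation/destruction stays lock-free. A userspace sketch of the same shape, using C11 atomics and a pthread mutex in place of the kernel spinlock; all names and the plain bool standing in for the per-CPU BP_SPEC_REDUCE MSR bit are illustrative only:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t srso_demo_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int srso_demo_nr_vms;
static bool srso_demo_msr_set;

/* Userspace equivalent of atomic_inc_not_zero(). */
static bool srso_demo_inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old) {
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;
	}
	return false;
}

static void srso_demo_vm_init(void)
{
	/* Fast path: another VM already exists, just bump the count. */
	if (srso_demo_inc_not_zero(&srso_demo_nr_vms))
		return;

	/* 0 => 1: serialize against a racing last-VM teardown. */
	pthread_mutex_lock(&srso_demo_lock);
	atomic_fetch_add(&srso_demo_nr_vms, 1);
	srso_demo_msr_set = true;	/* the kernel sets this lazily, per CPU */
	pthread_mutex_unlock(&srso_demo_lock);
}

static void srso_demo_vm_destroy(void)
{
	if (atomic_fetch_sub(&srso_demo_nr_vms, 1) != 1)
		return;

	/* 1 => 0: re-check under the lock, since a new VM may have raced
	 * in between the decrement above and acquiring the lock. */
	pthread_mutex_lock(&srso_demo_lock);
	if (!atomic_load(&srso_demo_nr_vms))
		srso_demo_msr_set = false;	/* on_each_cpu() in the kernel */
	pthread_mutex_unlock(&srso_demo_lock);
}
```

The re-check under the lock in the destroy path is the crux: without it, a delayed teardown could clear the bit after a freshly created VM already relies on it.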
+2
arch/x86/kvm/svm/svm.h
··· 335 335 u32 next_asid; 336 336 u32 min_asid; 337 337 338 + bool bp_spec_reduce_set; 339 + 338 340 struct vmcb *save_area; 339 341 unsigned long save_area_pa; 340 342
+2 -2
arch/x86/kvm/x86.c
··· 4597 4597 return type < 32 && (kvm_caps.supported_vm_types & BIT(type)); 4598 4598 } 4599 4599 4600 - static inline u32 kvm_sync_valid_fields(struct kvm *kvm) 4600 + static inline u64 kvm_sync_valid_fields(struct kvm *kvm) 4601 4601 { 4602 4602 return kvm && kvm->arch.has_protected_state ? 0 : KVM_SYNC_X86_VALID_FIELDS; 4603 4603 } ··· 11493 11493 { 11494 11494 struct kvm_queued_exception *ex = &vcpu->arch.exception; 11495 11495 struct kvm_run *kvm_run = vcpu->run; 11496 - u32 sync_valid_fields; 11496 + u64 sync_valid_fields; 11497 11497 int r; 11498 11498 11499 11499 r = kvm_mmu_post_init_vm(vcpu->kvm);
+19 -3
arch/x86/mm/tlb.c
··· 899 899 cond_mitigation(tsk); 900 900 901 901 /* 902 - * Let nmi_uaccess_okay() and finish_asid_transition() 903 - * know that CR3 is changing. 902 + * Indicate that CR3 is about to change. nmi_uaccess_okay() 903 + * and others are sensitive to the window where mm_cpumask(), 904 + * CR3 and cpu_tlbstate.loaded_mm are not all in sync. 904 905 */ 905 906 this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING); 906 907 barrier(); ··· 1205 1204 1206 1205 static bool should_flush_tlb(int cpu, void *data) 1207 1206 { 1207 + struct mm_struct *loaded_mm = per_cpu(cpu_tlbstate.loaded_mm, cpu); 1208 1208 struct flush_tlb_info *info = data; 1209 + 1210 + /* 1211 + * Order the 'loaded_mm' and 'is_lazy' against their 1212 + * write ordering in switch_mm_irqs_off(). Ensure 1213 + * 'is_lazy' is at least as new as 'loaded_mm'. 1214 + */ 1215 + smp_rmb(); 1209 1216 1210 1217 /* Lazy TLB will get flushed at the next context switch. */ 1211 1218 if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu)) ··· 1223 1214 if (!info->mm) 1224 1215 return true; 1225 1216 1217 + /* 1218 + * While switching, the remote CPU could have state from 1219 + * either the prev or next mm. Assume the worst and flush. 1220 + */ 1221 + if (loaded_mm == LOADED_MM_SWITCHING) 1222 + return true; 1223 + 1226 1224 /* The target mm is loaded, and the CPU is not lazy. */ 1227 - if (per_cpu(cpu_tlbstate.loaded_mm, cpu) == info->mm) 1225 + if (loaded_mm == info->mm) 1228 1226 return true; 1229 1227 1230 1228 /* In cpumask, but not the loaded mm? Periodically remove by flushing. */
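The tlb.c change adds a third outcome to the flush decision: a CPU caught mid-switch (loaded_mm == LOADED_MM_SWITCHING) must be flushed because it may hold state from either the previous or the next mm. The decision table, as a pure function; the smp_rmb() ordering is not modeled, LOADED_MM_SWITCHING is just a distinct sentinel address here, and the final "in cpumask but different mm" case is simplified to false (the kernel periodically flushes there to trim the cpumask):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct tlb_demo_mm { int id; };

#define TLB_DEMO_SWITCHING ((struct tlb_demo_mm *)1)

static bool tlb_demo_should_flush(struct tlb_demo_mm *loaded_mm, bool is_lazy,
				  struct tlb_demo_mm *target_mm)
{
	/* A lazy CPU will flush at its next context switch anyway. */
	if (is_lazy)
		return false;

	/* No target mm means a flush of everything. */
	if (!target_mm)
		return true;

	/* Mid-switch: state could belong to prev or next, so flush. */
	if (loaded_mm == TLB_DEMO_SWITCHING)
		return true;

	/* The target mm is loaded and the CPU is not lazy. */
	if (loaded_mm == target_mm)
		return true;

	return false;	/* simplified, see the note above */
}
```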
+1 -1
arch/x86/um/shared/sysdep/faultinfo_32.h
··· 31 31 32 32 #define ___backtrack_faulted(_faulted) \ 33 33 asm volatile ( \ 34 - "mov $0, %0\n" \ 35 34 "movl $__get_kernel_nofault_faulted_%=,%1\n" \ 35 + "mov $0, %0\n" \ 36 36 "jmp _end_%=\n" \ 37 37 "__get_kernel_nofault_faulted_%=:\n" \ 38 38 "mov $1, %0;" \
+1 -1
arch/x86/um/shared/sysdep/faultinfo_64.h
··· 31 31 32 32 #define ___backtrack_faulted(_faulted) \ 33 33 asm volatile ( \ 34 - "mov $0, %0\n" \ 35 34 "movq $__get_kernel_nofault_faulted_%=,%1\n" \ 35 + "mov $0, %0\n" \ 36 36 "jmp _end_%=\n" \ 37 37 "__get_kernel_nofault_faulted_%=:\n" \ 38 38 "mov $1, %0;" \
+2 -1
block/blk.h
··· 480 480 * the original BIO sector so that blk_zone_write_plug_bio_endio() can 481 481 * lookup the zone write plug. 482 482 */ 483 - if (req_op(rq) == REQ_OP_ZONE_APPEND || bio_zone_write_plugging(bio)) 483 + if (req_op(rq) == REQ_OP_ZONE_APPEND || 484 + bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) 484 485 bio->bi_iter.bi_sector = rq->__sector; 485 486 } 486 487 void blk_zone_write_plug_bio_endio(struct bio *bio);
+1 -5
block/ioprio.c
··· 46 46 */ 47 47 if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_NICE)) 48 48 return -EPERM; 49 - fallthrough; 50 - /* rt has prio field too */ 51 - case IOPRIO_CLASS_BE: 52 - if (level >= IOPRIO_NR_LEVELS) 53 - return -EINVAL; 54 49 break; 50 + case IOPRIO_CLASS_BE: 55 51 case IOPRIO_CLASS_IDLE: 56 52 break; 57 53 case IOPRIO_CLASS_NONE:
+1 -1
drivers/accel/ivpu/ivpu_hw.c
··· 119 119 else 120 120 vdev->timeout.autosuspend = 100; 121 121 vdev->timeout.d0i3_entry_msg = 5; 122 - vdev->timeout.state_dump_msg = 10; 122 + vdev->timeout.state_dump_msg = 100; 123 123 } 124 124 } 125 125
+25 -10
drivers/accel/ivpu/ivpu_job.c
··· 681 681 err_erase_xa: 682 682 xa_erase(&vdev->submitted_jobs_xa, job->job_id); 683 683 err_unlock: 684 - mutex_unlock(&vdev->submitted_jobs_lock); 685 684 mutex_unlock(&file_priv->lock); 685 + mutex_unlock(&vdev->submitted_jobs_lock); 686 686 ivpu_rpm_put(vdev); 687 687 return ret; 688 688 } ··· 874 874 int ivpu_cmdq_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file) 875 875 { 876 876 struct ivpu_file_priv *file_priv = file->driver_priv; 877 + struct ivpu_device *vdev = file_priv->vdev; 877 878 struct drm_ivpu_cmdq_create *args = data; 878 879 struct ivpu_cmdq *cmdq; 880 + int ret; 879 881 880 - if (!ivpu_is_capable(file_priv->vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) 882 + if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) 881 883 return -ENODEV; 882 884 883 885 if (args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME) 884 886 return -EINVAL; 887 + 888 + ret = ivpu_rpm_get(vdev); 889 + if (ret < 0) 890 + return ret; 885 891 886 892 mutex_lock(&file_priv->lock); 887 893 ··· 896 890 args->cmdq_id = cmdq->id; 897 891 898 892 mutex_unlock(&file_priv->lock); 893 + 894 + ivpu_rpm_put(vdev); 899 895 900 896 return cmdq ? 
0 : -ENOMEM; 901 897 } ··· 908 900 struct ivpu_device *vdev = file_priv->vdev; 909 901 struct drm_ivpu_cmdq_destroy *args = data; 910 902 struct ivpu_cmdq *cmdq; 911 - u32 cmdq_id; 903 + u32 cmdq_id = 0; 912 904 int ret; 913 905 914 906 if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) 915 907 return -ENODEV; 908 + 909 + ret = ivpu_rpm_get(vdev); 910 + if (ret < 0) 911 + return ret; 916 912 917 913 mutex_lock(&file_priv->lock); 918 914 919 915 cmdq = xa_load(&file_priv->cmdq_xa, args->cmdq_id); 920 916 if (!cmdq || cmdq->is_legacy) { 921 917 ret = -ENOENT; 922 - goto err_unlock; 918 + } else { 919 + cmdq_id = cmdq->id; 920 + ivpu_cmdq_destroy(file_priv, cmdq); 921 + ret = 0; 923 922 } 924 923 925 - cmdq_id = cmdq->id; 926 - ivpu_cmdq_destroy(file_priv, cmdq); 927 924 mutex_unlock(&file_priv->lock); 928 - ivpu_cmdq_abort_all_jobs(vdev, file_priv->ctx.id, cmdq_id); 929 - return 0; 930 925 931 - err_unlock: 932 - mutex_unlock(&file_priv->lock); 926 + /* Abort any pending jobs only if cmdq was destroyed */ 927 + if (!ret) 928 + ivpu_cmdq_abort_all_jobs(vdev, file_priv->ctx.id, cmdq_id); 929 + 930 + ivpu_rpm_put(vdev); 931 + 933 932 return ret; 934 933 } 935 934
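The ivpu_cmdq_destroy_ioctl() restructure above follows a common shape: decide and detach under the lock, record the outcome, then run the heavyweight follow-up (the job abort) only after unlocking. A minimal sketch of that shape; the lock is modeled as a flag purely so a test can observe that the abort runs unlocked, and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

static bool cmdq_demo_locked;
static bool cmdq_demo_abort_ran_unlocked;

static void cmdq_demo_abort_all_jobs(unsigned int cmdq_id)
{
	(void)cmdq_id;
	cmdq_demo_abort_ran_unlocked = !cmdq_demo_locked;
}

static int cmdq_demo_destroy(bool present)
{
	unsigned int cmdq_id = 0;
	int ret;

	cmdq_demo_locked = true;		/* mutex_lock() in the driver */
	if (!present) {
		ret = -2;			/* -ENOENT in the driver */
	} else {
		cmdq_id = 42;			/* capture what the follow-up needs */
		ret = 0;
	}
	cmdq_demo_locked = false;		/* mutex_unlock() */

	/* Abort pending jobs only if the queue was actually destroyed. */
	if (!ret)
		cmdq_demo_abort_all_jobs(cmdq_id);

	return ret;
}
```

Capturing `cmdq_id` under the lock matters: after unlock the cmdq object may be gone, so the follow-up can only rely on data copied out earlier.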
+3 -3
drivers/base/platform.c
··· 1440 1440 1441 1441 static int platform_dma_configure(struct device *dev) 1442 1442 { 1443 - struct platform_driver *drv = to_platform_driver(dev->driver); 1443 + struct device_driver *drv = READ_ONCE(dev->driver); 1444 1444 struct fwnode_handle *fwnode = dev_fwnode(dev); 1445 1445 enum dev_dma_attr attr; 1446 1446 int ret = 0; ··· 1451 1451 attr = acpi_get_dma_attr(to_acpi_device_node(fwnode)); 1452 1452 ret = acpi_dma_configure(dev, attr); 1453 1453 } 1454 - /* @drv may not be valid when we're called from the IOMMU layer */ 1455 - if (ret || !dev->driver || drv->driver_managed_dma) 1454 + /* @dev->driver may not be valid when we're called from the IOMMU layer */ 1455 + if (ret || !drv || to_platform_driver(drv)->driver_managed_dma) 1456 1456 return ret; 1457 1457 1458 1458 ret = iommu_device_use_default_domain(dev);
+23
drivers/block/loop.c
··· 505 505 lo->lo_min_dio_size = loop_query_min_dio_size(lo); 506 506 } 507 507 508 + static int loop_check_backing_file(struct file *file) 509 + { 510 + if (!file->f_op->read_iter) 511 + return -EINVAL; 512 + 513 + if ((file->f_mode & FMODE_WRITE) && !file->f_op->write_iter) 514 + return -EINVAL; 515 + 516 + return 0; 517 + } 518 + 508 519 /* 509 520 * loop_change_fd switched the backing store of a loopback device to 510 521 * a new file. This is useful for operating system installers to free up ··· 536 525 537 526 if (!file) 538 527 return -EBADF; 528 + 529 + error = loop_check_backing_file(file); 530 + if (error) 531 + return error; 539 532 540 533 /* suppress uevents while reconfiguring the device */ 541 534 dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 1); ··· 978 963 979 964 if (!file) 980 965 return -EBADF; 966 + 967 + if ((mode & BLK_OPEN_WRITE) && !file->f_op->write_iter) 968 + return -EINVAL; 969 + 970 + error = loop_check_backing_file(file); 971 + if (error) 972 + return error; 973 + 981 974 is_loop = is_loop_device(file); 982 975 983 976 /* This is safe, since we have a reference from open(). */
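loop_check_backing_file() above validates the backing file's ops table before the loop driver issues I/O through it. The same check in userspace terms; the structs below are stand-ins, not the kernel's `struct file`/`struct file_operations`:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct loop_demo_ops {
	int (*read_iter)(void);
	int (*write_iter)(void);
};

struct loop_demo_file {
	const struct loop_demo_ops *f_op;
	bool f_mode_write;
};

static int loop_demo_read(void)  { return 0; }
static int loop_demo_write(void) { return 0; }

static const struct loop_demo_ops loop_demo_rw = { loop_demo_read, loop_demo_write };
static const struct loop_demo_ops loop_demo_ro = { loop_demo_read, NULL };

static int loop_demo_check(const struct loop_demo_file *file)
{
	/* Reads are always required. */
	if (!file->f_op->read_iter)
		return -EINVAL;

	/* Writes are required only if the file was opened writable. */
	if (file->f_mode_write && !file->f_op->write_iter)
		return -EINVAL;

	return 0;
}
```

Rejecting the file up front turns what would otherwise be a NULL function-pointer dereference on the I/O path into a clean -EINVAL at configuration time.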
+1 -3
drivers/clocksource/i8253.c
··· 103 103 #ifdef CONFIG_CLKEVT_I8253 104 104 void clockevent_i8253_disable(void) 105 105 { 106 - raw_spin_lock(&i8253_lock); 106 + guard(raw_spinlock_irqsave)(&i8253_lock); 107 107 108 108 /* 109 109 * Writing the MODE register should stop the counter, according to ··· 132 132 outb_p(0, PIT_CH0); 133 133 134 134 outb_p(0x30, PIT_MODE); 135 - 136 - raw_spin_unlock(&i8253_lock); 137 135 } 138 136 139 137 static int pit_shutdown(struct clock_event_device *evt)
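The i8253 change swaps explicit lock/unlock for `guard(raw_spinlock_irqsave)`, which binds the unlock to scope exit so no return path can leak the lock. A userspace sketch of the mechanism using the GCC/Clang `cleanup` attribute; a pthread mutex stands in for the raw spinlock, IRQ state is not modeled, and the early-return path is invented for illustration:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t pit_demo_lock = PTHREAD_MUTEX_INITIALIZER;
static int pit_demo_unlocks;

static void pit_demo_unlock(pthread_mutex_t **lock)
{
	pthread_mutex_unlock(*lock);
	pit_demo_unlocks++;
}

/* Scope-bound lock: unlock runs automatically when guard_ leaves scope. */
#define PIT_DEMO_GUARD(lock)						\
	pthread_mutex_t *guard_ __attribute__((cleanup(pit_demo_unlock))) = (lock); \
	pthread_mutex_lock(guard_)

static int pit_demo_disable(int already_stopped)
{
	PIT_DEMO_GUARD(&pit_demo_lock);

	if (already_stopped)
		return -1;	/* early return: cleanup still unlocks */

	/* ... program PIT_MODE / PIT_CH0 here ... */
	return 0;
}
```

This is the same idea the kernel's `guard()`/`scoped_guard()` macros in `linux/cleanup.h` build on, which is why the explicit `raw_spin_unlock()` could be deleted from the hunk above.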
+2 -1
drivers/firmware/arm_ffa/driver.c
··· 299 299 import_uuid(&buf->uuid, (u8 *)&rx_buf->uuid); 300 300 } 301 301 302 - ffa_rx_release(); 302 + if (!(flags & PARTITION_INFO_GET_RETURN_COUNT_ONLY)) 303 + ffa_rx_release(); 303 304 304 305 mutex_unlock(&drv_info->rx_lock); 305 306
+3
drivers/firmware/arm_scmi/bus.c
··· 255 255 if (!dev) 256 256 return NULL; 257 257 258 + /* Drop the refcnt bumped implicitly by device_find_child */ 259 + put_device(dev); 260 + 258 261 return to_scmi_dev(dev); 259 262 } 260 263
+8 -5
drivers/firmware/arm_scmi/driver.c
··· 1248 1248 } 1249 1249 1250 1250 static bool scmi_xfer_done_no_timeout(struct scmi_chan_info *cinfo, 1251 - struct scmi_xfer *xfer, ktime_t stop) 1251 + struct scmi_xfer *xfer, ktime_t stop, 1252 + bool *ooo) 1252 1253 { 1253 1254 struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 1254 1255 ··· 1258 1257 * in case of out-of-order receptions of delayed responses 1259 1258 */ 1260 1259 return info->desc->ops->poll_done(cinfo, xfer) || 1261 - try_wait_for_completion(&xfer->done) || 1260 + (*ooo = try_wait_for_completion(&xfer->done)) || 1262 1261 ktime_after(ktime_get(), stop); 1263 1262 } 1264 1263 ··· 1275 1274 * itself to support synchronous commands replies. 1276 1275 */ 1277 1276 if (!desc->sync_cmds_completed_on_ret) { 1277 + bool ooo = false; 1278 + 1278 1279 /* 1279 1280 * Poll on xfer using transport provided .poll_done(); 1280 1281 * assumes no completion interrupt was available. 1281 1282 */ 1282 1283 ktime_t stop = ktime_add_ms(ktime_get(), timeout_ms); 1283 1284 1284 - spin_until_cond(scmi_xfer_done_no_timeout(cinfo, 1285 - xfer, stop)); 1286 - if (ktime_after(ktime_get(), stop)) { 1285 + spin_until_cond(scmi_xfer_done_no_timeout(cinfo, xfer, 1286 + stop, &ooo)); 1287 + if (!ooo && !info->desc->ops->poll_done(cinfo, xfer)) { 1287 1288 dev_err(dev, 1288 1289 "timed out in resp(caller: %pS) - polling\n", 1289 1290 (void *)_RET_IP_);
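The SCMI fix above addresses a classic polling race: the wait can end for three reasons (poll succeeded, out-of-order completion fired, deadline passed), and deciding "timed out" by re-reading the clock afterwards misreports a completion that landed right at the deadline. The fix records which condition actually terminated the loop. A condensed sketch of that shape; names and the boolean xfer model are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

struct scmi_demo_xfer {
	bool poll_done;		/* transport's poll_done() result */
	bool completed;		/* out-of-order completion already fired */
};

/* One evaluation of the spin_until_cond() predicate: note *ooo latches
 * whether the completion, specifically, ended the wait. */
static bool scmi_demo_wait_step(struct scmi_demo_xfer *x, bool expired,
				bool *ooo)
{
	return x->poll_done || (*ooo = x->completed) || expired;
}

/* After the wait: only a real timeout if neither exit reason was success. */
static bool scmi_demo_timed_out(const struct scmi_demo_xfer *x, bool ooo)
{
	return !ooo && !x->poll_done;
}
```

Checking `!ooo && !poll_done()` instead of `ktime_after(ktime_get(), stop)` is exactly the substitution the hunk makes.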
-2
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 1614 1614 #if defined(CONFIG_ACPI) && defined(CONFIG_SUSPEND) 1615 1615 bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev); 1616 1616 bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev); 1617 - void amdgpu_choose_low_power_state(struct amdgpu_device *adev); 1618 1617 #else 1619 1618 static inline bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev) { return false; } 1620 1619 static inline bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev) { return false; } 1621 - static inline void amdgpu_choose_low_power_state(struct amdgpu_device *adev) { } 1622 1620 #endif 1623 1621 1624 1622 void amdgpu_register_gpu_instance(struct amdgpu_device *adev);
-18
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
··· 1533 1533 #endif /* CONFIG_AMD_PMC */ 1534 1534 } 1535 1535 1536 - /** 1537 - * amdgpu_choose_low_power_state 1538 - * 1539 - * @adev: amdgpu_device_pointer 1540 - * 1541 - * Choose the target low power state for the GPU 1542 - */ 1543 - void amdgpu_choose_low_power_state(struct amdgpu_device *adev) 1544 - { 1545 - if (adev->in_runpm) 1546 - return; 1547 - 1548 - if (amdgpu_acpi_is_s0ix_active(adev)) 1549 - adev->in_s0ix = true; 1550 - else if (amdgpu_acpi_is_s3_active(adev)) 1551 - adev->in_s3 = true; 1552 - } 1553 - 1554 1536 #endif /* CONFIG_SUSPEND */
+7 -22
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 4907 4907 * @data: data 4908 4908 * 4909 4909 * This function is called when the system is about to suspend or hibernate. 4910 - * It is used to evict resources from the device before the system goes to 4911 - * sleep while there is still access to swap. 4910 + * It is used to set the appropriate flags so that eviction can be optimized 4911 + * in the pm prepare callback. 4912 4912 */ 4913 4913 static int amdgpu_device_pm_notifier(struct notifier_block *nb, unsigned long mode, 4914 4914 void *data) 4915 4915 { 4916 4916 struct amdgpu_device *adev = container_of(nb, struct amdgpu_device, pm_nb); 4917 - int r; 4918 4917 4919 4918 switch (mode) { 4920 4919 case PM_HIBERNATION_PREPARE: 4921 4920 adev->in_s4 = true; 4922 - fallthrough; 4923 - case PM_SUSPEND_PREPARE: 4924 - r = amdgpu_device_evict_resources(adev); 4925 - /* 4926 - * This is considered non-fatal at this time because 4927 - * amdgpu_device_prepare() will also fatally evict resources. 4928 - * See https://gitlab.freedesktop.org/drm/amd/-/issues/3781 4929 - */ 4930 - if (r) 4931 - drm_warn(adev_to_drm(adev), "Failed to evict resources, freeze active processes if problems occur: %d\n", r); 4921 + break; 4922 + case PM_POST_HIBERNATION: 4923 + adev->in_s4 = false; 4932 4924 break; 4933 4925 } 4934 4926 ··· 4941 4949 struct amdgpu_device *adev = drm_to_adev(dev); 4942 4950 int i, r; 4943 4951 4944 - amdgpu_choose_low_power_state(adev); 4945 - 4946 4952 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) 4947 4953 return 0; 4948 4954 4949 4955 /* Evict the majority of BOs before starting suspend sequence */ 4950 4956 r = amdgpu_device_evict_resources(adev); 4951 4957 if (r) 4952 - goto unprepare; 4958 + return r; 4953 4959 4954 4960 flush_delayed_work(&adev->gfx.gfx_off_delay_work); 4955 4961 ··· 4958 4968 continue; 4959 4969 r = adev->ip_blocks[i].version->funcs->prepare_suspend(&adev->ip_blocks[i]); 4960 4970 if (r) 4961 - goto unprepare; 4971 + return r; 4962 4972 } 4963 4973 4964 4974 return 0; 4965 - 
4966 - unprepare: 4967 - adev->in_s0ix = adev->in_s3 = adev->in_s4 = false; 4968 - 4969 - return r; 4970 4975 } 4971 4976 4972 4977 /**
+1 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 2615 2615 static int amdgpu_pmops_thaw(struct device *dev) 2616 2616 { 2617 2617 struct drm_device *drm_dev = dev_get_drvdata(dev); 2618 - struct amdgpu_device *adev = drm_to_adev(drm_dev); 2619 - int r; 2620 2618 2621 - r = amdgpu_device_resume(drm_dev, true); 2622 - adev->in_s4 = false; 2623 - 2624 - return r; 2619 + return amdgpu_device_resume(drm_dev, true); 2625 2620 } 2626 2621 2627 2622 static int amdgpu_pmops_poweroff(struct device *dev) ··· 2629 2634 static int amdgpu_pmops_restore(struct device *dev) 2630 2635 { 2631 2636 struct drm_device *drm_dev = dev_get_drvdata(dev); 2632 - struct amdgpu_device *adev = drm_to_adev(drm_dev); 2633 - 2634 - adev->in_s4 = false; 2635 2637 2636 2638 return amdgpu_device_resume(drm_dev, true); 2637 2639 }
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
··· 66 66 #define VCN_ENC_CMD_REG_WAIT 0x0000000c 67 67 68 68 #define VCN_AON_SOC_ADDRESS_2_0 0x1f800 69 - #define VCN1_AON_SOC_ADDRESS_3_0 0x48000 70 69 #define VCN_VID_IP_ADDRESS_2_0 0x0 71 70 #define VCN_AON_IP_ADDRESS_2_0 0x30000 72 71
+6 -1
drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
··· 41 41 { 42 42 if (!ring || !ring->funcs->emit_wreg) { 43 43 WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 44 - RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2); 44 + /* We just need to read back a register to post the write. 45 + * Reading back the remapped register causes problems on 46 + * some platforms so just read back the memory size register. 47 + */ 48 + if (adev->nbio.funcs->get_memsize) 49 + adev->nbio.funcs->get_memsize(adev); 45 50 } else { 46 51 amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 47 52 }
+6 -1
drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
··· 32 32 { 33 33 if (!ring || !ring->funcs->emit_wreg) { 34 34 WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 35 - RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2); 35 + /* We just need to read back a register to post the write. 36 + * Reading back the remapped register causes problems on 37 + * some platforms so just read back the memory size register. 38 + */ 39 + if (adev->nbio.funcs->get_memsize) 40 + adev->nbio.funcs->get_memsize(adev); 36 41 } else { 37 42 amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 38 43 }
+11 -1
drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
··· 33 33 if (!ring || !ring->funcs->emit_wreg) { 34 34 WREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 35 35 0); 36 - RREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2); 36 + if (amdgpu_sriov_vf(adev)) { 37 + /* this is fine because SR_IOV doesn't remap the register */ 38 + RREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2); 39 + } else { 40 + /* We just need to read back a register to post the write. 41 + * Reading back the remapped register causes problems on 42 + * some platforms so just read back the memory size register. 43 + */ 44 + if (adev->nbio.funcs->get_memsize) 45 + adev->nbio.funcs->get_memsize(adev); 46 + } 37 47 } else { 38 48 amdgpu_ring_emit_wreg(ring, 39 49 (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+6 -1
drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
··· 35 35 { 36 36 if (!ring || !ring->funcs->emit_wreg) { 37 37 WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 38 - RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2); 38 + /* We just need to read back a register to post the write. 39 + * Reading back the remapped register causes problems on 40 + * some platforms so just read back the memory size register. 41 + */ 42 + if (adev->nbio.funcs->get_memsize) 43 + adev->nbio.funcs->get_memsize(adev); 39 44 } else { 40 45 amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 41 46 }
+6 -1
drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
··· 32 32 { 33 33 if (!ring || !ring->funcs->emit_wreg) { 34 34 WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 35 - RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2); 35 + /* We just need to read back a register to post the write. 36 + * Reading back the remapped register causes problems on 37 + * some platforms so just read back the memory size register. 38 + */ 39 + if (adev->nbio.funcs->get_memsize) 40 + adev->nbio.funcs->get_memsize(adev); 36 41 } else { 37 42 amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0); 38 43 }
+1
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
··· 39 39 40 40 #define VCN_VID_SOC_ADDRESS_2_0 0x1fa00 41 41 #define VCN1_VID_SOC_ADDRESS_3_0 0x48200 42 + #define VCN1_AON_SOC_ADDRESS_3_0 0x48000 42 43 43 44 #define mmUVD_CONTEXT_ID_INTERNAL_OFFSET 0x1fd 44 45 #define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET 0x503
+1
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
··· 39 39 40 40 #define VCN_VID_SOC_ADDRESS_2_0 0x1fa00 41 41 #define VCN1_VID_SOC_ADDRESS_3_0 0x48200 42 + #define VCN1_AON_SOC_ADDRESS_3_0 0x48000 42 43 43 44 #define mmUVD_CONTEXT_ID_INTERNAL_OFFSET 0x27 44 45 #define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET 0x0f
+1
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
··· 40 40 41 41 #define VCN_VID_SOC_ADDRESS_2_0 0x1fa00 42 42 #define VCN1_VID_SOC_ADDRESS_3_0 0x48200 43 + #define VCN1_AON_SOC_ADDRESS_3_0 0x48000 43 44 44 45 #define mmUVD_CONTEXT_ID_INTERNAL_OFFSET 0x27 45 46 #define mmUVD_GPCOM_VCPU_CMD_INTERNAL_OFFSET 0x0f
+3 -1
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
··· 46 46 47 47 #define VCN_VID_SOC_ADDRESS_2_0 0x1fb00 48 48 #define VCN1_VID_SOC_ADDRESS_3_0 0x48300 49 + #define VCN1_AON_SOC_ADDRESS_3_0 0x48000 49 50 50 51 #define VCN_HARVEST_MMSCH 0 51 52 ··· 615 614 616 615 /* VCN global tiling registers */ 617 616 WREG32_SOC15_DPG_MODE(inst_idx, SOC15_DPG_MODE_OFFSET( 618 - VCN, 0, regUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect); 617 + VCN, inst_idx, regUVD_GFX10_ADDR_CONFIG), 618 + adev->gfx.config.gb_addr_config, 0, indirect); 619 619 } 620 620 621 621 /**
+1
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
··· 45 45 46 46 #define VCN_VID_SOC_ADDRESS_2_0 0x1fb00 47 47 #define VCN1_VID_SOC_ADDRESS_3_0 0x48300 48 + #define VCN1_AON_SOC_ADDRESS_3_0 0x48000 48 49 49 50 static const struct amdgpu_hwip_reg_entry vcn_reg_list_4_0_3[] = { 50 51 SOC15_REG_ENTRY_STR(VCN, 0, regUVD_POWER_STATUS),
+1
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
··· 46 46 47 47 #define VCN_VID_SOC_ADDRESS_2_0 0x1fb00 48 48 #define VCN1_VID_SOC_ADDRESS_3_0 (0x48300 + 0x38000) 49 + #define VCN1_AON_SOC_ADDRESS_3_0 (0x48000 + 0x38000) 49 50 50 51 #define VCN_HARVEST_MMSCH 0 51 52
+2 -1
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
··· 533 533 534 534 /* VCN global tiling registers */ 535 535 WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( 536 - VCN, 0, regUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect); 536 + VCN, inst_idx, regUVD_GFX10_ADDR_CONFIG), 537 + adev->gfx.config.gb_addr_config, 0, indirect); 537 538 538 539 return; 539 540 }
+17 -19
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 673 673 spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags); 674 674 675 675 if (acrtc->dm_irq_params.stream && 676 - acrtc->dm_irq_params.vrr_params.supported && 677 - acrtc->dm_irq_params.freesync_config.state == 678 - VRR_STATE_ACTIVE_VARIABLE) { 676 + acrtc->dm_irq_params.vrr_params.supported) { 677 + bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled; 678 + bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled; 679 + bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE; 680 + 679 681 mod_freesync_handle_v_update(adev->dm.freesync_module, 680 682 acrtc->dm_irq_params.stream, 681 683 &acrtc->dm_irq_params.vrr_params); 682 684 683 - dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream, 684 - &acrtc->dm_irq_params.vrr_params.adjust); 685 + /* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */ 686 + if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) { 687 + dc_stream_adjust_vmin_vmax(adev->dm.dc, 688 + acrtc->dm_irq_params.stream, 689 + &acrtc->dm_irq_params.vrr_params.adjust); 690 + } 685 691 } 686 692 687 693 /* ··· 12749 12743 * Transient states before tunneling is enabled could 12750 12744 * lead to this error. We can ignore this for now. 12751 12745 */ 12752 - if (p_notify->result != AUX_RET_ERROR_PROTOCOL_ERROR) { 12746 + if (p_notify->result == AUX_RET_ERROR_PROTOCOL_ERROR) { 12753 12747 DRM_WARN("DPIA AUX failed on 0x%x(%d), error %d\n", 12754 12748 payload->address, payload->length, 12755 12749 p_notify->result); ··· 12758 12752 goto out; 12759 12753 } 12760 12754 12755 + payload->reply[0] = adev->dm.dmub_notify->aux_reply.command & 0xF; 12756 + if (adev->dm.dmub_notify->aux_reply.command & 0xF0) 12757 + /* The reply is stored in the top nibble of the command. 
*/ 12758 + payload->reply[0] = (adev->dm.dmub_notify->aux_reply.command >> 4) & 0xF; 12761 12759 12762 - payload->reply[0] = adev->dm.dmub_notify->aux_reply.command; 12763 - if (!payload->write && p_notify->aux_reply.length && 12764 - (payload->reply[0] == AUX_TRANSACTION_REPLY_AUX_ACK)) { 12765 - 12766 - if (payload->length != p_notify->aux_reply.length) { 12767 - DRM_WARN("invalid read length %d from DPIA AUX 0x%x(%d)!\n", 12768 - p_notify->aux_reply.length, 12769 - payload->address, payload->length); 12770 - *operation_result = AUX_RET_ERROR_INVALID_REPLY; 12771 - goto out; 12772 - } 12773 - 12760 + if (!payload->write && p_notify->aux_reply.length) 12774 12761 memcpy(payload->data, p_notify->aux_reply.data, 12775 12762 p_notify->aux_reply.length); 12776 - } 12777 12763 12778 12764 /* success */ 12779 12765 ret = p_notify->aux_reply.length;
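As read from the hunk above, the DMUB reply command packs a native-AUX reply code in the low nibble and, when the high nibble is non-zero, a reply in the high nibble that takes precedence. A hypothetical decoding helper mirroring that logic (the helper name and the protocol interpretation beyond what the diff shows are assumptions):

```c
#include <assert.h>
#include <stdint.h>

static uint8_t aux_demo_decode_reply(uint8_t command)
{
	uint8_t reply = command & 0xF;

	/* The high nibble, when set, carries the reply instead. */
	if (command & 0xF0)
		reply = (command >> 4) & 0xF;

	return reply;
}
```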
+24 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 51 51 52 52 #define PEAK_FACTOR_X1000 1006 53 53 54 + /* 55 + * This function handles both native AUX and I2C-Over-AUX transactions. 56 + */ 54 57 static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux, 55 58 struct drm_dp_aux_msg *msg) 56 59 { ··· 90 87 if (adev->dm.aux_hpd_discon_quirk) { 91 88 if (msg->address == DP_SIDEBAND_MSG_DOWN_REQ_BASE && 92 89 operation_result == AUX_RET_ERROR_HPD_DISCON) { 93 - result = 0; 90 + result = msg->size; 94 91 operation_result = AUX_RET_SUCCESS; 95 92 } 96 93 } 97 94 98 - if (payload.write && result >= 0) 99 - result = msg->size; 95 + /* 96 + * result equals to 0 includes the cases of AUX_DEFER/I2C_DEFER 97 + */ 98 + if (payload.write && result >= 0) { 99 + if (result) { 100 + /*one byte indicating partially written bytes. Force 0 to retry*/ 101 + drm_info(adev_to_drm(adev), "amdgpu: AUX partially written\n"); 102 + result = 0; 103 + } else if (!payload.reply[0]) 104 + /*I2C_ACK|AUX_ACK*/ 105 + result = msg->size; 106 + } 100 107 101 - if (result < 0) 108 + if (result < 0) { 102 109 switch (operation_result) { 103 110 case AUX_RET_SUCCESS: 104 111 break; ··· 126 113 result = -ETIMEDOUT; 127 114 break; 128 115 } 116 + 117 + drm_info(adev_to_drm(adev), "amdgpu: DP AUX transfer fail:%d\n", operation_result); 118 + } 119 + 120 + if (payload.reply[0]) 121 + drm_info(adev_to_drm(adev), "amdgpu: AUX reply command not ACK: 0x%02x.", 122 + payload.reply[0]); 129 123 130 124 return result; 131 125 }
+4 -4
drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
··· 234 234 if (!result) 235 235 return false; 236 236 237 + DC_FP_START(); 237 238 result = dml2_build_mode_programming(mode_programming); 239 + DC_FP_END(); 238 240 if (!result) 239 241 return false; 240 242 ··· 279 277 mode_support->dml2_instance = dml_init->dml2_instance; 280 278 dml21_map_dc_state_into_dml_display_cfg(in_dc, context, dml_ctx); 281 279 dml_ctx->v21.mode_programming.dml2_instance->scratch.build_mode_programming_locals.mode_programming_params.programming = dml_ctx->v21.mode_programming.programming; 280 + DC_FP_START(); 282 281 is_supported = dml2_check_mode_supported(mode_support); 282 + DC_FP_END(); 283 283 if (!is_supported) 284 284 return false; 285 285 ··· 292 288 { 293 289 bool out = false; 294 290 295 - DC_FP_START(); 296 - 297 291 /* Use dml_validate_only for fast_validate path */ 298 292 if (fast_validate) 299 293 out = dml21_check_mode_support(in_dc, context, dml_ctx); 300 294 else 301 295 out = dml21_mode_check_and_programming(in_dc, context, dml_ctx); 302 - 303 - DC_FP_END(); 304 296 305 297 return out; 306 298 }
+5 -9
drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
··· 973 973 } 974 974 } 975 975 976 - static void get_scaler_data_for_plane(const struct dc_plane_state *in, struct dc_state *context, struct scaler_data *out) 976 + static struct scaler_data *get_scaler_data_for_plane( 977 + const struct dc_plane_state *in, 978 + struct dc_state *context) 977 979 { 978 980 int i; 979 981 struct pipe_ctx *temp_pipe = &context->res_ctx.temp_pipe; ··· 996 994 } 997 995 998 996 ASSERT(i < MAX_PIPES); 999 - memcpy(out, &temp_pipe->plane_res.scl_data, sizeof(*out)); 997 + return &temp_pipe->plane_res.scl_data; 1000 998 } 1001 999 1002 1000 static void populate_dummy_dml_plane_cfg(struct dml_plane_cfg_st *out, unsigned int location, ··· 1059 1057 const struct dc_plane_state *in, struct dc_state *context, 1060 1058 const struct soc_bounding_box_st *soc) 1061 1059 { 1062 - struct scaler_data *scaler_data = kzalloc(sizeof(*scaler_data), GFP_KERNEL); 1063 - if (!scaler_data) 1064 - return; 1065 - 1066 - get_scaler_data_for_plane(in, context, scaler_data); 1060 + struct scaler_data *scaler_data = get_scaler_data_for_plane(in, context); 1067 1061 1068 1062 out->CursorBPP[location] = dml_cur_32bit; 1069 1063 out->CursorWidth[location] = 256; ··· 1124 1126 out->DynamicMetadataTransmittedBytes[location] = 0; 1125 1127 1126 1128 out->NumberOfCursors[location] = 1; 1127 - 1128 - kfree(scaler_data); 1129 1129 } 1130 1130 1131 1131 static unsigned int map_stream_to_dml_display_cfg(const struct dml2_context *dml2,
-6
drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
··· 2114 2114 #define REG_STRUCT dccg_regs 2115 2115 dccg_regs_init(); 2116 2116 2117 - DC_FP_START(); 2118 - 2119 2117 ctx->dc_bios->regs = &bios_regs; 2120 2118 2121 2119 pool->base.res_cap = &res_cap_dcn32; ··· 2499 2501 if (ASICREV_IS_GC_11_0_3(dc->ctx->asic_id.hw_internal_rev) && (dc->config.sdpif_request_limit_words_per_umc == 0)) 2500 2502 dc->config.sdpif_request_limit_words_per_umc = 16; 2501 2503 2502 - DC_FP_END(); 2503 - 2504 2504 return true; 2505 2505 2506 2506 create_fail: 2507 - 2508 - DC_FP_END(); 2509 2507 2510 2508 dcn32_resource_destruct(pool); 2511 2509
+1 -1
drivers/gpu/drm/drm_drv.c
··· 549 549 if (drm_WARN_ONCE(dev, !recovery, "invalid recovery method %u\n", opt)) 550 550 break; 551 551 552 - len += scnprintf(event_string + len, sizeof(event_string), "%s,", recovery); 552 + len += scnprintf(event_string + len, sizeof(event_string) - len, "%s,", recovery); 553 553 } 554 554 555 555 if (recovery)
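The drm_drv.c one-liner fixes the classic append-into-a-fixed-buffer bug: the size argument must shrink by what was already written (`sizeof(buf) - len`), or each call can overrun past the accumulated offset. A userspace sketch using snprintf (scnprintf differs only in returning the bytes actually written on truncation, so `len` never advances past the buffer; the helper name is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static size_t drm_demo_append(char *buf, size_t size,
			      const char **items, size_t n)
{
	size_t len = 0;

	/* Each append starts at buf + len with only size - len bytes left. */
	for (size_t i = 0; i < n && len < size; i++)
		len += snprintf(buf + len, size - len, "%s,", items[i]);

	return len;
}
```

With the buggy form (`size` instead of `size - len`) every iteration believes the full buffer is available, which is what the fix above corrects.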
+1 -1
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 242 242 to_intel_connector(conn_state->connector); 243 243 const struct drm_display_mode *adjusted_mode = 244 244 &crtc_state->hw.adjusted_mode; 245 - bool is_mst = intel_dp->is_mst; 245 + bool is_mst = intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST); 246 246 int bpp_x16, slots = -EINVAL; 247 247 int dsc_slice_count = 0; 248 248 int max_dpt_bpp_x16;
+11 -3
drivers/gpu/drm/i915/gt/intel_rps.c
··· 1001 1001 if (rps_uses_slpc(rps)) { 1002 1002 slpc = rps_to_slpc(rps); 1003 1003 1004 + /* Don't decrement num_waiters for req where increment was skipped */ 1005 + if (slpc->power_profile == SLPC_POWER_PROFILES_POWER_SAVING) 1006 + return; 1007 + 1004 1008 intel_guc_slpc_dec_waiters(slpc); 1005 1009 } else { 1006 1010 atomic_dec(&rps->num_waiters); ··· 1033 1029 if (slpc->power_profile == SLPC_POWER_PROFILES_POWER_SAVING) 1034 1030 return; 1035 1031 1036 - if (slpc->min_freq_softlimit >= slpc->boost_freq) 1037 - return; 1038 - 1039 1032 /* Return if old value is non zero */ 1040 1033 if (!atomic_fetch_inc(&slpc->num_waiters)) { 1034 + /* 1035 + * Skip queuing boost work if frequency is already boosted, 1036 + * but still increment num_waiters. 1037 + */ 1038 + if (slpc->min_freq_softlimit >= slpc->boost_freq) 1039 + return; 1040 + 1041 1041 GT_TRACE(rps_to_gt(rps), "boost fence:%llx:%llx\n", 1042 1042 rq->fence.context, rq->fence.seqno); 1043 1043 queue_work(rps_to_gt(rps)->i915->unordered_wq,
+13 -12
drivers/gpu/drm/panel/panel-simple.c
··· 1027 1027 }, 1028 1028 }; 1029 1029 1030 - static const struct drm_display_mode auo_g101evn010_mode = { 1031 - .clock = 68930, 1032 - .hdisplay = 1280, 1033 - .hsync_start = 1280 + 82, 1034 - .hsync_end = 1280 + 82 + 2, 1035 - .htotal = 1280 + 82 + 2 + 84, 1036 - .vdisplay = 800, 1037 - .vsync_start = 800 + 8, 1038 - .vsync_end = 800 + 8 + 2, 1039 - .vtotal = 800 + 8 + 2 + 6, 1030 + static const struct display_timing auo_g101evn010_timing = { 1031 + .pixelclock = { 64000000, 68930000, 85000000 }, 1032 + .hactive = { 1280, 1280, 1280 }, 1033 + .hfront_porch = { 8, 64, 256 }, 1034 + .hback_porch = { 8, 64, 256 }, 1035 + .hsync_len = { 40, 168, 767 }, 1036 + .vactive = { 800, 800, 800 }, 1037 + .vfront_porch = { 4, 8, 100 }, 1038 + .vback_porch = { 4, 8, 100 }, 1039 + .vsync_len = { 8, 16, 223 }, 1040 1040 }; 1041 1041 1042 1042 static const struct panel_desc auo_g101evn010 = { 1043 - .modes = &auo_g101evn010_mode, 1044 - .num_modes = 1, 1043 + .timings = &auo_g101evn010_timing, 1044 + .num_timings = 1, 1045 1045 .bpc = 6, 1046 1046 .size = { 1047 1047 .width = 216, 1048 1048 .height = 135, 1049 1049 }, 1050 1050 .bus_format = MEDIA_BUS_FMT_RGB666_1X7X3_SPWG, 1051 + .bus_flags = DRM_BUS_FLAG_DE_HIGH, 1051 1052 .connector_type = DRM_MODE_CONNECTOR_LVDS, 1052 1053 }; 1053 1054
+12 -32
drivers/gpu/drm/ttm/ttm_backup.c
··· 8 8 #include <linux/swap.h> 9 9 10 10 /* 11 - * Casting from randomized struct file * to struct ttm_backup * is fine since 12 - * struct ttm_backup is never defined nor dereferenced. 13 - */ 14 - static struct file *ttm_backup_to_file(struct ttm_backup *backup) 15 - { 16 - return (void *)backup; 17 - } 18 - 19 - static struct ttm_backup *ttm_file_to_backup(struct file *file) 20 - { 21 - return (void *)file; 22 - } 23 - 24 - /* 25 11 * Need to map shmem indices to handle since a handle value 26 12 * of 0 means error, following the swp_entry_t convention. 27 13 */ ··· 26 40 * @backup: The struct backup pointer used to obtain the handle 27 41 * @handle: The handle obtained from the @backup_page function. 28 42 */ 29 - void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle) 43 + void ttm_backup_drop(struct file *backup, pgoff_t handle) 30 44 { 31 45 loff_t start = ttm_backup_handle_to_shmem_idx(handle); 32 46 33 47 start <<= PAGE_SHIFT; 34 - shmem_truncate_range(file_inode(ttm_backup_to_file(backup)), start, 48 + shmem_truncate_range(file_inode(backup), start, 35 49 start + PAGE_SIZE - 1); 36 50 } 37 51 ··· 41 55 * @backup: The struct backup pointer used to back up the page. 42 56 * @dst: The struct page to copy into. 43 57 * @handle: The handle returned when the page was backed up. 44 - * @intr: Try to perform waits interruptable or at least killable. 58 + * @intr: Try to perform waits interruptible or at least killable. 45 59 * 46 60 * Return: 0 on success, Negative error code on failure, notably 47 61 * -EINTR if @intr was set to true and a signal is pending. 
48 62 */ 49 - int ttm_backup_copy_page(struct ttm_backup *backup, struct page *dst, 63 + int ttm_backup_copy_page(struct file *backup, struct page *dst, 50 64 pgoff_t handle, bool intr) 51 65 { 52 - struct file *filp = ttm_backup_to_file(backup); 53 - struct address_space *mapping = filp->f_mapping; 66 + struct address_space *mapping = backup->f_mapping; 54 67 struct folio *from_folio; 55 68 pgoff_t idx = ttm_backup_handle_to_shmem_idx(handle); 56 69 ··· 91 106 * the folio size- and usage. 92 107 */ 93 108 s64 94 - ttm_backup_backup_page(struct ttm_backup *backup, struct page *page, 109 + ttm_backup_backup_page(struct file *backup, struct page *page, 95 110 bool writeback, pgoff_t idx, gfp_t page_gfp, 96 111 gfp_t alloc_gfp) 97 112 { 98 - struct file *filp = ttm_backup_to_file(backup); 99 - struct address_space *mapping = filp->f_mapping; 113 + struct address_space *mapping = backup->f_mapping; 100 114 unsigned long handle = 0; 101 115 struct folio *to_folio; 102 116 int ret; ··· 145 161 * 146 162 * After a call to this function, it's illegal to use the @backup pointer. 147 163 */ 148 - void ttm_backup_fini(struct ttm_backup *backup) 164 + void ttm_backup_fini(struct file *backup) 149 165 { 150 - fput(ttm_backup_to_file(backup)); 166 + fput(backup); 151 167 } 152 168 153 169 /** ··· 178 194 * 179 195 * Create a backup utilizing shmem objects. 180 196 * 181 - * Return: A pointer to a struct ttm_backup on success, 197 + * Return: A pointer to a struct file on success, 182 198 * an error pointer on error. 183 199 */ 184 - struct ttm_backup *ttm_backup_shmem_create(loff_t size) 200 + struct file *ttm_backup_shmem_create(loff_t size) 185 201 { 186 - struct file *filp; 187 - 188 - filp = shmem_file_setup("ttm shmem backup", size, 0); 189 - 190 - return ttm_file_to_backup(filp); 202 + return shmem_file_setup("ttm shmem backup", size, 0); 191 203 }
+3 -3
drivers/gpu/drm/ttm/ttm_pool.c
··· 506 506 * if successful, populate the page-table and dma-address arrays. 507 507 */ 508 508 static int ttm_pool_restore_commit(struct ttm_pool_tt_restore *restore, 509 - struct ttm_backup *backup, 509 + struct file *backup, 510 510 const struct ttm_operation_ctx *ctx, 511 511 struct ttm_pool_alloc_state *alloc) 512 512 ··· 655 655 pgoff_t start_page, pgoff_t end_page) 656 656 { 657 657 struct page **pages = &tt->pages[start_page]; 658 - struct ttm_backup *backup = tt->backup; 658 + struct file *backup = tt->backup; 659 659 pgoff_t i, nr; 660 660 661 661 for (i = start_page; i < end_page; i += nr, pages += nr) { ··· 963 963 long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt, 964 964 const struct ttm_backup_flags *flags) 965 965 { 966 - struct ttm_backup *backup = tt->backup; 966 + struct file *backup = tt->backup; 967 967 struct page *page; 968 968 unsigned long handle; 969 969 gfp_t alloc_gfp;
+1 -1
drivers/gpu/drm/ttm/ttm_tt.c
··· 544 544 */ 545 545 int ttm_tt_setup_backup(struct ttm_tt *tt) 546 546 { 547 - struct ttm_backup *backup = 547 + struct file *backup = 548 548 ttm_backup_shmem_create(((loff_t)tt->num_pages) << PAGE_SHIFT); 549 549 550 550 if (WARN_ON_ONCE(!(tt->page_flags & TTM_TT_FLAG_EXTERNAL_MAPPABLE)))
+21 -7
drivers/gpu/drm/v3d/v3d_sched.c
··· 744 744 return DRM_GPU_SCHED_STAT_NOMINAL; 745 745 } 746 746 747 - /* If the current address or return address have changed, then the GPU 748 - * has probably made progress and we should delay the reset. This 749 - * could fail if the GPU got in an infinite loop in the CL, but that 750 - * is pretty unlikely outside of an i-g-t testcase. 751 - */ 747 + static void 748 + v3d_sched_skip_reset(struct drm_sched_job *sched_job) 749 + { 750 + struct drm_gpu_scheduler *sched = sched_job->sched; 751 + 752 + spin_lock(&sched->job_list_lock); 753 + list_add(&sched_job->list, &sched->pending_list); 754 + spin_unlock(&sched->job_list_lock); 755 + } 756 + 752 757 static enum drm_gpu_sched_stat 753 758 v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q, 754 759 u32 *timedout_ctca, u32 *timedout_ctra) ··· 763 758 u32 ctca = V3D_CORE_READ(0, V3D_CLE_CTNCA(q)); 764 759 u32 ctra = V3D_CORE_READ(0, V3D_CLE_CTNRA(q)); 765 760 761 + /* If the current address or return address have changed, then the GPU 762 + * has probably made progress and we should delay the reset. This 763 + * could fail if the GPU got in an infinite loop in the CL, but that 764 + * is pretty unlikely outside of an i-g-t testcase. 765 + */ 766 766 if (*timedout_ctca != ctca || *timedout_ctra != ctra) { 767 767 *timedout_ctca = ctca; 768 768 *timedout_ctra = ctra; 769 + 770 + v3d_sched_skip_reset(sched_job); 769 771 return DRM_GPU_SCHED_STAT_NOMINAL; 770 772 } 771 773 ··· 812 800 struct v3d_dev *v3d = job->base.v3d; 813 801 u32 batches = V3D_CORE_READ(0, V3D_CSD_CURRENT_CFG4(v3d->ver)); 814 802 815 - /* If we've made progress, skip reset and let the timer get 816 - * rearmed. 803 + /* If we've made progress, skip reset, add the job to the pending 804 + * list, and let the timer get rearmed. 817 805 */ 818 806 if (job->timedout_batches != batches) { 819 807 job->timedout_batches = batches; 808 + 809 + v3d_sched_skip_reset(sched_job); 820 810 return DRM_GPU_SCHED_STAT_NOMINAL; 821 811 } 822 812
+5 -2
drivers/gpu/drm/xe/tests/xe_mocs.c
··· 46 46 unsigned int fw_ref, i; 47 47 u32 reg_val; 48 48 49 - fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT); 50 - KUNIT_ASSERT_NE_MSG(test, fw_ref, 0, "Forcewake Failed.\n"); 49 + fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL); 50 + if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) { 51 + xe_force_wake_put(gt_to_fw(gt), fw_ref); 52 + KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n"); 53 + } 51 54 52 55 for (i = 0; i < info->num_mocs_regs; i++) { 53 56 if (!(i & 1)) {
+22
drivers/gpu/drm/xe/xe_gsc.c
··· 555 555 flush_work(&gsc->work); 556 556 } 557 557 558 + void xe_gsc_stop_prepare(struct xe_gsc *gsc) 559 + { 560 + struct xe_gt *gt = gsc_to_gt(gsc); 561 + int ret; 562 + 563 + if (!xe_uc_fw_is_loadable(&gsc->fw) || xe_uc_fw_is_in_error_state(&gsc->fw)) 564 + return; 565 + 566 + xe_force_wake_assert_held(gt_to_fw(gt), XE_FW_GSC); 567 + 568 + /* 569 + * If the GSC FW load or the proxy init are interrupted, the only way 570 + * to recover it is to do an FLR and reload the GSC from scratch. 571 + * Therefore, let's wait for the init to complete before stopping 572 + * operations. The proxy init is the last step, so we can just wait on 573 + * that 574 + */ 575 + ret = xe_gsc_wait_for_proxy_init_done(gsc); 576 + if (ret) 577 + xe_gt_err(gt, "failed to wait for GSC init completion before uc stop\n"); 578 + } 579 + 558 580 /* 559 581 * wa_14015076503: if the GSC FW is loaded, we need to alert it before doing a 560 582 * GSC engine reset by writing a notification bit in the GS1 register and then
+1
drivers/gpu/drm/xe/xe_gsc.h
··· 16 16 int xe_gsc_init(struct xe_gsc *gsc); 17 17 int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc); 18 18 void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc); 19 + void xe_gsc_stop_prepare(struct xe_gsc *gsc); 19 20 void xe_gsc_load_start(struct xe_gsc *gsc); 20 21 void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec); 21 22
+11
drivers/gpu/drm/xe/xe_gsc_proxy.c
··· 71 71 HECI1_FWSTS1_PROXY_STATE_NORMAL; 72 72 } 73 73 74 + int xe_gsc_wait_for_proxy_init_done(struct xe_gsc *gsc) 75 + { 76 + struct xe_gt *gt = gsc_to_gt(gsc); 77 + 78 + /* Proxy init can take up to 500ms, so wait double that for safety */ 79 + return xe_mmio_wait32(&gt->mmio, HECI_FWSTS1(MTL_GSC_HECI1_BASE), 80 + HECI1_FWSTS1_CURRENT_STATE, 81 + HECI1_FWSTS1_PROXY_STATE_NORMAL, 82 + USEC_PER_SEC, NULL, false); 83 + } 84 + 74 85 static void __gsc_proxy_irq_rmw(struct xe_gsc *gsc, u32 clr, u32 set) 75 86 { 76 87 struct xe_gt *gt = gsc_to_gt(gsc);
+1
drivers/gpu/drm/xe/xe_gsc_proxy.h
··· 12 12 13 13 int xe_gsc_proxy_init(struct xe_gsc *gsc); 14 14 bool xe_gsc_proxy_init_done(struct xe_gsc *gsc); 15 + int xe_gsc_wait_for_proxy_init_done(struct xe_gsc *gsc); 15 16 int xe_gsc_proxy_start(struct xe_gsc *gsc); 16 17 17 18 int xe_gsc_proxy_request_handler(struct xe_gsc *gsc);
+1 -1
drivers/gpu/drm/xe/xe_gt.c
··· 857 857 858 858 fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL); 859 859 860 - xe_uc_stop_prepare(&gt->uc); 860 + xe_uc_suspend_prepare(&gt->uc); 861 861 862 862 xe_force_wake_put(gt_to_fw(gt), fw_ref); 863 863 }
+5 -4
drivers/gpu/drm/xe/xe_gt_debugfs.c
··· 92 92 struct xe_hw_engine *hwe; 93 93 enum xe_hw_engine_id id; 94 94 unsigned int fw_ref; 95 + int ret = 0; 95 96 96 97 xe_pm_runtime_get(xe); 97 98 fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL); 98 99 if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) { 99 - xe_pm_runtime_put(xe); 100 - xe_force_wake_put(gt_to_fw(gt), fw_ref); 101 - return -ETIMEDOUT; 100 + ret = -ETIMEDOUT; 101 + goto fw_put; 102 102 } 103 103 104 104 for_each_hw_engine(hwe, gt, id) 105 105 xe_hw_engine_print(hwe, p); 106 106 107 + fw_put: 107 108 xe_force_wake_put(gt_to_fw(gt), fw_ref); 108 109 xe_pm_runtime_put(xe); 109 110 110 - return 0; 111 + return ret; 111 112 } 112 113 113 114 static int powergate_info(struct xe_gt *gt, struct drm_printer *p)
+9 -2
drivers/gpu/drm/xe/xe_gt_pagefault.c
··· 435 435 num_eus = bitmap_weight(gt->fuse_topo.eu_mask_per_dss, 436 436 XE_MAX_EU_FUSE_BITS) * num_dss; 437 437 438 - /* user can issue separate page faults per EU and per CS */ 438 + /* 439 + * user can issue separate page faults per EU and per CS 440 + * 441 + * XXX: Multiplier required as compute UMD are getting PF queue errors 442 + * without it. Follow on why this multiplier is required. 443 + */ 444 + #define PF_MULTIPLIER 8 439 445 pf_queue->num_dw = 440 - (num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW; 446 + (num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW * PF_MULTIPLIER; 447 + #undef PF_MULTIPLIER 441 448 442 449 pf_queue->gt = gt; 443 450 pf_queue->data = devm_kcalloc(xe->drm.dev, pf_queue->num_dw,
+12
drivers/gpu/drm/xe/xe_svm.c
··· 947 947 return 0; 948 948 } 949 949 #endif 950 + 951 + /** 952 + * xe_svm_flush() - SVM flush 953 + * @vm: The VM. 954 + * 955 + * Flush all SVM actions. 956 + */ 957 + void xe_svm_flush(struct xe_vm *vm) 958 + { 959 + if (xe_vm_in_fault_mode(vm)) 960 + flush_work(&vm->svm.garbage_collector.work); 961 + }
+8
drivers/gpu/drm/xe/xe_svm.h
··· 72 72 int xe_svm_bo_evict(struct xe_bo *bo); 73 73 74 74 void xe_svm_range_debug(struct xe_svm_range *range, const char *operation); 75 + 76 + void xe_svm_flush(struct xe_vm *vm); 77 + 75 78 #else 76 79 static inline bool xe_svm_range_pages_valid(struct xe_svm_range *range) 77 80 { ··· 127 124 void xe_svm_range_debug(struct xe_svm_range *range, const char *operation) 128 125 { 129 126 } 127 + 128 + static inline void xe_svm_flush(struct xe_vm *vm) 129 + { 130 + } 131 + 130 132 #endif 131 133 132 134 /**
+7 -1
drivers/gpu/drm/xe/xe_uc.c
··· 244 244 245 245 void xe_uc_stop_prepare(struct xe_uc *uc) 246 246 { 247 - xe_gsc_wait_for_worker_completion(&uc->gsc); 247 + xe_gsc_stop_prepare(&uc->gsc); 248 248 xe_guc_stop_prepare(&uc->guc); 249 249 } 250 250 ··· 276 276 ret = xe_uc_reset_prepare(uc); 277 277 if (ret) 278 278 goto again; 279 + } 280 + 281 + void xe_uc_suspend_prepare(struct xe_uc *uc) 282 + { 283 + xe_gsc_wait_for_worker_completion(&uc->gsc); 284 + xe_guc_stop_prepare(&uc->guc); 279 285 } 280 286 281 287 int xe_uc_suspend(struct xe_uc *uc)
+1
drivers/gpu/drm/xe/xe_uc.h
··· 18 18 void xe_uc_stop_prepare(struct xe_uc *uc); 19 19 void xe_uc_stop(struct xe_uc *uc); 20 20 int xe_uc_start(struct xe_uc *uc); 21 + void xe_uc_suspend_prepare(struct xe_uc *uc); 21 22 int xe_uc_suspend(struct xe_uc *uc); 22 23 int xe_uc_sanitize_reset(struct xe_uc *uc); 23 24 void xe_uc_declare_wedged(struct xe_uc *uc);
+1 -2
drivers/gpu/drm/xe/xe_vm.c
··· 3312 3312 } 3313 3313 3314 3314 /* Ensure all UNMAPs visible */ 3315 - if (xe_vm_in_fault_mode(vm)) 3316 - flush_work(&vm->svm.garbage_collector.work); 3315 + xe_svm_flush(vm); 3317 3316 3318 3317 err = down_write_killable(&vm->lock); 3319 3318 if (err)
+1 -1
drivers/gpu/nova-core/gpu.rs
··· 93 93 // For now, redirect to fmt::Debug for convenience. 94 94 impl fmt::Display for Chipset { 95 95 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 96 - write!(f, "{:?}", self) 96 + write!(f, "{self:?}") 97 97 } 98 98 } 99 99
+6
drivers/hv/hyperv_vmbus.h
··· 477 477 478 478 #endif /* CONFIG_HYPERV_TESTING */ 479 479 480 + /* Create and remove sysfs entry for memory mapped ring buffers for a channel */ 481 + int hv_create_ring_sysfs(struct vmbus_channel *channel, 482 + int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel, 483 + struct vm_area_struct *vma)); 484 + int hv_remove_ring_sysfs(struct vmbus_channel *channel); 485 + 480 486 #endif /* _HYPERV_VMBUS_H */
+108 -1
drivers/hv/vmbus_drv.c
··· 1802 1802 } 1803 1803 static VMBUS_CHAN_ATTR_RO(subchannel_id); 1804 1804 1805 + static int hv_mmap_ring_buffer_wrapper(struct file *filp, struct kobject *kobj, 1806 + const struct bin_attribute *attr, 1807 + struct vm_area_struct *vma) 1808 + { 1809 + struct vmbus_channel *channel = container_of(kobj, struct vmbus_channel, kobj); 1810 + 1811 + /* 1812 + * hv_(create|remove)_ring_sysfs implementation ensures that mmap_ring_buffer 1813 + * is not NULL. 1814 + */ 1815 + return channel->mmap_ring_buffer(channel, vma); 1816 + } 1817 + 1818 + static struct bin_attribute chan_attr_ring_buffer = { 1819 + .attr = { 1820 + .name = "ring", 1821 + .mode = 0600, 1822 + }, 1823 + .mmap = hv_mmap_ring_buffer_wrapper, 1824 + }; 1805 1825 static struct attribute *vmbus_chan_attrs[] = { 1806 1826 &chan_attr_out_mask.attr, 1807 1827 &chan_attr_in_mask.attr, ··· 1838 1818 &chan_attr_out_full_total.attr, 1839 1819 &chan_attr_monitor_id.attr, 1840 1820 &chan_attr_subchannel_id.attr, 1821 + NULL 1822 + }; 1823 + 1824 + static struct bin_attribute *vmbus_chan_bin_attrs[] = { 1825 + &chan_attr_ring_buffer, 1841 1826 NULL 1842 1827 }; 1843 1828 ··· 1866 1841 return attr->mode; 1867 1842 } 1868 1843 1844 + static umode_t vmbus_chan_bin_attr_is_visible(struct kobject *kobj, 1845 + const struct bin_attribute *attr, int idx) 1846 + { 1847 + const struct vmbus_channel *channel = 1848 + container_of(kobj, struct vmbus_channel, kobj); 1849 + 1850 + /* Hide ring attribute if channel's ring_sysfs_visible is set to false */ 1851 + if (attr == &chan_attr_ring_buffer && !channel->ring_sysfs_visible) 1852 + return 0; 1853 + 1854 + return attr->attr.mode; 1855 + } 1856 + 1857 + static size_t vmbus_chan_bin_size(struct kobject *kobj, 1858 + const struct bin_attribute *bin_attr, int a) 1859 + { 1860 + const struct vmbus_channel *channel = 1861 + container_of(kobj, struct vmbus_channel, kobj); 1862 + 1863 + return channel->ringbuffer_pagecount << PAGE_SHIFT; 1864 + } 1865 + 1869 1866 static const 
struct attribute_group vmbus_chan_group = { 1870 1867 .attrs = vmbus_chan_attrs, 1871 - .is_visible = vmbus_chan_attr_is_visible 1868 + .bin_attrs = vmbus_chan_bin_attrs, 1869 + .is_visible = vmbus_chan_attr_is_visible, 1870 + .is_bin_visible = vmbus_chan_bin_attr_is_visible, 1871 + .bin_size = vmbus_chan_bin_size, 1872 1872 }; 1873 1873 1874 1874 static const struct kobj_type vmbus_chan_ktype = { 1875 1875 .sysfs_ops = &vmbus_chan_sysfs_ops, 1876 1876 .release = vmbus_chan_release, 1877 1877 }; 1878 + 1879 + /** 1880 + * hv_create_ring_sysfs() - create "ring" sysfs entry corresponding to ring buffers for a channel. 1881 + * @channel: Pointer to vmbus_channel structure 1882 + * @hv_mmap_ring_buffer: function pointer for initializing the function to be called on mmap of 1883 + * channel's "ring" sysfs node, which is for the ring buffer of that channel. 1884 + * Function pointer is of below type: 1885 + * int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel, 1886 + * struct vm_area_struct *vma)) 1887 + * This has a pointer to the channel and a pointer to vm_area_struct, 1888 + * used for mmap, as arguments. 1889 + * 1890 + * Sysfs node for ring buffer of a channel is created along with other fields, however its 1891 + * visibility is disabled by default. Sysfs creation needs to be controlled when the use-case 1892 + * is running. 1893 + * For example, HV_NIC device is used either by uio_hv_generic or hv_netvsc at any given point of 1894 + * time, and "ring" sysfs is needed only when uio_hv_generic is bound to that device. To avoid 1895 + * exposing the ring buffer by default, this function is reponsible to enable visibility of 1896 + * ring for userspace to use. 1897 + * Note: Race conditions can happen with userspace and it is not encouraged to create new 1898 + * use-cases for this. This was added to maintain backward compatibility, while solving 1899 + * one of the race conditions in uio_hv_generic while creating sysfs. 
1900 + * 1901 + * Returns 0 on success or error code on failure. 1902 + */ 1903 + int hv_create_ring_sysfs(struct vmbus_channel *channel, 1904 + int (*hv_mmap_ring_buffer)(struct vmbus_channel *channel, 1905 + struct vm_area_struct *vma)) 1906 + { 1907 + struct kobject *kobj = &channel->kobj; 1908 + 1909 + channel->mmap_ring_buffer = hv_mmap_ring_buffer; 1910 + channel->ring_sysfs_visible = true; 1911 + 1912 + return sysfs_update_group(kobj, &vmbus_chan_group); 1913 + } 1914 + EXPORT_SYMBOL_GPL(hv_create_ring_sysfs); 1915 + 1916 + /** 1917 + * hv_remove_ring_sysfs() - remove ring sysfs entry corresponding to ring buffers for a channel. 1918 + * @channel: Pointer to vmbus_channel structure 1919 + * 1920 + * Hide "ring" sysfs for a channel by changing its is_visible attribute and updating sysfs group. 1921 + * 1922 + * Returns 0 on success or error code on failure. 1923 + */ 1924 + int hv_remove_ring_sysfs(struct vmbus_channel *channel) 1925 + { 1926 + struct kobject *kobj = &channel->kobj; 1927 + int ret; 1928 + 1929 + channel->ring_sysfs_visible = false; 1930 + ret = sysfs_update_group(kobj, &vmbus_chan_group); 1931 + channel->mmap_ring_buffer = NULL; 1932 + return ret; 1933 + } 1934 + EXPORT_SYMBOL_GPL(hv_remove_ring_sysfs); 1878 1935 1879 1936 /* 1880 1937 * vmbus_add_channel_kobj - setup a sub-directory under device/channels
+1 -1
drivers/i2c/busses/i2c-omap.c
··· 1454 1454 (1000 * omap->speed / 8); 1455 1455 } 1456 1456 1457 - if (of_property_read_bool(node, "mux-states")) { 1457 + if (of_property_present(node, "mux-states")) { 1458 1458 struct mux_state *mux_state; 1459 1459 1460 1460 mux_state = devm_mux_state_get(&pdev->dev, NULL);
+2 -2
drivers/iio/accel/adis16201.c
··· 211 211 BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14), 212 212 ADIS_AUX_ADC_CHAN(ADIS16201_AUX_ADC_REG, ADIS16201_SCAN_AUX_ADC, 0, 12), 213 213 ADIS_INCLI_CHAN(X, ADIS16201_XINCL_OUT_REG, ADIS16201_SCAN_INCLI_X, 214 - BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14), 214 + BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 12), 215 215 ADIS_INCLI_CHAN(Y, ADIS16201_YINCL_OUT_REG, ADIS16201_SCAN_INCLI_Y, 216 - BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 14), 216 + BIT(IIO_CHAN_INFO_CALIBBIAS), 0, 12), 217 217 IIO_CHAN_SOFT_TIMESTAMP(7) 218 218 }; 219 219
+1 -1
drivers/iio/accel/adxl355_core.c
··· 231 231 u8 transf_buf[3]; 232 232 struct { 233 233 u8 buf[14]; 234 - s64 ts; 234 + aligned_s64 ts; 235 235 } buffer; 236 236 } __aligned(IIO_DMA_MINALIGN); 237 237 };
+3 -7
drivers/iio/accel/adxl367.c
··· 601 601 if (ret) 602 602 return ret; 603 603 604 + st->odr = odr; 605 + 604 606 /* Activity timers depend on ODR */ 605 607 ret = _adxl367_set_act_time_ms(st, st->act_time_ms); 606 608 if (ret) 607 609 return ret; 608 610 609 - ret = _adxl367_set_inact_time_ms(st, st->inact_time_ms); 610 - if (ret) 611 - return ret; 612 - 613 - st->odr = odr; 614 - 615 - return 0; 611 + return _adxl367_set_inact_time_ms(st, st->inact_time_ms); 616 612 } 617 613 618 614 static int adxl367_set_odr(struct iio_dev *indio_dev, enum adxl367_odr odr)
+5 -2
drivers/iio/accel/fxls8962af-core.c
··· 1226 1226 if (ret) 1227 1227 return ret; 1228 1228 1229 - if (device_property_read_bool(dev, "wakeup-source")) 1230 - device_init_wakeup(dev, true); 1229 + if (device_property_read_bool(dev, "wakeup-source")) { 1230 + ret = devm_device_init_wakeup(dev); 1231 + if (ret) 1232 + return dev_err_probe(dev, ret, "Failed to init wakeup\n"); 1233 + } 1231 1234 1232 1235 return devm_iio_device_register(dev, indio_dev); 1233 1236 }
+1 -1
drivers/iio/adc/ad7266.c
··· 45 45 */ 46 46 struct { 47 47 __be16 sample[2]; 48 - s64 timestamp; 48 + aligned_s64 timestamp; 49 49 } data __aligned(IIO_DMA_MINALIGN); 50 50 }; 51 51
+22 -10
drivers/iio/adc/ad7380.c
··· 1211 1211 struct ad7380_state *st = iio_priv(indio_dev); 1212 1212 int ret; 1213 1213 1214 + spi_offload_trigger_disable(st->offload, st->offload_trigger); 1215 + spi_unoptimize_message(&st->offload_msg); 1216 + 1214 1217 if (st->seq) { 1215 1218 ret = regmap_update_bits(st->regmap, 1216 1219 AD7380_REG_ADDR_CONFIG1, ··· 1224 1221 1225 1222 st->seq = false; 1226 1223 } 1227 - 1228 - spi_offload_trigger_disable(st->offload, st->offload_trigger); 1229 - 1230 - spi_unoptimize_message(&st->offload_msg); 1231 1224 1232 1225 return 0; 1233 1226 } ··· 1610 1611 return ret; 1611 1612 } 1612 1613 1613 - static int ad7380_get_alert_th(struct ad7380_state *st, 1614 + static int ad7380_get_alert_th(struct iio_dev *indio_dev, 1615 + const struct iio_chan_spec *chan, 1614 1616 enum iio_event_direction dir, 1615 1617 int *val) 1616 1618 { 1617 - int ret, tmp; 1619 + struct ad7380_state *st = iio_priv(indio_dev); 1620 + const struct iio_scan_type *scan_type; 1621 + int ret, tmp, shift; 1622 + 1623 + scan_type = iio_get_current_scan_type(indio_dev, chan); 1624 + if (IS_ERR(scan_type)) 1625 + return PTR_ERR(scan_type); 1626 + 1627 + /* 1628 + * The register value is 12-bits and is compared to the most significant 1629 + * bits of raw value, therefore a shift is required to convert this to 1630 + * the same scale as the raw value. 
1631 + */ 1632 + shift = scan_type->realbits - 12; 1618 1633 1619 1634 switch (dir) { 1620 1635 case IIO_EV_DIR_RISING: ··· 1638 1625 if (ret) 1639 1626 return ret; 1640 1627 1641 - *val = FIELD_GET(AD7380_ALERT_HIGH_TH, tmp); 1628 + *val = FIELD_GET(AD7380_ALERT_HIGH_TH, tmp) << shift; 1642 1629 return IIO_VAL_INT; 1643 1630 case IIO_EV_DIR_FALLING: 1644 1631 ret = regmap_read(st->regmap, ··· 1647 1634 if (ret) 1648 1635 return ret; 1649 1636 1650 - *val = FIELD_GET(AD7380_ALERT_LOW_TH, tmp); 1637 + *val = FIELD_GET(AD7380_ALERT_LOW_TH, tmp) << shift; 1651 1638 return IIO_VAL_INT; 1652 1639 default: 1653 1640 return -EINVAL; ··· 1661 1648 enum iio_event_info info, 1662 1649 int *val, int *val2) 1663 1650 { 1664 - struct ad7380_state *st = iio_priv(indio_dev); 1665 1651 int ret; 1666 1652 1667 1653 switch (info) { ··· 1668 1656 if (!iio_device_claim_direct(indio_dev)) 1669 1657 return -EBUSY; 1670 1658 1671 - ret = ad7380_get_alert_th(st, dir, val); 1659 + ret = ad7380_get_alert_th(indio_dev, chan, dir, val); 1672 1660 1673 1661 iio_device_release_direct(indio_dev); 1674 1662 return ret;
+8 -3
drivers/iio/adc/ad7606.c
··· 1236 1236 st->write_scale = ad7616_write_scale_sw; 1237 1237 st->write_os = &ad7616_write_os_sw; 1238 1238 1239 - ret = st->bops->sw_mode_config(indio_dev); 1240 - if (ret) 1241 - return ret; 1239 + if (st->bops->sw_mode_config) { 1240 + ret = st->bops->sw_mode_config(indio_dev); 1241 + if (ret) 1242 + return ret; 1243 + } 1242 1244 1243 1245 /* Activate Burst mode and SEQEN MODE */ 1244 1246 return ad7606_write_mask(st, AD7616_CONFIGURATION_REGISTER, ··· 1269 1267 1270 1268 st->write_scale = ad7606_write_scale_sw; 1271 1269 st->write_os = &ad7606_write_os_sw; 1270 + 1271 + if (!st->bops->sw_mode_config) 1272 + return 0; 1272 1273 1273 1274 return st->bops->sw_mode_config(indio_dev); 1274 1275 }
+1 -1
drivers/iio/adc/ad7606_spi.c
··· 131 131 { 132 132 .tx_buf = &st->d16[0], 133 133 .len = 2, 134 - .cs_change = 0, 134 + .cs_change = 1, 135 135 }, { 136 136 .rx_buf = &st->d16[1], 137 137 .len = 2,
+1 -1
drivers/iio/adc/ad7768-1.c
··· 168 168 union { 169 169 struct { 170 170 __be32 chan; 171 - s64 timestamp; 171 + aligned_s64 timestamp; 172 172 } scan; 173 173 __be32 d32; 174 174 u8 d8[2];
+1 -1
drivers/iio/adc/dln2-adc.c
··· 466 466 struct iio_dev *indio_dev = pf->indio_dev; 467 467 struct { 468 468 __le16 values[DLN2_ADC_MAX_CHANNELS]; 469 - int64_t timestamp_space; 469 + aligned_s64 timestamp_space; 470 470 } data; 471 471 struct dln2_adc_get_all_vals dev_data; 472 472 struct dln2_adc *dln2 = iio_priv(indio_dev);
+3 -1
drivers/iio/adc/qcom-spmi-iadc.c
··· 543 543 else 544 544 return ret; 545 545 } else { 546 - device_init_wakeup(iadc->dev, 1); 546 + ret = devm_device_init_wakeup(iadc->dev); 547 + if (ret) 548 + return dev_err_probe(iadc->dev, ret, "Failed to init wakeup\n"); 547 549 } 548 550 549 551 ret = iadc_update_offset(iadc);
+8 -9
drivers/iio/adc/rockchip_saradc.c
··· 520 520 if (info->reset) 521 521 rockchip_saradc_reset_controller(info->reset); 522 522 523 - /* 524 - * Use a default value for the converter clock. 525 - * This may become user-configurable in the future. 526 - */ 527 - ret = clk_set_rate(info->clk, info->data->clk_rate); 528 - if (ret < 0) 529 - return dev_err_probe(&pdev->dev, ret, 530 - "failed to set adc clk rate\n"); 531 - 532 523 ret = regulator_enable(info->vref); 533 524 if (ret < 0) 534 525 return dev_err_probe(&pdev->dev, ret, ··· 546 555 if (IS_ERR(info->clk)) 547 556 return dev_err_probe(&pdev->dev, PTR_ERR(info->clk), 548 557 "failed to get adc clock\n"); 558 + /* 559 + * Use a default value for the converter clock. 560 + * This may become user-configurable in the future. 561 + */ 562 + ret = clk_set_rate(info->clk, info->data->clk_rate); 563 + if (ret < 0) 564 + return dev_err_probe(&pdev->dev, ret, 565 + "failed to set adc clk rate\n"); 549 566 550 567 platform_set_drvdata(pdev, indio_dev); 551 568
+3 -2
drivers/iio/chemical/pms7003.c
··· 5 5 * Copyright (c) Tomasz Duszynski <tduszyns@gmail.com> 6 6 */ 7 7 8 - #include <linux/unaligned.h> 9 8 #include <linux/completion.h> 10 9 #include <linux/device.h> 11 10 #include <linux/errno.h> ··· 18 19 #include <linux/module.h> 19 20 #include <linux/mutex.h> 20 21 #include <linux/serdev.h> 22 + #include <linux/types.h> 23 + #include <linux/unaligned.h> 21 24 22 25 #define PMS7003_DRIVER_NAME "pms7003" 23 26 ··· 77 76 /* Used to construct scan to push to the IIO buffer */ 78 77 struct { 79 78 u16 data[3]; /* PM1, PM2P5, PM10 */ 80 - s64 ts; 79 + aligned_s64 ts; 81 80 } scan; 82 81 }; 83 82
+1 -1
drivers/iio/chemical/sps30.c
··· 108 108 int ret; 109 109 struct { 110 110 s32 data[4]; /* PM1, PM2P5, PM4, PM10 */ 111 - s64 ts; 111 + aligned_s64 ts; 112 112 } scan; 113 113 114 114 mutex_lock(&state->lock);
+4
drivers/iio/common/hid-sensors/hid-sensor-attributes.c
··· 66 66 {HID_USAGE_SENSOR_HUMIDITY, 0, 1000, 0}, 67 67 {HID_USAGE_SENSOR_HINGE, 0, 0, 17453293}, 68 68 {HID_USAGE_SENSOR_HINGE, HID_USAGE_SENSOR_UNITS_DEGREES, 0, 17453293}, 69 + 70 + {HID_USAGE_SENSOR_HUMAN_PRESENCE, 0, 1, 0}, 71 + {HID_USAGE_SENSOR_HUMAN_PROXIMITY, 0, 1, 0}, 72 + {HID_USAGE_SENSOR_HUMAN_ATTENTION, 0, 1, 0}, 69 73 }; 70 74 71 75 static void simple_div(int dividend, int divisor, int *whole,
+1 -1
drivers/iio/imu/adis16550.c
··· 836 836 u16 dummy; 837 837 bool valid; 838 838 struct iio_poll_func *pf = p; 839 - __be32 data[ADIS16550_MAX_SCAN_DATA]; 839 + __be32 data[ADIS16550_MAX_SCAN_DATA] __aligned(8); 840 840 struct iio_dev *indio_dev = pf->indio_dev; 841 841 struct adis16550 *st = iio_priv(indio_dev); 842 842 struct adis *adis = iio_device_get_drvdata(indio_dev);
+2 -4
drivers/iio/imu/bmi270/bmi270_core.c
··· 918 918 FIELD_PREP(BMI270_ACC_CONF_ODR_MSK, 919 919 BMI270_ACC_CONF_ODR_100HZ) | 920 920 FIELD_PREP(BMI270_ACC_CONF_BWP_MSK, 921 - BMI270_ACC_CONF_BWP_NORMAL_MODE) | 922 - BMI270_PWR_CONF_ADV_PWR_SAVE_MSK); 921 + BMI270_ACC_CONF_BWP_NORMAL_MODE)); 923 922 if (ret) 924 923 return dev_err_probe(dev, ret, "Failed to configure accelerometer"); 925 924 ··· 926 927 FIELD_PREP(BMI270_GYR_CONF_ODR_MSK, 927 928 BMI270_GYR_CONF_ODR_200HZ) | 928 929 FIELD_PREP(BMI270_GYR_CONF_BWP_MSK, 929 - BMI270_GYR_CONF_BWP_NORMAL_MODE) | 930 - BMI270_PWR_CONF_ADV_PWR_SAVE_MSK); 930 + BMI270_GYR_CONF_BWP_NORMAL_MODE)); 931 931 if (ret) 932 932 return dev_err_probe(dev, ret, "Failed to configure gyroscope"); 933 933
+1 -1
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
··· 50 50 u16 fifo_count; 51 51 u32 fifo_period; 52 52 s64 timestamp; 53 - u8 data[INV_MPU6050_OUTPUT_DATA_SIZE]; 53 + u8 data[INV_MPU6050_OUTPUT_DATA_SIZE] __aligned(8); 54 54 size_t i, nb; 55 55 56 56 mutex_lock(&st->lock);
+6
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
··· 392 392 if (fifo_status & cpu_to_le16(ST_LSM6DSX_FIFO_EMPTY_MASK)) 393 393 return 0; 394 394 395 + if (!pattern_len) 396 + pattern_len = ST_LSM6DSX_SAMPLE_SIZE; 397 + 395 398 fifo_len = (le16_to_cpu(fifo_status) & fifo_diff_mask) * 396 399 ST_LSM6DSX_CHAN_SIZE; 397 400 fifo_len = (fifo_len / pattern_len) * pattern_len; ··· 625 622 ST_LSM6DSX_TAGGED_SAMPLE_SIZE; 626 623 if (!fifo_len) 627 624 return 0; 625 + 626 + if (!pattern_len) 627 + pattern_len = ST_LSM6DSX_TAGGED_SAMPLE_SIZE; 628 628 629 629 for (read_len = 0; read_len < fifo_len; read_len += pattern_len) { 630 630 err = st_lsm6dsx_read_block(hw,
+5 -2
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
··· 2719 2719 } 2720 2720 2721 2721 if (device_property_read_bool(dev, "wakeup-source") || 2722 - (pdata && pdata->wakeup_source)) 2723 - device_init_wakeup(dev, true); 2722 + (pdata && pdata->wakeup_source)) { 2723 + err = devm_device_init_wakeup(dev); 2724 + if (err) 2725 + return dev_err_probe(dev, err, "Failed to init wakeup\n"); 2726 + } 2724 2727 2725 2728 return 0; 2726 2729 }
+14 -8
drivers/iio/light/hid-sensor-prox.c
··· 34 34 struct iio_chan_spec channels[MAX_CHANNELS]; 35 35 u32 channel2usage[MAX_CHANNELS]; 36 36 u32 human_presence[MAX_CHANNELS]; 37 - int scale_pre_decml; 38 - int scale_post_decml; 39 - int scale_precision; 37 + int scale_pre_decml[MAX_CHANNELS]; 38 + int scale_post_decml[MAX_CHANNELS]; 39 + int scale_precision[MAX_CHANNELS]; 40 40 unsigned long scan_mask[2]; /* One entry plus one terminator. */ 41 41 int num_channels; 42 42 }; ··· 116 116 ret_type = IIO_VAL_INT; 117 117 break; 118 118 case IIO_CHAN_INFO_SCALE: 119 - *val = prox_state->scale_pre_decml; 120 - *val2 = prox_state->scale_post_decml; 121 - ret_type = prox_state->scale_precision; 119 + if (chan->scan_index >= prox_state->num_channels) 120 + return -EINVAL; 121 + 122 + *val = prox_state->scale_pre_decml[chan->scan_index]; 123 + *val2 = prox_state->scale_post_decml[chan->scan_index]; 124 + ret_type = prox_state->scale_precision[chan->scan_index]; 122 125 break; 123 126 case IIO_CHAN_INFO_OFFSET: 124 - *val = hid_sensor_convert_exponent( 125 - prox_state->prox_attr[chan->scan_index].unit_expo); 127 + *val = 0; 126 128 ret_type = IIO_VAL_INT; 127 129 break; 128 130 case IIO_CHAN_INFO_SAMP_FREQ: ··· 251 249 st->prox_attr[index].size); 252 250 dev_dbg(&pdev->dev, "prox %x:%x\n", st->prox_attr[index].index, 253 251 st->prox_attr[index].report_id); 252 + st->scale_precision[index] = 253 + hid_sensor_format_scale(usage_id, &st->prox_attr[index], 254 + &st->scale_pre_decml[index], 255 + &st->scale_post_decml[index]); 254 256 index++; 255 257 } 256 258
+3 -2
drivers/iio/light/opt3001.c
··· 788 788 int ret; 789 789 bool wake_result_ready_queue = false; 790 790 enum iio_chan_type chan_type = opt->chip_info->chan_type; 791 + bool ok_to_ignore_lock = opt->ok_to_ignore_lock; 791 792 792 - if (!opt->ok_to_ignore_lock) 793 + if (!ok_to_ignore_lock) 793 794 mutex_lock(&opt->lock); 794 795 795 796 ret = i2c_smbus_read_word_swapped(opt->client, OPT3001_CONFIGURATION); ··· 827 826 } 828 827 829 828 out: 830 - if (!opt->ok_to_ignore_lock) 829 + if (!ok_to_ignore_lock) 831 830 mutex_unlock(&opt->lock); 832 831 833 832 if (wake_result_ready_queue)
+6 -11
drivers/iio/pressure/mprls0025pa.h
··· 34 34 struct mpr_data; 35 35 struct mpr_ops; 36 36 37 - /** 38 - * struct mpr_chan 39 - * @pres: pressure value 40 - * @ts: timestamp 41 - */ 42 - struct mpr_chan { 43 - s32 pres; 44 - s64 ts; 45 - }; 46 - 47 37 enum mpr_func_id { 48 38 MPR_FUNCTION_A, 49 39 MPR_FUNCTION_B, ··· 59 69 * reading in a loop until data is ready 60 70 * @completion: handshake from irq to read 61 71 * @chan: channel values for buffered mode 72 + * @chan.pres: pressure value 73 + * @chan.ts: timestamp 62 74 * @buffer: raw conversion data 63 75 */ 64 76 struct mpr_data { ··· 79 87 struct gpio_desc *gpiod_reset; 80 88 int irq; 81 89 struct completion completion; 82 - struct mpr_chan chan; 90 + struct { 91 + s32 pres; 92 + aligned_s64 ts; 93 + } chan; 83 94 u8 buffer[MPR_MEASUREMENT_RD_SIZE] __aligned(IIO_DMA_MINALIGN); 84 95 }; 85 96
+1 -1
drivers/iio/temperature/maxim_thermocouple.c
··· 121 121 struct maxim_thermocouple_data { 122 122 struct spi_device *spi; 123 123 const struct maxim_thermocouple_chip *chip; 124 + char tc_type; 124 125 125 126 u8 buffer[16] __aligned(IIO_DMA_MINALIGN); 126 - char tc_type; 127 127 }; 128 128 129 129 static int maxim_thermocouple_read(struct maxim_thermocouple_data *data,
+1 -1
drivers/input/joystick/magellan.c
··· 48 48 49 49 static int magellan_crunch_nibbles(unsigned char *data, int count) 50 50 { 51 - static unsigned char nibbles[16] __nonstring = "0AB3D56GH9:K<MN?"; 51 + static const unsigned char nibbles[16] __nonstring = "0AB3D56GH9:K<MN?"; 52 52 53 53 do { 54 54 if (data[count] == nibbles[data[count] & 0xf])
+31 -18
drivers/input/joystick/xpad.c
··· 77 77 * xbox d-pads should map to buttons, as is required for DDR pads 78 78 * but we map them to axes when possible to simplify things 79 79 */ 80 - #define MAP_DPAD_TO_BUTTONS (1 << 0) 81 - #define MAP_TRIGGERS_TO_BUTTONS (1 << 1) 82 - #define MAP_STICKS_TO_NULL (1 << 2) 83 - #define MAP_SELECT_BUTTON (1 << 3) 84 - #define MAP_PADDLES (1 << 4) 85 - #define MAP_PROFILE_BUTTON (1 << 5) 80 + #define MAP_DPAD_TO_BUTTONS BIT(0) 81 + #define MAP_TRIGGERS_TO_BUTTONS BIT(1) 82 + #define MAP_STICKS_TO_NULL BIT(2) 83 + #define MAP_SHARE_BUTTON BIT(3) 84 + #define MAP_PADDLES BIT(4) 85 + #define MAP_PROFILE_BUTTON BIT(5) 86 + #define MAP_SHARE_OFFSET BIT(6) 86 87 87 88 #define DANCEPAD_MAP_CONFIG (MAP_DPAD_TO_BUTTONS | \ 88 89 MAP_TRIGGERS_TO_BUTTONS | MAP_STICKS_TO_NULL) ··· 136 135 { 0x03f0, 0x048D, "HyperX Clutch", 0, XTYPE_XBOX360 }, /* wireless */ 137 136 { 0x03f0, 0x0495, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE }, 138 137 { 0x03f0, 0x07A0, "HyperX Clutch Gladiate RGB", 0, XTYPE_XBOXONE }, 139 - { 0x03f0, 0x08B6, "HyperX Clutch Gladiate", 0, XTYPE_XBOXONE }, /* v2 */ 138 + { 0x03f0, 0x08B6, "HyperX Clutch Gladiate", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, /* v2 */ 140 139 { 0x03f0, 0x09B4, "HyperX Clutch Tanto", 0, XTYPE_XBOXONE }, 141 140 { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 142 141 { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 143 142 { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX }, 144 - { 0x044f, 0xd01e, "ThrustMaster, Inc. ESWAP X 2 ELDEN RING EDITION", 0, XTYPE_XBOXONE }, 145 143 { 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX }, 146 144 { 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 }, 145 { 0x044f, 0xd01e, "ThrustMaster, Inc. ESWAP X 2 ELDEN RING EDITION", 0, XTYPE_XBOXONE }, 147 146 { 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX }, 148 147 { 0x045e, 0x0285, "Microsoft X-Box pad (Japan)", 0, XTYPE_XBOX }, 149 148 { 0x045e, 0x0287, "Microsoft Xbox Controller S", 0, XTYPE_XBOX }, ··· 160 159 { 0x045e, 0x0719, "Xbox 360 Wireless Receiver", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W }, 161 160 { 0x045e, 0x0b00, "Microsoft X-Box One Elite 2 pad", MAP_PADDLES, XTYPE_XBOXONE }, 162 161 { 0x045e, 0x0b0a, "Microsoft X-Box Adaptive Controller", MAP_PROFILE_BUTTON, XTYPE_XBOXONE }, 163 - { 0x045e, 0x0b12, "Microsoft Xbox Series S|X Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE }, 162 + { 0x045e, 0x0b12, "Microsoft Xbox Series S|X Controller", MAP_SHARE_BUTTON | MAP_SHARE_OFFSET, XTYPE_XBOXONE }, 164 163 { 0x046d, 0xc21d, "Logitech Gamepad F310", 0, XTYPE_XBOX360 }, 165 164 { 0x046d, 0xc21e, "Logitech Gamepad F510", 0, XTYPE_XBOX360 }, 166 165 { 0x046d, 0xc21f, "Logitech Gamepad F710", 0, XTYPE_XBOX360 }, ··· 206 205 { 0x0738, 0x9871, "Mad Catz Portable Drum", 0, XTYPE_XBOX360 }, 207 206 { 0x0738, 0xb726, "Mad Catz Xbox controller - MW2", 0, XTYPE_XBOX360 }, 208 207 { 0x0738, 0xb738, "Mad Catz MVC2TE Stick 2", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 209 - { 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", XTYPE_XBOX360 }, 208 + { 0x0738, 0xbeef, "Mad Catz JOYTECH NEO SE Advanced GamePad", 0, XTYPE_XBOX360 }, 210 209 { 0x0738, 0xcb02, "Saitek Cyborg Rumble Pad - PC/Xbox 360", 0, XTYPE_XBOX360 }, 211 210 { 0x0738, 0xcb03, "Saitek P3200 Rumble Pad - PC/Xbox 360", 0, XTYPE_XBOX360 }, 212 211 { 0x0738, 0xcb29, "Saitek Aviator Stick AV8R02", 0, XTYPE_XBOX360 }, 213 212 { 0x0738, 0xf738, "Super SFIV FightStick TE S", 0, XTYPE_XBOX360 }, 214 213 { 0x07ff, 0xffff, "Mad Catz GamePad", 0, XTYPE_XBOX360 }, 215 214 { 0x0b05, 0x1a38, "ASUS ROG RAIKIRI", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 216 215 { 0x0b05, 0x1abb, "ASUS ROG RAIKIRI PRO", 0, XTYPE_XBOXONE }, 217 216 { 0x0c12, 0x0005, "Intec wireless", 0, XTYPE_XBOX }, 218 217 { 0x0c12, 0x8801, "Nyko Xbox Controller", 0, XTYPE_XBOX }, ··· 241 240 { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE }, 242 241 { 0x0e6f, 0x0147, "PDP Marvel Xbox One Controller", 0, XTYPE_XBOXONE }, 243 242 { 0x0e6f, 0x015c, "PDP Xbox One Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 244 - { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", XTYPE_XBOXONE }, 243 + { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", 0, XTYPE_XBOXONE }, 245 244 { 0x0e6f, 0x0161, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, 246 245 { 0x0e6f, 0x0162, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, 247 246 { 0x0e6f, 0x0163, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ··· 282 281 { 0x0f0d, 0x00dc, "HORIPAD FPS for Nintendo Switch", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 283 282 { 0x0f0d, 0x0151, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, 284 283 { 0x0f0d, 0x0152, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, 284 + { 0x0f0d, 0x01b2, "HORI Taiko No Tatsujin Drum Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 285 285 { 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX }, 286 286 { 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX }, 287 287 { 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX }, ··· 355 353 { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, 356 354 { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE }, 357 355 { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 }, 356 + { 0x20d6, 0x400b, "PowerA FUSION Pro 4 Wired Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 357 + { 0x20d6, 0x890b, "PowerA MOGA XP-Ultra Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 358 358 { 0x2345, 0xe00b, "Machenike G5 Pro Controller", 0, XTYPE_XBOX360 }, 359 359 { 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 360 360 { 0x24c6, 0x5300, "PowerA MINI PROEX Controller", 0, XTYPE_XBOX360 }, ··· 388 384 { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE }, 389 385 { 0x2993, 0x2001, "TECNO Pocket Go", 0, XTYPE_XBOX360 }, 390 386 { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE }, 387 + { 0x2dc8, 0x200f, "8BitDo Ultimate 3-mode Controller for Xbox", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 391 388 { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 }, 392 389 { 0x2dc8, 0x3109, "8BitDo Ultimate Wireless Bluetooth", 0, XTYPE_XBOX360 }, 393 390 { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 }, 391 + { 0x2dc8, 0x310b, "8BitDo Ultimate 2 Wireless Controller", 0, XTYPE_XBOX360 }, 394 392 { 0x2dc8, 0x6001, "8BitDo SN30 Pro", 0, XTYPE_XBOX360 }, 393 + { 0x2e24, 0x0423, "Hyperkin DuchesS Xbox One pad", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 395 394 { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, 396 395 { 0x2e24, 0x1688, "Hyperkin X91 X-Box One pad", 0, XTYPE_XBOXONE }, 397 - { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE }, 396 + { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SHARE_BUTTON, XTYPE_XBOXONE }, 398 397 { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 }, 399 398 { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 }, 400 399 { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 }, ··· 721 714 XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init), 722 715 XBOXONE_INIT_PKT(0x045e, 0x0b00, extra_input_packet_init), 723 716 XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_led_on), 717 + XBOXONE_INIT_PKT(0x0f0d, 0x01b2, xboxone_pdp_led_on), 724 718 XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_led_on), 725 719 XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_auth), 720 + XBOXONE_INIT_PKT(0x0f0d, 0x01b2, xboxone_pdp_auth), 726 721 XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_auth), 727 722 XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init), 728 723 XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init), ··· 1036 1027 * The report format was gleaned from 1037 1028 * https://github.com/kylelemons/xbox/blob/master/xbox.go 1038 1029 */ 1039 - static void xpadone_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char *data) 1030 + static void xpadone_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char *data, u32 len) 1040 1031 { 1041 1032 struct input_dev *dev = xpad->dev; 1042 1033 bool do_sync = false; ··· 1077 1068 /* menu/view buttons */ 1078 1069 input_report_key(dev, BTN_START, data[4] & BIT(2)); 1079 1070 input_report_key(dev, BTN_SELECT, data[4] & BIT(3)); 1080 - if (xpad->mapping & MAP_SELECT_BUTTON) 1081 - input_report_key(dev, KEY_RECORD, data[22] & BIT(0)); 1071 + if (xpad->mapping & MAP_SHARE_BUTTON) { 1072 + if (xpad->mapping & MAP_SHARE_OFFSET) 1073 + input_report_key(dev, KEY_RECORD, data[len - 26] & BIT(0)); 1074 + else 1075 + input_report_key(dev, KEY_RECORD, data[len - 18] & BIT(0)); 1076 + } 1082 1077 1083 1078 /* buttons A,B,X,Y */ 1084 1079 input_report_key(dev, BTN_A, data[4] & BIT(4)); ··· 1230 1217 xpad360w_process_packet(xpad, 0, xpad->idata); 1231 1218 break; 1232 1219 case XTYPE_XBOXONE: 1233 - xpadone_process_packet(xpad, 0, xpad->idata); 1220 + xpadone_process_packet(xpad, 0, xpad->idata, urb->actual_length); 1234 1221 break; 1235 1222 default: 1236 1223 xpad_process_packet(xpad, 0, xpad->idata); ··· 1957 1944 xpad->xtype == XTYPE_XBOXONE) { 1958 1945 for (i = 0; xpad360_btn[i] >= 0; i++) 1959 1946 input_set_capability(input_dev, EV_KEY, xpad360_btn[i]); 1960 - if (xpad->mapping & MAP_SELECT_BUTTON) 1947 + if (xpad->mapping & MAP_SHARE_BUTTON) 1961 1948 input_set_capability(input_dev, EV_KEY, KEY_RECORD); 1962 1949 } else { 1963 1950 for (i = 0; xpad_btn[i] >= 0; i++)
+2 -2
drivers/input/keyboard/mtk-pmic-keys.c
··· 147 147 u32 value, mask; 148 148 int error; 149 149 150 - kregs_home = keys->keys[MTK_PMIC_HOMEKEY_INDEX].regs; 151 - kregs_pwr = keys->keys[MTK_PMIC_PWRKEY_INDEX].regs; 150 + kregs_home = &regs->keys_regs[MTK_PMIC_HOMEKEY_INDEX]; 151 + kregs_pwr = &regs->keys_regs[MTK_PMIC_PWRKEY_INDEX]; 152 152 153 153 error = of_property_read_u32(keys->dev->of_node, "power-off-time-sec", 154 154 &long_press_debounce);
+1 -1
drivers/input/misc/hisi_powerkey.c
··· 30 30 { 31 31 struct input_dev *input = q; 32 32 33 - pm_wakeup_event(input->dev.parent, MAX_HELD_TIME); 33 + pm_wakeup_dev_event(input->dev.parent, MAX_HELD_TIME, true); 34 34 input_report_key(input, KEY_POWER, 1); 35 35 input_sync(input); 36 36
+16 -6
drivers/input/misc/sparcspkr.c
··· 74 74 return -1; 75 75 76 76 switch (code) { 77 - case SND_BELL: if (value) value = 1000; 78 - case SND_TONE: break; 79 - default: return -1; 77 + case SND_BELL: 78 + if (value) 79 + value = 1000; 80 + break; 81 + case SND_TONE: 82 + break; 83 + default: 84 + return -1; 80 85 } 81 86 82 87 if (value > 20 && value < 32767) ··· 114 109 return -1; 115 110 116 111 switch (code) { 117 - case SND_BELL: if (value) value = 1000; 118 - case SND_TONE: break; 119 - default: return -1; 112 + case SND_BELL: 113 + if (value) 114 + value = 1000; 115 + break; 116 + case SND_TONE: 117 + break; 118 + default: 119 + return -1; 120 120 } 121 121 122 122 if (value > 20 && value < 32767)
+5
drivers/input/mouse/synaptics.c
··· 164 164 #ifdef CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS 165 165 static const char * const smbus_pnp_ids[] = { 166 166 /* all of the topbuttonpad_pnp_ids are valid, we just add some extras */ 167 + "DLL060d", /* Dell Precision M3800 */ 167 168 "LEN0048", /* X1 Carbon 3 */ 168 169 "LEN0046", /* X250 */ 169 170 "LEN0049", /* Yoga 11e */ ··· 191 190 "LEN2054", /* E480 */ 192 191 "LEN2055", /* E580 */ 193 192 "LEN2068", /* T14 Gen 1 */ 193 + "SYN1221", /* TUXEDO InfinityBook Pro 14 v5 */ 194 + "SYN3003", /* HP EliteBook 850 G1 */ 194 195 "SYN3015", /* HP EliteBook 840 G2 */ 195 196 "SYN3052", /* HP EliteBook 840 G4 */ 196 197 "SYN3221", /* HP 15-ay000 */ 197 198 "SYN323d", /* HP Spectre X360 13-w013dx */ 198 199 "SYN3257", /* HP Envy 13-ad105ng */ 200 + "TOS01f6", /* Dynabook Portege X30L-G */ 201 + "TOS0213", /* Dynabook Portege X30-D */ 199 202 NULL 200 203 }; 201 204 #endif
+5 -2
drivers/input/touchscreen/cyttsp5.c
··· 580 580 int rc; 581 581 582 582 SET_CMD_REPORT_TYPE(cmd[0], 0); 583 - SET_CMD_REPORT_ID(cmd[0], HID_POWER_SLEEP); 583 + SET_CMD_REPORT_ID(cmd[0], state); 584 584 SET_CMD_OPCODE(cmd[1], HID_CMD_SET_POWER); 585 585 586 586 rc = cyttsp5_write(ts, HID_COMMAND_REG, cmd, sizeof(cmd)); ··· 870 870 ts->input->phys = ts->phys; 871 871 input_set_drvdata(ts->input, ts); 872 872 873 - /* Reset the gpio to be in a reset state */ 873 + /* Assert gpio to be in a reset state */ 874 874 ts->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 875 875 if (IS_ERR(ts->reset_gpio)) { 876 876 error = PTR_ERR(ts->reset_gpio); 877 877 dev_err(dev, "Failed to request reset gpio, error %d\n", error); 878 878 return error; 879 879 } 880 + 881 + fsleep(10); /* Ensure long-enough reset pulse (minimum 10us). */ 882 + 880 883 gpiod_set_value_cansleep(ts->reset_gpio, 0); 881 884 882 885 /* Need a delay to have device up */
+1 -6
drivers/input/touchscreen/stmpe-ts.c
··· 366 366 }; 367 367 module_platform_driver(stmpe_ts_driver); 368 368 369 - static const struct of_device_id stmpe_ts_ids[] = { 370 - { .compatible = "st,stmpe-ts", }, 371 - { }, 372 - }; 373 - MODULE_DEVICE_TABLE(of, stmpe_ts_ids); 374 - 369 + MODULE_ALIAS("platform:stmpe-ts"); 375 370 MODULE_AUTHOR("Luotao Fu <l.fu@pengutronix.de>"); 376 371 MODULE_DESCRIPTION("STMPEXXX touchscreen driver"); 377 372 MODULE_LICENSE("GPL");
+3 -2
drivers/md/dm-table.c
··· 524 524 } 525 525 argv = kmalloc_array(new_size, sizeof(*argv), gfp); 526 526 if (argv) { 527 - *size = new_size; 528 527 if (old_argv) 529 528 memcpy(argv, old_argv, *size * sizeof(*argv)); 529 + *size = new_size; 530 530 } 531 531 532 532 kfree(old_argv); ··· 1173 1173 1174 1174 t = dm_get_live_table(md, &srcu_idx); 1175 1175 if (!t) 1176 - return 0; 1176 + goto put_live_table; 1177 1177 1178 1178 for (unsigned int i = 0; i < t->num_targets; i++) { 1179 1179 struct dm_target *ti = dm_table_get_target(t, i); ··· 1184 1184 (void *)key); 1185 1185 } 1186 1186 1187 + put_live_table: 1187 1188 dm_put_live_table(md, srcu_idx); 1188 1189 return 0; 1189 1190 }
+1
drivers/media/cec/i2c/Kconfig
··· 16 16 17 17 config CEC_NXP_TDA9950 18 18 tristate "NXP Semiconductors TDA9950/TDA998X HDMI CEC" 19 + depends on I2C 19 20 select CEC_NOTIFIER 20 21 select CEC_CORE 21 22 default DRM_I2C_NXP_TDA998X
+4 -1
drivers/media/i2c/Kconfig
··· 1149 1149 1150 1150 config VIDEO_LT6911UXE 1151 1151 tristate "Lontium LT6911UXE decoder" 1152 - depends on ACPI && VIDEO_DEV 1152 + depends on ACPI && VIDEO_DEV && I2C 1153 1153 select V4L2_FWNODE 1154 + select V4L2_CCI_I2C 1155 + select MEDIA_CONTROLLER 1156 + select VIDEO_V4L2_SUBDEV_API 1154 1157 help 1155 1158 This is a Video4Linux2 sensor-level driver for the Lontium 1156 1159 LT6911UXE HDMI to MIPI CSI-2 bridge.
+1
drivers/media/platform/synopsys/hdmirx/Kconfig
··· 2 2 3 3 config VIDEO_SYNOPSYS_HDMIRX 4 4 tristate "Synopsys DesignWare HDMI Receiver driver" 5 + depends on ARCH_ROCKCHIP || COMPILE_TEST 5 6 depends on VIDEO_DEV 6 7 select MEDIA_CONTROLLER 7 8 select VIDEO_V4L2_SUBDEV_API
+2 -1
drivers/media/test-drivers/vivid/Kconfig
··· 32 32 33 33 config VIDEO_VIVID_OSD 34 34 bool "Enable Framebuffer for testing Output Overlay" 35 - depends on VIDEO_VIVID && FB 35 + depends on VIDEO_VIVID && FB_CORE 36 + depends on VIDEO_VIVID=m || FB_CORE=y 36 37 default y 37 38 select FB_IOMEM_HELPERS 38 39 help
+2 -1
drivers/net/can/m_can/m_can.c
··· 2379 2379 SET_NETDEV_DEV(net_dev, dev); 2380 2380 2381 2381 m_can_of_parse_mram(class_dev, mram_config_vals); 2382 + spin_lock_init(&class_dev->tx_handling_spinlock); 2382 2383 out: 2383 2384 return class_dev; 2384 2385 } ··· 2463 2462 2464 2463 void m_can_class_unregister(struct m_can_classdev *cdev) 2465 2464 { 2465 + unregister_candev(cdev->net); 2466 2466 if (cdev->is_peripheral) 2467 2467 can_rx_offload_del(&cdev->offload); 2468 - unregister_candev(cdev->net); 2469 2468 } 2470 2469 EXPORT_SYMBOL_GPL(m_can_class_unregister); 2471 2470
+1 -1
drivers/net/can/rockchip/rockchip_canfd-core.c
··· 937 937 struct rkcanfd_priv *priv = platform_get_drvdata(pdev); 938 938 struct net_device *ndev = priv->ndev; 939 939 940 - can_rx_offload_del(&priv->offload); 941 940 rkcanfd_unregister(priv); 941 + can_rx_offload_del(&priv->offload); 942 942 free_candev(ndev); 943 943 } 944 944
+33 -9
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 75 75 .brp_inc = 1, 76 76 }; 77 77 78 + /* The datasheet of the mcp2518fd (DS20006027B) specifies a range of 79 + * [-64,63] for TDCO, indicating a relative TDCO. 80 + * 81 + * Manual tests have shown, that using a relative TDCO configuration 82 + * results in bus off, while an absolute configuration works. 83 + * 84 + * For TDCO use the max value (63) from the data sheet, but 0 as the 85 + * minimum. 86 + */ 87 + static const struct can_tdc_const mcp251xfd_tdc_const = { 88 + .tdcv_min = 0, 89 + .tdcv_max = 63, 90 + .tdco_min = 0, 91 + .tdco_max = 63, 92 + .tdcf_min = 0, 93 + .tdcf_max = 0, 94 + }; 95 + 78 96 static const char *__mcp251xfd_get_model_str(enum mcp251xfd_model model) 79 97 { 80 98 switch (model) { ··· 528 510 { 529 511 const struct can_bittiming *bt = &priv->can.bittiming; 530 512 const struct can_bittiming *dbt = &priv->can.data_bittiming; 531 - u32 val = 0; 532 - s8 tdco; 513 + u32 tdcmod, val = 0; 533 514 int err; 534 515 535 516 /* CAN Control Register ··· 592 575 return err; 593 576 594 577 /* Transmitter Delay Compensation */ 595 - tdco = clamp_t(int, dbt->brp * (dbt->prop_seg + dbt->phase_seg1), 596 - -64, 63); 597 - val = FIELD_PREP(MCP251XFD_REG_TDC_TDCMOD_MASK, 598 - MCP251XFD_REG_TDC_TDCMOD_AUTO) | 599 - FIELD_PREP(MCP251XFD_REG_TDC_TDCO_MASK, tdco); 578 + if (priv->can.ctrlmode & CAN_CTRLMODE_TDC_AUTO) 579 + tdcmod = MCP251XFD_REG_TDC_TDCMOD_AUTO; 580 + else if (priv->can.ctrlmode & CAN_CTRLMODE_TDC_MANUAL) 581 + tdcmod = MCP251XFD_REG_TDC_TDCMOD_MANUAL; 582 + else 583 + tdcmod = MCP251XFD_REG_TDC_TDCMOD_DISABLED; 584 + 585 + val = FIELD_PREP(MCP251XFD_REG_TDC_TDCMOD_MASK, tdcmod) | 586 + FIELD_PREP(MCP251XFD_REG_TDC_TDCV_MASK, priv->can.tdc.tdcv) | 587 + FIELD_PREP(MCP251XFD_REG_TDC_TDCO_MASK, priv->can.tdc.tdco); 588 + 601 589 return regmap_write(priv->map_reg, MCP251XFD_REG_TDC, val); 602 590 } ··· 2105 2083 priv->can.do_get_berr_counter = mcp251xfd_get_berr_counter; 2106 2084 priv->can.bittiming_const = &mcp251xfd_bittiming_const; 2107 2085 priv->can.data_bittiming_const = &mcp251xfd_data_bittiming_const; 2086 + priv->can.tdc_const = &mcp251xfd_tdc_const; 2108 2087 priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK | 2109 2088 CAN_CTRLMODE_LISTENONLY | CAN_CTRLMODE_BERR_REPORTING | 2110 2089 CAN_CTRLMODE_FD | CAN_CTRLMODE_FD_NON_ISO | 2111 - CAN_CTRLMODE_CC_LEN8_DLC; 2090 + CAN_CTRLMODE_CC_LEN8_DLC | CAN_CTRLMODE_TDC_AUTO | 2091 + CAN_CTRLMODE_TDC_MANUAL; 2112 2092 set_bit(MCP251XFD_FLAGS_DOWN, priv->flags); 2113 2093 priv->ndev = ndev; 2114 2094 priv->spi = spi; ··· 2198 2174 struct mcp251xfd_priv *priv = spi_get_drvdata(spi); 2199 2175 struct net_device *ndev = priv->ndev; 2200 2176 2201 - can_rx_offload_del(&priv->offload); 2202 2177 mcp251xfd_unregister(priv); 2178 + can_rx_offload_del(&priv->offload); 2203 2179 spi->max_speed_hz = priv->spi_max_speed_hz_orig; 2204 2180 free_candev(ndev); 2205 2181 }
+153 -60
drivers/net/dsa/b53/b53_common.c
··· 373 373 b53_read8(dev, B53_VLAN_PAGE, B53_VLAN_CTRL5, &vc5); 374 374 } 375 375 376 + vc1 &= ~VC1_RX_MCST_FWD_EN; 377 + 376 378 if (enable) { 377 379 vc0 |= VC0_VLAN_EN | VC0_VID_CHK_EN | VC0_VID_HASH_VID; 378 - vc1 |= VC1_RX_MCST_UNTAG_EN | VC1_RX_MCST_FWD_EN; 380 + vc1 |= VC1_RX_MCST_UNTAG_EN; 379 381 vc4 &= ~VC4_ING_VID_CHECK_MASK; 380 382 if (enable_filtering) { 381 383 vc4 |= VC4_ING_VID_VIO_DROP << VC4_ING_VID_CHECK_S; 382 384 vc5 |= VC5_DROP_VTABLE_MISS; 383 385 } else { 384 - vc4 |= VC4_ING_VID_VIO_FWD << VC4_ING_VID_CHECK_S; 386 + vc4 |= VC4_NO_ING_VID_CHK << VC4_ING_VID_CHECK_S; 385 387 vc5 &= ~VC5_DROP_VTABLE_MISS; 386 388 } 387 389 ··· 395 393 396 394 } else { 397 395 vc0 &= ~(VC0_VLAN_EN | VC0_VID_CHK_EN | VC0_VID_HASH_VID); 398 - vc1 &= ~(VC1_RX_MCST_UNTAG_EN | VC1_RX_MCST_FWD_EN); 396 + vc1 &= ~VC1_RX_MCST_UNTAG_EN; 399 397 vc4 &= ~VC4_ING_VID_CHECK_MASK; 400 398 vc5 &= ~VC5_DROP_VTABLE_MISS; 401 399 ··· 578 576 b53_write16(dev, B53_EEE_PAGE, B53_EEE_EN_CTRL, reg); 579 577 } 580 578 579 + int b53_setup_port(struct dsa_switch *ds, int port) 580 + { 581 + struct b53_device *dev = ds->priv; 582 + 583 + b53_port_set_ucast_flood(dev, port, true); 584 + b53_port_set_mcast_flood(dev, port, true); 585 + b53_port_set_learning(dev, port, false); 586 + 587 + return 0; 588 + } 589 + EXPORT_SYMBOL(b53_setup_port); 590 + 581 591 int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy) 582 592 { 583 593 struct b53_device *dev = ds->priv; ··· 601 587 return 0; 602 588 603 589 cpu_port = dsa_to_port(ds, port)->cpu_dp->index; 604 - 605 - b53_port_set_ucast_flood(dev, port, true); 606 - b53_port_set_mcast_flood(dev, port, true); 607 - b53_port_set_learning(dev, port, false); 608 590 609 591 if (dev->ops->irq_enable) 610 592 ret = dev->ops->irq_enable(dev, port); ··· 732 722 b53_write8(dev, B53_CTRL_PAGE, B53_PORT_CTRL(port), port_ctrl); 733 723 734 724 b53_brcm_hdr_setup(dev->ds, port); 735 - 736 - b53_port_set_ucast_flood(dev, port, true); 737 - 
b53_port_set_mcast_flood(dev, port, true); 738 - b53_port_set_learning(dev, port, false); 739 725 } 740 726 741 727 static void b53_enable_mib(struct b53_device *dev) ··· 767 761 return dev->tag_protocol == DSA_TAG_PROTO_NONE && dsa_is_cpu_port(ds, port); 768 762 } 769 763 764 + static bool b53_vlan_port_may_join_untagged(struct dsa_switch *ds, int port) 765 + { 766 + struct b53_device *dev = ds->priv; 767 + struct dsa_port *dp; 768 + 769 + if (!dev->vlan_filtering) 770 + return true; 771 + 772 + dp = dsa_to_port(ds, port); 773 + 774 + if (dsa_port_is_cpu(dp)) 775 + return true; 776 + 777 + return dp->bridge == NULL; 778 + } 779 + 770 780 int b53_configure_vlan(struct dsa_switch *ds) 771 781 { 772 782 struct b53_device *dev = ds->priv; ··· 801 779 b53_do_vlan_op(dev, VTA_CMD_CLEAR); 802 780 } 803 781 804 - b53_enable_vlan(dev, -1, dev->vlan_enabled, ds->vlan_filtering); 782 + b53_enable_vlan(dev, -1, dev->vlan_enabled, dev->vlan_filtering); 805 783 806 784 /* Create an untagged VLAN entry for the default PVID in case 807 785 * CONFIG_VLAN_8021Q is disabled and there are no calls to ··· 809 787 * entry. Do this only when the tagging protocol is not 810 788 * DSA_TAG_PROTO_NONE 811 789 */ 790 + v = &dev->vlans[def_vid]; 812 791 b53_for_each_port(dev, i) { 813 - v = &dev->vlans[def_vid]; 814 - v->members |= BIT(i); 815 - if (!b53_vlan_port_needs_forced_tagged(ds, i)) 816 - v->untag = v->members; 817 - b53_write16(dev, B53_VLAN_PAGE, 818 - B53_VLAN_PORT_DEF_TAG(i), def_vid); 819 - } 820 - 821 - /* Upon initial call we have not set-up any VLANs, but upon 822 - * system resume, we need to restore all VLAN entries. 
823 - */ 824 - for (vid = def_vid; vid < dev->num_vlans; vid++) { 825 - v = &dev->vlans[vid]; 826 - 827 - if (!v->members) 792 + if (!b53_vlan_port_may_join_untagged(ds, i)) 828 793 continue; 829 794 830 - b53_set_vlan_entry(dev, vid, v); 831 - b53_fast_age_vlan(dev, vid); 795 + vl.members |= BIT(i); 796 + if (!b53_vlan_port_needs_forced_tagged(ds, i)) 797 + vl.untag = vl.members; 798 + b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(i), 799 + def_vid); 800 + } 801 + b53_set_vlan_entry(dev, def_vid, &vl); 802 + 803 + if (dev->vlan_filtering) { 804 + /* Upon initial call we have not set-up any VLANs, but upon 805 + * system resume, we need to restore all VLAN entries. 806 + */ 807 + for (vid = def_vid + 1; vid < dev->num_vlans; vid++) { 808 + v = &dev->vlans[vid]; 809 + 810 + if (!v->members) 811 + continue; 812 + 813 + b53_set_vlan_entry(dev, vid, v); 814 + b53_fast_age_vlan(dev, vid); 815 + } 816 + 817 + b53_for_each_port(dev, i) { 818 + if (!dsa_is_cpu_port(ds, i)) 819 + b53_write16(dev, B53_VLAN_PAGE, 820 + B53_VLAN_PORT_DEF_TAG(i), 821 + dev->ports[i].pvid); 822 + } 832 823 } 833 824 834 825 return 0; ··· 1160 1125 static int b53_setup(struct dsa_switch *ds) 1161 1126 { 1162 1127 struct b53_device *dev = ds->priv; 1128 + struct b53_vlan *vl; 1163 1129 unsigned int port; 1130 + u16 pvid; 1164 1131 int ret; 1165 1132 1166 1133 /* Request bridge PVID untagged when DSA_TAG_PROTO_NONE is set ··· 1170 1133 */ 1171 1134 ds->untag_bridge_pvid = dev->tag_protocol == DSA_TAG_PROTO_NONE; 1172 1135 1136 + /* The switch does not tell us the original VLAN for untagged 1137 + * packets, so keep the CPU port always tagged. 
1138 + */ 1139 + ds->untag_vlan_aware_bridge_pvid = true; 1140 + 1173 1141 ret = b53_reset_switch(dev); 1174 1142 if (ret) { 1175 1143 dev_err(ds->dev, "failed to reset switch\n"); 1176 1144 return ret; 1145 + } 1146 + 1147 + /* setup default vlan for filtering mode */ 1148 + pvid = b53_default_pvid(dev); 1149 + vl = &dev->vlans[pvid]; 1150 + b53_for_each_port(dev, port) { 1151 + vl->members |= BIT(port); 1152 + if (!b53_vlan_port_needs_forced_tagged(ds, port)) 1153 + vl->untag |= BIT(port); 1177 1154 } 1178 1155 1179 1156 b53_reset_mib(dev); ··· 1543 1492 { 1544 1493 struct b53_device *dev = ds->priv; 1545 1494 1546 - b53_enable_vlan(dev, port, dev->vlan_enabled, vlan_filtering); 1495 + if (dev->vlan_filtering != vlan_filtering) { 1496 + dev->vlan_filtering = vlan_filtering; 1497 + b53_apply_config(dev); 1498 + } 1547 1499 1548 1500 return 0; 1549 1501 } ··· 1571 1517 if (vlan->vid >= dev->num_vlans) 1572 1518 return -ERANGE; 1573 1519 1574 - b53_enable_vlan(dev, port, true, ds->vlan_filtering); 1520 + b53_enable_vlan(dev, port, true, dev->vlan_filtering); 1575 1521 1576 1522 return 0; 1577 1523 } ··· 1584 1530 bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED; 1585 1531 bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID; 1586 1532 struct b53_vlan *vl; 1533 + u16 old_pvid, new_pvid; 1587 1534 int err; 1588 1535 1589 1536 err = b53_vlan_prepare(ds, port, vlan); 1590 1537 if (err) 1591 1538 return err; 1592 1539 1540 + if (vlan->vid == 0) 1541 + return 0; 1542 + 1543 + old_pvid = dev->ports[port].pvid; 1544 + if (pvid) 1545 + new_pvid = vlan->vid; 1546 + else if (!pvid && vlan->vid == old_pvid) 1547 + new_pvid = b53_default_pvid(dev); 1548 + else 1549 + new_pvid = old_pvid; 1550 + dev->ports[port].pvid = new_pvid; 1551 + 1593 1552 vl = &dev->vlans[vlan->vid]; 1594 1553 1595 - b53_get_vlan_entry(dev, vlan->vid, vl); 1596 - 1597 - if (vlan->vid == 0 && vlan->vid == b53_default_pvid(dev)) 1598 - untagged = true; 1554 + if (dsa_is_cpu_port(ds, port)) 1555 + untagged 
= false; 1599 1556 1600 1557 vl->members |= BIT(port); 1601 1558 if (untagged && !b53_vlan_port_needs_forced_tagged(ds, port)) ··· 1614 1549 else 1615 1550 vl->untag &= ~BIT(port); 1616 1551 1552 + if (!dev->vlan_filtering) 1553 + return 0; 1554 + 1617 1555 b53_set_vlan_entry(dev, vlan->vid, vl); 1618 1556 b53_fast_age_vlan(dev, vlan->vid); 1619 1557 1620 - if (pvid && !dsa_is_cpu_port(ds, port)) { 1558 + if (!dsa_is_cpu_port(ds, port) && new_pvid != old_pvid) { 1621 1559 b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), 1622 - vlan->vid); 1623 - b53_fast_age_vlan(dev, vlan->vid); 1560 + new_pvid); 1561 + b53_fast_age_vlan(dev, old_pvid); 1624 1562 } 1625 1563 1626 1564 return 0; ··· 1638 1570 struct b53_vlan *vl; 1639 1571 u16 pvid; 1640 1572 1641 - b53_read16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), &pvid); 1573 + if (vlan->vid == 0) 1574 + return 0; 1575 + 1576 + pvid = dev->ports[port].pvid; 1642 1577 1643 1578 vl = &dev->vlans[vlan->vid]; 1644 - 1645 - b53_get_vlan_entry(dev, vlan->vid, vl); 1646 1579 1647 1580 vl->members &= ~BIT(port); 1648 1581 1649 1582 if (pvid == vlan->vid) 1650 1583 pvid = b53_default_pvid(dev); 1584 + dev->ports[port].pvid = pvid; 1651 1585 1652 1586 if (untagged && !b53_vlan_port_needs_forced_tagged(ds, port)) 1653 1587 vl->untag &= ~(BIT(port)); 1588 + 1589 + if (!dev->vlan_filtering) 1590 + return 0; 1654 1591 1655 1592 b53_set_vlan_entry(dev, vlan->vid, vl); 1656 1593 b53_fast_age_vlan(dev, vlan->vid); ··· 1989 1916 bool *tx_fwd_offload, struct netlink_ext_ack *extack) 1990 1917 { 1991 1918 struct b53_device *dev = ds->priv; 1919 + struct b53_vlan *vl; 1992 1920 s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index; 1993 - u16 pvlan, reg; 1921 + u16 pvlan, reg, pvid; 1994 1922 unsigned int i; 1995 1923 1996 1924 /* On 7278, port 7 which connects to the ASP should only receive ··· 2000 1926 if (dev->chip_id == BCM7278_DEVICE_ID && port == 7) 2001 1927 return -EINVAL; 2002 1928 2003 - /* Make this port leave the all 
VLANs join since we will have proper 2004 - * VLAN entries from now on 2005 - */ 2006 - if (is58xx(dev)) { 2007 - b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, &reg); 2008 - reg &= ~BIT(port); 2009 - if ((reg & BIT(cpu_port)) == BIT(cpu_port)) 2010 - reg &= ~BIT(cpu_port); 2011 - b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, reg); 1929 + pvid = b53_default_pvid(dev); 1930 + vl = &dev->vlans[pvid]; 1931 + 1932 + if (dev->vlan_filtering) { 1933 + /* Make this port leave the all VLANs join since we will have 1934 + * proper VLAN entries from now on 1935 + */ 1936 + if (is58xx(dev)) { 1937 + b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, 1938 + &reg); 1939 + reg &= ~BIT(port); 1940 + if ((reg & BIT(cpu_port)) == BIT(cpu_port)) 1941 + reg &= ~BIT(cpu_port); 1942 + b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, 1943 + reg); 1944 + } 1945 + 1946 + b53_get_vlan_entry(dev, pvid, vl); 1947 + vl->members &= ~BIT(port); 1948 + if (vl->members == BIT(cpu_port)) 1949 + vl->members &= ~BIT(cpu_port); 1950 + vl->untag = vl->members; 1951 + b53_set_vlan_entry(dev, pvid, vl); 2012 1952 } 2013 1953 2014 1954 b53_read16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), &pvlan); ··· 2055 1967 void b53_br_leave(struct dsa_switch *ds, int port, struct dsa_bridge bridge) 2056 1968 { 2057 1969 struct b53_device *dev = ds->priv; 2058 - struct b53_vlan *vl = &dev->vlans[0]; 1970 + struct b53_vlan *vl; 2059 1971 s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index; 2060 1972 unsigned int i; 2061 1973 u16 pvlan, reg, pvid; ··· 2081 1993 dev->ports[port].vlan_ctl_mask = pvlan; 2082 1994 2083 1995 pvid = b53_default_pvid(dev); 1996 + vl = &dev->vlans[pvid]; 2084 1997 2085 - /* Make this port join all VLANs without VLAN entries */ 2086 - if (is58xx(dev)) { 2087 - b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, &reg); 2088 - reg |= BIT(port); 2089 - if (!(reg & BIT(cpu_port))) 2090 - reg |= BIT(cpu_port); 2091 - b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, 
reg); 2092 - } else { 1998 + if (dev->vlan_filtering) { 1999 + /* Make this port join all VLANs without VLAN entries */ 2000 + if (is58xx(dev)) { 2001 + b53_read16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, &reg); 2002 + reg |= BIT(port); 2003 + if (!(reg & BIT(cpu_port))) 2004 + reg |= BIT(cpu_port); 2005 + b53_write16(dev, B53_VLAN_PAGE, B53_JOIN_ALL_VLAN_EN, reg); 2006 + } 2007 + 2093 2008 b53_get_vlan_entry(dev, pvid, vl); 2094 2009 vl->members |= BIT(port) | BIT(cpu_port); 2095 2010 vl->untag |= BIT(port) | BIT(cpu_port); ··· 2391 2300 .phy_read = b53_phy_read16, 2392 2301 .phy_write = b53_phy_write16, 2393 2302 .phylink_get_caps = b53_phylink_get_caps, 2303 + .port_setup = b53_setup_port, 2394 2304 .port_enable = b53_enable_port, 2395 2305 .port_disable = b53_disable_port, 2396 2306 .support_eee = b53_support_eee, ··· 2849 2757 ds->ops = &b53_switch_ops; 2850 2758 ds->phylink_mac_ops = &b53_phylink_mac_ops; 2851 2759 dev->vlan_enabled = true; 2760 + dev->vlan_filtering = false; 2852 2761 /* Let DSA handle the case were multiple bridges span the same switch 2853 2762 * device and different VLAN awareness settings are requested, which 2854 2763 * would be breaking filtering semantics for any of the other bridge
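The PVID bookkeeping that `b53_vlan_add()` gains in the hunks above reduces to a small selection rule: a VLAN added with the PVID flag becomes the port's new PVID, re-adding the current PVID without the flag falls back to the default PVID, and anything else leaves the cached value alone. A minimal userspace sketch of that rule (the default PVID value of 0 and the function name are illustrative assumptions; `b53_default_pvid()` is chip-dependent in the driver):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DEFAULT_PVID 0 /* assumption: stand-in for b53_default_pvid() */

/* Compute the port PVID after a VLAN add, mirroring the patch's logic:
 * - PVID flag set: the added VID becomes the PVID;
 * - re-adding the current PVID without the flag: fall back to default;
 * - otherwise: keep the cached PVID unchanged. */
static uint16_t new_pvid(uint16_t old_pvid, uint16_t vid, bool pvid_flag)
{
	if (pvid_flag)
		return vid;
	if (vid == old_pvid)
		return DEFAULT_PVID;
	return old_pvid;
}
```

Caching the result in `dev->ports[port].pvid` is what lets the driver later age out only the *old* PVID's FDB entries instead of re-reading hardware state.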
+3
drivers/net/dsa/b53/b53_priv.h
··· 96 96 97 97 struct b53_port { 98 98 u16 vlan_ctl_mask; 99 + u16 pvid; 99 100 struct ethtool_keee eee; 100 101 }; 101 102 ··· 148 147 unsigned int num_vlans; 149 148 struct b53_vlan *vlans; 150 149 bool vlan_enabled; 150 + bool vlan_filtering; 151 151 unsigned int num_ports; 152 152 struct b53_port *ports; 153 153 ··· 384 382 enum dsa_tag_protocol mprot); 385 383 void b53_mirror_del(struct dsa_switch *ds, int port, 386 384 struct dsa_mall_mirror_tc_entry *mirror); 385 + int b53_setup_port(struct dsa_switch *ds, int port); 387 386 int b53_enable_port(struct dsa_switch *ds, int port, struct phy_device *phy); 388 387 void b53_disable_port(struct dsa_switch *ds, int port); 389 388 void b53_brcm_hdr_setup(struct dsa_switch *ds, int port);
+1
drivers/net/dsa/bcm_sf2.c
··· 1230 1230 .resume = bcm_sf2_sw_resume, 1231 1231 .get_wol = bcm_sf2_sw_get_wol, 1232 1232 .set_wol = bcm_sf2_sw_set_wol, 1233 + .port_setup = b53_setup_port, 1233 1234 .port_enable = bcm_sf2_port_setup, 1234 1235 .port_disable = bcm_sf2_port_disable, 1235 1236 .support_eee = b53_support_eee,
+6 -4
drivers/net/ethernet/airoha/airoha_npu.c
··· 104 104 u8 xpon_hal_api; 105 105 u8 wan_xsi; 106 106 u8 ct_joyme4; 107 - int ppe_type; 108 - int wan_mode; 109 - int wan_sel; 107 + u8 max_packet; 108 + u8 rsv[3]; 109 + u32 ppe_type; 110 + u32 wan_mode; 111 + u32 wan_sel; 110 112 } init_info; 111 113 struct { 112 - int func_id; 114 + u32 func_id; 113 115 u32 size; 114 116 u32 data; 115 117 } set_info;
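The airoha_npu.c hunk replaces plain `int` fields with `u8`/`u32` in a message block shared with NPU firmware, because a wire/ABI structure must not depend on the host compiler's `int` width or padding. The general pattern, sketched here with a compile-time size check (field names and the 16-byte size are illustrative, not the actual NPU message layout):

```c
#include <assert.h>
#include <stdint.h>

/* Firmware-facing message: fixed-width types only, explicit reserved
 * padding, and a static assert pinning the expected wire size. */
struct init_info {
	uint8_t  max_packet;
	uint8_t  rsv[3];   /* explicit padding instead of compiler-chosen */
	uint32_t ppe_type;
	uint32_t wan_mode;
	uint32_t wan_sel;
} __attribute__((packed));

_Static_assert(sizeof(struct init_info) == 16,
	       "firmware ABI struct size changed unexpectedly");
```

If a later edit disturbs the layout, the build fails instead of the firmware silently misparsing the message.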
+18 -29
drivers/net/ethernet/intel/ice/ice_adapter.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 // SPDX-FileCopyrightText: Copyright Red Hat 3 3 4 - #include <linux/bitfield.h> 5 4 #include <linux/cleanup.h> 6 5 #include <linux/mutex.h> 7 6 #include <linux/pci.h> ··· 13 14 static DEFINE_XARRAY(ice_adapters); 14 15 static DEFINE_MUTEX(ice_adapters_mutex); 15 16 16 - /* PCI bus number is 8 bits. Slot is 5 bits. Domain can have the rest. */ 17 - #define INDEX_FIELD_DOMAIN GENMASK(BITS_PER_LONG - 1, 13) 18 - #define INDEX_FIELD_DEV GENMASK(31, 16) 19 - #define INDEX_FIELD_BUS GENMASK(12, 5) 20 - #define INDEX_FIELD_SLOT GENMASK(4, 0) 21 - 22 - static unsigned long ice_adapter_index(const struct pci_dev *pdev) 17 + static unsigned long ice_adapter_index(u64 dsn) 23 18 { 24 - unsigned int domain = pci_domain_nr(pdev->bus); 25 - 26 - WARN_ON(domain > FIELD_MAX(INDEX_FIELD_DOMAIN)); 27 - 28 - switch (pdev->device) { 29 - case ICE_DEV_ID_E825C_BACKPLANE: 30 - case ICE_DEV_ID_E825C_QSFP: 31 - case ICE_DEV_ID_E825C_SFP: 32 - case ICE_DEV_ID_E825C_SGMII: 33 - return FIELD_PREP(INDEX_FIELD_DEV, pdev->device); 34 - default: 35 - return FIELD_PREP(INDEX_FIELD_DOMAIN, domain) | 36 - FIELD_PREP(INDEX_FIELD_BUS, pdev->bus->number) | 37 - FIELD_PREP(INDEX_FIELD_SLOT, PCI_SLOT(pdev->devfn)); 38 - } 19 + #if BITS_PER_LONG == 64 20 + return dsn; 21 + #else 22 + return (u32)dsn ^ (u32)(dsn >> 32); 23 + #endif 39 24 } 40 25 41 - static struct ice_adapter *ice_adapter_new(void) 26 + static struct ice_adapter *ice_adapter_new(u64 dsn) 42 27 { 43 28 struct ice_adapter *adapter; 44 29 ··· 30 47 if (!adapter) 31 48 return NULL; 32 49 50 + adapter->device_serial_number = dsn; 33 51 spin_lock_init(&adapter->ptp_gltsyn_time_lock); 34 52 refcount_set(&adapter->refcount, 1); 35 53 ··· 61 77 * Return: Pointer to ice_adapter on success. 62 78 * ERR_PTR() on error. -ENOMEM is the only possible error. 
63 79 */ 64 - struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev) 80 + struct ice_adapter *ice_adapter_get(struct pci_dev *pdev) 65 81 { 66 - unsigned long index = ice_adapter_index(pdev); 82 + u64 dsn = pci_get_dsn(pdev); 67 83 struct ice_adapter *adapter; 84 + unsigned long index; 68 85 int err; 69 86 87 + index = ice_adapter_index(dsn); 70 88 scoped_guard(mutex, &ice_adapters_mutex) { 71 89 err = xa_insert(&ice_adapters, index, NULL, GFP_KERNEL); 72 90 if (err == -EBUSY) { 73 91 adapter = xa_load(&ice_adapters, index); 74 92 refcount_inc(&adapter->refcount); 93 + WARN_ON_ONCE(adapter->device_serial_number != dsn); 75 94 return adapter; 76 95 } 77 96 if (err) 78 97 return ERR_PTR(err); 79 98 80 - adapter = ice_adapter_new(); 99 + adapter = ice_adapter_new(dsn); 81 100 if (!adapter) 82 101 return ERR_PTR(-ENOMEM); 83 102 xa_store(&ice_adapters, index, adapter, GFP_KERNEL); ··· 97 110 * 98 111 * Context: Process, may sleep. 99 112 */ 100 - void ice_adapter_put(const struct pci_dev *pdev) 113 + void ice_adapter_put(struct pci_dev *pdev) 101 114 { 102 - unsigned long index = ice_adapter_index(pdev); 115 + u64 dsn = pci_get_dsn(pdev); 103 116 struct ice_adapter *adapter; 117 + unsigned long index; 104 118 119 + index = ice_adapter_index(dsn); 105 120 scoped_guard(mutex, &ice_adapters_mutex) { 106 121 adapter = xa_load(&ice_adapters, index); 107 122 if (WARN_ON(!adapter))
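The new `ice_adapter_index()` keys the adapter xarray by the PCI Device Serial Number instead of domain/bus/slot bits, XOR-folding the 64-bit DSN into an `unsigned long` on 32-bit hosts; that fold can collide, which is why the patch also caches the DSN and `WARN_ON_ONCE`s on mismatch. A standalone sketch of the same fold, using `UINTPTR_MAX` in place of the kernel's `BITS_PER_LONG`:

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 64-bit DSN into an index sized for the host word, as in the
 * patched ice_adapter_index(): identity on 64-bit, XOR of the two
 * 32-bit halves on 32-bit. */
static unsigned long adapter_index(uint64_t dsn)
{
#if UINTPTR_MAX == 0xffffffffffffffffULL
	return (unsigned long)dsn;
#else
	return (unsigned long)((uint32_t)dsn ^ (uint32_t)(dsn >> 32));
#endif
}
```

Values that fit in 32 bits index identically on both word sizes, so only large DSNs behave differently across architectures.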
+4 -2
drivers/net/ethernet/intel/ice/ice_adapter.h
··· 32 32 * @refcount: Reference count. struct ice_pf objects hold the references. 33 33 * @ctrl_pf: Control PF of the adapter 34 34 * @ports: Ports list 35 + * @device_serial_number: DSN cached for collision detection on 32bit systems 35 36 */ 36 37 struct ice_adapter { 37 38 refcount_t refcount; ··· 41 40 42 41 struct ice_pf *ctrl_pf; 43 42 struct ice_port_list ports; 43 + u64 device_serial_number; 44 44 }; 45 45 46 - struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev); 47 - void ice_adapter_put(const struct pci_dev *pdev); 46 + struct ice_adapter *ice_adapter_get(struct pci_dev *pdev); 47 + void ice_adapter_put(struct pci_dev *pdev); 48 48 49 49 #endif /* _ICE_ADAPTER_H */
+12 -7
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 3186 3186 static void mtk_dma_free(struct mtk_eth *eth) 3187 3187 { 3188 3188 const struct mtk_soc_data *soc = eth->soc; 3189 - int i; 3189 + int i, j, txqs = 1; 3190 3190 3191 - for (i = 0; i < MTK_MAX_DEVS; i++) 3192 - if (eth->netdev[i]) 3193 - netdev_reset_queue(eth->netdev[i]); 3191 + if (MTK_HAS_CAPS(eth->soc->caps, MTK_QDMA)) 3192 + txqs = MTK_QDMA_NUM_QUEUES; 3193 + 3194 + for (i = 0; i < MTK_MAX_DEVS; i++) { 3195 + if (!eth->netdev[i]) 3196 + continue; 3197 + 3198 + for (j = 0; j < txqs; j++) 3199 + netdev_tx_reset_subqueue(eth->netdev[i], j); 3200 + } 3201 + 3194 3202 if (!MTK_HAS_CAPS(soc->caps, MTK_SRAM) && eth->scratch_ring) { 3195 3203 dma_free_coherent(eth->dma_dev, 3196 3204 MTK_QDMA_RING_SIZE * soc->tx.desc_size, ··· 3473 3465 } 3474 3466 mtk_gdm_config(eth, target_mac->id, gdm_config); 3475 3467 } 3476 - /* Reset and enable PSE */ 3477 - mtk_w32(eth, RST_GL_PSE, MTK_RST_GL); 3478 - mtk_w32(eth, 0, MTK_RST_GL); 3479 3468 3480 3469 napi_enable(&eth->tx_napi); 3481 3470 napi_enable(&eth->rx_napi);
+4 -4
drivers/net/ethernet/meta/fbnic/fbnic.h
··· 154 154 void fbnic_devlink_register(struct fbnic_dev *fbd); 155 155 void fbnic_devlink_unregister(struct fbnic_dev *fbd); 156 156 157 - int fbnic_fw_enable_mbx(struct fbnic_dev *fbd); 158 - void fbnic_fw_disable_mbx(struct fbnic_dev *fbd); 157 + int fbnic_fw_request_mbx(struct fbnic_dev *fbd); 158 + void fbnic_fw_free_mbx(struct fbnic_dev *fbd); 159 159 160 160 void fbnic_hwmon_register(struct fbnic_dev *fbd); 161 161 void fbnic_hwmon_unregister(struct fbnic_dev *fbd); 162 162 163 - int fbnic_pcs_irq_enable(struct fbnic_dev *fbd); 164 - void fbnic_pcs_irq_disable(struct fbnic_dev *fbd); 163 + int fbnic_pcs_request_irq(struct fbnic_dev *fbd); 164 + void fbnic_pcs_free_irq(struct fbnic_dev *fbd); 165 165 166 166 void fbnic_napi_name_irqs(struct fbnic_dev *fbd); 167 167 int fbnic_napi_request_irq(struct fbnic_dev *fbd,
+2
drivers/net/ethernet/meta/fbnic/fbnic_csr.h
··· 796 796 /* PUL User Registers */ 797 797 #define FBNIC_CSR_START_PUL_USER 0x31000 /* CSR section delimiter */ 798 798 #define FBNIC_PUL_OB_TLP_HDR_AW_CFG 0x3103d /* 0xc40f4 */ 799 + #define FBNIC_PUL_OB_TLP_HDR_AW_CFG_FLUSH CSR_BIT(19) 799 800 #define FBNIC_PUL_OB_TLP_HDR_AW_CFG_BME CSR_BIT(18) 800 801 #define FBNIC_PUL_OB_TLP_HDR_AR_CFG 0x3103e /* 0xc40f8 */ 802 + #define FBNIC_PUL_OB_TLP_HDR_AR_CFG_FLUSH CSR_BIT(19) 801 803 #define FBNIC_PUL_OB_TLP_HDR_AR_CFG_BME CSR_BIT(18) 802 804 #define FBNIC_PUL_USER_OB_RD_TLP_CNT_31_0 \ 803 805 0x3106e /* 0xc41b8 */
+119 -78
drivers/net/ethernet/meta/fbnic/fbnic_fw.c
··· 17 17 { 18 18 u32 desc_offset = FBNIC_IPC_MBX(mbx_idx, desc_idx); 19 19 20 + /* Write the upper 32b and then the lower 32b. Doing this the 21 + * FW can then read lower, upper, lower to verify that the state 22 + * of the descriptor wasn't changed mid-transaction. 23 + */ 20 24 fw_wr32(fbd, desc_offset + 1, upper_32_bits(desc)); 21 25 fw_wrfl(fbd); 22 26 fw_wr32(fbd, desc_offset, lower_32_bits(desc)); 27 + } 28 + 29 + static void __fbnic_mbx_invalidate_desc(struct fbnic_dev *fbd, int mbx_idx, 30 + int desc_idx, u32 desc) 31 + { 32 + u32 desc_offset = FBNIC_IPC_MBX(mbx_idx, desc_idx); 33 + 34 + /* For initialization we write the lower 32b of the descriptor first. 35 + * This way we can set the state to mark it invalid before we clear the 36 + * upper 32b. 37 + */ 38 + fw_wr32(fbd, desc_offset, desc); 39 + fw_wrfl(fbd); 40 + fw_wr32(fbd, desc_offset + 1, 0); 23 41 } 24 42 25 43 static u64 __fbnic_mbx_rd_desc(struct fbnic_dev *fbd, int mbx_idx, int desc_idx) ··· 51 33 return desc; 52 34 } 53 35 54 - static void fbnic_mbx_init_desc_ring(struct fbnic_dev *fbd, int mbx_idx) 36 + static void fbnic_mbx_reset_desc_ring(struct fbnic_dev *fbd, int mbx_idx) 55 37 { 56 38 int desc_idx; 39 + 40 + /* Disable DMA transactions from the device, 41 + * and flush any transactions triggered during cleaning 42 + */ 43 + switch (mbx_idx) { 44 + case FBNIC_IPC_MBX_RX_IDX: 45 + wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AW_CFG, 46 + FBNIC_PUL_OB_TLP_HDR_AW_CFG_FLUSH); 47 + break; 48 + case FBNIC_IPC_MBX_TX_IDX: 49 + wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AR_CFG, 50 + FBNIC_PUL_OB_TLP_HDR_AR_CFG_FLUSH); 51 + break; 52 + } 53 + 54 + wrfl(fbd); 57 55 58 56 /* Initialize first descriptor to all 0s. Doing this gives us a 59 57 * solid stop for the firmware to hit when it is done looping 60 58 * through the ring. 
61 59 */ 62 - __fbnic_mbx_wr_desc(fbd, mbx_idx, 0, 0); 63 - 64 - fw_wrfl(fbd); 60 + __fbnic_mbx_invalidate_desc(fbd, mbx_idx, 0, 0); 65 61 66 62 /* We then fill the rest of the ring starting at the end and moving 67 63 * back toward descriptor 0 with skip descriptors that have no 68 64 * length nor address, and tell the firmware that they can skip 69 65 * them and just move past them to the one we initialized to 0. 70 66 */ 71 - for (desc_idx = FBNIC_IPC_MBX_DESC_LEN; --desc_idx;) { 72 - __fbnic_mbx_wr_desc(fbd, mbx_idx, desc_idx, 73 - FBNIC_IPC_MBX_DESC_FW_CMPL | 74 - FBNIC_IPC_MBX_DESC_HOST_CMPL); 75 - fw_wrfl(fbd); 76 - } 67 + for (desc_idx = FBNIC_IPC_MBX_DESC_LEN; --desc_idx;) 68 + __fbnic_mbx_invalidate_desc(fbd, mbx_idx, desc_idx, 69 + FBNIC_IPC_MBX_DESC_FW_CMPL | 70 + FBNIC_IPC_MBX_DESC_HOST_CMPL); 77 71 } 78 72 79 73 void fbnic_mbx_init(struct fbnic_dev *fbd) ··· 106 76 wr32(fbd, FBNIC_INTR_CLEAR(0), 1u << FBNIC_FW_MSIX_ENTRY); 107 77 108 78 for (i = 0; i < FBNIC_IPC_MBX_INDICES; i++) 109 - fbnic_mbx_init_desc_ring(fbd, i); 79 + fbnic_mbx_reset_desc_ring(fbd, i); 110 80 } 111 81 112 82 static int fbnic_mbx_map_msg(struct fbnic_dev *fbd, int mbx_idx, ··· 171 141 { 172 142 int i; 173 143 174 - fbnic_mbx_init_desc_ring(fbd, mbx_idx); 144 + fbnic_mbx_reset_desc_ring(fbd, mbx_idx); 175 145 176 146 for (i = FBNIC_IPC_MBX_DESC_LEN; i--;) 177 147 fbnic_mbx_unmap_and_free_msg(fbd, mbx_idx, i); ··· 352 322 return err; 353 323 } 354 324 355 - /** 356 - * fbnic_fw_xmit_cap_msg - Allocate and populate a FW capabilities message 357 - * @fbd: FBNIC device structure 358 - * 359 - * Return: NULL on failure to allocate, error pointer on error, or pointer 360 - * to new TLV test message. 361 - * 362 - * Sends a single TLV header indicating the host wants the firmware to 363 - * confirm the capabilities and version. 
364 - **/ 365 - static int fbnic_fw_xmit_cap_msg(struct fbnic_dev *fbd) 366 - { 367 - int err = fbnic_fw_xmit_simple_msg(fbd, FBNIC_TLV_MSG_ID_HOST_CAP_REQ); 368 - 369 - /* Return 0 if we are not calling this on ASIC */ 370 - return (err == -EOPNOTSUPP) ? 0 : err; 371 - } 372 - 373 - static void fbnic_mbx_postinit_desc_ring(struct fbnic_dev *fbd, int mbx_idx) 325 + static void fbnic_mbx_init_desc_ring(struct fbnic_dev *fbd, int mbx_idx) 374 326 { 375 327 struct fbnic_fw_mbx *mbx = &fbd->mbx[mbx_idx]; 376 - 377 - /* This is a one time init, so just exit if it is completed */ 378 - if (mbx->ready) 379 - return; 380 328 381 329 mbx->ready = true; 382 330 383 331 switch (mbx_idx) { 384 332 case FBNIC_IPC_MBX_RX_IDX: 333 + /* Enable DMA writes from the device */ 334 + wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AW_CFG, 335 + FBNIC_PUL_OB_TLP_HDR_AW_CFG_BME); 336 + 385 337 /* Make sure we have a page for the FW to write to */ 386 338 fbnic_mbx_alloc_rx_msgs(fbd); 387 339 break; 388 340 case FBNIC_IPC_MBX_TX_IDX: 389 - /* Force version to 1 if we successfully requested an update 390 - * from the firmware. This should be overwritten once we get 391 - * the actual version from the firmware in the capabilities 392 - * request message. 393 - */ 394 - if (!fbnic_fw_xmit_cap_msg(fbd) && 395 - !fbd->fw_cap.running.mgmt.version) 396 - fbd->fw_cap.running.mgmt.version = 1; 341 + /* Enable DMA reads from the device */ 342 + wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AR_CFG, 343 + FBNIC_PUL_OB_TLP_HDR_AR_CFG_BME); 397 344 break; 398 345 } 399 346 } 400 347 401 - static void fbnic_mbx_postinit(struct fbnic_dev *fbd) 348 + static bool fbnic_mbx_event(struct fbnic_dev *fbd) 402 349 { 403 - int i; 404 - 405 - /* We only need to do this on the first interrupt following init. 350 + /* We only need to do this on the first interrupt following reset. 406 351 * this primes the mailbox so that we will have cleared all the 407 352 * skip descriptors. 
408 353 */ 409 354 if (!(rd32(fbd, FBNIC_INTR_STATUS(0)) & (1u << FBNIC_FW_MSIX_ENTRY))) 410 - return; 355 + return false; 411 356 412 357 wr32(fbd, FBNIC_INTR_CLEAR(0), 1u << FBNIC_FW_MSIX_ENTRY); 413 358 414 - for (i = 0; i < FBNIC_IPC_MBX_INDICES; i++) 415 - fbnic_mbx_postinit_desc_ring(fbd, i); 359 + return true; 416 360 } 417 361 418 362 /** ··· 863 859 864 860 void fbnic_mbx_poll(struct fbnic_dev *fbd) 865 861 { 866 - fbnic_mbx_postinit(fbd); 862 + fbnic_mbx_event(fbd); 867 863 868 864 fbnic_mbx_process_tx_msgs(fbd); 869 865 fbnic_mbx_process_rx_msgs(fbd); ··· 871 867 872 868 int fbnic_mbx_poll_tx_ready(struct fbnic_dev *fbd) 873 869 { 874 - struct fbnic_fw_mbx *tx_mbx; 875 - int attempts = 50; 870 + unsigned long timeout = jiffies + 10 * HZ + 1; 871 + int err, i; 876 872 877 - /* Immediate fail if BAR4 isn't there */ 878 - if (!fbnic_fw_present(fbd)) 879 - return -ENODEV; 873 + do { 874 + if (!time_is_after_jiffies(timeout)) 875 + return -ETIMEDOUT; 880 876 881 - tx_mbx = &fbd->mbx[FBNIC_IPC_MBX_TX_IDX]; 882 - while (!tx_mbx->ready && --attempts) { 883 877 /* Force the firmware to trigger an interrupt response to 884 878 * avoid the mailbox getting stuck closed if the interrupt 885 879 * is reset. 886 880 */ 887 - fbnic_mbx_init_desc_ring(fbd, FBNIC_IPC_MBX_TX_IDX); 881 + fbnic_mbx_reset_desc_ring(fbd, FBNIC_IPC_MBX_TX_IDX); 888 882 889 - msleep(200); 883 + /* Immediate fail if BAR4 went away */ 884 + if (!fbnic_fw_present(fbd)) 885 + return -ENODEV; 890 886 891 - fbnic_mbx_poll(fbd); 887 + msleep(20); 888 + } while (!fbnic_mbx_event(fbd)); 889 + 890 + /* FW has shown signs of life. Enable DMA and start Tx/Rx */ 891 + for (i = 0; i < FBNIC_IPC_MBX_INDICES; i++) 892 + fbnic_mbx_init_desc_ring(fbd, i); 893 + 894 + /* Request an update from the firmware. This should overwrite 895 + * mgmt.version once we get the actual version from the firmware 896 + * in the capabilities request message. 
897 + */ 898 + err = fbnic_fw_xmit_simple_msg(fbd, FBNIC_TLV_MSG_ID_HOST_CAP_REQ); 899 + if (err) 900 + goto clean_mbx; 901 + 902 + /* Use "1" to indicate we entered the state waiting for a response */ 903 + fbd->fw_cap.running.mgmt.version = 1; 904 + 905 + return 0; 906 + clean_mbx: 907 + /* Cleanup Rx buffers and disable mailbox */ 908 + fbnic_mbx_clean(fbd); 909 + return err; 910 + } 911 + 912 + static void __fbnic_fw_evict_cmpl(struct fbnic_fw_completion *cmpl_data) 913 + { 914 + cmpl_data->result = -EPIPE; 915 + complete(&cmpl_data->done); 916 + } 917 + 918 + static void fbnic_mbx_evict_all_cmpl(struct fbnic_dev *fbd) 919 + { 920 + if (fbd->cmpl_data) { 921 + __fbnic_fw_evict_cmpl(fbd->cmpl_data); 922 + fbd->cmpl_data = NULL; 892 923 } 893 - 894 - return attempts ? 0 : -ETIMEDOUT; 895 924 } 896 925 897 926 void fbnic_mbx_flush_tx(struct fbnic_dev *fbd) 898 927 { 928 + unsigned long timeout = jiffies + 10 * HZ + 1; 899 929 struct fbnic_fw_mbx *tx_mbx; 900 - int attempts = 50; 901 - u8 count = 0; 902 - 903 - /* Nothing to do if there is no mailbox */ 904 - if (!fbnic_fw_present(fbd)) 905 - return; 930 + u8 tail; 906 931 907 932 /* Record current Rx stats */ 908 933 tx_mbx = &fbd->mbx[FBNIC_IPC_MBX_TX_IDX]; 909 934 910 - /* Nothing to do if mailbox never got to ready */ 911 - if (!tx_mbx->ready) 912 - return; 935 + spin_lock_irq(&fbd->fw_tx_lock); 936 + 937 + /* Clear ready to prevent any further attempts to transmit */ 938 + tx_mbx->ready = false; 939 + 940 + /* Read tail to determine the last tail state for the ring */ 941 + tail = tx_mbx->tail; 942 + 943 + /* Flush any completions as we are no longer processing Rx */ 944 + fbnic_mbx_evict_all_cmpl(fbd); 945 + 946 + spin_unlock_irq(&fbd->fw_tx_lock); 913 947 914 948 /* Give firmware time to process packet, 915 - * we will wait up to 10 seconds which is 50 waits of 200ms. 949 + * we will wait up to 10 seconds which is 500 waits of 20ms. 
916 950 */ 917 951 do { 918 952 u8 head = tx_mbx->head; 919 953 920 - if (head == tx_mbx->tail) 954 + /* Tx ring is empty once head == tail */ 955 + if (head == tail) 921 956 break; 922 957 923 - msleep(200); 958 + msleep(20); 924 959 fbnic_mbx_process_tx_msgs(fbd); 925 - 926 - count += (tx_mbx->head - head) % FBNIC_IPC_MBX_DESC_LEN; 927 - } while (count < FBNIC_IPC_MBX_DESC_LEN && --attempts); 960 + } while (time_is_after_jiffies(timeout)); 928 961 } 929 962 930 963 void fbnic_get_fw_ver_commit_str(struct fbnic_dev *fbd, char *fw_version,
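The comment added to `__fbnic_mbx_wr_desc()` above explains the ordering contract: the host writes the upper 32 bits of a descriptor first, then the lower 32 bits, so the firmware can read lower, upper, lower and retry if the lower word changed mid-update. A single-threaded userspace sketch of both halves of that protocol (memory barriers and `volatile`/MMIO semantics are elided, so this only illustrates the logic, not a real lock-free implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the two 32-bit MMIO words backing one 64-bit descriptor. */
static uint32_t desc_lo, desc_hi;

/* Writer: publish the upper half first, then the lower half (which
 * carries the validity/state bits), as in __fbnic_mbx_wr_desc(). */
static void wr_desc(uint64_t desc)
{
	desc_hi = (uint32_t)(desc >> 32);
	/* a write barrier (fw_wrfl()) sits here in the driver */
	desc_lo = (uint32_t)desc;
}

/* Reader: lower, upper, lower; a change in the lower word signals a
 * torn read of a descriptor updated mid-transaction, so retry. */
static uint64_t rd_desc(void)
{
	uint32_t lo, hi, lo2;

	do {
		lo = desc_lo;
		hi = desc_hi;
		lo2 = desc_lo;
	} while (lo != lo2);

	return ((uint64_t)hi << 32) | lo;
}
```

The companion `__fbnic_mbx_invalidate_desc()` inverts the order on purpose: writing the lower word first marks the descriptor invalid before its upper half is cleared.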
+99 -49
drivers/net/ethernet/meta/fbnic/fbnic_irq.c
··· 19 19 return IRQ_HANDLED; 20 20 } 21 21 22 - /** 23 - * fbnic_fw_enable_mbx - Configure and initialize Firmware Mailbox 24 - * @fbd: Pointer to device to initialize 25 - * 26 - * This function will initialize the firmware mailbox rings, enable the IRQ 27 - * and initialize the communication between the Firmware and the host. The 28 - * firmware is expected to respond to the initialization by sending an 29 - * interrupt essentially notifying the host that it has seen the 30 - * initialization and is now synced up. 31 - * 32 - * Return: non-zero on failure. 33 - **/ 34 - int fbnic_fw_enable_mbx(struct fbnic_dev *fbd) 22 + static int __fbnic_fw_enable_mbx(struct fbnic_dev *fbd, int vector) 35 23 { 36 - u32 vector = fbd->fw_msix_vector; 37 24 int err; 38 - 39 - /* Request the IRQ for FW Mailbox vector. */ 40 - err = request_threaded_irq(vector, NULL, &fbnic_fw_msix_intr, 41 - IRQF_ONESHOT, dev_name(fbd->dev), fbd); 42 - if (err) 43 - return err; 44 25 45 26 /* Initialize mailbox and attempt to poll it into ready state */ 46 27 fbnic_mbx_init(fbd); 47 28 err = fbnic_mbx_poll_tx_ready(fbd); 48 29 if (err) { 49 30 dev_warn(fbd->dev, "FW mailbox did not enter ready state\n"); 50 - free_irq(vector, fbd); 51 31 return err; 52 32 } 53 33 54 - /* Enable interrupts */ 34 + /* Enable interrupt and unmask the vector */ 35 + enable_irq(vector); 55 36 fbnic_wr32(fbd, FBNIC_INTR_MASK_CLEAR(0), 1u << FBNIC_FW_MSIX_ENTRY); 56 37 57 38 return 0; 58 39 } 59 40 60 41 /** 61 - * fbnic_fw_disable_mbx - Disable mailbox and place it in standby state 62 - * @fbd: Pointer to device to disable 42 + * fbnic_fw_request_mbx - Configure and initialize Firmware Mailbox 43 + * @fbd: Pointer to device to initialize 63 44 * 64 - * This function will disable the mailbox interrupt, free any messages still 65 - * in the mailbox and place it into a standby state. The firmware is 66 - * expected to see the update and assume that the host is in the reset state. 
45 + * This function will allocate the IRQ and then reinitialize the mailbox 46 + * starting communication between the host and firmware. 47 + * 48 + * Return: non-zero on failure. 67 49 **/ 68 - void fbnic_fw_disable_mbx(struct fbnic_dev *fbd) 50 + int fbnic_fw_request_mbx(struct fbnic_dev *fbd) 69 51 { 70 - /* Disable interrupt and free vector */ 71 - fbnic_wr32(fbd, FBNIC_INTR_MASK_SET(0), 1u << FBNIC_FW_MSIX_ENTRY); 52 + struct pci_dev *pdev = to_pci_dev(fbd->dev); 53 + int vector, err; 72 54 73 - /* Free the vector */ 74 - free_irq(fbd->fw_msix_vector, fbd); 55 + WARN_ON(fbd->fw_msix_vector); 56 + 57 + vector = pci_irq_vector(pdev, FBNIC_FW_MSIX_ENTRY); 58 + if (vector < 0) 59 + return vector; 60 + 61 + /* Request the IRQ for FW Mailbox vector. */ 62 + err = request_threaded_irq(vector, NULL, &fbnic_fw_msix_intr, 63 + IRQF_ONESHOT | IRQF_NO_AUTOEN, 64 + dev_name(fbd->dev), fbd); 65 + if (err) 66 + return err; 67 + 68 + /* Initialize mailbox and attempt to poll it into ready state */ 69 + err = __fbnic_fw_enable_mbx(fbd, vector); 70 + if (err) 71 + free_irq(vector, fbd); 72 + 73 + fbd->fw_msix_vector = vector; 74 + 75 + return err; 76 + } 77 + 78 + /** 79 + * fbnic_fw_disable_mbx - Temporarily place mailbox in standby state 80 + * @fbd: Pointer to device 81 + * 82 + * Shutdown the mailbox by notifying the firmware to stop sending us logs, mask 83 + * and synchronize the IRQ, and then clean up the rings. 84 + **/ 85 + static void fbnic_fw_disable_mbx(struct fbnic_dev *fbd) 86 + { 87 + /* Disable interrupt and synchronize the IRQ */ 88 + disable_irq(fbd->fw_msix_vector); 89 + 90 + /* Mask the vector */ 91 + fbnic_wr32(fbd, FBNIC_INTR_MASK_SET(0), 1u << FBNIC_FW_MSIX_ENTRY); 75 92 76 93 /* Make sure disabling logs message is sent, must be done here to 77 94 * avoid risk of completing without a running interrupt. 
78 95 */ 79 96 fbnic_mbx_flush_tx(fbd); 80 - 81 - /* Reset the mailboxes to the initialized state */ 82 97 fbnic_mbx_clean(fbd); 98 + } 99 + 100 + /** 101 + * fbnic_fw_free_mbx - Disable mailbox and place it in standby state 102 + * @fbd: Pointer to device to disable 103 + * 104 + * This function will disable the mailbox interrupt, free any messages still 105 + * in the mailbox and place it into a disabled state. The firmware is 106 + * expected to see the update and assume that the host is in the reset state. 107 + **/ 108 + void fbnic_fw_free_mbx(struct fbnic_dev *fbd) 109 + { 110 + /* Vector has already been freed */ 111 + if (!fbd->fw_msix_vector) 112 + return; 113 + 114 + fbnic_fw_disable_mbx(fbd); 115 + 116 + /* Free the vector */ 117 + free_irq(fbd->fw_msix_vector, fbd); 118 + fbd->fw_msix_vector = 0; 83 119 } 84 120 85 121 static irqreturn_t fbnic_pcs_msix_intr(int __always_unused irq, void *data) ··· 137 101 } 138 102 139 103 /** 140 - * fbnic_pcs_irq_enable - Configure the MAC to enable it to advertise link 104 + * fbnic_pcs_request_irq - Configure the PCS to enable it to advertise link 141 105 * @fbd: Pointer to device to initialize 142 106 * 143 107 * This function provides basic bringup for the MAC/PCS IRQ. For now the IRQ ··· 145 109 * 146 110 * Return: non-zero on failure. 147 111 **/ 148 - int fbnic_pcs_irq_enable(struct fbnic_dev *fbd) 112 + int fbnic_pcs_request_irq(struct fbnic_dev *fbd) 149 113 { 150 - u32 vector = fbd->pcs_msix_vector; 151 - int err; 114 + struct pci_dev *pdev = to_pci_dev(fbd->dev); 115 + int vector, err; 152 116 153 - /* Request the IRQ for MAC link vector. 154 - * Map MAC cause to it, and unmask it 117 + WARN_ON(fbd->pcs_msix_vector); 118 + 119 + vector = pci_irq_vector(pdev, FBNIC_PCS_MSIX_ENTRY); 120 + if (vector < 0) 121 + return vector; 122 + 123 + /* Request the IRQ for PCS link vector. 
124 + * Map PCS cause to it, and unmask it 155 125 */ 156 126 err = request_irq(vector, &fbnic_pcs_msix_intr, 0, 157 127 fbd->netdev->name, fbd); 158 128 if (err) 159 129 return err; 160 130 131 + /* Map and enable interrupt, unmask vector after link is configured */ 161 132 fbnic_wr32(fbd, FBNIC_INTR_MSIX_CTRL(FBNIC_INTR_MSIX_CTRL_PCS_IDX), 162 133 FBNIC_PCS_MSIX_ENTRY | FBNIC_INTR_MSIX_CTRL_ENABLE); 134 + 135 + fbd->pcs_msix_vector = vector; 163 136 164 137 return 0; 165 138 } 166 139 167 140 /** 168 - * fbnic_pcs_irq_disable - Teardown the MAC IRQ to prepare for stopping 141 + * fbnic_pcs_free_irq - Teardown the PCS IRQ to prepare for stopping 169 142 * @fbd: Pointer to device that is stopping 170 143 * 171 - * This function undoes the work done in fbnic_pcs_irq_enable and prepares 144 + * This function undoes the work done in fbnic_pcs_request_irq and prepares 172 145 * the device to no longer receive traffic on the host interface. 173 146 **/ 174 - void fbnic_pcs_irq_disable(struct fbnic_dev *fbd) 147 + void fbnic_pcs_free_irq(struct fbnic_dev *fbd) 175 148 { 149 + /* Vector has already been freed */ 150 + if (!fbd->pcs_msix_vector) 151 + return; 152 + 176 153 /* Disable interrupt */ 177 154 fbnic_wr32(fbd, FBNIC_INTR_MSIX_CTRL(FBNIC_INTR_MSIX_CTRL_PCS_IDX), 178 155 FBNIC_PCS_MSIX_ENTRY); 156 + fbnic_wrfl(fbd); 157 + 158 + /* Synchronize IRQ to prevent race that would unmask vector */ 159 + synchronize_irq(fbd->pcs_msix_vector); 160 + 161 + /* Mask the vector */ 179 162 fbnic_wr32(fbd, FBNIC_INTR_MASK_SET(0), 1u << FBNIC_PCS_MSIX_ENTRY); 180 163 181 164 /* Free the vector */ 182 165 free_irq(fbd->pcs_msix_vector, fbd); 166 + fbd->pcs_msix_vector = 0; 183 167 } 184 168 185 169 void fbnic_synchronize_irq(struct fbnic_dev *fbd, int nr) ··· 282 226 { 283 227 struct pci_dev *pdev = to_pci_dev(fbd->dev); 284 228 285 - fbd->pcs_msix_vector = 0; 286 - fbd->fw_msix_vector = 0; 287 - 288 229 fbd->num_irqs = 0; 289 230 290 231 pci_free_irq_vectors(pdev); ··· 306 253 
num_irqs, wanted_irqs); 307 254 308 255 fbd->num_irqs = num_irqs; 309 - 310 - fbd->pcs_msix_vector = pci_irq_vector(pdev, FBNIC_PCS_MSIX_ENTRY); 311 - fbd->fw_msix_vector = pci_irq_vector(pdev, FBNIC_FW_MSIX_ENTRY); 312 256 313 257 return 0; 314 258 }
-6
drivers/net/ethernet/meta/fbnic/fbnic_mac.c
··· 79 79 fbnic_init_readrq(fbd, FBNIC_QM_RNI_RBP_CTL, cls, readrq); 80 80 fbnic_init_mps(fbd, FBNIC_QM_RNI_RDE_CTL, cls, mps); 81 81 fbnic_init_mps(fbd, FBNIC_QM_RNI_RCM_CTL, cls, mps); 82 - 83 - /* Enable XALI AR/AW outbound */ 84 - wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AW_CFG, 85 - FBNIC_PUL_OB_TLP_HDR_AW_CFG_BME); 86 - wr32(fbd, FBNIC_PUL_OB_TLP_HDR_AR_CFG, 87 - FBNIC_PUL_OB_TLP_HDR_AR_CFG_BME); 88 82 } 89 83 90 84 static void fbnic_mac_init_qm(struct fbnic_dev *fbd)
+3 -2
drivers/net/ethernet/meta/fbnic/fbnic_netdev.c
··· 44 44 if (err) 45 45 goto time_stop; 46 46 47 - err = fbnic_pcs_irq_enable(fbd); 47 + err = fbnic_pcs_request_irq(fbd); 48 48 if (err) 49 49 goto time_stop; 50 + 50 51 /* Pull the BMC config and initialize the RPC */ 51 52 fbnic_bmc_rpc_init(fbd); 52 53 fbnic_rss_reinit(fbd, fbn); ··· 83 82 struct fbnic_net *fbn = netdev_priv(netdev); 84 83 85 84 fbnic_down(fbn); 86 - fbnic_pcs_irq_disable(fbn->fbd); 85 + fbnic_pcs_free_irq(fbn->fbd); 87 86 88 87 fbnic_time_stop(fbn); 89 88 fbnic_fw_xmit_ownership_msg(fbn->fbd, false);
+7 -7
drivers/net/ethernet/meta/fbnic/fbnic_pci.c
··· 283 283 goto free_irqs; 284 284 } 285 285 286 - err = fbnic_fw_enable_mbx(fbd); 286 + err = fbnic_fw_request_mbx(fbd); 287 287 if (err) { 288 288 dev_err(&pdev->dev, 289 289 "Firmware mailbox initialization failure\n"); ··· 363 363 fbnic_hwmon_unregister(fbd); 364 364 fbnic_dbg_fbd_exit(fbd); 365 365 fbnic_devlink_unregister(fbd); 366 - fbnic_fw_disable_mbx(fbd); 366 + fbnic_fw_free_mbx(fbd); 367 367 fbnic_free_irqs(fbd); 368 368 369 369 fbnic_devlink_free(fbd); ··· 387 387 rtnl_unlock(); 388 388 389 389 null_uc_addr: 390 - fbnic_fw_disable_mbx(fbd); 390 + fbnic_fw_free_mbx(fbd); 391 391 392 392 /* Free the IRQs so they aren't trying to occupy sleeping CPUs */ 393 393 fbnic_free_irqs(fbd); ··· 420 420 fbd->mac->init_regs(fbd); 421 421 422 422 /* Re-enable mailbox */ 423 - err = fbnic_fw_enable_mbx(fbd); 423 + err = fbnic_fw_request_mbx(fbd); 424 424 if (err) 425 425 goto err_free_irqs; 426 426 ··· 438 438 if (netif_running(netdev)) { 439 439 err = __fbnic_open(fbn); 440 440 if (err) 441 - goto err_disable_mbx; 441 + goto err_free_mbx; 442 442 } 443 443 444 444 rtnl_unlock(); 445 445 446 446 return 0; 447 - err_disable_mbx: 447 + err_free_mbx: 448 448 rtnl_unlock(); 449 - fbnic_fw_disable_mbx(fbd); 449 + fbnic_fw_free_mbx(fbd); 450 450 err_free_irqs: 451 451 fbnic_free_irqs(fbd); 452 452 err_invalidate_uc_addr:
+13 -2
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 187 187 xdp_return_frame(xdpf); 188 188 break; 189 189 default: 190 - netdev_err(ndev, "tx_complete: invalid swdata type %d\n", swdata->type); 191 190 prueth_xmit_free(tx_chn, desc_tx); 192 191 ndev->stats.tx_dropped++; 193 192 continue; ··· 566 567 { 567 568 struct cppi5_host_desc_t *first_desc; 568 569 struct net_device *ndev = emac->ndev; 570 + struct netdev_queue *netif_txq; 569 571 struct prueth_tx_chn *tx_chn; 570 572 dma_addr_t desc_dma, buf_dma; 571 573 struct prueth_swdata *swdata; ··· 620 620 swdata->data.xdpf = xdpf; 621 621 } 622 622 623 + /* Report BQL before sending the packet */ 624 + netif_txq = netdev_get_tx_queue(ndev, tx_chn->id); 625 + netdev_tx_sent_queue(netif_txq, xdpf->len); 626 + 623 627 cppi5_hdesc_set_pktlen(first_desc, xdpf->len); 624 628 desc_dma = k3_cppi_desc_pool_virt2dma(tx_chn->desc_pool, first_desc); 625 629 626 630 ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn, first_desc, desc_dma); 627 631 if (ret) { 628 632 netdev_err(ndev, "xdp tx: push failed: %d\n", ret); 633 + netdev_tx_completed_queue(netif_txq, 1, xdpf->len); 629 634 goto drop_free_descs; 630 635 } 631 636 ··· 655 650 struct page *page, u32 *len) 656 651 { 657 652 struct net_device *ndev = emac->ndev; 653 + struct netdev_queue *netif_txq; 654 + int cpu = smp_processor_id(); 658 655 struct bpf_prog *xdp_prog; 659 656 struct xdp_frame *xdpf; 660 657 u32 pkt_len = *len; ··· 676 669 goto drop; 677 670 } 678 671 679 - q_idx = smp_processor_id() % emac->tx_ch_num; 672 + q_idx = cpu % emac->tx_ch_num; 673 + netif_txq = netdev_get_tx_queue(ndev, q_idx); 674 + __netif_tx_lock(netif_txq, cpu); 680 675 result = emac_xmit_xdp_frame(emac, xdpf, page, q_idx); 676 + __netif_tx_unlock(netif_txq); 681 677 if (result == ICSSG_XDP_CONSUMED) { 682 678 ndev->stats.tx_dropped++; 683 679 goto drop; ··· 989 979 ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn, first_desc, desc_dma); 990 980 if (ret) { 991 981 netdev_err(ndev, "tx: push failed: %d\n", ret); 982 + netdev_tx_completed_queue(netif_txq, 1, pkt_len); 992 983 goto drop_free_descs; 993 984 }
+10 -6
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 1075 1075 { 1076 1076 struct prueth_emac *emac = netdev_priv(dev); 1077 1077 struct net_device *ndev = emac->ndev; 1078 + struct netdev_queue *netif_txq; 1079 + int cpu = smp_processor_id(); 1078 1080 struct xdp_frame *xdpf; 1079 1081 unsigned int q_idx; 1080 1082 int nxmit = 0; 1081 1083 u32 err; 1082 1084 int i; 1083 1085 1084 - q_idx = smp_processor_id() % emac->tx_ch_num; 1086 + q_idx = cpu % emac->tx_ch_num; 1087 + netif_txq = netdev_get_tx_queue(ndev, q_idx); 1085 1088 1086 1089 if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) 1087 1090 return -EINVAL; 1088 1091 1092 + __netif_tx_lock(netif_txq, cpu); 1089 1093 for (i = 0; i < n; i++) { 1090 1094 xdpf = frames[i]; 1091 1095 err = emac_xmit_xdp_frame(emac, xdpf, NULL, q_idx); ··· 1099 1095 } 1100 1096 nxmit++; 1101 1097 } 1098 + __netif_tx_unlock(netif_txq); 1102 1099 1103 1100 return nxmit; 1104 1101 } ··· 1114 1109 static int emac_xdp_setup(struct prueth_emac *emac, struct netdev_bpf *bpf) 1115 1110 { 1116 1111 struct bpf_prog *prog = bpf->prog; 1117 - xdp_features_t val; 1118 - 1119 - val = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 1120 - NETDEV_XDP_ACT_NDO_XMIT; 1121 - xdp_set_features_flag(emac->ndev, val); 1122 1112 1123 1113 if (!emac->xdpi.prog && !prog) 1124 1114 return 0; ··· 1291 1291 ndev->hw_features = NETIF_F_SG; 1292 1292 ndev->features = ndev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER; 1293 1293 ndev->hw_features |= NETIF_PRUETH_HSR_OFFLOAD_FEATURES; 1294 + xdp_set_features_flag(ndev, 1295 + NETDEV_XDP_ACT_BASIC | 1296 + NETDEV_XDP_ACT_REDIRECT | 1297 + NETDEV_XDP_ACT_NDO_XMIT); 1294 1298 1295 1299 netif_napi_add(ndev, &emac->napi_rx, icssg_napi_rx_poll); 1296 1300 hrtimer_setup(&emac->rx_hrtimer, &emac_rx_timer_callback, CLOCK_MONOTONIC,
+18 -5
drivers/net/virtio_net.c
··· 3383 3383 bool refill) 3384 3384 { 3385 3385 bool running = netif_running(vi->dev); 3386 + bool schedule_refill = false; 3386 3387 3387 3388 if (refill && !try_fill_recv(vi, rq, GFP_KERNEL)) 3388 - schedule_delayed_work(&vi->refill, 0); 3389 - 3389 + schedule_refill = true; 3390 3390 if (running) 3391 3391 virtnet_napi_enable(rq); 3392 + 3393 + if (schedule_refill) 3394 + schedule_delayed_work(&vi->refill, 0); 3392 3395 } 3393 3396 3394 3397 static void virtnet_rx_resume_all(struct virtnet_info *vi) ··· 3731 3728 succ: 3732 3729 vi->curr_queue_pairs = queue_pairs; 3733 3730 /* virtnet_open() will refill when device is going to up. */ 3734 - if (dev->flags & IFF_UP) 3731 + spin_lock_bh(&vi->refill_lock); 3732 + if (dev->flags & IFF_UP && vi->refill_enabled) 3735 3733 schedule_delayed_work(&vi->refill, 0); 3734 + spin_unlock_bh(&vi->refill_lock); 3736 3735 3737 3736 return 0; 3738 3737 } ··· 5678 5673 5679 5674 if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_SPEED) 5680 5675 tx->hw_drop_ratelimits = 0; 5676 + 5677 + netdev_stat_queue_sum(dev, 5678 + dev->real_num_rx_queues, vi->max_queue_pairs, rx, 5679 + dev->real_num_tx_queues, vi->max_queue_pairs, tx); 5681 5680 } 5682 5681 5683 5682 static const struct netdev_stat_ops virtnet_stat_ops = { ··· 5894 5885 5895 5886 hdr_dma = virtqueue_dma_map_single_attrs(sq->vq, &xsk_hdr, vi->hdr_len, 5896 5887 DMA_TO_DEVICE, 0); 5897 - if (virtqueue_dma_mapping_error(sq->vq, hdr_dma)) 5898 - return -ENOMEM; 5888 + if (virtqueue_dma_mapping_error(sq->vq, hdr_dma)) { 5889 + err = -ENOMEM; 5890 + goto err_free_buffs; 5891 + } 5899 5892 5900 5893 err = xsk_pool_dma_map(pool, dma_dev, 0); 5901 5894 if (err) ··· 5925 5914 err_xsk_map: 5926 5915 virtqueue_dma_unmap_single_attrs(rq->vq, hdr_dma, vi->hdr_len, 5927 5916 DMA_TO_DEVICE, 0); 5917 + err_free_buffs: 5918 + kvfree(rq->xsk_buffs); 5928 5919 return err; 5929 5920 } 5930 5921
+2
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 588 588 IWL_DEV_INFO(0x7A70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name), 589 589 IWL_DEV_INFO(0x7AF0, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name), 590 590 IWL_DEV_INFO(0x7AF0, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name), 591 + IWL_DEV_INFO(0x7F70, 0x1691, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690s_name), 592 + IWL_DEV_INFO(0x7F70, 0x1692, iwlax411_2ax_cfg_so_gf4_a0, iwl_ax411_killer_1690i_name), 591 593 592 594 IWL_DEV_INFO(0x271C, 0x0214, iwl9260_2ac_cfg, iwl9260_1_name), 593 595 IWL_DEV_INFO(0x7E40, 0x1691, iwl_cfg_ma, iwl_ax411_killer_1690s_name),
+2 -1
drivers/nvme/host/core.c
··· 4493 4493 msleep(100); 4494 4494 } 4495 4495 4496 - if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) 4496 + if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_CONNECTING) || 4497 + !nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE)) 4497 4498 return; 4498 4499 4499 4500 nvme_unquiesce_io_queues(ctrl);
-1
drivers/pci/hotplug/s390_pci_hpc.c
··· 59 59 60 60 pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn); 61 61 if (pdev && pci_num_vf(pdev)) { 62 - pci_dev_put(pdev); 63 62 rc = -EBUSY; 64 63 goto out; 65 64 }
+1 -2
drivers/s390/block/Kconfig
··· 5 5 config DCSSBLK 6 6 def_tristate m 7 7 prompt "DCSSBLK support" 8 - depends on S390 && BLOCK 8 + depends on S390 && BLOCK && (DAX || DAX=n) 9 9 help 10 10 Support for dcss block device 11 11 ··· 14 14 depends on DCSSBLK 15 15 # requires S390 ZONE_DEVICE support 16 16 depends on BROKEN 17 - select DAX 18 17 prompt "DCSSBLK DAX support" 19 18 help 20 19 Enable DAX operation for the dcss block device
+3 -11
drivers/staging/axis-fifo/axis-fifo.c
··· 393 393 394 394 bytes_available = ioread32(fifo->base_addr + XLLF_RLR_OFFSET); 395 395 if (!bytes_available) { 396 - dev_err(fifo->dt_device, "received a packet of length 0 - fifo core will be reset\n"); 397 - reset_ip_core(fifo); 396 + dev_err(fifo->dt_device, "received a packet of length 0\n"); 398 397 ret = -EIO; 399 398 goto end_unlock; 400 399 } 401 400 402 401 if (bytes_available > len) { 403 - dev_err(fifo->dt_device, "user read buffer too small (available bytes=%zu user buffer bytes=%zu) - fifo core will be reset\n", 402 + dev_err(fifo->dt_device, "user read buffer too small (available bytes=%zu user buffer bytes=%zu)\n", 404 403 bytes_available, len); 405 - reset_ip_core(fifo); 406 404 ret = -EINVAL; 407 405 goto end_unlock; 408 406 } ··· 409 411 /* this probably can't happen unless IP 410 412 * registers were previously mishandled 411 413 */ 412 - dev_err(fifo->dt_device, "received a packet that isn't word-aligned - fifo core will be reset\n"); 413 - reset_ip_core(fifo); 414 + dev_err(fifo->dt_device, "received a packet that isn't word-aligned\n"); 414 415 ret = -EIO; 415 416 goto end_unlock; 416 417 } ··· 430 433 431 434 if (copy_to_user(buf + copied * sizeof(u32), tmp_buf, 432 435 copy * sizeof(u32))) { 433 - reset_ip_core(fifo); 434 436 ret = -EFAULT; 435 437 goto end_unlock; 436 438 } ··· 538 542 539 543 if (copy_from_user(tmp_buf, buf + copied * sizeof(u32), 540 544 copy * sizeof(u32))) { 541 - reset_ip_core(fifo); 542 545 ret = -EFAULT; 543 546 goto end_unlock; 544 547 } ··· 769 774 ret = -EIO; 770 775 goto end; 771 776 } 772 - 773 - /* IP sets TDFV to fifo depth - 4 so we will do the same */ 774 - fifo->tx_fifo_depth -= 4; 775 777 776 778 ret = get_dts_property(fifo, "xlnx,use-rx-data", &fifo->has_rx_fifo); 777 779 if (ret) {
+1 -1
drivers/staging/iio/adc/ad7816.c
··· 136 136 struct iio_dev *indio_dev = dev_to_iio_dev(dev); 137 137 struct ad7816_chip_info *chip = iio_priv(indio_dev); 138 138 139 - if (strcmp(buf, "full")) { 139 + if (strcmp(buf, "full") == 0) { 140 140 gpiod_set_value(chip->rdwr_pin, 1); 141 141 chip->mode = AD7816_FULL; 142 142 } else {
+1
drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
··· 1900 1900 __func__, ret); 1901 1901 goto free_dev; 1902 1902 } 1903 + dev->v4l2_dev.dev = &device->dev; 1903 1904 1904 1905 /* setup v4l controls */ 1905 1906 ret = bcm2835_mmal_init_controls(dev, &dev->ctrl_handler);
+17 -22
drivers/uio/uio_hv_generic.c
··· 131 131 vmbus_device_unregister(channel->device_obj); 132 132 } 133 133 134 - /* Sysfs API to allow mmap of the ring buffers 134 + /* Function used for mmap of ring buffer sysfs interface. 135 135 * The ring buffer is allocated as contiguous memory by vmbus_open 136 136 */ 137 - static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj, 138 - const struct bin_attribute *attr, 139 - struct vm_area_struct *vma) 137 + static int 138 + hv_uio_ring_mmap(struct vmbus_channel *channel, struct vm_area_struct *vma) 140 139 { 141 - struct vmbus_channel *channel 142 - = container_of(kobj, struct vmbus_channel, kobj); 143 140 void *ring_buffer = page_address(channel->ringbuffer_page); 144 141 145 142 if (channel->state != CHANNEL_OPENED_STATE) ··· 145 148 return vm_iomap_memory(vma, virt_to_phys(ring_buffer), 146 149 channel->ringbuffer_pagecount << PAGE_SHIFT); 147 150 } 148 - 149 - static const struct bin_attribute ring_buffer_bin_attr = { 150 - .attr = { 151 - .name = "ring", 152 - .mode = 0600, 153 - }, 154 - .size = 2 * SZ_2M, 155 - .mmap = hv_uio_ring_mmap, 156 - }; 157 151 158 152 /* Callback from VMBUS subsystem when new channel created. */ 159 153 static void ··· 166 178 /* Disable interrupts on sub channel */ 167 179 new_sc->inbound.ring_buffer->interrupt_mask = 1; 168 180 set_channel_read_mode(new_sc, HV_CALL_ISR); 169 - 170 - ret = sysfs_create_bin_file(&new_sc->kobj, &ring_buffer_bin_attr); 181 + ret = hv_create_ring_sysfs(new_sc, hv_uio_ring_mmap); 171 182 if (ret) { 172 183 dev_err(device, "sysfs create ring bin file failed; %d\n", ret); 173 184 vmbus_close(new_sc); ··· 337 350 goto fail_close; 338 351 } 339 352 340 - ret = sysfs_create_bin_file(&channel->kobj, &ring_buffer_bin_attr); 341 - if (ret) 342 - dev_notice(&dev->device, 343 - "sysfs create ring bin file failed; %d\n", ret); 353 + /* 354 + * This internally calls sysfs_update_group, which returns a non-zero value if it executes 355 + * before sysfs_create_group. This is expected as the 'ring' will be created later in 356 + * vmbus_device_register() -> vmbus_add_channel_kobj(). Thus, no need to check the return 357 + * value and print warning. 358 + * 359 + * Creating/exposing sysfs in driver probe is not encouraged as it can lead to race 360 + * conditions with userspace. For backward compatibility, "ring" sysfs could not be removed 361 + * or decoupled from uio_hv_generic probe. Userspace programs can make use of inotify 362 + * APIs to make sure that ring is created. 363 + */ 364 + hv_create_ring_sysfs(channel, hv_uio_ring_mmap); 344 365 345 366 hv_set_drvdata(dev, pdata); 346 367 ··· 370 375 if (!pdata) 371 376 return; 372 377 373 - sysfs_remove_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr); 378 + hv_remove_ring_sysfs(dev->channel); 374 379 uio_unregister_device(&pdata->info); 375 380 hv_uio_cleanup(dev, pdata); 376 381

+31
drivers/usb/cdns3/cdnsp-gadget.c
··· 139 139 (portsc & PORT_CHANGE_BITS), port_regs); 140 140 } 141 141 142 + static void cdnsp_set_apb_timeout_value(struct cdnsp_device *pdev) 143 + { 144 + struct cdns *cdns = dev_get_drvdata(pdev->dev); 145 + __le32 __iomem *reg; 146 + void __iomem *base; 147 + u32 offset = 0; 148 + u32 val; 149 + 150 + if (!cdns->override_apb_timeout) 151 + return; 152 + 153 + base = &pdev->cap_regs->hc_capbase; 154 + offset = cdnsp_find_next_ext_cap(base, offset, D_XEC_PRE_REGS_CAP); 155 + reg = base + offset + REG_CHICKEN_BITS_3_OFFSET; 156 + 157 + val = le32_to_cpu(readl(reg)); 158 + val = CHICKEN_APB_TIMEOUT_SET(val, cdns->override_apb_timeout); 159 + writel(cpu_to_le32(val), reg); 160 + } 161 + 142 162 static void cdnsp_set_chicken_bits_2(struct cdnsp_device *pdev, u32 bit) 143 163 { 144 164 __le32 __iomem *reg; ··· 1793 1773 reg += cdnsp_find_next_ext_cap(reg, 0, RTL_REV_CAP); 1794 1774 pdev->rev_cap = reg; 1795 1775 1776 + pdev->rtl_revision = readl(&pdev->rev_cap->rtl_revision); 1777 + 1796 1778 dev_info(pdev->dev, "Rev: %08x/%08x, eps: %08x, buff: %08x/%08x\n", 1797 1779 readl(&pdev->rev_cap->ctrl_revision), 1798 1780 readl(&pdev->rev_cap->rtl_revision), ··· 1819 1797 pdev->hcc_params = readl(&pdev->cap_regs->hc_capbase); 1820 1798 pdev->hci_version = HC_VERSION(pdev->hcc_params); 1821 1799 pdev->hcc_params = readl(&pdev->cap_regs->hcc_params); 1800 + 1801 + /* 1802 + * Override the APB timeout value to give the controller more time for 1803 + * enabling UTMI clock and synchronizing APB and UTMI clock domains. 1804 + * This fix is platform specific and is required to fixes issue with 1805 + * reading incorrect value from PORTSC register after resuming 1806 + * from L1 state. 1807 + */ 1808 + cdnsp_set_apb_timeout_value(pdev); 1822 1809 1823 1810 cdnsp_get_rev_cap(pdev); 1824 1811
+6
drivers/usb/cdns3/cdnsp-gadget.h
··· 520 520 #define REG_CHICKEN_BITS_2_OFFSET 0x48 521 521 #define CHICKEN_XDMA_2_TP_CACHE_DIS BIT(28) 522 522 523 + #define REG_CHICKEN_BITS_3_OFFSET 0x4C 524 + #define CHICKEN_APB_TIMEOUT_SET(p, val) (((p) & ~GENMASK(21, 0)) | (val)) 525 + 523 526 /* XBUF Extended Capability ID. */ 524 527 #define XBUF_CAP_ID 0xCB 525 528 #define XBUF_RX_TAG_MASK_0_OFFSET 0x1C ··· 1360 1357 * @rev_cap: Controller Capabilities Registers. 1361 1358 * @hcs_params1: Cached register copies of read-only HCSPARAMS1 1362 1359 * @hcc_params: Cached register copies of read-only HCCPARAMS1 1360 + * @rtl_revision: Cached controller rtl revision. 1363 1361 * @setup: Temporary buffer for setup packet. 1364 1362 * @ep0_preq: Internal allocated request used during enumeration. 1365 1363 * @ep0_stage: ep0 stage during enumeration process. ··· 1415 1411 __u32 hcs_params1; 1416 1412 __u32 hcs_params3; 1417 1413 __u32 hcc_params; 1414 + #define RTL_REVISION_NEW_LPM 0x2700 1415 + __u32 rtl_revision; 1418 1416 /* Lock used in interrupt thread context. */ 1419 1417 spinlock_t lock; 1420 1418 struct usb_ctrlrequest setup;
+10 -2
drivers/usb/cdns3/cdnsp-pci.c
··· 28 28 #define PCI_DRIVER_NAME "cdns-pci-usbssp" 29 29 #define PLAT_DRIVER_NAME "cdns-usbssp" 30 30 31 + #define CHICKEN_APB_TIMEOUT_VALUE 0x1C20 32 + 31 33 static struct pci_dev *cdnsp_get_second_fun(struct pci_dev *pdev) 32 34 { 33 35 /* ··· 141 139 cdnsp->otg_irq = pdev->irq; 142 140 } 143 141 142 + /* 143 + * Cadence PCI based platform require some longer timeout for APB 144 + * to fixes domain clock synchronization issue after resuming 145 + * controller from L1 state. 146 + */ 147 + cdnsp->override_apb_timeout = CHICKEN_APB_TIMEOUT_VALUE; 148 + pci_set_drvdata(pdev, cdnsp); 149 + 144 150 if (pci_is_enabled(func)) { 145 151 cdnsp->dev = dev; 146 152 cdnsp->gadget_init = cdnsp_gadget_init; ··· 157 147 if (ret) 158 148 goto free_cdnsp; 159 149 } 160 - 161 - pci_set_drvdata(pdev, cdnsp); 162 150 163 151 device_wakeup_enable(&pdev->dev); 164 152 if (pci_dev_run_wake(pdev))
+2 -1
drivers/usb/cdns3/cdnsp-ring.c
··· 308 308 309 309 writel(db_value, reg_addr); 310 310 311 - cdnsp_force_l0_go(pdev); 311 + if (pdev->rtl_revision < RTL_REVISION_NEW_LPM) 312 + cdnsp_force_l0_go(pdev); 312 313 313 314 /* Doorbell was set. */ 314 315 return true;
+3
drivers/usb/cdns3/core.h
··· 79 79 * @pdata: platform data from glue layer 80 80 * @lock: spinlock structure 81 81 * @xhci_plat_data: xhci private data structure pointer 82 + * @override_apb_timeout: hold value of APB timeout. For value 0 the default 83 + * value in CHICKEN_BITS_3 will be preserved. 82 84 * @gadget_init: pointer to gadget initialization function 83 85 */ 84 86 struct cdns { ··· 119 117 struct cdns3_platform_data *pdata; 120 118 spinlock_t lock; 121 119 struct xhci_plat_priv *xhci_plat_data; 120 + u32 override_apb_timeout; 122 121 123 122 int (*gadget_init)(struct cdns *cdns); 124 123 };
+36 -23
drivers/usb/class/usbtmc.c
··· 482 482 u8 *buffer; 483 483 u8 tag; 484 484 int rv; 485 + long wait_rv; 485 486 486 487 dev_dbg(dev, "Enter ioctl_read_stb iin_ep_present: %d\n", 487 488 data->iin_ep_present); ··· 512 511 } 513 512 514 513 if (data->iin_ep_present) { 515 - rv = wait_event_interruptible_timeout( 514 + wait_rv = wait_event_interruptible_timeout( 516 515 data->waitq, 517 516 atomic_read(&data->iin_data_valid) != 0, 518 517 file_data->timeout); 519 - if (rv < 0) { 520 - dev_dbg(dev, "wait interrupted %d\n", rv); 518 + if (wait_rv < 0) { 519 + dev_dbg(dev, "wait interrupted %ld\n", wait_rv); 520 + rv = wait_rv; 521 521 goto exit; 522 522 } 523 523 524 - if (rv == 0) { 524 + if (wait_rv == 0) { 525 525 dev_dbg(dev, "wait timed out\n"); 526 526 rv = -ETIMEDOUT; 527 527 goto exit; ··· 540 538 } 541 539 542 540 dev_dbg(dev, "stb:0x%02x received %d\n", (unsigned int)*stb, rv); 541 + 542 + rv = 0; 543 543 544 544 exit: 545 545 /* bump interrupt bTag */ ··· 606 602 { 607 603 struct usbtmc_device_data *data = file_data->data; 608 604 struct device *dev = &data->intf->dev; 609 - int rv; 610 605 u32 timeout; 611 606 unsigned long expire; 607 + long wait_rv; 612 608 613 609 if (!data->iin_ep_present) { 614 610 dev_dbg(dev, "no interrupt endpoint present\n"); ··· 622 618 623 619 mutex_unlock(&data->io_mutex); 624 620 625 - rv = wait_event_interruptible_timeout( 626 - data->waitq, 627 - atomic_read(&file_data->srq_asserted) != 0 || 628 - atomic_read(&file_data->closing), 629 - expire); 621 + wait_rv = wait_event_interruptible_timeout( 622 + data->waitq, 623 + atomic_read(&file_data->srq_asserted) != 0 || 624 + atomic_read(&file_data->closing), 625 + expire); 630 626 631 627 mutex_lock(&data->io_mutex); 632 628 633 629 /* Note! disconnect or close could be called in the meantime */ 634 630 if (atomic_read(&file_data->closing) || data->zombie) 635 - rv = -ENODEV; 631 + return -ENODEV; 636 632 637 - if (rv < 0) { 638 - /* dev can be invalid now! */ 639 - pr_debug("%s - wait interrupted %d\n", __func__, rv); 640 - return rv; 633 + if (wait_rv < 0) { 634 + dev_dbg(dev, "%s - wait interrupted %ld\n", __func__, wait_rv); 635 + return wait_rv; 641 636 } 642 637 643 638 if (wait_rv == 0) { 644 639 dev_dbg(dev, "%s - wait timed out\n", __func__); 645 640 return -ETIMEDOUT; 646 641 } ··· 833 830 unsigned long expire; 834 831 int bufcount = 1; 835 832 int again = 0; 833 + long wait_rv; 836 834 837 835 /* mutex already locked */ 838 836 ··· 946 942 if (!(flags & USBTMC_FLAG_ASYNC)) { 947 943 dev_dbg(dev, "%s: before wait time %lu\n", 948 944 __func__, expire); 949 - retval = wait_event_interruptible_timeout( 945 + wait_rv = wait_event_interruptible_timeout( 950 946 file_data->wait_bulk_in, 951 947 usbtmc_do_transfer(file_data), 952 948 expire); 953 949 954 - dev_dbg(dev, "%s: wait returned %d\n", 955 - __func__, retval); 950 + dev_dbg(dev, "%s: wait returned %ld\n", 951 + __func__, wait_rv); 956 952 957 - if (retval <= 0) { 958 - if (retval == 0) 959 - retval = -ETIMEDOUT; 953 + if (wait_rv < 0) { 954 + retval = wait_rv; 960 955 goto error; 961 956 } 957 + 958 + if (wait_rv == 0) { 959 + retval = -ETIMEDOUT; 960 + goto error; 961 + } 962 + 962 963 } 963 964 964 965 urb = usb_get_from_anchor(&file_data->in_anchor); ··· 1389 1380 if (!buffer) 1390 1381 return -ENOMEM; 1391 1382 1392 - mutex_lock(&data->io_mutex); 1383 + retval = mutex_lock_interruptible(&data->io_mutex); 1384 + if (retval < 0) 1385 + goto exit_nolock; 1386 + 1393 1387 if (data->zombie) { 1394 1388 retval = -ENODEV; 1395 1389 goto exit; ··· 1515 1503 1516 1504 exit: 1517 1505 mutex_unlock(&data->io_mutex); 1506 + exit_nolock: 1518 1507 kfree(buffer); 1519 1508 return retval; 1520 1509 }
+4
drivers/usb/dwc3/core.h
··· 1164 1164 * @gsbuscfg0_reqinfo: store GSBUSCFG0.DATRDREQINFO, DESRDREQINFO, 1165 1165 * DATWRREQINFO, and DESWRREQINFO value passed from 1166 1166 * glue driver. 1167 + * @wakeup_pending_funcs: Indicates whether any interface has requested for 1168 + * function wakeup in bitmap format where bit position 1169 + * represents interface_id. 1167 1170 */ 1168 1171 struct dwc3 { 1169 1172 struct work_struct drd_work; ··· 1397 1394 int num_ep_resized; 1398 1395 struct dentry *debug_root; 1399 1396 u32 gsbuscfg0_reqinfo; 1397 + u32 wakeup_pending_funcs; 1400 1398 }; 1401 1399 1402 1400 #define INCRX_BURST_MODE 0
+23 -37
drivers/usb/dwc3/gadget.c
··· 276 276 return ret; 277 277 } 278 278 279 - static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async); 280 - 281 279 /** 282 280 * dwc3_send_gadget_ep_cmd - issue an endpoint command 283 281 * @dep: the endpoint to which the command is going to be issued ··· 2357 2359 return __dwc3_gadget_get_frame(dwc); 2358 2360 } 2359 2361 2360 - static int __dwc3_gadget_wakeup(struct dwc3 *dwc, bool async) 2362 + static int __dwc3_gadget_wakeup(struct dwc3 *dwc) 2361 2363 { 2362 - int retries; 2363 - 2364 2364 int ret; 2365 2365 u32 reg; 2366 2366 ··· 2386 2390 return -EINVAL; 2387 2391 } 2388 2392 2389 - if (async) 2390 - dwc3_gadget_enable_linksts_evts(dwc, true); 2393 + dwc3_gadget_enable_linksts_evts(dwc, true); 2391 2394 2392 2395 ret = dwc3_gadget_set_link_state(dwc, DWC3_LINK_STATE_RECOV); 2393 2396 if (ret < 0) { ··· 2405 2410 2406 2411 /* 2407 2412 * Since link status change events are enabled we will receive 2408 2413 * an U0 event when wakeup is successful. 2409 2414 */ 2410 - if (async) 2411 - return 0; 2412 - 2413 - /* poll until Link State changes to ON */ 2414 - retries = 20000; 2415 - 2416 - while (retries--) { 2417 - reg = dwc3_readl(dwc->regs, DWC3_DSTS); 2418 - 2419 - /* in HS, means ON */ 2420 - if (DWC3_DSTS_USBLNKST(reg) == DWC3_LINK_STATE_U0) 2421 - break; 2422 - } 2423 - 2424 - if (DWC3_DSTS_USBLNKST(reg) != DWC3_LINK_STATE_U0) { 2425 - dev_err(dwc->dev, "failed to send remote wakeup\n"); 2426 - return -EINVAL; 2427 - } 2428 - 2429 2415 return 0; 2430 2416 } 2431 2417 ··· 2427 2451 spin_unlock_irqrestore(&dwc->lock, flags); 2428 2452 return -EINVAL; 2429 2453 } 2430 - ret = __dwc3_gadget_wakeup(dwc, true); 2454 + ret = __dwc3_gadget_wakeup(dwc); 2431 2455 2432 2456 spin_unlock_irqrestore(&dwc->lock, flags); 2433 2457 ··· 2455 2479 */ 2456 2480 link_state = dwc3_gadget_get_link_state(dwc); 2457 2481 if (link_state == DWC3_LINK_STATE_U3) { 2458 - ret = __dwc3_gadget_wakeup(dwc, false); 2459 - if (ret) { 2460 - spin_unlock_irqrestore(&dwc->lock, flags); 2461 - return -EINVAL; 2462 - } 2463 - dwc3_resume_gadget(dwc); 2464 - dwc->suspended = false; 2465 - dwc->link_state = DWC3_LINK_STATE_U0; 2482 + dwc->wakeup_pending_funcs |= BIT(intf_id); 2483 + ret = __dwc3_gadget_wakeup(dwc); 2484 + spin_unlock_irqrestore(&dwc->lock, flags); 2485 + return ret; 2466 2486 } 2467 2487 2468 2488 ret = dwc3_send_gadget_generic_command(dwc, DWC3_DGCMD_DEV_NOTIFICATION, ··· 4325 4353 { 4326 4354 enum dwc3_link_state next = evtinfo & DWC3_LINK_STATE_MASK; 4327 4355 unsigned int pwropt; 4356 + int ret; 4357 + int intf_id; 4328 4358 4329 4359 /* 4330 4360 * WORKAROUND: DWC3 < 2.50a have an issue when configured without ··· 4402 4428 4403 4429 switch (next) { 4404 4430 case DWC3_LINK_STATE_U0: 4405 - if (dwc->gadget->wakeup_armed) { 4431 + if (dwc->gadget->wakeup_armed || dwc->wakeup_pending_funcs) { 4406 4432 dwc3_gadget_enable_linksts_evts(dwc, false); 4407 4433 dwc3_resume_gadget(dwc); 4408 4434 dwc->suspended = false; ··· 4425 4451 } 4426 4452 4427 4453 dwc->link_state = next; 4454 + 4455 + /* Proceed with func wakeup if any interfaces that has requested */ 4456 + while (dwc->wakeup_pending_funcs && (next == DWC3_LINK_STATE_U0)) { 4457 + intf_id = ffs(dwc->wakeup_pending_funcs) - 1; 4458 + ret = dwc3_send_gadget_generic_command(dwc, DWC3_DGCMD_DEV_NOTIFICATION, 4459 + DWC3_DGCMDPAR_DN_FUNC_WAKE | 4460 + DWC3_DGCMDPAR_INTF_SEL(intf_id)); 4461 + if (ret) 4462 + dev_err(dwc->dev, "Failed to send DN wake for intf %d\n", intf_id); 4463 + 4464 + dwc->wakeup_pending_funcs &= ~BIT(intf_id); 4465 + } 4428 4466 }
+5 -7
drivers/usb/gadget/composite.c
··· 2011 2011 2012 2012 if (f->get_status) { 2013 2013 status = f->get_status(f); 2014 + 2014 2015 if (status < 0) 2015 2016 break; 2016 - } else { 2017 - /* Set D0 and D1 bits based on func wakeup capability */ 2018 - if (f->config->bmAttributes & USB_CONFIG_ATT_WAKEUP) { 2019 - status |= USB_INTRF_STAT_FUNC_RW_CAP; 2020 - if (f->func_wakeup_armed) 2021 - status |= USB_INTRF_STAT_FUNC_RW; 2022 - } 2017 + 2018 + /* if D5 is not set, then device is not wakeup capable */ 2019 + if (!(f->config->bmAttributes & USB_CONFIG_ATT_WAKEUP)) 2020 + status &= ~(USB_INTRF_STAT_FUNC_RW_CAP | USB_INTRF_STAT_FUNC_RW); 2023 2021 } 2024 2022 2025 2023 put_unaligned_le16(status & 0x0000ffff, req->buf);
+7
drivers/usb/gadget/function/f_ecm.c
··· 892 892 gether_resume(&ecm->port); 893 893 } 894 894 895 + static int ecm_get_status(struct usb_function *f) 896 + { 897 + return (f->func_wakeup_armed ? USB_INTRF_STAT_FUNC_RW : 0) | 898 + USB_INTRF_STAT_FUNC_RW_CAP; 899 + } 900 + 895 901 static void ecm_free(struct usb_function *f) 896 902 { 897 903 struct f_ecm *ecm; ··· 966 960 ecm->port.func.disable = ecm_disable; 967 961 ecm->port.func.free_func = ecm_free; 968 962 ecm->port.func.suspend = ecm_suspend; 963 + ecm->port.func.get_status = ecm_get_status; 969 964 ecm->port.func.resume = ecm_resume; 970 965 971 966 return &ecm->port.func;
+4
drivers/usb/gadget/udc/tegra-xudc.c
··· 1749 1749 val = xudc_readl(xudc, CTRL); 1750 1750 val &= ~CTRL_RUN; 1751 1751 xudc_writel(xudc, val, CTRL); 1752 + 1753 + val = xudc_readl(xudc, ST); 1754 + if (val & ST_RC) 1755 + xudc_writel(xudc, ST_RC, ST); 1752 1756 } 1753 1757 1754 1758 dev_info(xudc->dev, "ep %u disabled\n", ep->index);
+1 -1
drivers/usb/host/uhci-platform.c
··· 121 121 } 122 122 123 123 /* Get and enable clock if any specified */ 124 - uhci->clk = devm_clk_get(&pdev->dev, NULL); 124 + uhci->clk = devm_clk_get_optional(&pdev->dev, NULL); 125 125 if (IS_ERR(uhci->clk)) { 126 126 ret = PTR_ERR(uhci->clk); 127 127 goto err_rmr;
+16 -3
drivers/usb/host/xhci-dbgcap.c
··· 823 823 { 824 824 dma_addr_t deq; 825 825 union xhci_trb *evt; 826 + enum evtreturn ret = EVT_DONE; 826 827 u32 ctrl, portsc; 827 828 bool update_erdp = false; 828 829 ··· 910 909 break; 911 910 case TRB_TYPE(TRB_TRANSFER): 912 911 dbc_handle_xfer_event(dbc, evt); 912 + ret = EVT_XFER_DONE; 913 913 break; 914 914 default: 915 915 break; ··· 929 927 lo_hi_writeq(deq, &dbc->regs->erdp); 930 928 } 931 929 932 - return EVT_DONE; 930 + return ret; 933 931 } 934 932 935 933 static void xhci_dbc_handle_events(struct work_struct *work) ··· 938 936 struct xhci_dbc *dbc; 939 937 unsigned long flags; 940 938 unsigned int poll_interval; 939 + unsigned long busypoll_timelimit; 941 940 942 941 dbc = container_of(to_delayed_work(work), struct xhci_dbc, event_work); 943 942 poll_interval = dbc->poll_interval; ··· 957 954 dbc->driver->disconnect(dbc); 958 955 break; 959 956 case EVT_DONE: 960 - /* set fast poll rate if there are pending data transfers */ 957 + /* 958 + * Set fast poll rate if there are pending out transfers, or 959 + * a transfer was recently processed 960 + */ 961 + busypoll_timelimit = dbc->xfer_timestamp + 962 + msecs_to_jiffies(DBC_XFER_INACTIVITY_TIMEOUT); 963 + 961 964 if (!list_empty(&dbc->eps[BULK_OUT].list_pending) || 962 - !list_empty(&dbc->eps[BULK_IN].list_pending)) 965 + time_is_after_jiffies(busypoll_timelimit)) 963 966 poll_interval = 0; 967 + break; 968 + case EVT_XFER_DONE: 969 + dbc->xfer_timestamp = jiffies; 970 + poll_interval = 0; 964 971 break; 965 972 default: 966 973 dev_info(dbc->dev, "stop handling dbc events\n");
+3
drivers/usb/host/xhci-dbgcap.h
··· 96 96 #define DBC_WRITE_BUF_SIZE 8192 97 97 #define DBC_POLL_INTERVAL_DEFAULT 64 /* milliseconds */ 98 98 #define DBC_POLL_INTERVAL_MAX 5000 /* milliseconds */ 99 + #define DBC_XFER_INACTIVITY_TIMEOUT 10 /* milliseconds */ 99 100 /* 100 101 * Private structure for DbC hardware state: 101 102 */ ··· 143 142 enum dbc_state state; 144 143 struct delayed_work event_work; 145 144 unsigned int poll_interval; /* ms */ 145 + unsigned long xfer_timestamp; 146 146 unsigned resume_required:1; 147 147 struct dbc_ep eps[2]; 148 148 ··· 189 187 enum evtreturn { 190 188 EVT_ERR = -1, 191 189 EVT_DONE, 190 + EVT_XFER_DONE, 192 191 EVT_GSER, 193 192 EVT_DISC, 194 193 };
+9 -10
drivers/usb/host/xhci-ring.c
··· 699 699 int new_cycle; 700 700 dma_addr_t addr; 701 701 u64 hw_dequeue; 702 - bool cycle_found = false; 702 + bool hw_dequeue_found = false; 703 703 bool td_last_trb_found = false; 704 704 u32 trb_sct = 0; 705 705 int ret; ··· 715 715 hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id); 716 716 new_seg = ep_ring->deq_seg; 717 717 new_deq = ep_ring->dequeue; 718 - new_cycle = hw_dequeue & 0x1; 718 + new_cycle = le32_to_cpu(td->end_trb->generic.field[3]) & TRB_CYCLE; 719 719 720 720 /* 721 - * We want to find the pointer, segment and cycle state of the new trb 722 - * (the one after current TD's end_trb). We know the cycle state at 723 - * hw_dequeue, so walk the ring until both hw_dequeue and end_trb are 724 - * found. 721 + * Walk the ring until both the next TRB and hw_dequeue are found (don't 722 + * move hw_dequeue back if it went forward due to a HW bug). Cycle state 723 + * is loaded from a known good TRB, track later toggles to maintain it. 725 724 */ 726 725 do { 727 - if (!cycle_found && xhci_trb_virt_to_dma(new_seg, new_deq) 726 + if (!hw_dequeue_found && xhci_trb_virt_to_dma(new_seg, new_deq) 728 727 == (dma_addr_t)(hw_dequeue & ~0xf)) { 729 - cycle_found = true; 728 + hw_dequeue_found = true; 730 729 if (td_last_trb_found) 731 730 break; 732 731 } 733 732 if (new_deq == td->end_trb) 734 733 td_last_trb_found = true; 735 734 736 - if (cycle_found && trb_is_link(new_deq) && 735 + if (td_last_trb_found && trb_is_link(new_deq) && 737 736 link_trb_toggles_cycle(new_deq)) 738 737 new_cycle ^= 0x1; 739 738 ··· 744 745 return -EINVAL; 745 746 } 746 747 747 - } while (!cycle_found || !td_last_trb_found); 748 + } while (!hw_dequeue_found || !td_last_trb_found); 748 749 749 750 /* Don't update the ring cycle state for the producer (us). */ 750 751 addr = xhci_trb_virt_to_dma(new_seg, new_deq);
+3
drivers/usb/host/xhci-tegra.c
··· 1364 1364 tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(tegra->padctl, 1365 1365 tegra->otg_usb2_port); 1366 1366 1367 + pm_runtime_get_sync(tegra->dev); 1367 1368 if (tegra->host_mode) { 1368 1369 /* switch to host mode */ 1369 1370 if (tegra->otg_usb3_port >= 0) { ··· 1394 1393 } 1395 1394 1396 1395 tegra_xhci_set_port_power(tegra, true, true); 1396 + pm_runtime_mark_last_busy(tegra->dev); 1397 1397 1398 1398 } else { 1399 1399 if (tegra->otg_usb3_port >= 0) ··· 1402 1400 1403 1401 tegra_xhci_set_port_power(tegra, true, false); 1404 1402 } 1403 + pm_runtime_put_autosuspend(tegra->dev); 1405 1404 } 1406 1405 1407 1406 #if IS_ENABLED(CONFIG_PM) || IS_ENABLED(CONFIG_PM_SLEEP)
+8 -2
drivers/usb/misc/onboard_usb_dev.c
··· 569 569 } 570 570 571 571 static const struct usb_device_id onboard_dev_id_table[] = { 572 - { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6504) }, /* CYUSB33{0,1,2}x/CYUSB230x 3.0 HUB */ 573 - { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6506) }, /* CYUSB33{0,1,2}x/CYUSB230x 2.0 HUB */ 572 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6500) }, /* CYUSB330x 3.0 HUB */ 573 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6502) }, /* CYUSB330x 2.0 HUB */ 574 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6503) }, /* CYUSB33{0,1}x 2.0 HUB, Vendor Mode */ 575 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6504) }, /* CYUSB331x 3.0 HUB */ 576 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6506) }, /* CYUSB331x 2.0 HUB */ 577 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6507) }, /* CYUSB332x 2.0 HUB, Vendor Mode */ 578 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6508) }, /* CYUSB332x 3.0 HUB */ 579 + { USB_DEVICE(VENDOR_ID_CYPRESS, 0x650a) }, /* CYUSB332x 2.0 HUB */ 574 580 { USB_DEVICE(VENDOR_ID_CYPRESS, 0x6570) }, /* CY7C6563x 2.0 HUB */ 575 581 { USB_DEVICE(VENDOR_ID_GENESYS, 0x0608) }, /* Genesys Logic GL850G USB 2.0 HUB */ 576 582 { USB_DEVICE(VENDOR_ID_GENESYS, 0x0610) }, /* Genesys Logic GL852G USB 2.0 HUB */
+1 -1
drivers/usb/typec/tcpm/tcpm.c
··· 5965 5965 case SNK_TRY_WAIT_DEBOUNCE: 5966 5966 if (!tcpm_port_is_sink(port)) { 5967 5967 port->max_wait = 0; 5968 - tcpm_set_state(port, SRC_TRYWAIT, 0); 5968 + tcpm_set_state(port, SRC_TRYWAIT, PD_T_PD_DEBOUNCE); 5969 5969 } 5970 5970 break; 5971 5971 case SRC_TRY_WAIT:
+13 -8
drivers/usb/typec/ucsi/displayport.c
··· 54 54 u8 cur = 0; 55 55 int ret; 56 56 57 - mutex_lock(&dp->con->lock); 57 + if (!ucsi_con_mutex_lock(dp->con)) 58 + return -ENOTCONN; 58 59 59 60 if (!dp->override && dp->initialized) { 60 61 const struct typec_altmode *p = typec_altmode_get_partner(alt); ··· 101 100 schedule_work(&dp->work); 102 101 ret = 0; 103 102 err_unlock: 104 - mutex_unlock(&dp->con->lock); 103 + ucsi_con_mutex_unlock(dp->con); 105 104 106 105 return ret; 107 106 } ··· 113 112 u64 command; 114 113 int ret = 0; 115 114 116 - mutex_lock(&dp->con->lock); 115 + if (!ucsi_con_mutex_lock(dp->con)) 116 + return -ENOTCONN; 117 117 118 118 if (!dp->override) { 119 119 const struct typec_altmode *p = typec_altmode_get_partner(alt); ··· 146 144 schedule_work(&dp->work); 147 145 148 146 out_unlock: 149 - mutex_unlock(&dp->con->lock); 147 + ucsi_con_mutex_unlock(dp->con); 150 148 151 149 return ret; 152 150 } ··· 204 202 int cmd = PD_VDO_CMD(header); 205 203 int svdm_version; 206 204 207 - mutex_lock(&dp->con->lock); 205 + if (!ucsi_con_mutex_lock(dp->con)) 206 + return -ENOTCONN; 208 207 209 208 if (!dp->override && dp->initialized) { 210 209 const struct typec_altmode *p = typec_altmode_get_partner(alt); 211 210 212 211 dev_warn(&p->dev, 213 212 "firmware doesn't support alternate mode overriding\n"); 214 - mutex_unlock(&dp->con->lock); 213 + ucsi_con_mutex_unlock(dp->con); 215 214 return -EOPNOTSUPP; 216 215 } 217 216 218 217 svdm_version = typec_altmode_get_svdm_version(alt); 219 218 if (svdm_version < 0) { 220 - mutex_unlock(&dp->con->lock); 219 + ucsi_con_mutex_unlock(dp->con); 221 220 return svdm_version; 222 221 } 223 222 ··· 262 259 break; 263 260 } 264 261 265 - mutex_unlock(&dp->con->lock); 262 + ucsi_con_mutex_unlock(dp->con); 266 263 267 264 return 0; 268 265 } ··· 298 295 dp = typec_altmode_get_drvdata(alt); 299 296 if (!dp) 300 297 return; 298 + 299 + cancel_work_sync(&dp->work); 301 300 302 301 dp->data.conf = 0; 303 302 dp->data.status = 0;
+34
drivers/usb/typec/ucsi/ucsi.c
··· 1923 1923 EXPORT_SYMBOL_GPL(ucsi_set_drvdata); 1924 1924 1925 1925 /** 1926 + * ucsi_con_mutex_lock - Acquire the connector mutex 1927 + * @con: The connector interface to lock 1928 + * 1929 + * Returns true on success, false if the connector is disconnected 1930 + */ 1931 + bool ucsi_con_mutex_lock(struct ucsi_connector *con) 1932 + { 1933 + bool mutex_locked = false; 1934 + bool connected = true; 1935 + 1936 + while (connected && !mutex_locked) { 1937 + mutex_locked = mutex_trylock(&con->lock) != 0; 1938 + connected = UCSI_CONSTAT(con, CONNECTED); 1939 + if (connected && !mutex_locked) 1940 + msleep(20); 1941 + } 1942 + 1943 + connected = connected && con->partner; 1944 + if (!connected && mutex_locked) 1945 + mutex_unlock(&con->lock); 1946 + 1947 + return connected; 1948 + } 1949 + 1950 + /** 1951 + * ucsi_con_mutex_unlock - Release the connector mutex 1952 + * @con: The connector interface to unlock 1953 + */ 1954 + void ucsi_con_mutex_unlock(struct ucsi_connector *con) 1955 + { 1956 + mutex_unlock(&con->lock); 1957 + } 1958 + 1959 + /** 1926 1960 * ucsi_create - Allocate UCSI instance 1927 1961 * @dev: Device interface to the PPM (Platform Policy Manager) 1928 1962 * @ops: I/O routines
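The ucsi.c hunk above replaces a plain mutex_lock() with a lock-or-bail helper: keep retrying the lock only while the connector still reports as connected, and back out (releasing the lock) if the partner vanished meanwhile. A minimal userspace model of that control flow — the struct, its fields, and try_lock() are hypothetical stand-ins for the kernel's mutex and UCSI state, not the real API:

```c
#include <assert.h>
#include <stdbool.h>

struct connector {
	bool locked;    /* stands in for con->lock */
	bool connected; /* stands in for UCSI_CONSTAT(con, CONNECTED) */
	bool partner;   /* stands in for con->partner */
};

static bool try_lock(struct connector *c)
{
	if (c->locked)
		return false;
	c->locked = true;
	return true;
}

static void unlock(struct connector *c)
{
	c->locked = false;
}

/* Returns true with the lock held, or false (lock released) on disconnect. */
static bool con_mutex_lock(struct connector *c)
{
	bool mutex_locked = false;
	bool connected = true;

	while (connected && !mutex_locked) {
		mutex_locked = try_lock(c);
		connected = c->connected;
		/* the kernel version msleep()s 20ms here before retrying */
	}

	/* partner may have gone away between the trylock and this check */
	connected = connected && c->partner;
	if (!connected && mutex_locked)
		unlock(c);

	return connected;
}
```

Callers such as the displayport.c hunks map a false return to -ENOTCONN instead of blocking indefinitely on a connector that is being torn down.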
+2
drivers/usb/typec/ucsi/ucsi.h
··· 94 94 void ucsi_unregister(struct ucsi *ucsi); 95 95 void *ucsi_get_drvdata(struct ucsi *ucsi); 96 96 void ucsi_set_drvdata(struct ucsi *ucsi, void *data); 97 + bool ucsi_con_mutex_lock(struct ucsi_connector *con); 98 + void ucsi_con_mutex_unlock(struct ucsi_connector *con); 97 99 98 100 void ucsi_connector_change(struct ucsi *ucsi, u8 num); 99 101
+6 -6
drivers/vfio/pci/vfio_pci_core.c
··· 1646 1646 { 1647 1647 struct vm_area_struct *vma = vmf->vma; 1648 1648 struct vfio_pci_core_device *vdev = vma->vm_private_data; 1649 - unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff; 1649 + unsigned long addr = vmf->address & ~((PAGE_SIZE << order) - 1); 1650 + unsigned long pgoff = (addr - vma->vm_start) >> PAGE_SHIFT; 1651 + unsigned long pfn = vma_to_pfn(vma) + pgoff; 1650 1652 vm_fault_t ret = VM_FAULT_SIGBUS; 1651 1653 1652 - pfn = vma_to_pfn(vma) + pgoff; 1653 - 1654 - if (order && (pfn & ((1 << order) - 1) || 1655 - vmf->address & ((PAGE_SIZE << order) - 1) || 1656 - vmf->address + (PAGE_SIZE << order) > vma->vm_end)) { 1654 + if (order && (addr < vma->vm_start || 1655 + addr + (PAGE_SIZE << order) > vma->vm_end || 1656 + pfn & ((1 << order) - 1))) { 1657 1657 ret = VM_FAULT_FALLBACK; 1658 1658 goto out; 1659 1659 }
+1
drivers/xen/swiotlb-xen.c
··· 217 217 * buffering it. 218 218 */ 219 219 if (dma_capable(dev, dev_addr, size, true) && 220 + !dma_kmalloc_needs_bounce(dev, size, dir) && 220 221 !range_straddles_page_boundary(phys, size) && 221 222 !xen_arch_need_swiotlb(dev, phys, dev_addr) && 222 223 !is_swiotlb_force_bounce(dev))
+2
drivers/xen/xenbus/xenbus.h
··· 77 77 struct xb_req_data { 78 78 struct list_head list; 79 79 wait_queue_head_t wq; 80 + struct kref kref; 80 81 struct xsd_sockmsg msg; 81 82 uint32_t caller_req_id; 82 83 enum xsd_sockmsg_type type; ··· 104 103 void xb_deinit_comms(void); 105 104 int xs_watch_msg(struct xs_watch_event *event); 106 105 void xs_request_exit(struct xb_req_data *req); 106 + void xs_free_req(struct kref *kref); 107 107 108 108 int xenbus_match(struct device *_dev, const struct device_driver *_drv); 109 109 int xenbus_dev_probe(struct device *_dev);
+4 -5
drivers/xen/xenbus/xenbus_comms.c
··· 309 309 virt_wmb(); 310 310 req->state = xb_req_state_got_reply; 311 311 req->cb(req); 312 - } else 313 - kfree(req); 312 + } 313 + kref_put(&req->kref, xs_free_req); 314 314 } 315 315 316 316 mutex_unlock(&xs_response_mutex); ··· 386 386 state.req->msg.type = XS_ERROR; 387 387 state.req->err = err; 388 388 list_del(&state.req->list); 389 - if (state.req->state == xb_req_state_aborted) 390 - kfree(state.req); 391 - else { 389 + if (state.req->state != xb_req_state_aborted) { 392 390 /* write err, then update state */ 393 391 virt_wmb(); 394 392 state.req->state = xb_req_state_got_reply; 395 393 wake_up(&state.req->wq); 396 394 } 395 + kref_put(&state.req->kref, xs_free_req); 397 396 398 397 mutex_unlock(&xb_write_mutex); 399 398
+1 -1
drivers/xen/xenbus/xenbus_dev_frontend.c
··· 406 406 mutex_unlock(&u->reply_mutex); 407 407 408 408 kfree(req->body); 409 - kfree(req); 409 + kref_put(&req->kref, xs_free_req); 410 410 411 411 kref_put(&u->kref, xenbus_file_free); 412 412
+8 -6
drivers/xen/xenbus/xenbus_probe.c
··· 966 966 if (xen_pv_domain()) 967 967 xen_store_domain_type = XS_PV; 968 968 if (xen_hvm_domain()) 969 + { 969 970 xen_store_domain_type = XS_HVM; 970 - if (xen_hvm_domain() && xen_initial_domain()) 971 - xen_store_domain_type = XS_LOCAL; 971 + err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v); 972 + if (err) 973 + goto out_error; 974 + xen_store_evtchn = (int)v; 975 + if (!v && xen_initial_domain()) 976 + xen_store_domain_type = XS_LOCAL; 977 + } 972 978 if (xen_pv_domain() && !xen_start_info->store_evtchn) 973 979 xen_store_domain_type = XS_LOCAL; 974 980 if (xen_pv_domain() && xen_start_info->store_evtchn) ··· 993 987 xen_store_interface = gfn_to_virt(xen_store_gfn); 994 988 break; 995 989 case XS_HVM: 996 - err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v); 997 - if (err) 998 - goto out_error; 999 - xen_store_evtchn = (int)v; 1000 990 err = hvm_get_parameter(HVM_PARAM_STORE_PFN, &v); 1001 991 if (err) 1002 992 goto out_error;
+16 -2
drivers/xen/xenbus/xenbus_xs.c
··· 112 112 wake_up_all(&xs_state_enter_wq); 113 113 } 114 114 115 + void xs_free_req(struct kref *kref) 116 + { 117 + struct xb_req_data *req = container_of(kref, struct xb_req_data, kref); 118 + kfree(req); 119 + } 120 + 115 121 static uint32_t xs_request_enter(struct xb_req_data *req) 116 122 { 117 123 uint32_t rq_id; ··· 243 237 req->caller_req_id = req->msg.req_id; 244 238 req->msg.req_id = xs_request_enter(req); 245 239 240 + /* 241 + * Take 2nd ref. One for this thread, and the second for the 242 + * xenbus_thread. 243 + */ 244 + kref_get(&req->kref); 245 + 246 246 mutex_lock(&xb_write_mutex); 247 247 list_add_tail(&req->list, &xb_write_list); 248 248 notify = list_is_singular(&xb_write_list); ··· 273 261 if (req->state == xb_req_state_queued || 274 262 req->state == xb_req_state_wait_reply) 275 263 req->state = xb_req_state_aborted; 276 - else 277 - kfree(req); 264 + 265 + kref_put(&req->kref, xs_free_req); 278 266 mutex_unlock(&xb_write_mutex); 279 267 280 268 return ret; ··· 303 291 req->cb = xenbus_dev_queue_reply; 304 292 req->par = par; 305 293 req->user_req = true; 294 + kref_init(&req->kref); 306 295 307 296 xs_send(req, msg); 308 297 ··· 332 319 req->num_vecs = num_vecs; 333 320 req->cb = xs_wake_up; 334 321 req->user_req = false; 322 + kref_init(&req->kref); 335 323 336 324 msg.req_id = 0; 337 325 msg.tx_id = t.id;
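The xenbus hunks above convert xb_req_data from ad-hoc kfree() calls to reference counting: one reference for the submitting thread, a second taken before queueing for the xenbus thread, and whichever side finishes last frees the request via xs_free_req(). A plain counter models struct kref in this sketch; the names are illustrative only:

```c
#include <assert.h>

struct req {
	int refs;
	int freed; /* set by the release callback so it can be observed */
};

static void req_release(struct req *r)
{
	r->freed = 1; /* models xs_free_req() -> kfree(req) */
}

/* models kref_init() in xs_talkv()/xenbus_dev_request_and_reply() */
static void req_init(struct req *r)
{
	r->refs = 1;
	r->freed = 0;
}

/* models the kref_get() taken before adding to xb_write_list */
static void req_get(struct req *r)
{
	r->refs++;
}

/* models kref_put(&req->kref, xs_free_req) */
static void req_put(struct req *r)
{
	if (--r->refs == 0)
		req_release(r);
}
```

The point of the fix: an aborted request can now be released safely from either xenbus_comms.c or xenbus_xs.c without double-free or use-after-free, because only the final put frees.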
+21 -1
fs/bcachefs/alloc_foreground.c
··· 1422 1422 1423 1423 wp->sectors_free = UINT_MAX; 1424 1424 1425 - open_bucket_for_each(c, &wp->ptrs, ob, i) 1425 + open_bucket_for_each(c, &wp->ptrs, ob, i) { 1426 + /* 1427 + * Ensure proper write alignment - either due to misaligned 1428 + * bucket sizes (from buggy bcachefs-tools), or writes that mix 1429 + * logical/physical alignment: 1430 + */ 1431 + struct bch_dev *ca = ob_dev(c, ob); 1432 + u64 offset = bucket_to_sector(ca, ob->bucket) + 1433 + ca->mi.bucket_size - 1434 + ob->sectors_free; 1435 + unsigned align = round_up(offset, block_sectors(c)) - offset; 1436 + 1437 + ob->sectors_free = max_t(int, 0, ob->sectors_free - align); 1438 + 1426 1439 wp->sectors_free = min(wp->sectors_free, ob->sectors_free); 1440 + } 1427 1441 1428 1442 wp->sectors_free = rounddown(wp->sectors_free, block_sectors(c)); 1443 + 1444 + /* Did alignment use up space in an open_bucket? */ 1445 + if (unlikely(!wp->sectors_free)) { 1446 + bch2_alloc_sectors_done(c, wp); 1447 + goto retry; 1448 + } 1429 1449 1430 1450 BUG_ON(!wp->sectors_free || wp->sectors_free == UINT_MAX); 1431 1451
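The bcachefs hunk above trims each open_bucket's free space so the next write starts block-aligned: compute the current write offset, round it up to the block boundary, and clamp sectors_free by the wasted gap (never below zero). The arithmetic, modelled standalone with round_up() re-implemented and made-up sample offsets and block sizes:

```c
#include <assert.h>

static unsigned long long round_up_ull(unsigned long long v,
				       unsigned long long m)
{
	return (v + m - 1) / m * m;
}

static unsigned long long aligned_sectors_free(unsigned long long offset,
					       unsigned long long sectors_free,
					       unsigned long long block_sectors)
{
	/* sectors skipped to reach the next block boundary */
	unsigned long long align = round_up_ull(offset, block_sectors) - offset;

	/* the max_t(int, 0, ...) clamp from the patch */
	return sectors_free > align ? sectors_free - align : 0;
}
```

When the clamp leaves a write point with no free sectors at all, the patch retries allocation rather than tripping the BUG_ON(!wp->sectors_free) just below it.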
+8 -1
fs/bcachefs/btree_io.c
··· 41 41 42 42 clear_btree_node_write_in_flight_inner(b); 43 43 clear_btree_node_write_in_flight(b); 44 + smp_mb__after_atomic(); 44 45 wake_up_bit(&b->flags, BTREE_NODE_write_in_flight); 45 46 } 46 47 ··· 1401 1400 1402 1401 printbuf_exit(&buf); 1403 1402 clear_btree_node_read_in_flight(b); 1403 + smp_mb__after_atomic(); 1404 1404 wake_up_bit(&b->flags, BTREE_NODE_read_in_flight); 1405 1405 } 1406 1406 ··· 1597 1595 printbuf_exit(&buf); 1598 1596 1599 1597 clear_btree_node_read_in_flight(b); 1598 + smp_mb__after_atomic(); 1600 1599 wake_up_bit(&b->flags, BTREE_NODE_read_in_flight); 1601 1600 } 1602 1601 ··· 1724 1721 set_btree_node_read_error(b); 1725 1722 bch2_btree_lost_data(c, b->c.btree_id); 1726 1723 clear_btree_node_read_in_flight(b); 1724 + smp_mb__after_atomic(); 1727 1725 wake_up_bit(&b->flags, BTREE_NODE_read_in_flight); 1728 1726 printbuf_exit(&buf); 1729 1727 return; ··· 2065 2061 2066 2062 if (new & (1U << BTREE_NODE_write_in_flight)) 2067 2063 __bch2_btree_node_write(c, b, BTREE_WRITE_ALREADY_STARTED|type); 2068 - else 2064 + else { 2065 + smp_mb__after_atomic(); 2069 2066 wake_up_bit(&b->flags, BTREE_NODE_write_in_flight); 2067 + } 2070 2068 } 2071 2069 2072 2070 static void btree_node_write_done(struct bch_fs *c, struct btree *b, u64 start_time) ··· 2181 2175 } 2182 2176 2183 2177 clear_btree_node_write_in_flight_inner(b); 2178 + smp_mb__after_atomic(); 2184 2179 wake_up_bit(&b->flags, BTREE_NODE_write_in_flight_inner); 2185 2180 INIT_WORK(&wb->work, btree_node_write_work); 2186 2181 queue_work(c->btree_io_complete_wq, &wb->work);
+1
fs/bcachefs/buckets.h
··· 44 44 BUILD_BUG_ON(!((union ulong_byte_assert) { .ulong = 1UL << BUCKET_LOCK_BITNR }).byte); 45 45 46 46 clear_bit_unlock(BUCKET_LOCK_BITNR, (void *) &b->lock); 47 + smp_mb__after_atomic(); 47 48 wake_up_bit((void *) &b->lock, BUCKET_LOCK_BITNR); 48 49 } 49 50
+1
fs/bcachefs/ec.h
··· 160 160 BUILD_BUG_ON(!((union ulong_byte_assert) { .ulong = 1UL << BUCKET_LOCK_BITNR }).byte); 161 161 162 162 clear_bit_unlock(BUCKET_LOCK_BITNR, (void *) &s->lock); 163 + smp_mb__after_atomic(); 163 164 wake_up_bit((void *) &s->lock, BUCKET_LOCK_BITNR); 164 165 } 165 166
+1 -1
fs/bcachefs/errcode.h
··· 269 269 x(BCH_ERR_invalid_sb, invalid_sb_downgrade) \ 270 270 x(BCH_ERR_invalid, invalid_bkey) \ 271 271 x(BCH_ERR_operation_blocked, nocow_lock_blocked) \ 272 - x(EIO, journal_shutdown) \ 272 + x(EROFS, journal_shutdown) \ 273 273 x(EIO, journal_flush_err) \ 274 274 x(EIO, journal_write_err) \ 275 275 x(EIO, btree_node_read_err) \
+3 -2
fs/bcachefs/extents.c
··· 1056 1056 static bool want_cached_ptr(struct bch_fs *c, struct bch_io_opts *opts, 1057 1057 struct bch_extent_ptr *ptr) 1058 1058 { 1059 - if (!opts->promote_target || 1060 - !bch2_dev_in_target(c, ptr->dev, opts->promote_target)) 1059 + unsigned target = opts->promote_target ?: opts->foreground_target; 1060 + 1061 + if (target && !bch2_dev_in_target(c, ptr->dev, target)) 1061 1062 return false; 1062 1063 1063 1064 struct bch_dev *ca = bch2_dev_rcu_noerror(c, ptr->dev);
+3 -8
fs/bcachefs/fs.c
··· 2502 2502 2503 2503 bch2_opts_apply(&c->opts, opts); 2504 2504 2505 - /* 2506 - * need to initialise sb and set c->vfs_sb _before_ starting fs, 2507 - * for blk_holder_ops 2508 - */ 2505 + ret = bch2_fs_start(c); 2506 + if (ret) 2507 + goto err_stop_fs; 2509 2508 2510 2509 sb = sget(fc->fs_type, NULL, bch2_set_super, fc->sb_flags|SB_NOSEC, c); 2511 2510 ret = PTR_ERR_OR_ZERO(sb); ··· 2565 2566 #endif 2566 2567 2567 2568 sb->s_shrink->seeks = 0; 2568 - 2569 - ret = bch2_fs_start(c); 2570 - if (ret) 2571 - goto err_put_super; 2572 2569 2573 2570 #ifdef CONFIG_UNICODE 2574 2571 sb->s_encoding = c->cf_encoding;
+3 -1
fs/bcachefs/journal_io.c
··· 19 19 20 20 #include <linux/ioprio.h> 21 21 #include <linux/string_choices.h> 22 + #include <linux/sched/sysctl.h> 22 23 23 24 void bch2_journal_pos_from_member_info_set(struct bch_fs *c) 24 25 { ··· 1263 1262 degraded = true; 1264 1263 } 1265 1264 1266 - closure_sync(&jlist.cl); 1265 + while (closure_sync_timeout(&jlist.cl, sysctl_hung_task_timeout_secs * HZ / 2)) 1266 + ; 1267 1267 1268 1268 if (jlist.ret) 1269 1269 return jlist.ret;
+4 -3
fs/bcachefs/journal_reclaim.c
··· 266 266 267 267 static bool should_discard_bucket(struct journal *j, struct journal_device *ja) 268 268 { 269 - bool ret; 270 - 271 269 spin_lock(&j->lock); 272 - ret = ja->discard_idx != ja->dirty_idx_ondisk; 270 + unsigned min_free = max(4, ja->nr / 8); 271 + 272 + bool ret = bch2_journal_dev_buckets_available(j, ja, journal_space_discarded) < min_free && 273 + ja->discard_idx != ja->dirty_idx_ondisk; 273 274 spin_unlock(&j->lock); 274 275 275 276 return ret;
+2 -1
fs/bcachefs/move.c
··· 784 784 goto err; 785 785 786 786 ret = bch2_btree_write_buffer_tryflush(trans); 787 - bch_err_msg(c, ret, "flushing btree write buffer"); 787 + if (!bch2_err_matches(ret, EROFS)) 788 + bch_err_msg(c, ret, "flushing btree write buffer"); 788 789 if (ret) 789 790 goto err; 790 791
+5
fs/bcachefs/super.c
··· 377 377 bch_verbose(c, "marking filesystem clean"); 378 378 bch2_fs_mark_clean(c); 379 379 } else { 380 + /* Make sure error counts/counters are persisted */ 381 + mutex_lock(&c->sb_lock); 382 + bch2_write_super(c); 383 + mutex_unlock(&c->sb_lock); 384 + 380 385 bch_verbose(c, "done going read-only, filesystem not clean"); 381 386 } 382 387 }
+3 -1
fs/bcachefs/thread_with_file.c
··· 455 455 struct stdio_buf *buf = &stdio->output; 456 456 unsigned long flags; 457 457 ssize_t ret; 458 - 459 458 again: 459 + if (stdio->done) 460 + return -EPIPE; 461 + 460 462 spin_lock_irqsave(&buf->lock, flags); 461 463 ret = bch2_darray_vprintf(&buf->buf, GFP_NOWAIT, fmt, args); 462 464 spin_unlock_irqrestore(&buf->lock, flags);
+1 -1
fs/btrfs/compression.c
··· 606 606 free_extent_map(em); 607 607 608 608 cb->nr_folios = DIV_ROUND_UP(compressed_len, PAGE_SIZE); 609 - cb->compressed_folios = kcalloc(cb->nr_folios, sizeof(struct page *), GFP_NOFS); 609 + cb->compressed_folios = kcalloc(cb->nr_folios, sizeof(struct folio *), GFP_NOFS); 610 610 if (!cb->compressed_folios) { 611 611 ret = BLK_STS_RESOURCE; 612 612 goto out_free_bio;
+2 -2
fs/btrfs/extent_io.c
··· 3508 3508 ASSERT(folio_test_locked(folio)); 3509 3509 xa_lock_irq(&folio->mapping->i_pages); 3510 3510 if (!folio_test_dirty(folio)) 3511 - __xa_clear_mark(&folio->mapping->i_pages, 3512 - folio_index(folio), PAGECACHE_TAG_DIRTY); 3511 + __xa_clear_mark(&folio->mapping->i_pages, folio->index, 3512 + PAGECACHE_TAG_DIRTY); 3513 3513 xa_unlock_irq(&folio->mapping->i_pages); 3514 3514 } 3515 3515
+2
fs/btrfs/extent_io.h
··· 298 298 */ 299 299 static inline int __pure num_extent_folios(const struct extent_buffer *eb) 300 300 { 301 + if (!eb->folios[0]) 302 + return 0; 301 303 if (folio_order(eb->folios[0])) 302 304 return 1; 303 305 return num_extent_pages(eb);
+2 -2
fs/btrfs/scrub.c
··· 1541 1541 u64 extent_gen; 1542 1542 int ret; 1543 1543 1544 - if (unlikely(!extent_root)) { 1545 - btrfs_err(fs_info, "no valid extent root for scrub"); 1544 + if (unlikely(!extent_root || !csum_root)) { 1545 + btrfs_err(fs_info, "no valid extent or csum root for scrub"); 1546 1546 return -EUCLEAN; 1547 1547 } 1548 1548 memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
+1 -90
fs/btrfs/volumes.c
··· 733 733 return has_metadata_uuid ? sb->metadata_uuid : sb->fsid; 734 734 } 735 735 736 - /* 737 - * We can have very weird soft links passed in. 738 - * One example is "/proc/self/fd/<fd>", which can be a soft link to 739 - * a block device. 740 - * 741 - * But it's never a good idea to use those weird names. 742 - * Here we check if the path (not following symlinks) is a good one inside 743 - * "/dev/". 744 - */ 745 - static bool is_good_dev_path(const char *dev_path) 746 - { 747 - struct path path = { .mnt = NULL, .dentry = NULL }; 748 - char *path_buf = NULL; 749 - char *resolved_path; 750 - bool is_good = false; 751 - int ret; 752 - 753 - if (!dev_path) 754 - goto out; 755 - 756 - path_buf = kmalloc(PATH_MAX, GFP_KERNEL); 757 - if (!path_buf) 758 - goto out; 759 - 760 - /* 761 - * Do not follow soft link, just check if the original path is inside 762 - * "/dev/". 763 - */ 764 - ret = kern_path(dev_path, 0, &path); 765 - if (ret) 766 - goto out; 767 - resolved_path = d_path(&path, path_buf, PATH_MAX); 768 - if (IS_ERR(resolved_path)) 769 - goto out; 770 - if (strncmp(resolved_path, "/dev/", strlen("/dev/"))) 771 - goto out; 772 - is_good = true; 773 - out: 774 - kfree(path_buf); 775 - path_put(&path); 776 - return is_good; 777 - } 778 - 779 - static int get_canonical_dev_path(const char *dev_path, char *canonical) 780 - { 781 - struct path path = { .mnt = NULL, .dentry = NULL }; 782 - char *path_buf = NULL; 783 - char *resolved_path; 784 - int ret; 785 - 786 - if (!dev_path) { 787 - ret = -EINVAL; 788 - goto out; 789 - } 790 - 791 - path_buf = kmalloc(PATH_MAX, GFP_KERNEL); 792 - if (!path_buf) { 793 - ret = -ENOMEM; 794 - goto out; 795 - } 796 - 797 - ret = kern_path(dev_path, LOOKUP_FOLLOW, &path); 798 - if (ret) 799 - goto out; 800 - resolved_path = d_path(&path, path_buf, PATH_MAX); 801 - if (IS_ERR(resolved_path)) { 802 - ret = PTR_ERR(resolved_path); 803 - goto out; 804 - } 805 - ret = strscpy(canonical, resolved_path, PATH_MAX); 806 - out: 807 - kfree(path_buf); 808 - path_put(&path); 809 - return ret; 810 - } 811 - 812 736 static bool is_same_device(struct btrfs_device *device, const char *new_path) 813 737 { 814 738 struct path old = { .mnt = NULL, .dentry = NULL }; ··· 1437 1513 bool new_device_added = false; 1438 1514 struct btrfs_device *device = NULL; 1439 1515 struct file *bdev_file; 1440 - char *canonical_path = NULL; 1441 1516 u64 bytenr; 1442 1517 dev_t devt; 1443 1518 int ret; 1444 1519 1445 1520 lockdep_assert_held(&uuid_mutex); 1446 1521 1447 - if (!is_good_dev_path(path)) { 1448 - canonical_path = kmalloc(PATH_MAX, GFP_KERNEL); 1449 - if (canonical_path) { 1450 - ret = get_canonical_dev_path(path, canonical_path); 1451 - if (ret < 0) { 1452 - kfree(canonical_path); 1453 - canonical_path = NULL; 1454 - } 1455 - } 1456 - } 1457 1522 /* 1458 1523 * Avoid an exclusive open here, as the systemd-udev may initiate the 1459 1524 * device scan which may race with the user's mount or mkfs command, ··· 1487 1574 goto free_disk_super; 1488 1575 } 1489 1576 1490 - device = device_list_add(canonical_path ? : path, disk_super, 1491 - &new_device_added); 1577 + device = device_list_add(path, disk_super, &new_device_added); 1492 1578 if (!IS_ERR(device) && new_device_added) 1493 1579 btrfs_free_stale_devices(device->devt, device); 1494 1580 ··· 1496 1584 1497 1585 error_bdev_put: 1498 1586 fput(bdev_file); 1499 - kfree(canonical_path); 1500 1587 1501 1588 return device; 1502 1589 }
+2 -2
fs/erofs/fileio.c
··· 150 150 io->rq->bio.bi_iter.bi_sector = io->dev.m_pa >> 9; 151 151 attached = 0; 152 152 } 153 - if (!attached++) 154 - erofs_onlinefolio_split(folio); 155 153 if (!bio_add_folio(&io->rq->bio, folio, len, cur)) 156 154 goto io_retry; 155 + if (!attached++) 156 + erofs_onlinefolio_split(folio); 157 157 io->dev.m_pa += len; 158 158 } 159 159 cur += len;
-1
fs/erofs/super.c
··· 357 357 enum { 358 358 Opt_user_xattr, Opt_acl, Opt_cache_strategy, Opt_dax, Opt_dax_enum, 359 359 Opt_device, Opt_fsid, Opt_domain_id, Opt_directio, 360 - Opt_err 361 360 }; 362 361 363 362 static const struct constant_table erofs_param_cache_strategy[] = {
+13 -16
fs/erofs/zdata.c
··· 79 79 /* L: whether partial decompression or not */ 80 80 bool partial; 81 81 82 - /* L: indicate several pageofs_outs or not */ 83 - bool multibases; 84 - 85 82 /* L: whether extra buffer allocations are best-effort */ 86 83 bool besteffort; 87 84 ··· 1043 1046 break; 1044 1047 1045 1048 erofs_onlinefolio_split(folio); 1046 - if (f->pcl->pageofs_out != (map->m_la & ~PAGE_MASK)) 1047 - f->pcl->multibases = true; 1048 1049 if (f->pcl->length < offset + end - map->m_la) { 1049 1050 f->pcl->length = offset + end - map->m_la; 1050 1051 f->pcl->pageofs_out = map->m_la & ~PAGE_MASK; ··· 1088 1093 struct page *onstack_pages[Z_EROFS_ONSTACK_PAGES]; 1089 1094 struct super_block *sb; 1090 1095 struct z_erofs_pcluster *pcl; 1091 - 1092 1096 /* pages with the longest decompressed length for deduplication */ 1093 1097 struct page **decompressed_pages; 1094 1098 /* pages to keep the compressed data */ ··· 1096 1102 struct list_head decompressed_secondary_bvecs; 1097 1103 struct page **pagepool; 1098 1104 unsigned int onstack_used, nr_pages; 1105 + /* indicate if temporary copies should be preserved for later use */ 1106 + bool keepxcpy; 1099 1107 }; 1100 1108 1101 1109 struct z_erofs_bvec_item { ··· 1108 1112 static void z_erofs_do_decompressed_bvec(struct z_erofs_backend *be, 1109 1113 struct z_erofs_bvec *bvec) 1110 1114 { 1115 + int poff = bvec->offset + be->pcl->pageofs_out; 1111 1116 struct z_erofs_bvec_item *item; 1112 - unsigned int pgnr; 1117 + struct page **page; 1113 1118 1114 - if (!((bvec->offset + be->pcl->pageofs_out) & ~PAGE_MASK) && 1115 - (bvec->end == PAGE_SIZE || 1116 - bvec->offset + bvec->end == be->pcl->length)) { 1117 - pgnr = (bvec->offset + be->pcl->pageofs_out) >> PAGE_SHIFT; 1118 - DBG_BUGON(pgnr >= be->nr_pages); 1119 - if (!be->decompressed_pages[pgnr]) { 1120 - be->decompressed_pages[pgnr] = bvec->page; 1119 + if (!(poff & ~PAGE_MASK) && (bvec->end == PAGE_SIZE || 1120 + bvec->offset + bvec->end == be->pcl->length)) { 1121 + DBG_BUGON((poff >> PAGE_SHIFT) >= be->nr_pages); 1122 + page = be->decompressed_pages + (poff >> PAGE_SHIFT); 1123 + if (!*page) { 1124 + *page = bvec->page; 1121 1125 return; 1122 1126 } 1127 + } else { 1128 + be->keepxcpy = true; 1123 1129 } 1124 1130 1125 1131 /* (cold path) one pcluster is requested multiple times */ ··· 1287 1289 .alg = pcl->algorithmformat, 1288 1290 .inplace_io = overlapped, 1289 1291 .partial_decoding = pcl->partial, 1290 - .fillgaps = pcl->multibases, 1292 + .fillgaps = be->keepxcpy, 1291 1293 .gfp = pcl->besteffort ? GFP_KERNEL : 1292 1294 GFP_NOWAIT | __GFP_NORETRY 1293 1295 }, be->pagepool); ··· 1344 1346 1345 1347 pcl->length = 0; 1346 1348 pcl->partial = true; 1347 - pcl->multibases = false; 1348 1349 pcl->besteffort = false; 1349 1350 pcl->bvset.nextpage = NULL; 1350 1351 pcl->vcnt = 0;
+7 -10
fs/namespace.c
··· 787 787 return 0; 788 788 mnt = real_mount(bastard); 789 789 mnt_add_count(mnt, 1); 790 - smp_mb(); // see mntput_no_expire() 790 + smp_mb(); // see mntput_no_expire() and do_umount() 791 791 if (likely(!read_seqretry(&mount_lock, seq))) 792 792 return 0; 793 - if (bastard->mnt_flags & MNT_SYNC_UMOUNT) { 794 - mnt_add_count(mnt, -1); 795 - return 1; 796 - } 797 793 lock_mount_hash(); 798 - if (unlikely(bastard->mnt_flags & MNT_DOOMED)) { 794 + if (unlikely(bastard->mnt_flags & (MNT_SYNC_UMOUNT | MNT_DOOMED))) { 799 795 mnt_add_count(mnt, -1); 800 796 unlock_mount_hash(); 801 797 return 1; ··· 2044 2048 umount_tree(mnt, UMOUNT_PROPAGATE); 2045 2049 retval = 0; 2046 2050 } else { 2051 + smp_mb(); // paired with __legitimize_mnt() 2047 2052 shrink_submounts(mnt); 2048 2053 retval = -EBUSY; 2049 2054 if (!propagate_mount_busy(mnt, 2)) { ··· 3557 3560 * @mnt_from itself. This defeats the whole purpose of mounting 3558 3561 * @mnt_from beneath @mnt_to. 3559 3562 */ 3560 - if (propagation_would_overmount(parent_mnt_to, mnt_from, mp)) 3563 + if (check_mnt(mnt_from) && 3564 + propagation_would_overmount(parent_mnt_to, mnt_from, mp)) 3561 3565 return -EINVAL; 3562 3566 3563 3567 return 0; ··· 3716 3718 if (err) 3717 3719 goto out; 3718 3720 3719 - if (is_anon_ns(ns)) 3720 - ns->mntns_flags &= ~MNTNS_PROPAGATING; 3721 - 3722 3721 /* if the mount is moved, it should no longer be expire 3723 3722 * automatically */ 3724 3723 list_del_init(&old->mnt_expire); 3725 3724 if (attached) 3726 3725 put_mountpoint(old_mp); 3727 3726 out: 3727 + if (is_anon_ns(ns)) 3728 + ns->mntns_flags &= ~MNTNS_PROPAGATING; 3728 3729 unlock_mount(mp); 3729 3730 if (!err) { 3730 3731 if (attached) {
-3
fs/nilfs2/the_nilfs.c
··· 705 705 int blocksize; 706 706 int err; 707 707 708 - down_write(&nilfs->ns_sem); 709 - 710 708 blocksize = sb_min_blocksize(sb, NILFS_MIN_BLOCK_SIZE); 711 709 if (!blocksize) { 712 710 nilfs_err(sb, "unable to set blocksize"); ··· 777 779 set_nilfs_init(nilfs); 778 780 err = 0; 779 781 out: 780 - up_write(&nilfs->ns_sem); 781 782 return err; 782 783 783 784 failed_sbh:
+1
fs/ocfs2/alloc.c
··· 6918 6918 if (IS_ERR(folios[numfolios])) { 6919 6919 ret = PTR_ERR(folios[numfolios]); 6920 6920 mlog_errno(ret); 6921 + folios[numfolios] = NULL; 6921 6922 goto out; 6922 6923 } 6923 6924
+58 -22
fs/ocfs2/journal.c
··· 174 174 struct ocfs2_recovery_map *rm; 175 175 176 176 mutex_init(&osb->recovery_lock); 177 - osb->disable_recovery = 0; 177 + osb->recovery_state = OCFS2_REC_ENABLED; 178 178 osb->recovery_thread_task = NULL; 179 179 init_waitqueue_head(&osb->recovery_event); 180 180 ··· 190 190 return 0; 191 191 } 192 192 193 - /* we can't grab the goofy sem lock from inside wait_event, so we use 194 - * memory barriers to make sure that we'll see the null task before 195 - * being woken up */ 196 193 static int ocfs2_recovery_thread_running(struct ocfs2_super *osb) 197 194 { 198 - mb(); 199 195 return osb->recovery_thread_task != NULL; 196 + } 197 + 198 + static void ocfs2_recovery_disable(struct ocfs2_super *osb, 199 + enum ocfs2_recovery_state state) 200 + { 201 + mutex_lock(&osb->recovery_lock); 202 + /* 203 + * If recovery thread is not running, we can directly transition to 204 + * final state. 205 + */ 206 + if (!ocfs2_recovery_thread_running(osb)) { 207 + osb->recovery_state = state + 1; 208 + goto out_lock; 209 + } 210 + osb->recovery_state = state; 211 + /* Wait for recovery thread to acknowledge state transition */ 212 + wait_event_cmd(osb->recovery_event, 213 + !ocfs2_recovery_thread_running(osb) || 214 + osb->recovery_state >= state + 1, 215 + mutex_unlock(&osb->recovery_lock), 216 + mutex_lock(&osb->recovery_lock)); 217 + out_lock: 218 + mutex_unlock(&osb->recovery_lock); 219 + 220 + /* 221 + * At this point we know that no more recovery work can be queued so 222 + * wait for any recovery completion work to complete. 223 + */ 224 + if (osb->ocfs2_wq) 225 + flush_workqueue(osb->ocfs2_wq); 226 + } 227 + 228 + void ocfs2_recovery_disable_quota(struct ocfs2_super *osb) 229 + { 230 + ocfs2_recovery_disable(osb, OCFS2_REC_QUOTA_WANT_DISABLE); 200 231 } 201 232 202 233 void ocfs2_recovery_exit(struct ocfs2_super *osb) ··· 236 205 237 206 /* disable any new recovery threads and wait for any currently 238 207 * running ones to exit. Do this before setting the vol_state. */ 239 - mutex_lock(&osb->recovery_lock); 240 - osb->disable_recovery = 1; 241 - mutex_unlock(&osb->recovery_lock); 242 - wait_event(osb->recovery_event, !ocfs2_recovery_thread_running(osb)); 243 - 244 - /* At this point, we know that no more recovery threads can be 245 - * launched, so wait for any recovery completion work to 246 - * complete. */ 247 - if (osb->ocfs2_wq) 248 - flush_workqueue(osb->ocfs2_wq); 208 + ocfs2_recovery_disable(osb, OCFS2_REC_WANT_DISABLE); 249 209 250 210 /* 251 211 * Now that recovery is shut down, and the osb is about to be ··· 1494 1472 } 1495 1473 } 1496 1474 restart: 1475 + if (quota_enabled) { 1476 + mutex_lock(&osb->recovery_lock); 1477 + /* Confirm that recovery thread will no longer recover quotas */ 1478 + if (osb->recovery_state == OCFS2_REC_QUOTA_WANT_DISABLE) { 1479 + osb->recovery_state = OCFS2_REC_QUOTA_DISABLED; 1480 + wake_up(&osb->recovery_event); 1481 + } 1482 + if (osb->recovery_state >= OCFS2_REC_QUOTA_DISABLED) 1483 + quota_enabled = 0; 1484 + mutex_unlock(&osb->recovery_lock); 1485 + } 1486 + 1497 1487 status = ocfs2_super_lock(osb, 1); 1498 1488 if (status < 0) { 1499 1489 mlog_errno(status); ··· 1603 1569 1604 1570 ocfs2_free_replay_slots(osb); 1605 1571 osb->recovery_thread_task = NULL; 1606 - mb(); /* sync with ocfs2_recovery_thread_running */ 1572 + if (osb->recovery_state == OCFS2_REC_WANT_DISABLE) 1573 + osb->recovery_state = OCFS2_REC_DISABLED; 1607 1574 wake_up(&osb->recovery_event); 1608 1575 1609 1576 mutex_unlock(&osb->recovery_lock); 1610 1577 1611 - if (quota_enabled) 1612 - kfree(rm_quota); 1578 + kfree(rm_quota); 1613 1579 1614 1580 return status; 1615 1581 } 1616 1582 1617 1583 void ocfs2_recovery_thread(struct ocfs2_super *osb, int node_num) 1618 1584 { 1585 + int was_set = -1; 1586 + 1619 1587 mutex_lock(&osb->recovery_lock); 1588 + if (osb->recovery_state < OCFS2_REC_WANT_DISABLE) 1589 + was_set = ocfs2_recovery_map_set(osb, node_num); 1620 1590 1621 1591 trace_ocfs2_recovery_thread(node_num, osb->node_num, 1622 - osb->disable_recovery, osb->recovery_thread_task, 1623 - osb->disable_recovery ? 1624 - -1 : ocfs2_recovery_map_set(osb, node_num)); 1592 + osb->recovery_state, osb->recovery_thread_task, was_set); 1625 - 1626 - if (osb->disable_recovery) 1594 + if (osb->recovery_state >= OCFS2_REC_WANT_DISABLE) 1627 1595 goto out; 1628 1596 1629 1597 if (osb->recovery_thread_task)
+1
fs/ocfs2/journal.h
··· 148 148 149 149 int ocfs2_recovery_init(struct ocfs2_super *osb); 150 150 void ocfs2_recovery_exit(struct ocfs2_super *osb); 151 + void ocfs2_recovery_disable_quota(struct ocfs2_super *osb); 151 152 152 153 int ocfs2_compute_replay_slots(struct ocfs2_super *osb); 153 154 void ocfs2_free_replay_slots(struct ocfs2_super *osb);
+16 -1
fs/ocfs2/ocfs2.h
··· 308 308 void ocfs2_initialize_journal_triggers(struct super_block *sb, 309 309 struct ocfs2_triggers triggers[]); 310 310 311 + enum ocfs2_recovery_state { 312 + OCFS2_REC_ENABLED = 0, 313 + OCFS2_REC_QUOTA_WANT_DISABLE, 314 + /* 315 + * Must be OCFS2_REC_QUOTA_WANT_DISABLE + 1 for 316 + * ocfs2_recovery_disable_quota() to work. 317 + */ 318 + OCFS2_REC_QUOTA_DISABLED, 319 + OCFS2_REC_WANT_DISABLE, 320 + /* 321 + * Must be OCFS2_REC_WANT_DISABLE + 1 for ocfs2_recovery_exit() to work 322 + */ 323 + OCFS2_REC_DISABLED, 324 + }; 325 + 311 326 struct ocfs2_journal; 312 327 struct ocfs2_slot_info; 313 328 struct ocfs2_recovery_map; ··· 385 370 struct ocfs2_recovery_map *recovery_map; 386 371 struct ocfs2_replay_map *replay_map; 387 372 struct task_struct *recovery_thread_task; 388 - int disable_recovery; 373 + enum ocfs2_recovery_state recovery_state; 389 374 wait_queue_head_t checkpoint_event; 390 375 struct ocfs2_journal *journal; 391 376 unsigned long osb_commit_interval;
+2 -7
fs/ocfs2/quota_local.c
··· 453 453 454 454 /* Sync changes in local quota file into global quota file and 455 455 * reinitialize local quota file. 456 - * The function expects local quota file to be already locked and 457 - * s_umount locked in shared mode. */ 456 + * The function expects local quota file to be already locked. */ 458 457 static int ocfs2_recover_local_quota_file(struct inode *lqinode, 459 458 int type, 460 459 struct ocfs2_quota_recovery *rec) ··· 587 588 { 588 589 unsigned int ino[OCFS2_MAXQUOTAS] = { LOCAL_USER_QUOTA_SYSTEM_INODE, 589 590 LOCAL_GROUP_QUOTA_SYSTEM_INODE }; 590 - struct super_block *sb = osb->sb; 591 591 struct ocfs2_local_disk_dqinfo *ldinfo; 592 592 struct buffer_head *bh; 593 593 handle_t *handle; ··· 598 600 printk(KERN_NOTICE "ocfs2: Finishing quota recovery on device (%s) for " 599 601 "slot %u\n", osb->dev_str, slot_num); 600 602 601 - down_read(&sb->s_umount); 602 603 for (type = 0; type < OCFS2_MAXQUOTAS; type++) { 603 604 if (list_empty(&(rec->r_list[type]))) 604 605 continue; ··· 674 677 break; 675 678 } 676 679 out: 677 - up_read(&sb->s_umount); 678 680 kfree(rec); 679 681 return status; 680 682 } ··· 839 843 ocfs2_release_local_quota_bitmaps(&oinfo->dqi_chunk); 840 844 841 845 /* 842 - * s_umount held in exclusive mode protects us against racing with 843 - * recovery thread... 846 + * ocfs2_dismount_volume() has already aborted quota recovery... 844 847 */ 845 848 if (oinfo->dqi_rec) { 846 849 ocfs2_free_quota_recovery(oinfo->dqi_rec);
+32 -6
fs/ocfs2/suballoc.c
··· 698 698 699 699 bg_bh = ocfs2_block_group_alloc_contig(osb, handle, alloc_inode, 700 700 ac, cl); 701 - if (PTR_ERR(bg_bh) == -ENOSPC) 701 + if (PTR_ERR(bg_bh) == -ENOSPC) { 702 + ac->ac_which = OCFS2_AC_USE_MAIN_DISCONTIG; 702 703 bg_bh = ocfs2_block_group_alloc_discontig(handle, 703 704 alloc_inode, 704 705 ac, cl); 706 + } 705 707 if (IS_ERR(bg_bh)) { 706 708 status = PTR_ERR(bg_bh); 707 709 bg_bh = NULL; ··· 1796 1794 { 1797 1795 int status; 1798 1796 u16 chain; 1797 + u32 contig_bits; 1799 1798 u64 next_group; 1800 1799 struct inode *alloc_inode = ac->ac_inode; 1801 1800 struct buffer_head *group_bh = NULL; ··· 1822 1819 status = -ENOSPC; 1823 1820 /* for now, the chain search is a bit simplistic. We just use 1824 1821 * the 1st group with any empty bits. */ 1825 - while ((status = ac->ac_group_search(alloc_inode, group_bh, 1826 - bits_wanted, min_bits, 1827 - ac->ac_max_block, 1828 - res)) == -ENOSPC) { 1822 + while (1) { 1823 + if (ac->ac_which == OCFS2_AC_USE_MAIN_DISCONTIG) { 1824 + contig_bits = le16_to_cpu(bg->bg_contig_free_bits); 1825 + if (!contig_bits) 1826 + contig_bits = ocfs2_find_max_contig_free_bits(bg->bg_bitmap, 1827 + le16_to_cpu(bg->bg_bits), 0); 1828 + if (bits_wanted > contig_bits && contig_bits >= min_bits) 1829 + bits_wanted = contig_bits; 1830 + } 1831 + 1832 + status = ac->ac_group_search(alloc_inode, group_bh, 1833 + bits_wanted, min_bits, 1834 + ac->ac_max_block, res); 1835 + if (status != -ENOSPC) 1836 + break; 1829 1837 if (!bg->bg_next_group) 1830 1838 break; 1831 1839 ··· 1996 1982 victim = ocfs2_find_victim_chain(cl); 1997 1983 ac->ac_chain = victim; 1998 1984 1985 + search: 1999 1986 status = ocfs2_search_chain(ac, handle, bits_wanted, min_bits, 2000 1987 res, &bits_left); 2001 1988 if (!status) { ··· 2035 2020 mlog_errno(status); 2036 2021 goto bail; 2037 2022 } 2023 + } 2024 + 2025 + /* Chains can't supply the bits_wanted contiguous space. 
2026 + * We should switch to using every single bit when allocating 2027 + * from the global bitmap. */ 2028 + if (i == le16_to_cpu(cl->cl_next_free_rec) && 2029 + status == -ENOSPC && ac->ac_which == OCFS2_AC_USE_MAIN) { 2030 + ac->ac_which = OCFS2_AC_USE_MAIN_DISCONTIG; 2031 + ac->ac_chain = victim; 2032 + goto search; 2038 2033 } 2039 2034 2040 2035 set_hint: ··· 2390 2365 BUG_ON(ac->ac_bits_given >= ac->ac_bits_wanted); 2391 2366 2392 2367 BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL 2393 - && ac->ac_which != OCFS2_AC_USE_MAIN); 2368 + && ac->ac_which != OCFS2_AC_USE_MAIN 2369 + && ac->ac_which != OCFS2_AC_USE_MAIN_DISCONTIG); 2394 2370 2395 2371 if (ac->ac_which == OCFS2_AC_USE_LOCAL) { 2396 2372 WARN_ON(min_clusters > 1);
+1
fs/ocfs2/suballoc.h
··· 29 29 #define OCFS2_AC_USE_MAIN 2 30 30 #define OCFS2_AC_USE_INODE 3 31 31 #define OCFS2_AC_USE_META 4 32 + #define OCFS2_AC_USE_MAIN_DISCONTIG 5 32 33 u32 ac_which; 33 34 34 35 /* these are used by the chain search */
+3
fs/ocfs2/super.c
··· 1812 1812 /* Orphan scan should be stopped as early as possible */ 1813 1813 ocfs2_orphan_scan_stop(osb); 1814 1814 1815 + /* Stop quota recovery so that we can disable quotas */ 1816 + ocfs2_recovery_disable_quota(osb); 1817 + 1815 1818 ocfs2_disable_quotas(osb); 1816 1819 1817 1820 /* All dquots should be freed by now */
+9 -8
fs/pnode.c
··· 150 150 struct mount *origin) 151 151 { 152 152 /* are there any slaves of this mount? */ 153 - if (!IS_MNT_PROPAGATED(m) && !list_empty(&m->mnt_slave_list)) 153 + if (!IS_MNT_NEW(m) && !list_empty(&m->mnt_slave_list)) 154 154 return first_slave(m); 155 155 156 156 while (1) { ··· 174 174 * Advance m such that propagation_next will not return 175 175 * the slaves of m. 176 176 */ 177 - if (!IS_MNT_PROPAGATED(m) && !list_empty(&m->mnt_slave_list)) 177 + if (!IS_MNT_NEW(m) && !list_empty(&m->mnt_slave_list)) 178 178 m = last_slave(m); 179 179 180 180 return m; ··· 185 185 while (1) { 186 186 while (1) { 187 187 struct mount *next; 188 - if (!IS_MNT_PROPAGATED(m) && !list_empty(&m->mnt_slave_list)) 188 + if (!IS_MNT_NEW(m) && !list_empty(&m->mnt_slave_list)) 189 189 return first_slave(m); 190 190 next = next_peer(m); 191 191 if (m->mnt_group_id == origin->mnt_group_id) { ··· 226 226 struct mount *child; 227 227 int type; 228 228 /* skip ones added by this propagate_mnt() */ 229 - if (IS_MNT_PROPAGATED(m)) 229 + if (IS_MNT_NEW(m)) 230 230 return 0; 231 - /* skip if mountpoint isn't covered by it */ 231 + /* skip if mountpoint isn't visible in m */ 232 232 if (!is_subdir(dest_mp->m_dentry, m->mnt.mnt_root)) 233 233 return 0; 234 + /* skip if m is in the anon_ns we are emptying */ 235 + if (m->mnt_ns->mntns_flags & MNTNS_PROPAGATING) 236 + return 0; 237 + 234 238 if (peers(m, last_dest)) { 235 239 type = CL_MAKE_SHARED; 236 240 } else { ··· 382 378 const struct mountpoint *mp) 383 379 { 384 380 if (!IS_MNT_SHARED(from)) 385 - return false; 386 - 387 - if (IS_MNT_PROPAGATED(to)) 388 381 return false; 389 382 390 383 if (to->mnt.mnt_root != mp->m_dentry)
+1 -1
fs/pnode.h
··· 12 12 13 13 #define IS_MNT_SHARED(m) ((m)->mnt.mnt_flags & MNT_SHARED) 14 14 #define IS_MNT_SLAVE(m) ((m)->mnt_master) 15 - #define IS_MNT_PROPAGATED(m) (!(m)->mnt_ns || ((m)->mnt_ns->mntns_flags & MNTNS_PROPAGATING)) 15 + #define IS_MNT_NEW(m) (!(m)->mnt_ns) 16 16 #define CLEAR_MNT_SHARED(m) ((m)->mnt.mnt_flags &= ~MNT_SHARED) 17 17 #define IS_MNT_UNBINDABLE(m) ((m)->mnt.mnt_flags & MNT_UNBINDABLE) 18 18 #define IS_MNT_MARKED(m) ((m)->mnt.mnt_flags & MNT_MARKED)
+2 -8
fs/smb/client/cached_dir.c
··· 29 29 { 30 30 struct cached_fid *cfid; 31 31 32 - spin_lock(&cfids->cfid_list_lock); 33 32 list_for_each_entry(cfid, &cfids->entries, entry) { 34 33 if (!strcmp(cfid->path, path)) { 35 34 /* ··· 37 38 * being deleted due to a lease break. 38 39 */ 39 40 if (!cfid->time || !cfid->has_lease) { 40 - spin_unlock(&cfids->cfid_list_lock); 41 41 return NULL; 42 42 } 43 43 kref_get(&cfid->refcount); 44 - spin_unlock(&cfids->cfid_list_lock); 45 44 return cfid; 46 45 } 47 46 } 48 47 if (lookup_only) { 49 - spin_unlock(&cfids->cfid_list_lock); 50 48 return NULL; 51 49 } 52 50 if (cfids->num_entries >= max_cached_dirs) { 53 - spin_unlock(&cfids->cfid_list_lock); 54 51 return NULL; 55 52 } 56 53 cfid = init_cached_dir(path); 57 54 if (cfid == NULL) { 58 - spin_unlock(&cfids->cfid_list_lock); 59 55 return NULL; 60 56 } 61 57 cfid->cfids = cfids; ··· 68 74 */ 69 75 cfid->has_lease = true; 70 76 71 - spin_unlock(&cfids->cfid_list_lock); 72 77 return cfid; 73 78 } 74 79 ··· 180 187 if (!utf16_path) 181 188 return -ENOMEM; 182 189 190 + spin_lock(&cfids->cfid_list_lock); 183 191 cfid = find_or_create_cached_dir(cfids, path, lookup_only, tcon->max_cached_dirs); 184 192 if (cfid == NULL) { 193 + spin_unlock(&cfids->cfid_list_lock); 185 194 kfree(utf16_path); 186 195 return -ENOENT; 187 196 } ··· 192 197 * Otherwise, it is either a new entry or laundromat worker removed it 193 198 * from @cfids->entries. Caller will put last reference if the latter. 194 199 */ 195 - spin_lock(&cfids->cfid_list_lock); 196 200 if (cfid->has_lease && cfid->time) { 197 201 spin_unlock(&cfids->cfid_list_lock); 198 202 *ret_cfid = cfid;
+2
fs/smb/client/smb2inode.c
··· 666 666 /* smb2_parse_contexts() fills idata->fi.IndexNumber */ 667 667 rc = smb2_parse_contexts(server, &rsp_iov[0], &oparms->fid->epoch, 668 668 oparms->fid->lease_key, &oplock, &idata->fi, NULL); 669 + if (rc) 670 + cifs_dbg(VFS, "rc: %d parsing context of compound op\n", rc); 669 671 } 670 672 671 673 for (i = 0; i < num_cmds; i++) {
+5 -2
fs/smb/server/oplock.c
··· 1496 1496 1497 1497 if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) < 1498 1498 sizeof(struct create_lease_v2) - 4) 1499 - return NULL; 1499 + goto err_out; 1500 1500 1501 1501 memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1502 1502 lreq->req_state = lc->lcontext.LeaseState; ··· 1512 1512 1513 1513 if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) < 1514 1514 sizeof(struct create_lease)) 1515 - return NULL; 1515 + goto err_out; 1516 1516 1517 1517 memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE); 1518 1518 lreq->req_state = lc->lcontext.LeaseState; ··· 1521 1521 lreq->version = 1; 1522 1522 } 1523 1523 return lreq; 1524 + err_out: 1525 + kfree(lreq); 1526 + return NULL; 1524 1527 } 1525 1528 1526 1529 /**
+5
fs/smb/server/smb2pdu.c
··· 633 633 return name; 634 634 } 635 635 636 + if (*name == '\0') { 637 + kfree(name); 638 + return ERR_PTR(-EINVAL); 639 + } 640 + 636 641 if (*name == '\\') { 637 642 pr_err("not allow directory name included leading slash\n"); 638 643 kfree(name);
+7
fs/smb/server/vfs.c
··· 426 426 goto out; 427 427 } 428 428 429 + if (v_len <= *pos) { 430 + pr_err("stream write position %lld is out of bounds (stream length: %zd)\n", 431 + *pos, v_len); 432 + err = -EINVAL; 433 + goto out; 434 + } 435 + 429 436 if (v_len < size) { 430 437 wbuf = kvzalloc(size, KSMBD_DEFAULT_GFP); 431 438 if (!wbuf) {
+26 -7
fs/smb/server/vfs_cache.c
··· 661 661 bool (*skip)(struct ksmbd_tree_connect *tcon, 662 662 struct ksmbd_file *fp)) 663 663 { 664 - unsigned int id; 665 - struct ksmbd_file *fp; 666 - int num = 0; 664 + struct ksmbd_file *fp; 665 + unsigned int id = 0; 666 + int num = 0; 667 667 668 - idr_for_each_entry(ft->idr, fp, id) { 669 - if (skip(tcon, fp)) 668 + while (1) { 669 + write_lock(&ft->lock); 670 + fp = idr_get_next(ft->idr, &id); 671 + if (!fp) { 672 + write_unlock(&ft->lock); 673 + break; 674 + } 675 + 676 + if (skip(tcon, fp) || 677 + !atomic_dec_and_test(&fp->refcount)) { 678 + id++; 679 + write_unlock(&ft->lock); 670 680 continue; 681 + } 671 682 672 683 set_close_state_blocked_works(fp); 684 + idr_remove(ft->idr, fp->volatile_id); 685 + fp->volatile_id = KSMBD_NO_FID; 686 + write_unlock(&ft->lock); 673 687 674 - if (!atomic_dec_and_test(&fp->refcount)) 675 - continue; 688 + down_write(&fp->f_ci->m_lock); 689 + list_del_init(&fp->node); 690 + up_write(&fp->f_ci->m_lock); 691 + 676 692 __ksmbd_close_fd(ft, fp); 693 + 677 694 num++; 695 + id++; 678 696 } 697 + 679 698 return num; 680 699 } 681 700
+22 -6
fs/userfaultfd.c
··· 1585 1585 user_uffdio_copy = (struct uffdio_copy __user *) arg; 1586 1586 1587 1587 ret = -EAGAIN; 1588 - if (atomic_read(&ctx->mmap_changing)) 1588 + if (unlikely(atomic_read(&ctx->mmap_changing))) { 1589 + if (unlikely(put_user(ret, &user_uffdio_copy->copy))) 1590 + return -EFAULT; 1589 1591 goto out; 1592 + } 1590 1593 1591 1594 ret = -EFAULT; 1592 1595 if (copy_from_user(&uffdio_copy, user_uffdio_copy, ··· 1644 1641 user_uffdio_zeropage = (struct uffdio_zeropage __user *) arg; 1645 1642 1646 1643 ret = -EAGAIN; 1647 - if (atomic_read(&ctx->mmap_changing)) 1644 + if (unlikely(atomic_read(&ctx->mmap_changing))) { 1645 + if (unlikely(put_user(ret, &user_uffdio_zeropage->zeropage))) 1646 + return -EFAULT; 1648 1647 goto out; 1648 + } 1649 1649 1650 1650 ret = -EFAULT; 1651 1651 if (copy_from_user(&uffdio_zeropage, user_uffdio_zeropage, ··· 1750 1744 user_uffdio_continue = (struct uffdio_continue __user *)arg; 1751 1745 1752 1746 ret = -EAGAIN; 1753 - if (atomic_read(&ctx->mmap_changing)) 1747 + if (unlikely(atomic_read(&ctx->mmap_changing))) { 1748 + if (unlikely(put_user(ret, &user_uffdio_continue->mapped))) 1749 + return -EFAULT; 1754 1750 goto out; 1751 + } 1755 1752 1756 1753 ret = -EFAULT; 1757 1754 if (copy_from_user(&uffdio_continue, user_uffdio_continue, ··· 1810 1801 user_uffdio_poison = (struct uffdio_poison __user *)arg; 1811 1802 1812 1803 ret = -EAGAIN; 1813 - if (atomic_read(&ctx->mmap_changing)) 1804 + if (unlikely(atomic_read(&ctx->mmap_changing))) { 1805 + if (unlikely(put_user(ret, &user_uffdio_poison->updated))) 1806 + return -EFAULT; 1814 1807 goto out; 1808 + } 1815 1809 1816 1810 ret = -EFAULT; 1817 1811 if (copy_from_user(&uffdio_poison, user_uffdio_poison, ··· 1882 1870 1883 1871 user_uffdio_move = (struct uffdio_move __user *) arg; 1884 1872 1885 - if (atomic_read(&ctx->mmap_changing)) 1886 - return -EAGAIN; 1873 + ret = -EAGAIN; 1874 + if (unlikely(atomic_read(&ctx->mmap_changing))) { 1875 + if (unlikely(put_user(ret, 
&user_uffdio_move->move))) 1876 + return -EFAULT; 1877 + goto out; 1878 + } 1887 1879 1888 1880 if (copy_from_user(&uffdio_move, user_uffdio_move, 1889 1881 /* don't copy "move" last field */
+8 -10
include/drm/ttm/ttm_backup.h
··· 9 9 #include <linux/mm_types.h> 10 10 #include <linux/shmem_fs.h> 11 11 12 - struct ttm_backup; 13 - 14 12 /** 15 13 * ttm_backup_handle_to_page_ptr() - Convert handle to struct page pointer 16 14 * @handle: The handle to convert. 17 15 * 18 16 * Converts an opaque handle received from the 19 - * struct ttm_backoup_ops::backup_page() function to an (invalid) 17 + * ttm_backup_backup_page() function to an (invalid) 20 18 * struct page pointer suitable for a struct page array. 21 19 * 22 20 * Return: An (invalid) struct page pointer. ··· 43 45 * 44 46 * Return: The handle that was previously used in 45 47 * ttm_backup_handle_to_page_ptr() to obtain a struct page pointer, suitable 46 - * for use as argument in the struct ttm_backup_ops drop() or 47 - * copy_backed_up_page() functions. 48 + * for use as argument in the struct ttm_backup_drop() or 49 + * ttm_backup_copy_page() functions. 48 50 */ 49 51 static inline unsigned long 50 52 ttm_backup_page_ptr_to_handle(const struct page *page) ··· 53 55 return (unsigned long)page >> 1; 54 56 } 55 57 56 - void ttm_backup_drop(struct ttm_backup *backup, pgoff_t handle); 58 + void ttm_backup_drop(struct file *backup, pgoff_t handle); 57 59 58 - int ttm_backup_copy_page(struct ttm_backup *backup, struct page *dst, 60 + int ttm_backup_copy_page(struct file *backup, struct page *dst, 59 61 pgoff_t handle, bool intr); 60 62 61 63 s64 62 - ttm_backup_backup_page(struct ttm_backup *backup, struct page *page, 64 + ttm_backup_backup_page(struct file *backup, struct page *page, 63 65 bool writeback, pgoff_t idx, gfp_t page_gfp, 64 66 gfp_t alloc_gfp); 65 67 66 - void ttm_backup_fini(struct ttm_backup *backup); 68 + void ttm_backup_fini(struct file *backup); 67 69 68 70 u64 ttm_backup_bytes_avail(void); 69 71 70 - struct ttm_backup *ttm_backup_shmem_create(loff_t size); 72 + struct file *ttm_backup_shmem_create(loff_t size); 71 73 72 74 #endif
+1 -1
include/drm/ttm/ttm_tt.h
··· 118 118 * ttm_tt_create() callback is responsible for assigning 119 119 * this field. 120 120 */ 121 - struct ttm_backup *backup; 121 + struct file *backup; 122 122 /** 123 123 * @caching: The current caching state of the pages, see enum 124 124 * ttm_caching.
+6
include/linux/hyperv.h
··· 1002 1002 1003 1003 /* The max size of a packet on this channel */ 1004 1004 u32 max_pkt_size; 1005 + 1006 + /* function to mmap ring buffer memory to the channel's sysfs ring attribute */ 1007 + int (*mmap_ring_buffer)(struct vmbus_channel *channel, struct vm_area_struct *vma); 1008 + 1009 + /* boolean to control visibility of sysfs for ring buffer */ 1010 + bool ring_sysfs_visible; 1005 1011 }; 1006 1012 1007 1013 #define lock_requestor(channel, flags) \
+1 -1
include/linux/ieee80211.h
··· 1526 1526 struct { 1527 1527 u8 action_code; 1528 1528 u8 dialog_token; 1529 - u8 status_code; 1529 + __le16 status_code; 1530 1530 u8 variable[]; 1531 1531 } __packed ttlm_res; 1532 1532 struct {
+1
include/linux/netdevice.h
··· 4972 4972 4973 4973 /* Functions used for secondary unicast and multicast support */ 4974 4974 void dev_set_rx_mode(struct net_device *dev); 4975 + int netif_set_promiscuity(struct net_device *dev, int inc); 4975 4976 int dev_set_promiscuity(struct net_device *dev, int inc); 4976 4977 int netif_set_allmulti(struct net_device *dev, int inc, bool notify); 4977 4978 int dev_set_allmulti(struct net_device *dev, int inc);
+5 -3
include/linux/timekeeper_internal.h
··· 51 51 * @offs_real: Offset clock monotonic -> clock realtime 52 52 * @offs_boot: Offset clock monotonic -> clock boottime 53 53 * @offs_tai: Offset clock monotonic -> clock tai 54 - * @tai_offset: The current UTC to TAI offset in seconds 54 + * @coarse_nsec: The nanoseconds part for coarse time getters 55 55 * @tkr_raw: The readout base structure for CLOCK_MONOTONIC_RAW 56 56 * @raw_sec: CLOCK_MONOTONIC_RAW time in seconds 57 57 * @clock_was_set_seq: The sequence number of clock was set events ··· 76 76 * ntp shifted nano seconds. 77 77 * @ntp_err_mult: Multiplication factor for scaled math conversion 78 78 * @skip_second_overflow: Flag used to avoid updating NTP twice with same second 79 + * @tai_offset: The current UTC to TAI offset in seconds 79 80 * 80 81 * Note: For timespec(64) based interfaces wall_to_monotonic is what 81 82 * we need to add to xtime (or xtime corrected for sub jiffy times) ··· 101 100 * which results in the following cacheline layout: 102 101 * 103 102 * 0: seqcount, tkr_mono 104 - * 1: xtime_sec ... tai_offset 103 + * 1: xtime_sec ... coarse_nsec 105 104 * 2: tkr_raw, raw_sec 106 105 * 3,4: Internal variables 107 106 * ··· 122 121 ktime_t offs_real; 123 122 ktime_t offs_boot; 124 123 ktime_t offs_tai; 125 - s32 tai_offset; 124 + u32 coarse_nsec; 126 125 127 126 /* Cacheline 2: */ 128 127 struct tk_read_base tkr_raw; ··· 145 144 u32 ntp_error_shift; 146 145 u32 ntp_err_mult; 147 146 u32 skip_second_overflow; 147 + s32 tai_offset; 148 148 }; 149 149 150 150 #ifdef CONFIG_GENERIC_TIME_VSYSCALL
+1
include/linux/vmalloc.h
··· 61 61 unsigned int nr_pages; 62 62 phys_addr_t phys_addr; 63 63 const void *caller; 64 + unsigned long requested_size; 64 65 }; 65 66 66 67 struct vmap_area {
+6
include/net/netdev_queues.h
··· 103 103 struct netdev_queue_stats_tx *tx); 104 104 }; 105 105 106 + void netdev_stat_queue_sum(struct net_device *netdev, 107 + int rx_start, int rx_end, 108 + struct netdev_queue_stats_rx *rx_sum, 109 + int tx_start, int tx_end, 110 + struct netdev_queue_stats_tx *tx_sum); 111 + 106 112 /** 107 113 * struct netdev_queue_mgmt_ops - netdev ops for queue management 108 114 *
+1 -1
include/trace/events/btrfs.h
··· 1928 1928 TP_PROTO(const struct btrfs_fs_info *fs_info, 1929 1929 const struct prelim_ref *oldref, 1930 1930 const struct prelim_ref *newref, u64 tree_size), 1931 - TP_ARGS(fs_info, newref, oldref, tree_size), 1931 + TP_ARGS(fs_info, oldref, newref, tree_size), 1932 1932 1933 1933 TP_STRUCT__entry_btrfs( 1934 1934 __field( u64, root_id )
+3
include/uapi/linux/bpf.h
··· 4968 4968 * the netns switch takes place from ingress to ingress without 4969 4969 * going through the CPU's backlog queue. 4970 4970 * 4971 + * *skb*\ **->mark** and *skb*\ **->tstamp** are not cleared during 4972 + * the netns switch. 4973 + * 4971 4974 * The *flags* argument is reserved and must be 0. The helper is 4972 4975 * currently only supported for tc BPF program types at the 4973 4976 * ingress hook and for veth and netkit target device types. The
+3
init/Kconfig
··· 140 140 config RUSTC_HAS_COERCE_POINTEE 141 141 def_bool RUSTC_VERSION >= 108400 142 142 143 + config RUSTC_HAS_UNNECESSARY_TRANSMUTES 144 + def_bool RUSTC_VERSION >= 108800 145 + 143 146 config PAHOLE_VERSION 144 147 int 145 148 default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+23 -35
io_uring/io_uring.c
··· 448 448 return req->link; 449 449 } 450 450 451 - static inline struct io_kiocb *io_prep_linked_timeout(struct io_kiocb *req) 452 - { 453 - if (likely(!(req->flags & REQ_F_ARM_LTIMEOUT))) 454 - return NULL; 455 - return __io_prep_linked_timeout(req); 456 - } 457 - 458 - static noinline void __io_arm_ltimeout(struct io_kiocb *req) 459 - { 460 - io_queue_linked_timeout(__io_prep_linked_timeout(req)); 461 - } 462 - 463 - static inline void io_arm_ltimeout(struct io_kiocb *req) 464 - { 465 - if (unlikely(req->flags & REQ_F_ARM_LTIMEOUT)) 466 - __io_arm_ltimeout(req); 467 - } 468 - 469 451 static void io_prep_async_work(struct io_kiocb *req) 470 452 { 471 453 const struct io_issue_def *def = &io_issue_defs[req->opcode]; ··· 500 518 501 519 static void io_queue_iowq(struct io_kiocb *req) 502 520 { 503 - struct io_kiocb *link = io_prep_linked_timeout(req); 504 521 struct io_uring_task *tctx = req->tctx; 505 522 506 523 BUG_ON(!tctx); ··· 524 543 525 544 trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work)); 526 545 io_wq_enqueue(tctx->io_wq, &req->work); 527 - if (link) 528 - io_queue_linked_timeout(link); 529 546 } 530 547 531 548 static void io_req_queue_iowq_tw(struct io_kiocb *req, io_tw_token_t tw) ··· 847 868 { 848 869 struct io_ring_ctx *ctx = req->ctx; 849 870 bool posted; 871 + 872 + /* 873 + * If multishot has already posted deferred completions, ensure that 874 + * those are flushed first before posting this one. If not, CQEs 875 + * could get reordered. 
876 + */ 877 + if (!wq_list_empty(&ctx->submit_state.compl_reqs)) 878 + __io_submit_flush_completions(ctx); 850 879 851 880 lockdep_assert(!io_wq_current_is_worker()); 852 881 lockdep_assert_held(&ctx->uring_lock); ··· 1711 1724 return !!req->file; 1712 1725 } 1713 1726 1727 + #define REQ_ISSUE_SLOW_FLAGS (REQ_F_CREDS | REQ_F_ARM_LTIMEOUT) 1728 + 1714 1729 static inline int __io_issue_sqe(struct io_kiocb *req, 1715 1730 unsigned int issue_flags, 1716 1731 const struct io_issue_def *def) 1717 1732 { 1718 1733 const struct cred *creds = NULL; 1734 + struct io_kiocb *link = NULL; 1719 1735 int ret; 1720 1736 1721 - if (unlikely((req->flags & REQ_F_CREDS) && req->creds != current_cred())) 1722 - creds = override_creds(req->creds); 1737 + if (unlikely(req->flags & REQ_ISSUE_SLOW_FLAGS)) { 1738 + if ((req->flags & REQ_F_CREDS) && req->creds != current_cred()) 1739 + creds = override_creds(req->creds); 1740 + if (req->flags & REQ_F_ARM_LTIMEOUT) 1741 + link = __io_prep_linked_timeout(req); 1742 + } 1723 1743 1724 1744 if (!def->audit_skip) 1725 1745 audit_uring_entry(req->opcode); ··· 1736 1742 if (!def->audit_skip) 1737 1743 audit_uring_exit(!ret, ret); 1738 1744 1739 - if (creds) 1740 - revert_creds(creds); 1745 + if (unlikely(creds || link)) { 1746 + if (creds) 1747 + revert_creds(creds); 1748 + if (link) 1749 + io_queue_linked_timeout(link); 1750 + } 1741 1751 1742 1752 return ret; 1743 1753 } ··· 1767 1769 1768 1770 if (ret == IOU_ISSUE_SKIP_COMPLETE) { 1769 1771 ret = 0; 1770 - io_arm_ltimeout(req); 1771 1772 1772 1773 /* If the op doesn't have a file, we're not polling for it */ 1773 1774 if ((req->ctx->flags & IORING_SETUP_IOPOLL) && def->iopoll_queue) ··· 1820 1823 __io_req_set_refcount(req, 2); 1821 1824 else 1822 1825 req_ref_get(req); 1823 - 1824 - io_arm_ltimeout(req); 1825 1826 1826 1827 /* either cancelled or io-wq is dying, so don't touch tctx->iowq */ 1827 1828 if (atomic_read(&work->flags) & IO_WQ_WORK_CANCEL) { ··· 1936 1941 static void 
io_queue_async(struct io_kiocb *req, int ret) 1937 1942 __must_hold(&req->ctx->uring_lock) 1938 1943 { 1939 - struct io_kiocb *linked_timeout; 1940 - 1941 1944 if (ret != -EAGAIN || (req->flags & REQ_F_NOWAIT)) { 1942 1945 io_req_defer_failed(req, ret); 1943 1946 return; 1944 1947 } 1945 - 1946 - linked_timeout = io_prep_linked_timeout(req); 1947 1948 1948 1949 switch (io_arm_poll_handler(req, 0)) { 1949 1950 case IO_APOLL_READY: ··· 1953 1962 case IO_APOLL_OK: 1954 1963 break; 1955 1964 } 1956 - 1957 - if (linked_timeout) 1958 - io_queue_linked_timeout(linked_timeout); 1959 1965 } 1960 1966 1961 1967 static inline void io_queue_sqe(struct io_kiocb *req)
+1 -1
io_uring/sqpoll.c
··· 20 20 #include "sqpoll.h" 21 21 22 22 #define IORING_SQPOLL_CAP_ENTRIES_VALUE 8 23 - #define IORING_TW_CAP_ENTRIES_VALUE 8 23 + #define IORING_TW_CAP_ENTRIES_VALUE 32 24 24 25 25 enum { 26 26 IO_SQ_THREAD_SHOULD_STOP = 0,
+3 -1
kernel/params.c
··· 943 943 static void module_kobj_release(struct kobject *kobj) 944 944 { 945 945 struct module_kobject *mk = to_module_kobject(kobj); 946 - complete(mk->kobj_completion); 946 + 947 + if (mk->kobj_completion) 948 + complete(mk->kobj_completion); 947 949 } 948 950 949 951 const struct kobj_type module_ktype = {
+42 -8
kernel/time/timekeeping.c
··· 164 164 return ts; 165 165 } 166 166 167 + static inline struct timespec64 tk_xtime_coarse(const struct timekeeper *tk) 168 + { 169 + struct timespec64 ts; 170 + 171 + ts.tv_sec = tk->xtime_sec; 172 + ts.tv_nsec = tk->coarse_nsec; 173 + return ts; 174 + } 175 + 176 + /* 177 + * Update the nanoseconds part for the coarse time keepers. They can't rely 178 + * on xtime_nsec because xtime_nsec could be adjusted by a small negative 179 + * amount when the multiplication factor of the clock is adjusted, which 180 + * could cause the coarse clocks to go slightly backwards. See 181 + * timekeeping_apply_adjustment(). Thus we keep a separate copy for the coarse 182 + * clockids which only is updated when the clock has been set or we have 183 + * accumulated time. 184 + */ 185 + static inline void tk_update_coarse_nsecs(struct timekeeper *tk) 186 + { 187 + tk->coarse_nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift; 188 + } 189 + 167 190 static void tk_set_xtime(struct timekeeper *tk, const struct timespec64 *ts) 168 191 { 169 192 tk->xtime_sec = ts->tv_sec; 170 193 tk->tkr_mono.xtime_nsec = (u64)ts->tv_nsec << tk->tkr_mono.shift; 194 + tk_update_coarse_nsecs(tk); 171 195 } 172 196 173 197 static void tk_xtime_add(struct timekeeper *tk, const struct timespec64 *ts) ··· 199 175 tk->xtime_sec += ts->tv_sec; 200 176 tk->tkr_mono.xtime_nsec += (u64)ts->tv_nsec << tk->tkr_mono.shift; 201 177 tk_normalize_xtime(tk); 178 + tk_update_coarse_nsecs(tk); 202 179 } 203 180 204 181 static void tk_set_wall_to_mono(struct timekeeper *tk, struct timespec64 wtm) ··· 733 708 tk_normalize_xtime(tk); 734 709 delta -= incr; 735 710 } 711 + tk_update_coarse_nsecs(tk); 736 712 } 737 713 738 714 /** ··· 830 804 ktime_t ktime_get_coarse_with_offset(enum tk_offsets offs) 831 805 { 832 806 struct timekeeper *tk = &tk_core.timekeeper; 833 - unsigned int seq; 834 807 ktime_t base, *offset = offsets[offs]; 808 + unsigned int seq; 835 809 u64 nsecs; 836 810 837 811 WARN_ON(timekeeping_suspended); 
··· 839 813 do { 840 814 seq = read_seqcount_begin(&tk_core.seq); 841 815 base = ktime_add(tk->tkr_mono.base, *offset); 842 - nsecs = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift; 816 + nsecs = tk->coarse_nsec; 843 817 844 818 } while (read_seqcount_retry(&tk_core.seq, seq)); 845 819 ··· 2187 2161 struct timekeeper *real_tk = &tk_core.timekeeper; 2188 2162 unsigned int clock_set = 0; 2189 2163 int shift = 0, maxshift; 2190 - u64 offset; 2164 + u64 offset, orig_offset; 2191 2165 2192 2166 guard(raw_spinlock_irqsave)(&tk_core.lock); 2193 2167 ··· 2198 2172 offset = clocksource_delta(tk_clock_read(&tk->tkr_mono), 2199 2173 tk->tkr_mono.cycle_last, tk->tkr_mono.mask, 2200 2174 tk->tkr_mono.clock->max_raw_delta); 2201 - 2175 + orig_offset = offset; 2202 2176 /* Check if there's really nothing to do */ 2203 2177 if (offset < real_tk->cycle_interval && mode == TK_ADV_TICK) 2204 2178 return false; ··· 2230 2204 * xtime_nsec isn't larger than NSEC_PER_SEC 2231 2205 */ 2232 2206 clock_set |= accumulate_nsecs_to_secs(tk); 2207 + 2208 + /* 2209 + * To avoid inconsistencies caused by adjtimex TK_ADV_FREQ calls 2210 + * making small negative adjustments to the base xtime_nsec 2211 + * value, only update the coarse clocks if we accumulated time 2212 + */ 2213 + if (orig_offset != offset) 2214 + tk_update_coarse_nsecs(tk); 2233 2215 2234 2216 timekeeping_update_from_shadow(&tk_core, clock_set); 2235 2217 ··· 2282 2248 do { 2283 2249 seq = read_seqcount_begin(&tk_core.seq); 2284 2250 2285 - *ts = tk_xtime(tk); 2251 + *ts = tk_xtime_coarse(tk); 2286 2252 } while (read_seqcount_retry(&tk_core.seq, seq)); 2287 2253 } 2288 2254 EXPORT_SYMBOL(ktime_get_coarse_real_ts64); ··· 2305 2271 2306 2272 do { 2307 2273 seq = read_seqcount_begin(&tk_core.seq); 2308 - *ts = tk_xtime(tk); 2274 + *ts = tk_xtime_coarse(tk); 2309 2275 offset = tk_core.timekeeper.offs_real; 2310 2276 } while (read_seqcount_retry(&tk_core.seq, seq)); 2311 2277 ··· 2384 2350 do { 2385 2351 seq = 
read_seqcount_begin(&tk_core.seq); 2386 2352 2387 - now = tk_xtime(tk); 2353 + now = tk_xtime_coarse(tk); 2388 2354 mono = tk->wall_to_monotonic; 2389 2355 } while (read_seqcount_retry(&tk_core.seq, seq)); 2390 2356 2391 2357 set_normalized_timespec64(ts, now.tv_sec + mono.tv_sec, 2392 - now.tv_nsec + mono.tv_nsec); 2358 + now.tv_nsec + mono.tv_nsec); 2393 2359 } 2394 2360 EXPORT_SYMBOL(ktime_get_coarse_ts64); 2395 2361
+2 -2
kernel/time/vsyscall.c
··· 98 98 /* CLOCK_REALTIME_COARSE */ 99 99 vdso_ts = &vc[CS_HRES_COARSE].basetime[CLOCK_REALTIME_COARSE]; 100 100 vdso_ts->sec = tk->xtime_sec; 101 - vdso_ts->nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift; 101 + vdso_ts->nsec = tk->coarse_nsec; 102 102 103 103 /* CLOCK_MONOTONIC_COARSE */ 104 104 vdso_ts = &vc[CS_HRES_COARSE].basetime[CLOCK_MONOTONIC_COARSE]; 105 105 vdso_ts->sec = tk->xtime_sec + tk->wall_to_monotonic.tv_sec; 106 - nsec = tk->tkr_mono.xtime_nsec >> tk->tkr_mono.shift; 106 + nsec = tk->coarse_nsec; 107 107 nsec = nsec + tk->wall_to_monotonic.tv_nsec; 108 108 vdso_ts->sec += __iter_div_u64_rem(nsec, NSEC_PER_SEC, &vdso_ts->nsec); 109 109
+8 -3
mm/huge_memory.c
··· 3075 3075 void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address, 3076 3076 pmd_t *pmd, bool freeze, struct folio *folio) 3077 3077 { 3078 + bool pmd_migration = is_pmd_migration_entry(*pmd); 3079 + 3078 3080 VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio)); 3079 3081 VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE)); 3080 3082 VM_WARN_ON_ONCE(folio && !folio_test_locked(folio)); ··· 3087 3085 * require a folio to check the PMD against. Otherwise, there 3088 3086 * is a risk of replacing the wrong folio. 3089 3087 */ 3090 - if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || 3091 - is_pmd_migration_entry(*pmd)) { 3092 - if (folio && folio != pmd_folio(*pmd)) 3088 + if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) || pmd_migration) { 3089 + /* 3090 + * Do not apply pmd_folio() to a migration entry; and folio lock 3091 + * guarantees that it must be of the wrong folio anyway. 3092 + */ 3093 + if (folio && (pmd_migration || folio != pmd_folio(*pmd))) 3093 3094 return; 3094 3095 __split_huge_pmd_locked(vma, pmd, address, freeze); 3095 3096 }
+6
mm/hugetlb.c
··· 4034 4034 4035 4035 list_for_each_entry_safe(folio, next, src_list, lru) { 4036 4036 int i; 4037 + bool cma; 4037 4038 4038 4039 if (folio_test_hugetlb_vmemmap_optimized(folio)) 4039 4040 continue; 4041 + 4042 + cma = folio_test_hugetlb_cma(folio); 4040 4043 4041 4044 list_del(&folio->lru); 4042 4045 ··· 4056 4053 4057 4054 new_folio->mapping = NULL; 4058 4055 init_new_hugetlb_folio(dst, new_folio); 4056 + /* Copy the CMA flag so that it is freed correctly */ 4057 + if (cma) 4058 + folio_set_hugetlb_cma(new_folio); 4059 4059 list_add(&new_folio->lru, &dst_list); 4060 4060 } 4061 4061 }
+11 -16
mm/internal.h
··· 248 248 pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags, 249 249 bool *any_writable, bool *any_young, bool *any_dirty) 250 250 { 251 - unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio); 252 - const pte_t *end_ptep = start_ptep + max_nr; 253 251 pte_t expected_pte, *ptep; 254 252 bool writable, young, dirty; 255 - int nr; 253 + int nr, cur_nr; 256 254 257 255 if (any_writable) 258 256 *any_writable = false; ··· 263 265 VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio); 264 266 VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio); 265 267 268 + /* Limit max_nr to the actual remaining PFNs in the folio we could batch. */ 269 + max_nr = min_t(unsigned long, max_nr, 270 + folio_pfn(folio) + folio_nr_pages(folio) - pte_pfn(pte)); 271 + 266 272 nr = pte_batch_hint(start_ptep, pte); 267 273 expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags); 268 274 ptep = start_ptep + nr; 269 275 270 - while (ptep < end_ptep) { 276 + while (nr < max_nr) { 271 277 pte = ptep_get(ptep); 272 278 if (any_writable) 273 279 writable = !!pte_write(pte); ··· 284 282 if (!pte_same(pte, expected_pte)) 285 283 break; 286 284 287 - /* 288 - * Stop immediately once we reached the end of the folio. In 289 - * corner cases the next PFN might fall into a different 290 - * folio. 291 - */ 292 - if (pte_pfn(pte) >= folio_end_pfn) 293 - break; 294 - 295 285 if (any_writable) 296 286 *any_writable |= writable; 297 287 if (any_young) ··· 291 297 if (any_dirty) 292 298 *any_dirty |= dirty; 293 299 294 - nr = pte_batch_hint(ptep, pte); 295 - expected_pte = pte_advance_pfn(expected_pte, nr); 296 - ptep += nr; 300 + cur_nr = pte_batch_hint(ptep, pte); 301 + expected_pte = pte_advance_pfn(expected_pte, cur_nr); 302 + ptep += cur_nr; 303 + nr += cur_nr; 297 304 } 298 305 299 - return min(ptep - start_ptep, max_nr); 306 + return min(nr, max_nr); 300 307 } 301 308 302 309 /**
+8 -1
mm/memblock.c
··· 457 457 min(new_area_start, memblock.current_limit), 458 458 new_alloc_size, PAGE_SIZE); 459 459 460 - new_array = addr ? __va(addr) : NULL; 460 + if (addr) { 461 + /* The memory may not have been accepted, yet. */ 462 + accept_memory(addr, new_alloc_size); 463 + 464 + new_array = __va(addr); 465 + } else { 466 + new_array = NULL; 467 + } 461 468 } 462 469 if (!addr) { 463 470 pr_err("memblock: Failed to double %s array from %ld to %ld entries !\n",
+1 -1
mm/mm_init.c
··· 1786 1786 return IS_ENABLED(CONFIG_ARC) && !IS_ENABLED(CONFIG_ARC_HAS_PAE40); 1787 1787 } 1788 1788 1789 - static void set_high_memory(void) 1789 + static void __init set_high_memory(void) 1790 1790 { 1791 1791 phys_addr_t highmem = memblock_end_of_DRAM(); 1792 1792
+16 -7
mm/swapfile.c
··· 1272 1272 VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); 1273 1273 VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio); 1274 1274 1275 - /* 1276 - * Should not even be attempting large allocations when huge 1277 - * page swap is disabled. Warn and fail the allocation. 1278 - */ 1279 - if (order && (!IS_ENABLED(CONFIG_THP_SWAP) || size > SWAPFILE_CLUSTER)) { 1280 - VM_WARN_ON_ONCE(1); 1281 - return -EINVAL; 1275 + if (order) { 1276 + /* 1277 + * Reject large allocation when THP_SWAP is disabled, 1278 + * the caller should split the folio and try again. 1279 + */ 1280 + if (!IS_ENABLED(CONFIG_THP_SWAP)) 1281 + return -EAGAIN; 1282 + 1283 + /* 1284 + * Allocation size should never exceed cluster size 1285 + * (HPAGE_PMD_SIZE). 1286 + */ 1287 + if (size > SWAPFILE_CLUSTER) { 1288 + VM_WARN_ON_ONCE(1); 1289 + return -EINVAL; 1290 + } 1282 1291 } 1283 1292 1284 1293 local_lock(&percpu_swap_cluster.lock);
+24 -7
mm/vmalloc.c
··· 1940 1940 { 1941 1941 vm->flags = flags; 1942 1942 vm->addr = (void *)va->va_start; 1943 - vm->size = va_size(va); 1943 + vm->size = vm->requested_size = va_size(va); 1944 1944 vm->caller = caller; 1945 1945 va->vm = vm; 1946 1946 } ··· 3133 3133 3134 3134 area->flags = flags; 3135 3135 area->caller = caller; 3136 + area->requested_size = requested_size; 3136 3137 3137 3138 va = alloc_vmap_area(size, align, start, end, node, gfp_mask, 0, area); 3138 3139 if (IS_ERR(va)) { ··· 4064 4063 */ 4065 4064 void *vrealloc_noprof(const void *p, size_t size, gfp_t flags) 4066 4065 { 4066 + struct vm_struct *vm = NULL; 4067 + size_t alloced_size = 0; 4067 4068 size_t old_size = 0; 4068 4069 void *n; 4069 4070 ··· 4075 4072 } 4076 4073 4077 4074 if (p) { 4078 - struct vm_struct *vm; 4079 - 4080 4075 vm = find_vm_area(p); 4081 4076 if (unlikely(!vm)) { 4082 4077 WARN(1, "Trying to vrealloc() nonexistent vm area (%p)\n", p); 4083 4078 return NULL; 4084 4079 } 4085 4080 4086 - old_size = get_vm_area_size(vm); 4081 + alloced_size = get_vm_area_size(vm); 4082 + old_size = vm->requested_size; 4083 + if (WARN(alloced_size < old_size, 4084 + "vrealloc() has mismatched area vs requested sizes (%p)\n", p)) 4085 + return NULL; 4087 4086 } 4088 4087 4089 4088 /* ··· 4093 4088 * would be a good heuristic for when to shrink the vm_area? 4094 4089 */ 4095 4090 if (size <= old_size) { 4096 - /* Zero out spare memory. */ 4097 - if (want_init_on_alloc(flags)) 4091 + /* Zero out "freed" memory. */ 4092 + if (want_init_on_free()) 4098 4093 memset((void *)p + size, 0, old_size - size); 4094 + vm->requested_size = size; 4099 4095 kasan_poison_vmalloc(p + size, old_size - size); 4100 - kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL); 4101 4096 return (void *)p; 4097 + } 4098 + 4099 + /* 4100 + * We already have the bytes available in the allocation; use them. 4101 + */ 4102 + if (size <= alloced_size) { 4103 + kasan_unpoison_vmalloc(p + old_size, size - old_size, 4104 + KASAN_VMALLOC_PROT_NORMAL); 4105 + /* Zero out "alloced" memory. */ 4106 + if (want_init_on_alloc(flags)) 4107 + memset((void *)p + old_size, 0, size - old_size); 4108 + vm->requested_size = size; 4102 4109 } 4103 4110 4104 4111 /* TODO: Grow the vm_area, i.e. allocate and map additional pages.
+91 -60
net/can/gw.c
··· 130 130 u32 handled_frames; 131 131 u32 dropped_frames; 132 132 u32 deleted_frames; 133 - struct cf_mod mod; 133 + struct cf_mod __rcu *cf_mod; 134 134 union { 135 135 /* CAN frame data source */ 136 136 struct net_device *dev; ··· 459 459 struct cgw_job *gwj = (struct cgw_job *)data; 460 460 struct canfd_frame *cf; 461 461 struct sk_buff *nskb; 462 + struct cf_mod *mod; 462 463 int modidx = 0; 463 464 464 465 /* process strictly Classic CAN or CAN FD frames */ ··· 507 506 * When there is at least one modification function activated, 508 507 * we need to copy the skb as we want to modify skb->data. 509 508 */ 510 - if (gwj->mod.modfunc[0]) 509 + mod = rcu_dereference(gwj->cf_mod); 510 + if (mod->modfunc[0]) 511 511 nskb = skb_copy(skb, GFP_ATOMIC); 512 512 else 513 513 nskb = skb_clone(skb, GFP_ATOMIC); ··· 531 529 cf = (struct canfd_frame *)nskb->data; 532 530 533 531 /* perform preprocessed modification functions if there are any */ 534 - while (modidx < MAX_MODFUNCTIONS && gwj->mod.modfunc[modidx]) 535 - (*gwj->mod.modfunc[modidx++])(cf, &gwj->mod); 532 + while (modidx < MAX_MODFUNCTIONS && mod->modfunc[modidx]) 533 + (*mod->modfunc[modidx++])(cf, mod); 536 534 537 535 /* Has the CAN frame been modified? */ 538 536 if (modidx) { ··· 548 546 } 549 547 550 548 /* check for checksum updates */ 551 - if (gwj->mod.csumfunc.crc8) 552 - (*gwj->mod.csumfunc.crc8)(cf, &gwj->mod.csum.crc8); 549 + if (mod->csumfunc.crc8) 550 + (*mod->csumfunc.crc8)(cf, &mod->csum.crc8); 553 551 554 - if (gwj->mod.csumfunc.xor) 555 - (*gwj->mod.csumfunc.xor)(cf, &gwj->mod.csum.xor); 552 + if (mod->csumfunc.xor) 553 + (*mod->csumfunc.xor)(cf, &mod->csum.xor); 556 554 557 555 /* clear the skb timestamp if not configured the other way */ ··· 583 581 { 584 582 struct cgw_job *gwj = container_of(rcu_head, struct cgw_job, rcu); 585 583 584 + /* cgw_job::cf_mod is always accessed from the same cgw_job object within 585 + * the same RCU read section. Once cgw_job is scheduled for removal, 586 + * cf_mod can also be removed without mandating an additional grace period. 587 + */ 588 + kfree(rcu_access_pointer(gwj->cf_mod)); 586 589 kmem_cache_free(cgw_cache, gwj); 590 + } 591 + 592 + /* Return cgw_job::cf_mod with RTNL protected section */ 593 + static struct cf_mod *cgw_job_cf_mod(struct cgw_job *gwj) 594 + { 595 + return rcu_dereference_protected(gwj->cf_mod, rtnl_is_locked()); 587 596 } 588 597 589 598 static int cgw_notifier(struct notifier_block *nb, ··· 629 616 { 630 617 struct rtcanmsg *rtcan; 631 618 struct nlmsghdr *nlh; 619 + struct cf_mod *mod; 632 620 633 621 nlh = nlmsg_put(skb, pid, seq, type, sizeof(*rtcan), flags); 634 622 if (!nlh) ··· 664 650 goto cancel; 665 651 } 666 652 653 + mod = cgw_job_cf_mod(gwj); 667 654 if (gwj->flags & CGW_FLAGS_CAN_FD) { 668 655 struct cgw_fdframe_mod mb; 669 656 670 - if (gwj->mod.modtype.and) { 671 - memcpy(&mb.cf, &gwj->mod.modframe.and, sizeof(mb.cf)); 672 - mb.modtype = gwj->mod.modtype.and; 657 + if (mod->modtype.and) { 658 + memcpy(&mb.cf, &mod->modframe.and, sizeof(mb.cf)); 659 + mb.modtype = mod->modtype.and; 673 660 if (nla_put(skb, CGW_FDMOD_AND, sizeof(mb), &mb) < 0) 674 661 goto cancel; 675 662 } 676 663 677 - if (gwj->mod.modtype.or) { 678 - memcpy(&mb.cf, &gwj->mod.modframe.or, sizeof(mb.cf)); 679 - mb.modtype = gwj->mod.modtype.or; 664 + if (mod->modtype.or) { 665 + memcpy(&mb.cf, &mod->modframe.or, sizeof(mb.cf)); 666 + mb.modtype = mod->modtype.or; 680 667 if (nla_put(skb, CGW_FDMOD_OR, sizeof(mb), &mb) < 0) 681 668 goto cancel; 682 669 } 683 670 684 - if (gwj->mod.modtype.xor) { 685 - memcpy(&mb.cf, &gwj->mod.modframe.xor, sizeof(mb.cf)); 686 - mb.modtype = gwj->mod.modtype.xor; 671 + if (mod->modtype.xor) { 672 + memcpy(&mb.cf, &mod->modframe.xor, sizeof(mb.cf)); 673 + mb.modtype = mod->modtype.xor; 687 674 if (nla_put(skb, CGW_FDMOD_XOR, sizeof(mb), &mb) < 0) 688 675 goto cancel; 689 676 } 690 677 691 - if (gwj->mod.modtype.set) { 692 - memcpy(&mb.cf, &gwj->mod.modframe.set, sizeof(mb.cf)); 693 - mb.modtype = gwj->mod.modtype.set; 678 + if (mod->modtype.set) { 679 + memcpy(&mb.cf, &mod->modframe.set, sizeof(mb.cf)); 680 + mb.modtype = mod->modtype.set; 694 681 if (nla_put(skb, CGW_FDMOD_SET, sizeof(mb), &mb) < 0) 695 682 goto cancel; 696 683 } 697 684 } else { 698 685 struct cgw_frame_mod mb; 699 686 700 - if (gwj->mod.modtype.and) { 701 - memcpy(&mb.cf, &gwj->mod.modframe.and, sizeof(mb.cf)); 702 - mb.modtype = gwj->mod.modtype.and; 687 + if (mod->modtype.and) { 688 + memcpy(&mb.cf, &mod->modframe.and, sizeof(mb.cf)); 689 + mb.modtype = mod->modtype.and; 703 690 if (nla_put(skb, CGW_MOD_AND, sizeof(mb), &mb) < 0) 704 691 goto cancel; 705 692 } 706 693 707 - if (gwj->mod.modtype.or) { 708 - memcpy(&mb.cf, &gwj->mod.modframe.or, sizeof(mb.cf)); 709 - mb.modtype = gwj->mod.modtype.or; 694 + if (mod->modtype.or) { 695 + memcpy(&mb.cf, &mod->modframe.or, sizeof(mb.cf)); 696 + mb.modtype = mod->modtype.or; 710 697 if (nla_put(skb, CGW_MOD_OR, sizeof(mb), &mb) < 0) 711 698 goto cancel; 712 699 } 713 700 714 - if (gwj->mod.modtype.xor) { 715 - memcpy(&mb.cf, &gwj->mod.modframe.xor, sizeof(mb.cf)); 716 - mb.modtype = gwj->mod.modtype.xor; 701 + if (mod->modtype.xor) { 702 + memcpy(&mb.cf, &mod->modframe.xor, sizeof(mb.cf)); 703 + mb.modtype = mod->modtype.xor; 717 704 if (nla_put(skb, CGW_MOD_XOR, sizeof(mb), &mb) < 0) 718 705 goto cancel; 719 706 } 720 707 721 - if (gwj->mod.modtype.set) { 722 - memcpy(&mb.cf, &gwj->mod.modframe.set, sizeof(mb.cf)); 723 - mb.modtype = gwj->mod.modtype.set; 708 + if (mod->modtype.set) { 709 + memcpy(&mb.cf, &mod->modframe.set, sizeof(mb.cf)); 710 + mb.modtype = mod->modtype.set; 724 711 if (nla_put(skb, CGW_MOD_SET, sizeof(mb), &mb) < 0) 725 712 goto cancel; 726 713 } 727 714 } 728 715 729 - if (gwj->mod.uid) { 730 - if (nla_put_u32(skb, CGW_MOD_UID, gwj->mod.uid) < 0) 716 + if (mod->uid) { 717 + if (nla_put_u32(skb, CGW_MOD_UID, mod->uid) < 0) 731 718 goto cancel; 732 719 } 733 720 734 - if (gwj->mod.csumfunc.crc8) { 721 + if (mod->csumfunc.crc8) { 735 722 if (nla_put(skb, CGW_CS_CRC8, CGW_CS_CRC8_LEN, 736 - &gwj->mod.csum.crc8) < 0) 723 + &mod->csum.crc8) < 0) 737 724 goto cancel; 738 725 } 739 726 740 - if (gwj->mod.csumfunc.xor) { 727 + if (mod->csumfunc.xor) { 741 728 if (nla_put(skb, CGW_CS_XOR, CGW_CS_XOR_LEN, 742 - &gwj->mod.csum.xor) < 0) 729 + &mod->csum.xor) < 0) 743 730 goto cancel; 744 731 } 745 732 ··· 1074 1059 struct net *net = sock_net(skb->sk); 1075 1060 struct rtcanmsg *r; 1076 1061 struct cgw_job *gwj; 1077 - struct cf_mod mod; 1062 + struct cf_mod *mod; 1078 1063 struct can_can_gw ccgw; 1079 1064 u8 limhops = 0; 1080 1065 int err = 0; ··· 1093 1078 if (r->gwtype != CGW_TYPE_CAN_CAN) 1094 1079 return -EINVAL; 1095 1080 1096 - err = cgw_parse_attr(nlh, &mod, CGW_TYPE_CAN_CAN, &ccgw, &limhops); 1097 - if (err < 0) 1098 - return err; 1081 + mod = kmalloc(sizeof(*mod), GFP_KERNEL); 1082 + if (!mod) 1083 + return -ENOMEM; 1099 1084 1100 - if (mod.uid) { 1085 + err = cgw_parse_attr(nlh, mod, CGW_TYPE_CAN_CAN, &ccgw, &limhops); 1086 + if (err < 0) 1087 + goto out_free_cf; 1088 + 1089 + if (mod->uid) { 1101 1090 ASSERT_RTNL(); 1102 1091 1103 1092 /* check for updating an existing job with identical uid */ 1104 1093 hlist_for_each_entry(gwj, &net->can.cgw_list, list) { 1105 - if (gwj->mod.uid != mod.uid) 1094 + struct cf_mod *old_cf; 1095 + 1096 + old_cf = cgw_job_cf_mod(gwj); 1097 + if (old_cf->uid != mod->uid) 1106 1098 continue; 1107 1099 1108 1100 /* interfaces & filters must be identical */ 1109 - if (memcmp(&gwj->ccgw, &ccgw, sizeof(ccgw))) 1110 - return -EINVAL; 1101 + if (memcmp(&gwj->ccgw, &ccgw, sizeof(ccgw))) { 1102 + err = -EINVAL; 1103 + goto out_free_cf; 1104 + } 1111 1105 1112 - /* update modifications with disabled softirq & quit */ 1113 - local_bh_disable(); 1114 - memcpy(&gwj->mod, &mod, sizeof(mod)); 1115 - local_bh_enable(); 1106 + rcu_assign_pointer(gwj->cf_mod, mod); 1107 + kfree_rcu_mightsleep(old_cf); 1116 1108 return 0; 1117 1109 } 1118 1110 } 1119 1111 1120 1112 /* ifindex == 0 is not allowed for job creation */ 1121 - if (!ccgw.src_idx || !ccgw.dst_idx) 1122 - return -ENODEV; 1113 + if (!ccgw.src_idx || !ccgw.dst_idx) { 1114 + err = -ENODEV; 1115 + goto out_free_cf; 1116 + } 1123 1117 1124 1118 gwj = kmem_cache_alloc(cgw_cache, GFP_KERNEL); 1125 - if (!gwj) 1126 - return -ENOMEM; 1119 + if (!gwj) { 1120 + err = -ENOMEM; 1121 + goto out_free_cf; 1122 + } 1127 1123 1128 1124 gwj->handled_frames = 0; 1129 1125 gwj->dropped_frames = 0; ··· 1144 1118 gwj->limit_hops = limhops; 1145 1119 1146 1120 /* insert already parsed information */ 1147 - memcpy(&gwj->mod, &mod, sizeof(mod)); 1121 + RCU_INIT_POINTER(gwj->cf_mod, mod); 1148 1122 memcpy(&gwj->ccgw, &ccgw, sizeof(ccgw)); 1149 1123 1150 1124 err = -ENODEV; ··· 1178 1152 if (!err) 1179 1153 hlist_add_head_rcu(&gwj->list, &net->can.cgw_list); 1180 1154 out: 1181 - if (err) 1155 + if (err) { 1182 1156 kmem_cache_free(cgw_cache, gwj); 1183 - 1157 + out_free_cf: 1158 + kfree(mod); 1159 + } 1184 1160 return err; 1185 1161 } 1186 1162 ··· 1242 1214 1243 1215 /* remove only the first matching entry */ 1244 1216 hlist_for_each_entry_safe(gwj, nx, &net->can.cgw_list, list) { 1217 + struct cf_mod *cf_mod; 1218 + 1245 1219 if (gwj->flags != r->flags) 1246 1220 continue; 1247 1221 1248 1222 if (gwj->limit_hops != limhops) 1249 1223 continue; 1250 1224 1225 + cf_mod = cgw_job_cf_mod(gwj); 1251 1226 /* we have a match when uid is enabled and identical */ 1252 - if (gwj->mod.uid || mod.uid) { 1253 - if (gwj->mod.uid != mod.uid) 1227 + if (cf_mod->uid || mod.uid) { 1228 + if (cf_mod->uid != mod.uid) 1254 1229 continue; 1255 1230 } else { 1256 1231 /* no uid => check for identical modifications */ 1257 - if (memcmp(&gwj->mod, &mod, sizeof(mod))) 1232 + if (memcmp(cf_mod, &mod, sizeof(mod))) 1258 1233 continue; 1259 1234 } 1260 1235
+4 -14
net/core/dev.c
··· 9193 9193 return 0; 9194 9194 } 9195 9195 9196 - /** 9197 - * dev_set_promiscuity - update promiscuity count on a device 9198 - * @dev: device 9199 - * @inc: modifier 9200 - * 9201 - * Add or remove promiscuity from a device. While the count in the device 9202 - * remains above zero the interface remains promiscuous. Once it hits zero 9203 - * the device reverts back to normal filtering operation. A negative inc 9204 - * value is used to drop promiscuity on the device. 9205 - * Return 0 if successful or a negative errno code on error. 9206 - */ 9207 - int dev_set_promiscuity(struct net_device *dev, int inc) 9196 + int netif_set_promiscuity(struct net_device *dev, int inc) 9208 9197 { 9209 9198 unsigned int old_flags = dev->flags; 9210 9199 int err; ··· 9205 9216 dev_set_rx_mode(dev); 9206 9217 return err; 9207 9218 } 9208 - EXPORT_SYMBOL(dev_set_promiscuity); 9209 9219 9210 9220 int netif_set_allmulti(struct net_device *dev, int inc, bool notify) 9211 9221 { ··· 11954 11966 struct sk_buff *skb = NULL; 11955 11967 11956 11968 /* Shutdown queueing discipline. */ 11969 + netdev_lock_ops(dev); 11957 11970 dev_shutdown(dev); 11958 11971 dev_tcx_uninstall(dev); 11959 - netdev_lock_ops(dev); 11960 11972 dev_xdp_uninstall(dev); 11961 11973 dev_memory_provider_uninstall(dev); 11962 11974 netdev_unlock_ops(dev); ··· 12149 12161 synchronize_net(); 12150 12162 12151 12163 /* Shutdown queueing discipline. */ 12164 + netdev_lock_ops(dev); 12152 12165 dev_shutdown(dev); 12166 + netdev_unlock_ops(dev); 12153 12167 12154 12168 /* Notify protocols, that we are about to destroy 12155 12169 * this device. They should clean all the things.
+23
net/core/dev_api.c
··· 268 268 EXPORT_SYMBOL(dev_disable_lro); 269 269 270 270 /** 271 + * dev_set_promiscuity() - update promiscuity count on a device 272 + * @dev: device 273 + * @inc: modifier 274 + * 275 + * Add or remove promiscuity from a device. While the count in the device 276 + * remains above zero the interface remains promiscuous. Once it hits zero 277 + * the device reverts back to normal filtering operation. A negative inc 278 + * value is used to drop promiscuity on the device. 279 + * Return 0 if successful or a negative errno code on error. 280 + */ 281 + int dev_set_promiscuity(struct net_device *dev, int inc) 282 + { 283 + int ret; 284 + 285 + netdev_lock_ops(dev); 286 + ret = netif_set_promiscuity(dev, inc); 287 + netdev_unlock_ops(dev); 288 + 289 + return ret; 290 + } 291 + EXPORT_SYMBOL(dev_set_promiscuity); 292 + 293 + /** 271 294 * dev_set_allmulti() - update allmulti count on a device 272 295 * @dev: device 273 296 * @inc: modifier
+1
net/core/filter.c
··· 2509 2509 goto out_drop; 2510 2510 skb->dev = dev; 2511 2511 dev_sw_netstats_rx_add(dev, skb->len); 2512 + skb_scrub_packet(skb, false); 2512 2513 return -EAGAIN; 2513 2514 } 2514 2515 return flags & BPF_F_NEIGH ?
+50 -19
net/core/netdev-genl.c
··· 708 708 return 0; 709 709 } 710 710 711 + /** 712 + * netdev_stat_queue_sum() - add up queue stats from range of queues 713 + * @netdev: net_device 714 + * @rx_start: index of the first Rx queue to query 715 + * @rx_end: index after the last Rx queue (first *not* to query) 716 + * @rx_sum: output Rx stats, should be already initialized 717 + * @tx_start: index of the first Tx queue to query 718 + * @tx_end: index after the last Tx queue (first *not* to query) 719 + * @tx_sum: output Tx stats, should be already initialized 720 + * 721 + * Add stats from [start, end) range of queue IDs to *x_sum structs. 722 + * The sum structs must be already initialized. Usually this 723 + * helper is invoked from the .get_base_stats callbacks of drivers 724 + * to account for stats of disabled queues. In that case the ranges 725 + * are usually [netdev->real_num_*x_queues, netdev->num_*x_queues). 726 + */ 727 + void netdev_stat_queue_sum(struct net_device *netdev, 728 + int rx_start, int rx_end, 729 + struct netdev_queue_stats_rx *rx_sum, 730 + int tx_start, int tx_end, 731 + struct netdev_queue_stats_tx *tx_sum) 732 + { 733 + const struct netdev_stat_ops *ops; 734 + struct netdev_queue_stats_rx rx; 735 + struct netdev_queue_stats_tx tx; 736 + int i; 737 + 738 + ops = netdev->stat_ops; 739 + 740 + for (i = rx_start; i < rx_end; i++) { 741 + memset(&rx, 0xff, sizeof(rx)); 742 + if (ops->get_queue_stats_rx) 743 + ops->get_queue_stats_rx(netdev, i, &rx); 744 + netdev_nl_stats_add(rx_sum, &rx, sizeof(rx)); 745 + } 746 + for (i = tx_start; i < tx_end; i++) { 747 + memset(&tx, 0xff, sizeof(tx)); 748 + if (ops->get_queue_stats_tx) 749 + ops->get_queue_stats_tx(netdev, i, &tx); 750 + netdev_nl_stats_add(tx_sum, &tx, sizeof(tx)); 751 + } 752 + } 753 + EXPORT_SYMBOL(netdev_stat_queue_sum); 754 + 711 755 static int 712 756 netdev_nl_stats_by_netdev(struct net_device *netdev, struct sk_buff *rsp, 713 757 const struct genl_info *info) 714 758 { 715 - struct netdev_queue_stats_rx rx_sum, rx; 716 - struct netdev_queue_stats_tx tx_sum, tx; 717 - const struct netdev_stat_ops *ops; 759 + struct netdev_queue_stats_rx rx_sum; 760 + struct netdev_queue_stats_tx tx_sum; 718 761 void *hdr; 719 - int i; 720 762 721 - ops = netdev->stat_ops; 722 763 /* Netdev can't guarantee any complete counters */ 723 - if (!ops->get_base_stats) 764 + if (!netdev->stat_ops->get_base_stats) 724 765 return 0; 725 766 726 767 memset(&rx_sum, 0xff, sizeof(rx_sum)); 727 768 memset(&tx_sum, 0xff, sizeof(tx_sum)); 728 769 729 - ops->get_base_stats(netdev, &rx_sum, &tx_sum); 770 + netdev->stat_ops->get_base_stats(netdev, &rx_sum, &tx_sum); 730 771 731 772 /* The op was there, but nothing reported, don't bother */ 732 773 if (!memchr_inv(&rx_sum, 0xff, sizeof(rx_sum)) && ··· 780 739 if (nla_put_u32(rsp, NETDEV_A_QSTATS_IFINDEX, netdev->ifindex)) 781 740 goto nla_put_failure; 782 741 783 - for (i = 0; i < netdev->real_num_rx_queues; i++) { 784 - memset(&rx, 0xff, sizeof(rx)); 785 - if (ops->get_queue_stats_rx) 786 - ops->get_queue_stats_rx(netdev, i, &rx); 787 - netdev_nl_stats_add(&rx_sum, &rx, sizeof(rx)); 788 - } 789 - for (i = 0; i < netdev->real_num_tx_queues; i++) { 790 - memset(&tx, 0xff, sizeof(tx)); 791 - if (ops->get_queue_stats_tx) 792 - ops->get_queue_stats_tx(netdev, i, &tx); 793 - netdev_nl_stats_add(&tx_sum, &tx, sizeof(tx)); 794 - } 742 + netdev_stat_queue_sum(netdev, 0, netdev->real_num_rx_queues, &rx_sum, 743 + 0, netdev->real_num_tx_queues, &tx_sum); 795 744 796 745 if (netdev_nl_stats_write_rx(rsp, &rx_sum) || 797 746 netdev_nl_stats_write_tx(rsp, &tx_sum))
+9 -6
net/ipv6/addrconf.c
··· 3214 3214 struct in6_addr addr; 3215 3215 struct net_device *dev; 3216 3216 struct net *net = dev_net(idev->dev); 3217 - int scope, plen, offset = 0; 3217 + int scope, plen; 3218 3218 u32 pflags = 0; 3219 3219 3220 3220 ASSERT_RTNL(); 3221 3221 3222 3222 memset(&addr, 0, sizeof(struct in6_addr)); 3223 - /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */ 3224 - if (idev->dev->addr_len == sizeof(struct in6_addr)) 3225 - offset = sizeof(struct in6_addr) - 4; 3226 - memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4); 3223 + memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4); 3227 3224 3228 3225 if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) { 3229 3226 scope = IPV6_ADDR_COMPATv4; ··· 3531 3534 return; 3532 3535 } 3533 3536 3534 - if (dev->type == ARPHRD_ETHER) { 3537 + /* Generate the IPv6 link-local address using addrconf_addr_gen(), 3538 + * unless we have an IPv4 GRE device not bound to an IP address and 3539 + * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this 3540 + * case). Such devices fall back to add_v4_addrs() instead. 3541 + */ 3542 + if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 && 3543 + idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) { 3535 3544 addrconf_addr_gen(idev, true); 3536 3545 return; 3537 3546 }
+6 -6
net/mac80211/mlme.c
··· 7675 7675 int hdr_len = offsetofend(struct ieee80211_mgmt, u.action.u.ttlm_res); 7676 7676 int ttlm_max_len = 2 + 1 + sizeof(struct ieee80211_ttlm_elem) + 1 + 7677 7677 2 * 2 * IEEE80211_TTLM_NUM_TIDS; 7678 + u16 status_code; 7678 7679 7679 7680 skb = dev_alloc_skb(local->tx_headroom + hdr_len + ttlm_max_len); 7680 7681 if (!skb) ··· 7698 7697 WARN_ON(1); 7699 7698 fallthrough; 7700 7699 case NEG_TTLM_RES_REJECT: 7701 - mgmt->u.action.u.ttlm_res.status_code = 7702 - WLAN_STATUS_DENIED_TID_TO_LINK_MAPPING; 7700 + status_code = WLAN_STATUS_DENIED_TID_TO_LINK_MAPPING; 7703 7701 break; 7704 7702 case NEG_TTLM_RES_ACCEPT: 7705 - mgmt->u.action.u.ttlm_res.status_code = WLAN_STATUS_SUCCESS; 7703 + status_code = WLAN_STATUS_SUCCESS; 7706 7704 break; 7707 7705 case NEG_TTLM_RES_SUGGEST_PREFERRED: 7708 - mgmt->u.action.u.ttlm_res.status_code = 7709 - WLAN_STATUS_PREF_TID_TO_LINK_MAPPING_SUGGESTED; 7706 + status_code = WLAN_STATUS_PREF_TID_TO_LINK_MAPPING_SUGGESTED; 7710 7707 ieee80211_neg_ttlm_add_suggested_map(skb, neg_ttlm); 7711 7708 break; 7712 7709 } 7713 7710 7711 + mgmt->u.action.u.ttlm_res.status_code = cpu_to_le16(status_code); 7714 7712 ieee80211_tx_skb(sdata, skb); 7715 7713 } 7716 7714 ··· 7875 7875 * This can be better implemented in the future, to handle request 7876 7876 * rejections. 7877 7877 */ 7878 - if (mgmt->u.action.u.ttlm_res.status_code != WLAN_STATUS_SUCCESS) 7878 + if (le16_to_cpu(mgmt->u.action.u.ttlm_res.status_code) != WLAN_STATUS_SUCCESS) 7879 7879 __ieee80211_disconnect(sdata); 7880 7880 } 7881 7881
+1 -1
net/netfilter/ipset/ip_set_hash_gen.h
··· 64 64 #define ahash_sizeof_regions(htable_bits) \ 65 65 (ahash_numof_locks(htable_bits) * sizeof(struct ip_set_region)) 66 66 #define ahash_region(n, htable_bits) \ 67 - ((n) % ahash_numof_locks(htable_bits)) 67 + ((n) / jhash_size(HTABLE_REGION_BITS)) 68 68 #define ahash_bucket_start(h, htable_bits) \ 69 69 ((htable_bits) < HTABLE_REGION_BITS ? 0 \ 70 70 : (h) * jhash_size(HTABLE_REGION_BITS))
+8 -19
net/netfilter/ipvs/ip_vs_xmit.c
··· 119 119 return false; 120 120 } 121 121 122 - /* Get route to daddr, update *saddr, optionally bind route to saddr */ 122 + /* Get route to daddr, optionally bind route to saddr */ 123 123 static struct rtable *do_output_route4(struct net *net, __be32 daddr, 124 - int rt_mode, __be32 *saddr) 124 + int rt_mode, __be32 *ret_saddr) 125 125 { 126 126 struct flowi4 fl4; 127 127 struct rtable *rt; 128 - bool loop = false; 129 128 130 129 memset(&fl4, 0, sizeof(fl4)); 131 130 fl4.daddr = daddr; ··· 134 135 retry: 135 136 rt = ip_route_output_key(net, &fl4); 136 137 if (IS_ERR(rt)) { 137 - /* Invalid saddr ? */ 138 - if (PTR_ERR(rt) == -EINVAL && *saddr && 139 - rt_mode & IP_VS_RT_MODE_CONNECT && !loop) { 140 - *saddr = 0; 141 - flowi4_update_output(&fl4, 0, daddr, 0); 142 - goto retry; 143 - } 144 138 IP_VS_DBG_RL("ip_route_output error, dest: %pI4\n", &daddr); 145 139 return NULL; 146 - } else if (!*saddr && rt_mode & IP_VS_RT_MODE_CONNECT && fl4.saddr) { 140 + } 141 + if (rt_mode & IP_VS_RT_MODE_CONNECT && fl4.saddr) { 147 142 ip_rt_put(rt); 148 - *saddr = fl4.saddr; 149 143 flowi4_update_output(&fl4, 0, daddr, fl4.saddr); 150 - loop = true; 144 + rt_mode = 0; 151 145 goto retry; 152 146 } 153 - *saddr = fl4.saddr; 147 + if (ret_saddr) 148 + *ret_saddr = fl4.saddr; 154 149 return rt; 155 150 } 156 151 ··· 337 344 if (ret_saddr) 338 345 *ret_saddr = dest_dst->dst_saddr.ip; 339 346 } else { 340 - __be32 saddr = htonl(INADDR_ANY); 341 - 342 347 noref = 0; 343 348 344 349 /* For such unconfigured boxes avoid many route lookups 345 350 * for performance reasons because we do not remember saddr 346 351 */ 347 352 rt_mode &= ~IP_VS_RT_MODE_CONNECT; 348 - rt = do_output_route4(net, daddr, rt_mode, &saddr); 353 + rt = do_output_route4(net, daddr, rt_mode, ret_saddr); 349 354 if (!rt) 350 355 goto err_unreach; 351 - if (ret_saddr) 352 - *ret_saddr = saddr; 353 356 } 354 357 355 358 local = (rt->rt_flags & RTCF_LOCAL) ? 1 : 0;
+1 -2
net/openvswitch/actions.c
··· 975 975 upcall.cmd = OVS_PACKET_CMD_ACTION; 976 976 upcall.mru = OVS_CB(skb)->mru; 977 977 978 - for (a = nla_data(attr), rem = nla_len(attr); rem > 0; 979 - a = nla_next(a, &rem)) { 978 + nla_for_each_nested(a, attr, rem) { 980 979 switch (nla_type(a)) { 981 980 case OVS_USERSPACE_ATTR_USERDATA: 982 981 upcall.userdata = a;
+6 -9
net/sched/sch_htb.c
··· 348 348 */ 349 349 static inline void htb_next_rb_node(struct rb_node **n) 350 350 { 351 - *n = rb_next(*n); 351 + if (*n) 352 + *n = rb_next(*n); 352 353 } 353 354 354 355 /** ··· 610 609 */ 611 610 static inline void htb_deactivate(struct htb_sched *q, struct htb_class *cl) 612 611 { 613 - WARN_ON(!cl->prio_activity); 614 - 612 + if (!cl->prio_activity) 613 + return; 615 614 htb_deactivate_prios(q, cl); 616 615 cl->prio_activity = 0; 617 616 } ··· 1486 1485 { 1487 1486 struct htb_class *cl = (struct htb_class *)arg; 1488 1487 1489 - if (!cl->prio_activity) 1490 - return; 1491 1488 htb_deactivate(qdisc_priv(sch), cl); 1492 1489 } 1493 1490 ··· 1739 1740 if (cl->parent) 1740 1741 cl->parent->children--; 1741 1742 1742 - if (cl->prio_activity) 1743 - htb_deactivate(q, cl); 1743 + htb_deactivate(q, cl); 1744 1744 1745 1745 if (cl->cmode != HTB_CAN_SEND) 1746 1746 htb_safe_rb_erase(&cl->pq_node, ··· 1947 1949 /* turn parent into inner node */ 1948 1950 qdisc_purge_queue(parent->leaf.q); 1949 1951 parent_qdisc = parent->leaf.q; 1950 - if (parent->prio_activity) 1951 - htb_deactivate(q, parent); 1952 + htb_deactivate(q, parent); 1952 1953 1953 1954 /* remove from evt list because of level change */ 1954 1955 if (parent->cmode != HTB_CAN_SEND) {
+1 -1
net/wireless/scan.c
··· 2681 2681 /* Required length for first defragmentation */ 2682 2682 buf_len = mle->datalen - 1; 2683 2683 for_each_element(elem, mle->data + mle->datalen, 2684 - ielen - sizeof(*mle) + mle->datalen) { 2684 + ie + ielen - mle->data - mle->datalen) { 2685 2685 if (elem->id != WLAN_EID_FRAGMENT) 2686 2686 break; 2687 2687
+1
rust/bindings/lib.rs
··· 26 26 27 27 #[allow(dead_code)] 28 28 #[allow(clippy::undocumented_unsafe_blocks)] 29 + #[cfg_attr(CONFIG_RUSTC_HAS_UNNECESSARY_TRANSMUTES, allow(unnecessary_transmutes))] 29 30 mod bindings_raw { 30 31 // Manual definition for blocklisted types. 31 32 type __kernel_size_t = usize;
+3
rust/kernel/alloc/kvec.rs
··· 2 2 3 3 //! Implementation of [`Vec`]. 4 4 5 + // May not be needed in Rust 1.87.0 (pending beta backport). 6 + #![allow(clippy::ptr_eq)] 7 + 5 8 use super::{ 6 9 allocator::{KVmalloc, Kmalloc, Vmalloc}, 7 10 layout::ArrayLayout,
+3
rust/kernel/list.rs
··· 4 4 5 5 //! A linked list implementation. 6 6 7 + // May not be needed in Rust 1.87.0 (pending beta backport). 8 + #![allow(clippy::ptr_eq)] 9 + 7 10 use crate::sync::ArcBorrow; 8 11 use crate::types::Opaque; 9 12 use core::iter::{DoubleEndedIterator, FusedIterator};
+23 -23
rust/kernel/str.rs
··· 73 73 b'\r' => f.write_str("\\r")?, 74 74 // Printable characters. 75 75 0x20..=0x7e => f.write_char(b as char)?, 76 - _ => write!(f, "\\x{:02x}", b)?, 76 + _ => write!(f, "\\x{b:02x}")?, 77 77 } 78 78 } 79 79 Ok(()) ··· 109 109 b'\\' => f.write_str("\\\\")?, 110 110 // Printable characters. 111 111 0x20..=0x7e => f.write_char(b as char)?, 112 - _ => write!(f, "\\x{:02x}", b)?, 112 + _ => write!(f, "\\x{b:02x}")?, 113 113 } 114 114 } 115 115 f.write_char('"') ··· 447 447 // Printable character. 448 448 f.write_char(c as char)?; 449 449 } else { 450 - write!(f, "\\x{:02x}", c)?; 450 + write!(f, "\\x{c:02x}")?; 451 451 } 452 452 } 453 453 Ok(()) ··· 479 479 // Printable characters. 480 480 b'\"' => f.write_str("\\\"")?, 481 481 0x20..=0x7e => f.write_char(c as char)?, 482 - _ => write!(f, "\\x{:02x}", c)?, 482 + _ => write!(f, "\\x{c:02x}")?, 483 483 } 484 484 } 485 485 f.write_str("\"") ··· 641 641 #[test] 642 642 fn test_cstr_display() { 643 643 let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0").unwrap(); 644 - assert_eq!(format!("{}", hello_world), "hello, world!"); 644 + assert_eq!(format!("{hello_world}"), "hello, world!"); 645 645 let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0").unwrap(); 646 - assert_eq!(format!("{}", non_printables), "\\x01\\x09\\x0a"); 646 + assert_eq!(format!("{non_printables}"), "\\x01\\x09\\x0a"); 647 647 let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0").unwrap(); 648 - assert_eq!(format!("{}", non_ascii), "d\\xe9j\\xe0 vu"); 648 + assert_eq!(format!("{non_ascii}"), "d\\xe9j\\xe0 vu"); 649 649 let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0").unwrap(); 650 - assert_eq!(format!("{}", good_bytes), "\\xf0\\x9f\\xa6\\x80"); 650 + assert_eq!(format!("{good_bytes}"), "\\xf0\\x9f\\xa6\\x80"); 651 651 } 652 652 653 653 #[test] ··· 658 658 bytes[i as usize] = i.wrapping_add(1); 659 659 } 660 660 let cstr = CStr::from_bytes_with_nul(&bytes).unwrap(); 661 - assert_eq!(format!("{}", cstr), 
ALL_ASCII_CHARS); 661 + assert_eq!(format!("{cstr}"), ALL_ASCII_CHARS); 662 662 } 663 663 664 664 #[test] 665 665 fn test_cstr_debug() { 666 666 let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0").unwrap(); 667 - assert_eq!(format!("{:?}", hello_world), "\"hello, world!\""); 667 + assert_eq!(format!("{hello_world:?}"), "\"hello, world!\""); 668 668 let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0").unwrap(); 669 - assert_eq!(format!("{:?}", non_printables), "\"\\x01\\x09\\x0a\""); 669 + assert_eq!(format!("{non_printables:?}"), "\"\\x01\\x09\\x0a\""); 670 670 let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0").unwrap(); 671 - assert_eq!(format!("{:?}", non_ascii), "\"d\\xe9j\\xe0 vu\""); 671 + assert_eq!(format!("{non_ascii:?}"), "\"d\\xe9j\\xe0 vu\""); 672 672 let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0").unwrap(); 673 - assert_eq!(format!("{:?}", good_bytes), "\"\\xf0\\x9f\\xa6\\x80\""); 673 + assert_eq!(format!("{good_bytes:?}"), "\"\\xf0\\x9f\\xa6\\x80\""); 674 674 } 675 675 676 676 #[test] 677 677 fn test_bstr_display() { 678 678 let hello_world = BStr::from_bytes(b"hello, world!"); 679 - assert_eq!(format!("{}", hello_world), "hello, world!"); 679 + assert_eq!(format!("{hello_world}"), "hello, world!"); 680 680 let escapes = BStr::from_bytes(b"_\t_\n_\r_\\_\'_\"_"); 681 - assert_eq!(format!("{}", escapes), "_\\t_\\n_\\r_\\_'_\"_"); 681 + assert_eq!(format!("{escapes}"), "_\\t_\\n_\\r_\\_'_\"_"); 682 682 let others = BStr::from_bytes(b"\x01"); 683 - assert_eq!(format!("{}", others), "\\x01"); 683 + assert_eq!(format!("{others}"), "\\x01"); 684 684 let non_ascii = BStr::from_bytes(b"d\xe9j\xe0 vu"); 685 - assert_eq!(format!("{}", non_ascii), "d\\xe9j\\xe0 vu"); 685 + assert_eq!(format!("{non_ascii}"), "d\\xe9j\\xe0 vu"); 686 686 let good_bytes = BStr::from_bytes(b"\xf0\x9f\xa6\x80"); 687 - assert_eq!(format!("{}", good_bytes), "\\xf0\\x9f\\xa6\\x80"); 687 + assert_eq!(format!("{good_bytes}"), 
"\\xf0\\x9f\\xa6\\x80"); 688 688 } 689 689 690 690 #[test] 691 691 fn test_bstr_debug() { 692 692 let hello_world = BStr::from_bytes(b"hello, world!"); 693 - assert_eq!(format!("{:?}", hello_world), "\"hello, world!\""); 693 + assert_eq!(format!("{hello_world:?}"), "\"hello, world!\""); 694 694 let escapes = BStr::from_bytes(b"_\t_\n_\r_\\_\'_\"_"); 695 - assert_eq!(format!("{:?}", escapes), "\"_\\t_\\n_\\r_\\\\_'_\\\"_\""); 695 + assert_eq!(format!("{escapes:?}"), "\"_\\t_\\n_\\r_\\\\_'_\\\"_\""); 696 696 let others = BStr::from_bytes(b"\x01"); 697 - assert_eq!(format!("{:?}", others), "\"\\x01\""); 697 + assert_eq!(format!("{others:?}"), "\"\\x01\""); 698 698 let non_ascii = BStr::from_bytes(b"d\xe9j\xe0 vu"); 699 - assert_eq!(format!("{:?}", non_ascii), "\"d\\xe9j\\xe0 vu\""); 699 + assert_eq!(format!("{non_ascii:?}"), "\"d\\xe9j\\xe0 vu\""); 700 700 let good_bytes = BStr::from_bytes(b"\xf0\x9f\xa6\x80"); 701 - assert_eq!(format!("{:?}", good_bytes), "\"\\xf0\\x9f\\xa6\\x80\""); 701 + assert_eq!(format!("{good_bytes:?}"), "\"\\xf0\\x9f\\xa6\\x80\""); 702 702 } 703 703 } 704 704
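The str.rs hunks above all apply one mechanical cleanup: moving positional `format!`/`write!` arguments into the format string as captured identifiers (stable since Rust 1.58). A standalone sketch of the pattern, with escape logic modeled loosely on the `CStr` `Display` code in the diff (this is an illustration, not the kernel implementation):

```rust
// Demonstrates captured identifiers in format strings: "\\x{b:02x}" names the
// in-scope variable `b` directly instead of the positional "{:02x}", b form.
fn escape_byte(b: u8) -> String {
    match b {
        b'\t' => String::from("\\t"),
        b'\n' => String::from("\\n"),
        // Printable ASCII passes through unchanged.
        0x20..=0x7e => (b as char).to_string(),
        // One placeholder carries both the captured identifier and the spec.
        _ => format!("\\x{b:02x}"),
    }
}

fn main() {
    assert_eq!(escape_byte(b'A'), "A");
    assert_eq!(escape_byte(0x01), "\\x01");
    assert_eq!(escape_byte(0xe9), "\\xe9");
    println!("{}", escape_byte(0xe9)); // prints \xe9
}
```

The captured form is what Clippy's `uninlined_format_args` lint suggests, which is why the series can collapse multi-line `panic!`/`assert_eq!` calls into single lines.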
+4 -9
rust/macros/kunit.rs
··· 15 15 } 16 16 17 17 if attr.len() > 255 { 18 - panic!( 19 - "The test suite name `{}` exceeds the maximum length of 255 bytes", 20 - attr 21 - ) 18 + panic!("The test suite name `{attr}` exceeds the maximum length of 255 bytes") 22 19 } 23 20 24 21 let mut tokens: Vec<_> = ts.into_iter().collect(); ··· 99 102 let mut kunit_macros = "".to_owned(); 100 103 let mut test_cases = "".to_owned(); 101 104 for test in &tests { 102 - let kunit_wrapper_fn_name = format!("kunit_rust_wrapper_{}", test); 105 + let kunit_wrapper_fn_name = format!("kunit_rust_wrapper_{test}"); 103 106 let kunit_wrapper = format!( 104 - "unsafe extern \"C\" fn {}(_test: *mut kernel::bindings::kunit) {{ {}(); }}", 105 - kunit_wrapper_fn_name, test 107 + "unsafe extern \"C\" fn {kunit_wrapper_fn_name}(_test: *mut kernel::bindings::kunit) {{ {test}(); }}" 106 108 ); 107 109 writeln!(kunit_macros, "{kunit_wrapper}").unwrap(); 108 110 writeln!( 109 111 test_cases, 110 - " kernel::kunit::kunit_case(kernel::c_str!(\"{}\"), {}),", 111 - test, kunit_wrapper_fn_name 112 + " kernel::kunit::kunit_case(kernel::c_str!(\"{test}\"), {kunit_wrapper_fn_name})," 112 113 ) 113 114 .unwrap(); 114 115 }
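The kunit.rs hunk above builds one C-ABI wrapper function per Rust test by string concatenation inside the proc macro. A userspace sketch of that string building, with the kernel types out of reach so the generated text is only inspected, never compiled:

```rust
// Models the wrapper-generation loop from the #[kunit_tests] macro: for each
// test name, emit an `unsafe extern "C"` stub that calls the Rust test.
use std::fmt::Write;

fn generate_wrappers(tests: &[&str]) -> String {
    let mut kunit_macros = String::new();
    for test in tests {
        let kunit_wrapper_fn_name = format!("kunit_rust_wrapper_{test}");
        // Single format! call with both names inlined, as in the diff.
        let kunit_wrapper = format!(
            "unsafe extern \"C\" fn {kunit_wrapper_fn_name}(_test: *mut kernel::bindings::kunit) {{ {test}(); }}"
        );
        writeln!(kunit_macros, "{kunit_wrapper}").unwrap();
    }
    kunit_macros
}

fn main() {
    let out = generate_wrappers(&["test_cstr_display", "test_bstr_debug"]);
    assert!(out.contains("kunit_rust_wrapper_test_cstr_display"));
    assert!(out.contains("test_bstr_debug();"));
    print!("{out}");
}
```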
+5 -14
rust/macros/module.rs
··· 48 48 ) 49 49 } else { 50 50 // Loadable modules' modinfo strings go as-is. 51 - format!("{field}={content}\0", field = field, content = content) 51 + format!("{field}={content}\0") 52 52 }; 53 53 54 54 write!( ··· 126 126 }; 127 127 128 128 if seen_keys.contains(&key) { 129 - panic!( 130 - "Duplicated key \"{}\". Keys can only be specified once.", 131 - key 132 - ); 129 + panic!("Duplicated key \"{key}\". Keys can only be specified once."); 133 130 } 134 131 135 132 assert_eq!(expect_punct(it), ':'); ··· 140 143 "license" => info.license = expect_string_ascii(it), 141 144 "alias" => info.alias = Some(expect_string_array(it)), 142 145 "firmware" => info.firmware = Some(expect_string_array(it)), 143 - _ => panic!( 144 - "Unknown key \"{}\". Valid keys are: {:?}.", 145 - key, EXPECTED_KEYS 146 - ), 146 + _ => panic!("Unknown key \"{key}\". Valid keys are: {EXPECTED_KEYS:?}."), 147 147 } 148 148 149 149 assert_eq!(expect_punct(it), ','); ··· 152 158 153 159 for key in REQUIRED_KEYS { 154 160 if !seen_keys.iter().any(|e| e == key) { 155 - panic!("Missing required key \"{}\".", key); 161 + panic!("Missing required key \"{key}\"."); 156 162 } 157 163 } 158 164 ··· 164 170 } 165 171 166 172 if seen_keys != ordered_keys { 167 - panic!( 168 - "Keys are not ordered as expected. Order them like: {:?}.", 169 - ordered_keys 170 - ); 173 + panic!("Keys are not ordered as expected. Order them like: {ordered_keys:?}."); 171 174 } 172 175 173 176 info
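The module.rs hunk tightens three diagnostics in the `module!()` key parser: duplicated keys, unknown keys, and missing required keys. A minimal model of those checks follows; the key lists here are stand-in subsets, and errors are returned rather than raised with `panic!` as the macro does, so the messages can be asserted on:

```rust
// Stand-in key lists for illustration; the kernel macro's real lists differ.
const EXPECTED_KEYS: &[&str] = &["type", "name", "authors", "license"];
const REQUIRED_KEYS: &[&str] = &["type", "name", "license"];

fn check_keys(seen_keys: &[&str]) -> Result<(), String> {
    let mut seen: Vec<&str> = Vec::new();
    for &key in seen_keys {
        if seen.contains(&key) {
            // Same wording as the diff's single-line panic! message.
            return Err(format!("Duplicated key \"{key}\". Keys can only be specified once."));
        }
        if !EXPECTED_KEYS.contains(&key) {
            return Err(format!("Unknown key \"{key}\". Valid keys are: {EXPECTED_KEYS:?}."));
        }
        seen.push(key);
    }
    for key in REQUIRED_KEYS {
        if !seen.iter().any(|e| e == key) {
            return Err(format!("Missing required key \"{key}\"."));
        }
    }
    Ok(())
}

fn main() {
    assert!(check_keys(&["type", "name", "license"]).is_ok());
    assert!(check_keys(&["type", "type"]).unwrap_err().starts_with("Duplicated"));
    assert!(check_keys(&["type", "name"]).unwrap_err().starts_with("Missing"));
    println!("ok");
}
```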
+1 -1
rust/macros/paste.rs
··· 50 50 let tokens = group.stream().into_iter().collect::<Vec<TokenTree>>(); 51 51 segments.append(&mut concat_helper(tokens.as_slice())); 52 52 } 53 - token => panic!("unexpected token in paste segments: {:?}", token), 53 + token => panic!("unexpected token in paste segments: {token:?}"), 54 54 }; 55 55 } 56 56
+1 -2
rust/pin-init/internal/src/pinned_drop.rs
··· 28 28 // Found the end of the generics, this should be `PinnedDrop`. 29 29 assert!( 30 30 matches!(tt, TokenTree::Ident(i) if i.to_string() == "PinnedDrop"), 31 - "expected 'PinnedDrop', found: '{:?}'", 32 - tt 31 + "expected 'PinnedDrop', found: '{tt:?}'" 33 32 ); 34 33 pinned_drop_idx = Some(i); 35 34 break;
+1
rust/uapi/lib.rs
··· 24 24 unreachable_pub, 25 25 unsafe_op_in_unsafe_fn 26 26 )] 27 + #![cfg_attr(CONFIG_RUSTC_HAS_UNNECESSARY_TRANSMUTES, allow(unnecessary_transmutes))] 27 28 28 29 // Manual definition of blocklisted types. 29 30 type __kernel_size_t = usize;
+1 -1
scripts/Makefile.vmlinux
··· 13 13 vmlinux-final := vmlinux.unstripped 14 14 15 15 quiet_cmd_strip_relocs = RSTRIP $@ 16 - cmd_strip_relocs = $(OBJCOPY) --remove-section='.rel*' $< $@ 16 + cmd_strip_relocs = $(OBJCOPY) --remove-section='.rel*' --remove-section=!'.rel*.dyn' $< $@ 17 17 18 18 vmlinux: $(vmlinux-final) FORCE 19 19 $(call if_changed,strip_relocs)
+5
sound/soc/codecs/Kconfig
··· 120 120 imply SND_SOC_ES8326 121 121 imply SND_SOC_ES8328_SPI 122 122 imply SND_SOC_ES8328_I2C 123 + imply SND_SOC_ES8389 123 124 imply SND_SOC_ES7134 124 125 imply SND_SOC_ES7241 125 126 imply SND_SOC_FRAMER ··· 1210 1209 tristate "Everest Semi ES8328 CODEC (SPI)" 1211 1210 depends on SPI_MASTER 1212 1211 select SND_SOC_ES8328 1212 + 1213 + config SND_SOC_ES8389 1214 + tristate "Everest Semi ES8389 CODEC" 1215 + depends on I2C 1213 1216 1214 1217 config SND_SOC_FRAMER 1215 1218 tristate "Framer codec"
+2
sound/soc/codecs/Makefile
··· 134 134 snd-soc-es8328-y := es8328.o 135 135 snd-soc-es8328-i2c-y := es8328-i2c.o 136 136 snd-soc-es8328-spi-y := es8328-spi.o 137 + snd-soc-es8389-y := es8389.o 137 138 snd-soc-framer-y := framer-codec.o 138 139 snd-soc-gtm601-y := gtm601.o 139 140 snd-soc-hdac-hdmi-y := hdac_hdmi.o ··· 556 555 obj-$(CONFIG_SND_SOC_ES8328) += snd-soc-es8328.o 557 556 obj-$(CONFIG_SND_SOC_ES8328_I2C)+= snd-soc-es8328-i2c.o 558 557 obj-$(CONFIG_SND_SOC_ES8328_SPI)+= snd-soc-es8328-spi.o 558 + obj-$(CONFIG_SND_SOC_ES8389) += snd-soc-es8389.o 559 559 obj-$(CONFIG_SND_SOC_FRAMER) += snd-soc-framer.o 560 560 obj-$(CONFIG_SND_SOC_GTM601) += snd-soc-gtm601.o 561 561 obj-$(CONFIG_SND_SOC_HDAC_HDMI) += snd-soc-hdac-hdmi.o
+962
sound/soc/codecs/es8389.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * es8389.c -- ES8389 ALSA SoC Audio Codec 4 + * 5 + * Copyright Everest Semiconductor Co., Ltd 6 + * 7 + * Authors: Michael Zhang (zhangyi@everest-semi.com) 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + 14 + #include <linux/clk.h> 15 + #include <linux/module.h> 16 + #include <linux/kernel.h> 17 + #include <linux/delay.h> 18 + #include <linux/i2c.h> 19 + #include <linux/regmap.h> 20 + #include <sound/core.h> 21 + #include <sound/pcm.h> 22 + #include <sound/pcm_params.h> 23 + #include <sound/tlv.h> 24 + #include <sound/soc.h> 25 + 26 + #include "es8389.h" 27 + 28 + 29 + /* codec private data */ 30 + 31 + struct es8389_private { 32 + struct regmap *regmap; 33 + struct clk *mclk; 34 + unsigned int sysclk; 35 + int mastermode; 36 + 37 + u8 mclk_src; 38 + enum snd_soc_bias_level bias_level; 39 + }; 40 + 41 + static bool es8389_volatile_register(struct device *dev, 42 + unsigned int reg) 43 + { 44 + if ((reg <= 0xff)) 45 + return true; 46 + else 47 + return false; 48 + } 49 + 50 + static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -9550, 50, 0); 51 + static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -9550, 50, 0); 52 + static const DECLARE_TLV_DB_SCALE(pga_vol_tlv, 0, 300, 0); 53 + static const DECLARE_TLV_DB_SCALE(mix_vol_tlv, -9500, 100, 0); 54 + static const DECLARE_TLV_DB_SCALE(alc_target_tlv, -3200, 200, 0); 55 + static const DECLARE_TLV_DB_SCALE(alc_max_level, -3200, 200, 0); 56 + 57 + static int es8389_dmic_set(struct snd_kcontrol *kcontrol, 58 + struct snd_ctl_elem_value *ucontrol) 59 + { 60 + struct snd_soc_component *component = snd_soc_dapm_kcontrol_component(kcontrol); 61 + struct snd_soc_dapm_context *dapm = snd_soc_dapm_kcontrol_dapm(kcontrol); 62 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 63 + struct 
soc_enum *e = (struct soc_enum *)kcontrol->private_value; 64 + unsigned int val; 65 + bool changed1, changed2; 66 + 67 + val = ucontrol->value.integer.value[0]; 68 + if (val > 1) 69 + return -EINVAL; 70 + 71 + if (val) { 72 + regmap_update_bits_check(es8389->regmap, ES8389_DMIC_EN, 0xC0, 0xC0, &changed1); 73 + regmap_update_bits_check(es8389->regmap, ES8389_ADC_MODE, 0x03, 0x03, &changed2); 74 + } else { 75 + regmap_update_bits_check(es8389->regmap, ES8389_DMIC_EN, 0xC0, 0x00, &changed1); 76 + regmap_update_bits_check(es8389->regmap, ES8389_ADC_MODE, 0x03, 0x00, &changed2); 77 + } 78 + 79 + if (changed1 & changed2) 80 + return snd_soc_dapm_mux_update_power(dapm, kcontrol, val, e, NULL); 81 + else 82 + return 0; 83 + } 84 + 85 + static const char *const alc[] = { 86 + "ALC OFF", 87 + "ADCR ALC ON", 88 + "ADCL ALC ON", 89 + "ADCL & ADCL ALC ON", 90 + }; 91 + 92 + static const char *const ramprate[] = { 93 + "0.125db/1 LRCK", 94 + "0.125db/4 LRCK", 95 + "0.125db/8 LRCK", 96 + "0.125db/16 LRCK", 97 + "0.125db/32 LRCK", 98 + "0.125db/64 LRCK", 99 + "0.125db/128 LRCK", 100 + "0.125db/256 LRCK", 101 + "0.125db/512 LRCK", 102 + "0.125db/1024 LRCK", 103 + "0.125db/2048 LRCK", 104 + "0.125db/4096 LRCK", 105 + "0.125db/8192 LRCK", 106 + "0.125db/16384 LRCK", 107 + "0.125db/32768 LRCK", 108 + "0.125db/65536 LRCK", 109 + }; 110 + 111 + static const char *const winsize[] = { 112 + "2 LRCK", 113 + "4 LRCK", 114 + "8 LRCK", 115 + "16 LRCK", 116 + "32 LRCK", 117 + "64 LRCK", 118 + "128 LRCK", 119 + "256 LRCK", 120 + "512 LRCK", 121 + "1024 LRCK", 122 + "2048 LRCK", 123 + "4096 LRCK", 124 + "8192 LRCK", 125 + "16384 LRCK", 126 + "32768 LRCK", 127 + "65536 LRCK", 128 + }; 129 + 130 + static const struct soc_enum alc_enable = 131 + SOC_ENUM_SINGLE(ES8389_ALC_ON, 5, 4, alc); 132 + static const struct soc_enum alc_ramprate = 133 + SOC_ENUM_SINGLE(ES8389_ALC_CTL, 4, 16, ramprate); 134 + static const struct soc_enum alc_winsize = 135 + SOC_ENUM_SINGLE(ES8389_ALC_CTL, 0, 16, winsize); 136 
+ 137 + static const char *const es8389_outl_mux_txt[] = { 138 + "Normal", 139 + "DAC2 channel to DAC1 channel", 140 + }; 141 + 142 + static const char *const es8389_outr_mux_txt[] = { 143 + "Normal", 144 + "DAC1 channel to DAC2 channel", 145 + }; 146 + 147 + static const char *const es8389_dmic_mux_txt[] = { 148 + "AMIC", 149 + "DMIC", 150 + }; 151 + 152 + static const char *const es8389_pga1_texts[] = { 153 + "DifferentialL", "Line 1P", "Line 2P" 154 + }; 155 + 156 + static const char *const es8389_pga2_texts[] = { 157 + "DifferentialR", "Line 2N", "Line 1N" 158 + }; 159 + 160 + static const unsigned int es8389_pga_values[] = { 161 + 1, 5, 6 162 + }; 163 + 164 + static const struct soc_enum es8389_outl_mux_enum = 165 + SOC_ENUM_SINGLE(ES8389_DAC_MIX, 5, 166 + ARRAY_SIZE(es8389_outl_mux_txt), es8389_outl_mux_txt); 167 + 168 + static const struct snd_kcontrol_new es8389_outl_mux_controls = 169 + SOC_DAPM_ENUM("OUTL MUX", es8389_outl_mux_enum); 170 + 171 + static const struct soc_enum es8389_outr_mux_enum = 172 + SOC_ENUM_SINGLE(ES8389_DAC_MIX, 4, 173 + ARRAY_SIZE(es8389_outr_mux_txt), es8389_outr_mux_txt); 174 + 175 + static const struct snd_kcontrol_new es8389_outr_mux_controls = 176 + SOC_DAPM_ENUM("OUTR MUX", es8389_outr_mux_enum); 177 + 178 + static SOC_ENUM_SINGLE_DECL( 179 + es8389_dmic_mux_enum, ES8389_DMIC_EN, 6, es8389_dmic_mux_txt); 180 + 181 + static const struct soc_enum es8389_pgal_enum = 182 + SOC_VALUE_ENUM_SINGLE(ES8389_MIC1_GAIN, 4, 7, 183 + ARRAY_SIZE(es8389_pga1_texts), es8389_pga1_texts, 184 + es8389_pga_values); 185 + 186 + static const struct soc_enum es8389_pgar_enum = 187 + SOC_VALUE_ENUM_SINGLE(ES8389_MIC2_GAIN, 4, 7, 188 + ARRAY_SIZE(es8389_pga2_texts), es8389_pga2_texts, 189 + es8389_pga_values); 190 + 191 + static const struct snd_kcontrol_new es8389_dmic_mux_controls = 192 + SOC_DAPM_ENUM_EXT("ADC MUX", es8389_dmic_mux_enum, 193 + snd_soc_dapm_get_enum_double, es8389_dmic_set); 194 + 195 + static const struct snd_kcontrol_new 
es8389_left_mixer_controls[] = { 196 + SOC_DAPM_SINGLE("DACR DACL Mixer", ES8389_DAC_MIX, 3, 1, 0), 197 + }; 198 + 199 + static const struct snd_kcontrol_new es8389_right_mixer_controls[] = { 200 + SOC_DAPM_SINGLE("DACL DACR Mixer", ES8389_DAC_MIX, 2, 1, 0), 201 + }; 202 + 203 + static const struct snd_kcontrol_new es8389_leftadc_mixer_controls[] = { 204 + SOC_DAPM_SINGLE("ADCL DACL Mixer", ES8389_DAC_MIX, 1, 1, 0), 205 + }; 206 + 207 + static const struct snd_kcontrol_new es8389_rightadc_mixer_controls[] = { 208 + SOC_DAPM_SINGLE("ADCR DACR Mixer", ES8389_DAC_MIX, 0, 1, 0), 209 + }; 210 + 211 + static const struct snd_kcontrol_new es8389_adc_mixer_controls[] = { 212 + SOC_DAPM_SINGLE("DACL ADCL Mixer", ES8389_ADC_RESET, 7, 1, 0), 213 + SOC_DAPM_SINGLE("DACR ADCR Mixer", ES8389_ADC_RESET, 6, 1, 0), 214 + }; 215 + 216 + static const struct snd_kcontrol_new es8389_snd_controls[] = { 217 + SOC_SINGLE_TLV("ADCL Capture Volume", ES8389_ADCL_VOL, 0, 0xFF, 0, adc_vol_tlv), 218 + SOC_SINGLE_TLV("ADCR Capture Volume", ES8389_ADCR_VOL, 0, 0xFF, 0, adc_vol_tlv), 219 + SOC_SINGLE_TLV("ADCL PGA Volume", ES8389_MIC1_GAIN, 0, 0x0E, 0, pga_vol_tlv), 220 + SOC_SINGLE_TLV("ADCR PGA Volume", ES8389_MIC2_GAIN, 0, 0x0E, 0, pga_vol_tlv), 221 + 222 + SOC_ENUM("PGAL Select", es8389_pgal_enum), 223 + SOC_ENUM("PGAR Select", es8389_pgar_enum), 224 + SOC_ENUM("ALC Capture Switch", alc_enable), 225 + SOC_SINGLE_TLV("ALC Capture Target Level", ES8389_ALC_TARGET, 226 + 0, 0x0f, 0, alc_target_tlv), 227 + SOC_SINGLE_TLV("ALC Capture Max Gain", ES8389_ALC_GAIN, 228 + 0, 0x0f, 0, alc_max_level), 229 + SOC_ENUM("ADC Ramp Rate", alc_ramprate), 230 + SOC_ENUM("ALC Capture Winsize", alc_winsize), 231 + SOC_DOUBLE("ADC OSR Volume ON Switch", ES8389_ADC_MUTE, 6, 7, 1, 0), 232 + SOC_SINGLE_TLV("ADC OSR Volume", ES8389_OSR_VOL, 0, 0xFF, 0, adc_vol_tlv), 233 + SOC_DOUBLE("ADC OUTPUT Invert Switch", ES8389_ADC_HPF2, 5, 6, 1, 0), 234 + 235 + SOC_SINGLE_TLV("DACL Playback Volume", ES8389_DACL_VOL, 0, 0xFF, 0, 
dac_vol_tlv), 236 + SOC_SINGLE_TLV("DACR Playback Volume", ES8389_DACR_VOL, 0, 0xFF, 0, dac_vol_tlv), 237 + SOC_DOUBLE("DAC OUTPUT Invert Switch", ES8389_DAC_INV, 5, 6, 1, 0), 238 + SOC_SINGLE_TLV("ADC2DAC Mixer Volume", ES8389_MIX_VOL, 0, 0x7F, 0, mix_vol_tlv), 239 + }; 240 + 241 + static const struct snd_soc_dapm_widget es8389_dapm_widgets[] = { 242 + /*Input Side*/ 243 + SND_SOC_DAPM_INPUT("INPUT1"), 244 + SND_SOC_DAPM_INPUT("INPUT2"), 245 + SND_SOC_DAPM_INPUT("DMIC"), 246 + SND_SOC_DAPM_PGA("PGAL", SND_SOC_NOPM, 4, 0, NULL, 0), 247 + SND_SOC_DAPM_PGA("PGAR", SND_SOC_NOPM, 4, 0, NULL, 0), 248 + 249 + /*ADCs*/ 250 + SND_SOC_DAPM_ADC("ADCL", NULL, SND_SOC_NOPM, 0, 0), 251 + SND_SOC_DAPM_ADC("ADCR", NULL, SND_SOC_NOPM, 0, 0), 252 + 253 + /* Audio Interface */ 254 + SND_SOC_DAPM_AIF_OUT("I2S OUT", "I2S Capture", 0, SND_SOC_NOPM, 0, 0), 255 + SND_SOC_DAPM_AIF_IN("I2S IN", "I2S Playback", 0, SND_SOC_NOPM, 0, 0), 256 + 257 + /*DACs*/ 258 + SND_SOC_DAPM_DAC("DACL", NULL, SND_SOC_NOPM, 0, 0), 259 + SND_SOC_DAPM_DAC("DACR", NULL, SND_SOC_NOPM, 0, 0), 260 + 261 + /*Output Side*/ 262 + SND_SOC_DAPM_OUTPUT("HPOL"), 263 + SND_SOC_DAPM_OUTPUT("HPOR"), 264 + 265 + /* Digital Interface */ 266 + SND_SOC_DAPM_PGA("IF DAC", SND_SOC_NOPM, 0, 0, NULL, 0), 267 + SND_SOC_DAPM_PGA("IF DACL1", SND_SOC_NOPM, 0, 0, NULL, 0), 268 + SND_SOC_DAPM_PGA("IF DACR1", SND_SOC_NOPM, 0, 0, NULL, 0), 269 + SND_SOC_DAPM_PGA("IF DACL2", SND_SOC_NOPM, 0, 0, NULL, 0), 270 + SND_SOC_DAPM_PGA("IF DACR2", SND_SOC_NOPM, 0, 0, NULL, 0), 271 + SND_SOC_DAPM_PGA("IF DACL3", SND_SOC_NOPM, 0, 0, NULL, 0), 272 + SND_SOC_DAPM_PGA("IF DACR3", SND_SOC_NOPM, 0, 0, NULL, 0), 273 + 274 + /* Digital Interface Select */ 275 + SND_SOC_DAPM_MIXER("IF DACL Mixer", SND_SOC_NOPM, 0, 0, 276 + &es8389_left_mixer_controls[0], 277 + ARRAY_SIZE(es8389_left_mixer_controls)), 278 + SND_SOC_DAPM_MIXER("IF DACR Mixer", SND_SOC_NOPM, 0, 0, 279 + &es8389_right_mixer_controls[0], 280 + ARRAY_SIZE(es8389_right_mixer_controls)), 281 + 
SND_SOC_DAPM_MIXER("IF ADCDACL Mixer", SND_SOC_NOPM, 0, 0, 282 + &es8389_leftadc_mixer_controls[0], 283 + ARRAY_SIZE(es8389_leftadc_mixer_controls)), 284 + SND_SOC_DAPM_MIXER("IF ADCDACR Mixer", SND_SOC_NOPM, 0, 0, 285 + &es8389_rightadc_mixer_controls[0], 286 + ARRAY_SIZE(es8389_rightadc_mixer_controls)), 287 + 288 + SND_SOC_DAPM_MIXER("ADC Mixer", SND_SOC_NOPM, 0, 0, 289 + &es8389_adc_mixer_controls[0], 290 + ARRAY_SIZE(es8389_adc_mixer_controls)), 291 + SND_SOC_DAPM_MUX("ADC MUX", SND_SOC_NOPM, 0, 0, &es8389_dmic_mux_controls), 292 + 293 + SND_SOC_DAPM_MUX("OUTL MUX", SND_SOC_NOPM, 0, 0, &es8389_outl_mux_controls), 294 + SND_SOC_DAPM_MUX("OUTR MUX", SND_SOC_NOPM, 0, 0, &es8389_outr_mux_controls), 295 + }; 296 + 297 + 298 + static const struct snd_soc_dapm_route es8389_dapm_routes[] = { 299 + {"PGAL", NULL, "INPUT1"}, 300 + {"PGAR", NULL, "INPUT2"}, 301 + 302 + {"ADCL", NULL, "PGAL"}, 303 + {"ADCR", NULL, "PGAR"}, 304 + 305 + {"ADC Mixer", "DACL ADCL Mixer", "DACL"}, 306 + {"ADC Mixer", "DACR ADCR Mixer", "DACR"}, 307 + {"ADC Mixer", NULL, "ADCL"}, 308 + {"ADC Mixer", NULL, "ADCR"}, 309 + 310 + {"ADC MUX", "AMIC", "ADC Mixer"}, 311 + {"ADC MUX", "DMIC", "DMIC"}, 312 + 313 + {"I2S OUT", NULL, "ADC MUX"}, 314 + 315 + {"DACL", NULL, "I2S IN"}, 316 + {"DACR", NULL, "I2S IN"}, 317 + 318 + {"IF DACL1", NULL, "DACL"}, 319 + {"IF DACR1", NULL, "DACR"}, 320 + {"IF DACL2", NULL, "DACL"}, 321 + {"IF DACR2", NULL, "DACR"}, 322 + {"IF DACL3", NULL, "DACL"}, 323 + {"IF DACR3", NULL, "DACR"}, 324 + 325 + {"IF DACL Mixer", NULL, "IF DACL2"}, 326 + {"IF DACL Mixer", "DACR DACL Mixer", "IF DACR1"}, 327 + {"IF DACR Mixer", NULL, "IF DACR2"}, 328 + {"IF DACR Mixer", "DACL DACR Mixer", "IF DACL1"}, 329 + 330 + {"IF ADCDACL Mixer", NULL, "IF DACL Mixer"}, 331 + {"IF ADCDACL Mixer", "ADCL DACL Mixer", "IF DACL3"}, 332 + {"IF ADCDACR Mixer", NULL, "IF DACR Mixer"}, 333 + {"IF ADCDACR Mixer", "ADCR DACR Mixer", "IF DACR3"}, 334 + 335 + {"OUTL MUX", "Normal", "IF ADCDACL Mixer"}, 336 + 
{"OUTL MUX", "DAC2 channel to DAC1 channel", "IF ADCDACR Mixer"}, 337 + {"OUTR MUX", "Normal", "IF ADCDACR Mixer"}, 338 + {"OUTR MUX", "DAC1 channel to DAC2 channel", "IF ADCDACL Mixer"}, 339 + 340 + {"HPOL", NULL, "OUTL MUX"}, 341 + {"HPOR", NULL, "OUTR MUX"}, 342 + 343 + }; 344 + 345 + struct _coeff_div { 346 + u16 fs; 347 + u32 mclk; 348 + u32 rate; 349 + u8 Reg0x04; 350 + u8 Reg0x05; 351 + u8 Reg0x06; 352 + u8 Reg0x07; 353 + u8 Reg0x08; 354 + u8 Reg0x09; 355 + u8 Reg0x0A; 356 + u8 Reg0x0F; 357 + u8 Reg0x11; 358 + u8 Reg0x21; 359 + u8 Reg0x22; 360 + u8 Reg0x26; 361 + u8 Reg0x30; 362 + u8 Reg0x41; 363 + u8 Reg0x42; 364 + u8 Reg0x43; 365 + u8 Reg0xF0; 366 + u8 Reg0xF1; 367 + u8 Reg0x16; 368 + u8 Reg0x18; 369 + u8 Reg0x19; 370 + }; 371 + 372 + /* codec hifi mclk clock divider coefficients */ 373 + static const struct _coeff_div coeff_div[] = { 374 + {32, 256000, 8000, 0x00, 0x57, 0x84, 0xD0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 375 + {36, 288000, 8000, 0x00, 0x55, 0x84, 0xD0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x23, 0x8F, 0xB7, 0xC0, 0x1F, 0x8F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 376 + {48, 384000, 8000, 0x02, 0x5F, 0x04, 0xC0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 377 + {64, 512000, 8000, 0x00, 0x4D, 0x24, 0xC0, 0x03, 0xD1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 378 + {72, 576000, 8000, 0x00, 0x45, 0x24, 0xC0, 0x01, 0xD1, 0x90, 0x00, 0x00, 0x23, 0x8F, 0xB7, 0xC0, 0x1F, 0x8F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 379 + {96, 768000, 8000, 0x02, 0x57, 0x84, 0xD0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 380 + {128, 1024000, 8000, 0x00, 0x45, 0x04, 0xD0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 381 + {192, 1536000, 8000, 0x02, 0x4D, 0x24, 0xC0, 0x03, 0xD1, 0xB0, 0x00, 
0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 382 + {256, 2048000, 8000, 0x01, 0x45, 0x04, 0xD0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 383 + {288, 2304000, 8000, 0x01, 0x51, 0x00, 0xC0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x23, 0x8F, 0xB7, 0xC0, 0x1F, 0x8F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 384 + {384, 3072000, 8000, 0x02, 0x45, 0x04, 0xD0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 385 + {512, 4096000, 8000, 0x00, 0x41, 0x04, 0xE0, 0x00, 0xD1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 386 + {768, 6144000, 8000, 0x05, 0x45, 0x04, 0xD0, 0x03, 0xC1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 387 + {1024, 8192000, 8000, 0x01, 0x41, 0x06, 0xE0, 0x00, 0xD1, 0xB0, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 388 + {1536, 12288000, 8000, 0x02, 0x41, 0x04, 0xE0, 0x00, 0xD1, 0xB0, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 389 + {1625, 13000000, 8000, 0x40, 0x6E, 0x05, 0xC8, 0x01, 0xC2, 0x90, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x09, 0x19, 0x07}, 390 + {2048, 16384000, 8000, 0x03, 0x44, 0x01, 0xC0, 0x00, 0xD2, 0x80, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 391 + {2304, 18432000, 8000, 0x11, 0x45, 0x25, 0xF0, 0x00, 0xD1, 0xB0, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 392 + {3072, 24576000, 8000, 0x05, 0x44, 0x01, 0xC0, 0x00, 0xD2, 0x80, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 393 + {32, 512000, 16000, 0x00, 0x55, 0x84, 0xD0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 394 + {36, 576000, 16000, 0x00, 0x55, 0x84, 0xD0, 0x01, 0xC1, 
0x90, 0x00, 0x00, 0x23, 0x8F, 0xB7, 0xC0, 0x1F, 0x8F, 0x01, 0x12, 0x00, 0x12, 0x31, 0x0E}, 395 + {48, 768000, 16000, 0x02, 0x57, 0x04, 0xC0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 396 + {50, 800000, 16000, 0x00, 0x7E, 0x01, 0xD9, 0x00, 0xC2, 0x80, 0x00, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0xC7, 0x95, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 397 + {64, 1024000, 16000, 0x00, 0x45, 0x24, 0xC0, 0x01, 0xD1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 398 + {72, 1152000, 16000, 0x00, 0x45, 0x24, 0xC0, 0x01, 0xD1, 0x90, 0x00, 0x00, 0x23, 0x8F, 0xB7, 0xC0, 0x1F, 0x8F, 0x01, 0x12, 0x00, 0x12, 0x31, 0x0E}, 399 + {96, 1536000, 16000, 0x02, 0x55, 0x84, 0xD0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 400 + {128, 2048000, 16000, 0x00, 0x51, 0x04, 0xD0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 401 + {144, 2304000, 16000, 0x00, 0x51, 0x00, 0xC0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x23, 0x8F, 0xB7, 0xC0, 0x1F, 0x8F, 0x01, 0x12, 0x00, 0x12, 0x31, 0x0E}, 402 + {192, 3072000, 16000, 0x02, 0x65, 0x25, 0xE0, 0x00, 0xE1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 403 + {256, 4096000, 16000, 0x00, 0x41, 0x04, 0xC0, 0x01, 0xD1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 404 + {300, 4800000, 16000, 0x02, 0x66, 0x01, 0xD9, 0x00, 0xC2, 0x80, 0x00, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0xC7, 0x95, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 405 + {384, 6144000, 16000, 0x02, 0x51, 0x04, 0xD0, 0x01, 0xC1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 406 + {512, 8192000, 16000, 0x01, 0x41, 0x04, 0xC0, 0x01, 0xD1, 0x90, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 407 + {750, 12000000, 16000, 0x0E, 0x7E, 0x01, 0xC9, 
0x00, 0xC2, 0x80, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0xC7, 0x95, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 408 + {768, 12288000, 16000, 0x02, 0x41, 0x04, 0xC0, 0x01, 0xD1, 0x90, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 409 + {1024, 16384000, 16000, 0x03, 0x41, 0x04, 0xC0, 0x01, 0xD1, 0x90, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 410 + {1152, 18432000, 16000, 0x08, 0x51, 0x04, 0xD0, 0x01, 0xC1, 0x90, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 411 + {1200, 19200000, 16000, 0x0B, 0x66, 0x01, 0xD9, 0x00, 0xC2, 0x80, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0xC7, 0x95, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 412 + {1500, 24000000, 16000, 0x0E, 0x26, 0x01, 0xD9, 0x00, 0xC2, 0x80, 0xC0, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0xC7, 0x95, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 413 + {1536, 24576000, 16000, 0x05, 0x41, 0x04, 0xC0, 0x01, 0xD1, 0x90, 0xC0, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0xFF, 0x7F, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 414 + {1625, 26000000, 16000, 0x40, 0x6E, 0x05, 0xC8, 0x01, 0xC2, 0x90, 0xC0, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x12, 0x31, 0x0E}, 415 + {800, 19200000, 24000, 0x07, 0x66, 0x01, 0xD9, 0x00, 0xC2, 0x80, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0xC7, 0x95, 0x00, 0x12, 0x00, 0x1A, 0x49, 0x14}, 416 + {600, 19200000, 32000, 0x05, 0x46, 0x01, 0xD8, 0x10, 0xD2, 0x80, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x23, 0x61, 0x1B}, 417 + {32, 1411200, 44100, 0x00, 0x45, 0xA4, 0xD0, 0x10, 0xD1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 418 + {64, 2822400, 44100, 0x00, 0x51, 0x00, 0xC0, 0x10, 0xC1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 419 + {128, 5644800, 44100, 0x00, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 420 + {256, 11289600, 
44100, 0x01, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 421 + {512, 22579200, 44100, 0x03, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0xC0, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 422 + {32, 1536000, 48000, 0x00, 0x45, 0xA4, 0xD0, 0x10, 0xD1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 423 + {48, 2304000, 48000, 0x02, 0x55, 0x04, 0xC0, 0x10, 0xC1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 424 + {50, 2400000, 48000, 0x00, 0x76, 0x01, 0xC8, 0x10, 0xC2, 0x80, 0x00, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 425 + {64, 3072000, 48000, 0x00, 0x51, 0x04, 0xC0, 0x10, 0xC1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 426 + {100, 4800000, 48000, 0x00, 0x46, 0x01, 0xD8, 0x10, 0xD2, 0x80, 0x00, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 427 + {125, 6000000, 48000, 0x04, 0x6E, 0x05, 0xC8, 0x10, 0xC2, 0x80, 0x00, 0x01, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 428 + {128, 6144000, 48000, 0x00, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0x00, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 429 + {200, 9600000, 48000, 0x01, 0x46, 0x01, 0xD8, 0x10, 0xD2, 0x80, 0x00, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 430 + {250, 12000000, 48000, 0x04, 0x76, 0x01, 0xC8, 0x10, 0xC2, 0x80, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 431 + {256, 12288000, 48000, 0x01, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 432 + {384, 18432000, 48000, 0x02, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0x40, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 433 + 
{400, 19200000, 48000, 0x03, 0x46, 0x01, 0xD8, 0x10, 0xD2, 0x80, 0x40, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 434 + {500, 24000000, 48000, 0x04, 0x46, 0x01, 0xD8, 0x10, 0xD2, 0x80, 0xC0, 0x00, 0x18, 0x95, 0xD0, 0xC0, 0x63, 0x95, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 435 + {512, 24576000, 48000, 0x03, 0x41, 0x04, 0xD0, 0x10, 0xD1, 0x80, 0xC0, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 436 + {800, 38400000, 48000, 0x18, 0x45, 0x04, 0xC0, 0x10, 0xC1, 0x80, 0xC0, 0x00, 0x1F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x00, 0x12, 0x00, 0x35, 0x91, 0x28}, 437 + {128, 11289600, 88200, 0x00, 0x50, 0x00, 0xC0, 0x10, 0xC1, 0x80, 0x40, 0x00, 0x9F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x80, 0x12, 0xC0, 0x32, 0x89, 0x25}, 438 + {64, 6144000, 96000, 0x00, 0x41, 0x00, 0xD0, 0x10, 0xD1, 0x80, 0x00, 0x00, 0x9F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x80, 0x12, 0xC0, 0x35, 0x91, 0x28}, 439 + {128, 12288000, 96000, 0x00, 0x50, 0x00, 0xC0, 0x10, 0xC1, 0x80, 0xC0, 0x00, 0x9F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x80, 0x12, 0xC0, 0x35, 0x91, 0x28}, 440 + {256, 24576000, 96000, 0x00, 0x40, 0x00, 0xC0, 0x10, 0xC1, 0x80, 0xC0, 0x00, 0x9F, 0x7F, 0xBF, 0xC0, 0x7F, 0x7F, 0x80, 0x12, 0xC0, 0x35, 0x91, 0x28}, 441 + {128, 24576000, 192000, 0x00, 0x50, 0x00, 0xC0, 0x18, 0xC1, 0x81, 0xC0, 0x00, 0x8F, 0x7F, 0xEF, 0xC0, 0x3F, 0x7F, 0x80, 0x12, 0xC0, 0x3F, 0xF9, 0x3F}, 442 + 443 + {50, 400000, 8000, 0x00, 0x75, 0x05, 0xC8, 0x01, 0xC1, 0x90, 0x10, 0x00, 0x18, 0xC7, 0xD0, 0xC0, 0x8F, 0xC7, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 444 + {600, 4800000, 8000, 0x05, 0x65, 0x25, 0xF9, 0x00, 0xD1, 0x90, 0x10, 0x00, 0x18, 0xC7, 0xD0, 0xC0, 0x8F, 0xC7, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 445 + {1500, 12000000, 8000, 0x0E, 0x25, 0x25, 0xE8, 0x00, 0xD1, 0x90, 0x40, 0x00, 0x31, 0xC7, 0xC5, 0x00, 0x8F, 0xC7, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 446 + {2400, 19200000, 8000, 0x0B, 0x01, 0x00, 0xD0, 0x00, 0xD1, 0x80, 0x90, 0x00, 0x31, 0xC7, 0xC5, 0x00, 0xC7, 0xC7, 0x00, 0x12, 
0x00, 0x09, 0x19, 0x07}, 447 + {3000, 24000000, 8000, 0x0E, 0x24, 0x05, 0xD0, 0x00, 0xC2, 0x80, 0xC0, 0x00, 0x31, 0xC7, 0xC5, 0x00, 0x8F, 0xC7, 0x01, 0x12, 0x00, 0x09, 0x19, 0x07}, 448 + {3250, 26000000, 8000, 0x40, 0x05, 0xA4, 0xC0, 0x00, 0xD1, 0x80, 0xD0, 0x00, 0x31, 0xC7, 0xC5, 0x00, 0xC7, 0xC7, 0x00, 0x12, 0x00, 0x09, 0x19, 0x07}, 449 + }; 450 + 451 + static inline int get_coeff(int mclk, int rate) 452 + { 453 + int i; 454 + 455 + for (i = 0; i < ARRAY_SIZE(coeff_div); i++) { 456 + if (coeff_div[i].rate == rate && coeff_div[i].mclk == mclk) 457 + return i; 458 + } 459 + return -EINVAL; 460 + } 461 + 462 + /* 463 + * if PLL not be used, use internal clk1 for mclk,otherwise, use internal clk2 for PLL source. 464 + */ 465 + static int es8389_set_dai_sysclk(struct snd_soc_dai *dai, 466 + int clk_id, unsigned int freq, int dir) 467 + { 468 + struct snd_soc_component *component = dai->component; 469 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 470 + 471 + es8389->sysclk = freq; 472 + 473 + return 0; 474 + } 475 + 476 + static int es8389_set_tdm_slot(struct snd_soc_dai *dai, 477 + unsigned int tx_mask, unsigned int rx_mask, int slots, int slot_width) 478 + { 479 + struct snd_soc_component *component = dai->component; 480 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 481 + 482 + regmap_update_bits(es8389->regmap, ES8389_PTDM_SLOT, 483 + ES8389_TDM_SLOT, (slots << ES8389_TDM_SHIFT)); 484 + regmap_update_bits(es8389->regmap, ES8389_DAC_RAMP, 485 + ES8389_TDM_SLOT, (slots << ES8389_TDM_SHIFT)); 486 + 487 + return 0; 488 + } 489 + 490 + static int es8389_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt) 491 + { 492 + struct snd_soc_component *component = dai->component; 493 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 494 + u8 state = 0; 495 + 496 + switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) { 497 + case SND_SOC_DAIFMT_CBC_CFP: 498 + regmap_update_bits(es8389->regmap, 
ES8389_MASTER_MODE, 499 + ES8389_MASTER_MODE_EN, ES8389_MASTER_MODE_EN); 500 + es8389->mastermode = 1; 501 + break; 502 + case SND_SOC_DAIFMT_CBC_CFC: 503 + es8389->mastermode = 0; 504 + break; 505 + default: 506 + return -EINVAL; 507 + } 508 + 509 + switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 510 + case SND_SOC_DAIFMT_I2S: 511 + state |= ES8389_DAIFMT_I2S; 512 + break; 513 + case SND_SOC_DAIFMT_RIGHT_J: 514 + dev_err(component->dev, "component driver does not support right justified\n"); 515 + return -EINVAL; 516 + case SND_SOC_DAIFMT_LEFT_J: 517 + state |= ES8389_DAIFMT_LEFT_J; 518 + break; 519 + case SND_SOC_DAIFMT_DSP_A: 520 + state |= ES8389_DAIFMT_DSP_A; 521 + break; 522 + case SND_SOC_DAIFMT_DSP_B: 523 + state |= ES8389_DAIFMT_DSP_B; 524 + break; 525 + default: 526 + break; 527 + } 528 + regmap_update_bits(es8389->regmap, ES8389_ADC_FORMAT_MUTE, ES8389_DAIFMT_MASK, state); 529 + regmap_update_bits(es8389->regmap, ES8389_DAC_FORMAT_MUTE, ES8389_DAIFMT_MASK, state); 530 + 531 + return 0; 532 + } 533 + 534 + static int es8389_pcm_hw_params(struct snd_pcm_substream *substream, 535 + struct snd_pcm_hw_params *params, 536 + struct snd_soc_dai *dai) 537 + { 538 + struct snd_soc_component *component = dai->component; 539 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 540 + int coeff; 541 + u8 state = 0; 542 + 543 + switch (params_format(params)) { 544 + case SNDRV_PCM_FORMAT_S16_LE: 545 + state |= ES8389_S16_LE; 546 + break; 547 + case SNDRV_PCM_FORMAT_S20_3LE: 548 + state |= ES8389_S20_3_LE; 549 + break; 550 + case SNDRV_PCM_FORMAT_S18_3LE: 551 + state |= ES8389_S18_LE; 552 + break; 553 + case SNDRV_PCM_FORMAT_S24_LE: 554 + state |= ES8389_S24_LE; 555 + break; 556 + case SNDRV_PCM_FORMAT_S32_LE: 557 + state |= ES8389_S32_LE; 558 + break; 559 + default: 560 + return -EINVAL; 561 + } 562 + 563 + regmap_update_bits(es8389->regmap, ES8389_ADC_FORMAT_MUTE, ES8389_DATA_LEN_MASK, state); 564 + regmap_update_bits(es8389->regmap, 
ES8389_DAC_FORMAT_MUTE, ES8389_DATA_LEN_MASK, state); 565 + 566 + if (es8389->mclk_src == ES8389_SCLK_PIN) { 567 + regmap_update_bits(es8389->regmap, ES8389_MASTER_CLK, 568 + ES8389_MCLK_SOURCE, es8389->mclk_src); 569 + es8389->sysclk = params_channels(params) * params_width(params) * params_rate(params); 570 + } 571 + 572 + coeff = get_coeff(es8389->sysclk, params_rate(params)); 573 + if (coeff >= 0) { 574 + regmap_write(es8389->regmap, ES8389_CLK_DIV1, coeff_div[coeff].Reg0x04); 575 + regmap_write(es8389->regmap, ES8389_CLK_MUL, coeff_div[coeff].Reg0x05); 576 + regmap_write(es8389->regmap, ES8389_CLK_MUX1, coeff_div[coeff].Reg0x06); 577 + regmap_write(es8389->regmap, ES8389_CLK_MUX2, coeff_div[coeff].Reg0x07); 578 + regmap_write(es8389->regmap, ES8389_CLK_CTL1, coeff_div[coeff].Reg0x08); 579 + regmap_write(es8389->regmap, ES8389_CLK_CTL2, coeff_div[coeff].Reg0x09); 580 + regmap_write(es8389->regmap, ES8389_CLK_CTL3, coeff_div[coeff].Reg0x0A); 581 + regmap_update_bits(es8389->regmap, ES8389_OSC_CLK, 582 + 0xC0, coeff_div[coeff].Reg0x0F); 583 + regmap_write(es8389->regmap, ES8389_CLK_DIV2, coeff_div[coeff].Reg0x11); 584 + regmap_write(es8389->regmap, ES8389_ADC_OSR, coeff_div[coeff].Reg0x21); 585 + regmap_write(es8389->regmap, ES8389_ADC_DSP, coeff_div[coeff].Reg0x22); 586 + regmap_write(es8389->regmap, ES8389_OSR_VOL, coeff_div[coeff].Reg0x26); 587 + regmap_update_bits(es8389->regmap, ES8389_SYSTEM30, 588 + 0xC0, coeff_div[coeff].Reg0x30); 589 + regmap_write(es8389->regmap, ES8389_DAC_DSM_OSR, coeff_div[coeff].Reg0x41); 590 + regmap_write(es8389->regmap, ES8389_DAC_DSP_OSR, coeff_div[coeff].Reg0x42); 591 + regmap_update_bits(es8389->regmap, ES8389_DAC_MISC, 592 + 0x81, coeff_div[coeff].Reg0x43); 593 + regmap_update_bits(es8389->regmap, ES8389_CHIP_MISC, 594 + 0x72, coeff_div[coeff].Reg0xF0); 595 + regmap_write(es8389->regmap, ES8389_CSM_STATE1, coeff_div[coeff].Reg0xF1); 596 + regmap_write(es8389->regmap, ES8389_SYSTEM16, coeff_div[coeff].Reg0x16); 597 + 
regmap_write(es8389->regmap, ES8389_SYSTEM18, coeff_div[coeff].Reg0x18); 598 + regmap_write(es8389->regmap, ES8389_SYSTEM19, coeff_div[coeff].Reg0x19); 599 + } else { 600 + dev_warn(component->dev, "Clock coefficients do not match"); 601 + } 602 + 603 + return 0; 604 + } 605 + 606 + static int es8389_set_bias_level(struct snd_soc_component *component, 607 + enum snd_soc_bias_level level) 608 + { 609 + int ret; 610 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 611 + 612 + switch (level) { 613 + case SND_SOC_BIAS_ON: 614 + ret = clk_prepare_enable(es8389->mclk); 615 + if (ret) 616 + return ret; 617 + 618 + regmap_update_bits(es8389->regmap, ES8389_HPSW, 0x20, 0x20); 619 + regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0xD9); 620 + regmap_write(es8389->regmap, ES8389_ADC_EN, 0x8F); 621 + regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0xE4); 622 + regmap_write(es8389->regmap, ES8389_RESET, 0x01); 623 + regmap_write(es8389->regmap, ES8389_CLK_OFF1, 0xC3); 624 + regmap_update_bits(es8389->regmap, ES8389_ADC_HPF1, 0x0f, 0x0a); 625 + regmap_update_bits(es8389->regmap, ES8389_ADC_HPF2, 0x0f, 0x0a); 626 + usleep_range(70000, 72000); 627 + regmap_write(es8389->regmap, ES8389_DAC_RESET, 0X00); 628 + break; 629 + case SND_SOC_BIAS_PREPARE: 630 + break; 631 + case SND_SOC_BIAS_STANDBY: 632 + regmap_update_bits(es8389->regmap, ES8389_ADC_HPF1, 0x0f, 0x04); 633 + regmap_update_bits(es8389->regmap, ES8389_ADC_HPF2, 0x0f, 0x04); 634 + regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0xD4); 635 + usleep_range(70000, 72000); 636 + regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0x59); 637 + regmap_write(es8389->regmap, ES8389_ADC_EN, 0x00); 638 + regmap_write(es8389->regmap, ES8389_CLK_OFF1, 0x00); 639 + regmap_write(es8389->regmap, ES8389_RESET, 0x7E); 640 + regmap_update_bits(es8389->regmap, ES8389_DAC_INV, 0x80, 0x80); 641 + usleep_range(8000, 8500); 642 + regmap_update_bits(es8389->regmap, ES8389_DAC_INV, 0x80, 0x00); 643 + 644 + 
clk_disable_unprepare(es8389->mclk); 645 + break; 646 + case SND_SOC_BIAS_OFF: 647 + break; 648 + } 649 + return 0; 650 + } 651 + 652 + 653 + 654 + static int es8389_mute(struct snd_soc_dai *dai, int mute, int direction) 655 + { 656 + struct snd_soc_component *component = dai->component; 657 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 658 + 659 + if (mute) { 660 + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { 661 + regmap_update_bits(es8389->regmap, ES8389_DAC_FORMAT_MUTE, 662 + 0x03, 0x03); 663 + } else { 664 + regmap_update_bits(es8389->regmap, ES8389_ADC_FORMAT_MUTE, 665 + 0x03, 0x03); 666 + } 667 + } else { 668 + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { 669 + regmap_update_bits(es8389->regmap, ES8389_DAC_FORMAT_MUTE, 670 + 0x03, 0x00); 671 + } else { 672 + regmap_update_bits(es8389->regmap, ES8389_ADC_FORMAT_MUTE, 673 + 0x03, 0x00); 674 + } 675 + } 676 + 677 + return 0; 678 + } 679 + 680 + #define es8389_RATES SNDRV_PCM_RATE_8000_96000 681 + 682 + #define es8389_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S20_3LE |\ 683 + SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S24_3LE | SNDRV_PCM_FMTBIT_S32_LE) 684 + 685 + static const struct snd_soc_dai_ops es8389_ops = { 686 + .hw_params = es8389_pcm_hw_params, 687 + .set_fmt = es8389_set_dai_fmt, 688 + .set_sysclk = es8389_set_dai_sysclk, 689 + .set_tdm_slot = es8389_set_tdm_slot, 690 + .mute_stream = es8389_mute, 691 + }; 692 + 693 + static struct snd_soc_dai_driver es8389_dai = { 694 + .name = "ES8389 HiFi", 695 + .playback = { 696 + .stream_name = "Playback", 697 + .channels_min = 1, 698 + .channels_max = 2, 699 + .rates = es8389_RATES, 700 + .formats = es8389_FORMATS, 701 + }, 702 + .capture = { 703 + .stream_name = "Capture", 704 + .channels_min = 1, 705 + .channels_max = 2, 706 + .rates = es8389_RATES, 707 + .formats = es8389_FORMATS, 708 + }, 709 + .ops = &es8389_ops, 710 + .symmetric_rate = 1, 711 + }; 712 + 713 + static void es8389_init(struct snd_soc_component 
*component) 714 + { 715 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 716 + 717 + regmap_write(es8389->regmap, ES8389_ISO_CTL, 0x00); 718 + regmap_write(es8389->regmap, ES8389_RESET, 0x7E); 719 + regmap_write(es8389->regmap, ES8389_ISO_CTL, 0x38); 720 + regmap_write(es8389->regmap, ES8389_ADC_HPF1, 0x64); 721 + regmap_write(es8389->regmap, ES8389_ADC_HPF2, 0x04); 722 + regmap_write(es8389->regmap, ES8389_DAC_INV, 0x03); 723 + 724 + regmap_write(es8389->regmap, ES8389_VMID, 0x2A); 725 + regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0xC9); 726 + regmap_write(es8389->regmap, ES8389_ANA_VSEL, 0x4F); 727 + regmap_write(es8389->regmap, ES8389_ANA_CTL2, 0x06); 728 + regmap_write(es8389->regmap, ES8389_LOW_POWER1, 0x00); 729 + regmap_write(es8389->regmap, ES8389_DMIC_EN, 0x16); 730 + 731 + regmap_write(es8389->regmap, ES8389_PGA_SW, 0xAA); 732 + regmap_write(es8389->regmap, ES8389_MOD_SW1, 0x66); 733 + regmap_write(es8389->regmap, ES8389_MOD_SW2, 0x99); 734 + regmap_write(es8389->regmap, ES8389_ADC_MODE, (0x00 | ES8389_TDM_MODE)); 735 + regmap_update_bits(es8389->regmap, ES8389_DMIC_EN, 0xC0, 0x00); 736 + regmap_update_bits(es8389->regmap, ES8389_ADC_MODE, 0x03, 0x00); 737 + 738 + regmap_update_bits(es8389->regmap, ES8389_MIC1_GAIN, 739 + ES8389_MIC_SEL_MASK, ES8389_MIC_DEFAULT); 740 + regmap_update_bits(es8389->regmap, ES8389_MIC2_GAIN, 741 + ES8389_MIC_SEL_MASK, ES8389_MIC_DEFAULT); 742 + regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0xC4); 743 + regmap_write(es8389->regmap, ES8389_MASTER_MODE, 0x08); 744 + regmap_write(es8389->regmap, ES8389_CSM_STATE1, 0x00); 745 + regmap_write(es8389->regmap, ES8389_SYSTEM12, 0x01); 746 + regmap_write(es8389->regmap, ES8389_SYSTEM13, 0x01); 747 + regmap_write(es8389->regmap, ES8389_SYSTEM14, 0x01); 748 + regmap_write(es8389->regmap, ES8389_SYSTEM15, 0x01); 749 + regmap_write(es8389->regmap, ES8389_SYSTEM16, 0x35); 750 + regmap_write(es8389->regmap, ES8389_SYSTEM17, 0x09); 751 + 
regmap_write(es8389->regmap, ES8389_SYSTEM18, 0x91); 752 + regmap_write(es8389->regmap, ES8389_SYSTEM19, 0x28); 753 + regmap_write(es8389->regmap, ES8389_SYSTEM1A, 0x01); 754 + regmap_write(es8389->regmap, ES8389_SYSTEM1B, 0x01); 755 + regmap_write(es8389->regmap, ES8389_SYSTEM1C, 0x11); 756 + 757 + regmap_write(es8389->regmap, ES8389_CHIP_MISC, 0x13); 758 + regmap_write(es8389->regmap, ES8389_MASTER_CLK, 0x00); 759 + regmap_write(es8389->regmap, ES8389_CLK_DIV1, 0x00); 760 + regmap_write(es8389->regmap, ES8389_CLK_MUL, 0x10); 761 + regmap_write(es8389->regmap, ES8389_CLK_MUX1, 0x00); 762 + regmap_write(es8389->regmap, ES8389_CLK_MUX2, 0xC0); 763 + regmap_write(es8389->regmap, ES8389_CLK_CTL1, 0x00); 764 + regmap_write(es8389->regmap, ES8389_CLK_CTL2, 0xC0); 765 + regmap_write(es8389->regmap, ES8389_CLK_CTL3, 0x80); 766 + regmap_write(es8389->regmap, ES8389_SCLK_DIV, 0x04); 767 + regmap_write(es8389->regmap, ES8389_LRCK_DIV1, 0x01); 768 + regmap_write(es8389->regmap, ES8389_LRCK_DIV2, 0x00); 769 + regmap_write(es8389->regmap, ES8389_OSC_CLK, 0x00); 770 + regmap_write(es8389->regmap, ES8389_ADC_OSR, 0x1F); 771 + regmap_write(es8389->regmap, ES8389_ADC_DSP, 0x7F); 772 + regmap_write(es8389->regmap, ES8389_ADC_MUTE, 0xC0); 773 + regmap_write(es8389->regmap, ES8389_SYSTEM30, 0xF4); 774 + regmap_write(es8389->regmap, ES8389_DAC_DSM_OSR, 0x7F); 775 + regmap_write(es8389->regmap, ES8389_DAC_DSP_OSR, 0x7F); 776 + regmap_write(es8389->regmap, ES8389_DAC_MISC, 0x10); 777 + regmap_write(es8389->regmap, ES8389_DAC_RAMP, 0x0F); 778 + regmap_write(es8389->regmap, ES8389_SYSTEM4C, 0xC0); 779 + regmap_write(es8389->regmap, ES8389_RESET, 0x00); 780 + regmap_write(es8389->regmap, ES8389_CLK_OFF1, 0xC1); 781 + regmap_write(es8389->regmap, ES8389_RESET, 0x01); 782 + regmap_write(es8389->regmap, ES8389_DAC_RESET, 0x02); 783 + 784 + regmap_update_bits(es8389->regmap, ES8389_ADC_FORMAT_MUTE, 0x03, 0x03); 785 + regmap_update_bits(es8389->regmap, ES8389_DAC_FORMAT_MUTE, 0x03, 0x03); 786 + 
} 787 + 788 + static int es8389_suspend(struct snd_soc_component *component) 789 + { 790 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 791 + 792 + es8389_set_bias_level(component, SND_SOC_BIAS_STANDBY); 793 + regcache_cache_only(es8389->regmap, true); 794 + regcache_mark_dirty(es8389->regmap); 795 + 796 + return 0; 797 + } 798 + 799 + static int es8389_resume(struct snd_soc_component *component) 800 + { 801 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 802 + unsigned int regv; 803 + 804 + regcache_cache_only(es8389->regmap, false); 805 + regcache_cache_bypass(es8389->regmap, true); 806 + regmap_read(es8389->regmap, ES8389_RESET, &regv); 807 + regcache_cache_bypass(es8389->regmap, false); 808 + 809 + if (regv == 0xff) 810 + es8389_init(component); 811 + else 812 + es8389_set_bias_level(component, SND_SOC_BIAS_ON); 813 + 814 + regcache_sync(es8389->regmap); 815 + 816 + return 0; 817 + } 818 + 819 + static int es8389_probe(struct snd_soc_component *component) 820 + { 821 + int ret; 822 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 823 + 824 + ret = device_property_read_u8(component->dev, "everest,mclk-src", &es8389->mclk_src); 825 + if (ret != 0) { 826 + dev_dbg(component->dev, "mclk-src return %d", ret); 827 + es8389->mclk_src = ES8389_MCLK_SOURCE; 828 + } 829 + 830 + es8389->mclk = devm_clk_get(component->dev, "mclk"); 831 + if (IS_ERR(es8389->mclk)) 832 + return dev_err_probe(component->dev, PTR_ERR(es8389->mclk), 833 + "ES8389 is unable to get mclk\n"); 834 + 835 + if (!es8389->mclk) 836 + dev_err(component->dev, "%s, assuming static mclk\n", __func__); 837 + 838 + ret = clk_prepare_enable(es8389->mclk); 839 + if (ret) { 840 + dev_err(component->dev, "%s, unable to enable mclk\n", __func__); 841 + return ret; 842 + } 843 + 844 + es8389_init(component); 845 + es8389_set_bias_level(component, SND_SOC_BIAS_STANDBY); 846 + 847 + return 0; 848 + } 849 + 850 + static void 
es8389_remove(struct snd_soc_component *component) 851 + { 852 + struct es8389_private *es8389 = snd_soc_component_get_drvdata(component); 853 + 854 + regmap_write(es8389->regmap, ES8389_MASTER_MODE, 0x28); 855 + regmap_write(es8389->regmap, ES8389_HPSW, 0x00); 856 + regmap_write(es8389->regmap, ES8389_VMID, 0x00); 857 + regmap_write(es8389->regmap, ES8389_RESET, 0x00); 858 + regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0xCC); 859 + usleep_range(500000, 550000);//500MS 860 + regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0x00); 861 + regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0x08); 862 + regmap_write(es8389->regmap, ES8389_ISO_CTL, 0xC1); 863 + regmap_write(es8389->regmap, ES8389_PULL_DOWN, 0x00); 864 + 865 + } 866 + 867 + static const struct snd_soc_component_driver soc_codec_dev_es8389 = { 868 + .probe = es8389_probe, 869 + .remove = es8389_remove, 870 + .suspend = es8389_suspend, 871 + .resume = es8389_resume, 872 + .set_bias_level = es8389_set_bias_level, 873 + 874 + .controls = es8389_snd_controls, 875 + .num_controls = ARRAY_SIZE(es8389_snd_controls), 876 + .dapm_widgets = es8389_dapm_widgets, 877 + .num_dapm_widgets = ARRAY_SIZE(es8389_dapm_widgets), 878 + .dapm_routes = es8389_dapm_routes, 879 + .num_dapm_routes = ARRAY_SIZE(es8389_dapm_routes), 880 + .idle_bias_on = 1, 881 + .use_pmdown_time = 1, 882 + }; 883 + 884 + static const struct regmap_config es8389_regmap = { 885 + .reg_bits = 8, 886 + .val_bits = 8, 887 + 888 + .max_register = ES8389_MAX_REGISTER, 889 + 890 + .volatile_reg = es8389_volatile_register, 891 + .cache_type = REGCACHE_MAPLE, 892 + }; 893 + 894 + static void es8389_i2c_shutdown(struct i2c_client *i2c) 895 + { 896 + struct es8389_private *es8389; 897 + 898 + es8389 = i2c_get_clientdata(i2c); 899 + 900 + regmap_write(es8389->regmap, ES8389_MASTER_MODE, 0x28); 901 + regmap_write(es8389->regmap, ES8389_HPSW, 0x00); 902 + regmap_write(es8389->regmap, ES8389_VMID, 0x00); 903 + regmap_write(es8389->regmap, ES8389_RESET, 0x00); 904 + 
regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0xCC); 905 + usleep_range(500000, 550000);//500MS 906 + regmap_write(es8389->regmap, ES8389_CSM_JUMP, 0x00); 907 + regmap_write(es8389->regmap, ES8389_ANA_CTL1, 0x08); 908 + regmap_write(es8389->regmap, ES8389_ISO_CTL, 0xC1); 909 + regmap_write(es8389->regmap, ES8389_PULL_DOWN, 0x00); 910 + } 911 + 912 + static int es8389_i2c_probe(struct i2c_client *i2c_client) 913 + { 914 + struct es8389_private *es8389; 915 + int ret; 916 + 917 + es8389 = devm_kzalloc(&i2c_client->dev, sizeof(*es8389), GFP_KERNEL); 918 + if (es8389 == NULL) 919 + return -ENOMEM; 920 + 921 + i2c_set_clientdata(i2c_client, es8389); 922 + es8389->regmap = devm_regmap_init_i2c(i2c_client, &es8389_regmap); 923 + if (IS_ERR(es8389->regmap)) 924 + return dev_err_probe(&i2c_client->dev, PTR_ERR(es8389->regmap), 925 + "regmap_init() failed\n"); 926 + 927 + ret = devm_snd_soc_register_component(&i2c_client->dev, 928 + &soc_codec_dev_es8389, 929 + &es8389_dai, 930 + 1); 931 + 932 + return ret; 933 + } 934 + 935 + #ifdef CONFIG_OF 936 + static const struct of_device_id es8389_if_dt_ids[] = { 937 + { .compatible = "everest,es8389", }, 938 + { } 939 + }; 940 + MODULE_DEVICE_TABLE(of, es8389_if_dt_ids); 941 + #endif 942 + 943 + static const struct i2c_device_id es8389_i2c_id[] = { 944 + {"es8389"}, 945 + { } 946 + }; 947 + MODULE_DEVICE_TABLE(i2c, es8389_i2c_id); 948 + 949 + static struct i2c_driver es8389_i2c_driver = { 950 + .driver = { 951 + .name = "es8389", 952 + .of_match_table = of_match_ptr(es8389_if_dt_ids), 953 + }, 954 + .shutdown = es8389_i2c_shutdown, 955 + .probe = es8389_i2c_probe, 956 + .id_table = es8389_i2c_id, 957 + }; 958 + module_i2c_driver(es8389_i2c_driver); 959 + 960 + MODULE_DESCRIPTION("ASoC es8389 driver"); 961 + MODULE_AUTHOR("Michael Zhang <zhangyi@everest-semi.com>"); 962 + MODULE_LICENSE("GPL");
+140
sound/soc/codecs/es8389.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * ES8389.h -- ES8389 ALSA SoC Audio Codec 4 + * 5 + * Authors: 6 + * 7 + * Based on ES8374.h by Michael Zhang 8 + */ 9 + 10 + #ifndef _ES8389_H 11 + #define _ES8389_H 12 + 13 + /* 14 + * ES8389_REGISTER NAME_REG_REGISTER ADDRESS 15 + */ 16 + #define ES8389_RESET 0x00 /*reset digital,csm,clock manager etc.*/ 17 + 18 + /* 19 + * Clock Scheme Register definition 20 + */ 21 + #define ES8389_MASTER_MODE 0x01 22 + #define ES8389_MASTER_CLK 0x02 23 + #define ES8389_CLK_OFF1 0x03 24 + #define ES8389_CLK_DIV1 0x04 25 + #define ES8389_CLK_MUL 0x05 26 + #define ES8389_CLK_MUX1 0x06 27 + #define ES8389_CLK_MUX2 0x07 28 + #define ES8389_CLK_CTL1 0x08 29 + #define ES8389_CLK_CTL2 0x09 30 + #define ES8389_CLK_CTL3 0x0A 31 + #define ES8389_SCLK_DIV 0x0B 32 + #define ES8389_LRCK_DIV1 0x0C 33 + #define ES8389_LRCK_DIV2 0x0D 34 + #define ES8389_CLK_OFF2 0x0E 35 + #define ES8389_OSC_CLK 0x0F 36 + #define ES8389_CSM_JUMP 0x10 37 + #define ES8389_CLK_DIV2 0x11 38 + #define ES8389_SYSTEM12 0x12 39 + #define ES8389_SYSTEM13 0x13 40 + #define ES8389_SYSTEM14 0x14 41 + #define ES8389_SYSTEM15 0x15 42 + #define ES8389_SYSTEM16 0x16 43 + #define ES8389_SYSTEM17 0x17 44 + #define ES8389_SYSTEM18 0x18 45 + #define ES8389_SYSTEM19 0x19 46 + #define ES8389_SYSTEM1A 0x1A 47 + #define ES8389_SYSTEM1B 0x1B 48 + #define ES8389_SYSTEM1C 0x1C 49 + #define ES8389_ADC_FORMAT_MUTE 0x20 50 + #define ES8389_ADC_OSR 0x21 51 + #define ES8389_ADC_DSP 0x22 52 + #define ES8389_ADC_MODE 0x23 53 + #define ES8389_ADC_HPF1 0x24 54 + #define ES8389_ADC_HPF2 0x25 55 + #define ES8389_OSR_VOL 0x26 56 + #define ES8389_ADCL_VOL 0x27 57 + #define ES8389_ADCR_VOL 0x28 58 + #define ES8389_ALC_CTL 0x29 59 + #define ES8389_PTDM_SLOT 0x2A 60 + #define ES8389_ALC_ON 0x2B 61 + #define ES8389_ALC_TARGET 0x2C 62 + #define ES8389_ALC_GAIN 0x2D 63 + #define ES8389_SYSTEM2E 0x2E 64 + #define ES8389_ADC_MUTE 0x2F 65 + #define ES8389_SYSTEM30 0x30 66 + #define 
ES8389_ADC_RESET 0x31 67 + #define ES8389_DAC_FORMAT_MUTE 0x40 68 + #define ES8389_DAC_DSM_OSR 0x41 69 + #define ES8389_DAC_DSP_OSR 0x42 70 + #define ES8389_DAC_MISC 0x43 71 + #define ES8389_DAC_MIX 0x44 72 + #define ES8389_DAC_INV 0x45 73 + #define ES8389_DACL_VOL 0x46 74 + #define ES8389_DACR_VOL 0x47 75 + #define ES8389_MIX_VOL 0x48 76 + #define ES8389_DAC_RAMP 0x49 77 + #define ES8389_SYSTEM4C 0x4C 78 + #define ES8389_DAC_RESET 0x4D 79 + #define ES8389_VMID 0x60 80 + #define ES8389_ANA_CTL1 0x61 81 + #define ES8389_ANA_VSEL 0x62 82 + #define ES8389_ANA_CTL2 0x63 83 + #define ES8389_ADC_EN 0x64 84 + #define ES8389_HPSW 0x69 85 + #define ES8389_LOW_POWER1 0x6B 86 + #define ES8389_LOW_POWER2 0x6C 87 + #define ES8389_DMIC_EN 0x6D 88 + #define ES8389_PGA_SW 0x6E 89 + #define ES8389_MOD_SW1 0x6F 90 + #define ES8389_MOD_SW2 0x70 91 + #define ES8389_MOD_SW3 0x71 92 + #define ES8389_MIC1_GAIN 0x72 93 + #define ES8389_MIC2_GAIN 0x73 94 + 95 + #define ES8389_CHIP_MISC 0xF0 96 + #define ES8389_CSM_STATE1 0xF1 97 + #define ES8389_PULL_DOWN 0xF2 98 + #define ES8389_ISO_CTL 0xF3 99 + #define ES8389_CSM_STATE2 0xF4 100 + 101 + #define ES8389_CHIP_ID0 0xFD 102 + #define ES8389_CHIP_ID1 0xFE 103 + 104 + #define ES8389_MAX_REGISTER 0xFF 105 + 106 + #define ES8389_MIC_SEL_MASK (7 << 4) 107 + #define ES8389_MIC_DEFAULT (1 << 4) 108 + 109 + #define ES8389_MASTER_MODE_EN (1 << 0) 110 + 111 + #define ES8389_TDM_OFF (0 << 0) 112 + #define ES8389_STDM_ON (1 << 7) 113 + #define ES8389_PTDM_ON (1 << 6) 114 + 115 + #define ES8389_TDM_MODE ES8389_TDM_OFF 116 + #define ES8389_TDM_SLOT (0x70 << 0) 117 + #define ES8389_TDM_SHIFT 4 118 + 119 + #define ES8389_MCLK_SOURCE (1 << 6) 120 + #define ES8389_MCLK_PIN (1 << 6) 121 + #define ES8389_SCLK_PIN (0 << 6) 122 + 123 + /* ES8389_FMT */ 124 + #define ES8389_S24_LE (0 << 5) 125 + #define ES8389_S20_3_LE (1 << 5) 126 + #define ES8389_S18_LE (2 << 5) 127 + #define ES8389_S16_LE (3 << 5) 128 + #define ES8389_S32_LE (4 << 5) 129 + #define 
ES8389_DATA_LEN_MASK (7 << 5) 130 + 131 + #define ES8389_DAIFMT_MASK (7 << 2) 132 + #define ES8389_DAIFMT_I2S 0 133 + #define ES8389_DAIFMT_LEFT_J (1 << 2) 134 + #define ES8389_DAIFMT_DSP_A (1 << 3) 135 + #define ES8389_DAIFMT_DSP_B (3 << 3) 136 + 137 + #define ES8389_STATE_ON (13 << 0) 138 + #define ES8389_STATE_STANDBY (7 << 0) 139 + 140 + #endif
+3
tools/include/uapi/linux/bpf.h
··· 4968 4968 * the netns switch takes place from ingress to ingress without 4969 4969 * going through the CPU's backlog queue. 4970 4970 * 4971 + * *skb*\ **->mark** and *skb*\ **->tstamp** are not cleared during 4972 + * the netns switch. 4973 + * 4971 4974 * The *flags* argument is reserved and must be 0. The helper is 4972 4975 * currently only supported for tc BPF program types at the 4973 4976 * ingress hook and for veth and netkit target device types. The
+1 -1
tools/net/ynl/lib/ynl.c
··· 364 364 "Invalid attribute (binary %s)", policy->name); 365 365 return -1; 366 366 case YNL_PT_NUL_STR: 367 - if ((!policy->len || len <= policy->len) && !data[len - 1]) 367 + if (len && (!policy->len || len <= policy->len) && !data[len - 1]) 368 368 break; 369 369 yerr(yarg->ys, YNL_ERROR_ATTR_INVALID, 370 370 "Invalid attribute (string %s)", policy->name);
+1
tools/objtool/check.c
··· 227 227 str_ends_with(func->name, "_4core9panicking19assert_failed_inner") || 228 228 str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") || 229 229 str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") || 230 + str_ends_with(func->name, "_7___rustc17rust_begin_unwind") || 230 231 strstr(func->name, "_4core9panicking13assert_failed") || 231 232 strstr(func->name, "_4core9panicking11panic_const24panic_const_") || 232 233 (strstr(func->name, "_4core5slice5index24slice_") &&
+27 -18
tools/testing/selftests/drivers/net/ping.py
··· 9 9 from lib.py import bkg, cmd, wait_port_listen, rand_port 10 10 from lib.py import defer, ethtool, ip 11 11 12 - remote_ifname="" 13 12 no_sleep=False 14 13 15 14 def _test_v4(cfg) -> None: 16 - cfg.require_ipver("4") 15 + if not cfg.addr_v["4"]: 16 + return 17 17 18 18 cmd("ping -c 1 -W0.5 " + cfg.remote_addr_v["4"]) 19 19 cmd("ping -c 1 -W0.5 " + cfg.addr_v["4"], host=cfg.remote) ··· 21 21 cmd("ping -s 65000 -c 1 -W0.5 " + cfg.addr_v["4"], host=cfg.remote) 22 22 23 23 def _test_v6(cfg) -> None: 24 - cfg.require_ipver("6") 24 + if not cfg.addr_v["6"]: 25 + return 25 26 26 27 cmd("ping -c 1 -W5 " + cfg.remote_addr_v["6"]) 27 28 cmd("ping -c 1 -W5 " + cfg.addr_v["6"], host=cfg.remote) ··· 58 57 59 58 def _set_xdp_generic_sb_on(cfg) -> None: 60 59 prog = cfg.net_lib_dir / "xdp_dummy.bpf.o" 61 - cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 60 + cmd(f"ip link set dev {cfg.remote_ifname} mtu 1500", shell=True, host=cfg.remote) 62 61 cmd(f"ip link set dev {cfg.ifname} mtu 1500 xdpgeneric obj {prog} sec xdp", shell=True) 63 62 defer(cmd, f"ip link set dev {cfg.ifname} xdpgeneric off") 64 63 ··· 67 66 68 67 def _set_xdp_generic_mb_on(cfg) -> None: 69 68 prog = cfg.net_lib_dir / "xdp_dummy.bpf.o" 70 - cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote) 71 - defer(ip, f"link set dev {remote_ifname} mtu 1500", host=cfg.remote) 69 + cmd(f"ip link set dev {cfg.remote_ifname} mtu 9000", shell=True, host=cfg.remote) 70 + defer(ip, f"link set dev {cfg.remote_ifname} mtu 1500", host=cfg.remote) 72 71 ip("link set dev %s mtu 9000 xdpgeneric obj %s sec xdp.frags" % (cfg.ifname, prog)) 73 72 defer(ip, f"link set dev {cfg.ifname} mtu 1500 xdpgeneric off") 74 73 ··· 77 76 78 77 def _set_xdp_native_sb_on(cfg) -> None: 79 78 prog = cfg.net_lib_dir / "xdp_dummy.bpf.o" 80 - cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 79 + cmd(f"ip link set dev {cfg.remote_ifname} mtu 1500", shell=True, 
host=cfg.remote) 81 80 cmd(f"ip -j link set dev {cfg.ifname} mtu 1500 xdp obj {prog} sec xdp", shell=True) 82 81 defer(ip, f"link set dev {cfg.ifname} mtu 1500 xdp off") 83 82 xdp_info = ip("-d link show %s" % (cfg.ifname), json=True)[0] ··· 94 93 95 94 def _set_xdp_native_mb_on(cfg) -> None: 96 95 prog = cfg.net_lib_dir / "xdp_dummy.bpf.o" 97 - cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote) 98 - defer(ip, f"link set dev {remote_ifname} mtu 1500", host=cfg.remote) 96 + cmd(f"ip link set dev {cfg.remote_ifname} mtu 9000", shell=True, host=cfg.remote) 97 + defer(ip, f"link set dev {cfg.remote_ifname} mtu 1500", host=cfg.remote) 99 98 try: 100 99 cmd(f"ip link set dev {cfg.ifname} mtu 9000 xdp obj {prog} sec xdp.frags", shell=True) 101 100 defer(ip, f"link set dev {cfg.ifname} mtu 1500 xdp off") ··· 113 112 except Exception as e: 114 113 raise KsftSkipEx('device does not support offloaded XDP') 115 114 defer(ip, f"link set dev {cfg.ifname} xdpoffload off") 116 - cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 115 + cmd(f"ip link set dev {cfg.remote_ifname} mtu 1500", shell=True, host=cfg.remote) 117 116 118 117 if no_sleep != True: 119 118 time.sleep(10) 120 119 121 120 def get_interface_info(cfg) -> None: 122 - global remote_ifname 123 121 global no_sleep 124 122 125 - remote_info = cmd(f"ip -4 -o addr show to {cfg.remote_addr_v['4']} | awk '{{print $2}}'", shell=True, host=cfg.remote).stdout 126 - remote_ifname = remote_info.rstrip('\n') 127 - if remote_ifname == "": 123 + if cfg.remote_ifname == "": 128 124 raise KsftFailEx('Can not get remote interface') 129 125 local_info = ip("-d link show %s" % (cfg.ifname), json=True)[0] 130 126 if 'parentbus' in local_info and local_info['parentbus'] == "netdevsim": ··· 134 136 cmd(f"ip link set dev {cfg.ifname} xdp off ", shell=True) 135 137 cmd(f"ip link set dev {cfg.ifname} xdpgeneric off ", shell=True) 136 138 cmd(f"ip link set dev {cfg.ifname} xdpoffload off", 
shell=True) 137 - cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 139 + cmd(f"ip link set dev {cfg.remote_ifname} mtu 1500", shell=True, host=cfg.remote) 138 140 139 - def test_default(cfg, netnl) -> None: 141 + def test_default_v4(cfg, netnl) -> None: 142 + cfg.require_ipver("4") 143 + 140 144 _set_offload_checksum(cfg, netnl, "off") 141 145 _test_v4(cfg) 142 - _test_v6(cfg) 143 146 _test_tcp(cfg) 144 147 _set_offload_checksum(cfg, netnl, "on") 145 148 _test_v4(cfg) 149 + _test_tcp(cfg) 150 + 151 + def test_default_v6(cfg, netnl) -> None: 152 + cfg.require_ipver("6") 153 + 154 + _set_offload_checksum(cfg, netnl, "off") 155 + _test_v6(cfg) 156 + _test_tcp(cfg) 157 + _set_offload_checksum(cfg, netnl, "on") 146 158 _test_v6(cfg) 147 159 _test_tcp(cfg) 148 160 ··· 210 202 with NetDrvEpEnv(__file__) as cfg: 211 203 get_interface_info(cfg) 212 204 set_interface_init(cfg) 213 - ksft_run([test_default, 205 + ksft_run([test_default_v4, 206 + test_default_v6, 214 207 test_xdp_generic_sb, 215 208 test_xdp_generic_mb, 216 209 test_xdp_native_sb,
+4 -4
tools/testing/selftests/kvm/arm64/set_id_regs.c
··· 129 129 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, DIT, 0), 130 130 REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, SEL2, 0), 131 131 REG_FTR_BITS(FTR_EXACT, ID_AA64PFR0_EL1, GIC, 0), 132 - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 0), 133 - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0), 134 - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0), 135 - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0), 132 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 1), 133 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 1), 134 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 1), 135 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 1), 136 136 REG_FTR_END, 137 137 }; 138 138
+14 -5
tools/testing/selftests/mm/compaction_test.c
··· 90 90 int compaction_index = 0; 91 91 char nr_hugepages[20] = {0}; 92 92 char init_nr_hugepages[24] = {0}; 93 + char target_nr_hugepages[24] = {0}; 94 + int slen; 93 95 94 96 snprintf(init_nr_hugepages, sizeof(init_nr_hugepages), 95 97 "%lu", initial_nr_hugepages); ··· 108 106 goto out; 109 107 } 110 108 111 - /* Request a large number of huge pages. The Kernel will allocate 112 - as much as it can */ 113 - if (write(fd, "100000", (6*sizeof(char))) != (6*sizeof(char))) { 114 - ksft_print_msg("Failed to write 100000 to /proc/sys/vm/nr_hugepages: %s\n", 115 - strerror(errno)); 109 + /* 110 + * Request huge pages for about half of the free memory. The Kernel 111 + * will allocate as much as it can, and we expect it will get at least 1/3 112 + */ 113 + nr_hugepages_ul = mem_free / hugepage_size / 2; 114 + snprintf(target_nr_hugepages, sizeof(target_nr_hugepages), 115 + "%lu", nr_hugepages_ul); 116 + 117 + slen = strlen(target_nr_hugepages); 118 + if (write(fd, target_nr_hugepages, slen) != slen) { 119 + ksft_print_msg("Failed to write %lu to /proc/sys/vm/nr_hugepages: %s\n", 120 + nr_hugepages_ul, strerror(errno)); 116 121 goto close_fd; 117 122 } 118 123
+10 -6
tools/testing/selftests/mm/guard-regions.c
··· 271 271 self->page_size = (unsigned long)sysconf(_SC_PAGESIZE); 272 272 setup_sighandler(); 273 273 274 - if (variant->backing == ANON_BACKED) 274 + switch (variant->backing) { 275 + case ANON_BACKED: 275 276 return; 276 - 277 - self->fd = open_file( 278 - variant->backing == SHMEM_BACKED ? "/tmp/" : "", 279 - self->path); 277 + case LOCAL_FILE_BACKED: 278 + self->fd = open_file("", self->path); 279 + break; 280 + case SHMEM_BACKED: 281 + self->fd = memfd_create(self->path, 0); 282 + break; 283 + } 280 284 281 285 /* We truncate file to at least 100 pages, tests can modify as needed. */ 282 286 ASSERT_EQ(ftruncate(self->fd, 100 * self->page_size), 0); ··· 1700 1696 char *ptr; 1701 1697 int i; 1702 1698 1703 - if (variant->backing == ANON_BACKED) 1699 + if (variant->backing != LOCAL_FILE_BACKED) 1704 1700 SKIP(return, "Read-only test specific to file-backed"); 1705 1701 1706 1702 /* Map shared so we can populate with pattern, populate it, unmap. */
+13 -1
tools/testing/selftests/mm/pkey-powerpc.h
··· 3 3 #ifndef _PKEYS_POWERPC_H 4 4 #define _PKEYS_POWERPC_H 5 5 6 + #include <sys/stat.h> 7 + 6 8 #ifndef SYS_pkey_alloc 7 9 # define SYS_pkey_alloc 384 8 10 # define SYS_pkey_free 385 ··· 104 102 return; 105 103 } 106 104 105 + #define REPEAT_8(s) s s s s s s s s 106 + #define REPEAT_64(s) REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) \ 107 + REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) REPEAT_8(s) 108 + #define REPEAT_512(s) REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) \ 109 + REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) REPEAT_64(s) 110 + #define REPEAT_4096(s) REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) \ 111 + REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) REPEAT_512(s) 112 + #define REPEAT_16384(s) REPEAT_4096(s) REPEAT_4096(s) \ 113 + REPEAT_4096(s) REPEAT_4096(s) 114 + 107 115 /* 4-byte instructions * 16384 = 64K page */ 108 - #define __page_o_noops() asm(".rept 16384 ; nop; .endr") 116 + #define __page_o_noops() asm(REPEAT_16384("nop\n")) 109 117 110 118 static inline void *malloc_pkey_with_mprotect_subpage(long size, int prot, u16 pkey) 111 119 {
+1
tools/testing/selftests/mm/pkey_util.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 + #define __SANE_USERSPACE_TYPES__ 2 3 #include <sys/syscall.h> 3 4 #include <unistd.h> 4 5
+1
tools/testing/selftests/net/Makefile
··· 31 31 TEST_PROGS += ioam6.sh 32 32 TEST_PROGS += gro.sh 33 33 TEST_PROGS += gre_gso.sh 34 + TEST_PROGS += gre_ipv6_lladdr.sh 34 35 TEST_PROGS += cmsg_so_mark.sh 35 36 TEST_PROGS += cmsg_so_priority.sh 36 37 TEST_PROGS += test_so_rcv.sh
+177
tools/testing/selftests/net/gre_ipv6_lladdr.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source ./lib.sh 5 + 6 + PAUSE_ON_FAIL="no" 7 + 8 + # The trap function handler 9 + # 10 + exit_cleanup_all() 11 + { 12 + cleanup_all_ns 13 + 14 + exit "${EXIT_STATUS}" 15 + } 16 + 17 + # Add fake IPv4 and IPv6 networks on the loopback device, to be used as 18 + # underlay by future GRE devices. 19 + # 20 + setup_basenet() 21 + { 22 + ip -netns "${NS0}" link set dev lo up 23 + ip -netns "${NS0}" address add dev lo 192.0.2.10/24 24 + ip -netns "${NS0}" address add dev lo 2001:db8::10/64 nodad 25 + } 26 + 27 + # Check if network device has an IPv6 link-local address assigned. 28 + # 29 + # Parameters: 30 + # 31 + # * $1: The network device to test 32 + # * $2: An extra regular expression that should be matched (to verify the 33 + # presence of extra attributes) 34 + # * $3: The expected return code from grep (to allow checking the absence of 35 + # a link-local address) 36 + # * $4: The user visible name for the scenario being tested 37 + # 38 + check_ipv6_ll_addr() 39 + { 40 + local DEV="$1" 41 + local EXTRA_MATCH="$2" 42 + local XRET="$3" 43 + local MSG="$4" 44 + 45 + RET=0 46 + set +e 47 + ip -netns "${NS0}" -6 address show dev "${DEV}" scope link | grep "fe80::" | grep -q "${EXTRA_MATCH}" 48 + check_err_fail "${XRET}" $? "" 49 + log_test "${MSG}" 50 + set -e 51 + } 52 + 53 + # Create a GRE device and verify that it gets an IPv6 link-local address as 54 + # expected. 55 + # 56 + # Parameters: 57 + # 58 + # * $1: The device type (gre, ip6gre, gretap or ip6gretap) 59 + # * $2: The local underlay IP address (can be an IPv4, an IPv6 or "any") 60 + # * $3: The remote underlay IP address (can be an IPv4, an IPv6 or "any") 61 + # * $4: The IPv6 interface identifier generation mode to use for the GRE 62 + # device (eui64, none, stable-privacy or random). 
63 + # 64 + test_gre_device() 65 + { 66 + local GRE_TYPE="$1" 67 + local LOCAL_IP="$2" 68 + local REMOTE_IP="$3" 69 + local MODE="$4" 70 + local ADDR_GEN_MODE 71 + local MATCH_REGEXP 72 + local MSG 73 + 74 + ip link add netns "${NS0}" name gretest type "${GRE_TYPE}" local "${LOCAL_IP}" remote "${REMOTE_IP}" 75 + 76 + case "${MODE}" in 77 + "eui64") 78 + ADDR_GEN_MODE=0 79 + MATCH_REGEXP="" 80 + MSG="${GRE_TYPE}, mode: 0 (EUI64), ${LOCAL_IP} -> ${REMOTE_IP}" 81 + XRET=0 82 + ;; 83 + "none") 84 + ADDR_GEN_MODE=1 85 + MATCH_REGEXP="" 86 + MSG="${GRE_TYPE}, mode: 1 (none), ${LOCAL_IP} -> ${REMOTE_IP}" 87 + XRET=1 # No link-local address should be generated 88 + ;; 89 + "stable-privacy") 90 + ADDR_GEN_MODE=2 91 + MATCH_REGEXP="stable-privacy" 92 + MSG="${GRE_TYPE}, mode: 2 (stable privacy), ${LOCAL_IP} -> ${REMOTE_IP}" 93 + XRET=0 94 + # Initialise stable_secret (required for stable-privacy mode) 95 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.stable_secret="2001:db8::abcd" 96 + ;; 97 + "random") 98 + ADDR_GEN_MODE=3 99 + MATCH_REGEXP="stable-privacy" 100 + MSG="${GRE_TYPE}, mode: 3 (random), ${LOCAL_IP} -> ${REMOTE_IP}" 101 + XRET=0 102 + ;; 103 + esac 104 + 105 + # Check that IPv6 link-local address is generated when device goes up 106 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}" 107 + ip -netns "${NS0}" link set dev gretest up 108 + check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "config: ${MSG}" 109 + 110 + # Now disable link-local address generation 111 + ip -netns "${NS0}" link set dev gretest down 112 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode=1 113 + ip -netns "${NS0}" link set dev gretest up 114 + 115 + # Check that link-local address generation works when re-enabled while 116 + # the device is already up 117 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}" 118 + check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "update: ${MSG}"
119 + ip -netns "${NS0}" link del dev gretest 120 + 121 + } 122 + 123 + test_gre4() 124 + { 125 + local GRE_TYPE 126 + local MODE 127 + 128 + for GRE_TYPE in "gre" "gretap"; do 129 + printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n" 130 + 131 + for MODE in "eui64" "none" "stable-privacy" "random"; do 132 + test_gre_device "${GRE_TYPE}" 192.0.2.10 192.0.2.11 "${MODE}" 133 + test_gre_device "${GRE_TYPE}" any 192.0.2.11 "${MODE}" 134 + test_gre_device "${GRE_TYPE}" 192.0.2.10 any "${MODE}" 135 + done 136 + done 137 + } 138 + 139 + test_gre6() 140 + { 141 + local GRE_TYPE 142 + local MODE 143 + 144 + for GRE_TYPE in "ip6gre" "ip6gretap"; do 145 + printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n" 146 + 147 + for MODE in "eui64" "none" "stable-privacy" "random"; do 148 + test_gre_device "${GRE_TYPE}" 2001:db8::10 2001:db8::11 "${MODE}" 149 + test_gre_device "${GRE_TYPE}" any 2001:db8::11 "${MODE}" 150 + test_gre_device "${GRE_TYPE}" 2001:db8::10 any "${MODE}" 151 + done 152 + done 153 + } 154 + 155 + usage() 156 + { 157 + echo "Usage: $0 [-p]" 158 + exit 1 159 + } 160 + 161 + while getopts :p o 162 + do 163 + case $o in 164 + p) PAUSE_ON_FAIL="yes";; 165 + *) usage;; 166 + esac 167 + done 168 + 169 + setup_ns NS0 170 + 171 + set -e 172 + trap exit_cleanup_all EXIT 173 + 174 + setup_basenet 175 + 176 + test_gre4 177 + test_gre6
+35
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 538 538 "$TC qdisc del dev $DUMMY handle 1:0 root", 539 539 "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 540 540 ] 541 + }, 542 + { 543 + "id": "62c4", 544 + "name": "Test HTB with FQ_CODEL - basic functionality", 545 + "category": [ 546 + "qdisc", 547 + "htb", 548 + "fq_codel" 549 + ], 550 + "plugins": { 551 + "requires": [ 552 + "nsPlugin", 553 + "scapyPlugin" 554 + ] 555 + }, 556 + "setup": [ 557 + "$TC qdisc add dev $DEV1 root handle 1: htb default 11", 558 + "$TC class add dev $DEV1 parent 1: classid 1:1 htb rate 10kbit", 559 + "$TC class add dev $DEV1 parent 1:1 classid 1:11 htb rate 10kbit prio 0 quantum 1486", 560 + "$TC qdisc add dev $DEV1 parent 1:11 fq_codel quantum 300 noecn", 561 + "sleep 0.5" 562 + ], 563 + "scapy": { 564 + "iface": "$DEV0", 565 + "count": 5, 566 + "packet": "Ether()/IP(dst='10.10.10.1', src='10.10.10.10')/TCP(sport=12345, dport=80)" 567 + }, 568 + "cmdUnderTest": "$TC -s qdisc show dev $DEV1", 569 + "expExitCode": "0", 570 + "verifyCmd": "$TC -s qdisc show dev $DEV1 | grep -A 5 'qdisc fq_codel'", 571 + "matchPattern": "Sent [0-9]+ bytes [0-9]+ pkt", 572 + "matchCount": "1", 573 + "teardown": [ 574 + "$TC qdisc del dev $DEV1 handle 1: root" 575 + ] 541 576 } 542 577 ]