Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

drivers/net/can/dev.c
b552766c872f ("can: dev: prevent potential information leak in can_fill_info()")
3e77f70e7345 ("can: dev: move driver related infrastructure into separate subdir")
0a042c6ec991 ("can: dev: move netlink related code into seperate file")

Code move.

drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
57ac4a31c483 ("net/mlx5e: Correctly handle changing the number of queues when the interface is down")
214baf22870c ("net/mlx5e: Support HTB offload")

Adjacent code changes.

net/switchdev/switchdev.c
20776b465c0c ("net: switchdev: don't set port_obj_info->handled true when -EOPNOTSUPP")
ffb68fc58e96 ("net: switchdev: remove the transaction structure from port object notifiers")
bae33f2b5afe ("net: switchdev: remove the transaction structure from port attributes")

The transaction parameter gets dropped; otherwise keep the fix.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3966 -2259
-3
.mailmap
··· 9 9 # 10 10 # Please keep this list dictionary sorted. 11 11 # 12 - # This comment is parsed by git-shortlog: 13 - # repo-abbrev: /pub/scm/linux/kernel/git/ 14 - # 15 12 Aaron Durbin <adurbin@google.com> 16 13 Adam Oldham <oldhamca@gmail.com> 17 14 Adam Radford <aradford@gmail.com>
+3 -2
Documentation/ABI/testing/sysfs-devices-consumer
··· 4 4 Description: 5 5 The /sys/devices/.../consumer:<consumer> are symlinks to device 6 6 links where this device is the supplier. <consumer> denotes the 7 - name of the consumer in that device link. There can be zero or 8 - more of these symlinks for a given device. 7 + name of the consumer in that device link and is of the form 8 + bus:device name. There can be zero or more of these symlinks 9 + for a given device.
+3 -2
Documentation/ABI/testing/sysfs-devices-supplier
··· 4 4 Description: 5 5 The /sys/devices/.../supplier:<supplier> are symlinks to device 6 6 links where this device is the consumer. <supplier> denotes the 7 - name of the supplier in that device link. There can be zero or 8 - more of these symlinks for a given device. 7 + name of the supplier in that device link and is of the form 8 + bus:device name. There can be zero or more of these symlinks 9 + for a given device.
+22 -14
Documentation/ABI/testing/sysfs-driver-ufs
··· 916 916 Contact: Subhash Jadavani <subhashj@codeaurora.org> 917 917 Description: This entry could be used to set or show the UFS device 918 918 runtime power management level. The current driver 919 - implementation supports 6 levels with next target states: 919 + implementation supports 7 levels with next target states: 920 920 921 921 == ==================================================== 922 - 0 an UFS device will stay active, an UIC link will 922 + 0 UFS device will stay active, UIC link will 923 923 stay active 924 - 1 an UFS device will stay active, an UIC link will 924 + 1 UFS device will stay active, UIC link will 925 925 hibernate 926 - 2 an UFS device will moved to sleep, an UIC link will 926 + 2 UFS device will be moved to sleep, UIC link will 927 927 stay active 928 - 3 an UFS device will moved to sleep, an UIC link will 928 + 3 UFS device will be moved to sleep, UIC link will 929 929 hibernate 930 - 4 an UFS device will be powered off, an UIC link will 930 + 4 UFS device will be powered off, UIC link will 931 931 hibernate 932 - 5 an UFS device will be powered off, an UIC link will 932 + 5 UFS device will be powered off, UIC link will 933 933 be powered off 934 + 6 UFS device will be moved to deep sleep, UIC link 935 + will be powered off. Note, deep sleep might not be 936 + supported in which case this value will not be 937 + accepted 934 938 == ==================================================== 935 939 936 940 What: /sys/bus/platform/drivers/ufshcd/*/rpm_target_dev_state ··· 958 954 Contact: Subhash Jadavani <subhashj@codeaurora.org> 959 955 Description: This entry could be used to set or show the UFS device 960 956 system power management level. 
The current driver 961 - implementation supports 6 levels with next target states: 957 + implementation supports 7 levels with next target states: 962 958 963 959 == ==================================================== 964 - 0 an UFS device will stay active, an UIC link will 960 + 0 UFS device will stay active, UIC link will 965 961 stay active 966 - 1 an UFS device will stay active, an UIC link will 962 + 1 UFS device will stay active, UIC link will 967 963 hibernate 968 - 2 an UFS device will moved to sleep, an UIC link will 964 + 2 UFS device will be moved to sleep, UIC link will 969 965 stay active 970 - 3 an UFS device will moved to sleep, an UIC link will 966 + 3 UFS device will be moved to sleep, UIC link will 971 967 hibernate 972 - 4 an UFS device will be powered off, an UIC link will 968 + 4 UFS device will be powered off, UIC link will 973 969 hibernate 974 - 5 an UFS device will be powered off, an UIC link will 970 + 5 UFS device will be powered off, UIC link will 975 971 be powered off 972 + 6 UFS device will be moved to deep sleep, UIC link 973 + will be powered off. Note, deep sleep might not be 974 + supported in which case this value will not be 975 + accepted 976 976 == ==================================================== 977 977 978 978 What: /sys/bus/platform/drivers/ufshcd/*/spm_target_dev_state
+9 -3
Documentation/admin-guide/device-mapper/dm-integrity.rst
··· 177 177 The bitmap flush interval in milliseconds. The metadata buffers 178 178 are synchronized when this interval expires. 179 179 180 + allow_discards 181 + Allow block discard requests (a.k.a. TRIM) for the integrity device. 182 + Discards are only allowed to devices using internal hash. 183 + 180 184 fix_padding 181 185 Use a smaller padding of the tag area that is more 182 186 space-efficient. If this option is not present, large padding is 183 187 used - that is for compatibility with older kernels. 184 188 185 - allow_discards 186 - Allow block discard requests (a.k.a. TRIM) for the integrity device. 187 - Discards are only allowed to devices using internal hash. 189 + legacy_recalculate 190 + Allow recalculating of volumes with HMAC keys. This is disabled by 191 + default for security reasons - an attacker could modify the volume, 192 + set recalc_sector to zero, and the kernel would not detect the 193 + modification. 188 194 189 195 The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and 190 196 allow_discards can be changed when reloading the target (load an inactive
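The option names above come straight from the documentation hunk. As a hedged illustration only (the device path, length, and tag size are made up; the positional `integrity` target arguments follow the layout described elsewhere in the same dm-integrity document), a table line enabling discards with the compact padding might look like:

```
# 0 <length> integrity <dev> <offset> <tag_size> <mode> <#opt_args> <opt args...>
0 2097152 integrity /dev/sdb 0 32 J 2 allow_discards fix_padding
```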
+6 -21
Documentation/dev-tools/kasan.rst
··· 160 160 boot parameters that allow to disable KASAN competely or otherwise control 161 161 particular KASAN features. 162 162 163 - The things that can be controlled are: 163 + - ``kasan=off`` or ``=on`` controls whether KASAN is enabled (default: ``on``). 164 164 165 - 1. Whether KASAN is enabled at all. 166 - 2. Whether KASAN collects and saves alloc/free stacks. 167 - 3. Whether KASAN panics on a detected bug or not. 165 + - ``kasan.stacktrace=off`` or ``=on`` disables or enables alloc and free stack 166 + traces collection (default: ``on`` for ``CONFIG_DEBUG_KERNEL=y``, otherwise 167 + ``off``). 168 168 169 - The ``kasan.mode`` boot parameter allows to choose one of three main modes: 170 - 171 - - ``kasan.mode=off`` - KASAN is disabled, no tag checks are performed 172 - - ``kasan.mode=prod`` - only essential production features are enabled 173 - - ``kasan.mode=full`` - all KASAN features are enabled 174 - 175 - The chosen mode provides default control values for the features mentioned 176 - above. However it's also possible to override the default values by providing: 177 - 178 - - ``kasan.stacktrace=off`` or ``=on`` - enable alloc/free stack collection 179 - (default: ``on`` for ``mode=full``, 180 - otherwise ``off``) 181 - - ``kasan.fault=report`` or ``=panic`` - only print KASAN report or also panic 182 - (default: ``report``) 183 - 184 - If ``kasan.mode`` parameter is not provided, it defaults to ``full`` when 185 - ``CONFIG_DEBUG_KERNEL`` is enabled, and to ``prod`` otherwise. 169 + - ``kasan.fault=report`` or ``=panic`` controls whether to only print a KASAN 170 + report or also panic the kernel (default: ``report``). 186 171 187 172 For developers 188 173 ~~~~~~~~~~~~~~
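Taken together, the three parameters described in the rewritten hunk combine on the kernel command line like this (a config fragment; values are exactly those listed in the documentation text above):

```
# Keep KASAN on, skip alloc/free stack trace collection, panic on the first report:
kasan=on kasan.stacktrace=off kasan.fault=panic
```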
+57
Documentation/dev-tools/kunit/usage.rst
··· 522 522 * E.g. if we wanted to also test ``sha256sum``, we could add a ``sha256`` 523 523 field and reuse ``cases``. 524 524 525 + * be converted to a "parameterized test", see below. 526 + 527 + Parameterized Testing 528 + ~~~~~~~~~~~~~~~~~~~~~ 529 + 530 + The table-driven testing pattern is common enough that KUnit has special 531 + support for it. 532 + 533 + Reusing the same ``cases`` array from above, we can write the test as a 534 + "parameterized test" with the following. 535 + 536 + .. code-block:: c 537 + 538 + // This is copy-pasted from above. 539 + struct sha1_test_case { 540 + const char *str; 541 + const char *sha1; 542 + }; 543 + struct sha1_test_case cases[] = { 544 + { 545 + .str = "hello world", 546 + .sha1 = "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed", 547 + }, 548 + { 549 + .str = "hello world!", 550 + .sha1 = "430ce34d020724ed75a196dfc2ad67c77772d169", 551 + }, 552 + }; 553 + 554 + // Need a helper function to generate a name for each test case. 555 + static void case_to_desc(const struct sha1_test_case *t, char *desc) 556 + { 557 + strcpy(desc, t->str); 558 + } 559 + // Creates `sha1_gen_params()` to iterate over `cases`. 560 + KUNIT_ARRAY_PARAM(sha1, cases, case_to_desc); 561 + 562 + // Looks no different from a normal test. 563 + static void sha1_test(struct kunit *test) 564 + { 565 + // This function can just contain the body of the for-loop. 566 + // The former `cases[i]` is accessible under test->param_value. 567 + char out[40]; 568 + struct sha1_test_case *test_param = (struct sha1_test_case *)(test->param_value); 569 + 570 + sha1sum(test_param->str, out); 571 + KUNIT_EXPECT_STREQ_MSG(test, (char *)out, test_param->sha1, 572 + "sha1sum(%s)", test_param->str); 573 + } 574 + 575 + // Instead of KUNIT_CASE, we use KUNIT_CASE_PARAM and pass in the 576 + // function declared by KUNIT_ARRAY_PARAM. 577 + static struct kunit_case sha1_test_cases[] = { 578 + KUNIT_CASE_PARAM(sha1_test, sha1_gen_params), 579 + {} 580 + }; 581 + 525 582 .. 
_kunit-on-non-uml: 526 583 527 584 KUnit on non-UML architectures
+2 -2
Documentation/devicetree/bindings/iio/accel/bosch,bma255.yaml
··· 16 16 properties: 17 17 compatible: 18 18 enum: 19 - - bosch,bmc150 20 - - bosch,bmi055 19 + - bosch,bmc150_accel 20 + - bosch,bmi055_accel 21 21 - bosch,bma255 22 22 - bosch,bma250e 23 23 - bosch,bma222
+2 -2
Documentation/devicetree/bindings/sound/mt8192-mt6359-rt1015-rt5682.yaml
··· 7 7 title: Mediatek MT8192 with MT6359, RT1015 and RT5682 ASoC sound card driver 8 8 9 9 maintainers: 10 - - Jiaxin Yu <jiaxin.yu@mediatek.com> 11 - - Shane Chien <shane.chien@mediatek.com> 10 + - Jiaxin Yu <jiaxin.yu@mediatek.com> 11 + - Shane Chien <shane.chien@mediatek.com> 12 12 13 13 description: 14 14 This binding describes the MT8192 sound card.
+12
Documentation/networking/ip-sysctl.rst
··· 1807 1807 ``conf/default/*``: 1808 1808 Change the interface-specific default settings. 1809 1809 1810 + These settings would be used during creating new interfaces. 1811 + 1810 1812 1811 1813 ``conf/all/*``: 1812 1814 Change all the interface-specific settings. 1813 1815 1814 1816 [XXX: Other special features than forwarding?] 1817 + 1818 + conf/all/disable_ipv6 - BOOLEAN 1819 + Changing this value is same as changing ``conf/default/disable_ipv6`` 1820 + setting and also all per-interface ``disable_ipv6`` settings to the same 1821 + value. 1822 + 1823 + Reading this value does not have any particular meaning. It does not say 1824 + whether IPv6 support is enabled or disabled. Returned value can be 1 1825 + also in the case when some interface has ``disable_ipv6`` set to 0 and 1826 + has configured IPv6 addresses. 1815 1827 1816 1828 conf/all/forwarding - BOOLEAN 1817 1829 Enable global IPv6 forwarding between all interfaces.
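As the new paragraph notes, writing `conf/all/disable_ipv6` fans out to the default and per-interface settings, while reading it back says nothing about global IPv6 state. The corresponding sysctl knobs, shown purely as illustrative admin commands (root required):

```
sysctl -w net.ipv6.conf.all.disable_ipv6=1   # also sets default/* and every per-interface value
sysctl -n net.ipv6.conf.all.disable_ipv6     # readback is not a meaningful global status
```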
+11 -10
Documentation/virt/kvm/api.rst
··· 360 360 memory slot. Ensure the entire structure is cleared to avoid padding 361 361 issues. 362 362 363 - If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 specifies 364 - the address space for which you want to return the dirty bitmap. 365 - They must be less than the value that KVM_CHECK_EXTENSION returns for 366 - the KVM_CAP_MULTI_ADDRESS_SPACE capability. 363 + If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 of slot field specifies 364 + the address space for which you want to return the dirty bitmap. See 365 + KVM_SET_USER_MEMORY_REGION for details on the usage of slot field. 367 366 368 367 The bits in the dirty bitmap are cleared before the ioctl returns, unless 369 368 KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is enabled. For more information, ··· 1280 1281 the entire memory slot size. Any object may back this memory, including 1281 1282 anonymous memory, ordinary files, and hugetlbfs. 1282 1283 1284 + On architectures that support a form of address tagging, userspace_addr must 1285 + be an untagged address. 1286 + 1283 1287 It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr 1284 1288 be identical. This allows large pages in the guest to be backed by large 1285 1289 pages in the host. ··· 1335 1333 1336 1334 :Capability: KVM_CAP_ENABLE_CAP_VM 1337 1335 :Architectures: all 1338 - :Type: vcpu ioctl 1336 + :Type: vm ioctl 1339 1337 :Parameters: struct kvm_enable_cap (in) 1340 1338 :Returns: 0 on success; -1 on error 1341 1339 ··· 4434 4432 :Capability: KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 4435 4433 :Architectures: x86, arm, arm64, mips 4436 4434 :Type: vm ioctl 4437 - :Parameters: struct kvm_dirty_log (in) 4435 + :Parameters: struct kvm_clear_dirty_log (in) 4438 4436 :Returns: 0 on success, -1 on error 4439 4437 4440 4438 :: ··· 4461 4459 (for example via write-protection, or by clearing the dirty bit in 4462 4460 a page table entry). 
4463 4461 4464 - If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 specifies 4465 - the address space for which you want to return the dirty bitmap. 4466 - They must be less than the value that KVM_CHECK_EXTENSION returns for 4467 - the KVM_CAP_MULTI_ADDRESS_SPACE capability. 4462 + If KVM_CAP_MULTI_ADDRESS_SPACE is available, bits 16-31 of slot field specifies 4463 + the address space for which you want to clear the dirty status. See 4464 + KVM_SET_USER_MEMORY_REGION for details on the usage of slot field. 4468 4465 4469 4466 This ioctl is mostly useful when KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 4470 4467 is enabled; for more information, see the description of the capability.
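The hunks above replace the duplicated bits 16-31 wording with a pointer to KVM_SET_USER_MEMORY_REGION. A minimal sketch of the packing that text describes — the helper names are illustrative, not from the KVM uapi headers:

```c
#include <stdint.h>

/* Pack an address-space id into bits 16-31 of the slot field, as described
 * for KVM_CAP_MULTI_ADDRESS_SPACE; the low 16 bits remain the slot number. */
static inline uint32_t kvm_pack_slot(uint16_t as_id, uint16_t slot_id)
{
	return ((uint32_t)as_id << 16) | slot_id;
}

/* Recover the address-space id from a packed slot value. */
static inline uint16_t kvm_slot_as_id(uint32_t slot)
{
	return (uint16_t)(slot >> 16);
}
```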
+19 -10
MAINTAINERS
··· 3247 3247 S: Supported 3248 3248 W: http://sourceforge.net/projects/bonding/ 3249 3249 F: drivers/net/bonding/ 3250 + F: include/net/bonding.h 3250 3251 F: include/uapi/linux/if_bonding.h 3251 3252 3252 3253 BOSCH SENSORTEC BMA400 ACCELEROMETER IIO DRIVER ··· 3421 3420 F: drivers/pci/controller/pcie-brcmstb.c 3422 3421 F: drivers/staging/vc04_services 3423 3422 N: bcm2711 3424 - N: bcm2835 3423 + N: bcm283* 3425 3424 3426 3425 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE 3427 3426 M: Florian Fainelli <f.fainelli@gmail.com> ··· 3900 3899 F: drivers/mtd/nand/raw/cadence-nand-controller.c 3901 3900 3902 3901 CADENCE USB3 DRD IP DRIVER 3903 - M: Peter Chen <peter.chen@nxp.com> 3902 + M: Peter Chen <peter.chen@kernel.org> 3904 3903 M: Pawel Laszczak <pawell@cadence.com> 3905 3904 R: Roger Quadros <rogerq@kernel.org> 3906 3905 R: Aswath Govindraju <a-govindraju@ti.com> ··· 4185 4184 F: Documentation/translations/zh_CN/ 4186 4185 4187 4186 CHIPIDEA USB HIGH SPEED DUAL ROLE CONTROLLER 4188 - M: Peter Chen <Peter.Chen@nxp.com> 4187 + M: Peter Chen <peter.chen@kernel.org> 4189 4188 L: linux-usb@vger.kernel.org 4190 4189 S: Maintained 4191 4190 T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git ··· 4335 4334 B: https://github.com/ClangBuiltLinux/linux/issues 4336 4335 C: irc://chat.freenode.net/clangbuiltlinux 4337 4336 F: Documentation/kbuild/llvm.rst 4337 + F: include/linux/compiler-clang.h 4338 4338 F: scripts/clang-tools/ 4339 + F: scripts/clang-version.sh 4339 4340 F: scripts/lld-version.sh 4340 4341 K: \b(?i:clang|llvm)\b 4341 4342 ··· 8457 8454 F: include/linux/i3c/ 8458 8455 8459 8456 IA64 (Itanium) PLATFORM 8460 - M: Tony Luck <tony.luck@intel.com> 8461 - M: Fenghua Yu <fenghua.yu@intel.com> 8462 8457 L: linux-ia64@vger.kernel.org 8463 - S: Odd Fixes 8464 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux.git 8458 + S: Orphan 8465 8459 F: Documentation/ia64/ 8466 8460 F: arch/ia64/ 8467 8461 ··· 12436 12436 NETWORKING 
[IPv4/IPv6] 12437 12437 M: "David S. Miller" <davem@davemloft.net> 12438 12438 M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> 12439 + M: David Ahern <dsahern@kernel.org> 12439 12440 L: netdev@vger.kernel.org 12440 12441 S: Maintained 12441 12442 T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git ··· 14528 14527 F: drivers/crypto/qat/ 14529 14528 14530 14529 QCOM AUDIO (ASoC) DRIVERS 14531 - M: Patrick Lai <plai@codeaurora.org> 14530 + M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 14532 14531 M: Banajit Goswami <bgoswami@codeaurora.org> 14533 14532 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 14534 14533 S: Supported 14534 + F: sound/soc/codecs/lpass-va-macro.c 14535 + F: sound/soc/codecs/lpass-wsa-macro.* 14536 + F: sound/soc/codecs/msm8916-wcd-analog.c 14537 + F: sound/soc/codecs/msm8916-wcd-digital.c 14538 + F: sound/soc/codecs/wcd9335.* 14539 + F: sound/soc/codecs/wcd934x.c 14540 + F: sound/soc/codecs/wcd-clsh-v2.* 14541 + F: sound/soc/codecs/wsa881x.c 14535 14542 F: sound/soc/qcom/ 14536 14543 14537 14544 QCOM IPA DRIVER ··· 16991 16982 M: Arnaud Pouliquen <arnaud.pouliquen@st.com> 16992 16983 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 16993 16984 S: Maintained 16994 - F: Documentation/devicetree/bindings/sound/st,stm32-*.txt 16985 + F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml 16995 16986 F: sound/soc/stm/ 16996 16987 16997 16988 STM32 TIMER/LPTIMER DRIVERS ··· 18444 18435 F: drivers/usb/host/ohci* 18445 18436 18446 18437 USB OTG FSM (Finite State Machine) 18447 - M: Peter Chen <Peter.Chen@nxp.com> 18438 + M: Peter Chen <peter.chen@kernel.org> 18448 18439 L: linux-usb@vger.kernel.org 18449 18440 S: Maintained 18450 18441 T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 11 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION*
+7
arch/arm/boot/dts/imx6q-tbs2910.dts
··· 16 16 stdout-path = &uart1; 17 17 }; 18 18 19 + aliases { 20 + mmc0 = &usdhc2; 21 + mmc1 = &usdhc3; 22 + mmc2 = &usdhc4; 23 + /delete-property/ mmc3; 24 + }; 25 + 19 26 memory@10000000 { 20 27 device_type = "memory"; 21 28 reg = <0x10000000 0x80000000>;
+1 -1
arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
··· 418 418 419 419 /* VDD_AUD_1P8: Audio codec */ 420 420 reg_aud_1p8v: ldo3 { 421 - regulator-name = "vdd1p8"; 421 + regulator-name = "vdd1p8a"; 422 422 regulator-min-microvolt = <1800000>; 423 423 regulator-max-microvolt = <1800000>; 424 424 regulator-boot-on;
+3 -3
arch/arm/boot/dts/imx6qdl-kontron-samx6i.dtsi
··· 137 137 138 138 lcd_backlight: lcd-backlight { 139 139 compatible = "pwm-backlight"; 140 - pwms = <&pwm4 0 5000000>; 140 + pwms = <&pwm4 0 5000000 0>; 141 141 pwm-names = "LCD_BKLT_PWM"; 142 142 143 143 brightness-levels = <0 10 20 30 40 50 60 70 80 90 100>; ··· 167 167 i2c-gpio,delay-us = <2>; /* ~100 kHz */ 168 168 #address-cells = <1>; 169 169 #size-cells = <0>; 170 - status = "disabld"; 170 + status = "disabled"; 171 171 }; 172 172 173 173 i2c_cam: i2c-gpio-cam { ··· 179 179 i2c-gpio,delay-us = <2>; /* ~100 kHz */ 180 180 #address-cells = <1>; 181 181 #size-cells = <0>; 182 - status = "disabld"; 182 + status = "disabled"; 183 183 }; 184 184 }; 185 185
+10 -2
arch/arm/boot/dts/imx6qdl-sr-som.dtsi
··· 53 53 &fec { 54 54 pinctrl-names = "default"; 55 55 pinctrl-0 = <&pinctrl_microsom_enet_ar8035>; 56 - phy-handle = <&phy>; 57 56 phy-mode = "rgmii-id"; 58 57 phy-reset-duration = <2>; 59 58 phy-reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>; ··· 62 63 #address-cells = <1>; 63 64 #size-cells = <0>; 64 65 65 - phy: ethernet-phy@0 { 66 + /* 67 + * The PHY can appear at either address 0 or 4 due to the 68 + * configuration (LED) pin not being pulled sufficiently. 69 + */ 70 + ethernet-phy@0 { 66 71 reg = <0>; 72 + qca,clk-out-frequency = <125000000>; 73 + }; 74 + 75 + ethernet-phy@4 { 76 + reg = <4>; 67 77 qca,clk-out-frequency = <125000000>; 68 78 }; 69 79 };
+1
arch/arm/boot/dts/imx7d-flex-concentrator.dts
··· 115 115 compatible = "nxp,pcf2127"; 116 116 reg = <0>; 117 117 spi-max-frequency = <2000000>; 118 + reset-source; 118 119 }; 119 120 }; 120 121
+38
arch/arm/boot/dts/ste-db8500.dtsi
··· 12 12 200000 0>; 13 13 }; 14 14 }; 15 + 16 + reserved-memory { 17 + #address-cells = <1>; 18 + #size-cells = <1>; 19 + ranges; 20 + 21 + /* Modem trace memory */ 22 + ram@06000000 { 23 + reg = <0x06000000 0x00f00000>; 24 + no-map; 25 + }; 26 + 27 + /* Modem shared memory */ 28 + ram@06f00000 { 29 + reg = <0x06f00000 0x00100000>; 30 + no-map; 31 + }; 32 + 33 + /* Modem private memory */ 34 + ram@07000000 { 35 + reg = <0x07000000 0x01000000>; 36 + no-map; 37 + }; 38 + 39 + /* 40 + * Initial Secure Software ISSW memory 41 + * 42 + * This is probably only used if the kernel tries 43 + * to actually call into trustzone to run secure 44 + * applications, which the mainline kernel probably 45 + * will not do on this old chipset. But you can never 46 + * be too careful, so reserve this memory anyway. 47 + */ 48 + ram@17f00000 { 49 + reg = <0x17f00000 0x00100000>; 50 + no-map; 51 + }; 52 + }; 15 53 };
+38
arch/arm/boot/dts/ste-db8520.dtsi
··· 12 12 200000 0>; 13 13 }; 14 14 }; 15 + 16 + reserved-memory { 17 + #address-cells = <1>; 18 + #size-cells = <1>; 19 + ranges; 20 + 21 + /* Modem trace memory */ 22 + ram@06000000 { 23 + reg = <0x06000000 0x00f00000>; 24 + no-map; 25 + }; 26 + 27 + /* Modem shared memory */ 28 + ram@06f00000 { 29 + reg = <0x06f00000 0x00100000>; 30 + no-map; 31 + }; 32 + 33 + /* Modem private memory */ 34 + ram@07000000 { 35 + reg = <0x07000000 0x01000000>; 36 + no-map; 37 + }; 38 + 39 + /* 40 + * Initial Secure Software ISSW memory 41 + * 42 + * This is probably only used if the kernel tries 43 + * to actually call into trustzone to run secure 44 + * applications, which the mainline kernel probably 45 + * will not do on this old chipset. But you can never 46 + * be too careful, so reserve this memory anyway. 47 + */ 48 + ram@17f00000 { 49 + reg = <0x17f00000 0x00100000>; 50 + no-map; 51 + }; 52 + }; 15 53 };
+35
arch/arm/boot/dts/ste-db9500.dtsi
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include "ste-dbx5x0.dtsi" 4 + 5 + / { 6 + cpus { 7 + cpu@300 { 8 + /* cpufreq controls */ 9 + operating-points = <1152000 0 10 + 800000 0 11 + 400000 0 12 + 200000 0>; 13 + }; 14 + }; 15 + 16 + reserved-memory { 17 + #address-cells = <1>; 18 + #size-cells = <1>; 19 + ranges; 20 + 21 + /* 22 + * Initial Secure Software ISSW memory 23 + * 24 + * This is probably only used if the kernel tries 25 + * to actually call into trustzone to run secure 26 + * applications, which the mainline kernel probably 27 + * will not do on this old chipset. But you can never 28 + * be too careful, so reserve this memory anyway. 29 + */ 30 + ram@17f00000 { 31 + reg = <0x17f00000 0x00100000>; 32 + no-map; 33 + }; 34 + }; 35 + };
+1 -1
arch/arm/boot/dts/ste-snowball.dts
··· 4 4 */ 5 5 6 6 /dts-v1/; 7 - #include "ste-db8500.dtsi" 7 + #include "ste-db9500.dtsi" 8 8 #include "ste-href-ab8500.dtsi" 9 9 #include "ste-href-family-pinctrl.dtsi" 10 10
+1
arch/arm/mach-imx/suspend-imx6.S
··· 67 67 #define MX6Q_CCM_CCR 0x0 68 68 69 69 .align 3 70 + .arm 70 71 71 72 .macro sync_l2_cache 72 73
+6 -1
arch/arm64/boot/dts/broadcom/stingray/stingray-usb.dtsi
··· 4 4 */ 5 5 usb { 6 6 compatible = "simple-bus"; 7 - dma-ranges; 8 7 #address-cells = <2>; 9 8 #size-cells = <2>; 10 9 ranges = <0x0 0x0 0x0 0x68500000 0x0 0x00400000>; 10 + 11 + /* 12 + * Internally, USB bus to the interconnect can only address up 13 + * to 40-bit 14 + */ 15 + dma-ranges = <0 0 0 0 0x100 0x0>; 11 16 12 17 usbphy0: usb-phy@0 { 13 18 compatible = "brcm,sr-usb-combo-phy";
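The new `dma-ranges` size cells `<0x100 0x0>` encode exactly the 40-bit window the comment describes: with two size cells, the first cell contributes bits 32 and up. A quick check of that arithmetic:

```c
#include <stdint.h>

/* Combine two 32-bit devicetree cells into one 64-bit value:
 * <0x100 0x0> means high cell 0x100, low cell 0x0. */
static inline uint64_t dt_cells_to_u64(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```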
+1 -1
arch/arm64/boot/dts/freescale/fsl-ls1028a.dtsi
··· 101 101 reboot { 102 102 compatible ="syscon-reboot"; 103 103 regmap = <&rst>; 104 - offset = <0xb0>; 104 + offset = <0>; 105 105 mask = <0x02>; 106 106 }; 107 107
+1 -1
arch/arm64/boot/dts/freescale/imx8mn.dtsi
··· 253 253 #size-cells = <1>; 254 254 ranges; 255 255 256 - spba: bus@30000000 { 256 + spba: spba-bus@30000000 { 257 257 compatible = "fsl,spba-bus", "simple-bus"; 258 258 #address-cells = <1>; 259 259 #size-cells = <1>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp.dtsi
··· 266 266 #gpio-cells = <2>; 267 267 interrupt-controller; 268 268 #interrupt-cells = <2>; 269 - gpio-ranges = <&iomuxc 0 56 26>, <&iomuxc 0 144 4>; 269 + gpio-ranges = <&iomuxc 0 56 26>, <&iomuxc 26 144 4>; 270 270 }; 271 271 272 272 gpio4: gpio@30230000 {
-2
arch/arm64/configs/defconfig
··· 991 991 CONFIG_ARCH_TEGRA_186_SOC=y 992 992 CONFIG_ARCH_TEGRA_194_SOC=y 993 993 CONFIG_ARCH_TEGRA_234_SOC=y 994 - CONFIG_ARCH_K3_AM6_SOC=y 995 - CONFIG_ARCH_K3_J721E_SOC=y 996 994 CONFIG_TI_SCI_PM_DOMAINS=y 997 995 CONFIG_EXTCON_PTN5150=m 998 996 CONFIG_EXTCON_USB_GPIO=y
+2 -2
arch/arm64/kernel/probes/kprobes.c
··· 352 352 unsigned long addr = instruction_pointer(regs); 353 353 struct kprobe *cur = kprobe_running(); 354 354 355 - if (cur && (kcb->kprobe_status == KPROBE_HIT_SS) 356 - && ((unsigned long)&cur->ainsn.api.insn[1] == addr)) { 355 + if (cur && (kcb->kprobe_status & (KPROBE_HIT_SS | KPROBE_REENTER)) && 356 + ((unsigned long)&cur->ainsn.api.insn[1] == addr)) { 357 357 kprobes_restore_local_irqflag(kcb, regs); 358 358 post_kprobe_handler(cur, kcb, regs); 359 359
+2 -1
arch/arm64/kvm/arm.c
··· 1396 1396 * Calculate the raw per-cpu offset without a translation from the 1397 1397 * kernel's mapping to the linear mapping, and store it in tpidr_el2 1398 1398 * so that we can use adr_l to access per-cpu variables in EL2. 1399 + * Also drop the KASAN tag which gets in the way... 1399 1400 */ 1400 - params->tpidr_el2 = (unsigned long)this_cpu_ptr_nvhe_sym(__per_cpu_start) - 1401 + params->tpidr_el2 = (unsigned long)kasan_reset_tag(this_cpu_ptr_nvhe_sym(__per_cpu_start)) - 1401 1402 (unsigned long)kvm_ksym_ref(CHOOSE_NVHE_SYM(__per_cpu_start)); 1402 1403 1403 1404 params->mair_el2 = read_sysreg(mair_el1);
+5 -8
arch/arm64/kvm/hyp/nvhe/psci-relay.c
··· 77 77 cpu_reg(host_ctxt, 2), cpu_reg(host_ctxt, 3)); 78 78 } 79 79 80 - static __noreturn unsigned long psci_forward_noreturn(struct kvm_cpu_context *host_ctxt) 81 - { 82 - psci_forward(host_ctxt); 83 - hyp_panic(); /* unreachable */ 84 - } 85 - 86 80 static unsigned int find_cpu_id(u64 mpidr) 87 81 { 88 82 unsigned int i; ··· 245 251 case PSCI_0_2_FN_MIGRATE_INFO_TYPE: 246 252 case PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU: 247 253 return psci_forward(host_ctxt); 254 + /* 255 + * SYSTEM_OFF/RESET should not return according to the spec. 256 + * Allow it so as to stay robust to broken firmware. 257 + */ 248 258 case PSCI_0_2_FN_SYSTEM_OFF: 249 259 case PSCI_0_2_FN_SYSTEM_RESET: 250 - psci_forward_noreturn(host_ctxt); 251 - unreachable(); 260 + return psci_forward(host_ctxt); 252 261 case PSCI_0_2_FN64_CPU_SUSPEND: 253 262 return psci_cpu_suspend(func_id, host_ctxt); 254 263 case PSCI_0_2_FN64_CPU_ON:
+7 -3
arch/arm64/kvm/pmu-emul.c
··· 788 788 { 789 789 unsigned long *bmap = vcpu->kvm->arch.pmu_filter; 790 790 u64 val, mask = 0; 791 - int base, i; 791 + int base, i, nr_events; 792 792 793 793 if (!pmceid1) { 794 794 val = read_sysreg(pmceid0_el0); ··· 801 801 if (!bmap) 802 802 return val; 803 803 804 + nr_events = kvm_pmu_event_mask(vcpu->kvm) + 1; 805 + 804 806 for (i = 0; i < 32; i += 8) { 805 807 u64 byte; 806 808 807 809 byte = bitmap_get_value8(bmap, base + i); 808 810 mask |= byte << i; 809 - byte = bitmap_get_value8(bmap, 0x4000 + base + i); 810 - mask |= byte << (32 + i); 811 + if (nr_events >= (0x4000 + base + 32)) { 812 + byte = bitmap_get_value8(bmap, 0x4000 + base + i); 813 + mask |= byte << (32 + i); 814 + } 811 815 } 812 816 813 817 return val & mask;
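The fix above stops the PMCEID helper from reading past the end of the event filter bitmap when the PMU implements fewer than 0x4000+ events. A hedged user-space re-implementation of the fixed masking loop, with a plain `uint64_t` array standing in for the kernel's bitmap API:

```c
#include <stdint.h>

/* Read 8 bits starting at bit 'start' from a little-endian bitmap. */
static uint64_t get_byte(const uint64_t *bmap, unsigned int start)
{
	unsigned int word = start / 64, off = start % 64;
	uint64_t v = bmap[word] >> off;

	if (off > 56)
		v |= bmap[word + 1] << (64 - off);
	return v & 0xff;
}

/* Mirror of the fixed hunk: the extended PMCEID event range (IDs 0x4000+)
 * is only folded into the mask when the PMU implements that many events. */
uint64_t build_pmceid_mask(const uint64_t *bmap, unsigned int base,
			   unsigned int nr_events)
{
	uint64_t mask = 0;

	for (unsigned int i = 0; i < 32; i += 8) {
		mask |= get_byte(bmap, base + i) << i;
		if (nr_events >= 0x4000 + base + 32)
			mask |= get_byte(bmap, 0x4000 + base + i) << (32 + i);
	}
	return mask;
}
```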
+56 -37
arch/arm64/kvm/sys_regs.c
··· 43 43 * 64bit interface. 44 44 */ 45 45 46 + #define reg_to_encoding(x) \ 47 + sys_reg((u32)(x)->Op0, (u32)(x)->Op1, \ 48 + (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2) 49 + 46 50 static bool read_from_write_only(struct kvm_vcpu *vcpu, 47 51 struct sys_reg_params *params, 48 52 const struct sys_reg_desc *r) ··· 277 273 const struct sys_reg_desc *r) 278 274 { 279 275 u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 280 - u32 sr = sys_reg((u32)r->Op0, (u32)r->Op1, 281 - (u32)r->CRn, (u32)r->CRm, (u32)r->Op2); 276 + u32 sr = reg_to_encoding(r); 282 277 283 278 if (!(val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))) { 284 279 kvm_inject_undefined(vcpu); ··· 593 590 vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1); 594 591 } 595 592 593 + static unsigned int pmu_visibility(const struct kvm_vcpu *vcpu, 594 + const struct sys_reg_desc *r) 595 + { 596 + if (kvm_vcpu_has_pmu(vcpu)) 597 + return 0; 598 + 599 + return REG_HIDDEN; 600 + } 601 + 596 602 static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 597 603 { 598 604 u64 pmcr, val; ··· 625 613 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags) 626 614 { 627 615 u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0); 628 - bool enabled = kvm_vcpu_has_pmu(vcpu); 616 + bool enabled = (reg & flags) || vcpu_mode_priv(vcpu); 629 617 630 - enabled &= (reg & flags) || vcpu_mode_priv(vcpu); 631 618 if (!enabled) 632 619 kvm_inject_undefined(vcpu); 633 620 ··· 911 900 static bool access_pmuserenr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, 912 901 const struct sys_reg_desc *r) 913 902 { 914 - if (!kvm_vcpu_has_pmu(vcpu)) { 915 - kvm_inject_undefined(vcpu); 916 - return false; 917 - } 918 - 919 903 if (p->is_write) { 920 904 if (!vcpu_mode_priv(vcpu)) { 921 905 kvm_inject_undefined(vcpu); ··· 927 921 return true; 928 922 } 929 923 930 - #define reg_to_encoding(x) \ 931 - sys_reg((u32)(x)->Op0, (u32)(x)->Op1, \ 932 - (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2) 933 - 934 924 /* 
Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ 935 925 #define DBG_BCR_BVR_WCR_WVR_EL1(n) \ 936 926 { SYS_DESC(SYS_DBGBVRn_EL1(n)), \ ··· 938 936 { SYS_DESC(SYS_DBGWCRn_EL1(n)), \ 939 937 trap_wcr, reset_wcr, 0, 0, get_wcr, set_wcr } 940 938 939 + #define PMU_SYS_REG(r) \ 940 + SYS_DESC(r), .reset = reset_unknown, .visibility = pmu_visibility 941 + 941 942 /* Macro to expand the PMEVCNTRn_EL0 register */ 942 943 #define PMU_PMEVCNTR_EL0(n) \ 943 - { SYS_DESC(SYS_PMEVCNTRn_EL0(n)), \ 944 - access_pmu_evcntr, reset_unknown, (PMEVCNTR0_EL0 + n), } 944 + { PMU_SYS_REG(SYS_PMEVCNTRn_EL0(n)), \ 945 + .access = access_pmu_evcntr, .reg = (PMEVCNTR0_EL0 + n), } 945 946 946 947 /* Macro to expand the PMEVTYPERn_EL0 register */ 947 948 #define PMU_PMEVTYPER_EL0(n) \ 948 - { SYS_DESC(SYS_PMEVTYPERn_EL0(n)), \ 949 - access_pmu_evtyper, reset_unknown, (PMEVTYPER0_EL0 + n), } 949 + { PMU_SYS_REG(SYS_PMEVTYPERn_EL0(n)), \ 950 + .access = access_pmu_evtyper, .reg = (PMEVTYPER0_EL0 + n), } 950 951 951 952 static bool undef_access(struct kvm_vcpu *vcpu, struct sys_reg_params *p, 952 953 const struct sys_reg_desc *r) ··· 1025 1020 static u64 read_id_reg(const struct kvm_vcpu *vcpu, 1026 1021 struct sys_reg_desc const *r, bool raz) 1027 1022 { 1028 - u32 id = sys_reg((u32)r->Op0, (u32)r->Op1, 1029 - (u32)r->CRn, (u32)r->CRm, (u32)r->Op2); 1023 + u32 id = reg_to_encoding(r); 1030 1024 u64 val = raz ? 
0 : read_sanitised_ftr_reg(id); 1031 1025 1032 1026 if (id == SYS_ID_AA64PFR0_EL1) { ··· 1066 1062 static unsigned int id_visibility(const struct kvm_vcpu *vcpu, 1067 1063 const struct sys_reg_desc *r) 1068 1064 { 1069 - u32 id = sys_reg((u32)r->Op0, (u32)r->Op1, 1070 - (u32)r->CRn, (u32)r->CRm, (u32)r->Op2); 1065 + u32 id = reg_to_encoding(r); 1071 1066 1072 1067 switch (id) { 1073 1068 case SYS_ID_AA64ZFR0_EL1: ··· 1489 1486 { SYS_DESC(SYS_FAR_EL1), access_vm_reg, reset_unknown, FAR_EL1 }, 1490 1487 { SYS_DESC(SYS_PAR_EL1), NULL, reset_unknown, PAR_EL1 }, 1491 1488 1492 - { SYS_DESC(SYS_PMINTENSET_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 }, 1493 - { SYS_DESC(SYS_PMINTENCLR_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 }, 1489 + { PMU_SYS_REG(SYS_PMINTENSET_EL1), 1490 + .access = access_pminten, .reg = PMINTENSET_EL1 }, 1491 + { PMU_SYS_REG(SYS_PMINTENCLR_EL1), 1492 + .access = access_pminten, .reg = PMINTENSET_EL1 }, 1494 1493 1495 1494 { SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 }, 1496 1495 { SYS_DESC(SYS_AMAIR_EL1), access_vm_reg, reset_amair_el1, AMAIR_EL1 }, ··· 1531 1526 { SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 }, 1532 1527 { SYS_DESC(SYS_CTR_EL0), access_ctr }, 1533 1528 1534 - { SYS_DESC(SYS_PMCR_EL0), access_pmcr, reset_pmcr, PMCR_EL0 }, 1535 - { SYS_DESC(SYS_PMCNTENSET_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 }, 1536 - { SYS_DESC(SYS_PMCNTENCLR_EL0), access_pmcnten, reset_unknown, PMCNTENSET_EL0 }, 1537 - { SYS_DESC(SYS_PMOVSCLR_EL0), access_pmovs, reset_unknown, PMOVSSET_EL0 }, 1538 - { SYS_DESC(SYS_PMSWINC_EL0), access_pmswinc, reset_unknown, PMSWINC_EL0 }, 1539 - { SYS_DESC(SYS_PMSELR_EL0), access_pmselr, reset_unknown, PMSELR_EL0 }, 1540 - { SYS_DESC(SYS_PMCEID0_EL0), access_pmceid }, 1541 - { SYS_DESC(SYS_PMCEID1_EL0), access_pmceid }, 1542 - { SYS_DESC(SYS_PMCCNTR_EL0), access_pmu_evcntr, reset_unknown, PMCCNTR_EL0 }, 1543 - { SYS_DESC(SYS_PMXEVTYPER_EL0), 
access_pmu_evtyper }, 1544 - { SYS_DESC(SYS_PMXEVCNTR_EL0), access_pmu_evcntr }, 1529 + { PMU_SYS_REG(SYS_PMCR_EL0), .access = access_pmcr, 1530 + .reset = reset_pmcr, .reg = PMCR_EL0 }, 1531 + { PMU_SYS_REG(SYS_PMCNTENSET_EL0), 1532 + .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, 1533 + { PMU_SYS_REG(SYS_PMCNTENCLR_EL0), 1534 + .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, 1535 + { PMU_SYS_REG(SYS_PMOVSCLR_EL0), 1536 + .access = access_pmovs, .reg = PMOVSSET_EL0 }, 1537 + { PMU_SYS_REG(SYS_PMSWINC_EL0), 1538 + .access = access_pmswinc, .reg = PMSWINC_EL0 }, 1539 + { PMU_SYS_REG(SYS_PMSELR_EL0), 1540 + .access = access_pmselr, .reg = PMSELR_EL0 }, 1541 + { PMU_SYS_REG(SYS_PMCEID0_EL0), 1542 + .access = access_pmceid, .reset = NULL }, 1543 + { PMU_SYS_REG(SYS_PMCEID1_EL0), 1544 + .access = access_pmceid, .reset = NULL }, 1545 + { PMU_SYS_REG(SYS_PMCCNTR_EL0), 1546 + .access = access_pmu_evcntr, .reg = PMCCNTR_EL0 }, 1547 + { PMU_SYS_REG(SYS_PMXEVTYPER_EL0), 1548 + .access = access_pmu_evtyper, .reset = NULL }, 1549 + { PMU_SYS_REG(SYS_PMXEVCNTR_EL0), 1550 + .access = access_pmu_evcntr, .reset = NULL }, 1545 1551 /* 1546 1552 * PMUSERENR_EL0 resets as unknown in 64bit mode while it resets as zero 1547 1553 * in 32bit mode. Here we choose to reset it as zero for consistency. 1548 1554 */ 1549 - { SYS_DESC(SYS_PMUSERENR_EL0), access_pmuserenr, reset_val, PMUSERENR_EL0, 0 }, 1550 - { SYS_DESC(SYS_PMOVSSET_EL0), access_pmovs, reset_unknown, PMOVSSET_EL0 }, 1555 + { PMU_SYS_REG(SYS_PMUSERENR_EL0), .access = access_pmuserenr, 1556 + .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 }, 1557 + { PMU_SYS_REG(SYS_PMOVSSET_EL0), 1558 + .access = access_pmovs, .reg = PMOVSSET_EL0 }, 1551 1559 1552 1560 { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, 1553 1561 { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, ··· 1712 1694 * PMCCFILTR_EL0 resets as unknown in 64bit mode while it resets as zero 1713 1695 * in 32bit mode. 
Here we choose to reset it as zero for consistency. 1714 1696 */ 1715 - { SYS_DESC(SYS_PMCCFILTR_EL0), access_pmu_evtyper, reset_val, PMCCFILTR_EL0, 0 }, 1697 + { PMU_SYS_REG(SYS_PMCCFILTR_EL0), .access = access_pmu_evtyper, 1698 + .reset = reset_val, .reg = PMCCFILTR_EL0, .val = 0 }, 1716 1699 1717 1700 { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 }, 1718 1701 { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+4 -3
arch/arm64/mm/fault.c
··· 709 709 struct pt_regs *regs) 710 710 { 711 711 /* 712 - * The architecture specifies that bits 63:60 of FAR_EL1 are UNKNOWN for tag 713 - * check faults. Mask them out now so that userspace doesn't see them. 712 + * The architecture specifies that bits 63:60 of FAR_EL1 are UNKNOWN 713 + * for tag check faults. Set them to corresponding bits in the untagged 714 + * address. 714 715 */ 715 - far &= (1UL << 60) - 1; 716 + far = (__untagged_addr(far) & ~MTE_TAG_MASK) | (far & MTE_TAG_MASK); 716 717 do_bad_area(far, esr, regs); 717 718 return 0; 718 719 }
+1 -1
arch/ia64/include/uapi/asm/cmpxchg.h
··· 54 54 }) 55 55 56 56 #define xchg(ptr, x) \ 57 - ((__typeof__(*(ptr))) __xchg((unsigned long) (x), (ptr), sizeof(*(ptr)))) 57 + ({(__typeof__(*(ptr))) __xchg((unsigned long) (x), (ptr), sizeof(*(ptr)));}) 58 58 59 59 /* 60 60 * Atomic compare and exchange. Compare OLD with MEM, if identical,
+19 -14
arch/ia64/kernel/time.c
··· 171 171 static irqreturn_t 172 172 timer_interrupt (int irq, void *dev_id) 173 173 { 174 - unsigned long cur_itm, new_itm, ticks; 174 + unsigned long new_itm; 175 175 176 176 if (cpu_is_offline(smp_processor_id())) { 177 177 return IRQ_HANDLED; 178 178 } 179 179 180 180 new_itm = local_cpu_data->itm_next; 181 - cur_itm = ia64_get_itc(); 182 181 183 - if (!time_after(cur_itm, new_itm)) { 182 + if (!time_after(ia64_get_itc(), new_itm)) 184 183 printk(KERN_ERR "Oops: timer tick before it's due (itc=%lx,itm=%lx)\n", 185 - cur_itm, new_itm); 186 - ticks = 1; 187 - } else { 188 - ticks = DIV_ROUND_UP(cur_itm - new_itm, 189 - local_cpu_data->itm_delta); 190 - new_itm += ticks * local_cpu_data->itm_delta; 184 + ia64_get_itc(), new_itm); 185 + 186 + while (1) { 187 + new_itm += local_cpu_data->itm_delta; 188 + 189 + legacy_timer_tick(smp_processor_id() == time_keeper_id); 190 + 191 + local_cpu_data->itm_next = new_itm; 192 + 193 + if (time_after(new_itm, ia64_get_itc())) 194 + break; 195 + 196 + /* 197 + * Allow IPIs to interrupt the timer loop. 198 + */ 199 + local_irq_enable(); 200 + local_irq_disable(); 191 201 } 192 - 193 - if (smp_processor_id() != time_keeper_id) 194 - ticks = 0; 195 - 196 - legacy_timer_tick(ticks); 197 202 198 203 do { 199 204 /*
+1
arch/mips/include/asm/highmem.h
··· 51 51 52 52 #define flush_cache_kmaps() BUG_ON(cpu_has_dc_aliases) 53 53 54 + #define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev) set_pte(ptep, ptev) 54 55 #define arch_kmap_local_post_map(vaddr, pteval) local_flush_tlb_one(vaddr) 55 56 #define arch_kmap_local_post_unmap(vaddr) local_flush_tlb_one(vaddr) 56 57
+1 -1
arch/openrisc/include/asm/io.h
··· 31 31 void __iomem *ioremap(phys_addr_t offset, unsigned long size); 32 32 33 33 #define iounmap iounmap 34 - extern void iounmap(void *addr); 34 + extern void iounmap(void __iomem *addr); 35 35 36 36 #include <asm-generic/io.h> 37 37
+1 -1
arch/openrisc/mm/ioremap.c
··· 77 77 } 78 78 EXPORT_SYMBOL(ioremap); 79 79 80 - void iounmap(void *addr) 80 + void iounmap(void __iomem *addr) 81 81 { 82 82 /* If the page is from the fixmap pool then we just clear out 83 83 * the fixmap mapping.
+2 -3
arch/parisc/Kconfig
··· 202 202 depends on PA8X00 || PA7200 203 203 204 204 config MLONGCALLS 205 - bool "Enable the -mlong-calls compiler option for big kernels" 206 - default y if !MODULES || UBSAN || FTRACE 207 - default n 205 + def_bool y if !MODULES || UBSAN || FTRACE 206 + bool "Enable the -mlong-calls compiler option for big kernels" if MODULES && !UBSAN && !FTRACE 208 207 depends on PA8X00 209 208 help 210 209 If you configure the kernel to include many drivers built-in instead
-3
arch/parisc/include/asm/irq.h
··· 47 47 extern int cpu_claim_irq(unsigned int irq, struct irq_chip *, void *); 48 48 extern int cpu_check_affinity(struct irq_data *d, const struct cpumask *dest); 49 49 50 - /* soft power switch support (power.c) */ 51 - extern struct tasklet_struct power_tasklet; 52 - 53 50 #endif /* _ASM_PARISC_IRQ_H */
+10 -3
arch/parisc/kernel/entry.S
··· 997 997 bb,<,n %r20, 31 - PSW_SM_I, intr_restore 998 998 nop 999 999 1000 + /* ssm PSW_SM_I done later in intr_restore */ 1001 + #ifdef CONFIG_MLONGCALLS 1002 + ldil L%intr_restore, %r2 1003 + load32 preempt_schedule_irq, %r1 1004 + bv %r0(%r1) 1005 + ldo R%intr_restore(%r2), %r2 1006 + #else 1007 + ldil L%intr_restore, %r1 1000 1008 BL preempt_schedule_irq, %r2 1001 - nop 1002 - 1003 - b,n intr_restore /* ssm PSW_SM_I done by intr_restore */ 1009 + ldo R%intr_restore(%r1), %r2 1010 + #endif 1004 1011 #endif /* CONFIG_PREEMPTION */ 1005 1012 1006 1013 /*
+13
arch/powerpc/include/asm/exception-64s.h
··· 63 63 nop; \ 64 64 nop; 65 65 66 + #define SCV_ENTRY_FLUSH_SLOT \ 67 + SCV_ENTRY_FLUSH_FIXUP_SECTION; \ 68 + nop; \ 69 + nop; \ 70 + nop; 71 + 66 72 /* 67 73 * r10 must be free to use, r13 must be paca 68 74 */ 69 75 #define INTERRUPT_TO_KERNEL \ 70 76 STF_ENTRY_BARRIER_SLOT; \ 71 77 ENTRY_FLUSH_SLOT 78 + 79 + /* 80 + * r10, ctr must be free to use, r13 must be paca 81 + */ 82 + #define SCV_INTERRUPT_TO_KERNEL \ 83 + STF_ENTRY_BARRIER_SLOT; \ 84 + SCV_ENTRY_FLUSH_SLOT 72 85 73 86 /* 74 87 * Macros for annotating the expected destination of (h)rfid
+10
arch/powerpc/include/asm/feature-fixups.h
··· 240 240 FTR_ENTRY_OFFSET 957b-958b; \ 241 241 .popsection; 242 242 243 + #define SCV_ENTRY_FLUSH_FIXUP_SECTION \ 244 + 957: \ 245 + .pushsection __scv_entry_flush_fixup,"a"; \ 246 + .align 2; \ 247 + 958: \ 248 + FTR_ENTRY_OFFSET 957b-958b; \ 249 + .popsection; 250 + 243 251 #define RFI_FLUSH_FIXUP_SECTION \ 244 252 951: \ 245 253 .pushsection __rfi_flush_fixup,"a"; \ ··· 281 273 282 274 extern long stf_barrier_fallback; 283 275 extern long entry_flush_fallback; 276 + extern long scv_entry_flush_fallback; 284 277 extern long __start___stf_entry_barrier_fixup, __stop___stf_entry_barrier_fixup; 285 278 extern long __start___stf_exit_barrier_fixup, __stop___stf_exit_barrier_fixup; 286 279 extern long __start___uaccess_flush_fixup, __stop___uaccess_flush_fixup; 287 280 extern long __start___entry_flush_fixup, __stop___entry_flush_fixup; 281 + extern long __start___scv_entry_flush_fixup, __stop___scv_entry_flush_fixup; 288 282 extern long __start___rfi_flush_fixup, __stop___rfi_flush_fixup; 289 283 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; 290 284 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup;
+2
arch/powerpc/include/asm/highmem.h
··· 58 58 59 59 #define flush_cache_kmaps() flush_cache_all() 60 60 61 + #define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev) \ 62 + __set_pte_at(mm, vaddr, ptep, ptev, 1) 61 63 #define arch_kmap_local_post_map(vaddr, pteval) \ 62 64 local_flush_tlb_page(NULL, vaddr) 63 65 #define arch_kmap_local_post_unmap(vaddr) \
+1 -1
arch/powerpc/kernel/entry_64.S
··· 75 75 bne .Ltabort_syscall 76 76 END_FTR_SECTION_IFSET(CPU_FTR_TM) 77 77 #endif 78 - INTERRUPT_TO_KERNEL 78 + SCV_INTERRUPT_TO_KERNEL 79 79 mr r10,r1 80 80 ld r1,PACAKSAVE(r13) 81 81 std r10,0(r1)
+19
arch/powerpc/kernel/exceptions-64s.S
··· 2993 2993 ld r11,PACA_EXRFI+EX_R11(r13) 2994 2994 blr 2995 2995 2996 + /* 2997 + * The SCV entry flush happens with interrupts enabled, so it must disable 2998 + * to prevent EXRFI being clobbered by NMIs (e.g., soft_nmi_common). r10 2999 + * (containing LR) does not need to be preserved here because scv entry 3000 + * puts 0 in the pt_regs, CTR can be clobbered for the same reason. 3001 + */ 3002 + TRAMP_REAL_BEGIN(scv_entry_flush_fallback) 3003 + li r10,0 3004 + mtmsrd r10,1 3005 + lbz r10,PACAIRQHAPPENED(r13) 3006 + ori r10,r10,PACA_IRQ_HARD_DIS 3007 + stb r10,PACAIRQHAPPENED(r13) 3008 + std r11,PACA_EXRFI+EX_R11(r13) 3009 + L1D_DISPLACEMENT_FLUSH 3010 + ld r11,PACA_EXRFI+EX_R11(r13) 3011 + li r10,MSR_RI 3012 + mtmsrd r10,1 3013 + blr 3014 + 2996 3015 TRAMP_REAL_BEGIN(rfi_flush_fallback) 2997 3016 SET_SCRATCH0(r13); 2998 3017 GET_PACA(r13);
+7
arch/powerpc/kernel/vmlinux.lds.S
··· 146 146 } 147 147 148 148 . = ALIGN(8); 149 + __scv_entry_flush_fixup : AT(ADDR(__scv_entry_flush_fixup) - LOAD_OFFSET) { 150 + __start___scv_entry_flush_fixup = .; 151 + *(__scv_entry_flush_fixup) 152 + __stop___scv_entry_flush_fixup = .; 153 + } 154 + 155 + . = ALIGN(8); 149 156 __stf_exit_barrier_fixup : AT(ADDR(__stf_exit_barrier_fixup) - LOAD_OFFSET) { 150 157 __start___stf_exit_barrier_fixup = .; 151 158 *(__stf_exit_barrier_fixup)
+21 -3
arch/powerpc/lib/feature-fixups.c
··· 290 290 long *start, *end; 291 291 int i; 292 292 293 - start = PTRRELOC(&__start___entry_flush_fixup); 294 - end = PTRRELOC(&__stop___entry_flush_fixup); 295 - 296 293 instrs[0] = 0x60000000; /* nop */ 297 294 instrs[1] = 0x60000000; /* nop */ 298 295 instrs[2] = 0x60000000; /* nop */ ··· 309 312 if (types & L1D_FLUSH_MTTRIG) 310 313 instrs[i++] = 0x7c12dba6; /* mtspr TRIG2,r0 (SPR #882) */ 311 314 315 + start = PTRRELOC(&__start___entry_flush_fixup); 316 + end = PTRRELOC(&__stop___entry_flush_fixup); 312 317 for (i = 0; start < end; start++, i++) { 313 318 dest = (void *)start + *start; 314 319 ··· 326 327 327 328 patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 328 329 } 330 + 331 + start = PTRRELOC(&__start___scv_entry_flush_fixup); 332 + end = PTRRELOC(&__stop___scv_entry_flush_fixup); 333 + for (; start < end; start++, i++) { 334 + dest = (void *)start + *start; 335 + 336 + pr_devel("patching dest %lx\n", (unsigned long)dest); 337 + 338 + patch_instruction((struct ppc_inst *)dest, ppc_inst(instrs[0])); 339 + 340 + if (types == L1D_FLUSH_FALLBACK) 341 + patch_branch((struct ppc_inst *)(dest + 1), (unsigned long)&scv_entry_flush_fallback, 342 + BRANCH_SET_LINK); 343 + else 344 + patch_instruction((struct ppc_inst *)(dest + 1), ppc_inst(instrs[1])); 345 + 346 + patch_instruction((struct ppc_inst *)(dest + 2), ppc_inst(instrs[2])); 347 + } 348 + 329 349 330 350 printk(KERN_DEBUG "entry-flush: patched %d locations (%s flush)\n", i, 331 351 (types == L1D_FLUSH_NONE) ? "no" :
-1
arch/sh/Kconfig
··· 29 29 select HAVE_ARCH_KGDB 30 30 select HAVE_ARCH_SECCOMP_FILTER 31 31 select HAVE_ARCH_TRACEHOOK 32 - select HAVE_COPY_THREAD_TLS 33 32 select HAVE_DEBUG_BUGVERBOSE 34 33 select HAVE_DEBUG_KMEMLEAK 35 34 select HAVE_DYNAMIC_FTRACE
-1
arch/sh/boards/mach-sh03/rtc.c
··· 11 11 #include <linux/sched.h> 12 12 #include <linux/time.h> 13 13 #include <linux/bcd.h> 14 - #include <linux/rtc.h> 15 14 #include <linux/spinlock.h> 16 15 #include <linux/io.h> 17 16 #include <linux/rtc.h>
+4 -5
arch/sh/configs/landisk_defconfig
··· 27 27 CONFIG_ATALK=m 28 28 CONFIG_BLK_DEV_LOOP=y 29 29 CONFIG_BLK_DEV_RAM=y 30 - CONFIG_IDE=y 31 - CONFIG_BLK_DEV_IDECD=y 32 - CONFIG_BLK_DEV_OFFBOARD=y 33 - CONFIG_BLK_DEV_GENERIC=y 34 - CONFIG_BLK_DEV_AEC62XX=y 30 + CONFIG_ATA=y 31 + CONFIG_ATA_GENERIC=y 32 + CONFIG_PATA_ATP867X=y 35 33 CONFIG_SCSI=y 36 34 CONFIG_BLK_DEV_SD=y 35 + CONFIG_BLK_DEV_SR=y 37 36 CONFIG_SCSI_MULTI_LUN=y 38 37 CONFIG_MD=y 39 38 CONFIG_BLK_DEV_MD=m
-2
arch/sh/configs/microdev_defconfig
··· 20 20 # CONFIG_IPV6 is not set 21 21 # CONFIG_FW_LOADER is not set 22 22 CONFIG_BLK_DEV_RAM=y 23 - CONFIG_IDE=y 24 - CONFIG_BLK_DEV_IDECD=y 25 23 CONFIG_NETDEVICES=y 26 24 CONFIG_NET_ETHERNET=y 27 25 CONFIG_SMC91X=y
+2 -4
arch/sh/configs/sdk7780_defconfig
··· 44 44 CONFIG_PARPORT=y 45 45 CONFIG_BLK_DEV_LOOP=y 46 46 CONFIG_BLK_DEV_RAM=y 47 - CONFIG_IDE=y 48 - CONFIG_BLK_DEV_IDECD=y 49 - CONFIG_BLK_DEV_PLATFORM=y 50 - CONFIG_BLK_DEV_GENERIC=y 51 47 CONFIG_BLK_DEV_SD=y 52 48 CONFIG_BLK_DEV_SR=y 53 49 CONFIG_CHR_DEV_SG=y 54 50 CONFIG_SCSI_SPI_ATTRS=y 55 51 CONFIG_SCSI_FC_ATTRS=y 56 52 CONFIG_ATA=y 53 + CONFIG_ATA_GENERIC=y 54 + CONFIG_PATA_PLATFORM=y 57 55 CONFIG_MD=y 58 56 CONFIG_BLK_DEV_DM=y 59 57 CONFIG_NETDEVICES=y
-3
arch/sh/configs/sdk7786_defconfig
··· 116 116 CONFIG_BLK_DEV_LOOP=y 117 117 CONFIG_BLK_DEV_CRYPTOLOOP=y 118 118 CONFIG_BLK_DEV_RAM=y 119 - CONFIG_IDE=y 120 - CONFIG_BLK_DEV_IDECD=y 121 - CONFIG_BLK_DEV_PLATFORM=y 122 119 CONFIG_BLK_DEV_SD=y 123 120 CONFIG_BLK_DEV_SR=y 124 121 CONFIG_SCSI_MULTI_LUN=y
-1
arch/sh/configs/se7750_defconfig
··· 29 29 CONFIG_MTD_CFI=y 30 30 CONFIG_MTD_CFI_AMDSTD=y 31 31 CONFIG_MTD_ROM=y 32 - CONFIG_IDE=y 33 32 CONFIG_SCSI=y 34 33 CONFIG_NETDEVICES=y 35 34 CONFIG_NET_ETHERNET=y
-3
arch/sh/configs/sh03_defconfig
··· 39 39 CONFIG_BLK_DEV_LOOP=y 40 40 CONFIG_BLK_DEV_NBD=y 41 41 CONFIG_BLK_DEV_RAM=y 42 - CONFIG_IDE=y 43 - CONFIG_BLK_DEV_IDECD=m 44 - CONFIG_BLK_DEV_IDETAPE=m 45 42 CONFIG_SCSI=m 46 43 CONFIG_BLK_DEV_SD=m 47 44 CONFIG_BLK_DEV_SR=m
+1 -2
arch/sh/drivers/dma/Kconfig
··· 63 63 64 64 config G2_DMA 65 65 tristate "G2 Bus DMA support" 66 - depends on SH_DREAMCAST 67 - select SH_DMA_API 66 + depends on SH_DREAMCAST && SH_DMA_API 68 67 help 69 68 This enables support for the DMA controller for the Dreamcast's 70 69 G2 bus. Drivers that want this will generally enable this on
-1
arch/sh/include/asm/gpio.h
··· 16 16 #include <cpu/gpio.h> 17 17 #endif 18 18 19 - #define ARCH_NR_GPIOS 512 20 19 #include <asm-generic/gpio.h> 21 20 22 21 #ifdef CONFIG_GPIOLIB
-1
arch/sh/kernel/cpu/sh3/entry.S
··· 14 14 #include <cpu/mmu_context.h> 15 15 #include <asm/page.h> 16 16 #include <asm/cache.h> 17 - #include <asm/thread_info.h> 18 17 19 18 ! NOTE: 20 19 ! GNU as (as of 2.9.1) changes bf/s into bt/s and bra, when the address
+1 -1
arch/sh/mm/Kconfig
··· 105 105 (the default value) say Y. 106 106 107 107 config NUMA 108 - bool "Non Uniform Memory Access (NUMA) Support" 108 + bool "Non-Uniform Memory Access (NUMA) Support" 109 109 depends on MMU && SYS_SUPPORTS_NUMA 110 110 select ARCH_WANT_NUMA_VARIABLE_LOCALITY 111 111 default n
+2 -13
arch/sh/mm/asids-debugfs.c
··· 26 26 #include <asm/processor.h> 27 27 #include <asm/mmu_context.h> 28 28 29 - static int asids_seq_show(struct seq_file *file, void *iter) 29 + static int asids_debugfs_show(struct seq_file *file, void *iter) 30 30 { 31 31 struct task_struct *p; 32 32 ··· 48 48 return 0; 49 49 } 50 50 51 - static int asids_debugfs_open(struct inode *inode, struct file *file) 52 - { 53 - return single_open(file, asids_seq_show, inode->i_private); 54 - } 55 - 56 - static const struct file_operations asids_debugfs_fops = { 57 - .owner = THIS_MODULE, 58 - .open = asids_debugfs_open, 59 - .read = seq_read, 60 - .llseek = seq_lseek, 61 - .release = single_release, 62 - }; 51 + DEFINE_SHOW_ATTRIBUTE(asids_debugfs); 63 52 64 53 static int __init asids_debugfs_init(void) 65 54 {
+2 -13
arch/sh/mm/cache-debugfs.c
··· 22 22 CACHE_TYPE_UNIFIED, 23 23 }; 24 24 25 - static int cache_seq_show(struct seq_file *file, void *iter) 25 + static int cache_debugfs_show(struct seq_file *file, void *iter) 26 26 { 27 27 unsigned int cache_type = (unsigned int)file->private; 28 28 struct cache_info *cache; ··· 94 94 return 0; 95 95 } 96 96 97 - static int cache_debugfs_open(struct inode *inode, struct file *file) 98 - { 99 - return single_open(file, cache_seq_show, inode->i_private); 100 - } 101 - 102 - static const struct file_operations cache_debugfs_fops = { 103 - .owner = THIS_MODULE, 104 - .open = cache_debugfs_open, 105 - .read = seq_read, 106 - .llseek = seq_lseek, 107 - .release = single_release, 108 - }; 97 + DEFINE_SHOW_ATTRIBUTE(cache_debugfs); 109 98 110 99 static int __init cache_debugfs_init(void) 111 100 {
+2 -13
arch/sh/mm/pmb.c
··· 812 812 return (__raw_readl(PMB_PASCR) & PASCR_SE) == 0; 813 813 } 814 814 815 - static int pmb_seq_show(struct seq_file *file, void *iter) 815 + static int pmb_debugfs_show(struct seq_file *file, void *iter) 816 816 { 817 817 int i; 818 818 ··· 846 846 return 0; 847 847 } 848 848 849 - static int pmb_debugfs_open(struct inode *inode, struct file *file) 850 - { 851 - return single_open(file, pmb_seq_show, NULL); 852 - } 853 - 854 - static const struct file_operations pmb_debugfs_fops = { 855 - .owner = THIS_MODULE, 856 - .open = pmb_debugfs_open, 857 - .read = seq_read, 858 - .llseek = seq_lseek, 859 - .release = single_release, 860 - }; 849 + DEFINE_SHOW_ATTRIBUTE(pmb_debugfs); 861 850 862 851 static int __init pmb_debugfs_init(void) 863 852 {
+5 -4
arch/sparc/include/asm/highmem.h
··· 50 50 51 51 #define flush_cache_kmaps() flush_cache_all() 52 52 53 - /* FIXME: Use __flush_tlb_one(vaddr) instead of flush_cache_all() -- Anton */ 54 - #define arch_kmap_local_post_map(vaddr, pteval) flush_cache_all() 55 - #define arch_kmap_local_post_unmap(vaddr) flush_cache_all() 56 - 53 + /* FIXME: Use __flush_*_one(vaddr) instead of flush_*_all() -- Anton */ 54 + #define arch_kmap_local_pre_map(vaddr, pteval) flush_cache_all() 55 + #define arch_kmap_local_pre_unmap(vaddr) flush_cache_all() 56 + #define arch_kmap_local_post_map(vaddr, pteval) flush_tlb_all() 57 + #define arch_kmap_local_post_unmap(vaddr) flush_tlb_all() 57 58 58 59 #endif /* __KERNEL__ */ 59 60
+7 -3
arch/x86/entry/common.c
··· 73 73 unsigned int nr) 74 74 { 75 75 if (likely(nr < IA32_NR_syscalls)) { 76 - instrumentation_begin(); 77 76 nr = array_index_nospec(nr, IA32_NR_syscalls); 78 77 regs->ax = ia32_sys_call_table[nr](regs); 79 - instrumentation_end(); 80 78 } 81 79 } 82 80 ··· 89 91 * or may not be necessary, but it matches the old asm behavior. 90 92 */ 91 93 nr = (unsigned int)syscall_enter_from_user_mode(regs, nr); 94 + instrumentation_begin(); 92 95 93 96 do_syscall_32_irqs_on(regs, nr); 97 + 98 + instrumentation_end(); 94 99 syscall_exit_to_user_mode(regs); 95 100 } 96 101 ··· 122 121 res = get_user(*(u32 *)&regs->bp, 123 122 (u32 __user __force *)(unsigned long)(u32)regs->sp); 124 123 } 125 - instrumentation_end(); 126 124 127 125 if (res) { 128 126 /* User code screwed up. */ 129 127 regs->ax = -EFAULT; 128 + 129 + instrumentation_end(); 130 130 syscall_exit_to_user_mode(regs); 131 131 return false; 132 132 } ··· 137 135 138 136 /* Now this is just like a normal syscall. */ 139 137 do_syscall_32_irqs_on(regs, nr); 138 + 139 + instrumentation_end(); 140 140 syscall_exit_to_user_mode(regs); 141 141 return true; 142 142 }
+13 -2
arch/x86/include/asm/fpu/api.h
··· 16 16 * Use kernel_fpu_begin/end() if you intend to use FPU in kernel context. It 17 17 * disables preemption so be careful if you intend to use it for long periods 18 18 * of time. 19 - * If you intend to use the FPU in softirq you need to check first with 19 + * If you intend to use the FPU in irq/softirq you need to check first with 20 20 * irq_fpu_usable() if it is possible. 21 21 */ 22 - extern void kernel_fpu_begin(void); 22 + 23 + /* Kernel FPU states to initialize in kernel_fpu_begin_mask() */ 24 + #define KFPU_387 _BITUL(0) /* 387 state will be initialized */ 25 + #define KFPU_MXCSR _BITUL(1) /* MXCSR will be initialized */ 26 + 27 + extern void kernel_fpu_begin_mask(unsigned int kfpu_mask); 23 28 extern void kernel_fpu_end(void); 24 29 extern bool irq_fpu_usable(void); 25 30 extern void fpregs_mark_activate(void); 31 + 32 + /* Code that is unaware of kernel_fpu_begin_mask() can use this */ 33 + static inline void kernel_fpu_begin(void) 34 + { 35 + kernel_fpu_begin_mask(KFPU_387 | KFPU_MXCSR); 36 + } 26 37 27 38 /* 28 39 * Use fpregs_lock() while editing CPU's FPU registers or fpu->state.
+1
arch/x86/include/asm/idtentry.h
··· 613 613 614 614 #ifdef CONFIG_XEN_PV 615 615 DECLARE_IDTENTRY_XENCB(X86_TRAP_OTHER, exc_xen_hypervisor_callback); 616 + DECLARE_IDTENTRY_RAW(X86_TRAP_OTHER, exc_xen_unknown_trap); 616 617 #endif 617 618 618 619 /* Device interrupts common/spurious */
+1
arch/x86/include/asm/intel-family.h
··· 97 97 98 98 #define INTEL_FAM6_LAKEFIELD 0x8A 99 99 #define INTEL_FAM6_ALDERLAKE 0x97 100 + #define INTEL_FAM6_ALDERLAKE_L 0x9A 100 101 101 102 /* "Small Core" Processors (Atom) */ 102 103
+2 -2
arch/x86/include/asm/msr.h
··· 86 86 * think of extending them - you will be slapped with a stinking trout or a frozen 87 87 * shark will reach you, wherever you are! You've been warned. 88 88 */ 89 - static inline unsigned long long notrace __rdmsr(unsigned int msr) 89 + static __always_inline unsigned long long __rdmsr(unsigned int msr) 90 90 { 91 91 DECLARE_ARGS(val, low, high); 92 92 ··· 98 98 return EAX_EDX_VAL(val, low, high); 99 99 } 100 100 101 - static inline void notrace __wrmsr(unsigned int msr, u32 low, u32 high) 101 + static __always_inline void __wrmsr(unsigned int msr, u32 low, u32 high) 102 102 { 103 103 asm volatile("1: wrmsr\n" 104 104 "2:\n"
+2 -2
arch/x86/include/asm/topology.h
··· 110 110 #define topology_die_id(cpu) (cpu_data(cpu).cpu_die_id) 111 111 #define topology_core_id(cpu) (cpu_data(cpu).cpu_core_id) 112 112 113 + extern unsigned int __max_die_per_package; 114 + 113 115 #ifdef CONFIG_SMP 114 116 #define topology_die_cpumask(cpu) (per_cpu(cpu_die_map, cpu)) 115 117 #define topology_core_cpumask(cpu) (per_cpu(cpu_core_map, cpu)) ··· 119 117 120 118 extern unsigned int __max_logical_packages; 121 119 #define topology_max_packages() (__max_logical_packages) 122 - 123 - extern unsigned int __max_die_per_package; 124 120 125 121 static inline int topology_max_die_per_package(void) 126 122 {
+2 -2
arch/x86/kernel/cpu/amd.c
··· 542 542 u32 ecx; 543 543 544 544 ecx = cpuid_ecx(0x8000001e); 545 - nodes_per_socket = ((ecx >> 8) & 7) + 1; 545 + __max_die_per_package = nodes_per_socket = ((ecx >> 8) & 7) + 1; 546 546 } else if (boot_cpu_has(X86_FEATURE_NODEID_MSR)) { 547 547 u64 value; 548 548 549 549 rdmsrl(MSR_FAM10H_NODE_ID, value); 550 - nodes_per_socket = ((value >> 3) & 7) + 1; 550 + __max_die_per_package = nodes_per_socket = ((value >> 3) & 7) + 1; 551 551 } 552 552 553 553 if (!boot_cpu_has(X86_FEATURE_AMD_SSBD) &&
+4 -3
arch/x86/kernel/cpu/mce/core.c
··· 1992 1992 * that out because it's an indirect call. Annotate it. 1993 1993 */ 1994 1994 instrumentation_begin(); 1995 - trace_hardirqs_off_finish(); 1995 + 1996 1996 machine_check_vector(regs); 1997 - if (regs->flags & X86_EFLAGS_IF) 1998 - trace_hardirqs_on_prepare(); 1997 + 1999 1998 instrumentation_end(); 2000 1999 irqentry_nmi_exit(regs, irq_state); 2001 2000 } ··· 2003 2004 { 2004 2005 irqentry_enter_from_user_mode(regs); 2005 2006 instrumentation_begin(); 2007 + 2006 2008 machine_check_vector(regs); 2009 + 2007 2010 instrumentation_end(); 2008 2011 irqentry_exit_to_user_mode(regs); 2009 2012 }
+1 -1
arch/x86/kernel/cpu/topology.c
··· 25 25 #define BITS_SHIFT_NEXT_LEVEL(eax) ((eax) & 0x1f) 26 26 #define LEVEL_MAX_SIBLINGS(ebx) ((ebx) & 0xffff) 27 27 28 - #ifdef CONFIG_SMP 29 28 unsigned int __max_die_per_package __read_mostly = 1; 30 29 EXPORT_SYMBOL(__max_die_per_package); 31 30 31 + #ifdef CONFIG_SMP 32 32 /* 33 33 * Check if given CPUID extended toplogy "leaf" is implemented 34 34 */
+5 -4
arch/x86/kernel/fpu/core.c
··· 121 121 } 122 122 EXPORT_SYMBOL(copy_fpregs_to_fpstate); 123 123 124 - void kernel_fpu_begin(void) 124 + void kernel_fpu_begin_mask(unsigned int kfpu_mask) 125 125 { 126 126 preempt_disable(); 127 127 ··· 141 141 } 142 142 __cpu_invalidate_fpregs_state(); 143 143 144 - if (boot_cpu_has(X86_FEATURE_XMM)) 144 + /* Put sane initial values into the control registers. */ 145 + if (likely(kfpu_mask & KFPU_MXCSR) && boot_cpu_has(X86_FEATURE_XMM)) 145 146 ldmxcsr(MXCSR_DEFAULT); 146 147 147 - if (boot_cpu_has(X86_FEATURE_FPU)) 148 + if (unlikely(kfpu_mask & KFPU_387) && boot_cpu_has(X86_FEATURE_FPU)) 148 149 asm volatile ("fninit"); 149 150 } 150 - EXPORT_SYMBOL_GPL(kernel_fpu_begin); 151 + EXPORT_SYMBOL_GPL(kernel_fpu_begin_mask); 151 152 152 153 void kernel_fpu_end(void) 153 154 {
+9 -11
arch/x86/kernel/setup.c
··· 661 661 static void __init trim_bios_range(void) 662 662 { 663 663 /* 664 - * A special case is the first 4Kb of memory; 665 - * This is a BIOS owned area, not kernel ram, but generally 666 - * not listed as such in the E820 table. 667 - * 668 - * This typically reserves additional memory (64KiB by default) 669 - * since some BIOSes are known to corrupt low memory. See the 670 - * Kconfig help text for X86_RESERVE_LOW. 671 - */ 672 - e820__range_update(0, PAGE_SIZE, E820_TYPE_RAM, E820_TYPE_RESERVED); 673 - 674 - /* 675 664 * special case: Some BIOSes report the PC BIOS 676 665 * area (640Kb -> 1Mb) as RAM even though it is not. 677 666 * take them out. ··· 717 728 718 729 static void __init trim_low_memory_range(void) 719 730 { 731 + /* 732 + * A special case is the first 4Kb of memory; 733 + * This is a BIOS owned area, not kernel ram, but generally 734 + * not listed as such in the E820 table. 735 + * 736 + * This typically reserves additional memory (64KiB by default) 737 + * since some BIOSes are known to corrupt low memory. See the 738 + * Kconfig help text for X86_RESERVE_LOW. 739 + */ 720 740 memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE)); 721 741 } 722 742
+13 -1
arch/x86/kernel/sev-es.c
··· 225 225 return __rdmsr(MSR_AMD64_SEV_ES_GHCB); 226 226 } 227 227 228 - static inline void sev_es_wr_ghcb_msr(u64 val) 228 + static __always_inline void sev_es_wr_ghcb_msr(u64 val) 229 229 { 230 230 u32 low, high; 231 231 ··· 286 286 u16 d2; 287 287 u8 d1; 288 288 289 + /* If instruction ran in kernel mode and the I/O buffer is in kernel space */ 290 + if (!user_mode(ctxt->regs) && !access_ok(target, size)) { 291 + memcpy(dst, buf, size); 292 + return ES_OK; 293 + } 294 + 289 295 switch (size) { 290 296 case 1: 291 297 memcpy(&d1, buf, 1); ··· 340 334 u32 d4; 341 335 u16 d2; 342 336 u8 d1; 337 + 338 + /* If instruction ran in kernel mode and the I/O buffer is in kernel space */ 339 + if (!user_mode(ctxt->regs) && !access_ok(s, size)) { 340 + memcpy(buf, src, size); 341 + return ES_OK; 342 + } 343 343 344 344 switch (size) { 345 345 case 1:
+19
arch/x86/kernel/smpboot.c
··· 56 56 #include <linux/numa.h> 57 57 #include <linux/pgtable.h> 58 58 #include <linux/overflow.h> 59 + #include <linux/syscore_ops.h> 59 60 60 61 #include <asm/acpi.h> 61 62 #include <asm/desc.h> ··· 2084 2083 this_cpu_write(arch_prev_mperf, mperf); 2085 2084 } 2086 2085 2086 + #ifdef CONFIG_PM_SLEEP 2087 + static struct syscore_ops freq_invariance_syscore_ops = { 2088 + .resume = init_counter_refs, 2089 + }; 2090 + 2091 + static void register_freq_invariance_syscore_ops(void) 2092 + { 2093 + /* Bail out if registered already. */ 2094 + if (freq_invariance_syscore_ops.node.prev) 2095 + return; 2096 + 2097 + register_syscore_ops(&freq_invariance_syscore_ops); 2098 + } 2099 + #else 2100 + static inline void register_freq_invariance_syscore_ops(void) {} 2101 + #endif 2102 + 2087 2103 static void init_freq_invariance(bool secondary, bool cppc_ready) 2088 2104 { 2089 2105 bool ret = false; ··· 2127 2109 if (ret) { 2128 2110 init_counter_refs(); 2129 2111 static_branch_enable(&arch_scale_freq_key); 2112 + register_freq_invariance_syscore_ops(); 2130 2113 pr_info("Estimated ratio of average max frequency by base frequency (times 1024): %llu\n", arch_max_freq_ratio); 2131 2114 } else { 2132 2115 pr_debug("Couldn't determine max cpu frequency, necessary for scale-invariant accounting.\n");
+28 -29
arch/x86/kvm/kvm_cache_regs.h
··· 9 9 (X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR \ 10 10 | X86_CR4_OSXMMEXCPT | X86_CR4_PGE | X86_CR4_TSD | X86_CR4_FSGSBASE) 11 11 12 + #define BUILD_KVM_GPR_ACCESSORS(lname, uname) \ 13 + static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\ 14 + { \ 15 + return vcpu->arch.regs[VCPU_REGS_##uname]; \ 16 + } \ 17 + static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \ 18 + unsigned long val) \ 19 + { \ 20 + vcpu->arch.regs[VCPU_REGS_##uname] = val; \ 21 + } 22 + BUILD_KVM_GPR_ACCESSORS(rax, RAX) 23 + BUILD_KVM_GPR_ACCESSORS(rbx, RBX) 24 + BUILD_KVM_GPR_ACCESSORS(rcx, RCX) 25 + BUILD_KVM_GPR_ACCESSORS(rdx, RDX) 26 + BUILD_KVM_GPR_ACCESSORS(rbp, RBP) 27 + BUILD_KVM_GPR_ACCESSORS(rsi, RSI) 28 + BUILD_KVM_GPR_ACCESSORS(rdi, RDI) 29 + #ifdef CONFIG_X86_64 30 + BUILD_KVM_GPR_ACCESSORS(r8, R8) 31 + BUILD_KVM_GPR_ACCESSORS(r9, R9) 32 + BUILD_KVM_GPR_ACCESSORS(r10, R10) 33 + BUILD_KVM_GPR_ACCESSORS(r11, R11) 34 + BUILD_KVM_GPR_ACCESSORS(r12, R12) 35 + BUILD_KVM_GPR_ACCESSORS(r13, R13) 36 + BUILD_KVM_GPR_ACCESSORS(r14, R14) 37 + BUILD_KVM_GPR_ACCESSORS(r15, R15) 38 + #endif 39 + 12 40 static inline bool kvm_register_is_available(struct kvm_vcpu *vcpu, 13 41 enum kvm_reg reg) 14 42 { ··· 61 33 __set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail); 62 34 __set_bit(reg, (unsigned long *)&vcpu->arch.regs_dirty); 63 35 } 64 - 65 - #define BUILD_KVM_GPR_ACCESSORS(lname, uname) \ 66 - static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\ 67 - { \ 68 - return vcpu->arch.regs[VCPU_REGS_##uname]; \ 69 - } \ 70 - static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \ 71 - unsigned long val) \ 72 - { \ 73 - vcpu->arch.regs[VCPU_REGS_##uname] = val; \ 74 - kvm_register_mark_dirty(vcpu, VCPU_REGS_##uname); \ 75 - } 76 - BUILD_KVM_GPR_ACCESSORS(rax, RAX) 77 - BUILD_KVM_GPR_ACCESSORS(rbx, RBX) 78 - BUILD_KVM_GPR_ACCESSORS(rcx, RCX) 79 - BUILD_KVM_GPR_ACCESSORS(rdx, RDX)
80 - BUILD_KVM_GPR_ACCESSORS(rbp, RBP) 81 - BUILD_KVM_GPR_ACCESSORS(rsi, RSI) 82 - BUILD_KVM_GPR_ACCESSORS(rdi, RDI) 83 - #ifdef CONFIG_X86_64 84 - BUILD_KVM_GPR_ACCESSORS(r8, R8) 85 - BUILD_KVM_GPR_ACCESSORS(r9, R9) 86 - BUILD_KVM_GPR_ACCESSORS(r10, R10) 87 - BUILD_KVM_GPR_ACCESSORS(r11, R11) 88 - BUILD_KVM_GPR_ACCESSORS(r12, R12) 89 - BUILD_KVM_GPR_ACCESSORS(r13, R13) 90 - BUILD_KVM_GPR_ACCESSORS(r14, R14) 91 - BUILD_KVM_GPR_ACCESSORS(r15, R15) 92 - #endif 93 36 94 37 static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, int reg) 95 38 {
+8 -1
arch/x86/kvm/mmu.h
··· 44 44 #define PT32_ROOT_LEVEL 2 45 45 #define PT32E_ROOT_LEVEL 3 46 46 47 - static inline u64 rsvd_bits(int s, int e) 47 + static __always_inline u64 rsvd_bits(int s, int e) 48 48 { 49 + BUILD_BUG_ON(__builtin_constant_p(e) && __builtin_constant_p(s) && e < s); 50 + 51 + if (__builtin_constant_p(e)) 52 + BUILD_BUG_ON(e > 63); 53 + else 54 + e &= 63; 55 + 49 56 if (e < s) 50 57 return 0; 51 58
+3
arch/x86/kvm/svm/nested.c
··· 200 200 { 201 201 struct vcpu_svm *svm = to_svm(vcpu); 202 202 203 + if (WARN_ON(!is_guest_mode(vcpu))) 204 + return true; 205 + 203 206 if (!nested_svm_vmrun_msrpm(svm)) { 204 207 vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; 205 208 vcpu->run->internal.suberror =
+6 -9
arch/x86/kvm/svm/sev.c
··· 1415 1415 * to be returned: 1416 1416 * GPRs RAX, RBX, RCX, RDX 1417 1417 * 1418 - * Copy their values to the GHCB if they are dirty. 1418 + * Copy their values, even if they may not have been written during the 1419 + * VM-Exit. It's the guest's responsibility to not consume random data. 1419 1420 */ 1420 - if (kvm_register_is_dirty(vcpu, VCPU_REGS_RAX)) 1421 - ghcb_set_rax(ghcb, vcpu->arch.regs[VCPU_REGS_RAX]); 1422 - if (kvm_register_is_dirty(vcpu, VCPU_REGS_RBX)) 1423 - ghcb_set_rbx(ghcb, vcpu->arch.regs[VCPU_REGS_RBX]); 1424 - if (kvm_register_is_dirty(vcpu, VCPU_REGS_RCX)) 1425 - ghcb_set_rcx(ghcb, vcpu->arch.regs[VCPU_REGS_RCX]); 1426 - if (kvm_register_is_dirty(vcpu, VCPU_REGS_RDX)) 1427 - ghcb_set_rdx(ghcb, vcpu->arch.regs[VCPU_REGS_RDX]); 1421 + ghcb_set_rax(ghcb, vcpu->arch.regs[VCPU_REGS_RAX]); 1422 + ghcb_set_rbx(ghcb, vcpu->arch.regs[VCPU_REGS_RBX]); 1423 + ghcb_set_rcx(ghcb, vcpu->arch.regs[VCPU_REGS_RCX]); 1424 + ghcb_set_rdx(ghcb, vcpu->arch.regs[VCPU_REGS_RDX]); 1428 1425 } 1429 1426 1430 1427 static void sev_es_sync_from_ghcb(struct vcpu_svm *svm)
+2
arch/x86/kvm/svm/svm.c
··· 3739 3739 { 3740 3740 struct vcpu_svm *svm = to_svm(vcpu); 3741 3741 3742 + trace_kvm_entry(vcpu); 3743 + 3742 3744 svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX]; 3743 3745 svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP]; 3744 3746 svm->vmcb->save.rip = vcpu->arch.regs[VCPU_REGS_RIP];
+33 -11
arch/x86/kvm/vmx/nested.c
··· 3124 3124 return 0; 3125 3125 } 3126 3126 3127 - static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) 3127 + static bool nested_get_evmcs_page(struct kvm_vcpu *vcpu) 3128 3128 { 3129 - struct vmcs12 *vmcs12 = get_vmcs12(vcpu); 3130 3129 struct vcpu_vmx *vmx = to_vmx(vcpu); 3131 - struct kvm_host_map *map; 3132 - struct page *page; 3133 - u64 hpa; 3134 3130 3135 3131 /* 3136 3132 * hv_evmcs may end up being not mapped after migration (when ··· 3148 3152 return false; 3149 3153 } 3150 3154 } 3155 + 3156 + return true; 3157 + } 3158 + 3159 + static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu) 3160 + { 3161 + struct vmcs12 *vmcs12 = get_vmcs12(vcpu); 3162 + struct vcpu_vmx *vmx = to_vmx(vcpu); 3163 + struct kvm_host_map *map; 3164 + struct page *page; 3165 + u64 hpa; 3151 3166 3152 3167 if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)) { 3153 3168 /* ··· 3228 3221 exec_controls_setbit(vmx, CPU_BASED_USE_MSR_BITMAPS); 3229 3222 else 3230 3223 exec_controls_clearbit(vmx, CPU_BASED_USE_MSR_BITMAPS); 3224 + 3225 + return true; 3226 + } 3227 + 3228 + static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu) 3229 + { 3230 + if (!nested_get_evmcs_page(vcpu)) 3231 + return false; 3232 + 3233 + if (is_guest_mode(vcpu) && !nested_get_vmcs12_pages(vcpu)) 3234 + return false; 3235 + 3231 3236 return true; 3232 3237 } 3233 3238 ··· 6096 6077 if (is_guest_mode(vcpu)) { 6097 6078 sync_vmcs02_to_vmcs12(vcpu, vmcs12); 6098 6079 sync_vmcs02_to_vmcs12_rare(vcpu, vmcs12); 6099 - } else if (!vmx->nested.need_vmcs12_to_shadow_sync) { 6100 - if (vmx->nested.hv_evmcs) 6101 - copy_enlightened_to_vmcs12(vmx); 6102 - else if (enable_shadow_vmcs) 6103 - copy_shadow_to_vmcs12(vmx); 6080 + } else { 6081 + copy_vmcs02_to_vmcs12_rare(vcpu, get_vmcs12(vcpu)); 6082 + if (!vmx->nested.need_vmcs12_to_shadow_sync) { 6083 + if (vmx->nested.hv_evmcs) 6084 + copy_enlightened_to_vmcs12(vmx); 6085 + else if (enable_shadow_vmcs) 6086 + copy_shadow_to_vmcs12(vmx);
6087 + } 6104 6088 } 6105 6089 6106 6090 BUILD_BUG_ON(sizeof(user_vmx_nested_state->vmcs12) < VMCS12_SIZE); ··· 6624 6602 .hv_timer_pending = nested_vmx_preemption_timer_pending, 6625 6603 .get_state = vmx_get_nested_state, 6626 6604 .set_state = vmx_set_nested_state, 6627 - .get_nested_state_pages = nested_get_vmcs12_pages, 6605 + .get_nested_state_pages = vmx_get_nested_state_pages, 6628 6606 .write_log_dirty = nested_vmx_write_pml_buffer, 6629 6607 .enable_evmcs = nested_enable_evmcs, 6630 6608 .get_evmcs_version = nested_get_evmcs_version,
+5 -1
arch/x86/kvm/vmx/pmu_intel.c
··· 29 29 [4] = { 0x2e, 0x41, PERF_COUNT_HW_CACHE_MISSES }, 30 30 [5] = { 0xc4, 0x00, PERF_COUNT_HW_BRANCH_INSTRUCTIONS }, 31 31 [6] = { 0xc5, 0x00, PERF_COUNT_HW_BRANCH_MISSES }, 32 - [7] = { 0x00, 0x30, PERF_COUNT_HW_REF_CPU_CYCLES }, 32 + [7] = { 0x00, 0x03, PERF_COUNT_HW_REF_CPU_CYCLES }, 33 33 }; 34 34 35 35 /* mapping between fixed pmc index and intel_arch_events array */ ··· 345 345 346 346 pmu->nr_arch_gp_counters = min_t(int, eax.split.num_counters, 347 347 x86_pmu.num_counters_gp); 348 + eax.split.bit_width = min_t(int, eax.split.bit_width, x86_pmu.bit_width_gp); 348 349 pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << eax.split.bit_width) - 1; 350 + eax.split.mask_length = min_t(int, eax.split.mask_length, x86_pmu.events_mask_len); 349 351 pmu->available_event_types = ~entry->ebx & 350 352 ((1ull << eax.split.mask_length) - 1); 351 353 ··· 357 355 pmu->nr_arch_fixed_counters = 358 356 min_t(int, edx.split.num_counters_fixed, 359 357 x86_pmu.num_counters_fixed); 358 + edx.split.bit_width_fixed = min_t(int, 359 + edx.split.bit_width_fixed, x86_pmu.bit_width_fixed); 360 360 pmu->counter_bitmask[KVM_PMC_FIXED] = 361 361 ((u64)1 << edx.split.bit_width_fixed) - 1; 362 362 }
+2
arch/x86/kvm/vmx/vmx.c
··· 6653 6653 if (vmx->emulation_required) 6654 6654 return EXIT_FASTPATH_NONE; 6655 6655 6656 + trace_kvm_entry(vcpu); 6657 + 6656 6658 if (vmx->ple_window_dirty) { 6657 6659 vmx->ple_window_dirty = false; 6658 6660 vmcs_write32(PLE_WINDOW, vmx->ple_window);
+6 -5
arch/x86/kvm/x86.c
··· 105 105 106 106 static void update_cr8_intercept(struct kvm_vcpu *vcpu); 107 107 static void process_nmi(struct kvm_vcpu *vcpu); 108 + static void process_smi(struct kvm_vcpu *vcpu); 108 109 static void enter_smm(struct kvm_vcpu *vcpu); 109 110 static void __kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags); 110 111 static void store_regs(struct kvm_vcpu *vcpu); ··· 4230 4229 struct kvm_vcpu_events *events) 4231 4230 { 4232 4231 process_nmi(vcpu); 4232 + 4233 + if (kvm_check_request(KVM_REQ_SMI, vcpu)) 4234 + process_smi(vcpu); 4233 4235 4234 4236 /* 4235 4237 * In guest mode, payload delivery should be deferred, ··· 8806 8802 8807 8803 if (kvm_request_pending(vcpu)) { 8808 8804 if (kvm_check_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu)) { 8809 - if (WARN_ON_ONCE(!is_guest_mode(vcpu))) 8810 - ; 8811 - else if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) { 8805 + if (unlikely(!kvm_x86_ops.nested_ops->get_nested_state_pages(vcpu))) { 8812 8806 r = 0; 8813 8807 goto out; 8814 8808 } ··· 8989 8987 kvm_make_request(KVM_REQ_EVENT, vcpu); 8990 8988 kvm_x86_ops.request_immediate_exit(vcpu); 8991 8989 } 8992 - 8993 - trace_kvm_entry(vcpu); 8994 8990 8995 8991 fpregs_assert_state_consistent(); 8996 8992 if (test_thread_flag(TIF_NEED_FPU_LOAD)) ··· 11556 11556 } 11557 11557 EXPORT_SYMBOL_GPL(kvm_sev_es_string_io); 11558 11558 11559 + EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry); 11559 11560 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit); 11560 11561 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio); 11561 11562 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
+15 -5
arch/x86/lib/mmx_32.c
··· 26 26 #include <asm/fpu/api.h> 27 27 #include <asm/asm.h> 28 28 29 + /* 30 + * Use KFPU_387. MMX instructions are not affected by MXCSR, 31 + * but both AMD and Intel documentation states that even integer MMX 32 + * operations will result in #MF if an exception is pending in FCW. 33 + * 34 + * EMMS is not needed afterwards because, after calling kernel_fpu_end(), 35 + * any subsequent user of the 387 stack will reinitialize it using 36 + * KFPU_387. 37 + */ 38 + 29 39 void *_mmx_memcpy(void *to, const void *from, size_t len) 30 40 { 31 41 void *p; ··· 47 37 p = to; 48 38 i = len >> 6; /* len/64 */ 49 39 50 - kernel_fpu_begin(); 40 + kernel_fpu_begin_mask(KFPU_387); 51 41 52 42 __asm__ __volatile__ ( 53 43 "1: prefetch (%0)\n" /* This set is 28 bytes */ ··· 137 127 { 138 128 int i; 139 129 140 - kernel_fpu_begin(); 130 + kernel_fpu_begin_mask(KFPU_387); 141 131 142 132 __asm__ __volatile__ ( 143 133 " pxor %%mm0, %%mm0\n" : : ··· 170 160 { 171 161 int i; 172 162 173 - kernel_fpu_begin(); 163 + kernel_fpu_begin_mask(KFPU_387); 174 164 175 165 /* 176 166 * maybe the prefetch stuff can go before the expensive fnsave... ··· 257 247 { 258 248 int i; 259 249 260 - kernel_fpu_begin(); 250 + kernel_fpu_begin_mask(KFPU_387); 261 251 262 252 __asm__ __volatile__ ( 263 253 " pxor %%mm0, %%mm0\n" : : ··· 292 282 { 293 283 int i; 294 284 295 - kernel_fpu_begin(); 285 + kernel_fpu_begin_mask(KFPU_387); 296 286 297 287 __asm__ __volatile__ ( 298 288 "1: prefetch (%0)\n"
+14 -1
arch/x86/xen/enlighten_pv.c
··· 583 583 exc_debug(regs); 584 584 } 585 585 586 + DEFINE_IDTENTRY_RAW(exc_xen_unknown_trap) 587 + { 588 + /* This should never happen and there is no way to handle it. */ 589 + pr_err("Unknown trap in Xen PV mode."); 590 + BUG(); 591 + } 592 + 586 593 struct trap_array_entry { 587 594 void (*orig)(void); 588 595 void (*xen)(void); ··· 638 631 { 639 632 unsigned int nr; 640 633 bool ist_okay = false; 634 + bool found = false; 641 635 642 636 /* 643 637 * Replace trap handler addresses by Xen specific ones. ··· 653 645 if (*addr == entry->orig) { 654 646 *addr = entry->xen; 655 647 ist_okay = entry->ist_okay; 648 + found = true; 656 649 break; 657 650 } 658 651 } ··· 664 655 nr = (*addr - (void *)early_idt_handler_array[0]) / 665 656 EARLY_IDT_HANDLER_SIZE; 666 657 *addr = (void *)xen_early_idt_handler_array[nr]; 658 + found = true; 667 659 } 668 660 669 - if (WARN_ON(ist != 0 && !ist_okay)) 661 + if (!found) 662 + *addr = (void *)xen_asm_exc_xen_unknown_trap; 663 + 664 + if (WARN_ON(found && ist != 0 && !ist_okay)) 670 665 return false; 671 666 672 667 return true;
+1
arch/x86/xen/xen-asm.S
··· 178 178 #ifdef CONFIG_IA32_EMULATION 179 179 xen_pv_trap entry_INT80_compat 180 180 #endif 181 + xen_pv_trap asm_exc_xen_unknown_trap 181 182 xen_pv_trap asm_exc_xen_hypervisor_callback 182 183 183 184 __INIT
+2
drivers/acpi/scan.c
··· 586 586 if (!device) 587 587 return -EINVAL; 588 588 589 + *device = NULL; 590 + 589 591 status = acpi_get_data_full(handle, acpi_scan_drop_device, 590 592 (void **)device, callback); 591 593 if (ACPI_FAILURE(status) || !*device) {
+31 -13
drivers/base/core.c
··· 208 208 #endif 209 209 #endif /* !CONFIG_SRCU */ 210 210 211 + static bool device_is_ancestor(struct device *dev, struct device *target) 212 + { 213 + while (target->parent) { 214 + target = target->parent; 215 + if (dev == target) 216 + return true; 217 + } 218 + return false; 219 + } 220 + 211 221 /** 212 222 * device_is_dependent - Check if one device depends on another one 213 223 * @dev: Device to check dependencies for. ··· 231 221 struct device_link *link; 232 222 int ret; 233 223 234 - if (dev == target) 224 + /* 225 + * The "ancestors" check is needed to catch the case when the target 226 + * device has not been completely initialized yet and it is still 227 + * missing from the list of children of its parent device. 228 + */ 229 + if (dev == target || device_is_ancestor(dev, target)) 235 230 return 1; 236 231 237 232 ret = device_for_each_child(dev, target, device_is_dependent); ··· 471 456 struct device *con = link->consumer; 472 457 char *buf; 473 458 474 - len = max(strlen(dev_name(sup)), strlen(dev_name(con))); 459 + len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)), 460 + strlen(dev_bus_name(con)) + strlen(dev_name(con))); 461 + len += strlen(":"); 475 462 len += strlen("supplier:") + 1; 476 463 buf = kzalloc(len, GFP_KERNEL); 477 464 if (!buf) ··· 487 470 if (ret) 488 471 goto err_con; 489 472 490 - snprintf(buf, len, "consumer:%s", dev_name(con)); 473 + snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con)); 491 474 ret = sysfs_create_link(&sup->kobj, &link->link_dev.kobj, buf); 492 475 if (ret) 493 476 goto err_con_dev; 494 477 495 - snprintf(buf, len, "supplier:%s", dev_name(sup)); 478 + snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup)); 496 479 ret = sysfs_create_link(&con->kobj, &link->link_dev.kobj, buf); 497 480 if (ret) 498 481 goto err_sup_dev; ··· 500 483 goto out; 501 484 502 485 err_sup_dev:
503 - snprintf(buf, len, "consumer:%s", dev_name(con)); 486 + snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con)); 504 487 sysfs_remove_link(&sup->kobj, buf); 505 488 err_con_dev: 506 489 sysfs_remove_link(&link->link_dev.kobj, "consumer"); ··· 523 506 sysfs_remove_link(&link->link_dev.kobj, "consumer"); 524 507 sysfs_remove_link(&link->link_dev.kobj, "supplier"); 525 508 526 - len = max(strlen(dev_name(sup)), strlen(dev_name(con))); 509 + len = max(strlen(dev_bus_name(sup)) + strlen(dev_name(sup)), 510 + strlen(dev_bus_name(con)) + strlen(dev_name(con))); 511 + len += strlen(":"); 527 512 len += strlen("supplier:") + 1; 528 513 buf = kzalloc(len, GFP_KERNEL); 529 514 if (!buf) { ··· 533 514 return; 534 515 } 535 516 536 - snprintf(buf, len, "supplier:%s", dev_name(sup)); 517 + snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup)); 537 518 sysfs_remove_link(&con->kobj, buf); 538 - snprintf(buf, len, "consumer:%s", dev_name(con)); 519 + snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con)); 539 520 sysfs_remove_link(&sup->kobj, buf); 540 521 kfree(buf); 541 522 } ··· 756 737 757 738 link->link_dev.class = &devlink_class; 758 739 device_set_pm_not_required(&link->link_dev); 759 - dev_set_name(&link->link_dev, "%s--%s", 760 - dev_name(supplier), dev_name(consumer)); 740 + dev_set_name(&link->link_dev, "%s:%s--%s:%s", 741 + dev_bus_name(supplier), dev_name(supplier), 742 + dev_bus_name(consumer), dev_name(consumer)); 761 743 if (device_register(&link->link_dev)) { 762 744 put_device(consumer); 763 745 put_device(supplier); ··· 1828 1808 * never change once they are set, so they don't need special care. 1829 1809 */ 1830 1810 drv = READ_ONCE(dev->driver); 1831 - return drv ? drv->name : 1832 - (dev->bus ? dev->bus->name : 1833 - (dev->class ? dev->class->name : "")); 1811 + return drv ? drv->name : dev_bus_name(dev); 1834 1812 } 1835 1813 EXPORT_SYMBOL(dev_driver_string); 1836 1814
+2 -7
drivers/base/dd.c
··· 371 371 device_pm_check_callbacks(dev); 372 372 373 373 /* 374 - * Reorder successfully probed devices to the end of the device list. 375 - * This ensures that suspend/resume order matches probe order, which 376 - * is usually what drivers rely on. 377 - */ 378 - device_pm_move_to_tail(dev); 379 - 380 - /* 381 374 * Make sure the device is no longer in one of the deferred lists and 382 375 * kick off retrying all pending devices 383 376 */ ··· 612 619 else if (drv->remove) 613 620 drv->remove(dev); 614 621 probe_failed: 622 + kfree(dev->dma_range_map); 623 + dev->dma_range_map = NULL; 615 624 if (dev->bus) 616 625 blocking_notifier_call_chain(&dev->bus->p->bus_notifier, 617 626 BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
+2
drivers/base/platform.c
··· 366 366 return -ERANGE; 367 367 368 368 nvec = platform_irq_count(dev); 369 + if (nvec < 0) 370 + return nvec; 369 371 370 372 if (nvec < minvec) 371 373 return -ENOSPC;
+7 -13
drivers/block/xen-blkfront.c
··· 945 945 if (info->feature_discard) { 946 946 blk_queue_flag_set(QUEUE_FLAG_DISCARD, rq); 947 947 blk_queue_max_discard_sectors(rq, get_capacity(gd)); 948 - rq->limits.discard_granularity = info->discard_granularity; 948 + rq->limits.discard_granularity = info->discard_granularity ?: 949 + info->physical_sector_size; 949 950 rq->limits.discard_alignment = info->discard_alignment; 950 951 if (info->feature_secdiscard) 951 952 blk_queue_flag_set(QUEUE_FLAG_SECERASE, rq); ··· 2180 2179 2181 2180 static void blkfront_setup_discard(struct blkfront_info *info) 2182 2181 { 2183 - int err; 2184 - unsigned int discard_granularity; 2185 - unsigned int discard_alignment; 2186 - 2187 2182 info->feature_discard = 1; 2188 - err = xenbus_gather(XBT_NIL, info->xbdev->otherend, 2189 - "discard-granularity", "%u", &discard_granularity, 2190 - "discard-alignment", "%u", &discard_alignment, 2191 - NULL); 2192 - if (!err) { 2193 - info->discard_granularity = discard_granularity; 2194 - info->discard_alignment = discard_alignment; 2195 - } 2183 + info->discard_granularity = xenbus_read_unsigned(info->xbdev->otherend, 2184 + "discard-granularity", 2185 + 0); 2186 + info->discard_alignment = xenbus_read_unsigned(info->xbdev->otherend, 2187 + "discard-alignment", 0); 2196 2188 info->feature_secdiscard = 2197 2189 !!xenbus_read_unsigned(info->xbdev->otherend, "discard-secure", 2198 2190 0);
+1
drivers/bus/arm-integrator-lm.c
··· 54 54 ret = of_platform_default_populate(child, NULL, dev); 55 55 if (ret) { 56 56 dev_err(dev, "failed to populate module\n"); 57 + of_node_put(child); 57 58 return ret; 58 59 } 59 60 }
-2
drivers/clk/imx/Kconfig
··· 6 6 7 7 config MXC_CLK_SCU 8 8 tristate 9 - depends on ARCH_MXC 10 - depends on IMX_SCU && HAVE_ARM_SMCCC 11 9 12 10 config CLK_IMX1 13 11 def_bool SOC_IMX1
+4 -2
drivers/clk/mmp/clk-audio.c
··· 392 392 return 0; 393 393 } 394 394 395 - static int __maybe_unused mmp2_audio_clk_suspend(struct device *dev) 395 + #ifdef CONFIG_PM 396 + static int mmp2_audio_clk_suspend(struct device *dev) 396 397 { 397 398 struct mmp2_audio_clk *priv = dev_get_drvdata(dev); 398 399 ··· 405 404 return 0; 406 405 } 407 406 408 - static int __maybe_unused mmp2_audio_clk_resume(struct device *dev) 407 + static int mmp2_audio_clk_resume(struct device *dev) 409 408 { 410 409 struct mmp2_audio_clk *priv = dev_get_drvdata(dev); 411 410 ··· 416 415 417 416 return 0; 418 417 } 418 + #endif 419 419 420 420 static const struct dev_pm_ops mmp2_audio_clk_pm_ops = { 421 421 SET_RUNTIME_PM_OPS(mmp2_audio_clk_suspend, mmp2_audio_clk_resume, NULL)
+3 -18
drivers/clk/qcom/gcc-sc7180.c
··· 891 891 }, 892 892 }; 893 893 894 - static struct clk_branch gcc_camera_ahb_clk = { 895 - .halt_reg = 0xb008, 896 - .halt_check = BRANCH_HALT, 897 - .hwcg_reg = 0xb008, 898 - .hwcg_bit = 1, 899 - .clkr = { 900 - .enable_reg = 0xb008, 901 - .enable_mask = BIT(0), 902 - .hw.init = &(struct clk_init_data){ 903 - .name = "gcc_camera_ahb_clk", 904 - .ops = &clk_branch2_ops, 905 - }, 906 - }, 907 - }; 908 - 909 894 static struct clk_branch gcc_camera_hf_axi_clk = { 910 895 .halt_reg = 0xb020, 911 896 .halt_check = BRANCH_HALT, ··· 2302 2317 [GCC_AGGRE_UFS_PHY_AXI_CLK] = &gcc_aggre_ufs_phy_axi_clk.clkr, 2303 2318 [GCC_AGGRE_USB3_PRIM_AXI_CLK] = &gcc_aggre_usb3_prim_axi_clk.clkr, 2304 2319 [GCC_BOOT_ROM_AHB_CLK] = &gcc_boot_rom_ahb_clk.clkr, 2305 - [GCC_CAMERA_AHB_CLK] = &gcc_camera_ahb_clk.clkr, 2306 2320 [GCC_CAMERA_HF_AXI_CLK] = &gcc_camera_hf_axi_clk.clkr, 2307 2321 [GCC_CAMERA_THROTTLE_HF_AXI_CLK] = &gcc_camera_throttle_hf_axi_clk.clkr, 2308 2322 [GCC_CAMERA_XO_CLK] = &gcc_camera_xo_clk.clkr, ··· 2503 2519 2504 2520 /* 2505 2521 * Keep the clocks always-ON 2506 - * GCC_CPUSS_GNOC_CLK, GCC_VIDEO_AHB_CLK, GCC_DISP_AHB_CLK 2507 - * GCC_GPU_CFG_AHB_CLK 2522 + * GCC_CPUSS_GNOC_CLK, GCC_VIDEO_AHB_CLK, GCC_CAMERA_AHB_CLK, 2523 + * GCC_DISP_AHB_CLK, GCC_GPU_CFG_AHB_CLK 2508 2524 */ 2509 2525 regmap_update_bits(regmap, 0x48004, BIT(0), BIT(0)); 2510 2526 regmap_update_bits(regmap, 0x0b004, BIT(0), BIT(0)); 2527 + regmap_update_bits(regmap, 0x0b008, BIT(0), BIT(0)); 2511 2528 regmap_update_bits(regmap, 0x0b00c, BIT(0), BIT(0)); 2512 2529 regmap_update_bits(regmap, 0x71004, BIT(0), BIT(0)); 2513 2530
+2 -2
drivers/clk/qcom/gcc-sm8250.c
··· 722 722 .name = "gcc_sdcc2_apps_clk_src", 723 723 .parent_data = gcc_parent_data_4, 724 724 .num_parents = 5, 725 - .ops = &clk_rcg2_ops, 725 + .ops = &clk_rcg2_floor_ops, 726 726 }, 727 727 }; 728 728 ··· 745 745 .name = "gcc_sdcc4_apps_clk_src", 746 746 .parent_data = gcc_parent_data_0, 747 747 .num_parents = 3, 748 - .ops = &clk_rcg2_ops, 748 + .ops = &clk_rcg2_floor_ops, 749 749 }, 750 750 }; 751 751
-35
drivers/counter/ti-eqep.c
··· 235 235 return len; 236 236 } 237 237 238 - static ssize_t ti_eqep_position_floor_read(struct counter_device *counter, 239 - struct counter_count *count, 240 - void *ext_priv, char *buf) 241 - { 242 - struct ti_eqep_cnt *priv = counter->priv; 243 - u32 qposinit; 244 - 245 - regmap_read(priv->regmap32, QPOSINIT, &qposinit); 246 - 247 - return sprintf(buf, "%u\n", qposinit); 248 - } 249 - 250 - static ssize_t ti_eqep_position_floor_write(struct counter_device *counter, 251 - struct counter_count *count, 252 - void *ext_priv, const char *buf, 253 - size_t len) 254 - { 255 - struct ti_eqep_cnt *priv = counter->priv; 256 - int err; 257 - u32 res; 258 - 259 - err = kstrtouint(buf, 0, &res); 260 - if (err < 0) 261 - return err; 262 - 263 - regmap_write(priv->regmap32, QPOSINIT, res); 264 - 265 - return len; 266 - } 267 - 268 238 static ssize_t ti_eqep_position_enable_read(struct counter_device *counter, 269 239 struct counter_count *count, 270 240 void *ext_priv, char *buf) ··· 270 300 .name = "ceiling", 271 301 .read = ti_eqep_position_ceiling_read, 272 302 .write = ti_eqep_position_ceiling_write, 273 - }, 274 - { 275 - .name = "floor", 276 - .read = ti_eqep_position_floor_read, 277 - .write = ti_eqep_position_floor_write, 278 303 }, 279 304 { 280 305 .name = "enable",
+2 -2
drivers/crypto/marvell/cesa/cesa.h
··· 300 300 __le32 byte_cnt; 301 301 union { 302 302 __le32 src; 303 - dma_addr_t src_dma; 303 + u32 src_dma; 304 304 }; 305 305 union { 306 306 __le32 dst; 307 - dma_addr_t dst_dma; 307 + u32 dst_dma; 308 308 }; 309 309 __le32 next_dma; 310 310
+1
drivers/firmware/imx/Kconfig
··· 13 13 config IMX_SCU 14 14 bool "IMX SCU Protocol driver" 15 15 depends on IMX_MBOX 16 + select SOC_BUS 16 17 help 17 18 The System Controller Firmware (SCFW) is a low-level system function 18 19 which runs on a dedicated Cortex-M core to provide power, clock, and
+4 -1
drivers/gpio/Kconfig
··· 521 521 522 522 config GPIO_SIFIVE 523 523 bool "SiFive GPIO support" 524 - depends on OF_GPIO && IRQ_DOMAIN_HIERARCHY 524 + depends on OF_GPIO 525 + select IRQ_DOMAIN_HIERARCHY 525 526 select GPIO_GENERIC 526 527 select GPIOLIB_IRQCHIP 527 528 select REGMAP_MMIO ··· 598 597 default ARCH_TEGRA 599 598 depends on ARCH_TEGRA || COMPILE_TEST 600 599 depends on OF_GPIO 600 + select GPIOLIB_IRQCHIP 601 + select IRQ_DOMAIN_HIERARCHY 601 602 help 602 603 Say yes here to support GPIO pins on NVIDIA Tegra SoCs. 603 604
+8 -11
drivers/gpio/gpio-mvebu.c
··· 676 676 else 677 677 state->duty_cycle = 1; 678 678 679 + val = (unsigned long long) u; /* on duration */ 679 680 regmap_read(mvpwm->regs, mvebu_pwmreg_blink_off_duration(mvpwm), &u); 680 - val = (unsigned long long) u * NSEC_PER_SEC; 681 + val += (unsigned long long) u; /* period = on + off duration */ 682 + val *= NSEC_PER_SEC; 681 683 do_div(val, mvpwm->clk_rate); 682 - if (val < state->duty_cycle) { 684 + if (val > UINT_MAX) 685 + state->period = UINT_MAX; 686 + else if (val) 687 + state->period = val; 688 + else 683 689 state->period = 1; 684 - } else { 685 - val -= state->duty_cycle; 686 - if (val > UINT_MAX) 687 - state->period = UINT_MAX; 688 - else if (val) 689 - state->period = val; 690 - else 691 - state->period = 1; 692 - } 693 690 694 691 regmap_read(mvchip->regs, GPIO_BLINK_EN_OFF + mvchip->offset, &u); 695 692 if (u)
+73 -72
drivers/gpio/gpiolib-cdev.c
··· 1979 1979 #endif 1980 1980 }; 1981 1981 1982 + static int chipinfo_get(struct gpio_chardev_data *cdev, void __user *ip) 1983 + { 1984 + struct gpio_device *gdev = cdev->gdev; 1985 + struct gpiochip_info chipinfo; 1986 + 1987 + memset(&chipinfo, 0, sizeof(chipinfo)); 1988 + 1989 + strscpy(chipinfo.name, dev_name(&gdev->dev), sizeof(chipinfo.name)); 1990 + strscpy(chipinfo.label, gdev->label, sizeof(chipinfo.label)); 1991 + chipinfo.lines = gdev->ngpio; 1992 + if (copy_to_user(ip, &chipinfo, sizeof(chipinfo))) 1993 + return -EFAULT; 1994 + return 0; 1995 + } 1996 + 1982 1997 #ifdef CONFIG_GPIO_CDEV_V1 1983 1998 /* 1984 1999 * returns 0 if the versions match, else the previously selected ABI version ··· 2007 1992 return 0; 2008 1993 2009 1994 return abiv; 1995 + } 1996 + 1997 + static int lineinfo_get_v1(struct gpio_chardev_data *cdev, void __user *ip, 1998 + bool watch) 1999 + { 2000 + struct gpio_desc *desc; 2001 + struct gpioline_info lineinfo; 2002 + struct gpio_v2_line_info lineinfo_v2; 2003 + 2004 + if (copy_from_user(&lineinfo, ip, sizeof(lineinfo))) 2005 + return -EFAULT; 2006 + 2007 + /* this doubles as a range check on line_offset */ 2008 + desc = gpiochip_get_desc(cdev->gdev->chip, lineinfo.line_offset); 2009 + if (IS_ERR(desc)) 2010 + return PTR_ERR(desc); 2011 + 2012 + if (watch) { 2013 + if (lineinfo_ensure_abi_version(cdev, 1)) 2014 + return -EPERM; 2015 + 2016 + if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines)) 2017 + return -EBUSY; 2018 + } 2019 + 2020 + gpio_desc_to_lineinfo(desc, &lineinfo_v2); 2021 + gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo); 2022 + 2023 + if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) { 2024 + if (watch) 2025 + clear_bit(lineinfo.line_offset, cdev->watched_lines); 2026 + return -EFAULT; 2027 + } 2028 + 2029 + return 0; 2010 2030 } 2011 2031 #endif 2012 2032 ··· 2080 2030 return 0; 2081 2031 } 2082 2032 2033 + static int lineinfo_unwatch(struct gpio_chardev_data *cdev, void __user *ip) 2034 + {
2035 + __u32 offset; 2036 + 2037 + if (copy_from_user(&offset, ip, sizeof(offset))) 2038 + return -EFAULT; 2039 + 2040 + if (offset >= cdev->gdev->ngpio) 2041 + return -EINVAL; 2042 + 2043 + if (!test_and_clear_bit(offset, cdev->watched_lines)) 2044 + return -EBUSY; 2045 + 2046 + return 0; 2047 + } 2048 + 2083 2049 /* 2084 2050 * gpio_ioctl() - ioctl handler for the GPIO chardev 2085 2051 */ ··· 2103 2037 { 2104 2038 struct gpio_chardev_data *cdev = file->private_data; 2105 2039 struct gpio_device *gdev = cdev->gdev; 2106 - struct gpio_chip *gc = gdev->chip; 2107 2040 void __user *ip = (void __user *)arg; 2108 - __u32 offset; 2109 2041 2110 2042 /* We fail any subsequent ioctl():s when the chip is gone */ 2111 - if (!gc) 2043 + if (!gdev->chip) 2112 2044 return -ENODEV; 2113 2045 2114 2046 /* Fill in the struct and pass to userspace */ 2115 2047 if (cmd == GPIO_GET_CHIPINFO_IOCTL) { 2116 - struct gpiochip_info chipinfo; 2117 - 2118 - memset(&chipinfo, 0, sizeof(chipinfo)); 2119 - 2120 - strscpy(chipinfo.name, dev_name(&gdev->dev), 2121 - sizeof(chipinfo.name)); 2122 - strscpy(chipinfo.label, gdev->label, 2123 - sizeof(chipinfo.label)); 2124 - chipinfo.lines = gdev->ngpio; 2125 - if (copy_to_user(ip, &chipinfo, sizeof(chipinfo))) 2126 - return -EFAULT; 2127 - return 0; 2048 + return chipinfo_get(cdev, ip); 2128 2049 #ifdef CONFIG_GPIO_CDEV_V1 2129 - } else if (cmd == GPIO_GET_LINEINFO_IOCTL) { 2130 - struct gpio_desc *desc; 2131 - struct gpioline_info lineinfo; 2132 - struct gpio_v2_line_info lineinfo_v2; 2133 - 2134 - if (copy_from_user(&lineinfo, ip, sizeof(lineinfo))) 2135 - return -EFAULT; 2136 - 2137 - /* this doubles as a range check on line_offset */ 2138 - desc = gpiochip_get_desc(gc, lineinfo.line_offset); 2139 - if (IS_ERR(desc)) 2140 - return PTR_ERR(desc); 2141 - 2142 - gpio_desc_to_lineinfo(desc, &lineinfo_v2); 2143 - gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo); 2144 - 2145 - if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) 2146 - return -EFAULT;
2147 - return 0; 2148 2050 } else if (cmd == GPIO_GET_LINEHANDLE_IOCTL) { 2149 2051 return linehandle_create(gdev, ip); 2150 2052 } else if (cmd == GPIO_GET_LINEEVENT_IOCTL) { 2151 2053 return lineevent_create(gdev, ip); 2152 - } else if (cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) { 2153 - struct gpio_desc *desc; 2154 - struct gpioline_info lineinfo; 2155 - struct gpio_v2_line_info lineinfo_v2; 2156 - 2157 - if (copy_from_user(&lineinfo, ip, sizeof(lineinfo))) 2158 - return -EFAULT; 2159 - 2160 - /* this doubles as a range check on line_offset */ 2161 - desc = gpiochip_get_desc(gc, lineinfo.line_offset); 2162 - if (IS_ERR(desc)) 2163 - return PTR_ERR(desc); 2164 - 2165 - if (lineinfo_ensure_abi_version(cdev, 1)) 2166 - return -EPERM; 2167 - 2168 - if (test_and_set_bit(lineinfo.line_offset, cdev->watched_lines)) 2169 - return -EBUSY; 2170 - 2171 - gpio_desc_to_lineinfo(desc, &lineinfo_v2); 2172 - gpio_v2_line_info_to_v1(&lineinfo_v2, &lineinfo); 2173 - 2174 - if (copy_to_user(ip, &lineinfo, sizeof(lineinfo))) { 2175 - clear_bit(lineinfo.line_offset, cdev->watched_lines); 2176 - return -EFAULT; 2177 - } 2178 - 2179 - return 0; 2054 + } else if (cmd == GPIO_GET_LINEINFO_IOCTL || 2055 + cmd == GPIO_GET_LINEINFO_WATCH_IOCTL) { 2056 + return lineinfo_get_v1(cdev, ip, 2057 + cmd == GPIO_GET_LINEINFO_WATCH_IOCTL); 2180 2058 #endif /* CONFIG_GPIO_CDEV_V1 */ 2181 2059 } else if (cmd == GPIO_V2_GET_LINEINFO_IOCTL || 2182 2060 cmd == GPIO_V2_GET_LINEINFO_WATCH_IOCTL) { ··· 2129 2119 } else if (cmd == GPIO_V2_GET_LINE_IOCTL) { 2130 2120 return linereq_create(gdev, ip); 2131 2121 } else if (cmd == GPIO_GET_LINEINFO_UNWATCH_IOCTL) { 2132 - if (copy_from_user(&offset, ip, sizeof(offset))) 2133 - return -EFAULT; 2134 - 2135 - if (offset >= cdev->gdev->ngpio) 2136 - return -EINVAL; 2137 - 2138 - if (!test_and_clear_bit(offset, cdev->watched_lines)) 2139 - return -EBUSY; 2140 - 2141 - return 0; 2122 + return lineinfo_unwatch(cdev, ip); 2142 2123 } 2143 2124 return -EINVAL; 2144 2125 }
+3
drivers/gpio/gpiolib.c
··· 1489 1489 type = IRQ_TYPE_NONE; 1490 1490 } 1491 1491 1492 + if (gc->to_irq) 1493 + chip_warn(gc, "to_irq is redefined in %s and you shouldn't rely on it\n", __func__); 1494 + 1492 1495 gc->to_irq = gpiochip_to_irq; 1493 1496 gc->irq.default_type = type; 1494 1497 gc->irq.lock_key = lock_key;
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 81 81 MODULE_FIRMWARE("amdgpu/navi14_gpu_info.bin"); 82 82 MODULE_FIRMWARE("amdgpu/navi12_gpu_info.bin"); 83 83 MODULE_FIRMWARE("amdgpu/vangogh_gpu_info.bin"); 84 - MODULE_FIRMWARE("amdgpu/green_sardine_gpu_info.bin"); 85 84 86 85 #define AMDGPU_RESUME_MS 2000 87 86
+3 -1
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 119 119 #define mmVGT_ESGS_RING_SIZE_Vangogh_BASE_IDX 1 120 120 #define mmSPI_CONFIG_CNTL_Vangogh 0x2440 121 121 #define mmSPI_CONFIG_CNTL_Vangogh_BASE_IDX 1 122 + #define mmGCR_GENERAL_CNTL_Vangogh 0x1580 123 + #define mmGCR_GENERAL_CNTL_Vangogh_BASE_IDX 0 122 124 123 125 #define mmCP_HYP_PFP_UCODE_ADDR 0x5814 124 126 #define mmCP_HYP_PFP_UCODE_ADDR_BASE_IDX 1 ··· 3246 3244 SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_DEBUG4, 0xffffffff, 0x00800000), 3247 3245 SOC15_REG_GOLDEN_VALUE(GC, 0, mmDB_EXCEPTION_CONTROL, 0x7fff0f1f, 0x00b80000), 3248 3246 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGB_ADDR_CONFIG, 0x0c1807ff, 0x00000142), 3249 - SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500), 3247 + SOC15_REG_GOLDEN_VALUE(GC, 0, mmGCR_GENERAL_CNTL_Vangogh, 0x1ff1ffff, 0x00000500), 3250 3248 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL1_PIPE_STEER, 0x000000ff, 0x000000e4), 3251 3249 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_0, 0x77777777, 0x32103210), 3252 3250 SOC15_REG_GOLDEN_VALUE(GC, 0, mmGL2_PIPE_STEER_1, 0x77777777, 0x32103210),
+59 -21
drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
··· 491 491 { 492 492 uint32_t def, data, def1, data1; 493 493 494 - def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG); 494 + def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_CGTT_CLK_CTRL); 495 495 def1 = data1 = RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2); 496 496 497 497 if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) { 498 - data |= MM_ATC_L2_MISC_CG__ENABLE_MASK; 499 - 498 + data &= ~MM_ATC_L2_CGTT_CLK_CTRL__SOFT_OVERRIDE_MASK; 500 499 data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK | 501 500 DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK | 502 501 DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK | ··· 504 505 DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK); 505 506 506 507 } else { 507 - data &= ~MM_ATC_L2_MISC_CG__ENABLE_MASK; 508 - 508 + data |= MM_ATC_L2_CGTT_CLK_CTRL__SOFT_OVERRIDE_MASK; 509 509 data1 |= (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK | 510 510 DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK | 511 511 DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK | ··· 514 516 } 515 517 516 518 if (def != data) 517 - WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG, data); 519 + WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_CGTT_CLK_CTRL, data); 518 520 if (def1 != data1) 519 521 WREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2, data1); 520 522 } ··· 523 525 mmhub_v2_3_update_medium_grain_light_sleep(struct amdgpu_device *adev, 524 526 bool enable) 525 527 { 526 - uint32_t def, data; 528 + uint32_t def, data, def1, data1, def2, data2; 527 529 528 - def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG); 530 + def = data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_CGTT_CLK_CTRL); 531 + def1 = data1 = RREG32_SOC15(MMHUB, 0, mmDAGB0_WR_CGTT_CLK_CTRL); 532 + def2 = data2 = RREG32_SOC15(MMHUB, 0, mmDAGB0_RD_CGTT_CLK_CTRL); 529 533 530 - if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS)) 531 - data |= MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK; 532 - else 533 - data &= ~MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK; 534 + if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS)) { 535 + data &= 
~MM_ATC_L2_CGTT_CLK_CTRL__MGLS_OVERRIDE_MASK; 536 + data1 &= ~(DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_MASK | 537 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_WRITE_MASK | 538 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_READ_MASK | 539 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_RETURN_MASK | 540 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_REGISTER_MASK); 541 + data2 &= ~(DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_MASK | 542 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_WRITE_MASK | 543 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_READ_MASK | 544 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_RETURN_MASK | 545 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_REGISTER_MASK); 546 + } else { 547 + data |= MM_ATC_L2_CGTT_CLK_CTRL__MGLS_OVERRIDE_MASK; 548 + data1 |= (DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_MASK | 549 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_WRITE_MASK | 550 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_READ_MASK | 551 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_RETURN_MASK | 552 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_REGISTER_MASK); 553 + data2 |= (DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_MASK | 554 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_WRITE_MASK | 555 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_READ_MASK | 556 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_RETURN_MASK | 557 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_REGISTER_MASK); 558 + } 534 559 535 560 if (def != data) 536 - WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG, data); 561 + WREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_CGTT_CLK_CTRL, data); 562 + if (def1 != data1) 563 + WREG32_SOC15(MMHUB, 0, mmDAGB0_WR_CGTT_CLK_CTRL, data1); 564 + if (def2 != data2) 565 + WREG32_SOC15(MMHUB, 0, mmDAGB0_RD_CGTT_CLK_CTRL, data2); 537 566 } 538 567 539 568 static int mmhub_v2_3_set_clockgating(struct amdgpu_device *adev, ··· 579 554 580 555 static void mmhub_v2_3_get_clockgating(struct amdgpu_device *adev, u32 *flags) 581 556 { 582 - int data, data1; 557 + int data, data1, data2, data3; 583 558 584 559 if (amdgpu_sriov_vf(adev)) 585 560 *flags = 0; 586 561 587 - data = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_MISC_CG); 588 - data1 = 
RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2); 562 + data = RREG32_SOC15(MMHUB, 0, mmDAGB0_CNTL_MISC2); 563 + data1 = RREG32_SOC15(MMHUB, 0, mmMM_ATC_L2_CGTT_CLK_CTRL); 564 + data2 = RREG32_SOC15(MMHUB, 0, mmDAGB0_WR_CGTT_CLK_CTRL); 565 + data3 = RREG32_SOC15(MMHUB, 0, mmDAGB0_RD_CGTT_CLK_CTRL); 589 566 590 567 /* AMD_CG_SUPPORT_MC_MGCG */ 591 - if ((data & MM_ATC_L2_MISC_CG__ENABLE_MASK) && 592 - !(data1 & (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK | 568 + if (!(data & (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK | 593 569 DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK | 594 570 DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK | 595 571 DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK | 596 572 DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK | 597 - DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK))) 598 - *flags |= AMD_CG_SUPPORT_MC_MGCG; 573 + DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK)) 574 + && !(data1 & MM_ATC_L2_CGTT_CLK_CTRL__SOFT_OVERRIDE_MASK)) { 575 + *flags |= AMD_CG_SUPPORT_MC_MGCG; 576 + } 599 577 600 578 /* AMD_CG_SUPPORT_MC_LS */ 601 - if (data & MM_ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK) 579 + if (!(data1 & MM_ATC_L2_CGTT_CLK_CTRL__MGLS_OVERRIDE_MASK) 580 + && !(data2 & (DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_MASK | 581 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_WRITE_MASK | 582 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_READ_MASK | 583 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_RETURN_MASK | 584 + DAGB0_WR_CGTT_CLK_CTRL__LS_OVERRIDE_REGISTER_MASK)) 585 + && !(data3 & (DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_MASK | 586 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_WRITE_MASK | 587 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_READ_MASK | 588 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_RETURN_MASK | 589 + DAGB0_RD_CGTT_CLK_CTRL__LS_OVERRIDE_REGISTER_MASK))) 602 590 *flags |= AMD_CG_SUPPORT_MC_LS; 603 591 } 604 592
+4 -2
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
··· 251 251 struct dmcu *dmcu = clk_mgr_base->ctx->dc->res_pool->dmcu; 252 252 bool force_reset = false; 253 253 bool update_uclk = false; 254 + bool p_state_change_support; 254 255 255 256 if (dc->work_arounds.skip_clock_update || !clk_mgr->smu_present) 256 257 return; ··· 292 291 clk_mgr_base->clks.socclk_khz = new_clocks->socclk_khz; 293 292 294 293 clk_mgr_base->clks.prev_p_state_change_support = clk_mgr_base->clks.p_state_change_support; 295 - if (should_update_pstate_support(safe_to_lower, new_clocks->p_state_change_support, clk_mgr_base->clks.p_state_change_support)) { 296 - clk_mgr_base->clks.p_state_change_support = new_clocks->p_state_change_support; 294 + p_state_change_support = new_clocks->p_state_change_support || (display_count == 0); 295 + if (should_update_pstate_support(safe_to_lower, p_state_change_support, clk_mgr_base->clks.p_state_change_support)) { 296 + clk_mgr_base->clks.p_state_change_support = p_state_change_support; 297 297 298 298 /* to disable P-State switching, set UCLK min = max */ 299 299 if (!clk_mgr_base->clks.p_state_change_support)
+14 -4
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 647 647 if (REG(DC_IP_REQUEST_CNTL)) { 648 648 REG_SET(DC_IP_REQUEST_CNTL, 0, 649 649 IP_REQUEST_EN, 1); 650 - hws->funcs.dpp_pg_control(hws, plane_id, true); 651 - hws->funcs.hubp_pg_control(hws, plane_id, true); 650 + 651 + if (hws->funcs.dpp_pg_control) 652 + hws->funcs.dpp_pg_control(hws, plane_id, true); 653 + 654 + if (hws->funcs.hubp_pg_control) 655 + hws->funcs.hubp_pg_control(hws, plane_id, true); 656 + 652 657 REG_SET(DC_IP_REQUEST_CNTL, 0, 653 658 IP_REQUEST_EN, 0); 654 659 DC_LOG_DEBUG( ··· 1087 1082 if (REG(DC_IP_REQUEST_CNTL)) { 1088 1083 REG_SET(DC_IP_REQUEST_CNTL, 0, 1089 1084 IP_REQUEST_EN, 1); 1090 - hws->funcs.dpp_pg_control(hws, dpp->inst, false); 1091 - hws->funcs.hubp_pg_control(hws, hubp->inst, false); 1085 + 1086 + if (hws->funcs.dpp_pg_control) 1087 + hws->funcs.dpp_pg_control(hws, dpp->inst, false); 1088 + 1089 + if (hws->funcs.hubp_pg_control) 1090 + hws->funcs.hubp_pg_control(hws, hubp->inst, false); 1091 + 1092 1092 dpp->funcs->dpp_reset(dpp); 1093 1093 REG_SET(DC_IP_REQUEST_CNTL, 0, 1094 1094 IP_REQUEST_EN, 0);
+7 -2
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 1062 1062 if (REG(DC_IP_REQUEST_CNTL)) { 1063 1063 REG_SET(DC_IP_REQUEST_CNTL, 0, 1064 1064 IP_REQUEST_EN, 1); 1065 - dcn20_dpp_pg_control(hws, pipe_ctx->plane_res.dpp->inst, true); 1066 - dcn20_hubp_pg_control(hws, pipe_ctx->plane_res.hubp->inst, true); 1065 + 1066 + if (hws->funcs.dpp_pg_control) 1067 + hws->funcs.dpp_pg_control(hws, pipe_ctx->plane_res.dpp->inst, true); 1068 + 1069 + if (hws->funcs.hubp_pg_control) 1070 + hws->funcs.hubp_pg_control(hws, pipe_ctx->plane_res.hubp->inst, true); 1071 + 1067 1072 REG_SET(DC_IP_REQUEST_CNTL, 0, 1068 1073 IP_REQUEST_EN, 0); 1069 1074 DC_LOG_DEBUG(
+4 -3
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 2517 2517 * if this primary pipe has a bottom pipe in prev. state 2518 2518 * and if the bottom pipe is still available (which it should be), 2519 2519 * pick that pipe as secondary 2520 - * Same logic applies for ODM pipes. Since mpo is not allowed with odm 2521 - * check in else case. 2520 + * Same logic applies for ODM pipes 2522 2521 */ 2523 2522 if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe) { 2524 2523 preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe->pipe_idx; ··· 2525 2526 secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx]; 2526 2527 secondary_pipe->pipe_idx = preferred_pipe_idx; 2527 2528 } 2528 - } else if (dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) { 2529 + } 2530 + if (secondary_pipe == NULL && 2531 + dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe) { 2529 2532 preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe->pipe_idx; 2530 2533 if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) { 2531 2534 secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
+1 -1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
··· 296 296 .num_banks = 8, 297 297 .num_chans = 4, 298 298 .vmm_page_size_bytes = 4096, 299 - .dram_clock_change_latency_us = 23.84, 299 + .dram_clock_change_latency_us = 11.72, 300 300 .return_bus_width_bytes = 64, 301 301 .dispclk_dppclk_vco_speed_mhz = 3600, 302 302 .xfc_bus_transport_time_us = 4,
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
··· 1121 1121 static int renoir_gfx_state_change_set(struct smu_context *smu, uint32_t state) 1122 1122 { 1123 1123 1124 - return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GpuChangeState, state, NULL); 1124 + return 0; 1125 1125 } 1126 1126 1127 1127 static const struct pptable_funcs renoir_ppt_funcs = {
+1 -1
drivers/gpu/drm/drm_atomic_helper.c
··· 3021 3021 3022 3022 ret = handle_conflicting_encoders(state, true); 3023 3023 if (ret) 3024 - return ret; 3024 + goto fail; 3025 3025 3026 3026 ret = drm_atomic_commit(state); 3027 3027
+11 -3
drivers/gpu/drm/drm_gem_vram_helper.c
··· 387 387 if (gbo->vmap_use_count > 0) 388 388 goto out; 389 389 390 - ret = ttm_bo_vmap(&gbo->bo, &gbo->map); 391 - if (ret) 392 - return ret; 390 + /* 391 + * VRAM helpers unmap the BO only on demand. So the previous 392 + * page mapping might still be around. Only vmap if there's 393 + * no mapping present. 394 + */ 395 + if (dma_buf_map_is_null(&gbo->map)) { 396 + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); 397 + if (ret) 398 + return ret; 399 + } 393 400 394 401 out: 395 402 ++gbo->vmap_use_count; ··· 584 577 return; 585 578 586 579 ttm_bo_vunmap(bo, &gbo->map); 580 + dma_buf_map_clear(&gbo->map); /* explicitly clear mapping for next vmap call */ 587 581 } 588 582 589 583 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
+5 -3
drivers/gpu/drm/drm_syncobj.c
··· 388 388 return -ENOENT; 389 389 390 390 *fence = drm_syncobj_fence_get(syncobj); 391 - drm_syncobj_put(syncobj); 392 391 393 392 if (*fence) { 394 393 ret = dma_fence_chain_find_seqno(fence, point); 395 394 if (!ret) 396 - return 0; 395 + goto out; 397 396 dma_fence_put(*fence); 398 397 } else { 399 398 ret = -EINVAL; 400 399 } 401 400 402 401 if (!(flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT)) 403 - return ret; 402 + goto out; 404 403 405 404 memset(&wait, 0, sizeof(wait)); 406 405 wait.task = current; ··· 430 431 431 432 if (wait.node.next) 432 433 drm_syncobj_remove_wait(syncobj, &wait); 434 + 435 + out: 436 + drm_syncobj_put(syncobj); 433 437 434 438 return ret; 435 439 }
+1 -1
drivers/gpu/drm/i915/display/intel_ddi.c
··· 3725 3725 intel_ddi_init_dp_buf_reg(encoder, crtc_state); 3726 3726 if (!is_mst) 3727 3727 intel_dp_set_power(intel_dp, DP_SET_POWER_D0); 3728 - intel_dp_configure_protocol_converter(intel_dp); 3728 + intel_dp_configure_protocol_converter(intel_dp, crtc_state); 3729 3729 intel_dp_sink_set_decompression_state(intel_dp, crtc_state, 3730 3730 true); 3731 3731 intel_dp_sink_set_fec_ready(intel_dp, crtc_state);
+5 -4
drivers/gpu/drm/i915/display/intel_dp.c
··· 4014 4014 intel_de_posting_read(dev_priv, intel_dp->output_reg); 4015 4015 } 4016 4016 4017 - void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp) 4017 + void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp, 4018 + const struct intel_crtc_state *crtc_state) 4018 4019 { 4019 4020 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 4020 4021 u8 tmp; ··· 4034 4033 drm_dbg_kms(&i915->drm, "Failed to set protocol converter HDMI mode to %s\n", 4035 4034 enableddisabled(intel_dp->has_hdmi_sink)); 4036 4035 4037 - tmp = intel_dp->dfp.ycbcr_444_to_420 ? 4038 - DP_CONVERSION_TO_YCBCR420_ENABLE : 0; 4036 + tmp = crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR444 && 4037 + intel_dp->dfp.ycbcr_444_to_420 ? DP_CONVERSION_TO_YCBCR420_ENABLE : 0; 4039 4038 4040 4039 if (drm_dp_dpcd_writeb(&intel_dp->aux, 4041 4040 DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1) ··· 4089 4088 } 4090 4089 4091 4090 intel_dp_set_power(intel_dp, DP_SET_POWER_D0); 4092 - intel_dp_configure_protocol_converter(intel_dp); 4091 + intel_dp_configure_protocol_converter(intel_dp, pipe_config); 4093 4092 intel_dp_start_link_train(intel_dp, pipe_config); 4094 4093 intel_dp_stop_link_train(intel_dp, pipe_config); 4095 4094
+2 -1
drivers/gpu/drm/i915/display/intel_dp.h
··· 51 51 int intel_dp_retrain_link(struct intel_encoder *encoder, 52 52 struct drm_modeset_acquire_ctx *ctx); 53 53 void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode); 54 - void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp); 54 + void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp, 55 + const struct intel_crtc_state *crtc_state); 55 56 void intel_dp_sink_set_decompression_state(struct intel_dp *intel_dp, 56 57 const struct intel_crtc_state *crtc_state, 57 58 bool enable);
+9
drivers/gpu/drm/i915/display/intel_hdcp.c
··· 2210 2210 if (content_protection_type_changed) { 2211 2211 mutex_lock(&hdcp->mutex); 2212 2212 hdcp->value = DRM_MODE_CONTENT_PROTECTION_DESIRED; 2213 + drm_connector_get(&connector->base); 2213 2214 schedule_work(&hdcp->prop_work); 2214 2215 mutex_unlock(&hdcp->mutex); 2215 2216 } ··· 2222 2221 desired_and_not_enabled = 2223 2222 hdcp->value != DRM_MODE_CONTENT_PROTECTION_ENABLED; 2224 2223 mutex_unlock(&hdcp->mutex); 2224 + /* 2225 + * If HDCP already ENABLED and CP property is DESIRED, schedule 2226 + * prop_work to update correct CP property to user space. 2227 + */ 2228 + if (!desired_and_not_enabled && !content_protection_type_changed) { 2229 + drm_connector_get(&connector->base); 2230 + schedule_work(&hdcp->prop_work); 2231 + } 2225 2232 } 2226 2233 2227 2234 if (desired_and_not_enabled || content_protection_type_changed)
+2 -7
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
··· 134 134 return true; 135 135 } 136 136 137 - static inline bool __request_completed(const struct i915_request *rq) 138 - { 139 - return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno); 140 - } 141 - 142 137 __maybe_unused static bool 143 138 check_signal_order(struct intel_context *ce, struct i915_request *rq) 144 139 { ··· 252 257 list_for_each_entry_rcu(rq, &ce->signals, signal_link) { 253 258 bool release; 254 259 255 - if (!__request_completed(rq)) 260 + if (!__i915_request_is_complete(rq)) 256 261 break; 257 262 258 263 if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, ··· 374 379 * straight onto a signaled list, and queue the irq worker for 375 380 * its signal completion. 376 381 */ 377 - if (__request_completed(rq)) { 382 + if (__i915_request_is_complete(rq)) { 378 383 if (__signal_request(rq) && 379 384 llist_add(&rq->signal_node, &b->signaled_requests)) 380 385 irq_work_queue(&b->irq_work);
+3
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 3988 3988 static void lrc_destroy_wa_ctx(struct intel_engine_cs *engine) 3989 3989 { 3990 3990 i915_vma_unpin_and_release(&engine->wa_ctx.vma, 0); 3991 + 3992 + /* Called on error unwind, clear all flags to prevent further use */ 3993 + memset(&engine->wa_ctx, 0, sizeof(engine->wa_ctx)); 3991 3994 } 3992 3995 3993 3996 typedef u32 *(*wa_bb_func_t)(struct intel_engine_cs *engine, u32 *batch);
+4 -6
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 126 126 struct intel_timeline_cacheline *cl = 127 127 container_of(rcu, typeof(*cl), rcu); 128 128 129 + /* Must wait until after all *rq->hwsp are complete before removing */ 130 + i915_gem_object_unpin_map(cl->hwsp->vma->obj); 131 + __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS)); 132 + 129 133 i915_active_fini(&cl->active); 130 134 kfree(cl); 131 135 } ··· 137 133 static void __idle_cacheline_free(struct intel_timeline_cacheline *cl) 138 134 { 139 135 GEM_BUG_ON(!i915_active_is_idle(&cl->active)); 140 - 141 - i915_gem_object_unpin_map(cl->hwsp->vma->obj); 142 - i915_vma_put(cl->hwsp->vma); 143 - __idle_hwsp_free(cl->hwsp, ptr_unmask_bits(cl->vaddr, CACHELINE_BITS)); 144 - 145 136 call_rcu(&cl->rcu, __rcu_cacheline_free); 146 137 } 147 138 ··· 178 179 return ERR_CAST(vaddr); 179 180 } 180 181 181 - i915_vma_get(hwsp->vma); 182 182 cl->hwsp = hwsp; 183 183 cl->vaddr = page_pack_bits(vaddr, cacheline); 184 184
+16 -14
drivers/gpu/drm/i915/i915_pmu.c
··· 184 184 return val; 185 185 } 186 186 187 + static void init_rc6(struct i915_pmu *pmu) 188 + { 189 + struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu); 190 + intel_wakeref_t wakeref; 191 + 192 + with_intel_runtime_pm(i915->gt.uncore->rpm, wakeref) { 193 + pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt); 194 + pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = 195 + pmu->sample[__I915_SAMPLE_RC6].cur; 196 + pmu->sleep_last = ktime_get(); 197 + } 198 + } 199 + 187 200 static void park_rc6(struct drm_i915_private *i915) 188 201 { 189 202 struct i915_pmu *pmu = &i915->pmu; 190 203 191 - if (pmu->enable & config_enabled_mask(I915_PMU_RC6_RESIDENCY)) 192 - pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt); 193 - 204 + pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt); 194 205 pmu->sleep_last = ktime_get(); 195 206 } 196 207 ··· 212 201 return __get_rc6(gt); 213 202 } 214 203 204 + static void init_rc6(struct i915_pmu *pmu) { } 215 205 static void park_rc6(struct drm_i915_private *i915) {} 216 206 217 207 #endif ··· 624 612 container_of(event->pmu, typeof(*i915), pmu.base); 625 613 unsigned int bit = event_enabled_bit(event); 626 614 struct i915_pmu *pmu = &i915->pmu; 627 - intel_wakeref_t wakeref; 628 615 unsigned long flags; 629 616 630 - wakeref = intel_runtime_pm_get(&i915->runtime_pm); 631 617 spin_lock_irqsave(&pmu->lock, flags); 632 618 633 619 /* ··· 635 625 BUILD_BUG_ON(ARRAY_SIZE(pmu->enable_count) != I915_PMU_MASK_BITS); 636 626 GEM_BUG_ON(bit >= ARRAY_SIZE(pmu->enable_count)); 637 627 GEM_BUG_ON(pmu->enable_count[bit] == ~0); 638 - 639 - if (pmu->enable_count[bit] == 0 && 640 - config_enabled_mask(I915_PMU_RC6_RESIDENCY) & BIT_ULL(bit)) { 641 - pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = 0; 642 - pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt); 643 - pmu->sleep_last = ktime_get(); 644 - } 645 628 646 629 pmu->enable |= BIT_ULL(bit); 647 630 pmu->enable_count[bit]++; ··· 676 673 * an existing 
non-zero value. 677 674 */ 678 675 local64_set(&event->hw.prev_count, __i915_pmu_event_read(event)); 679 - 680 - intel_runtime_pm_put(&i915->runtime_pm, wakeref); 681 676 } 682 677 683 678 static void i915_pmu_disable(struct perf_event *event) ··· 1131 1130 hrtimer_init(&pmu->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 1132 1131 pmu->timer.function = i915_sample; 1133 1132 pmu->cpuhp.cpu = -1; 1133 + init_rc6(pmu); 1134 1134 1135 1135 if (!is_igp(i915)) { 1136 1136 pmu->name = kasprintf(GFP_KERNEL,
+32 -5
drivers/gpu/drm/i915/i915_request.h
··· 434 434 435 435 static inline bool __i915_request_has_started(const struct i915_request *rq) 436 436 { 437 - return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno - 1); 437 + return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno - 1); 438 438 } 439 439 440 440 /** ··· 465 465 */ 466 466 static inline bool i915_request_started(const struct i915_request *rq) 467 467 { 468 + bool result; 469 + 468 470 if (i915_request_signaled(rq)) 469 471 return true; 470 472 471 - /* Remember: started but may have since been preempted! */ 472 - return __i915_request_has_started(rq); 473 + result = true; 474 + rcu_read_lock(); /* the HWSP may be freed at runtime */ 475 + if (likely(!i915_request_signaled(rq))) 476 + /* Remember: started but may have since been preempted! */ 477 + result = __i915_request_has_started(rq); 478 + rcu_read_unlock(); 479 + 480 + return result; 473 481 } 474 482 475 483 /** ··· 490 482 */ 491 483 static inline bool i915_request_is_running(const struct i915_request *rq) 492 484 { 485 + bool result; 486 + 493 487 if (!i915_request_is_active(rq)) 494 488 return false; 495 489 496 - return __i915_request_has_started(rq); 490 + rcu_read_lock(); 491 + result = __i915_request_has_started(rq) && i915_request_is_active(rq); 492 + rcu_read_unlock(); 493 + 494 + return result; 497 495 } 498 496 499 497 /** ··· 523 509 return !list_empty(&rq->sched.link); 524 510 } 525 511 512 + static inline bool __i915_request_is_complete(const struct i915_request *rq) 513 + { 514 + return i915_seqno_passed(__hwsp_seqno(rq), rq->fence.seqno); 515 + } 516 + 526 517 static inline bool i915_request_completed(const struct i915_request *rq) 527 518 { 519 + bool result; 520 + 528 521 if (i915_request_signaled(rq)) 529 522 return true; 530 523 531 - return i915_seqno_passed(hwsp_seqno(rq), rq->fence.seqno); 524 + result = true; 525 + rcu_read_lock(); /* the HWSP may be freed at runtime */ 526 + if (likely(!i915_request_signaled(rq))) 527 + result = 
__i915_request_is_complete(rq); 528 + rcu_read_unlock(); 529 + 530 + return result; 532 531 } 533 532 534 533 static inline void i915_request_mark_complete(struct i915_request *rq)
+6 -5
drivers/gpu/drm/ttm/ttm_pool.c
··· 79 79 struct page *p; 80 80 void *vaddr; 81 81 82 - if (order) { 83 - gfp_flags |= GFP_TRANSHUGE_LIGHT | __GFP_NORETRY | 82 + /* Don't set the __GFP_COMP flag for higher order allocations. 83 + * Mapping pages directly into a userspace process and calling 84 + * put_page() on a TTM allocated page is illegal. 85 + */ 86 + if (order) 87 + gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | 84 88 __GFP_KSWAPD_RECLAIM; 85 89 86 - gfp_flags &= ~__GFP_MOVABLE; 87 - gfp_flags &= ~__GFP_COMP; 88 - } 88 89 89 90 if (!pool->use_dma_alloc) { 90 91 p = alloc_pages(gfp_flags, order);
+1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 1267 1267 card->dai_link = dai_link; 1268 1268 card->num_links = 1; 1269 1269 card->name = vc4_hdmi->variant->card_name; 1270 + card->driver_name = "vc4-hdmi"; 1270 1271 card->dev = dev; 1271 1272 card->owner = THIS_MODULE; 1272 1273
+2 -1
drivers/hid/hid-multitouch.c
··· 758 758 MT_STORE_FIELD(inrange_state); 759 759 return 1; 760 760 case HID_DG_CONFIDENCE: 761 - if (cls->name == MT_CLS_WIN_8 && 761 + if ((cls->name == MT_CLS_WIN_8 || 762 + cls->name == MT_CLS_WIN_8_FORCE_MULTI_INPUT) && 762 763 (field->application == HID_DG_TOUCHPAD || 763 764 field->application == HID_DG_TOUCHSCREEN)) 764 765 app->quirks |= MT_QUIRK_CONFIDENCE;
+4 -3
drivers/hid/wacom_sys.c
··· 147 147 } 148 148 149 149 if (flush) 150 - wacom_wac_queue_flush(hdev, &wacom_wac->pen_fifo); 150 + wacom_wac_queue_flush(hdev, wacom_wac->pen_fifo); 151 151 else if (insert) 152 - wacom_wac_queue_insert(hdev, &wacom_wac->pen_fifo, 152 + wacom_wac_queue_insert(hdev, wacom_wac->pen_fifo, 153 153 raw_data, report_size); 154 154 155 155 return insert && !flush; ··· 1280 1280 static int wacom_devm_kfifo_alloc(struct wacom *wacom) 1281 1281 { 1282 1282 struct wacom_wac *wacom_wac = &wacom->wacom_wac; 1283 - struct kfifo_rec_ptr_2 *pen_fifo = &wacom_wac->pen_fifo; 1283 + struct kfifo_rec_ptr_2 *pen_fifo; 1284 1284 int error; 1285 1285 1286 1286 pen_fifo = devres_alloc(wacom_devm_kfifo_release, ··· 1297 1297 } 1298 1298 1299 1299 devres_add(&wacom->hdev->dev, pen_fifo); 1300 + wacom_wac->pen_fifo = pen_fifo; 1300 1301 1301 1302 return 0; 1302 1303 }
+1 -1
drivers/hid/wacom_wac.h
··· 342 342 struct input_dev *pen_input; 343 343 struct input_dev *touch_input; 344 344 struct input_dev *pad_input; 345 - struct kfifo_rec_ptr_2 pen_fifo; 345 + struct kfifo_rec_ptr_2 *pen_fifo; 346 346 int pid; 347 347 int num_contacts_left; 348 348 u8 bt_features;
+5
drivers/hwtracing/intel_th/pci.c
··· 269 269 .driver_data = (kernel_ulong_t)&intel_th_2x, 270 270 }, 271 271 { 272 + /* Alder Lake-P */ 273 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x51a6), 274 + .driver_data = (kernel_ulong_t)&intel_th_2x, 275 + }, 276 + { 272 277 /* Alder Lake CPU */ 273 278 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f), 274 279 .driver_data = (kernel_ulong_t)&intel_th_2x,
+4 -2
drivers/hwtracing/stm/heartbeat.c
··· 64 64 65 65 static int stm_heartbeat_init(void) 66 66 { 67 - int i, ret = -ENOMEM; 67 + int i, ret; 68 68 69 69 if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX) 70 70 return -EINVAL; ··· 72 72 for (i = 0; i < nr_devs; i++) { 73 73 stm_heartbeat[i].data.name = 74 74 kasprintf(GFP_KERNEL, "heartbeat.%d", i); 75 - if (!stm_heartbeat[i].data.name) 75 + if (!stm_heartbeat[i].data.name) { 76 + ret = -ENOMEM; 76 77 goto fail_unregister; 78 + } 77 79 78 80 stm_heartbeat[i].data.nr_chans = 1; 79 81 stm_heartbeat[i].data.link = stm_heartbeat_link;
+1
drivers/i2c/busses/Kconfig
··· 1013 1013 config I2C_SPRD 1014 1014 tristate "Spreadtrum I2C interface" 1015 1015 depends on I2C=y && (ARCH_SPRD || COMPILE_TEST) 1016 + depends on COMMON_CLK 1016 1017 help 1017 1018 If you say yes to this option, support will be included for the 1018 1019 Spreadtrum I2C interface.
+19 -1
drivers/i2c/busses/i2c-imx.c
··· 241 241 242 242 }; 243 243 244 + static const struct platform_device_id imx_i2c_devtype[] = { 245 + { 246 + .name = "imx1-i2c", 247 + .driver_data = (kernel_ulong_t)&imx1_i2c_hwdata, 248 + }, { 249 + .name = "imx21-i2c", 250 + .driver_data = (kernel_ulong_t)&imx21_i2c_hwdata, 251 + }, { 252 + /* sentinel */ 253 + } 254 + }; 255 + MODULE_DEVICE_TABLE(platform, imx_i2c_devtype); 256 + 244 257 static const struct of_device_id i2c_imx_dt_ids[] = { 245 258 { .compatible = "fsl,imx1-i2c", .data = &imx1_i2c_hwdata, }, 246 259 { .compatible = "fsl,imx21-i2c", .data = &imx21_i2c_hwdata, }, ··· 1343 1330 return -ENOMEM; 1344 1331 1345 1332 match = device_get_match_data(&pdev->dev); 1346 - i2c_imx->hwdata = match; 1333 + if (match) 1334 + i2c_imx->hwdata = match; 1335 + else 1336 + i2c_imx->hwdata = (struct imx_i2c_hwdata *) 1337 + platform_get_device_id(pdev)->driver_data; 1347 1338 1348 1339 /* Setup i2c_imx driver structure */ 1349 1340 strlcpy(i2c_imx->adapter.name, pdev->name, sizeof(i2c_imx->adapter.name)); ··· 1515 1498 .of_match_table = i2c_imx_dt_ids, 1516 1499 .acpi_match_table = i2c_imx_acpi_ids, 1517 1500 }, 1501 + .id_table = imx_i2c_devtype, 1518 1502 }; 1519 1503 1520 1504 static int __init i2c_adap_imx_init(void)
+1 -1
drivers/i2c/busses/i2c-octeon-core.c
··· 347 347 if (result) 348 348 return result; 349 349 if (recv_len && i == 0) { 350 - if (data[i] > I2C_SMBUS_BLOCK_MAX + 1) 350 + if (data[i] > I2C_SMBUS_BLOCK_MAX) 351 351 return -EPROTO; 352 352 length += data[i]; 353 353 }
+1 -1
drivers/i2c/busses/i2c-tegra-bpmp.c
··· 80 80 flags &= ~I2C_M_RECV_LEN; 81 81 } 82 82 83 - return (flags != 0) ? -EINVAL : 0; 83 + return 0; 84 84 } 85 85 86 86 /**
+22 -2
drivers/i2c/busses/i2c-tegra.c
··· 326 326 /* read back register to make sure that register writes completed */ 327 327 if (reg != I2C_TX_FIFO) 328 328 readl_relaxed(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg)); 329 + else if (i2c_dev->is_vi) 330 + readl_relaxed(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, I2C_INT_STATUS)); 329 331 } 330 332 331 333 static u32 i2c_readl(struct tegra_i2c_dev *i2c_dev, unsigned int reg) ··· 339 337 unsigned int reg, unsigned int len) 340 338 { 341 339 writesl(i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg), data, len); 340 + } 341 + 342 + static void i2c_writesl_vi(struct tegra_i2c_dev *i2c_dev, void *data, 343 + unsigned int reg, unsigned int len) 344 + { 345 + u32 *data32 = data; 346 + 347 + /* 348 + * VI I2C controller has known hardware bug where writes get stuck 349 + * when immediate multiple writes happen to TX_FIFO register. 350 + * Recommended software work around is to read I2C register after 351 + * each write to TX_FIFO register to flush out the data. 352 + */ 353 + while (len--) 354 + i2c_writel(i2c_dev, *data32++, reg); 342 355 } 343 356 344 357 static void i2c_readsl(struct tegra_i2c_dev *i2c_dev, void *data, ··· 550 533 void __iomem *addr = i2c_dev->base + tegra_i2c_reg_addr(i2c_dev, reg); 551 534 u32 val; 552 535 553 - if (!i2c_dev->atomic_mode) 536 + if (!i2c_dev->atomic_mode && !in_irq()) 554 537 return readl_relaxed_poll_timeout(addr, val, !(val & mask), 555 538 delay_us, timeout_us); 556 539 ··· 828 811 i2c_dev->msg_buf_remaining = buf_remaining; 829 812 i2c_dev->msg_buf = buf + words_to_transfer * BYTES_PER_FIFO_WORD; 830 813 831 - i2c_writesl(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer); 814 + if (i2c_dev->is_vi) 815 + i2c_writesl_vi(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer); 816 + else 817 + i2c_writesl(i2c_dev, buf, I2C_TX_FIFO, words_to_transfer); 832 818 833 819 buf += words_to_transfer * BYTES_PER_FIFO_WORD; 834 820 }
+1 -5
drivers/iio/adc/ti_am335x_adc.c
··· 397 397 ret = devm_request_threaded_irq(dev, irq, pollfunc_th, pollfunc_bh, 398 398 flags, indio_dev->name, indio_dev); 399 399 if (ret) 400 - goto error_kfifo_free; 400 + return ret; 401 401 402 402 indio_dev->setup_ops = setup_ops; 403 403 indio_dev->modes |= INDIO_BUFFER_SOFTWARE; 404 404 405 405 return 0; 406 - 407 - error_kfifo_free: 408 - iio_kfifo_free(indio_dev->buffer); 409 - return ret; 410 406 } 411 407 412 408 static const char * const chan_name_ain[] = {
+17 -14
drivers/iio/common/st_sensors/st_sensors_trigger.c
··· 23 23 * @sdata: Sensor data. 24 24 * 25 25 * returns: 26 - * 0 - no new samples available 27 - * 1 - new samples available 28 - * negative - error or unknown 26 + * false - no new samples available or read error 27 + * true - new samples available 29 28 */ 30 - static int st_sensors_new_samples_available(struct iio_dev *indio_dev, 31 - struct st_sensor_data *sdata) 29 + static bool st_sensors_new_samples_available(struct iio_dev *indio_dev, 30 + struct st_sensor_data *sdata) 32 31 { 33 32 int ret, status; 34 33 35 34 /* How would I know if I can't check it? */ 36 35 if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) 37 - return -EINVAL; 36 + return true; 38 37 39 38 /* No scan mask, no interrupt */ 40 39 if (!indio_dev->active_scan_mask) 41 - return 0; 40 + return false; 42 41 43 42 ret = regmap_read(sdata->regmap, 44 43 sdata->sensor_settings->drdy_irq.stat_drdy.addr, 45 44 &status); 46 45 if (ret < 0) { 47 46 dev_err(sdata->dev, "error checking samples available\n"); 48 - return ret; 47 + return false; 49 48 } 50 49 51 - if (status & sdata->sensor_settings->drdy_irq.stat_drdy.mask) 52 - return 1; 53 - 54 - return 0; 50 + return !!(status & sdata->sensor_settings->drdy_irq.stat_drdy.mask); 55 51 } 56 52 57 53 /** ··· 176 180 177 181 /* Tell the interrupt handler that we're dealing with edges */ 178 182 if (irq_trig == IRQF_TRIGGER_FALLING || 179 - irq_trig == IRQF_TRIGGER_RISING) 183 + irq_trig == IRQF_TRIGGER_RISING) { 184 + if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) { 185 + dev_err(&indio_dev->dev, 186 + "edge IRQ not supported w/o stat register.\n"); 187 + err = -EOPNOTSUPP; 188 + goto iio_trigger_free; 189 + } 180 190 sdata->edge_irq = true; 181 - else 191 + } else { 182 192 /* 183 193 * If we're not using edges (i.e. level interrupts) we 184 194 * just mask off the IRQ, handle one interrupt, then ··· 192 190 * interrupt handler top half again and start over. 
193 191 */ 194 192 irq_trig |= IRQF_ONESHOT; 193 + } 195 194 196 195 /* 197 196 * If the interrupt pin is Open Drain, by definition this
+2 -2
drivers/iio/dac/ad5504.c
··· 187 187 return ret; 188 188 189 189 if (pwr_down) 190 - st->pwr_down_mask |= (1 << chan->channel); 191 - else 192 190 st->pwr_down_mask &= ~(1 << chan->channel); 191 + else 192 + st->pwr_down_mask |= (1 << chan->channel); 193 193 194 194 ret = ad5504_spi_write(st, AD5504_ADDR_CTRL, 195 195 AD5504_DAC_PWRDWN_MODE(st->pwr_down_mode) |
+3 -2
drivers/iio/proximity/sx9310.c
··· 601 601 return ret; 602 602 603 603 regval = FIELD_GET(SX9310_REG_PROX_CTRL8_9_PTHRESH_MASK, regval); 604 - if (regval > ARRAY_SIZE(sx9310_pthresh_codes)) 604 + if (regval >= ARRAY_SIZE(sx9310_pthresh_codes)) 605 605 return -EINVAL; 606 606 607 607 *val = sx9310_pthresh_codes[regval]; ··· 1305 1305 if (ret) 1306 1306 break; 1307 1307 1308 - pos = min(max(ilog2(pos), 3), 10) - 3; 1308 + /* Powers of 2, except for a gap between 16 and 64 */ 1309 + pos = clamp(ilog2(pos), 3, 11) - (pos >= 32 ? 4 : 3); 1309 1310 reg_def->def &= ~SX9310_REG_PROX_CTRL7_AVGPOSFILT_MASK; 1310 1311 reg_def->def |= FIELD_PREP(SX9310_REG_PROX_CTRL7_AVGPOSFILT_MASK, 1311 1312 pos);
+6
drivers/iio/temperature/mlx90632.c
··· 248 248 if (ret < 0) 249 249 return ret; 250 250 251 + /* 252 + * Give the mlx90632 some time to reset properly before sending a new I2C command 253 + * if this is not done, the following I2C command(s) will not be accepted. 254 + */ 255 + usleep_range(150, 200); 256 + 251 257 ret = regmap_write_bits(regmap, MLX90632_REG_CONTROL, 252 258 (MLX90632_CFG_MTYP_MASK | MLX90632_CFG_PWR_MASK), 253 259 (MLX90632_MTYP_STATUS(type) | MLX90632_PWR_STATUS_HALT));
+1 -1
drivers/infiniband/hw/cxgb4/qp.c
··· 2474 2474 init_attr->cap.max_send_wr = qhp->attr.sq_num_entries; 2475 2475 init_attr->cap.max_recv_wr = qhp->attr.rq_num_entries; 2476 2476 init_attr->cap.max_send_sge = qhp->attr.sq_max_sges; 2477 - init_attr->cap.max_recv_sge = qhp->attr.sq_max_sges; 2477 + init_attr->cap.max_recv_sge = qhp->attr.rq_max_sges; 2478 2478 init_attr->cap.max_inline_data = T4_MAX_SEND_INLINE; 2479 2479 init_attr->sq_sig_type = qhp->sq_sig_all ? IB_SIGNAL_ALL_WR : 0; 2480 2480 return 0;
+1 -1
drivers/infiniband/hw/hns/hns_roce_device.h
··· 532 532 struct hns_roce_hem_table sccc_table; 533 533 struct mutex scc_mutex; 534 534 struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM]; 535 - spinlock_t bank_lock; 535 + struct mutex bank_mutex; 536 536 }; 537 537 538 538 struct hns_roce_cq_table {
+6 -5
drivers/infiniband/hw/hns/hns_roce_qp.c
··· 209 209 210 210 hr_qp->doorbell_qpn = 1; 211 211 } else { 212 - spin_lock(&qp_table->bank_lock); 212 + mutex_lock(&qp_table->bank_mutex); 213 213 bankid = get_least_load_bankid_for_qp(qp_table->bank); 214 214 215 215 ret = alloc_qpn_with_bankid(&qp_table->bank[bankid], bankid, ··· 217 217 if (ret) { 218 218 ibdev_err(&hr_dev->ib_dev, 219 219 "failed to alloc QPN, ret = %d\n", ret); 220 - spin_unlock(&qp_table->bank_lock); 220 + mutex_unlock(&qp_table->bank_mutex); 221 221 return ret; 222 222 } 223 223 224 224 qp_table->bank[bankid].inuse++; 225 - spin_unlock(&qp_table->bank_lock); 225 + mutex_unlock(&qp_table->bank_mutex); 226 226 227 227 hr_qp->doorbell_qpn = (u32)num; 228 228 } ··· 408 408 409 409 ida_free(&hr_dev->qp_table.bank[bankid].ida, hr_qp->qpn >> 3); 410 410 411 - spin_lock(&hr_dev->qp_table.bank_lock); 411 + mutex_lock(&hr_dev->qp_table.bank_mutex); 412 412 hr_dev->qp_table.bank[bankid].inuse--; 413 - spin_unlock(&hr_dev->qp_table.bank_lock); 413 + mutex_unlock(&hr_dev->qp_table.bank_mutex); 414 414 } 415 415 416 416 static int set_rq_size(struct hns_roce_dev *hr_dev, struct ib_qp_cap *cap, ··· 1371 1371 unsigned int i; 1372 1372 1373 1373 mutex_init(&qp_table->scc_mutex); 1374 + mutex_init(&qp_table->bank_mutex); 1374 1375 xa_init(&hr_dev->qp_table_xa); 1375 1376 1376 1377 reserved_from_bot = hr_dev->caps.reserved_qps;
+2 -4
drivers/infiniband/hw/mlx5/main.c
··· 3311 3311 int err; 3312 3312 3313 3313 dev->port[port_num].roce.nb.notifier_call = mlx5_netdev_event; 3314 - err = register_netdevice_notifier_net(mlx5_core_net(dev->mdev), 3315 - &dev->port[port_num].roce.nb); 3314 + err = register_netdevice_notifier(&dev->port[port_num].roce.nb); 3316 3315 if (err) { 3317 3316 dev->port[port_num].roce.nb.notifier_call = NULL; 3318 3317 return err; ··· 3323 3324 static void mlx5_remove_netdev_notifier(struct mlx5_ib_dev *dev, u8 port_num) 3324 3325 { 3325 3326 if (dev->port[port_num].roce.nb.notifier_call) { 3326 - unregister_netdevice_notifier_net(mlx5_core_net(dev->mdev), 3327 - &dev->port[port_num].roce.nb); 3327 + unregister_netdevice_notifier(&dev->port[port_num].roce.nb); 3328 3328 dev->port[port_num].roce.nb.notifier_call = NULL; 3329 3329 } 3330 3330 }
+3 -4
drivers/infiniband/hw/usnic/usnic_ib_sysfs.c
··· 214 214 struct usnic_vnic_res *vnic_res; 215 215 int len; 216 216 217 - len = sysfs_emit(buf, "QPN: %d State: (%s) PID: %u VF Idx: %hu ", 217 + len = sysfs_emit(buf, "QPN: %d State: (%s) PID: %u VF Idx: %hu", 218 218 qp_grp->ibqp.qp_num, 219 219 usnic_ib_qp_grp_state_to_string(qp_grp->state), 220 220 qp_grp->owner_pid, ··· 224 224 res_chunk = qp_grp->res_chunk_list[i]; 225 225 for (j = 0; j < res_chunk->cnt; j++) { 226 226 vnic_res = res_chunk->res[j]; 227 - len += sysfs_emit_at( 228 - buf, len, "%s[%d] ", 227 + len += sysfs_emit_at(buf, len, " %s[%d]", 229 228 usnic_vnic_res_type_to_str(vnic_res->type), 230 229 vnic_res->vnic_idx); 231 230 } 232 231 } 233 232 234 - len = sysfs_emit_at(buf, len, "\n"); 233 + len += sysfs_emit_at(buf, len, "\n"); 235 234 236 235 return len; 237 236 }
+14
drivers/infiniband/hw/vmw_pvrdma/pvrdma.h
··· 509 509 return flags & PVRDMA_MASK(PVRDMA_SEND_FLAGS_MAX); 510 510 } 511 511 512 + static inline int pvrdma_network_type_to_ib(enum pvrdma_network_type type) 513 + { 514 + switch (type) { 515 + case PVRDMA_NETWORK_ROCE_V1: 516 + return RDMA_NETWORK_ROCE_V1; 517 + case PVRDMA_NETWORK_IPV4: 518 + return RDMA_NETWORK_IPV4; 519 + case PVRDMA_NETWORK_IPV6: 520 + return RDMA_NETWORK_IPV6; 521 + default: 522 + return RDMA_NETWORK_IPV6; 523 + } 524 + } 525 + 512 526 void pvrdma_qp_cap_to_ib(struct ib_qp_cap *dst, 513 527 const struct pvrdma_qp_cap *src); 514 528 void ib_qp_cap_to_pvrdma(struct pvrdma_qp_cap *dst,
+1 -1
drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
··· 367 367 wc->dlid_path_bits = cqe->dlid_path_bits; 368 368 wc->port_num = cqe->port_num; 369 369 wc->vendor_err = cqe->vendor_err; 370 - wc->network_hdr_type = cqe->network_hdr_type; 370 + wc->network_hdr_type = pvrdma_network_type_to_ib(cqe->network_hdr_type); 371 371 372 372 /* Update shared ring state */ 373 373 pvrdma_idx_ring_inc(&cq->ring_state->rx.cons_head, cq->ibcq.cqe);
+6
drivers/infiniband/sw/rxe/rxe_net.c
··· 8 8 #include <linux/if_arp.h> 9 9 #include <linux/netdevice.h> 10 10 #include <linux/if.h> 11 + #include <linux/if_vlan.h> 11 12 #include <net/udp_tunnel.h> 12 13 #include <net/sch_generic.h> 13 14 #include <linux/netfilter.h> ··· 154 153 { 155 154 struct udphdr *udph; 156 155 struct net_device *ndev = skb->dev; 156 + struct net_device *rdev = ndev; 157 157 struct rxe_dev *rxe = rxe_get_dev_from_net(ndev); 158 158 struct rxe_pkt_info *pkt = SKB_TO_PKT(skb); 159 159 160 + if (!rxe && is_vlan_dev(rdev)) { 161 + rdev = vlan_dev_real_dev(ndev); 162 + rxe = rxe_get_dev_from_net(rdev); 163 + } 160 164 if (!rxe) 161 165 goto drop; 162 166
+5
drivers/infiniband/sw/rxe/rxe_resp.c
··· 872 872 else 873 873 wc->network_hdr_type = RDMA_NETWORK_IPV6; 874 874 875 + if (is_vlan_dev(skb->dev)) { 876 + wc->wc_flags |= IB_WC_WITH_VLAN; 877 + wc->vlan_id = vlan_dev_vlan_id(skb->dev); 878 + } 879 + 875 880 if (pkt->mask & RXE_IMMDT_MASK) { 876 881 wc->wc_flags |= IB_WC_WITH_IMM; 877 882 wc->ex.imm_data = immdt_imm(pkt);
+3 -2
drivers/irqchip/Kconfig
··· 493 493 TI System Controller, say Y here. Otherwise, say N. 494 494 495 495 config TI_PRUSS_INTC 496 - tristate "TI PRU-ICSS Interrupt Controller" 497 - depends on ARCH_DAVINCI || SOC_AM33XX || SOC_AM43XX || SOC_DRA7XX || ARCH_KEYSTONE || ARCH_K3 496 + tristate 497 + depends on TI_PRUSS 498 + default TI_PRUSS 498 499 select IRQ_DOMAIN 499 500 help 500 501 This enables support for the PRU-ICSS Local Interrupt Controller
+2 -2
drivers/irqchip/irq-bcm2836.c
··· 167 167 chained_irq_exit(chip, desc); 168 168 } 169 169 170 - static void bcm2836_arm_irqchip_ipi_eoi(struct irq_data *d) 170 + static void bcm2836_arm_irqchip_ipi_ack(struct irq_data *d) 171 171 { 172 172 int cpu = smp_processor_id(); 173 173 ··· 195 195 .name = "IPI", 196 196 .irq_mask = bcm2836_arm_irqchip_dummy_op, 197 197 .irq_unmask = bcm2836_arm_irqchip_dummy_op, 198 - .irq_eoi = bcm2836_arm_irqchip_ipi_eoi, 198 + .irq_ack = bcm2836_arm_irqchip_ipi_ack, 199 199 .ipi_send_mask = bcm2836_arm_irqchip_ipi_send_mask, 200 200 }; 201 201
+2 -2
drivers/irqchip/irq-loongson-liointc.c
··· 142 142 143 143 static const char * const parent_names[] = {"int0", "int1", "int2", "int3"}; 144 144 145 - int __init liointc_of_init(struct device_node *node, 146 - struct device_node *parent) 145 + static int __init liointc_of_init(struct device_node *node, 146 + struct device_node *parent) 147 147 { 148 148 struct irq_chip_generic *gc; 149 149 struct irq_domain *domain;
+7
drivers/irqchip/irq-mips-cpu.c
··· 197 197 if (ret) 198 198 return ret; 199 199 200 + ret = irq_domain_set_hwirq_and_chip(domain->parent, virq + i, hwirq, 201 + &mips_mt_cpu_irq_controller, 202 + NULL); 203 + 204 + if (ret) 205 + return ret; 206 + 200 207 ret = irq_set_irq_type(virq + i, IRQ_TYPE_LEVEL_HIGH); 201 208 if (ret) 202 209 return ret;
+1 -1
drivers/irqchip/irq-sl28cpld.c
··· 66 66 irqchip->chip.num_regs = 1; 67 67 irqchip->chip.status_base = base + INTC_IP; 68 68 irqchip->chip.mask_base = base + INTC_IE; 69 - irqchip->chip.mask_invert = true, 69 + irqchip->chip.mask_invert = true; 70 70 irqchip->chip.ack_base = base + INTC_IP; 71 71 72 72 return devm_regmap_add_irq_chip_fwnode(dev, dev_fwnode(dev),
+1 -2
drivers/lightnvm/core.c
··· 844 844 rqd.ppa_addr = generic_to_dev_addr(dev, ppa); 845 845 846 846 ret = nvm_submit_io_sync_raw(dev, &rqd); 847 + __free_page(page); 847 848 if (ret) 848 849 return ret; 849 - 850 - __free_page(page); 851 850 852 851 return rqd.error; 853 852 }
+3 -3
drivers/md/dm-crypt.c
··· 1481 1481 static int crypt_alloc_req_aead(struct crypt_config *cc, 1482 1482 struct convert_context *ctx) 1483 1483 { 1484 - if (!ctx->r.req) { 1485 - ctx->r.req = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO); 1486 - if (!ctx->r.req) 1484 + if (!ctx->r.req_aead) { 1485 + ctx->r.req_aead = mempool_alloc(&cc->req_pool, in_interrupt() ? GFP_ATOMIC : GFP_NOIO); 1486 + if (!ctx->r.req_aead) 1487 1487 return -ENOMEM; 1488 1488 } 1489 1489
+30 -2
drivers/md/dm-integrity.c
··· 257 257 bool journal_uptodate; 258 258 bool just_formatted; 259 259 bool recalculate_flag; 260 - bool fix_padding; 261 260 bool discard; 261 + bool fix_padding; 262 + bool legacy_recalculate; 262 263 263 264 struct alg_spec internal_hash_alg; 264 265 struct alg_spec journal_crypt_alg; ··· 385 384 static int dm_integrity_failed(struct dm_integrity_c *ic) 386 385 { 387 386 return READ_ONCE(ic->failed); 387 + } 388 + 389 + static bool dm_integrity_disable_recalculate(struct dm_integrity_c *ic) 390 + { 391 + if ((ic->internal_hash_alg.key || ic->journal_mac_alg.key) && 392 + !ic->legacy_recalculate) 393 + return true; 394 + return false; 388 395 } 389 396 390 397 static commit_id_t dm_integrity_commit_id(struct dm_integrity_c *ic, unsigned i, ··· 3149 3140 arg_count += !!ic->journal_crypt_alg.alg_string; 3150 3141 arg_count += !!ic->journal_mac_alg.alg_string; 3151 3142 arg_count += (ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0; 3143 + arg_count += ic->legacy_recalculate; 3152 3144 DMEMIT("%s %llu %u %c %u", ic->dev->name, ic->start, 3153 3145 ic->tag_size, ic->mode, arg_count); 3154 3146 if (ic->meta_dev) ··· 3173 3163 } 3174 3164 if ((ic->sb->flags & cpu_to_le32(SB_FLAG_FIXED_PADDING)) != 0) 3175 3165 DMEMIT(" fix_padding"); 3166 + if (ic->legacy_recalculate) 3167 + DMEMIT(" legacy_recalculate"); 3176 3168 3177 3169 #define EMIT_ALG(a, n) \ 3178 3170 do { \ ··· 3804 3792 unsigned extra_args; 3805 3793 struct dm_arg_set as; 3806 3794 static const struct dm_arg _args[] = { 3807 - {0, 15, "Invalid number of feature args"}, 3795 + {0, 16, "Invalid number of feature args"}, 3808 3796 }; 3809 3797 unsigned journal_sectors, interleave_sectors, buffer_sectors, journal_watermark, sync_msec; 3810 3798 bool should_write_sb; ··· 3952 3940 ic->discard = true; 3953 3941 } else if (!strcmp(opt_string, "fix_padding")) { 3954 3942 ic->fix_padding = true; 3943 + } else if (!strcmp(opt_string, "legacy_recalculate")) { 3944 + ic->legacy_recalculate = true; 3955 3945 } 
else { 3956 3946 r = -EINVAL; 3957 3947 ti->error = "Invalid argument"; ··· 4249 4235 r = -ENOMEM; 4250 4236 goto bad; 4251 4237 } 4238 + } else { 4239 + if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING)) { 4240 + ti->error = "Recalculate can only be specified with internal_hash"; 4241 + r = -EINVAL; 4242 + goto bad; 4243 + } 4244 + } 4245 + 4246 + if (ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING) && 4247 + le64_to_cpu(ic->sb->recalc_sector) < ic->provided_data_sectors && 4248 + dm_integrity_disable_recalculate(ic)) { 4249 + ti->error = "Recalculating with HMAC is disabled for security reasons - if you really need it, use the argument \"legacy_recalculate\""; 4250 + r = -EOPNOTSUPP; 4251 + goto bad; 4252 4252 } 4253 4253 4254 4254 ic->bufio = dm_bufio_client_create(ic->meta_dev ? ic->meta_dev->bdev : ic->dev->bdev,
+12 -3
drivers/md/dm-table.c
··· 363 363 { 364 364 int r; 365 365 dev_t dev; 366 + unsigned int major, minor; 367 + char dummy; 366 368 struct dm_dev_internal *dd; 367 369 struct dm_table *t = ti->table; 368 370 369 371 BUG_ON(!t); 370 372 371 - dev = dm_get_dev_t(path); 372 - if (!dev) 373 - return -ENODEV; 373 + if (sscanf(path, "%u:%u%c", &major, &minor, &dummy) == 2) { 374 + /* Extract the major/minor numbers */ 375 + dev = MKDEV(major, minor); 376 + if (MAJOR(dev) != major || MINOR(dev) != minor) 377 + return -EOVERFLOW; 378 + } else { 379 + dev = dm_get_dev_t(path); 380 + if (!dev) 381 + return -ENODEV; 382 + } 374 383 375 384 dd = find_device(&t->devices, dev); 376 385 if (!dd) {
+2
drivers/md/md.c
··· 639 639 * could wait for this and below md_handle_request could wait for those 640 640 * bios because of suspend check 641 641 */ 642 + spin_lock_irq(&mddev->lock); 642 643 mddev->prev_flush_start = mddev->start_flush; 643 644 mddev->flush_bio = NULL; 645 + spin_unlock_irq(&mddev->lock); 644 646 wake_up(&mddev->sb_wait); 645 647 646 648 if (bio->bi_iter.bi_size == 0) {
+1
drivers/media/cec/platform/Makefile
··· 10 10 obj-$(CONFIG_CEC_SAMSUNG_S5P) += s5p/ 11 11 obj-$(CONFIG_CEC_SECO) += seco/ 12 12 obj-$(CONFIG_CEC_STI) += sti/ 13 + obj-$(CONFIG_CEC_STM32) += stm32/ 13 14 obj-$(CONFIG_CEC_TEGRA) += tegra/ 14 15
+1 -2
drivers/media/common/videobuf2/videobuf2-v4l2.c
··· 118 118 return -EINVAL; 119 119 } 120 120 } else { 121 - length = (b->memory == VB2_MEMORY_USERPTR || 122 - b->memory == VB2_MEMORY_DMABUF) 121 + length = (b->memory == VB2_MEMORY_USERPTR) 123 122 ? b->length : vb->planes[0].length; 124 123 125 124 if (b->bytesused > length)
+1 -7
drivers/media/i2c/ccs-pll.c
··· 772 772 773 773 switch (pll->bus_type) { 774 774 case CCS_PLL_BUS_TYPE_CSI2_DPHY: 775 - /* CSI transfers 2 bits per clock per lane; thus times 2 */ 776 - op_sys_clk_freq_hz_sdr = pll->link_freq * 2 777 - * (pll->flags & CCS_PLL_FLAG_LANE_SPEED_MODEL ? 778 - 1 : pll->csi2.lanes); 779 - break; 780 775 case CCS_PLL_BUS_TYPE_CSI2_CPHY: 781 - op_sys_clk_freq_hz_sdr = 782 - pll->link_freq 776 + op_sys_clk_freq_hz_sdr = pll->link_freq * 2 783 777 * (pll->flags & CCS_PLL_FLAG_LANE_SPEED_MODEL ? 784 778 1 : pll->csi2.lanes); 785 779 break;
+1 -1
drivers/media/i2c/ccs/ccs-data.c
··· 152 152 vv->version_major = ((u16)v->static_data_version_major[0] << 8) + 153 153 v->static_data_version_major[1]; 154 154 vv->version_minor = ((u16)v->static_data_version_minor[0] << 8) + 155 - v->static_data_version_major[1]; 155 + v->static_data_version_minor[1]; 156 156 vv->date_year = ((u16)v->year[0] << 8) + v->year[1]; 157 157 vv->date_month = v->month; 158 158 vv->date_day = v->day;
+1 -1
drivers/media/pci/intel/ipu3/ipu3-cio2.c
··· 302 302 if (!q->sensor) 303 303 return -ENODEV; 304 304 305 - freq = v4l2_get_link_rate(q->sensor->ctrl_handler, bpp, lanes); 305 + freq = v4l2_get_link_freq(q->sensor->ctrl_handler, bpp, lanes); 306 306 if (freq < 0) { 307 307 dev_err(dev, "error %lld, invalid link_freq\n", freq); 308 308 return freq;
+2
drivers/media/platform/qcom/venus/core.c
··· 349 349 { 350 350 struct venus_core *core = platform_get_drvdata(pdev); 351 351 352 + pm_runtime_get_sync(core->dev); 352 353 venus_shutdown(core); 353 354 venus_firmware_deinit(core); 355 + pm_runtime_put_sync(core->dev); 354 356 } 355 357 356 358 static __maybe_unused int venus_runtime_suspend(struct device *dev)
+1 -1
drivers/media/platform/rcar-vin/rcar-core.c
··· 654 654 out: 655 655 fwnode_handle_put(fwnode); 656 656 657 - return 0; 657 + return ret; 658 658 } 659 659 660 660 static int rvin_parallel_init(struct rvin_dev *vin)
+1 -1
drivers/media/rc/ir-mce_kbd-decoder.c
··· 320 320 data->body); 321 321 spin_lock(&data->keylock); 322 322 if (scancode) { 323 - delay = nsecs_to_jiffies(dev->timeout) + 323 + delay = usecs_to_jiffies(dev->timeout) + 324 324 msecs_to_jiffies(100); 325 325 mod_timer(&data->rx_timeout, jiffies + delay); 326 326 } else {
+1 -1
drivers/media/rc/ite-cir.c
··· 1551 1551 rdev->s_rx_carrier_range = ite_set_rx_carrier_range; 1552 1552 /* FIFO threshold is 17 bytes, so 17 * 8 samples minimum */ 1553 1553 rdev->min_timeout = 17 * 8 * ITE_BAUDRATE_DIVISOR * 1554 - itdev->params.sample_period; 1554 + itdev->params.sample_period / 1000; 1555 1555 rdev->timeout = IR_DEFAULT_TIMEOUT; 1556 1556 rdev->max_timeout = 10 * IR_DEFAULT_TIMEOUT; 1557 1557 rdev->rx_resolution = ITE_BAUDRATE_DIVISOR *
+4 -4
drivers/media/rc/rc-main.c
··· 737 737 void rc_repeat(struct rc_dev *dev) 738 738 { 739 739 unsigned long flags; 740 - unsigned int timeout = nsecs_to_jiffies(dev->timeout) + 740 + unsigned int timeout = usecs_to_jiffies(dev->timeout) + 741 741 msecs_to_jiffies(repeat_period(dev->last_protocol)); 742 742 struct lirc_scancode sc = { 743 743 .scancode = dev->last_scancode, .rc_proto = dev->last_protocol, ··· 855 855 ir_do_keydown(dev, protocol, scancode, keycode, toggle); 856 856 857 857 if (dev->keypressed) { 858 - dev->keyup_jiffies = jiffies + nsecs_to_jiffies(dev->timeout) + 858 + dev->keyup_jiffies = jiffies + usecs_to_jiffies(dev->timeout) + 859 859 msecs_to_jiffies(repeat_period(protocol)); 860 860 mod_timer(&dev->timer_keyup, dev->keyup_jiffies); 861 861 } ··· 1928 1928 goto out_raw; 1929 1929 } 1930 1930 1931 + dev->registered = true; 1932 + 1931 1933 rc = device_add(&dev->dev); 1932 1934 if (rc) 1933 1935 goto out_rx_free; ··· 1938 1936 dev_info(&dev->dev, "%s as %s\n", 1939 1937 dev->device_name ?: "Unspecified device", path ?: "N/A"); 1940 1938 kfree(path); 1941 - 1942 - dev->registered = true; 1943 1939 1944 1940 /* 1945 1941 * once the the input device is registered in rc_setup_rx_device,
+1 -1
drivers/media/rc/serial_ir.c
··· 385 385 } while (!(sinp(UART_IIR) & UART_IIR_NO_INT)); /* still pending ? */ 386 386 387 387 mod_timer(&serial_ir.timeout_timer, 388 - jiffies + nsecs_to_jiffies(serial_ir.rcdev->timeout)); 388 + jiffies + usecs_to_jiffies(serial_ir.rcdev->timeout)); 389 389 390 390 ir_raw_event_handle(serial_ir.rcdev); 391 391
+2 -2
drivers/media/v4l2-core/v4l2-common.c
··· 442 442 } 443 443 EXPORT_SYMBOL_GPL(v4l2_fill_pixfmt); 444 444 445 - s64 v4l2_get_link_rate(struct v4l2_ctrl_handler *handler, unsigned int mul, 445 + s64 v4l2_get_link_freq(struct v4l2_ctrl_handler *handler, unsigned int mul, 446 446 unsigned int div) 447 447 { 448 448 struct v4l2_ctrl *ctrl; ··· 473 473 474 474 return freq > 0 ? freq : -EINVAL; 475 475 } 476 - EXPORT_SYMBOL_GPL(v4l2_get_link_rate); 476 + EXPORT_SYMBOL_GPL(v4l2_get_link_freq);
+6 -1
drivers/misc/cardreader/rtsx_pcr.c
··· 1512 1512 struct pcr_handle *handle; 1513 1513 u32 base, len; 1514 1514 int ret, i, bar = 0; 1515 + u8 val; 1515 1516 1516 1517 dev_dbg(&(pcidev->dev), 1517 1518 ": Realtek PCI-E Card Reader found at %s [%04x:%04x] (rev %x)\n", ··· 1578 1577 pcr->host_cmds_addr = pcr->rtsx_resv_buf_addr; 1579 1578 pcr->host_sg_tbl_ptr = pcr->rtsx_resv_buf + HOST_CMDS_BUF_LEN; 1580 1579 pcr->host_sg_tbl_addr = pcr->rtsx_resv_buf_addr + HOST_CMDS_BUF_LEN; 1581 - 1580 + rtsx_pci_read_register(pcr, ASPM_FORCE_CTL, &val); 1581 + if (val & FORCE_ASPM_CTL0 && val & FORCE_ASPM_CTL1) 1582 + pcr->aspm_enabled = false; 1583 + else 1584 + pcr->aspm_enabled = true; 1582 1585 pcr->card_inserted = 0; 1583 1586 pcr->card_removed = 0; 1584 1587 INIT_DELAYED_WORK(&pcr->carddet_work, rtsx_pci_card_detect);
+10 -1
drivers/misc/habanalabs/common/device.c
··· 1037 1037 1038 1038 if (hard_reset) { 1039 1039 /* Release kernel context */ 1040 - if (hl_ctx_put(hdev->kernel_ctx) == 1) 1040 + if (hdev->kernel_ctx && hl_ctx_put(hdev->kernel_ctx) == 1) 1041 1041 hdev->kernel_ctx = NULL; 1042 1042 hl_vm_fini(hdev); 1043 1043 hl_mmu_fini(hdev); ··· 1486 1486 return; 1487 1487 } 1488 1488 } 1489 + 1490 + /* Disable PCI access from device F/W so it won't send us additional 1491 + * interrupts. We disable MSI/MSI-X at the halt_engines function and we 1492 + * can't have the F/W sending us interrupts after that. We need to 1493 + * disable the access here because if the device is marked disable, the 1494 + * message won't be send. Also, in case of heartbeat, the device CPU is 1495 + * marked as disable so this message won't be sent 1496 + */ 1497 + hl_fw_send_pci_access_msg(hdev, CPUCP_PACKET_DISABLE_PCI_ACCESS); 1489 1498 1490 1499 /* Mark device as disabled */ 1491 1500 hdev->disabled = true;
+5
drivers/misc/habanalabs/common/firmware_if.c
··· 402 402 } 403 403 counters->rx_throughput = result; 404 404 405 + memset(&pkt, 0, sizeof(pkt)); 406 + pkt.ctl = cpu_to_le32(CPUCP_PACKET_PCIE_THROUGHPUT_GET << 407 + CPUCP_PKT_CTL_OPCODE_SHIFT); 408 + 405 409 /* Fetch PCI tx counter */ 406 410 pkt.index = cpu_to_le32(cpucp_pcie_throughput_tx); 407 411 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), ··· 418 414 counters->tx_throughput = result; 419 415 420 416 /* Fetch PCI replay counter */ 417 + memset(&pkt, 0, sizeof(pkt)); 421 418 pkt.ctl = cpu_to_le32(CPUCP_PACKET_PCIE_REPLAY_CNT_GET << 422 419 CPUCP_PKT_CTL_OPCODE_SHIFT); 423 420
+1
drivers/misc/habanalabs/common/habanalabs.h
··· 2182 2182 int hl_mmu_va_to_pa(struct hl_ctx *ctx, u64 virt_addr, u64 *phys_addr); 2183 2183 int hl_mmu_get_tlb_info(struct hl_ctx *ctx, u64 virt_addr, 2184 2184 struct hl_mmu_hop_info *hops); 2185 + bool hl_is_dram_va(struct hl_device *hdev, u64 virt_addr); 2185 2186 2186 2187 int hl_fw_load_fw_to_device(struct hl_device *hdev, const char *fw_name, 2187 2188 void __iomem *dst, u32 src_offset, u32 size);
+2
drivers/misc/habanalabs/common/habanalabs_ioctl.c
··· 133 133 134 134 hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev, 135 135 &hw_idle.busy_engines_mask_ext, NULL); 136 + hw_idle.busy_engines_mask = 137 + lower_32_bits(hw_idle.busy_engines_mask_ext); 136 138 137 139 return copy_to_user(out, &hw_idle, 138 140 min((size_t) max_size, sizeof(hw_idle))) ? -EFAULT : 0;
+8 -2
drivers/misc/habanalabs/common/memory.c
··· 886 886 { 887 887 struct hl_device *hdev = ctx->hdev; 888 888 u64 next_vaddr, i; 889 + bool is_host_addr; 889 890 u32 page_size; 890 891 892 + is_host_addr = !hl_is_dram_va(hdev, vaddr); 891 893 page_size = phys_pg_pack->page_size; 892 894 next_vaddr = vaddr; 893 895 ··· 902 900 /* 903 901 * unmapping on Palladium can be really long, so avoid a CPU 904 902 * soft lockup bug by sleeping a little between unmapping pages 903 + * 904 + * In addition, when unmapping host memory we pass through 905 + * the Linux kernel to unpin the pages and that takes a long 906 + * time. Therefore, sleep every 32K pages to avoid soft lockup 905 907 */ 906 - if (hdev->pldm) 907 - usleep_range(500, 1000); 908 + if (hdev->pldm || (is_host_addr && (i & 0x7FFF) == 0)) 909 + usleep_range(50, 200); 908 910 } 909 911 } 910 912
+3 -3
drivers/misc/habanalabs/common/mmu.c
··· 9 9 10 10 #include "habanalabs.h" 11 11 12 - static bool is_dram_va(struct hl_device *hdev, u64 virt_addr) 12 + bool hl_is_dram_va(struct hl_device *hdev, u64 virt_addr) 13 13 { 14 14 struct asic_fixed_properties *prop = &hdev->asic_prop; 15 15 ··· 156 156 if (!hdev->mmu_enable) 157 157 return 0; 158 158 159 - is_dram_addr = is_dram_va(hdev, virt_addr); 159 + is_dram_addr = hl_is_dram_va(hdev, virt_addr); 160 160 161 161 if (is_dram_addr) 162 162 mmu_prop = &prop->dmmu; ··· 236 236 if (!hdev->mmu_enable) 237 237 return 0; 238 238 239 - is_dram_addr = is_dram_va(hdev, virt_addr); 239 + is_dram_addr = hl_is_dram_va(hdev, virt_addr); 240 240 241 241 if (is_dram_addr) 242 242 mmu_prop = &prop->dmmu;
+10 -2
drivers/misc/habanalabs/common/mmu_v1.c
··· 467 467 { 468 468 /* MMU H/W fini was already done in device hw_fini() */ 469 469 470 - kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0); 471 - gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool); 470 + if (!ZERO_OR_NULL_PTR(hdev->mmu_priv.hr.mmu_shadow_hop0)) { 471 + kvfree(hdev->mmu_priv.dr.mmu_shadow_hop0); 472 + gen_pool_destroy(hdev->mmu_priv.dr.mmu_pgt_pool); 473 + } 474 + 475 + /* Make sure that if we arrive here again without init was called we 476 + * won't cause kernel panic. This can happen for example if we fail 477 + * during hard reset code at certain points 478 + */ 479 + hdev->mmu_priv.dr.mmu_shadow_hop0 = NULL; 472 480 } 473 481 474 482 /**
+2 -1
drivers/misc/habanalabs/gaudi/gaudi.c
··· 4002 4002 vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | 4003 4003 VM_DONTCOPY | VM_NORESERVE; 4004 4004 4005 - rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, dma_addr, size); 4005 + rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, 4006 + (dma_addr - HOST_PHYS_BASE), size); 4006 4007 if (rc) 4007 4008 dev_err(hdev->dev, "dma_mmap_coherent error %d", rc); 4008 4009
+2 -1
drivers/misc/habanalabs/goya/goya.c
··· 2719 2719 vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP | 2720 2720 VM_DONTCOPY | VM_NORESERVE; 2721 2721 2722 - rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, dma_addr, size); 2722 + rc = dma_mmap_coherent(hdev->dev, vma, cpu_addr, 2723 + (dma_addr - HOST_PHYS_BASE), size); 2723 2724 if (rc) 2724 2725 dev_err(hdev->dev, "dma_mmap_coherent error %d", rc); 2725 2726
+3 -1
drivers/mmc/core/queue.c
··· 384 384 "merging was advertised but not possible"); 385 385 blk_queue_max_segments(mq->queue, mmc_get_max_segments(host)); 386 386 387 - if (mmc_card_mmc(card)) 387 + if (mmc_card_mmc(card) && card->ext_csd.data_sector_size) { 388 388 block_size = card->ext_csd.data_sector_size; 389 + WARN_ON(block_size != 512 && block_size != 4096); 390 + } 389 391 390 392 blk_queue_logical_block_size(mq->queue, block_size); 391 393 /*
+1 -5
drivers/mmc/host/sdhci-brcmstb.c
··· 314 314 315 315 static void sdhci_brcmstb_shutdown(struct platform_device *pdev) 316 316 { 317 - int ret; 318 - 319 - ret = sdhci_pltfm_unregister(pdev); 320 - if (ret) 321 - dev_err(&pdev->dev, "failed to shutdown\n"); 317 + sdhci_pltfm_suspend(&pdev->dev); 322 318 } 323 319 324 320 MODULE_DEVICE_TABLE(of, sdhci_brcm_of_match);
+27
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 16 16 17 17 #include "sdhci-pltfm.h" 18 18 19 + #define SDHCI_DWCMSHC_ARG2_STUFF GENMASK(31, 16) 20 + 19 21 /* DWCMSHC specific Mode Select value */ 20 22 #define DWCMSHC_CTRL_HS400 0x7 21 23 ··· 49 47 addr += tmplen; 50 48 len -= tmplen; 51 49 sdhci_adma_write_desc(host, desc, addr, len, cmd); 50 + } 51 + 52 + static void dwcmshc_check_auto_cmd23(struct mmc_host *mmc, 53 + struct mmc_request *mrq) 54 + { 55 + struct sdhci_host *host = mmc_priv(mmc); 56 + 57 + /* 58 + * No matter V4 is enabled or not, ARGUMENT2 register is 32-bit 59 + * block count register which doesn't support stuff bits of 60 + * CMD23 argument on dwcmsch host controller. 61 + */ 62 + if (mrq->sbc && (mrq->sbc->arg & SDHCI_DWCMSHC_ARG2_STUFF)) 63 + host->flags &= ~SDHCI_AUTO_CMD23; 64 + else 65 + host->flags |= SDHCI_AUTO_CMD23; 66 + } 67 + 68 + static void dwcmshc_request(struct mmc_host *mmc, struct mmc_request *mrq) 69 + { 70 + dwcmshc_check_auto_cmd23(mmc, mrq); 71 + 72 + sdhci_request(mmc, mrq); 52 73 } 53 74 54 75 static void dwcmshc_set_uhs_signaling(struct sdhci_host *host, ··· 157 132 goto err_clk; 158 133 159 134 sdhci_get_of_property(pdev); 135 + 136 + host->mmc_host_ops.request = dwcmshc_request; 160 137 161 138 err = sdhci_add_host(host); 162 139 if (err)
+6 -1
drivers/mmc/host/sdhci-xenon.c
··· 168 168 /* Disable tuning request and auto-retuning again */ 169 169 xenon_retune_setup(host); 170 170 171 - xenon_set_acg(host, true); 171 + /* 172 + * The ACG should be turned off at the early init time, in order 173 + * to solve a possible issues with the 1.8V regulator stabilization. 174 + * The feature is enabled in later stage. 175 + */ 176 + xenon_set_acg(host, false); 172 177 173 178 xenon_set_sdclk_off_idle(host, sdhc_id, false); 174 179
+1 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
··· 1615 1615 /* Extract interleaved payload data and ECC bits */ 1616 1616 for (step = 0; step < nfc_geo->ecc_chunk_count; step++) { 1617 1617 if (buf) 1618 - nand_extract_bits(buf, step * eccsize, tmp_buf, 1618 + nand_extract_bits(buf, step * eccsize * 8, tmp_buf, 1619 1619 src_bit_off, eccsize * 8); 1620 1620 src_bit_off += eccsize * 8; 1621 1621
+3 -2
drivers/mtd/nand/raw/intel-nand-controller.c
··· 579 579 struct device *dev = &pdev->dev; 580 580 struct ebu_nand_controller *ebu_host; 581 581 struct nand_chip *nand; 582 - struct mtd_info *mtd = NULL; 582 + struct mtd_info *mtd; 583 583 struct resource *res; 584 584 char *resname; 585 585 int ret; ··· 647 647 ebu_host->ebu + EBU_ADDR_SEL(cs)); 648 648 649 649 nand_set_flash_node(&ebu_host->chip, dev->of_node); 650 + 651 + mtd = nand_to_mtd(&ebu_host->chip); 650 652 if (!mtd->name) { 651 653 dev_err(ebu_host->dev, "NAND label property is mandatory\n"); 652 654 return -EINVAL; 653 655 } 654 656 655 - mtd = nand_to_mtd(&ebu_host->chip); 656 657 mtd->dev.parent = dev; 657 658 ebu_host->dev = dev; 658 659
+3 -4
drivers/mtd/nand/raw/nandsim.c
··· 2210 2210 { 2211 2211 unsigned int eccsteps, eccbytes; 2212 2212 2213 + chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 2214 + chip->ecc.algo = bch ? NAND_ECC_ALGO_BCH : NAND_ECC_ALGO_HAMMING; 2215 + 2213 2216 if (!bch) 2214 2217 return 0; 2215 2218 ··· 2236 2233 return -EINVAL; 2237 2234 } 2238 2235 2239 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 2240 - chip->ecc.algo = NAND_ECC_ALGO_BCH; 2241 2236 chip->ecc.size = 512; 2242 2237 chip->ecc.strength = bch; 2243 2238 chip->ecc.bytes = eccbytes; ··· 2274 2273 nsmtd = nand_to_mtd(chip); 2275 2274 nand_set_controller_data(chip, (void *)ns); 2276 2275 2277 - chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT; 2278 - chip->ecc.algo = NAND_ECC_ALGO_HAMMING; 2279 2276 /* The NAND_SKIP_BBTSCAN option is necessary for 'overridesize' */ 2280 2277 /* and 'badblocks' parameters to work */ 2281 2278 chip->options |= NAND_SKIP_BBTSCAN;
+9 -6
drivers/mtd/nand/raw/omap2.c
··· 15 15 #include <linux/jiffies.h> 16 16 #include <linux/sched.h> 17 17 #include <linux/mtd/mtd.h> 18 + #include <linux/mtd/nand-ecc-sw-bch.h> 18 19 #include <linux/mtd/rawnand.h> 19 20 #include <linux/mtd/partitions.h> 20 21 #include <linux/omap-dma.h> ··· 1867 1866 static int omap_sw_ooblayout_ecc(struct mtd_info *mtd, int section, 1868 1867 struct mtd_oob_region *oobregion) 1869 1868 { 1870 - struct nand_chip *chip = mtd_to_nand(mtd); 1869 + struct nand_device *nand = mtd_to_nanddev(mtd); 1870 + const struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 1871 1871 int off = BADBLOCK_MARKER_LENGTH; 1872 1872 1873 - if (section >= chip->ecc.steps) 1873 + if (section >= engine_conf->nsteps) 1874 1874 return -ERANGE; 1875 1875 1876 1876 /* 1877 1877 * When SW correction is employed, one OMAP specific marker byte is 1878 1878 * reserved after each ECC step. 1879 1879 */ 1880 - oobregion->offset = off + (section * (chip->ecc.bytes + 1)); 1881 - oobregion->length = chip->ecc.bytes; 1880 + oobregion->offset = off + (section * (engine_conf->code_size + 1)); 1881 + oobregion->length = engine_conf->code_size; 1882 1882 1883 1883 return 0; 1884 1884 } ··· 1887 1885 static int omap_sw_ooblayout_free(struct mtd_info *mtd, int section, 1888 1886 struct mtd_oob_region *oobregion) 1889 1887 { 1890 - struct nand_chip *chip = mtd_to_nand(mtd); 1888 + struct nand_device *nand = mtd_to_nanddev(mtd); 1889 + const struct nand_ecc_sw_bch_conf *engine_conf = nand->ecc.ctx.priv; 1891 1890 int off = BADBLOCK_MARKER_LENGTH; 1892 1891 1893 1892 if (section) ··· 1898 1895 * When SW correction is employed, one OMAP specific marker byte is 1899 1896 * reserved after each ECC step. 1900 1897 */ 1901 - off += ((chip->ecc.bytes + 1) * chip->ecc.steps); 1898 + off += ((engine_conf->code_size + 1) * engine_conf->nsteps); 1902 1899 if (off >= mtd->oobsize) 1903 1900 return -ERANGE; 1904 1901
+11 -3
drivers/mtd/nand/spi/core.c
··· 343 343 const struct nand_page_io_req *req) 344 344 { 345 345 struct nand_device *nand = spinand_to_nand(spinand); 346 + struct mtd_info *mtd = spinand_to_mtd(spinand); 346 347 struct spi_mem_dirmap_desc *rdesc; 347 348 unsigned int nbytes = 0; 348 349 void *buf = NULL; ··· 383 382 memcpy(req->databuf.in, spinand->databuf + req->dataoffs, 384 383 req->datalen); 385 384 386 - if (req->ooblen) 387 - memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, 388 - req->ooblen); 385 + if (req->ooblen) { 386 + if (req->mode == MTD_OPS_AUTO_OOB) 387 + mtd_ooblayout_get_databytes(mtd, req->oobbuf.in, 388 + spinand->oobbuf, 389 + req->ooboffs, 390 + req->ooblen); 391 + else 392 + memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs, 393 + req->ooblen); 394 + } 389 395 390 396 return 0; 391 397 }
+1 -1
drivers/net/can/dev/netlink.c
··· 263 263 { 264 264 struct can_priv *priv = netdev_priv(dev); 265 265 struct can_ctrlmode cm = {.flags = priv->ctrlmode}; 266 - struct can_berr_counter bec; 266 + struct can_berr_counter bec = { }; 267 267 enum can_state state = priv->state; 268 268 269 269 if (priv->do_get_state)
+6 -2
drivers/net/dsa/bcm_sf2.c
··· 510 510 /* Find our integrated MDIO bus node */ 511 511 dn = of_find_compatible_node(NULL, NULL, "brcm,unimac-mdio"); 512 512 priv->master_mii_bus = of_mdio_find_bus(dn); 513 - if (!priv->master_mii_bus) 513 + if (!priv->master_mii_bus) { 514 + of_node_put(dn); 514 515 return -EPROBE_DEFER; 516 + } 515 517 516 518 get_device(&priv->master_mii_bus->dev); 517 519 priv->master_mii_dn = dn; 518 520 519 521 priv->slave_mii_bus = devm_mdiobus_alloc(ds->dev); 520 - if (!priv->slave_mii_bus) 522 + if (!priv->slave_mii_bus) { 523 + of_node_put(dn); 521 524 return -ENOMEM; 525 + } 522 526 523 527 priv->slave_mii_bus->priv = priv; 524 528 priv->slave_mii_bus->name = "sf2 slave mii";
+20 -10
drivers/net/dsa/microchip/ksz8795.c
··· 1183 1183 .port_cnt = 5, /* total cpu and user ports */ 1184 1184 }, 1185 1185 { 1186 + /* 1187 + * WARNING 1188 + * ======= 1189 + * KSZ8794 is similar to KSZ8795, except the port map 1190 + * contains a gap between external and CPU ports, the 1191 + * port map is NOT continuous. The per-port register 1192 + * map is shifted accordingly too, i.e. registers at 1193 + * offset 0x40 are NOT used on KSZ8794 and they ARE 1194 + * used on KSZ8795 for external port 3. 1195 + * external cpu 1196 + * KSZ8794 0,1,2 4 1197 + * KSZ8795 0,1,2,3 4 1198 + * KSZ8765 0,1,2,3 4 1199 + */ 1186 1200 .chip_id = 0x8794, 1187 1201 .dev_name = "KSZ8794", 1188 1202 .num_vlans = 4096, ··· 1230 1216 dev->num_vlans = chip->num_vlans; 1231 1217 dev->num_alus = chip->num_alus; 1232 1218 dev->num_statics = chip->num_statics; 1233 - dev->port_cnt = chip->port_cnt; 1219 + dev->port_cnt = fls(chip->cpu_ports); 1220 + dev->cpu_port = fls(chip->cpu_ports) - 1; 1221 + dev->phy_port_cnt = dev->port_cnt - 1; 1234 1222 dev->cpu_ports = chip->cpu_ports; 1235 - 1223 + dev->host_mask = chip->cpu_ports; 1224 + dev->port_mask = (BIT(dev->phy_port_cnt) - 1) | 1225 + chip->cpu_ports; 1236 1226 break; 1237 1227 } 1238 1228 } ··· 1245 1227 if (!dev->cpu_ports) 1246 1228 return -ENODEV; 1247 1229 1248 - dev->port_mask = BIT(dev->port_cnt) - 1; 1249 - dev->port_mask |= dev->host_mask; 1250 - 1251 1230 dev->reg_mib_cnt = KSZ8795_COUNTER_NUM; 1252 1231 dev->mib_cnt = ARRAY_SIZE(mib_names); 1253 - 1254 - dev->phy_port_cnt = dev->port_cnt - 1; 1255 - 1256 - dev->cpu_port = dev->port_cnt - 1; 1257 - dev->host_mask = BIT(dev->cpu_port); 1258 1232 1259 1233 dev->ports = devm_kzalloc(dev->dev, 1260 1234 dev->port_cnt * sizeof(struct ksz_port),
+2 -2
drivers/net/dsa/microchip/ksz_common.c
··· 385 385 gpiod_set_value_cansleep(dev->reset_gpio, 1); 386 386 usleep_range(10000, 12000); 387 387 gpiod_set_value_cansleep(dev->reset_gpio, 0); 388 - usleep_range(100, 1000); 388 + msleep(100); 389 389 } 390 390 391 391 mutex_init(&dev->dev_mutex); ··· 419 419 if (of_property_read_u32(port, "reg", 420 420 &port_num)) 421 421 continue; 422 - if (port_num >= dev->port_cnt) 422 + if (!(dev->port_mask & BIT(port_num))) 423 423 return -EINVAL; 424 424 of_get_phy_mode(port, 425 425 &dev->ports[port_num].interface);
+3 -4
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
··· 1158 1158 #endif 1159 1159 } 1160 1160 if (!n || !n->dev) 1161 - goto free_sk; 1161 + goto free_dst; 1162 1162 1163 1163 ndev = n->dev; 1164 - if (!ndev) 1165 - goto free_dst; 1166 1164 if (is_vlan_dev(ndev)) 1167 1165 ndev = vlan_dev_real_dev(ndev); 1168 1166 ··· 1248 1250 free_csk: 1249 1251 chtls_sock_release(&csk->kref); 1250 1252 free_dst: 1251 - neigh_release(n); 1253 + if (n) 1254 + neigh_release(n); 1252 1255 dst_release(dst); 1253 1256 free_sk: 1254 1257 inet_csk_prepare_forced_close(newsk);
+5
drivers/net/ethernet/freescale/fec.h
··· 462 462 */ 463 463 #define FEC_QUIRK_CLEAR_SETUP_MII (1 << 17) 464 464 465 + /* Some link partners do not tolerate the momentary reset of the REF_CLK 466 + * frequency when the RNCTL register is cleared by hardware reset. 467 + */ 468 + #define FEC_QUIRK_NO_HARD_RESET (1 << 18) 469 + 465 470 struct bufdesc_prop { 466 471 int qid; 467 472 /* Address of Rx and Tx buffers */
+6 -3
drivers/net/ethernet/freescale/fec_main.c
··· 100 100 static const struct fec_devinfo fec_imx28_info = { 101 101 .quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME | 102 102 FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC | 103 - FEC_QUIRK_HAS_FRREG | FEC_QUIRK_CLEAR_SETUP_MII, 103 + FEC_QUIRK_HAS_FRREG | FEC_QUIRK_CLEAR_SETUP_MII | 104 + FEC_QUIRK_NO_HARD_RESET, 104 105 }; 105 106 106 107 static const struct fec_devinfo fec_imx6q_info = { ··· 954 953 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC 955 954 * instead of reset MAC itself. 956 955 */ 957 - if (fep->quirks & FEC_QUIRK_HAS_AVB) { 956 + if (fep->quirks & FEC_QUIRK_HAS_AVB || 957 + ((fep->quirks & FEC_QUIRK_NO_HARD_RESET) && fep->link)) { 958 958 writel(0, fep->hwp + FEC_ECNTRL); 959 959 } else { 960 960 writel(1, fep->hwp + FEC_ECNTRL); ··· 2167 2165 fep->mii_bus->parent = &pdev->dev; 2168 2166 2169 2167 err = of_mdiobus_register(fep->mii_bus, node); 2170 - of_node_put(node); 2171 2168 if (err) 2172 2169 goto err_out_free_mdiobus; 2170 + of_node_put(node); 2173 2171 2174 2172 mii_cnt++; 2175 2173 ··· 2182 2180 err_out_free_mdiobus: 2183 2181 mdiobus_free(fep->mii_bus); 2184 2182 err_out: 2183 + of_node_put(node); 2185 2184 return err; 2186 2185 } 2187 2186
+6
drivers/net/ethernet/ibm/ibmvnic.c
··· 5017 5017 while (!done) { 5018 5018 /* Pull all the valid messages off the CRQ */ 5019 5019 while ((crq = ibmvnic_next_crq(adapter)) != NULL) { 5020 + /* This barrier makes sure ibmvnic_next_crq()'s 5021 + * crq->generic.first & IBMVNIC_CRQ_CMD_RSP is loaded 5022 + * before ibmvnic_handle_crq()'s 5023 + * switch(gen_crq->first) and switch(gen_crq->cmd). 5024 + */ 5025 + dma_rmb(); 5020 5026 ibmvnic_handle_crq(crq, adapter); 5021 5027 crq->generic.first = 0; 5022 5028 }
+4 -7
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 4046 4046 goto error_param; 4047 4047 4048 4048 vf = &pf->vf[vf_id]; 4049 - vsi = pf->vsi[vf->lan_vsi_idx]; 4050 4049 4051 4050 /* When the VF is resetting wait until it is done. 4052 4051 * It can take up to 200 milliseconds, 4053 4052 * but wait for up to 300 milliseconds to be safe. 4054 - * If the VF is indeed in reset, the vsi pointer has 4055 - * to show on the newly loaded vsi under pf->vsi[id]. 4053 + * Acquire the VSI pointer only after the VF has been 4054 + * properly initialized. 4056 4055 */ 4057 4056 for (i = 0; i < 15; i++) { 4058 - if (test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) { 4059 - if (i > 0) 4060 - vsi = pf->vsi[vf->lan_vsi_idx]; 4057 + if (test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) 4061 4058 break; 4062 - } 4063 4059 msleep(20); 4064 4060 } 4065 4061 if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) { ··· 4064 4068 ret = -EAGAIN; 4065 4069 goto error_param; 4066 4070 } 4071 + vsi = pf->vsi[vf->lan_vsi_idx]; 4067 4072 4068 4073 if (is_multicast_ether_addr(mac)) { 4069 4074 dev_err(&pf->pdev->dev,
+3 -1
drivers/net/ethernet/intel/ice/ice.h
··· 68 68 #define ICE_INT_NAME_STR_LEN (IFNAMSIZ + 16) 69 69 #define ICE_AQ_LEN 64 70 70 #define ICE_MBXSQ_LEN 64 71 - #define ICE_MIN_MSIX 2 71 + #define ICE_MIN_LAN_TXRX_MSIX 1 72 + #define ICE_MIN_LAN_OICR_MSIX 1 73 + #define ICE_MIN_MSIX (ICE_MIN_LAN_TXRX_MSIX + ICE_MIN_LAN_OICR_MSIX) 72 74 #define ICE_FDIR_MSIX 1 73 75 #define ICE_NO_VSI 0xffff 74 76 #define ICE_VSI_MAP_CONTIG 0
+4 -4
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 3258 3258 */ 3259 3259 static int ice_get_max_txq(struct ice_pf *pf) 3260 3260 { 3261 - return min_t(int, num_online_cpus(), 3262 - pf->hw.func_caps.common_cap.num_txq); 3261 + return min3(pf->num_lan_msix, (u16)num_online_cpus(), 3262 + (u16)pf->hw.func_caps.common_cap.num_txq); 3263 3263 } 3264 3264 3265 3265 /** ··· 3268 3268 */ 3269 3269 static int ice_get_max_rxq(struct ice_pf *pf) 3270 3270 { 3271 - return min_t(int, num_online_cpus(), 3272 - pf->hw.func_caps.common_cap.num_rxq); 3271 + return min3(pf->num_lan_msix, (u16)num_online_cpus(), 3272 + (u16)pf->hw.func_caps.common_cap.num_rxq); 3273 3273 } 3274 3274 3275 3275 /**
+7 -1
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
··· 1576 1576 sizeof(struct in6_addr)); 1577 1577 input->ip.v6.l4_header = fsp->h_u.usr_ip6_spec.l4_4_bytes; 1578 1578 input->ip.v6.tc = fsp->h_u.usr_ip6_spec.tclass; 1579 - input->ip.v6.proto = fsp->h_u.usr_ip6_spec.l4_proto; 1579 + 1580 + /* if no protocol requested, use IPPROTO_NONE */ 1581 + if (!fsp->m_u.usr_ip6_spec.l4_proto) 1582 + input->ip.v6.proto = IPPROTO_NONE; 1583 + else 1584 + input->ip.v6.proto = fsp->h_u.usr_ip6_spec.l4_proto; 1585 + 1580 1586 memcpy(input->mask.v6.dst_ip, fsp->m_u.usr_ip6_spec.ip6dst, 1581 1587 sizeof(struct in6_addr)); 1582 1588 memcpy(input->mask.v6.src_ip, fsp->m_u.usr_ip6_spec.ip6src,
+9 -5
drivers/net/ethernet/intel/ice/ice_lib.c
··· 161 161 162 162 switch (vsi->type) { 163 163 case ICE_VSI_PF: 164 - vsi->alloc_txq = min_t(int, ice_get_avail_txq_count(pf), 165 - num_online_cpus()); 164 + vsi->alloc_txq = min3(pf->num_lan_msix, 165 + ice_get_avail_txq_count(pf), 166 + (u16)num_online_cpus()); 166 167 if (vsi->req_txq) { 167 168 vsi->alloc_txq = vsi->req_txq; 168 169 vsi->num_txq = vsi->req_txq; ··· 175 174 if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) { 176 175 vsi->alloc_rxq = 1; 177 176 } else { 178 - vsi->alloc_rxq = min_t(int, ice_get_avail_rxq_count(pf), 179 - num_online_cpus()); 177 + vsi->alloc_rxq = min3(pf->num_lan_msix, 178 + ice_get_avail_rxq_count(pf), 179 + (u16)num_online_cpus()); 180 180 if (vsi->req_rxq) { 181 181 vsi->alloc_rxq = vsi->req_rxq; 182 182 vsi->num_rxq = vsi->req_rxq; ··· 186 184 187 185 pf->num_lan_rx = vsi->alloc_rxq; 188 186 189 - vsi->num_q_vectors = max_t(int, vsi->alloc_rxq, vsi->alloc_txq); 187 + vsi->num_q_vectors = min_t(int, pf->num_lan_msix, 188 + max_t(int, vsi->alloc_rxq, 189 + vsi->alloc_txq)); 190 190 break; 191 191 case ICE_VSI_VF: 192 192 vf = &pf->vf[vsi->vf_id];
+9 -7
drivers/net/ethernet/intel/ice/ice_main.c
··· 3430 3430 if (v_actual < v_budget) { 3431 3431 dev_warn(dev, "not enough OS MSI-X vectors. requested = %d, obtained = %d\n", 3432 3432 v_budget, v_actual); 3433 - /* 2 vectors each for LAN and RDMA (traffic + OICR), one for flow director */ 3434 - #define ICE_MIN_LAN_VECS 2 3435 - #define ICE_MIN_RDMA_VECS 2 3436 - #define ICE_MIN_VECS (ICE_MIN_LAN_VECS + ICE_MIN_RDMA_VECS + 1) 3437 3433 3438 - if (v_actual < ICE_MIN_LAN_VECS) { 3434 + if (v_actual < ICE_MIN_MSIX) { 3439 3435 /* error if we can't get minimum vectors */ 3440 3436 pci_disable_msix(pf->pdev); 3441 3437 err = -ERANGE; 3442 3438 goto msix_err; 3443 3439 } else { 3444 - pf->num_lan_msix = ICE_MIN_LAN_VECS; 3440 + pf->num_lan_msix = ICE_MIN_LAN_TXRX_MSIX; 3445 3441 } 3446 3442 } 3447 3443 ··· 4880 4884 goto err_update_filters; 4881 4885 } 4882 4886 4883 - /* Add filter for new MAC. If filter exists, just return success */ 4887 + /* Add filter for new MAC. If filter exists, return success */ 4884 4888 status = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI); 4885 4889 if (status == ICE_ERR_ALREADY_EXISTS) { 4890 + /* Although this MAC filter is already present in hardware it's 4891 + * possible in some cases (e.g. bonding) that dev_addr was 4892 + * modified outside of the driver and needs to be restored back 4893 + * to this value. 4894 + */ 4895 + memcpy(netdev->dev_addr, mac, netdev->addr_len); 4886 4896 netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac); 4887 4897 return 0; 4888 4898 }
+6 -3
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 1924 1924 ICE_TX_CTX_EIPT_IPV4_NO_CSUM; 1925 1925 l4_proto = ip.v4->protocol; 1926 1926 } else if (first->tx_flags & ICE_TX_FLAGS_IPV6) { 1927 + int ret; 1928 + 1927 1929 tunnel |= ICE_TX_CTX_EIPT_IPV6; 1928 1930 exthdr = ip.hdr + sizeof(*ip.v6); 1929 1931 l4_proto = ip.v6->nexthdr; 1930 - if (l4.hdr != exthdr) 1931 - ipv6_skip_exthdr(skb, exthdr - skb->data, 1932 - &l4_proto, &frag_off); 1932 + ret = ipv6_skip_exthdr(skb, exthdr - skb->data, 1933 + &l4_proto, &frag_off); 1934 + if (ret < 0) 1935 + return -1; 1933 1936 } 1934 1937 1935 1938 /* define outer transport */
+18 -6
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 1675 1675 cmd->base.phy_address = hw->phy.addr; 1676 1676 1677 1677 /* advertising link modes */ 1678 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half); 1679 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full); 1680 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half); 1681 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full); 1682 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full); 1683 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full); 1678 + if (hw->phy.autoneg_advertised & ADVERTISE_10_HALF) 1679 + ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Half); 1680 + if (hw->phy.autoneg_advertised & ADVERTISE_10_FULL) 1681 + ethtool_link_ksettings_add_link_mode(cmd, advertising, 10baseT_Full); 1682 + if (hw->phy.autoneg_advertised & ADVERTISE_100_HALF) 1683 + ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Half); 1684 + if (hw->phy.autoneg_advertised & ADVERTISE_100_FULL) 1685 + ethtool_link_ksettings_add_link_mode(cmd, advertising, 100baseT_Full); 1686 + if (hw->phy.autoneg_advertised & ADVERTISE_1000_FULL) 1687 + ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full); 1688 + if (hw->phy.autoneg_advertised & ADVERTISE_2500_FULL) 1689 + ethtool_link_ksettings_add_link_mode(cmd, advertising, 2500baseT_Full); 1684 1690 1685 1691 /* set autoneg settings */ 1686 1692 if (hw->mac.autoneg == 1) { ··· 1798 1792 1799 1793 ethtool_convert_link_mode_to_legacy_u32(&advertising, 1800 1794 cmd->link_modes.advertising); 1795 + /* Converting to legacy u32 drops ETHTOOL_LINK_MODE_2500baseT_Full_BIT. 1796 + * We have to check this and convert it to ADVERTISE_2500_FULL 1797 + * (aka ETHTOOL_LINK_MODE_2500baseX_Full_BIT) explicitly. 
1798 + */ 1799 + if (ethtool_link_ksettings_test_link_mode(cmd, advertising, 2500baseT_Full)) 1800 + advertising |= ADVERTISE_2500_FULL; 1801 1801 1802 1802 if (cmd->base.autoneg == AUTONEG_ENABLE) { 1803 1803 hw->mac.autoneg = 1;
+2 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
··· 488 488 dma_addr_t iova; 489 489 u8 *buf; 490 490 491 - buf = napi_alloc_frag(pool->rbsize); 491 + buf = napi_alloc_frag(pool->rbsize + OTX2_ALIGN); 492 492 if (unlikely(!buf)) 493 493 return -ENOMEM; 494 494 495 + buf = PTR_ALIGN(buf, OTX2_ALIGN); 495 496 iova = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize, 496 497 DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); 497 498 if (unlikely(dma_mapping_error(pfvf->dev, iova))) {
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/health.c
··· 273 273 274 274 err = devlink_fmsg_binary_pair_nest_start(fmsg, "data"); 275 275 if (err) 276 - return err; 276 + goto free_page; 277 277 278 278 cmd = mlx5_rsc_dump_cmd_create(mdev, key); 279 279 if (IS_ERR(cmd)) {
+13 -7
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 167 167 .min_size = 16 * 1024, 168 168 }; 169 169 170 + static bool 171 + mlx5_tc_ct_entry_has_nat(struct mlx5_ct_entry *entry) 172 + { 173 + return !!(entry->tuple_nat_node.next); 174 + } 175 + 170 176 static int 171 177 mlx5_tc_ct_rule_to_tuple(struct mlx5_ct_tuple *tuple, struct flow_rule *rule) 172 178 { ··· 915 909 err_insert: 916 910 mlx5_tc_ct_entry_del_rules(ct_priv, entry); 917 911 err_rules: 918 - rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht, 919 - &entry->tuple_nat_node, tuples_nat_ht_params); 912 + if (mlx5_tc_ct_entry_has_nat(entry)) 913 + rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht, 914 + &entry->tuple_nat_node, tuples_nat_ht_params); 920 915 err_tuple_nat: 921 - if (entry->tuple_node.next) 922 - rhashtable_remove_fast(&ct_priv->ct_tuples_ht, 923 - &entry->tuple_node, 924 - tuples_ht_params); 916 + rhashtable_remove_fast(&ct_priv->ct_tuples_ht, 917 + &entry->tuple_node, 918 + tuples_ht_params); 925 919 err_tuple: 926 920 err_set: 927 921 kfree(entry); ··· 936 930 { 937 931 mlx5_tc_ct_entry_del_rules(ct_priv, entry); 938 932 mutex_lock(&ct_priv->shared_counter_lock); 939 - if (entry->tuple_node.next) 933 + if (mlx5_tc_ct_entry_has_nat(entry)) 940 934 rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht, 941 935 &entry->tuple_nat_node, 942 936 tuples_nat_ht_params);
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_stats.c
··· 76 76 77 77 static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_sw) 78 78 { 79 - return NUM_IPSEC_SW_COUNTERS; 79 + return priv->ipsec ? NUM_IPSEC_SW_COUNTERS : 0; 80 80 } 81 81 82 82 static inline MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_sw) {} ··· 105 105 106 106 static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_hw) 107 107 { 108 - return (mlx5_fpga_ipsec_device_caps(priv->mdev)) ? NUM_IPSEC_HW_COUNTERS : 0; 108 + return (priv->ipsec && mlx5_fpga_ipsec_device_caps(priv->mdev)) ? NUM_IPSEC_HW_COUNTERS : 0; 109 109 } 110 110 111 111 static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_hw)
+8 -5
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 1151 1151 { 1152 1152 struct mlx5e_channels new_channels = {}; 1153 1153 bool reset_channels = true; 1154 + bool opened; 1154 1155 int err = 0; 1155 1156 1156 1157 mutex_lock(&priv->state_lock); ··· 1160 1159 mlx5e_params_calc_trust_tx_min_inline_mode(priv->mdev, &new_channels.params, 1161 1160 trust_state); 1162 1161 1163 - if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 1164 - priv->channels.params = new_channels.params; 1162 + opened = test_bit(MLX5E_STATE_OPENED, &priv->state); 1163 + if (!opened) 1165 1164 reset_channels = false; 1166 - } 1167 1165 1168 1166 /* Skip if tx_min_inline is the same */ 1169 1167 if (new_channels.params.tx_min_inline_mode == 1170 1168 priv->channels.params.tx_min_inline_mode) 1171 1169 reset_channels = false; 1172 1170 1173 - if (reset_channels) 1171 + if (reset_channels) { 1174 1172 err = mlx5e_safe_switch_channels(priv, &new_channels, 1175 1173 mlx5e_update_trust_state_hw, 1176 1174 &trust_state); 1177 - else 1175 + } else { 1178 1176 err = mlx5e_update_trust_state_hw(priv, &trust_state); 1177 + if (!err && !opened) 1178 + priv->channels.params = new_channels.params; 1179 + } 1179 1180 1180 1181 mutex_unlock(&priv->state_lock); 1181 1182
+7 -1
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 458 458 goto out; 459 459 } 460 460 461 - new_channels.params = priv->channels.params; 461 + new_channels.params = *cur_params; 462 462 new_channels.params.num_channels = count; 463 463 464 464 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 465 + struct mlx5e_params old_params; 466 + 467 + old_params = *cur_params; 465 468 *cur_params = new_channels.params; 466 469 err = mlx5e_num_channels_changed(priv); 470 + if (err) 471 + *cur_params = old_params; 472 + 467 473 goto out; 468 474 } 469 475
+29 -10
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 3685 3685 new_channels.params.num_tc = tc ? tc : 1; 3686 3686 3687 3687 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 3688 + struct mlx5e_params old_params; 3689 + 3690 + old_params = priv->channels.params; 3688 3691 priv->channels.params = new_channels.params; 3692 + err = mlx5e_num_channels_changed(priv); 3693 + if (err) 3694 + priv->channels.params = old_params; 3695 + 3689 3696 goto out; 3690 3697 } 3691 3698 ··· 3883 3876 struct mlx5e_priv *priv = netdev_priv(netdev); 3884 3877 struct mlx5_core_dev *mdev = priv->mdev; 3885 3878 struct mlx5e_channels new_channels = {}; 3886 - struct mlx5e_params *old_params; 3879 + struct mlx5e_params *cur_params; 3887 3880 int err = 0; 3888 3881 bool reset; 3889 3882 ··· 3896 3889 goto out; 3897 3890 } 3898 3891 3899 - old_params = &priv->channels.params; 3900 - if (enable && !MLX5E_GET_PFLAG(old_params, MLX5E_PFLAG_RX_STRIDING_RQ)) { 3892 + cur_params = &priv->channels.params; 3893 + if (enable && !MLX5E_GET_PFLAG(cur_params, MLX5E_PFLAG_RX_STRIDING_RQ)) { 3901 3894 netdev_warn(netdev, "can't set LRO with legacy RQ\n"); 3902 3895 err = -EINVAL; 3903 3896 goto out; ··· 3905 3898 3906 3899 reset = test_bit(MLX5E_STATE_OPENED, &priv->state); 3907 3900 3908 - new_channels.params = *old_params; 3901 + new_channels.params = *cur_params; 3909 3902 new_channels.params.lro_en = enable; 3910 3903 3911 - if (old_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) { 3912 - if (mlx5e_rx_mpwqe_is_linear_skb(mdev, old_params, NULL) == 3904 + if (cur_params->rq_wq_type != MLX5_WQ_TYPE_CYCLIC) { 3905 + if (mlx5e_rx_mpwqe_is_linear_skb(mdev, cur_params, NULL) == 3913 3906 mlx5e_rx_mpwqe_is_linear_skb(mdev, &new_channels.params, NULL)) 3914 3907 reset = false; 3915 3908 } 3916 3909 3917 3910 if (!reset) { 3918 - *old_params = new_channels.params; 3911 + struct mlx5e_params old_params; 3912 + 3913 + old_params = *cur_params; 3914 + *cur_params = new_channels.params; 3919 3915 err = mlx5e_modify_tirs_lro(priv); 3916 + if (err) 3917 + *cur_params 
= old_params; 3920 3918 goto out; 3921 3919 } 3922 3920 ··· 4201 4189 } 4202 4190 4203 4191 if (!reset) { 4192 + unsigned int old_mtu = params->sw_mtu; 4193 + 4204 4194 params->sw_mtu = new_mtu; 4205 - if (preactivate) 4206 - preactivate(priv, NULL); 4195 + if (preactivate) { 4196 + err = preactivate(priv, NULL); 4197 + if (err) { 4198 + params->sw_mtu = old_mtu; 4199 + goto out; 4200 + } 4201 + } 4207 4202 netdev->mtu = params->sw_mtu; 4208 4203 goto out; 4209 4204 } ··· 5164 5145 FT_CAP(modify_root) && 5165 5146 FT_CAP(identified_miss_table_mode) && 5166 5147 FT_CAP(flow_table_modify)) { 5167 - #ifdef CONFIG_MLX5_ESWITCH 5148 + #if IS_ENABLED(CONFIG_MLX5_CLS_ACT) 5168 5149 netdev->hw_features |= NETIF_F_HW_TC; 5169 5150 #endif 5170 5151 #ifdef CONFIG_MLX5_EN_ARFS
+2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 735 735 736 736 netdev->features |= NETIF_F_NETNS_LOCAL; 737 737 738 + #if IS_ENABLED(CONFIG_MLX5_CLS_ACT) 738 739 netdev->hw_features |= NETIF_F_HW_TC; 740 + #endif 739 741 netdev->hw_features |= NETIF_F_SG; 740 742 netdev->hw_features |= NETIF_F_IP_CSUM; 741 743 netdev->hw_features |= NETIF_F_IPV6_CSUM;
+17 -5
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 67 67 #include "lib/geneve.h" 68 68 #include "lib/fs_chains.h" 69 69 #include "diag/en_tc_tracepoint.h" 70 + #include <asm/div64.h> 70 71 71 72 #define nic_chains(priv) ((priv)->fs.tc.chains) 72 73 #define MLX5_MH_ACT_SZ MLX5_UN_SZ_BYTES(set_add_copy_action_in_auto) ··· 1163 1162 struct mlx5e_tc_mod_hdr_acts *mod_hdr_acts; 1164 1163 struct mlx5_flow_handle *rule; 1165 1164 1165 + if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH) 1166 + return mlx5_eswitch_add_offloaded_rule(esw, spec, attr); 1167 + 1166 1168 if (flow_flag_test(flow, CT)) { 1167 1169 mod_hdr_acts = &attr->parse_attr->mod_hdr_acts; 1168 1170 ··· 1196 1192 { 1197 1193 flow_flag_clear(flow, OFFLOADED); 1198 1194 1195 + if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH) 1196 + goto offload_rule_0; 1197 + 1199 1198 if (flow_flag_test(flow, CT)) { 1200 1199 mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr); 1201 1200 return; ··· 1207 1200 if (attr->esw_attr->split_count) 1208 1201 mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr); 1209 1202 1203 + offload_rule_0: 1210 1204 mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr); 1211 1205 } 1212 1206 ··· 2271 2263 BIT(FLOW_DISSECTOR_KEY_ENC_OPTS) | 2272 2264 BIT(FLOW_DISSECTOR_KEY_MPLS))) { 2273 2265 NL_SET_ERR_MSG_MOD(extack, "Unsupported key"); 2274 - netdev_warn(priv->netdev, "Unsupported key used: 0x%x\n", 2275 - dissector->used_keys); 2266 + netdev_dbg(priv->netdev, "Unsupported key used: 0x%x\n", 2267 + dissector->used_keys); 2276 2268 return -EOPNOTSUPP; 2277 2269 } 2278 2270 ··· 5009 5001 return err; 5010 5002 } 5011 5003 5012 - static int apply_police_params(struct mlx5e_priv *priv, u32 rate, 5004 + static int apply_police_params(struct mlx5e_priv *priv, u64 rate, 5013 5005 struct netlink_ext_ack *extack) 5014 5006 { 5015 5007 struct mlx5e_rep_priv *rpriv = priv->ppriv; 5016 5008 struct mlx5_eswitch *esw; 5009 + u32 rate_mbps = 0; 5017 5010 u16 vport_num; 5018 - u32 rate_mbps; 5019 5011 int err; 5020 5012 5021 5013 vport_num = 
rpriv->rep->vport; ··· 5032 5024 * Moreover, if rate is non zero we choose to configure to a minimum of 5033 5025 * 1 mbit/sec. 5034 5026 */ 5035 - rate_mbps = rate ? max_t(u32, (rate * 8 + 500000) / 1000000, 1) : 0; 5027 + if (rate) { 5028 + rate = (rate * BITS_PER_BYTE) + 500000; 5029 + rate_mbps = max_t(u32, do_div(rate, 1000000), 1); 5030 + } 5031 + 5036 5032 err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps); 5037 5033 if (err) 5038 5034 NL_SET_ERR_MSG_MOD(extack, "failed applying action to hardware");
+1
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 1141 1141 destroy_ft: 1142 1142 root->cmds->destroy_flow_table(root, ft); 1143 1143 free_ft: 1144 + rhltable_destroy(&ft->fgs_hash); 1144 1145 kfree(ft); 1145 1146 unlock_root: 1146 1147 mutex_unlock(&root->chain_lock);
+5
drivers/net/ethernet/mellanox/mlx5/core/lib/mlx5.h
··· 90 90 u32 key_type, u32 *p_key_id); 91 91 void mlx5_destroy_encryption_key(struct mlx5_core_dev *mdev, u32 key_id); 92 92 93 + static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev) 94 + { 95 + return devlink_net(priv_to_devlink(dev)); 96 + } 97 + 93 98 #endif
+34 -24
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
··· 58 58 struct rb_node rb_node; 59 59 u64 addr; 60 60 struct page *page; 61 - u16 func_id; 61 + u32 function; 62 62 unsigned long bitmask; 63 63 struct list_head list; 64 64 unsigned free_count; ··· 74 74 MLX5_NUM_4K_IN_PAGE = PAGE_SIZE / MLX5_ADAPTER_PAGE_SIZE, 75 75 }; 76 76 77 - static struct rb_root *page_root_per_func_id(struct mlx5_core_dev *dev, u16 func_id) 77 + static u32 get_function(u16 func_id, bool ec_function) 78 + { 79 + return func_id & (ec_function << 16); 80 + } 81 + 82 + static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function) 78 83 { 79 84 struct rb_root *root; 80 85 int err; 81 86 82 - root = xa_load(&dev->priv.page_root_xa, func_id); 87 + root = xa_load(&dev->priv.page_root_xa, function); 83 88 if (root) 84 89 return root; 85 90 ··· 92 87 if (!root) 93 88 return ERR_PTR(-ENOMEM); 94 89 95 - err = xa_insert(&dev->priv.page_root_xa, func_id, root, GFP_KERNEL); 90 + err = xa_insert(&dev->priv.page_root_xa, function, root, GFP_KERNEL); 96 91 if (err) { 97 92 kfree(root); 98 93 return ERR_PTR(err); ··· 103 98 return root; 104 99 } 105 100 106 - static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id) 101 + static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u32 function) 107 102 { 108 103 struct rb_node *parent = NULL; 109 104 struct rb_root *root; ··· 112 107 struct fw_page *tfp; 113 108 int i; 114 109 115 - root = page_root_per_func_id(dev, func_id); 110 + root = page_root_per_function(dev, function); 116 111 if (IS_ERR(root)) 117 112 return PTR_ERR(root); 118 113 ··· 135 130 136 131 nfp->addr = addr; 137 132 nfp->page = page; 138 - nfp->func_id = func_id; 133 + nfp->function = function; 139 134 nfp->free_count = MLX5_NUM_4K_IN_PAGE; 140 135 for (i = 0; i < MLX5_NUM_4K_IN_PAGE; i++) 141 136 set_bit(i, &nfp->bitmask); ··· 148 143 } 149 144 150 145 static struct fw_page *find_fw_page(struct mlx5_core_dev *dev, u64 addr, 151 - u32 func_id) 146 + u32 
function) 152 147 { 153 148 struct fw_page *result = NULL; 154 149 struct rb_root *root; 155 150 struct rb_node *tmp; 156 151 struct fw_page *tfp; 157 152 158 - root = xa_load(&dev->priv.page_root_xa, func_id); 153 + root = xa_load(&dev->priv.page_root_xa, function); 159 154 if (WARN_ON_ONCE(!root)) 160 155 return NULL; 161 156 ··· 199 194 return err; 200 195 } 201 196 202 - static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u16 func_id) 197 + static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function) 203 198 { 204 199 struct fw_page *fp = NULL; 205 200 struct fw_page *iter; 206 201 unsigned n; 207 202 208 203 list_for_each_entry(iter, &dev->priv.free_list, list) { 209 - if (iter->func_id != func_id) 204 + if (iter->function != function) 210 205 continue; 211 206 fp = iter; 212 207 } ··· 236 231 { 237 232 struct rb_root *root; 238 233 239 - root = xa_load(&dev->priv.page_root_xa, fwp->func_id); 234 + root = xa_load(&dev->priv.page_root_xa, fwp->function); 240 235 if (WARN_ON_ONCE(!root)) 241 236 return; 242 237 ··· 249 244 kfree(fwp); 250 245 } 251 246 252 - static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 func_id) 247 + static void free_4k(struct mlx5_core_dev *dev, u64 addr, u32 function) 253 248 { 254 249 struct fw_page *fwp; 255 250 int n; 256 251 257 - fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, func_id); 252 + fwp = find_fw_page(dev, addr & MLX5_U64_4K_PAGE_MASK, function); 258 253 if (!fwp) { 259 254 mlx5_core_warn_rl(dev, "page not found\n"); 260 255 return; ··· 268 263 list_add(&fwp->list, &dev->priv.free_list); 269 264 } 270 265 271 - static int alloc_system_page(struct mlx5_core_dev *dev, u16 func_id) 266 + static int alloc_system_page(struct mlx5_core_dev *dev, u32 function) 272 267 { 273 268 struct device *device = mlx5_core_dma_dev(dev); 274 269 int nid = dev_to_node(device); ··· 296 291 goto map; 297 292 } 298 293 299 - err = insert_page(dev, addr, page, func_id); 294 + err = insert_page(dev, addr, page, 
function); 300 295 if (err) { 301 296 mlx5_core_err(dev, "failed to track allocated page\n"); 302 297 dma_unmap_page(device, addr, PAGE_SIZE, DMA_BIDIRECTIONAL); ··· 333 328 static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages, 334 329 int notify_fail, bool ec_function) 335 330 { 331 + u32 function = get_function(func_id, ec_function); 336 332 u32 out[MLX5_ST_SZ_DW(manage_pages_out)] = {0}; 337 333 int inlen = MLX5_ST_SZ_BYTES(manage_pages_in); 338 334 u64 addr; ··· 351 345 352 346 for (i = 0; i < npages; i++) { 353 347 retry: 354 - err = alloc_4k(dev, &addr, func_id); 348 + err = alloc_4k(dev, &addr, function); 355 349 if (err) { 356 350 if (err == -ENOMEM) 357 - err = alloc_system_page(dev, func_id); 351 + err = alloc_system_page(dev, function); 358 352 if (err) 359 353 goto out_4k; 360 354 ··· 390 384 391 385 out_4k: 392 386 for (i--; i >= 0; i--) 393 - free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), func_id); 387 + free_4k(dev, MLX5_GET64(manage_pages_in, in, pas[i]), function); 394 388 out_free: 395 389 kvfree(in); 396 390 if (notify_fail) ··· 398 392 return err; 399 393 } 400 394 401 - static void release_all_pages(struct mlx5_core_dev *dev, u32 func_id, 395 + static void release_all_pages(struct mlx5_core_dev *dev, u16 func_id, 402 396 bool ec_function) 403 397 { 398 + u32 function = get_function(func_id, ec_function); 404 399 struct rb_root *root; 405 400 struct rb_node *p; 406 401 int npages = 0; 407 402 408 - root = xa_load(&dev->priv.page_root_xa, func_id); 403 + root = xa_load(&dev->priv.page_root_xa, function); 409 404 if (WARN_ON_ONCE(!root)) 410 405 return; 411 406 ··· 453 446 struct rb_root *root; 454 447 struct fw_page *fwp; 455 448 struct rb_node *p; 449 + bool ec_function; 456 450 u32 func_id; 457 451 u32 npages; 458 452 u32 i = 0; ··· 464 456 /* No hard feelings, we want our pages back! 
*/ 465 457 npages = MLX5_GET(manage_pages_in, in, input_num_entries); 466 458 func_id = MLX5_GET(manage_pages_in, in, function_id); 459 + ec_function = MLX5_GET(manage_pages_in, in, embedded_cpu_function); 467 460 468 - root = xa_load(&dev->priv.page_root_xa, func_id); 461 + root = xa_load(&dev->priv.page_root_xa, get_function(func_id, ec_function)); 469 462 if (WARN_ON_ONCE(!root)) 470 463 return -EEXIST; 471 464 ··· 482 473 return 0; 483 474 } 484 475 485 - static int reclaim_pages(struct mlx5_core_dev *dev, u32 func_id, int npages, 476 + static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages, 486 477 int *nclaimed, bool ec_function) 487 478 { 479 + u32 function = get_function(func_id, ec_function); 488 480 int outlen = MLX5_ST_SZ_BYTES(manage_pages_out); 489 481 u32 in[MLX5_ST_SZ_DW(manage_pages_in)] = {}; 490 482 int num_claimed; ··· 524 514 } 525 515 526 516 for (i = 0; i < num_claimed; i++) 527 - free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), func_id); 517 + free_4k(dev, MLX5_GET64(manage_pages_out, out, pas[i]), function); 528 518 529 519 if (nclaimed) 530 520 *nclaimed = num_claimed;
+6
drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
··· 157 157 158 158 static const 159 159 struct mlxsw_sp_span_entry_ops mlxsw_sp1_span_entry_ops_cpu = { 160 + .is_static = true, 160 161 .can_handle = mlxsw_sp1_span_cpu_can_handle, 161 162 .parms_set = mlxsw_sp1_span_entry_cpu_parms, 162 163 .configure = mlxsw_sp1_span_entry_cpu_configure, ··· 215 214 216 215 static const 217 216 struct mlxsw_sp_span_entry_ops mlxsw_sp_span_entry_ops_phys = { 217 + .is_static = true, 218 218 .can_handle = mlxsw_sp_port_dev_check, 219 219 .parms_set = mlxsw_sp_span_entry_phys_parms, 220 220 .configure = mlxsw_sp_span_entry_phys_configure, ··· 723 721 724 722 static const 725 723 struct mlxsw_sp_span_entry_ops mlxsw_sp2_span_entry_ops_cpu = { 724 + .is_static = true, 726 725 .can_handle = mlxsw_sp2_span_cpu_can_handle, 727 726 .parms_set = mlxsw_sp2_span_entry_cpu_parms, 728 727 .configure = mlxsw_sp2_span_entry_cpu_configure, ··· 1037 1034 struct mlxsw_sp_span_parms sparms = {NULL}; 1038 1035 1039 1036 if (!refcount_read(&curr->ref_count)) 1037 + continue; 1038 + 1039 + if (curr->ops->is_static) 1040 1040 continue; 1041 1041 1042 1042 err = curr->ops->parms_set(mlxsw_sp, curr->to_dev, &sparms);
+1
drivers/net/ethernet/mellanox/mlxsw/spectrum_span.h
··· 60 60 }; 61 61 62 62 struct mlxsw_sp_span_entry_ops { 63 + bool is_static; 63 64 bool (*can_handle)(const struct net_device *to_dev); 64 65 int (*parms_set)(struct mlxsw_sp *mlxsw_sp, 65 66 const struct net_device *to_dev,
+2 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
··· 129 129 if (ret) { 130 130 dev_err(&pdev->dev, 131 131 "Failed to set tx_clk\n"); 132 - return ret; 132 + goto err_remove_config_dt; 133 133 } 134 134 } 135 135 } ··· 143 143 if (ret) { 144 144 dev_err(&pdev->dev, 145 145 "Failed to set clk_ptp_ref\n"); 146 - return ret; 146 + goto err_remove_config_dt; 147 147 } 148 148 } 149 149 }
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
··· 375 375 struct plat_stmmacenet_data *plat) 376 376 { 377 377 plat->bus_id = 2; 378 + plat->addr64 = 32; 378 379 return ehl_common_data(pdev, plat); 379 380 } 380 381 ··· 407 406 struct plat_stmmacenet_data *plat) 408 407 { 409 408 plat->bus_id = 3; 409 + plat->addr64 = 32; 410 410 return ehl_common_data(pdev, plat); 411 411 } 412 412
+3 -3
drivers/net/team/team.c
··· 992 992 unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE | 993 993 IFF_XMIT_DST_RELEASE_PERM; 994 994 995 - list_for_each_entry(port, &team->port_list, list) { 995 + rcu_read_lock(); 996 + list_for_each_entry_rcu(port, &team->port_list, list) { 996 997 vlan_features = netdev_increment_features(vlan_features, 997 998 port->dev->vlan_features, 998 999 TEAM_VLAN_FEATURES); ··· 1007 1006 if (port->dev->hard_header_len > max_hard_header_len) 1008 1007 max_hard_header_len = port->dev->hard_header_len; 1009 1008 } 1009 + rcu_read_unlock(); 1010 1010 1011 1011 team->dev->vlan_features = vlan_features; 1012 1012 team->dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL | ··· 1022 1020 1023 1021 static void team_compute_features(struct team *team) 1024 1022 { 1025 - mutex_lock(&team->lock); 1026 1023 __team_compute_features(team); 1027 - mutex_unlock(&team->lock); 1028 1024 netdev_change_features(team->dev); 1029 1025 } 1030 1026
+6
drivers/net/usb/cdc_ether.c
··· 969 969 USB_CDC_PROTO_NONE), 970 970 .driver_info = (unsigned long)&wwan_info, 971 971 }, { 972 + /* Cinterion PLS83/PLS63 modem by GEMALTO/THALES */ 973 + USB_DEVICE_AND_INTERFACE_INFO(0x1e2d, 0x0069, USB_CLASS_COMM, 974 + USB_CDC_SUBCLASS_ETHERNET, 975 + USB_CDC_PROTO_NONE), 976 + .driver_info = (unsigned long)&wwan_info, 977 + }, { 972 978 USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET, 973 979 USB_CDC_PROTO_NONE), 974 980 .driver_info = (unsigned long) &cdc_info,
+1
drivers/net/usb/qmi_wwan.c
··· 1303 1303 {QMI_FIXED_INTF(0x0b3c, 0xc00a, 6)}, /* Olivetti Olicard 160 */ 1304 1304 {QMI_FIXED_INTF(0x0b3c, 0xc00b, 4)}, /* Olivetti Olicard 500 */ 1305 1305 {QMI_FIXED_INTF(0x1e2d, 0x0060, 4)}, /* Cinterion PLxx */ 1306 + {QMI_QUIRK_SET_DTR(0x1e2d, 0x006f, 8)}, /* Cinterion PLS83/PLS63 */ 1306 1307 {QMI_FIXED_INTF(0x1e2d, 0x0053, 4)}, /* Cinterion PHxx,PXxx */ 1307 1308 {QMI_FIXED_INTF(0x1e2d, 0x0063, 10)}, /* Cinterion ALASxx (1 RmNet) */ 1308 1309 {QMI_FIXED_INTF(0x1e2d, 0x0082, 4)}, /* Cinterion PHxx,PXxx (2 RmNet) */
+25
drivers/net/wireless/intel/iwlwifi/cfg/22000.c
··· 314 314 const char iwl_ax101_name[] = "Intel(R) Wi-Fi 6 AX101"; 315 315 const char iwl_ax200_name[] = "Intel(R) Wi-Fi 6 AX200 160MHz"; 316 316 const char iwl_ax201_name[] = "Intel(R) Wi-Fi 6 AX201 160MHz"; 317 + const char iwl_ax203_name[] = "Intel(R) Wi-Fi 6 AX203"; 317 318 const char iwl_ax211_name[] = "Intel(R) Wi-Fi 6 AX211 160MHz"; 318 319 const char iwl_ax411_name[] = "Intel(R) Wi-Fi 6 AX411 160MHz"; 319 320 const char iwl_ma_name[] = "Intel(R) Wi-Fi 6"; ··· 341 340 .num_rbds = IWL_NUM_RBDS_22000_HE, 342 341 }; 343 342 343 + const struct iwl_cfg iwl_qu_b0_hr_b0 = { 344 + .fw_name_pre = IWL_QU_B_HR_B_FW_PRE, 345 + IWL_DEVICE_22500, 346 + /* 347 + * This device doesn't support receiving BlockAck with a large bitmap 348 + * so we need to restrict the size of transmitted aggregation to the 349 + * HT size; mac80211 would otherwise pick the HE max (256) by default. 350 + */ 351 + .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 352 + .num_rbds = IWL_NUM_RBDS_22000_HE, 353 + }; 354 + 344 355 const struct iwl_cfg iwl_ax201_cfg_qu_hr = { 345 356 .name = "Intel(R) Wi-Fi 6 AX201 160MHz", 346 357 .fw_name_pre = IWL_QU_B_HR_B_FW_PRE, ··· 376 363 */ 377 364 .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 378 365 .tx_with_siso_diversity = true, 366 + .num_rbds = IWL_NUM_RBDS_22000_HE, 367 + }; 368 + 369 + const struct iwl_cfg iwl_qu_c0_hr_b0 = { 370 + .fw_name_pre = IWL_QU_C_HR_B_FW_PRE, 371 + IWL_DEVICE_22500, 372 + /* 373 + * This device doesn't support receiving BlockAck with a large bitmap 374 + * so we need to restrict the size of transmitted aggregation to the 375 + * HT size; mac80211 would otherwise pick the HE max (256) by default. 376 + */ 377 + .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 379 378 .num_rbds = IWL_NUM_RBDS_22000_HE, 380 379 }; 381 380
+50 -15
drivers/net/wireless/intel/iwlwifi/fw/acpi.c
··· 80 80 } 81 81 82 82 /* 83 - * Evaluate a DSM with no arguments and a single u8 return value (inside a 84 - * buffer object), verify and return that value. 83 + * Generic function to evaluate a DSM with no arguments 84 + * and an integer return value, 85 + * (as an integer object or inside a buffer object), 86 + * verify and assign the value in the "value" parameter. 87 + * return 0 in success and the appropriate errno otherwise. 85 88 */ 86 - int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func) 89 + static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func, 90 + u64 *value, size_t expected_size) 87 91 { 88 92 union acpi_object *obj; 89 - int ret; 93 + int ret = 0; 90 94 91 95 obj = iwl_acpi_get_dsm_object(dev, rev, func, NULL); 92 - if (IS_ERR(obj)) 96 + if (IS_ERR(obj)) { 97 + IWL_DEBUG_DEV_RADIO(dev, 98 + "Failed to get DSM object. func= %d\n", 99 + func); 93 100 return -ENOENT; 101 + } 94 102 95 - if (obj->type != ACPI_TYPE_BUFFER) { 103 + if (obj->type == ACPI_TYPE_INTEGER) { 104 + *value = obj->integer.value; 105 + } else if (obj->type == ACPI_TYPE_BUFFER) { 106 + __le64 le_value = 0; 107 + 108 + if (WARN_ON_ONCE(expected_size > sizeof(le_value))) 109 + return -EINVAL; 110 + 111 + /* if the buffer size doesn't match the expected size */ 112 + if (obj->buffer.length != expected_size) 113 + IWL_DEBUG_DEV_RADIO(dev, 114 + "ACPI: DSM invalid buffer size, padding or truncating (%d)\n", 115 + obj->buffer.length); 116 + 117 + /* assuming LE from Intel BIOS spec */ 118 + memcpy(&le_value, obj->buffer.pointer, 119 + min_t(size_t, expected_size, (size_t)obj->buffer.length)); 120 + *value = le64_to_cpu(le_value); 121 + } else { 96 122 IWL_DEBUG_DEV_RADIO(dev, 97 123 "ACPI: DSM method did not return a valid object, type=%d\n", 98 124 obj->type); ··· 126 100 goto out; 127 101 } 128 102 129 - if (obj->buffer.length != sizeof(u8)) { 130 - IWL_DEBUG_DEV_RADIO(dev, 131 - "ACPI: DSM method returned invalid buffer, length=%d\n", 132 - 
obj->buffer.length); 133 - ret = -EINVAL; 134 - goto out; 135 - } 136 - 137 - ret = obj->buffer.pointer[0]; 138 103 IWL_DEBUG_DEV_RADIO(dev, 139 104 "ACPI: DSM method evaluated: func=%d, ret=%d\n", 140 105 func, ret); 141 106 out: 142 107 ACPI_FREE(obj); 143 108 return ret; 109 + } 110 + 111 + /* 112 + * Evaluate a DSM with no arguments and a u8 return value, 113 + */ 114 + int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func, u8 *value) 115 + { 116 + int ret; 117 + u64 val; 118 + 119 + ret = iwl_acpi_get_dsm_integer(dev, rev, func, &val, sizeof(u8)); 120 + 121 + if (ret < 0) 122 + return ret; 123 + 124 + /* cast val (u64) to be u8 */ 125 + *value = (u8)val; 126 + return 0; 144 127 } 145 128 IWL_EXPORT_SYMBOL(iwl_acpi_get_dsm_u8); 146 129
+4 -3
drivers/net/wireless/intel/iwlwifi/fw/acpi.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 3 * Copyright (C) 2017 Intel Deutschland GmbH 4 - * Copyright (C) 2018-2020 Intel Corporation 4 + * Copyright (C) 2018-2021 Intel Corporation 5 5 */ 6 6 #ifndef __iwl_fw_acpi__ 7 7 #define __iwl_fw_acpi__ ··· 99 99 100 100 void *iwl_acpi_get_object(struct device *dev, acpi_string method); 101 101 102 - int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func); 102 + int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func, u8 *value); 103 103 104 104 union acpi_object *iwl_acpi_get_wifi_pkg(struct device *dev, 105 105 union acpi_object *data, ··· 159 159 return ERR_PTR(-ENOENT); 160 160 } 161 161 162 - static inline int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func) 162 + static inline 163 + int iwl_acpi_get_dsm_u8(struct device *dev, int rev, int func, u8 *value) 163 164 { 164 165 return -ENOENT; 165 166 }
+28 -22
drivers/net/wireless/intel/iwlwifi/fw/pnvm.c
··· 224 224 int iwl_pnvm_load(struct iwl_trans *trans, 225 225 struct iwl_notif_wait_data *notif_wait) 226 226 { 227 - const struct firmware *pnvm; 228 227 struct iwl_notification_wait pnvm_wait; 229 228 static const u16 ntf_cmds[] = { WIDE_ID(REGULATORY_AND_NVM_GROUP, 230 229 PNVM_INIT_COMPLETE_NTFY) }; 231 - char pnvm_name[64]; 232 - int ret; 233 230 234 231 /* if the SKU_ID is empty, there's nothing to do */ 235 232 if (!trans->sku_id[0] && !trans->sku_id[1] && !trans->sku_id[2]) 236 233 return 0; 237 234 238 - /* if we already have it, nothing to do either */ 239 - if (trans->pnvm_loaded) 240 - return 0; 235 + /* load from disk only if we haven't done it (or tried) before */ 236 + if (!trans->pnvm_loaded) { 237 + const struct firmware *pnvm; 238 + char pnvm_name[64]; 239 + int ret; 241 240 242 - /* 243 - * The prefix unfortunately includes a hyphen at the end, so 244 - * don't add the dot here... 245 - */ 246 - snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm", 247 - trans->cfg->fw_name_pre); 241 + /* 242 + * The prefix unfortunately includes a hyphen at the end, so 243 + * don't add the dot here... 244 + */ 245 + snprintf(pnvm_name, sizeof(pnvm_name), "%spnvm", 246 + trans->cfg->fw_name_pre); 248 247 249 - /* ...but replace the hyphen with the dot here. */ 250 - if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name)) 251 - pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.'; 248 + /* ...but replace the hyphen with the dot here. 
*/ 249 + if (strlen(trans->cfg->fw_name_pre) < sizeof(pnvm_name)) 250 + pnvm_name[strlen(trans->cfg->fw_name_pre) - 1] = '.'; 252 251 253 - ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev); 254 - if (ret) { 255 - IWL_DEBUG_FW(trans, "PNVM file %s not found %d\n", 256 - pnvm_name, ret); 257 - } else { 258 - iwl_pnvm_parse(trans, pnvm->data, pnvm->size); 252 + ret = firmware_request_nowarn(&pnvm, pnvm_name, trans->dev); 253 + if (ret) { 254 + IWL_DEBUG_FW(trans, "PNVM file %s not found %d\n", 255 + pnvm_name, ret); 256 + /* 257 + * Pretend we've loaded it - at least we've tried and 258 + * couldn't load it at all, so there's no point in 259 + * trying again over and over. 260 + */ 261 + trans->pnvm_loaded = true; 262 + } else { 263 + iwl_pnvm_parse(trans, pnvm->data, pnvm->size); 259 264 260 - release_firmware(pnvm); 265 + release_firmware(pnvm); 266 + } 261 267 } 262 268 263 269 iwl_init_notification_wait(notif_wait, &pnvm_wait,
+5 -2
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 - * Copyright (C) 2005-2014, 2018-2020 Intel Corporation 3 + * Copyright (C) 2005-2014, 2018-2021 Intel Corporation 4 4 * Copyright (C) 2016-2017 Intel Deutschland GmbH 5 5 */ 6 6 #ifndef __IWL_CONFIG_H__ ··· 445 445 #define IWL_CFG_CORES_BT_GNSS 0x5 446 446 447 447 #define IWL_SUBDEVICE_RF_ID(subdevice) ((u16)((subdevice) & 0x00F0) >> 4) 448 - #define IWL_SUBDEVICE_NO_160(subdevice) ((u16)((subdevice) & 0x0100) >> 9) 448 + #define IWL_SUBDEVICE_NO_160(subdevice) ((u16)((subdevice) & 0x0200) >> 9) 449 449 #define IWL_SUBDEVICE_CORES(subdevice) ((u16)((subdevice) & 0x1C00) >> 10) 450 450 451 451 struct iwl_dev_info { ··· 491 491 extern const char iwl9560_killer_1550i_name[]; 492 492 extern const char iwl9560_killer_1550s_name[]; 493 493 extern const char iwl_ax200_name[]; 494 + extern const char iwl_ax203_name[]; 494 495 extern const char iwl_ax201_name[]; 495 496 extern const char iwl_ax101_name[]; 496 497 extern const char iwl_ax200_killer_1650w_name[]; ··· 575 574 extern const struct iwl_cfg iwl_qu_b0_hr1_b0; 576 575 extern const struct iwl_cfg iwl_qu_c0_hr1_b0; 577 576 extern const struct iwl_cfg iwl_quz_a0_hr1_b0; 577 + extern const struct iwl_cfg iwl_qu_b0_hr_b0; 578 + extern const struct iwl_cfg iwl_qu_c0_hr_b0; 578 579 extern const struct iwl_cfg iwl_ax200_cfg_cc; 579 580 extern const struct iwl_cfg iwl_ax201_cfg_qu_hr; 580 581 extern const struct iwl_cfg iwl_ax201_cfg_qu_hr;
-7
drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
··· 180 180 if (le32_to_cpu(tlv->length) < sizeof(*reg)) 181 181 return -EINVAL; 182 182 183 - /* For safe using a string from FW make sure we have a 184 - * null terminator 185 - */ 186 - reg->name[IWL_FW_INI_MAX_NAME - 1] = 0; 187 - 188 - IWL_DEBUG_FW(trans, "WRT: parsing region: %s\n", reg->name); 189 - 190 183 if (id >= IWL_FW_INI_MAX_REGION_ID) { 191 184 IWL_ERR(trans, "WRT: Invalid region id %u\n", id); 192 185 return -EINVAL;
+5 -4
drivers/net/wireless/intel/iwlwifi/iwl-io.c
··· 150 150 } 151 151 IWL_EXPORT_SYMBOL(iwl_read_prph); 152 152 153 - void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val) 153 + void iwl_write_prph_delay(struct iwl_trans *trans, u32 ofs, u32 val, u32 delay_ms) 154 154 { 155 155 unsigned long flags; 156 156 157 157 if (iwl_trans_grab_nic_access(trans, &flags)) { 158 + mdelay(delay_ms); 158 159 iwl_write_prph_no_grab(trans, ofs, val); 159 160 iwl_trans_release_nic_access(trans, &flags); 160 161 } 161 162 } 162 - IWL_EXPORT_SYMBOL(iwl_write_prph); 163 + IWL_EXPORT_SYMBOL(iwl_write_prph_delay); 163 164 164 165 int iwl_poll_prph_bit(struct iwl_trans *trans, u32 addr, 165 166 u32 bits, u32 mask, int timeout) ··· 220 219 void iwl_force_nmi(struct iwl_trans *trans) 221 220 { 222 221 if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000) 223 - iwl_write_prph(trans, DEVICE_SET_NMI_REG, 224 - DEVICE_SET_NMI_VAL_DRV); 222 + iwl_write_prph_delay(trans, DEVICE_SET_NMI_REG, 223 + DEVICE_SET_NMI_VAL_DRV, 1); 225 224 else if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) 226 225 iwl_write_umac_prph(trans, UREG_NIC_SET_NMI_DRIVER, 227 226 UREG_NIC_SET_NMI_DRIVER_NMI_FROM_DRIVER);
+8 -2
drivers/net/wireless/intel/iwlwifi/iwl-io.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 - * Copyright (C) 2018-2019 Intel Corporation 3 + * Copyright (C) 2018-2020 Intel Corporation 4 4 */ 5 5 #ifndef __iwl_io_h__ 6 6 #define __iwl_io_h__ ··· 37 37 u32 iwl_read_prph(struct iwl_trans *trans, u32 ofs); 38 38 void iwl_write_prph_no_grab(struct iwl_trans *trans, u32 ofs, u32 val); 39 39 void iwl_write_prph64_no_grab(struct iwl_trans *trans, u64 ofs, u64 val); 40 - void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val); 40 + void iwl_write_prph_delay(struct iwl_trans *trans, u32 ofs, 41 + u32 val, u32 delay_ms); 42 + static inline void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val) 43 + { 44 + iwl_write_prph_delay(trans, ofs, val, 0); 45 + } 46 + 41 47 int iwl_poll_prph_bit(struct iwl_trans *trans, u32 addr, 42 48 u32 bits, u32 mask, int timeout); 43 49 void iwl_set_bits_prph(struct iwl_trans *trans, u32 ofs, u32 mask);
+6
drivers/net/wireless/intel/iwlwifi/iwl-prph.h
··· 301 301 #define RADIO_RSP_ADDR_POS (6) 302 302 #define RADIO_RSP_RD_CMD (3) 303 303 304 + /* LTR control (Qu only) */ 305 + #define HPM_MAC_LTR_CSR 0xa0348c 306 + #define HPM_MAC_LRT_ENABLE_ALL 0xf 307 + /* also uses CSR_LTR_* for values */ 308 + #define HPM_UMAC_LTR 0xa03480 309 + 304 310 /* FW monitor */ 305 311 #define MON_BUFF_SAMPLE_CTL (0xa03c00) 306 312 #define MON_BUFF_BASE_ADDR (0xa03c1c)
+3 -3
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2012-2014, 2018-2020 Intel Corporation 3 + * Copyright (C) 2012-2014, 2018-2021 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 2032 2032 2033 2033 mutex_lock(&mvm->mutex); 2034 2034 2035 - clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status); 2036 - 2037 2035 /* get the BSS vif pointer again */ 2038 2036 vif = iwl_mvm_get_bss_vif(mvm); 2039 2037 if (IS_ERR_OR_NULL(vif)) ··· 2146 2148 iwl_mvm_d3_disconnect_iter, keep ? vif : NULL); 2147 2149 2148 2150 out: 2151 + clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status); 2152 + 2149 2153 /* no need to reset the device in unified images, if successful */ 2150 2154 if (unified_image && !ret) { 2151 2155 /* nothing else to do if we already sent D0I3_END_CMD */
+3
drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
··· 459 459 const size_t bufsz = sizeof(buf); 460 460 int pos = 0; 461 461 462 + mutex_lock(&mvm->mutex); 462 463 iwl_mvm_get_sync_time(mvm, &curr_gp2, &curr_os); 464 + mutex_unlock(&mvm->mutex); 465 + 463 466 do_div(curr_os, NSEC_PER_USEC); 464 467 diff = curr_os - curr_gp2; 465 468 pos += scnprintf(buf + pos, bufsz - pos, "diff=%lld\n", diff);
+14 -11
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 1090 1090 1091 1091 static u8 iwl_mvm_eval_dsm_indonesia_5g2(struct iwl_mvm *mvm) 1092 1092 { 1093 + u8 value; 1094 + 1093 1095 int ret = iwl_acpi_get_dsm_u8((&mvm->fwrt)->dev, 0, 1094 - DSM_FUNC_ENABLE_INDONESIA_5G2); 1096 + DSM_FUNC_ENABLE_INDONESIA_5G2, &value); 1095 1097 1096 1098 if (ret < 0) 1097 1099 IWL_DEBUG_RADIO(mvm, 1098 1100 "Failed to evaluate DSM function ENABLE_INDONESIA_5G2, ret=%d\n", 1099 1101 ret); 1100 1102 1101 - else if (ret >= DSM_VALUE_INDONESIA_MAX) 1103 + else if (value >= DSM_VALUE_INDONESIA_MAX) 1102 1104 IWL_DEBUG_RADIO(mvm, 1103 - "DSM function ENABLE_INDONESIA_5G2 return invalid value, ret=%d\n", 1104 - ret); 1105 + "DSM function ENABLE_INDONESIA_5G2 return invalid value, value=%d\n", 1106 + value); 1105 1107 1106 - else if (ret == DSM_VALUE_INDONESIA_ENABLE) { 1108 + else if (value == DSM_VALUE_INDONESIA_ENABLE) { 1107 1109 IWL_DEBUG_RADIO(mvm, 1108 1110 "Evaluated DSM function ENABLE_INDONESIA_5G2: Enabling 5g2\n"); 1109 1111 return DSM_VALUE_INDONESIA_ENABLE; ··· 1116 1114 1117 1115 static u8 iwl_mvm_eval_dsm_disable_srd(struct iwl_mvm *mvm) 1118 1116 { 1117 + u8 value; 1119 1118 int ret = iwl_acpi_get_dsm_u8((&mvm->fwrt)->dev, 0, 1120 - DSM_FUNC_DISABLE_SRD); 1119 + DSM_FUNC_DISABLE_SRD, &value); 1121 1120 1122 1121 if (ret < 0) 1123 1122 IWL_DEBUG_RADIO(mvm, 1124 1123 "Failed to evaluate DSM function DISABLE_SRD, ret=%d\n", 1125 1124 ret); 1126 1125 1127 - else if (ret >= DSM_VALUE_SRD_MAX) 1126 + else if (value >= DSM_VALUE_SRD_MAX) 1128 1127 IWL_DEBUG_RADIO(mvm, 1129 - "DSM function DISABLE_SRD return invalid value, ret=%d\n", 1130 - ret); 1128 + "DSM function DISABLE_SRD return invalid value, value=%d\n", 1129 + value); 1131 1130 1132 - else if (ret == DSM_VALUE_SRD_PASSIVE) { 1131 + else if (value == DSM_VALUE_SRD_PASSIVE) { 1133 1132 IWL_DEBUG_RADIO(mvm, 1134 1133 "Evaluated DSM function DISABLE_SRD: setting SRD to passive\n"); 1135 1134 return DSM_VALUE_SRD_PASSIVE; 1136 1135 1137 - } else if (ret == 
DSM_VALUE_SRD_DISABLE) { 1136 + } else if (value == DSM_VALUE_SRD_DISABLE) { 1138 1137 IWL_DEBUG_RADIO(mvm, 1139 1138 "Evaluated DSM function DISABLE_SRD: disabling SRD\n"); 1140 1139 return DSM_VALUE_SRD_DISABLE;
+3
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 4194 4194 iwl_mvm_binding_remove_vif(mvm, vif); 4195 4195 4196 4196 out: 4197 + if (fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD) && 4198 + switching_chanctx) 4199 + return; 4197 4200 mvmvif->phy_ctxt = NULL; 4198 4201 iwl_mvm_power_update_mac(mvm); 4199 4202 }
+6 -1
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
··· 791 791 if (!mvm->scan_cmd) 792 792 goto out_free; 793 793 794 + /* invalidate ids to prevent accidental removal of sta_id 0 */ 795 + mvm->aux_sta.sta_id = IWL_MVM_INVALID_STA; 796 + mvm->snif_sta.sta_id = IWL_MVM_INVALID_STA; 797 + 794 798 /* Set EBS as successful as long as not stated otherwise by the FW. */ 795 799 mvm->last_ebs_successful = true; 796 800 ··· 1209 1205 reprobe = container_of(wk, struct iwl_mvm_reprobe, work); 1210 1206 if (device_reprobe(reprobe->dev)) 1211 1207 dev_err(reprobe->dev, "reprobe failed!\n"); 1208 + put_device(reprobe->dev); 1212 1209 kfree(reprobe); 1213 1210 module_put(THIS_MODULE); 1214 1211 } ··· 1260 1255 module_put(THIS_MODULE); 1261 1256 return; 1262 1257 } 1263 - reprobe->dev = mvm->trans->dev; 1258 + reprobe->dev = get_device(mvm->trans->dev); 1264 1259 INIT_WORK(&reprobe->work, iwl_mvm_reprobe_wk); 1265 1260 schedule_work(&reprobe->work); 1266 1261 } else if (test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
+6
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 2057 2057 2058 2058 lockdep_assert_held(&mvm->mutex); 2059 2059 2060 + if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA)) 2061 + return -EINVAL; 2062 + 2060 2063 iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0); 2061 2064 ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id); 2062 2065 if (ret) ··· 2073 2070 int ret; 2074 2071 2075 2072 lockdep_assert_held(&mvm->mutex); 2073 + 2074 + if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA)) 2075 + return -EINVAL; 2076 2076 2077 2077 iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0); 2078 2078 ret = iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id);
+3
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
··· 773 773 774 774 next = skb_gso_segment(skb, netdev_flags); 775 775 skb_shinfo(skb)->gso_size = mss; 776 + skb_shinfo(skb)->gso_type = ipv4 ? SKB_GSO_TCPV4 : SKB_GSO_TCPV6; 776 777 if (WARN_ON_ONCE(IS_ERR(next))) 777 778 return -EINVAL; 778 779 else if (next) ··· 796 795 797 796 if (tcp_payload_len > mss) { 798 797 skb_shinfo(tmp)->gso_size = mss; 798 + skb_shinfo(tmp)->gso_type = ipv4 ? SKB_GSO_TCPV4 : 799 + SKB_GSO_TCPV6; 799 800 } else { 800 801 if (qos) { 801 802 u8 *qc;
+34 -19
drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
··· 75 75 const struct fw_img *fw) 76 76 { 77 77 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 78 + u32 ltr_val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 79 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 80 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 81 + u32_encode_bits(250, 82 + CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 83 + CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 84 + u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 85 + CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 86 + u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 78 87 struct iwl_context_info_gen3 *ctxt_info_gen3; 79 88 struct iwl_prph_scratch *prph_scratch; 80 89 struct iwl_prph_scratch_ctrl_cfg *prph_sc_ctrl; ··· 198 189 /* Allocate IML */ 199 190 iml_img = dma_alloc_coherent(trans->dev, trans->iml_len, 200 191 &trans_pcie->iml_dma_addr, GFP_KERNEL); 201 - if (!iml_img) 202 - return -ENOMEM; 192 + if (!iml_img) { 193 + ret = -ENOMEM; 194 + goto err_free_ctxt_info; 195 + } 203 196 204 197 memcpy(iml_img, trans->iml, trans->iml_len); 205 198 ··· 217 206 iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL, 218 207 CSR_AUTO_FUNC_BOOT_ENA); 219 208 220 - if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) { 221 - /* 222 - * The firmware initializes this again later (to a smaller 223 - * value), but for the boot process initialize the LTR to 224 - * ~250 usec. 225 - */ 226 - u32 val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ | 227 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 228 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) | 229 - u32_encode_bits(250, 230 - CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) | 231 - CSR_LTR_LONG_VAL_AD_SNOOP_REQ | 232 - u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC, 233 - CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) | 234 - u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL); 235 - 236 - iwl_write32(trans, CSR_LTR_LONG_VAL_AD, val); 209 + /* 210 + * To workaround hardware latency issues during the boot process, 211 + * initialize the LTR to ~250 usec (see ltr_val above). 
212 + * The firmware initializes this again later (to a smaller value). 213 + */ 214 + if ((trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210 || 215 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) && 216 + !trans->trans_cfg->integrated) { 217 + iwl_write32(trans, CSR_LTR_LONG_VAL_AD, ltr_val); 218 + } else if (trans->trans_cfg->integrated && 219 + trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_22000) { 220 + iwl_write_prph(trans, HPM_MAC_LTR_CSR, HPM_MAC_LRT_ENABLE_ALL); 221 + iwl_write_prph(trans, HPM_UMAC_LTR, ltr_val); 237 222 } 238 223 239 224 if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) ··· 239 232 240 233 return 0; 241 234 235 + err_free_ctxt_info: 236 + dma_free_coherent(trans->dev, sizeof(*trans_pcie->ctxt_info_gen3), 237 + trans_pcie->ctxt_info_gen3, 238 + trans_pcie->ctxt_info_dma_addr); 239 + trans_pcie->ctxt_info_gen3 = NULL; 242 240 err_free_prph_info: 243 241 dma_free_coherent(trans->dev, 244 242 sizeof(*prph_info), ··· 305 293 ret); 306 294 return ret; 307 295 } 296 + 297 + if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size)) 298 + return -EBUSY; 308 299 309 300 prph_sc_ctrl->pnvm_cfg.pnvm_base_addr = 310 301 cpu_to_le64(trans_pcie->pnvm_dram.physical);
+10
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 910 910 IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, 911 911 IWL_CFG_ANY, IWL_CFG_ANY, 912 912 iwl_qu_b0_hr1_b0, iwl_ax101_name), 913 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 914 + IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, 915 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 916 + IWL_CFG_ANY, IWL_CFG_ANY, 917 + iwl_qu_b0_hr_b0, iwl_ax203_name), 913 918 914 919 /* Qu C step */ 915 920 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, ··· 922 917 IWL_CFG_RF_TYPE_HR1, IWL_CFG_ANY, 923 918 IWL_CFG_ANY, IWL_CFG_ANY, 924 919 iwl_qu_c0_hr1_b0, iwl_ax101_name), 920 + _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY, 921 + IWL_CFG_MAC_TYPE_QU, SILICON_C_STEP, 922 + IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, 923 + IWL_CFG_ANY, IWL_CFG_ANY, 924 + iwl_qu_c0_hr_b0, iwl_ax203_name), 925 925 926 926 /* QuZ */ 927 927 _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+8 -6
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
··· 2107 2107 2108 2108 while (offs < dwords) { 2109 2109 /* limit the time we spin here under lock to 1/2s */ 2110 - ktime_t timeout = ktime_add_us(ktime_get(), 500 * USEC_PER_MSEC); 2110 + unsigned long end = jiffies + HZ / 2; 2111 + bool resched = false; 2111 2112 2112 2113 if (iwl_trans_grab_nic_access(trans, &flags)) { 2113 2114 iwl_write32(trans, HBUS_TARG_MEM_RADDR, ··· 2119 2118 HBUS_TARG_MEM_RDAT); 2120 2119 offs++; 2121 2120 2122 - /* calling ktime_get is expensive so 2123 - * do it once in 128 reads 2124 - */ 2125 - if (offs % 128 == 0 && ktime_after(ktime_get(), 2126 - timeout)) 2121 + if (time_after(jiffies, end)) { 2122 + resched = true; 2127 2123 break; 2124 + } 2128 2125 } 2129 2126 iwl_trans_release_nic_access(trans, &flags); 2127 + 2128 + if (resched) 2129 + cond_resched(); 2130 2130 } else { 2131 2131 return -EBUSY; 2132 2132 }
+5
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 201 201 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 202 202 struct iwl_txq *txq = trans->txqs.txq[txq_id]; 203 203 204 + if (!txq) { 205 + IWL_ERR(trans, "Trying to free a queue that wasn't allocated?\n"); 206 + return; 207 + } 208 + 204 209 spin_lock_bh(&txq->lock); 205 210 while (txq->write_ptr != txq->read_ptr) { 206 211 IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
+26 -29
drivers/net/wireless/intel/iwlwifi/queue/tx.c
··· 142 142 * idx is bounded by n_window 143 143 */ 144 144 int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr); 145 + struct sk_buff *skb; 145 146 146 147 lockdep_assert_held(&txq->lock); 148 + 149 + if (!txq->entries) 150 + return; 147 151 148 152 iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta, 149 153 iwl_txq_get_tfd(trans, txq, idx)); 150 154 151 - /* free SKB */ 152 - if (txq->entries) { 153 - struct sk_buff *skb; 155 + skb = txq->entries[idx].skb; 154 156 155 - skb = txq->entries[idx].skb; 156 - 157 - /* Can be called from irqs-disabled context 158 - * If skb is not NULL, it means that the whole queue is being 159 - * freed and that the queue is not empty - free the skb 160 - */ 161 - if (skb) { 162 - iwl_op_mode_free_skb(trans->op_mode, skb); 163 - txq->entries[idx].skb = NULL; 164 - } 157 + /* Can be called from irqs-disabled context 158 + * If skb is not NULL, it means that the whole queue is being 159 + * freed and that the queue is not empty - free the skb 160 + */ 161 + if (skb) { 162 + iwl_op_mode_free_skb(trans->op_mode, skb); 163 + txq->entries[idx].skb = NULL; 165 164 } 166 165 } 167 166 ··· 840 841 int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr); 841 842 struct sk_buff *skb = txq->entries[idx].skb; 842 843 843 - if (WARN_ON_ONCE(!skb)) 844 - continue; 845 - 846 - iwl_txq_free_tso_page(trans, skb); 844 + if (!WARN_ON_ONCE(!skb)) 845 + iwl_txq_free_tso_page(trans, skb); 847 846 } 848 847 iwl_txq_gen2_free_tfd(trans, txq); 849 848 txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr); ··· 1491 1494 */ 1492 1495 int rd_ptr = txq->read_ptr; 1493 1496 int idx = iwl_txq_get_cmd_index(txq, rd_ptr); 1497 + struct sk_buff *skb; 1494 1498 1495 1499 lockdep_assert_held(&txq->lock); 1500 + 1501 + if (!txq->entries) 1502 + return; 1496 1503 1497 1504 /* We have only q->n_window txq->entries, but we use 1498 1505 * TFD_QUEUE_SIZE_MAX tfds ··· 1504 1503 iwl_txq_gen1_tfd_unmap(trans, &txq->entries[idx].meta, txq, rd_ptr); 1505 1504 1506 1505 /* free SKB 
*/ 1507 - if (txq->entries) { 1508 - struct sk_buff *skb; 1506 + skb = txq->entries[idx].skb; 1509 1507 1510 - skb = txq->entries[idx].skb; 1511 - 1512 - /* Can be called from irqs-disabled context 1513 - * If skb is not NULL, it means that the whole queue is being 1514 - * freed and that the queue is not empty - free the skb 1515 - */ 1516 - if (skb) { 1517 - iwl_op_mode_free_skb(trans->op_mode, skb); 1518 - txq->entries[idx].skb = NULL; 1519 - } 1508 + /* Can be called from irqs-disabled context 1509 + * If skb is not NULL, it means that the whole queue is being 1510 + * freed and that the queue is not empty - free the skb 1511 + */ 1512 + if (skb) { 1513 + iwl_op_mode_free_skb(trans->op_mode, skb); 1514 + txq->entries[idx].skb = NULL; 1520 1515 } 1521 1516 } 1522 1517
+1 -1
drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
··· 231 231 int cmd, int *seq) 232 232 { 233 233 struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76); 234 - enum mt76_txq_id qid; 234 + enum mt76_mcuq_id qid; 235 235 236 236 mt7615_mcu_fill_msg(dev, skb, cmd, seq); 237 237 if (test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state))
+4 -5
drivers/net/wireless/mediatek/mt76/mt7615/sdio_txrx.c
··· 83 83 { 84 84 struct mt76_queue *q = &dev->q_rx[qid]; 85 85 struct mt76_sdio *sdio = &dev->sdio; 86 - int len = 0, err, i, order; 86 + int len = 0, err, i; 87 87 struct page *page; 88 88 u8 *buf; 89 89 ··· 96 96 if (len > sdio->func->cur_blksize) 97 97 len = roundup(len, sdio->func->cur_blksize); 98 98 99 - order = get_order(len); 100 - page = __dev_alloc_pages(GFP_KERNEL, order); 99 + page = __dev_alloc_pages(GFP_KERNEL, get_order(len)); 101 100 if (!page) 102 101 return -ENOMEM; 103 102 ··· 105 106 err = sdio_readsb(sdio->func, buf, MCR_WRDR(qid), len); 106 107 if (err < 0) { 107 108 dev_err(dev->dev, "sdio read data failed:%d\n", err); 108 - __free_pages(page, order); 109 + put_page(page); 109 110 return err; 110 111 } 111 112 ··· 122 123 if (q->queued + i + 1 == q->ndesc) 123 124 break; 124 125 } 125 - __free_pages(page, order); 126 + put_page(page); 126 127 127 128 spin_lock_bh(&q->lock); 128 129 q->head = (q->head + i) % q->ndesc;
+5 -5
drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
··· 256 256 struct mt7915_dev *dev = container_of(mdev, struct mt7915_dev, mt76); 257 257 struct mt7915_mcu_txd *mcu_txd; 258 258 u8 seq, pkt_fmt, qidx; 259 - enum mt76_txq_id txq; 259 + enum mt76_mcuq_id qid; 260 260 __le32 *txd; 261 261 u32 val; 262 262 ··· 268 268 seq = ++dev->mt76.mcu.msg_seq & 0xf; 269 269 270 270 if (cmd == -MCU_CMD_FW_SCATTER) { 271 - txq = MT_MCUQ_FWDL; 271 + qid = MT_MCUQ_FWDL; 272 272 goto exit; 273 273 } 274 274 275 275 mcu_txd = (struct mt7915_mcu_txd *)skb_push(skb, sizeof(*mcu_txd)); 276 276 277 277 if (test_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state)) { 278 - txq = MT_MCUQ_WA; 278 + qid = MT_MCUQ_WA; 279 279 qidx = MT_TX_MCU_PORT_RX_Q0; 280 280 pkt_fmt = MT_TX_TYPE_CMD; 281 281 } else { 282 - txq = MT_MCUQ_WM; 282 + qid = MT_MCUQ_WM; 283 283 qidx = MT_TX_MCU_PORT_RX_Q0; 284 284 pkt_fmt = MT_TX_TYPE_CMD; 285 285 } ··· 326 326 if (wait_seq) 327 327 *wait_seq = seq; 328 328 329 - return mt76_tx_queue_skb_raw(dev, mdev->q_mcu[txq], skb, 0); 329 + return mt76_tx_queue_skb_raw(dev, mdev->q_mcu[qid], skb, 0); 330 330 } 331 331 332 332 static void
+2 -3
drivers/net/wireless/mediatek/mt7601u/dma.c
··· 152 152 153 153 if (new_p) { 154 154 /* we have one extra ref from the allocator */ 155 - __free_pages(e->p, MT_RX_ORDER); 156 - 155 + put_page(e->p); 157 156 e->p = new_p; 158 157 } 159 158 } ··· 309 310 } 310 311 311 312 e = &q->e[q->end]; 312 - e->skb = skb; 313 313 usb_fill_bulk_urb(e->urb, usb_dev, snd_pipe, skb->data, skb->len, 314 314 mt7601u_complete_tx, q); 315 315 ret = usb_submit_urb(e->urb, GFP_ATOMIC); ··· 326 328 327 329 q->end = (q->end + 1) % q->entries; 328 330 q->used++; 331 + e->skb = skb; 329 332 330 333 if (q->used >= q->entries) 331 334 ieee80211_stop_queue(dev->hw, skb_get_queue_mapping(skb));
+15 -2
drivers/nvme/host/core.c
··· 1543 1543 } 1544 1544 1545 1545 length = (io.nblocks + 1) << ns->lba_shift; 1546 - meta_len = (io.nblocks + 1) * ns->ms; 1547 - metadata = nvme_to_user_ptr(io.metadata); 1546 + 1547 + if ((io.control & NVME_RW_PRINFO_PRACT) && 1548 + ns->ms == sizeof(struct t10_pi_tuple)) { 1549 + /* 1550 + * Protection information is stripped/inserted by the 1551 + * controller. 1552 + */ 1553 + if (nvme_to_user_ptr(io.metadata)) 1554 + return -EINVAL; 1555 + meta_len = 0; 1556 + metadata = NULL; 1557 + } else { 1558 + meta_len = (io.nblocks + 1) * ns->ms; 1559 + metadata = nvme_to_user_ptr(io.metadata); 1560 + } 1548 1561 1549 1562 if (ns->features & NVME_NS_EXT_LBAS) { 1550 1563 length += meta_len;
+81 -38
drivers/nvme/host/pci.c
··· 23 23 #include <linux/t10-pi.h> 24 24 #include <linux/types.h> 25 25 #include <linux/io-64-nonatomic-lo-hi.h> 26 + #include <linux/io-64-nonatomic-hi-lo.h> 26 27 #include <linux/sed-opal.h> 27 28 #include <linux/pci-p2pdma.h> 28 29 ··· 543 542 return true; 544 543 } 545 544 546 - static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) 545 + static void nvme_free_prps(struct nvme_dev *dev, struct request *req) 547 546 { 548 - struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 549 547 const int last_prp = NVME_CTRL_PAGE_SIZE / sizeof(__le64) - 1; 550 - dma_addr_t dma_addr = iod->first_dma, next_dma_addr; 548 + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 549 + dma_addr_t dma_addr = iod->first_dma; 551 550 int i; 552 551 553 - if (iod->dma_len) { 554 - dma_unmap_page(dev->dev, dma_addr, iod->dma_len, 555 - rq_dma_dir(req)); 556 - return; 552 + for (i = 0; i < iod->npages; i++) { 553 + __le64 *prp_list = nvme_pci_iod_list(req)[i]; 554 + dma_addr_t next_dma_addr = le64_to_cpu(prp_list[last_prp]); 555 + 556 + dma_pool_free(dev->prp_page_pool, prp_list, dma_addr); 557 + dma_addr = next_dma_addr; 557 558 } 558 559 559 - WARN_ON_ONCE(!iod->nents); 560 + } 561 + 562 + static void nvme_free_sgls(struct nvme_dev *dev, struct request *req) 563 + { 564 + const int last_sg = SGES_PER_PAGE - 1; 565 + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 566 + dma_addr_t dma_addr = iod->first_dma; 567 + int i; 568 + 569 + for (i = 0; i < iod->npages; i++) { 570 + struct nvme_sgl_desc *sg_list = nvme_pci_iod_list(req)[i]; 571 + dma_addr_t next_dma_addr = le64_to_cpu((sg_list[last_sg]).addr); 572 + 573 + dma_pool_free(dev->prp_page_pool, sg_list, dma_addr); 574 + dma_addr = next_dma_addr; 575 + } 576 + 577 + } 578 + 579 + static void nvme_unmap_sg(struct nvme_dev *dev, struct request *req) 580 + { 581 + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 560 582 561 583 if (is_pci_p2pdma_page(sg_page(iod->sg))) 562 584 pci_p2pdma_unmap_sg(dev->dev, iod->sg, iod->nents, 563 585 
rq_dma_dir(req)); 564 586 else 565 587 dma_unmap_sg(dev->dev, iod->sg, iod->nents, rq_dma_dir(req)); 588 + } 566 589 590 + static void nvme_unmap_data(struct nvme_dev *dev, struct request *req) 591 + { 592 + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); 567 593 568 - if (iod->npages == 0) 569 - dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0], 570 - dma_addr); 571 - 572 - for (i = 0; i < iod->npages; i++) { 573 - void *addr = nvme_pci_iod_list(req)[i]; 574 - 575 - if (iod->use_sgl) { 576 - struct nvme_sgl_desc *sg_list = addr; 577 - 578 - next_dma_addr = 579 - le64_to_cpu((sg_list[SGES_PER_PAGE - 1]).addr); 580 - } else { 581 - __le64 *prp_list = addr; 582 - 583 - next_dma_addr = le64_to_cpu(prp_list[last_prp]); 584 - } 585 - 586 - dma_pool_free(dev->prp_page_pool, addr, dma_addr); 587 - dma_addr = next_dma_addr; 594 + if (iod->dma_len) { 595 + dma_unmap_page(dev->dev, iod->first_dma, iod->dma_len, 596 + rq_dma_dir(req)); 597 + return; 588 598 } 589 599 600 + WARN_ON_ONCE(!iod->nents); 601 + 602 + nvme_unmap_sg(dev, req); 603 + if (iod->npages == 0) 604 + dma_pool_free(dev->prp_small_pool, nvme_pci_iod_list(req)[0], 605 + iod->first_dma); 606 + else if (iod->use_sgl) 607 + nvme_free_sgls(dev, req); 608 + else 609 + nvme_free_prps(dev, req); 590 610 mempool_free(iod->sg, dev->iod_mempool); 591 611 } 592 612 ··· 683 661 __le64 *old_prp_list = prp_list; 684 662 prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma); 685 663 if (!prp_list) 686 - return BLK_STS_RESOURCE; 664 + goto free_prps; 687 665 list[iod->npages++] = prp_list; 688 666 prp_list[0] = old_prp_list[i - 1]; 689 667 old_prp_list[i - 1] = cpu_to_le64(prp_dma); ··· 703 681 dma_addr = sg_dma_address(sg); 704 682 dma_len = sg_dma_len(sg); 705 683 } 706 - 707 684 done: 708 685 cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg)); 709 686 cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma); 710 - 711 687 return BLK_STS_OK; 712 - 713 - bad_sgl: 688 + free_prps: 689 + nvme_free_prps(dev, req); 690 + 
return BLK_STS_RESOURCE; 691 + bad_sgl: 714 692 WARN(DO_ONCE(nvme_print_sgl, iod->sg, iod->nents), 715 693 "Invalid SGL for payload:%d nents:%d\n", 716 694 blk_rq_payload_bytes(req), iod->nents); ··· 782 760 783 761 sg_list = dma_pool_alloc(pool, GFP_ATOMIC, &sgl_dma); 784 762 if (!sg_list) 785 - return BLK_STS_RESOURCE; 763 + goto free_sgls; 786 764 787 765 i = 0; 788 766 nvme_pci_iod_list(req)[iod->npages++] = sg_list; ··· 795 773 } while (--entries > 0); 796 774 797 775 return BLK_STS_OK; 776 + free_sgls: 777 + nvme_free_sgls(dev, req); 778 + return BLK_STS_RESOURCE; 798 779 } 799 780 800 781 static blk_status_t nvme_setup_prp_simple(struct nvme_dev *dev, ··· 866 841 sg_init_table(iod->sg, blk_rq_nr_phys_segments(req)); 867 842 iod->nents = blk_rq_map_sg(req->q, req, iod->sg); 868 843 if (!iod->nents) 869 - goto out; 844 + goto out_free_sg; 870 845 871 846 if (is_pci_p2pdma_page(sg_page(iod->sg))) 872 847 nr_mapped = pci_p2pdma_map_sg_attrs(dev->dev, iod->sg, ··· 875 850 nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, 876 851 rq_dma_dir(req), DMA_ATTR_NO_WARN); 877 852 if (!nr_mapped) 878 - goto out; 853 + goto out_free_sg; 879 854 880 855 iod->use_sgl = nvme_pci_use_sgls(dev, req); 881 856 if (iod->use_sgl) 882 857 ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped); 883 858 else 884 859 ret = nvme_pci_setup_prps(dev, req, &cmnd->rw); 885 - out: 886 860 if (ret != BLK_STS_OK) 887 - nvme_unmap_data(dev, req); 861 + goto out_unmap_sg; 862 + return BLK_STS_OK; 863 + 864 + out_unmap_sg: 865 + nvme_unmap_sg(dev, req); 866 + out_free_sg: 867 + mempool_free(iod->sg, dev->iod_mempool); 888 868 return ret; 889 869 } 890 870 ··· 1825 1795 if (dev->cmb_size) 1826 1796 return; 1827 1797 1798 + if (NVME_CAP_CMBS(dev->ctrl.cap)) 1799 + writel(NVME_CMBMSC_CRE, dev->bar + NVME_REG_CMBMSC); 1800 + 1828 1801 dev->cmbsz = readl(dev->bar + NVME_REG_CMBSZ); 1829 1802 if (!dev->cmbsz) 1830 1803 return; ··· 1840 1807 1841 1808 if (offset > bar_size) 1842 1809 
return; 1810 + 1811 + /* 1812 + * Tell the controller about the host side address mapping the CMB, 1813 + * and enable CMB decoding for the NVMe 1.4+ scheme: 1814 + */ 1815 + if (NVME_CAP_CMBS(dev->ctrl.cap)) { 1816 + hi_lo_writeq(NVME_CMBMSC_CRE | NVME_CMBMSC_CMSE | 1817 + (pci_bus_address(pdev, bar) + offset), 1818 + dev->bar + NVME_REG_CMBMSC); 1819 + } 1843 1820 1844 1821 /* 1845 1822 * Controllers may support a CMB size larger than their BAR,
+11 -4
drivers/nvme/host/rdma.c
··· 97 97 struct completion cm_done; 98 98 bool pi_support; 99 99 int cq_size; 100 + struct mutex queue_lock; 100 101 }; 101 102 102 103 struct nvme_rdma_ctrl { ··· 580 579 int ret; 581 580 582 581 queue = &ctrl->queues[idx]; 582 + mutex_init(&queue->queue_lock); 583 583 queue->ctrl = ctrl; 584 584 if (idx && ctrl->ctrl.max_integrity_segments) 585 585 queue->pi_support = true; ··· 600 598 if (IS_ERR(queue->cm_id)) { 601 599 dev_info(ctrl->ctrl.device, 602 600 "failed to create CM ID: %ld\n", PTR_ERR(queue->cm_id)); 603 - return PTR_ERR(queue->cm_id); 601 + ret = PTR_ERR(queue->cm_id); 602 + goto out_destroy_mutex; 604 603 } 605 604 606 605 if (ctrl->ctrl.opts->mask & NVMF_OPT_HOST_TRADDR) ··· 631 628 out_destroy_cm_id: 632 629 rdma_destroy_id(queue->cm_id); 633 630 nvme_rdma_destroy_queue_ib(queue); 631 + out_destroy_mutex: 632 + mutex_destroy(&queue->queue_lock); 634 633 return ret; 635 634 } 636 635 ··· 644 639 645 640 static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue) 646 641 { 647 - if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags)) 648 - return; 649 - __nvme_rdma_stop_queue(queue); 642 + mutex_lock(&queue->queue_lock); 643 + if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags)) 644 + __nvme_rdma_stop_queue(queue); 645 + mutex_unlock(&queue->queue_lock); 650 646 } 651 647 652 648 static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue) ··· 657 651 658 652 nvme_rdma_destroy_queue_ib(queue); 659 653 rdma_destroy_id(queue->cm_id); 654 + mutex_destroy(&queue->queue_lock); 660 655 } 661 656 662 657 static void nvme_rdma_free_io_queues(struct nvme_rdma_ctrl *ctrl)
+10 -4
drivers/nvme/host/tcp.c
··· 76 76 struct work_struct io_work; 77 77 int io_cpu; 78 78 79 + struct mutex queue_lock; 79 80 struct mutex send_mutex; 80 81 struct llist_head req_list; 81 82 struct list_head send_list; ··· 1220 1219 1221 1220 sock_release(queue->sock); 1222 1221 kfree(queue->pdu); 1222 + mutex_destroy(&queue->queue_lock); 1223 1223 } 1224 1224 1225 1225 static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue) ··· 1382 1380 struct nvme_tcp_queue *queue = &ctrl->queues[qid]; 1383 1381 int ret, rcv_pdu_size; 1384 1382 1383 + mutex_init(&queue->queue_lock); 1385 1384 queue->ctrl = ctrl; 1386 1385 init_llist_head(&queue->req_list); 1387 1386 INIT_LIST_HEAD(&queue->send_list); ··· 1401 1398 if (ret) { 1402 1399 dev_err(nctrl->device, 1403 1400 "failed to create socket: %d\n", ret); 1404 - return ret; 1401 + goto err_destroy_mutex; 1405 1402 } 1406 1403 1407 1404 /* Single syn retry */ ··· 1510 1507 err_sock: 1511 1508 sock_release(queue->sock); 1512 1509 queue->sock = NULL; 1510 + err_destroy_mutex: 1511 + mutex_destroy(&queue->queue_lock); 1513 1512 return ret; 1514 1513 } 1515 1514 ··· 1539 1534 struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl); 1540 1535 struct nvme_tcp_queue *queue = &ctrl->queues[qid]; 1541 1536 1542 - if (!test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags)) 1543 - return; 1544 - __nvme_tcp_stop_queue(queue); 1537 + mutex_lock(&queue->queue_lock); 1538 + if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags)) 1539 + __nvme_tcp_stop_queue(queue); 1540 + mutex_unlock(&queue->queue_lock); 1545 1541 } 1546 1542 1547 1543 static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+6 -2
drivers/nvme/target/admin-cmd.c
··· 487 487 488 488 /* return an all zeroed buffer if we can't find an active namespace */ 489 489 ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid); 490 - if (!ns) 490 + if (!ns) { 491 + status = NVME_SC_INVALID_NS; 491 492 goto done; 493 + } 492 494 493 495 nvmet_ns_revalidate(ns); 494 496 ··· 543 541 id->nsattr |= (1 << 0); 544 542 nvmet_put_namespace(ns); 545 543 done: 546 - status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); 544 + if (!status) 545 + status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); 546 + 547 547 kfree(id); 548 548 out: 549 549 nvmet_req_complete(req, status);
+1 -1
drivers/phy/ingenic/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - obj-y += phy-ingenic-usb.o 2 + obj-$(CONFIG_PHY_INGENIC_USB) += phy-ingenic-usb.o
+3 -1
drivers/phy/mediatek/Kconfig
··· 49 49 50 50 config PHY_MTK_MIPI_DSI 51 51 tristate "MediaTek MIPI-DSI Driver" 52 - depends on ARCH_MEDIATEK && OF 52 + depends on ARCH_MEDIATEK || COMPILE_TEST 53 + depends on COMMON_CLK 54 + depends on OF 53 55 select GENERIC_PHY 54 56 help 55 57 Support MIPI DSI for Mediatek SoCs.
+13 -6
drivers/phy/motorola/phy-cpcap-usb.c
··· 662 662 generic_phy = devm_phy_create(ddata->dev, NULL, &ops); 663 663 if (IS_ERR(generic_phy)) { 664 664 error = PTR_ERR(generic_phy); 665 - return PTR_ERR(generic_phy); 665 + goto out_reg_disable; 666 666 } 667 667 668 668 phy_set_drvdata(generic_phy, ddata); 669 669 670 670 phy_provider = devm_of_phy_provider_register(ddata->dev, 671 671 of_phy_simple_xlate); 672 - if (IS_ERR(phy_provider)) 673 - return PTR_ERR(phy_provider); 672 + if (IS_ERR(phy_provider)) { 673 + error = PTR_ERR(phy_provider); 674 + goto out_reg_disable; 675 + } 674 676 675 677 error = cpcap_usb_init_optional_pins(ddata); 676 678 if (error) 677 - return error; 679 + goto out_reg_disable; 678 680 679 681 cpcap_usb_init_optional_gpios(ddata); 680 682 681 683 error = cpcap_usb_init_iio(ddata); 682 684 if (error) 683 - return error; 685 + goto out_reg_disable; 684 686 685 687 error = cpcap_usb_init_interrupts(pdev, ddata); 686 688 if (error) 687 - return error; 689 + goto out_reg_disable; 688 690 689 691 usb_add_phy_dev(&ddata->phy); 690 692 atomic_set(&ddata->active, 1); 691 693 schedule_delayed_work(&ddata->detect_work, msecs_to_jiffies(1)); 692 694 693 695 return 0; 696 + 697 + out_reg_disable: 698 + regulator_disable(ddata->vusb); 699 + 700 + return error; 694 701 } 695 702 696 703 static int cpcap_usb_phy_remove(struct platform_device *pdev)
+1 -1
drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
··· 347 347 348 348 #define D22 40 349 349 SIG_EXPR_LIST_DECL_SESG(D22, SD1CLK, SD1, SIG_DESC_SET(SCU414, 8)); 350 - SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU414, 8)); 350 + SIG_EXPR_LIST_DECL_SEMG(D22, PWM8, PWM8G0, PWM8, SIG_DESC_SET(SCU4B4, 8)); 351 351 PIN_DECL_2(D22, GPIOF0, SD1CLK, PWM8); 352 352 GROUP_DECL(PWM8G0, D22); 353 353
+4
drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
··· 920 920 err = hw->soc->bias_set(hw, desc, pullup); 921 921 if (err) 922 922 return err; 923 + } else if (hw->soc->bias_set_combo) { 924 + err = hw->soc->bias_set_combo(hw, desc, pullup, arg); 925 + if (err) 926 + return err; 923 927 } else { 924 928 return -ENOTSUPP; 925 929 }
-1
drivers/pinctrl/nomadik/pinctrl-nomadik.c
··· 949 949 } else { 950 950 int irq = chip->to_irq(chip, offset); 951 951 const int pullidx = pull ? 1 : 0; 952 - bool wake; 953 952 int val; 954 953 static const char * const pulls[] = { 955 954 "none ",
+40 -40
drivers/pinctrl/pinctrl-ingenic.c
··· 37 37 #define JZ4740_GPIO_TRIG 0x70 38 38 #define JZ4740_GPIO_FLAG 0x80 39 39 40 - #define JZ4760_GPIO_INT 0x10 41 - #define JZ4760_GPIO_PAT1 0x30 42 - #define JZ4760_GPIO_PAT0 0x40 43 - #define JZ4760_GPIO_FLAG 0x50 44 - #define JZ4760_GPIO_PEN 0x70 40 + #define JZ4770_GPIO_INT 0x10 41 + #define JZ4770_GPIO_PAT1 0x30 42 + #define JZ4770_GPIO_PAT0 0x40 43 + #define JZ4770_GPIO_FLAG 0x50 44 + #define JZ4770_GPIO_PEN 0x70 45 45 46 46 #define X1830_GPIO_PEL 0x110 47 47 #define X1830_GPIO_PEH 0x120 ··· 1688 1688 static void ingenic_gpio_set_value(struct ingenic_gpio_chip *jzgc, 1689 1689 u8 offset, int value) 1690 1690 { 1691 - if (jzgc->jzpc->info->version >= ID_JZ4760) 1692 - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_PAT0, offset, !!value); 1691 + if (jzgc->jzpc->info->version >= ID_JZ4770) 1692 + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_PAT0, offset, !!value); 1693 1693 else 1694 1694 ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, offset, !!value); 1695 1695 } ··· 1718 1718 break; 1719 1719 } 1720 1720 1721 - if (jzgc->jzpc->info->version >= ID_JZ4760) { 1722 - reg1 = JZ4760_GPIO_PAT1; 1723 - reg2 = JZ4760_GPIO_PAT0; 1721 + if (jzgc->jzpc->info->version >= ID_JZ4770) { 1722 + reg1 = JZ4770_GPIO_PAT1; 1723 + reg2 = JZ4770_GPIO_PAT0; 1724 1724 } else { 1725 1725 reg1 = JZ4740_GPIO_TRIG; 1726 1726 reg2 = JZ4740_GPIO_DIR; ··· 1758 1758 struct ingenic_gpio_chip *jzgc = gpiochip_get_data(gc); 1759 1759 int irq = irqd->hwirq; 1760 1760 1761 - if (jzgc->jzpc->info->version >= ID_JZ4760) 1762 - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, true); 1761 + if (jzgc->jzpc->info->version >= ID_JZ4770) 1762 + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_INT, irq, true); 1763 1763 else 1764 1764 ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, true); 1765 1765 ··· 1774 1774 1775 1775 ingenic_gpio_irq_mask(irqd); 1776 1776 1777 - if (jzgc->jzpc->info->version >= ID_JZ4760) 1778 - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_INT, irq, false); 1777 + if (jzgc->jzpc->info->version >= ID_JZ4770) 
1778 + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_INT, irq, false); 1779 1779 else 1780 1780 ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_SELECT, irq, false); 1781 1781 } ··· 1799 1799 irq_set_type(jzgc, irq, IRQ_TYPE_LEVEL_HIGH); 1800 1800 } 1801 1801 1802 - if (jzgc->jzpc->info->version >= ID_JZ4760) 1803 - ingenic_gpio_set_bit(jzgc, JZ4760_GPIO_FLAG, irq, false); 1802 + if (jzgc->jzpc->info->version >= ID_JZ4770) 1803 + ingenic_gpio_set_bit(jzgc, JZ4770_GPIO_FLAG, irq, false); 1804 1804 else 1805 1805 ingenic_gpio_set_bit(jzgc, JZ4740_GPIO_DATA, irq, true); 1806 1806 } ··· 1856 1856 1857 1857 chained_irq_enter(irq_chip, desc); 1858 1858 1859 - if (jzgc->jzpc->info->version >= ID_JZ4760) 1860 - flag = ingenic_gpio_read_reg(jzgc, JZ4760_GPIO_FLAG); 1859 + if (jzgc->jzpc->info->version >= ID_JZ4770) 1860 + flag = ingenic_gpio_read_reg(jzgc, JZ4770_GPIO_FLAG); 1861 1861 else 1862 1862 flag = ingenic_gpio_read_reg(jzgc, JZ4740_GPIO_FLAG); 1863 1863 ··· 1938 1938 struct ingenic_pinctrl *jzpc = jzgc->jzpc; 1939 1939 unsigned int pin = gc->base + offset; 1940 1940 1941 - if (jzpc->info->version >= ID_JZ4760) { 1942 - if (ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_INT) || 1943 - ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PAT1)) 1941 + if (jzpc->info->version >= ID_JZ4770) { 1942 + if (ingenic_get_pin_config(jzpc, pin, JZ4770_GPIO_INT) || 1943 + ingenic_get_pin_config(jzpc, pin, JZ4770_GPIO_PAT1)) 1944 1944 return GPIO_LINE_DIRECTION_IN; 1945 1945 return GPIO_LINE_DIRECTION_OUT; 1946 1946 } ··· 1991 1991 'A' + offt, idx, func); 1992 1992 1993 1993 if (jzpc->info->version >= ID_X1000) { 1994 - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); 1994 + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); 1995 1995 ingenic_shadow_config_pin(jzpc, pin, GPIO_MSK, false); 1996 - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2); 1997 - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, func & 0x1); 1996 + ingenic_shadow_config_pin(jzpc, pin, 
JZ4770_GPIO_PAT1, func & 0x2); 1997 + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_PAT0, func & 0x1); 1998 1998 ingenic_shadow_config_pin_load(jzpc, pin); 1999 - } else if (jzpc->info->version >= ID_JZ4760) { 2000 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); 1999 + } else if (jzpc->info->version >= ID_JZ4770) { 2000 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); 2001 2001 ingenic_config_pin(jzpc, pin, GPIO_MSK, false); 2002 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, func & 0x2); 2003 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, func & 0x1); 2002 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, func & 0x2); 2003 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT0, func & 0x1); 2004 2004 } else { 2005 2005 ingenic_config_pin(jzpc, pin, JZ4740_GPIO_FUNC, true); 2006 2006 ingenic_config_pin(jzpc, pin, JZ4740_GPIO_TRIG, func & 0x2); 2007 - ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func > 0); 2007 + ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, func & 0x1); 2008 2008 } 2009 2009 2010 2010 return 0; ··· 2057 2057 'A' + offt, idx, input ? 
"in" : "out"); 2058 2058 2059 2059 if (jzpc->info->version >= ID_X1000) { 2060 - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); 2060 + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); 2061 2061 ingenic_shadow_config_pin(jzpc, pin, GPIO_MSK, true); 2062 - ingenic_shadow_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input); 2062 + ingenic_shadow_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, input); 2063 2063 ingenic_shadow_config_pin_load(jzpc, pin); 2064 - } else if (jzpc->info->version >= ID_JZ4760) { 2065 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_INT, false); 2064 + } else if (jzpc->info->version >= ID_JZ4770) { 2065 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_INT, false); 2066 2066 ingenic_config_pin(jzpc, pin, GPIO_MSK, true); 2067 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT1, input); 2067 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT1, input); 2068 2068 } else { 2069 2069 ingenic_config_pin(jzpc, pin, JZ4740_GPIO_SELECT, false); 2070 2070 ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DIR, !input); ··· 2091 2091 unsigned int offt = pin / PINS_PER_GPIO_CHIP; 2092 2092 bool pull; 2093 2093 2094 - if (jzpc->info->version >= ID_JZ4760) 2095 - pull = !ingenic_get_pin_config(jzpc, pin, JZ4760_GPIO_PEN); 2094 + if (jzpc->info->version >= ID_JZ4770) 2095 + pull = !ingenic_get_pin_config(jzpc, pin, JZ4770_GPIO_PEN); 2096 2096 else 2097 2097 pull = !ingenic_get_pin_config(jzpc, pin, JZ4740_GPIO_PULL_DIS); 2098 2098 ··· 2141 2141 REG_SET(X1830_GPIO_PEH), bias << idxh); 2142 2142 } 2143 2143 2144 - } else if (jzpc->info->version >= ID_JZ4760) { 2145 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PEN, !bias); 2144 + } else if (jzpc->info->version >= ID_JZ4770) { 2145 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PEN, !bias); 2146 2146 } else { 2147 2147 ingenic_config_pin(jzpc, pin, JZ4740_GPIO_PULL_DIS, !bias); 2148 2148 } ··· 2151 2151 static void ingenic_set_output_level(struct ingenic_pinctrl *jzpc, 2152 2152 unsigned int pin, bool high) 2153 2153 { 2154 - 
if (jzpc->info->version >= ID_JZ4760) 2155 - ingenic_config_pin(jzpc, pin, JZ4760_GPIO_PAT0, high); 2154 + if (jzpc->info->version >= ID_JZ4770) 2155 + ingenic_config_pin(jzpc, pin, JZ4770_GPIO_PAT0, high); 2156 2156 else 2157 2157 ingenic_config_pin(jzpc, pin, JZ4740_GPIO_DATA, high); 2158 2158 }
+60 -36
drivers/pinctrl/qcom/pinctrl-msm.c
··· 51 51 * @dual_edge_irqs: Bitmap of irqs that need sw emulated dual edge 52 52 * detection. 53 53 * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller 54 + * @disabled_for_mux: These IRQs were disabled because we muxed away. 54 55 * @soc: Reference to soc_data of platform specific data. 55 56 * @regs: Base addresses for the TLMM tiles. 56 57 * @phys_base: Physical base address ··· 73 72 DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO); 74 73 DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO); 75 74 DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO); 75 + DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO); 76 76 77 77 const struct msm_pinctrl_soc_data *soc; 78 78 void __iomem *regs[MAX_NR_TILES]; ··· 97 95 MSM_ACCESSOR(intr_cfg) 98 96 MSM_ACCESSOR(intr_status) 99 97 MSM_ACCESSOR(intr_target) 98 + 99 + static void msm_ack_intr_status(struct msm_pinctrl *pctrl, 100 + const struct msm_pingroup *g) 101 + { 102 + u32 val = g->intr_ack_high ? BIT(g->intr_status_bit) : 0; 103 + 104 + msm_writel_intr_status(val, pctrl, g); 105 + } 100 106 101 107 static int msm_get_groups_count(struct pinctrl_dev *pctldev) 102 108 { ··· 181 171 unsigned group) 182 172 { 183 173 struct msm_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev); 174 + struct gpio_chip *gc = &pctrl->chip; 175 + unsigned int irq = irq_find_mapping(gc->irq.domain, group); 176 + struct irq_data *d = irq_get_irq_data(irq); 177 + unsigned int gpio_func = pctrl->soc->gpio_func; 184 178 const struct msm_pingroup *g; 185 179 unsigned long flags; 186 180 u32 val, mask; ··· 201 187 if (WARN_ON(i == g->nfuncs)) 202 188 return -EINVAL; 203 189 190 + /* 191 + * If an GPIO interrupt is setup on this pin then we need special 192 + * handling. Specifically interrupt detection logic will still see 193 + * the pin twiddle even when we're muxed away. 194 + * 195 + * When we see a pin with an interrupt setup on it then we'll disable 196 + * (mask) interrupts on it when we mux away until we mux back. 
Note 197 + * that disable_irq() refcounts and interrupts are disabled as long as 198 + * at least one disable_irq() has been called. 199 + */ 200 + if (d && i != gpio_func && 201 + !test_and_set_bit(d->hwirq, pctrl->disabled_for_mux)) 202 + disable_irq(irq); 203 + 204 204 raw_spin_lock_irqsave(&pctrl->lock, flags); 205 205 206 206 val = msm_readl_ctl(pctrl, g); ··· 223 195 msm_writel_ctl(val, pctrl, g); 224 196 225 197 raw_spin_unlock_irqrestore(&pctrl->lock, flags); 198 + 199 + if (d && i == gpio_func && 200 + test_and_clear_bit(d->hwirq, pctrl->disabled_for_mux)) { 201 + /* 202 + * Clear interrupts detected while not GPIO since we only 203 + * masked things. 204 + */ 205 + if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs)) 206 + irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, false); 207 + else 208 + msm_ack_intr_status(pctrl, g); 209 + 210 + enable_irq(irq); 211 + } 226 212 227 213 return 0; 228 214 } ··· 252 210 if (!g->nfuncs) 253 211 return 0; 254 212 255 - /* For now assume function 0 is GPIO because it always is */ 256 - return msm_pinmux_set_mux(pctldev, g->funcs[0], offset); 213 + return msm_pinmux_set_mux(pctldev, g->funcs[pctrl->soc->gpio_func], offset); 257 214 } 258 215 259 216 static const struct pinmux_ops msm_pinmux_ops = { ··· 815 774 raw_spin_unlock_irqrestore(&pctrl->lock, flags); 816 775 } 817 776 818 - static void msm_gpio_irq_clear_unmask(struct irq_data *d, bool status_clear) 777 + static void msm_gpio_irq_unmask(struct irq_data *d) 819 778 { 820 779 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 821 780 struct msm_pinctrl *pctrl = gpiochip_get_data(gc); ··· 832 791 g = &pctrl->soc->groups[d->hwirq]; 833 792 834 793 raw_spin_lock_irqsave(&pctrl->lock, flags); 835 - 836 - if (status_clear) { 837 - /* 838 - * clear the interrupt status bit before unmask to avoid 839 - * any erroneous interrupts that would have got latched 840 - * when the interrupt is not in use. 
841 - */ 842 - val = msm_readl_intr_status(pctrl, g); 843 - val &= ~BIT(g->intr_status_bit); 844 - msm_writel_intr_status(val, pctrl, g); 845 - } 846 794 847 795 val = msm_readl_intr_cfg(pctrl, g); 848 796 val |= BIT(g->intr_raw_status_bit); ··· 852 822 irq_chip_enable_parent(d); 853 823 854 824 if (!test_bit(d->hwirq, pctrl->skip_wake_irqs)) 855 - msm_gpio_irq_clear_unmask(d, true); 825 + msm_gpio_irq_unmask(d); 856 826 } 857 827 858 828 static void msm_gpio_irq_disable(struct irq_data *d) ··· 865 835 866 836 if (!test_bit(d->hwirq, pctrl->skip_wake_irqs)) 867 837 msm_gpio_irq_mask(d); 868 - } 869 - 870 - static void msm_gpio_irq_unmask(struct irq_data *d) 871 - { 872 - msm_gpio_irq_clear_unmask(d, false); 873 838 } 874 839 875 840 /** ··· 919 894 struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 920 895 const struct msm_pingroup *g; 921 896 unsigned long flags; 922 - u32 val; 923 897 924 898 if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) { 925 899 if (test_bit(d->hwirq, pctrl->dual_edge_irqs)) ··· 930 906 931 907 raw_spin_lock_irqsave(&pctrl->lock, flags); 932 908 933 - val = msm_readl_intr_status(pctrl, g); 934 - if (g->intr_ack_high) 935 - val |= BIT(g->intr_status_bit); 936 - else 937 - val &= ~BIT(g->intr_status_bit); 938 - msm_writel_intr_status(val, pctrl, g); 909 + msm_ack_intr_status(pctrl, g); 939 910 940 911 if (test_bit(d->hwirq, pctrl->dual_edge_irqs)) 941 912 msm_gpio_update_dual_edge_pos(pctrl, g, d); ··· 955 936 struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 956 937 const struct msm_pingroup *g; 957 938 unsigned long flags; 939 + bool was_enabled; 958 940 u32 val; 959 941 960 942 if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) { ··· 1017 997 * could cause the INTR_STATUS to be set for EDGE interrupts. 
1018 998 */ 1019 999 val = msm_readl_intr_cfg(pctrl, g); 1000 + was_enabled = val & BIT(g->intr_raw_status_bit); 1020 1001 val |= BIT(g->intr_raw_status_bit); 1021 1002 if (g->intr_detection_width == 2) { 1022 1003 val &= ~(3 << g->intr_detection_bit); ··· 1066 1045 BUG(); 1067 1046 } 1068 1047 msm_writel_intr_cfg(val, pctrl, g); 1048 + 1049 + /* 1050 + * The first time we set RAW_STATUS_EN it could trigger an interrupt. 1051 + * Clear the interrupt. This is safe because we have 1052 + * IRQCHIP_SET_TYPE_MASKED. 1053 + */ 1054 + if (!was_enabled) 1055 + msm_ack_intr_status(pctrl, g); 1069 1056 1070 1057 if (test_bit(d->hwirq, pctrl->dual_edge_irqs)) 1071 1058 msm_gpio_update_dual_edge_pos(pctrl, g, d); ··· 1128 1099 } 1129 1100 1130 1101 /* 1131 - * Clear the interrupt that may be pending before we enable 1132 - * the line. 1133 - * This is especially a problem with the GPIOs routed to the 1134 - * PDC. These GPIOs are direct-connect interrupts to the GIC. 1135 - * Disabling the interrupt line at the PDC does not prevent 1136 - * the interrupt from being latched at the GIC. The state at 1137 - * GIC needs to be cleared before enabling. 1102 + * The disable / clear-enable workaround we do in msm_pinmux_set_mux() 1103 + * only works if disable is not lazy since we only clear any bogus 1104 + * interrupt in hardware. Explicitly mark the interrupt as UNLAZY. 1138 1105 */ 1139 - if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs)) 1140 - irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, 0); 1106 + irq_set_status_flags(d->irq, IRQ_DISABLE_UNLAZY); 1141 1107 1142 1108 return 0; 1143 1109 out:
+2
drivers/pinctrl/qcom/pinctrl-msm.h
···
118 118 * @wakeirq_dual_edge_errata: If true then GPIOs using the wakeirq_map need
119 119 * to be aware that their parent can't handle dual
120 120 * edge interrupts.
121 + * @gpio_func: Which function number is GPIO (usually 0).
121 122 */
122 123 struct msm_pinctrl_soc_data {
123 124 const struct pinctrl_pin_desc *pins;
···
135 134 const struct msm_gpio_wakeirq_map *wakeirq_map;
136 135 unsigned int nwakeirq_map;
137 136 bool wakeirq_dual_edge_errata;
137 + unsigned int gpio_func;
138 138 };
139 139
140 140 extern const struct dev_pm_ops msm_pinctrl_dev_pm_ops;
+4 -4
drivers/platform/surface/Kconfig
···
5 5
6 6 menuconfig SURFACE_PLATFORMS
7 7 bool "Microsoft Surface Platform-Specific Device Drivers"
8 + depends on ACPI
8 9 default y
9 10 help
10 11 Say Y here to get to see options for platform-specific device drivers
···
30 29
31 30 config SURFACE_3_BUTTON
32 31 tristate "Power/home/volume buttons driver for Microsoft Surface 3 tablet"
33 - depends on ACPI && KEYBOARD_GPIO && I2C
32 + depends on KEYBOARD_GPIO && I2C
34 33 help
35 34 This driver handles the power/home/volume buttons on the Microsoft Surface 3 tablet.
36 35
37 36 config SURFACE_3_POWER_OPREGION
38 37 tristate "Surface 3 battery platform operation region support"
39 - depends on ACPI && I2C
38 + depends on I2C
40 39 help
41 40 This driver provides support for ACPI operation
42 41 region of the Surface 3 battery platform driver.
43 42
44 43 config SURFACE_GPE
45 44 tristate "Surface GPE/Lid Support Driver"
46 - depends on ACPI
47 45 depends on DMI
48 46 help
49 47 This driver marks the GPEs related to the ACPI lid device found on
···
52 52
53 53 config SURFACE_PRO3_BUTTON
54 54 tristate "Power/home/volume buttons driver for Microsoft Surface Pro 3/4 tablet"
55 - depends on ACPI && INPUT
55 + depends on INPUT
56 56 help
57 57 This driver handles the power/home/volume buttons on the Microsoft Surface Pro 3/4 tablet.
58 58
+2 -2
drivers/platform/surface/surface_gpe.c
···
181 181 return 0;
182 182 }
183 183
184 - static int surface_gpe_suspend(struct device *dev)
184 + static int __maybe_unused surface_gpe_suspend(struct device *dev)
185 185 {
186 186 return surface_lid_enable_wakeup(dev, true);
187 187 }
188 188
189 - static int surface_gpe_resume(struct device *dev)
189 + static int __maybe_unused surface_gpe_resume(struct device *dev)
190 190 {
191 191 return surface_lid_enable_wakeup(dev, false);
192 192 }
+1 -1
drivers/platform/x86/amd-pmc.c
···
85 85 iowrite32(val, dev->regbase + reg_offset);
86 86 }
87 87
88 - #if CONFIG_DEBUG_FS
88 + #ifdef CONFIG_DEBUG_FS
89 89 static int smu_fw_info_show(struct seq_file *s, void *unused)
90 90 {
91 91 struct amd_pmc_dev *dev = s->private;
+2 -1
drivers/platform/x86/hp-wmi.c
···
247 247 ret = bios_return->return_code;
248 248
249 249 if (ret) {
250 - if (ret != HPWMI_RET_UNKNOWN_CMDTYPE)
250 + if (ret != HPWMI_RET_UNKNOWN_COMMAND &&
251 + ret != HPWMI_RET_UNKNOWN_CMDTYPE)
251 252 pr_warn("query 0x%x returned error 0x%x\n", query, ret);
252 253 goto out_free;
253 254 }
+23 -8
drivers/platform/x86/i2c-multi-instantiate.c
···
164 164 {}
165 165 };
166 166
167 - static const struct i2c_inst_data int3515_data[] = {
168 - { "tps6598x", IRQ_RESOURCE_APIC, 0 },
169 - { "tps6598x", IRQ_RESOURCE_APIC, 1 },
170 - { "tps6598x", IRQ_RESOURCE_APIC, 2 },
171 - { "tps6598x", IRQ_RESOURCE_APIC, 3 },
172 - {}
173 - };
167 + /*
168 + * Device with _HID INT3515 (TI PD controllers) has some unresolved interrupt
169 + * issues. The most common problem seen is interrupt flood.
170 + *
171 + * There are at least two known causes. Firstly, on some boards, the
172 + * I2CSerialBus resource index does not match the Interrupt resource, i.e. they
173 + * are not one-to-one mapped like in the array below. Secondly, on some boards
174 + * the IRQ line from the PD controller is not actually connected at all. But the
175 + * interrupt flood is also seen on some boards where those are not a problem, so
176 + * there are some other problems as well.
177 + *
178 + * Because of the issues with the interrupt, the device is disabled for now. If
179 + * you wish to debug the issues, uncomment the below, and add an entry for the
180 + * INT3515 device to the i2c_multi_instance_ids table.
181 + *
182 + * static const struct i2c_inst_data int3515_data[] = {
183 + * { "tps6598x", IRQ_RESOURCE_APIC, 0 },
184 + * { "tps6598x", IRQ_RESOURCE_APIC, 1 },
185 + * { "tps6598x", IRQ_RESOURCE_APIC, 2 },
186 + * { "tps6598x", IRQ_RESOURCE_APIC, 3 },
187 + * { }
188 + * };
189 + */
174 190
175 191 /*
176 192 * Note new device-ids must also be added to i2c_multi_instantiate_ids in
···
195 179 static const struct acpi_device_id i2c_multi_inst_acpi_ids[] = {
196 180 { "BSG1160", (unsigned long)bsg1160_data },
197 181 { "BSG2150", (unsigned long)bsg2150_data },
198 - { "INT3515", (unsigned long)int3515_data },
199 182 { }
200 183 };
201 184 MODULE_DEVICE_TABLE(acpi, i2c_multi_inst_acpi_ids);
+14 -1
drivers/platform/x86/ideapad-laptop.c
···
92 92 struct dentry *debug;
93 93 unsigned long cfg;
94 94 bool has_hw_rfkill_switch;
95 + bool has_touchpad_switch;
95 96 const char *fnesc_guid;
96 97 };
97 98
···
536 535 } else if (attr == &dev_attr_fn_lock.attr) {
537 536 supported = acpi_has_method(priv->adev->handle, "HALS") &&
538 537 acpi_has_method(priv->adev->handle, "SALS");
539 - } else
538 + } else if (attr == &dev_attr_touchpad.attr)
539 + supported = priv->has_touchpad_switch;
540 + else
540 541 supported = true;
541 542
542 543 return supported ? attr->mode : 0;
···
870 867 {
871 868 unsigned long value;
872 869
870 + if (!priv->has_touchpad_switch)
871 + return;
872 +
873 873 /* Without reading from EC touchpad LED doesn't switch state */
874 874 if (!read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value)) {
875 875 /* Some IdeaPads don't really turn off touchpad - they only
···
995 989 priv->platform_device = pdev;
996 990 priv->has_hw_rfkill_switch = dmi_check_system(hw_rfkill_list);
997 991
992 + /* Most ideapads with ELAN0634 touchpad don't use EC touchpad switch */
993 + priv->has_touchpad_switch = !acpi_dev_present("ELAN0634", NULL, -1);
994 +
998 995 ret = ideapad_sysfs_init(priv);
999 996 if (ret)
1000 997 return ret;
···
1014 1005 */
1015 1006 if (!priv->has_hw_rfkill_switch)
1016 1007 write_ec_cmd(priv->adev->handle, VPCCMD_W_RF, 1);
1008 +
1009 + /* The same for Touchpad */
1010 + if (!priv->has_touchpad_switch)
1011 + write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, 1);
1017 1012
1018 1013 for (i = 0; i < IDEAPAD_RFKILL_DEV_NUM; i++)
1019 1014 if (test_bit(ideapad_rfk_data[i].cfgbit, &priv->cfg))
+6 -6
drivers/platform/x86/intel-vbtn.c
···
207 207 {
208 208 .matches = {
209 209 DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
210 - DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"),
211 - },
212 - },
213 - {
214 - .matches = {
215 - DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
216 210 DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion 13 x360 PC"),
217 211 },
218 212 },
···
214 220 .matches = {
215 221 DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
216 222 DMI_MATCH(DMI_PRODUCT_NAME, "Switch SA5-271"),
223 + },
224 + },
225 + {
226 + .matches = {
227 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
228 + DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7352"),
217 229 },
218 230 },
219 231 {} /* Array terminator */
+3 -2
drivers/platform/x86/thinkpad_acpi.c
···
8783 8783 TPACPI_Q_LNV3('N', '1', 'T', TPACPI_FAN_2CTL), /* P71 */
8784 8784 TPACPI_Q_LNV3('N', '1', 'U', TPACPI_FAN_2CTL), /* P51 */
8785 8785 TPACPI_Q_LNV3('N', '2', 'C', TPACPI_FAN_2CTL), /* P52 / P72 */
8786 + TPACPI_Q_LNV3('N', '2', 'N', TPACPI_FAN_2CTL), /* P53 / P73 */
8786 8787 TPACPI_Q_LNV3('N', '2', 'E', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (1st gen) */
8787 8788 TPACPI_Q_LNV3('N', '2', 'O', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (2nd gen) */
8788 8789 TPACPI_Q_LNV3('N', '2', 'V', TPACPI_FAN_2CTL), /* P1 / X1 Extreme (3nd gen) */
···
9952 9951 if ((palm_err == -ENODEV) && (lap_err == -ENODEV))
9953 9952 return 0;
9954 9953 /* Otherwise, if there was an error return it */
9955 - if (palm_err && (palm_err != ENODEV))
9954 + if (palm_err && (palm_err != -ENODEV))
9956 9955 return palm_err;
9957 - if (lap_err && (lap_err != ENODEV))
9956 + if (lap_err && (lap_err != -ENODEV))
9958 9957 return lap_err;
9959 9958
9960 9959 if (has_palmsensor) {
+18
drivers/platform/x86/touchscreen_dmi.c
···
263 263 .properties = digma_citi_e200_props,
264 264 };
265 265
266 + static const struct property_entry estar_beauty_hd_props[] = {
267 + PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
268 + { }
269 + };
270 +
271 + static const struct ts_dmi_data estar_beauty_hd_data = {
272 + .acpi_name = "GDIX1001:00",
273 + .properties = estar_beauty_hd_props,
274 + };
275 +
266 276 static const struct property_entry gp_electronic_t701_props[] = {
267 277 PROPERTY_ENTRY_U32("touchscreen-size-x", 960),
268 278 PROPERTY_ENTRY_U32("touchscreen-size-y", 640),
···
950 940 DMI_MATCH(DMI_SYS_VENDOR, "Digma"),
951 941 DMI_MATCH(DMI_PRODUCT_NAME, "CITI E200"),
952 942 DMI_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
943 + },
944 + },
945 + {
946 + /* Estar Beauty HD (MID 7316R) */
947 + .driver_data = (void *)&estar_beauty_hd_data,
948 + .matches = {
949 + DMI_MATCH(DMI_SYS_VENDOR, "Estar"),
950 + DMI_MATCH(DMI_PRODUCT_NAME, "eSTAR BEAUTY HD Intel Quad core"),
953 951 },
954 952 },
955 953 {
+33 -11
drivers/regulator/core.c
···
1813 1813 {
1814 1814 struct regulator_dev *r;
1815 1815 struct device *dev = rdev->dev.parent;
1816 - int ret;
1816 + int ret = 0;
1817 1817
1818 1818 /* No supply to resolve? */
1819 1819 if (!rdev->supply_name)
1820 1820 return 0;
1821 1821
1822 - /* Supply already resolved? */
1822 + /* Supply already resolved? (fast-path without locking contention) */
1823 1823 if (rdev->supply)
1824 1824 return 0;
1825 1825
···
1829 1829
1830 1830 /* Did the lookup explicitly defer for us? */
1831 1831 if (ret == -EPROBE_DEFER)
1832 - return ret;
1832 + goto out;
1833 1833
1834 1834 if (have_full_constraints()) {
1835 1835 r = dummy_regulator_rdev;
···
1837 1837 } else {
1838 1838 dev_err(dev, "Failed to resolve %s-supply for %s\n",
1839 1839 rdev->supply_name, rdev->desc->name);
1840 - return -EPROBE_DEFER;
1840 + ret = -EPROBE_DEFER;
1841 + goto out;
1841 1842 }
1842 1843 }
1843 1844
1844 1845 if (r == rdev) {
1845 1846 dev_err(dev, "Supply for %s (%s) resolved to itself\n",
1846 1847 rdev->desc->name, rdev->supply_name);
1847 - if (!have_full_constraints())
1848 - return -EINVAL;
1848 + if (!have_full_constraints()) {
1849 + ret = -EINVAL;
1850 + goto out;
1851 + }
1849 1852 r = dummy_regulator_rdev;
1850 1853 get_device(&r->dev);
1851 1854 }
···
1862 1859 if (r->dev.parent && r->dev.parent != rdev->dev.parent) {
1863 1860 if (!device_is_bound(r->dev.parent)) {
1864 1861 put_device(&r->dev);
1865 - return -EPROBE_DEFER;
1862 + ret = -EPROBE_DEFER;
1863 + goto out;
1866 1864 }
1867 1865 }
1868 1866
···
1871 1867 ret = regulator_resolve_supply(r);
1872 1868 if (ret < 0) {
1873 1869 put_device(&r->dev);
1874 - return ret;
1870 + goto out;
1871 + }
1872 +
1873 + /*
1874 + * Recheck rdev->supply with rdev->mutex lock held to avoid a race
1875 + * between rdev->supply null check and setting rdev->supply in
1876 + * set_supply() from concurrent tasks.
1877 + */
1878 + regulator_lock(rdev);
1879 +
1880 + /* Supply just resolved by a concurrent task? */
1881 + if (rdev->supply) {
1882 + regulator_unlock(rdev);
1883 + put_device(&r->dev);
1884 + goto out;
1875 1885 }
1876 1886
1877 1887 ret = set_supply(rdev, r);
1878 1888 if (ret < 0) {
1889 + regulator_unlock(rdev);
1879 1890 put_device(&r->dev);
1880 - return ret;
1891 + goto out;
1881 1892 }
1893 +
1894 + regulator_unlock(rdev);
1882 1895
1883 1896 /*
1884 1897 * In set_machine_constraints() we may have turned this regulator on
···
1907 1886 if (ret < 0) {
1908 1887 _regulator_put(rdev->supply);
1909 1888 rdev->supply = NULL;
1910 - return ret;
1889 + goto out;
1911 1891 }
1912 1892
1913 1893 return 0;
1894 + out:
1895 + return ret;
1915 1896 }
1916 1897
1917 1898 /* Internal regulator request function */
+5 -3
drivers/scsi/fnic/vnic_dev.c
···
444 444 fetch_index = ioread32(&vdev->devcmd2->wq.ctrl->fetch_index);
445 445 if (fetch_index == 0xFFFFFFFF) { /* check for hardware gone */
446 446 pr_err("error in devcmd2 init");
447 - return -ENODEV;
447 + err = -ENODEV;
448 + goto err_free_wq;
448 449 }
449 450
450 451 /*
···
461 460 err = vnic_dev_alloc_desc_ring(vdev, &vdev->devcmd2->results_ring,
462 461 DEVCMD2_RING_SIZE, DEVCMD2_DESC_SIZE);
463 462 if (err)
464 - goto err_free_wq;
463 + goto err_disable_wq;
465 464
466 465 vdev->devcmd2->result =
467 466 (struct devcmd2_result *) vdev->devcmd2->results_ring.descs;
···
482 481
483 482 err_free_desc_ring:
484 483 vnic_dev_free_desc_ring(vdev, &vdev->devcmd2->results_ring);
485 - err_free_wq:
484 + err_disable_wq:
486 485 vnic_wq_disable(&vdev->devcmd2->wq);
487 + err_free_wq:
486 487 vnic_wq_free(&vdev->devcmd2->wq);
488 488 err_free_devcmd2:
489 489 kfree(vdev->devcmd2);
+5 -3
drivers/scsi/ibmvscsi/ibmvfc.c
···
1744 1744 iu->pri_task_attr = IBMVFC_SIMPLE_TASK;
1745 1745 }
1746 1746
1747 - vfc_cmd->correlation = cpu_to_be64(evt);
1747 + vfc_cmd->correlation = cpu_to_be64((u64)evt);
1748 1748
1749 1749 if (likely(!(rc = ibmvfc_map_sg_data(cmnd, evt, vfc_cmd, vhost->dev))))
1750 1750 return ibmvfc_send_event(evt, vhost, 0);
···
2418 2418 tmf->flags = cpu_to_be16((IBMVFC_NO_MEM_DESC | IBMVFC_TMF));
2419 2419 evt->sync_iu = &rsp_iu;
2420 2420
2421 - tmf->correlation = cpu_to_be64(evt);
2421 + tmf->correlation = cpu_to_be64((u64)evt);
2422 2422
2423 2423 init_completion(&evt->comp);
2424 2424 rsp_rc = ibmvfc_send_event(evt, vhost, default_timeout);
···
3007 3007 unsigned long flags = 0;
3008 3008
3009 3009 spin_lock_irqsave(shost->host_lock, flags);
3010 - if (sdev->type == TYPE_DISK)
3010 + if (sdev->type == TYPE_DISK) {
3011 3011 sdev->allow_restart = 1;
3012 + blk_queue_rq_timeout(sdev->request_queue, 120 * HZ);
3013 + }
3012 3014 spin_unlock_irqrestore(shost->host_lock, flags);
3013 3015 return 0;
3014 3016 }
+14 -2
drivers/scsi/libfc/fc_exch.c
···
1623 1623 rc = fc_exch_done_locked(ep);
1624 1624 WARN_ON(fc_seq_exch(sp) != ep);
1625 1625 spin_unlock_bh(&ep->ex_lock);
1626 - if (!rc)
1626 + if (!rc) {
1627 1627 fc_exch_delete(ep);
1628 + } else {
1629 + FC_EXCH_DBG(ep, "ep is completed already,"
1630 + "hence skip calling the resp\n");
1631 + goto skip_resp;
1632 + }
1628 1633 }
1629 1634
1630 1635 /*
···
1648 1643 if (!fc_invoke_resp(ep, sp, fp))
1649 1644 fc_frame_free(fp);
1650 1645
1646 + skip_resp:
1651 1647 fc_exch_release(ep);
1652 1648 return;
1653 1649 rel:
···
1905 1899
1906 1900 fc_exch_hold(ep);
1907 1901
1908 - if (!rc)
1902 + if (!rc) {
1909 1903 fc_exch_delete(ep);
1904 + } else {
1905 + FC_EXCH_DBG(ep, "ep is completed already,"
1906 + "hence skip calling the resp\n");
1907 + goto skip_resp;
1908 + }
1910 1909
1911 1910 fc_invoke_resp(ep, sp, ERR_PTR(-FC_EX_CLOSED));
1911 + skip_resp:
1912 1912 fc_seq_set_resp(sp, NULL, ep->arg);
1913 1913 fc_exch_release(ep);
1914 1914 }
+2 -4
drivers/scsi/megaraid/megaraid_sas_base.c
···
8244 8244 goto out;
8245 8245 }
8246 8246
8247 + /* always store 64 bits regardless of addressing */
8247 8248 sense_ptr = (void *)cmd->frame + ioc->sense_off;
8248 - if (instance->consistent_mask_64bit)
8249 - put_unaligned_le64(sense_handle, sense_ptr);
8250 - else
8251 - put_unaligned_le32(sense_handle, sense_ptr);
8249 + put_unaligned_le64(sense_handle, sense_ptr);
8252 8250 }
8253 8251
8254 8252 /*
+8 -1
drivers/scsi/scsi_transport_srp.c
···
541 541 res = mutex_lock_interruptible(&rport->mutex);
542 542 if (res)
543 543 goto out;
544 - scsi_target_block(&shost->shost_gendev);
544 + if (rport->state != SRP_RPORT_FAIL_FAST)
545 + /*
546 + * sdev state must be SDEV_TRANSPORT_OFFLINE, transition
547 + * to SDEV_BLOCK is illegal. Calling scsi_target_unblock()
548 + * later is ok though, scsi_internal_device_unblock_nowait()
549 + * treats SDEV_TRANSPORT_OFFLINE like SDEV_BLOCK.
550 + */
551 + scsi_target_block(&shost->shost_gendev);
545 552 res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV;
546 553 pr_debug("%s (state %d): transport.reconnect() returned %d\n",
547 554 dev_name(&shost->shost_gendev), rport->state, res);
+1
drivers/scsi/ufs/Kconfig
···
72 72 config SCSI_UFSHCD_PLATFORM
73 73 tristate "Platform bus based UFS Controller support"
74 74 depends on SCSI_UFSHCD
75 + depends on HAS_IOMEM
75 76 help
76 77 This selects the UFS host controller support. Select this if
77 78 you have an UFS controller on Platform bus.
+25 -12
drivers/scsi/ufs/ufshcd.c
···
3996 3996 if (ret)
3997 3997 dev_err(hba->dev, "%s: link recovery failed, err %d",
3998 3998 __func__, ret);
3999 + else
4000 + ufshcd_clear_ua_wluns(hba);
3999 4001
4000 4002 return ret;
4001 4003 }
···
4994 4992 break;
4995 4993 } /* end of switch */
4996 4994
4997 - if ((host_byte(result) != DID_OK) && !hba->silence_err_logs)
4995 + if ((host_byte(result) != DID_OK) &&
4996 + (host_byte(result) != DID_REQUEUE) && !hba->silence_err_logs)
4998 4997 ufshcd_print_trs(hba, 1 << lrbp->task_tag, true);
4999 4998 return result;
5000 4999 }
···
6004 6001 ufshcd_scsi_unblock_requests(hba);
6005 6002 ufshcd_err_handling_unprepare(hba);
6006 6003 up(&hba->eh_sem);
6004 +
6005 + if (!err && needs_reset)
6006 + ufshcd_clear_ua_wluns(hba);
6007 6007 }
6008 6008
6009 6009 /**
···
6301 6295 intr_status = ufshcd_readl(hba, REG_INTERRUPT_STATUS);
6302 6296 }
6303 6297
6304 - if (enabled_intr_status && retval == IRQ_NONE) {
6305 - dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x\n",
6306 - __func__, intr_status);
6298 + if (enabled_intr_status && retval == IRQ_NONE &&
6299 + !ufshcd_eh_in_progress(hba)) {
6300 + dev_err(hba->dev, "%s: Unhandled interrupt 0x%08x (0x%08x, 0x%08x)\n",
6301 + __func__,
6302 + intr_status,
6303 + hba->ufs_stats.last_intr_status,
6304 + enabled_intr_status);
6307 6305 ufshcd_dump_regs(hba, 0, UFSHCI_REG_SPACE_SIZE, "host_regs: ");
6308 6306 }
6309 6307
···
6351 6341 * Even though we use wait_event() which sleeps indefinitely,
6352 6342 * the maximum wait time is bounded by %TM_CMD_TIMEOUT.
6353 6343 */
6354 - req = blk_get_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_RESERVED);
6344 + req = blk_get_request(q, REQ_OP_DRV_OUT, 0);
6345 + if (IS_ERR(req))
6346 + return PTR_ERR(req);
6347 +
6355 6348 req->end_io_data = &wait;
6356 6349 free_slot = req->tag;
6357 6350 WARN_ON_ONCE(free_slot < 0 || free_slot >= hba->nutmrs);
···
6951 6938 ufshcd_set_clk_freq(hba, true);
6952 6939
6953 6940 err = ufshcd_hba_enable(hba);
6954 - if (err)
6955 - goto out;
6956 6941
6957 6942 /* Establish the link again and restore the device */
6958 - err = ufshcd_probe_hba(hba, false);
6959 6943 if (!err)
6960 - ufshcd_clear_ua_wluns(hba);
6961 - out:
6944 + err = ufshcd_probe_hba(hba, false);
6945 +
6962 6946 if (err)
6963 6947 dev_err(hba->dev, "%s: Host init failed %d\n", __func__, err);
6964 6948 ufshcd_update_evt_hist(hba, UFS_EVT_HOST_RESET, (u32)err);
···
7726 7716 if (ret)
7727 7717 goto out;
7728 7718
7719 + ufshcd_clear_ua_wluns(hba);
7720 +
7729 7721 /* Initialize devfreq after UFS device is detected */
7730 7722 if (ufshcd_is_clkscaling_supported(hba)) {
7731 7723 memcpy(&hba->clk_scaling.saved_pwr_info.info,
···
7929 7917 pm_runtime_put_sync(hba->dev);
7930 7918 ufshcd_exit_clk_scaling(hba);
7931 7919 ufshcd_hba_exit(hba);
7932 - } else {
7933 - ufshcd_clear_ua_wluns(hba);
7934 7920 }
7935 7921 }
7936 7922
···
8785 8775 ufshcd_resume_clkscaling(hba);
8786 8776 hba->clk_gating.is_suspended = false;
8787 8777 hba->dev_info.b_rpm_dev_flush_capable = false;
8778 + ufshcd_clear_ua_wluns(hba);
8788 8779 ufshcd_release(hba);
8789 8780 out:
8790 8781 if (hba->dev_info.b_rpm_dev_flush_capable) {
···
8895 8884 hba->dev_info.b_rpm_dev_flush_capable = false;
8896 8885 cancel_delayed_work(&hba->rpm_dev_flush_recheck_work);
8897 8886 }
8887 +
8888 + ufshcd_clear_ua_wluns(hba);
8898 8889
8899 8890 /* Schedule clock gating in case of no access to UFS device yet */
8900 8891 ufshcd_release(hba);
+1 -1
drivers/sh/intc/core.c
···
214 214 d->window[k].phys = res->start;
215 215 d->window[k].size = resource_size(res);
216 216 d->window[k].virt = ioremap(res->start,
217 - resource_size(res));
217 + resource_size(res));
218 218 if (!d->window[k].virt)
219 219 goto err2;
220 220 }
+2 -12
drivers/sh/intc/virq-debugfs.c
···
16 16 #include <linux/debugfs.h>
17 17 #include "internals.h"
18 18
19 - static int intc_irq_xlate_debug(struct seq_file *m, void *priv)
19 + static int intc_irq_xlate_show(struct seq_file *m, void *priv)
20 20 {
21 21 int i;
22 22
···
37 37 return 0;
38 38 }
39 39
40 - static int intc_irq_xlate_open(struct inode *inode, struct file *file)
41 - {
42 - return single_open(file, intc_irq_xlate_debug, inode->i_private);
43 - }
44 -
45 - static const struct file_operations intc_irq_xlate_fops = {
46 - .open = intc_irq_xlate_open,
47 - .read = seq_read,
48 - .llseek = seq_lseek,
49 - .release = single_release,
50 - };
40 + DEFINE_SHOW_ATTRIBUTE(intc_irq_xlate);
51 41
52 42 static int __init intc_irq_xlate_init(void)
53 43 {
+13
drivers/soc/atmel/soc.c
···
271 271 return soc_dev;
272 272 }
273 273
274 + static const struct of_device_id at91_soc_allowed_list[] __initconst = {
275 + { .compatible = "atmel,at91rm9200", },
276 + { .compatible = "atmel,at91sam9", },
277 + { .compatible = "atmel,sama5", },
278 + { .compatible = "atmel,samv7", },
279 + { }
280 + };
281 +
274 282 static int __init atmel_soc_device_init(void)
275 283 {
284 + struct device_node *np = of_find_node_by_path("/");
285 +
286 + if (!of_match_node(at91_soc_allowed_list, np))
287 + return 0;
288 +
276 289 at91_soc_init(socs);
277 290
278 291 return 0;
+1 -1
drivers/soc/imx/Kconfig
···
13 13 depends on ARCH_MXC || COMPILE_TEST
14 14 default ARCH_MXC && ARM64
15 15 select SOC_BUS
16 - select ARM_GIC_V3 if ARCH_MXC
16 + select ARM_GIC_V3 if ARCH_MXC && ARCH_MULTI_V7
17 17 help
18 18 If you say yes here you get support for the NXP i.MX8M family
19 19 support, it will provide the SoC info like SoC family,
+2 -1
drivers/soc/litex/litex_soc_ctrl.c
···
140 140 void __iomem *base;
141 141 };
142 142
143 + #ifdef CONFIG_OF
143 144 static const struct of_device_id litex_soc_ctrl_of_match[] = {
144 145 {.compatible = "litex,soc-controller"},
145 146 {},
146 147 };
147 -
148 148 MODULE_DEVICE_TABLE(of, litex_soc_ctrl_of_match);
149 + #endif /* CONFIG_OF */
149 150
150 151 static int litex_soc_ctrl_probe(struct platform_device *pdev)
151 152 {
+2 -1
drivers/spi/spi-altera.c
···
254 254 dev_err(&pdev->dev,
255 255 "Invalid number of chipselect: %hu\n",
256 256 pdata->num_chipselect);
257 - return -EINVAL;
257 + err = -EINVAL;
258 + goto exit;
258 259 }
259 260
260 261 master->num_chipselect = pdata->num_chipselect;
+1
drivers/spi/spidev.c
···
682 682 { .compatible = "lwn,bk4" },
683 683 { .compatible = "dh,dhcom-board" },
684 684 { .compatible = "menlo,m53cpld" },
685 + { .compatible = "cisco,spi-petra" },
685 686 {},
686 687 };
687 688 MODULE_DEVICE_TABLE(of, spidev_dt_ids);
+1 -1
drivers/staging/media/hantro/hantro_v4l2.c
···
367 367
368 368 hantro_reset_fmt(raw_fmt, raw_vpu_fmt);
369 369 raw_fmt->width = encoded_fmt->width;
370 - raw_fmt->width = encoded_fmt->width;
370 + raw_fmt->height = encoded_fmt->height;
371 371 if (ctx->is_encoder)
372 372 hantro_set_fmt_out(ctx, raw_fmt);
373 373 else
+1 -1
drivers/staging/media/sunxi/cedrus/cedrus_h264.c
···
203 203 position = cedrus_buf->codec.h264.position;
204 204
205 205 sram_array[i] |= position << 1;
206 - if (ref_list[i].fields & V4L2_H264_BOTTOM_FIELD_REF)
206 + if (ref_list[i].fields == V4L2_H264_BOTTOM_FIELD_REF)
207 207 sram_array[i] |= BIT(0);
208 208 }
209 209
+3 -3
drivers/staging/rtl8723bs/include/rtw_wifi_regd.h
···
20 20 COUNTRY_CODE_MAX
21 21 };
22 22
23 - int rtw_regd_init(struct adapter *padapter,
24 - void (*reg_notifier)(struct wiphy *wiphy,
25 - struct regulatory_request *request));
23 + void rtw_regd_init(struct wiphy *wiphy,
24 + void (*reg_notifier)(struct wiphy *wiphy,
25 + struct regulatory_request *request));
26 26 void rtw_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request);
27 27
28 28
+3 -3
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
···
3211 3211 rtw_cfg80211_init_ht_capab(&bands->ht_cap, NL80211_BAND_2GHZ, rf_type);
3212 3212 }
3213 3213
3214 - /* init regulary domain */
3215 - rtw_regd_init(padapter, rtw_reg_notifier);
3216 -
3217 3214 /* copy mac_addr to wiphy */
3218 3215 memcpy(wiphy->perm_addr, padapter->eeprompriv.mac_addr, ETH_ALEN);
3219 3216
···
3324 3327 set_wiphy_dev(wiphy, dev);
3325 3328 *((struct adapter **)wiphy_priv(wiphy)) = padapter;
3326 3329 rtw_cfg80211_preinit_wiphy(padapter, wiphy);
3330 +
3331 + /* init regulary domain */
3332 + rtw_regd_init(wiphy, rtw_reg_notifier);
3327 3333
3328 3334 ret = wiphy_register(wiphy);
3329 3335 if (ret < 0) {
+3 -7
drivers/staging/rtl8723bs/os_dep/wifi_regd.c
···
139 139 _rtw_reg_apply_flags(wiphy);
140 140 }
141 141
142 - int rtw_regd_init(struct adapter *padapter,
143 - void (*reg_notifier)(struct wiphy *wiphy,
144 - struct regulatory_request *request))
142 + void rtw_regd_init(struct wiphy *wiphy,
143 + void (*reg_notifier)(struct wiphy *wiphy,
144 + struct regulatory_request *request))
145 145 {
146 - struct wiphy *wiphy = padapter->rtw_wdev->wiphy;
147 -
148 146 _rtw_regd_init_wiphy(NULL, wiphy, reg_notifier);
149 -
150 - return 0;
151 147 }
152 148
153 149 void rtw_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
+8 -3
drivers/target/target_core_user.c
···
562 562
563 563 static inline void tcmu_free_cmd(struct tcmu_cmd *tcmu_cmd)
564 564 {
565 - if (tcmu_cmd->se_cmd)
566 - tcmu_cmd->se_cmd->priv = NULL;
567 565 kfree(tcmu_cmd->dbi);
568 566 kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
569 567 }
···
1172 1174 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
1173 1175
1174 1176 mutex_lock(&udev->cmdr_lock);
1175 - se_cmd->priv = tcmu_cmd;
1176 1177 if (!(se_cmd->transport_state & CMD_T_ABORTED))
1177 1178 ret = queue_cmd_ring(tcmu_cmd, &scsi_ret);
1178 1179 if (ret < 0)
1179 1180 tcmu_free_cmd(tcmu_cmd);
1181 + else
1182 + se_cmd->priv = tcmu_cmd;
1180 1183 mutex_unlock(&udev->cmdr_lock);
1181 1184 return scsi_ret;
1182 1185 }
···
1240 1241
1241 1242 list_del_init(&cmd->queue_entry);
1242 1243 tcmu_free_cmd(cmd);
1244 + se_cmd->priv = NULL;
1243 1245 target_complete_cmd(se_cmd, SAM_STAT_TASK_ABORTED);
1244 1246 unqueued = true;
1245 1247 }
···
1332 1332 }
1333 1333
1334 1334 done:
1335 + se_cmd->priv = NULL;
1335 1336 if (read_len_valid) {
1336 1337 pr_debug("read_len = %d\n", read_len);
1337 1338 target_complete_cmd_with_length(cmd->se_cmd,
···
1479 1478 se_cmd = cmd->se_cmd;
1480 1479 tcmu_free_cmd(cmd);
1481 1480
1481 + se_cmd->priv = NULL;
1482 1482 target_complete_cmd(se_cmd, SAM_STAT_TASK_SET_FULL);
1483 1483 }
1484 1484
···
1594 1592 * removed then LIO core will do the right thing and
1595 1593 * fail the retry.
1596 1594 */
1595 + tcmu_cmd->se_cmd->priv = NULL;
1597 1596 target_complete_cmd(tcmu_cmd->se_cmd, SAM_STAT_BUSY);
1598 1597 tcmu_free_cmd(tcmu_cmd);
1599 1598 continue;
···
1608 1605 * Ignore scsi_ret for now. target_complete_cmd
1609 1606 * drops it.
1610 1607 */
1608 + tcmu_cmd->se_cmd->priv = NULL;
1611 1609 target_complete_cmd(tcmu_cmd->se_cmd,
1612 1610 SAM_STAT_CHECK_CONDITION);
1613 1611 tcmu_free_cmd(tcmu_cmd);
···
2216 2212 if (!test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) {
2217 2213 WARN_ON(!cmd->se_cmd);
2218 2214 list_del_init(&cmd->queue_entry);
2215 + cmd->se_cmd->priv = NULL;
2219 2216 if (err_level == 1) {
2220 2217 /*
2221 2218 * Userspace was not able to start the
+3 -1
drivers/tee/optee/call.c
···
7 7 #include <linux/err.h>
8 8 #include <linux/errno.h>
9 9 #include <linux/mm.h>
10 + #include <linux/sched.h>
10 11 #include <linux/slab.h>
11 12 #include <linux/tee_drv.h>
12 13 #include <linux/types.h>
···
149 148 */
150 149 optee_cq_wait_for_completion(&optee->call_queue, &w);
151 150 } else if (OPTEE_SMC_RETURN_IS_RPC(res.a0)) {
152 - might_sleep();
151 + if (need_resched())
152 + cond_resched();
153 153 param.a0 = res.a0;
154 154 param.a1 = res.a1;
155 155 param.a2 = res.a2;
+1 -1
drivers/thunderbolt/icm.c
···
2316 2316
2317 2317 if (auth && auth->reply.route_hi == sw->config.route_hi &&
2318 2318 auth->reply.route_lo == sw->config.route_lo) {
2319 - tb_dbg(tb, "NVM_AUTH found for %llx flags 0x%#x status %#x\n",
2319 + tb_dbg(tb, "NVM_AUTH found for %llx flags %#x status %#x\n",
2320 2320 tb_route(sw), auth->reply.hdr.flags, auth->reply.status);
2321 2321 if (auth->reply.hdr.flags & ICM_FLAGS_ERROR)
2322 2322 ret = -EIO;
+2 -5
drivers/tty/n_tty.c
···
2081 2081 return 0;
2082 2082 }
2083 2083
2084 - extern ssize_t redirected_tty_write(struct file *, const char __user *,
2085 - size_t, loff_t *);
2086 -
2087 2084 /**
2088 2085 * job_control - check job control
2089 2086 * @tty: tty
···
2102 2105 /* NOTE: not yet done after every sleep pending a thorough
2103 2106 check of the logic of this change. -- jlc */
2104 2107 /* don't stop on /dev/console */
2105 - if (file->f_op->write == redirected_tty_write)
2108 + if (file->f_op->write_iter == redirected_tty_write)
2106 2109 return 0;
2107 2110
2108 2111 return __tty_check_change(tty, SIGTTIN);
···
2306 2309 ssize_t retval = 0;
2307 2310
2308 2311 /* Job control check -- must be done at start (POSIX.1 7.1.1.4). */
2309 - if (L_TOSTOP(tty) && file->f_op->write != redirected_tty_write) {
2312 + if (L_TOSTOP(tty) && file->f_op->write_iter != redirected_tty_write) {
2310 2313 retval = tty_check_change(tty);
2311 2314 if (retval)
2312 2315 return retval;
+9 -1
drivers/tty/serial/mvebu-uart.c
···
648 648 (val & STAT_TX_RDY(port)), 1, 10000);
649 649 }
650 650
651 + static void wait_for_xmite(struct uart_port *port)
652 + {
653 + u32 val;
654 +
655 + readl_poll_timeout_atomic(port->membase + UART_STAT, val,
656 + (val & STAT_TX_EMP), 1, 10000);
657 + }
658 +
651 659 static void mvebu_uart_console_putchar(struct uart_port *port, int ch)
652 660 {
653 661 wait_for_xmitr(port);
···
683 675
684 676 uart_console_write(port, s, count, mvebu_uart_console_putchar);
685 677
686 - wait_for_xmitr(port);
678 + wait_for_xmite(port);
687 679
688 680 if (ier)
689 681 writel(ier, port->membase + UART_CTRL(port));
+27 -24
drivers/tty/tty_io.c
···
143 143 DEFINE_MUTEX(tty_mutex);
144 144
145 145 static ssize_t tty_read(struct file *, char __user *, size_t, loff_t *);
146 - static ssize_t tty_write(struct file *, const char __user *, size_t, loff_t *);
147 - ssize_t redirected_tty_write(struct file *, const char __user *,
148 - size_t, loff_t *);
146 + static ssize_t tty_write(struct kiocb *, struct iov_iter *);
149 147 static __poll_t tty_poll(struct file *, poll_table *);
150 148 static int tty_open(struct inode *, struct file *);
151 - long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
152 149 #ifdef CONFIG_COMPAT
153 150 static long tty_compat_ioctl(struct file *file, unsigned int cmd,
154 151 unsigned long arg);
···
435 438 return 0;
436 439 }
437 440
438 - static ssize_t hung_up_tty_write(struct file *file, const char __user *buf,
439 - size_t count, loff_t *ppos)
441 + static ssize_t hung_up_tty_write(struct kiocb *iocb, struct iov_iter *from)
440 442 {
441 443 return -EIO;
442 444 }
···
474 478 static const struct file_operations tty_fops = {
475 479 .llseek = no_llseek,
476 480 .read = tty_read,
477 - .write = tty_write,
481 + .write_iter = tty_write,
482 + .splice_write = iter_file_splice_write,
478 483 .poll = tty_poll,
479 484 .unlocked_ioctl = tty_ioctl,
480 485 .compat_ioctl = tty_compat_ioctl,
···
488 491 static const struct file_operations console_fops = {
489 492 .llseek = no_llseek,
490 493 .read = tty_read,
491 - .write = redirected_tty_write,
494 + .write_iter = redirected_tty_write,
495 + .splice_write = iter_file_splice_write,
492 496 .poll = tty_poll,
493 497 .unlocked_ioctl = tty_ioctl,
494 498 .compat_ioctl = tty_compat_ioctl,
···
501 503 static const struct file_operations hung_up_tty_fops = {
502 504 .llseek = no_llseek,
503 505 .read = hung_up_tty_read,
504 - .write = hung_up_tty_write,
506 + .write_iter = hung_up_tty_write,
505 507 .poll = hung_up_tty_poll,
506 508 .unlocked_ioctl = hung_up_tty_ioctl,
507 509 .compat_ioctl = hung_up_tty_compat_ioctl,
···
604 606 /* This breaks for file handles being sent over AF_UNIX sockets ? */ 605 607 list_for_each_entry(priv, &tty->tty_files, list) { 606 608 filp = priv->file; 607 - if (filp->f_op->write == redirected_tty_write) 609 + if (filp->f_op->write_iter == redirected_tty_write) 608 610 cons_filp = filp; 609 - if (filp->f_op->write != tty_write) 611 + if (filp->f_op->write_iter != tty_write) 610 612 continue; 611 613 closecount++; 612 614 __tty_fasync(-1, filp, 0); /* can't block */ ··· 899 901 ssize_t (*write)(struct tty_struct *, struct file *, const unsigned char *, size_t), 900 902 struct tty_struct *tty, 901 903 struct file *file, 902 - const char __user *buf, 903 - size_t count) 904 + struct iov_iter *from) 904 905 { 906 + size_t count = iov_iter_count(from); 905 907 ssize_t ret, written = 0; 906 908 unsigned int chunk; 907 909 ··· 953 955 size_t size = count; 954 956 if (size > chunk) 955 957 size = chunk; 958 + 956 959 ret = -EFAULT; 957 - if (copy_from_user(tty->write_buf, buf, size)) 960 + if (copy_from_iter(tty->write_buf, size, from) != size) 958 961 break; 962 + 959 963 ret = write(tty, file, tty->write_buf, size); 960 964 if (ret <= 0) 961 965 break; 966 + 967 + /* FIXME! Have Al check this! */ 968 + if (ret != size) 969 + iov_iter_revert(from, size-ret); 970 + 962 971 written += ret; 963 - buf += ret; 964 972 count -= ret; 965 973 if (!count) 966 974 break; ··· 1026 1022 * write method will not be invoked in parallel for each device. 
1027 1023 */ 1028 1024 1029 - static ssize_t tty_write(struct file *file, const char __user *buf, 1030 - size_t count, loff_t *ppos) 1025 + static ssize_t tty_write(struct kiocb *iocb, struct iov_iter *from) 1031 1026 { 1027 + struct file *file = iocb->ki_filp; 1032 1028 struct tty_struct *tty = file_tty(file); 1033 1029 struct tty_ldisc *ld; 1034 1030 ssize_t ret; ··· 1042 1038 tty_err(tty, "missing write_room method\n"); 1043 1039 ld = tty_ldisc_ref_wait(tty); 1044 1040 if (!ld) 1045 - return hung_up_tty_write(file, buf, count, ppos); 1041 + return hung_up_tty_write(iocb, from); 1046 1042 if (!ld->ops->write) 1047 1043 ret = -EIO; 1048 1044 else 1049 - ret = do_tty_write(ld->ops->write, tty, file, buf, count); 1045 + ret = do_tty_write(ld->ops->write, tty, file, from); 1050 1046 tty_ldisc_deref(ld); 1051 1047 return ret; 1052 1048 } 1053 1049 1054 - ssize_t redirected_tty_write(struct file *file, const char __user *buf, 1055 - size_t count, loff_t *ppos) 1050 + ssize_t redirected_tty_write(struct kiocb *iocb, struct iov_iter *iter) 1056 1051 { 1057 1052 struct file *p = NULL; 1058 1053 ··· 1062 1059 1063 1060 if (p) { 1064 1061 ssize_t res; 1065 - res = vfs_write(p, buf, count, &p->f_pos); 1062 + res = vfs_iocb_iter_write(p, iocb, iter); 1066 1063 fput(p); 1067 1064 return res; 1068 1065 } 1069 - return tty_write(file, buf, count, ppos); 1066 + return tty_write(iocb, iter); 1070 1067 } 1071 1068 1072 1069 /* ··· 2298 2295 { 2299 2296 if (!capable(CAP_SYS_ADMIN)) 2300 2297 return -EPERM; 2301 - if (file->f_op->write == redirected_tty_write) { 2298 + if (file->f_op->write_iter == redirected_tty_write) { 2302 2299 struct file *f; 2303 2300 spin_lock(&redirect_lock); 2304 2301 f = redirect;
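The do_tty_write() hunk above copies a chunk out of the iterator, hands it to the line discipline, and on a short write rewinds the iterator with iov_iter_revert() so the unconsumed bytes are retried. A minimal userspace sketch of that bookkeeping, with all names invented for the example (the real code uses the kernel's iov_iter API, not this toy struct):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for an iov_iter: "pos" tracks how much of "buf" has been
 * consumed, the way copy_from_iter() advances the real iterator. */
struct toy_iter {
	const char *buf;
	size_t pos;	/* bytes already consumed from buf */
	size_t len;	/* total bytes available */
};

static void toy_copy(struct toy_iter *it, char *dst, size_t size)
{
	memcpy(dst, it->buf + it->pos, size);
	it->pos += size;	/* copying advances the iterator */
}

static void toy_revert(struct toy_iter *it, size_t n)
{
	it->pos -= n;		/* models iov_iter_revert() */
}

/* One loop iteration where the write callback only accepted "accepted"
 * of the "size" bytes copied out: rewind by size - accepted, exactly the
 * adjustment the patched do_tty_write() makes. */
static size_t one_chunk(struct toy_iter *it, size_t size, size_t accepted)
{
	char tmp[64];

	toy_copy(it, tmp, size);
	if (accepted != size)
		toy_revert(it, size - accepted);
	return accepted;
}
```

After a short write the iterator's position sits at the first byte the line discipline did not take, so the next iteration resumes there rather than losing data.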
+11 -11
drivers/usb/cdns3/cdns3-imx.c
··· 185 185 }
186 186
187 187 data->num_clks = ARRAY_SIZE(imx_cdns3_core_clks);
188 - data->clks = (struct clk_bulk_data *)imx_cdns3_core_clks;
188 + data->clks = devm_kmemdup(dev, imx_cdns3_core_clks,
189 + sizeof(imx_cdns3_core_clks), GFP_KERNEL);
190 + if (!data->clks)
191 + return -ENOMEM;
192 +
189 193 ret = devm_clk_bulk_get(dev, data->num_clks, data->clks);
190 194 if (ret)
191 195 return ret;
··· 218 214 return ret;
219 215 }
220 216
221 - static int cdns_imx_remove_core(struct device *dev, void *data)
222 - {
223 - struct platform_device *pdev = to_platform_device(dev);
224 -
225 - platform_device_unregister(pdev);
226 -
227 - return 0;
228 - }
229 -
230 217 static int cdns_imx_remove(struct platform_device *pdev)
231 218 {
232 219 struct device *dev = &pdev->dev;
220 + struct cdns_imx *data = dev_get_drvdata(dev);
233 221
234 - device_for_each_child(dev, NULL, cdns_imx_remove_core);
222 + pm_runtime_get_sync(dev);
223 + of_platform_depopulate(dev);
224 + clk_bulk_disable_unprepare(data->num_clks, data->clks);
225 + pm_runtime_disable(dev);
226 + pm_runtime_put_noidle(dev);
235 227 platform_set_drvdata(pdev, NULL);
236 228
237 229 return 0;
+4 -1
drivers/usb/gadget/udc/aspeed-vhub/epn.c
··· 420 420 u32 state, reg, loops;
421 421
422 422 /* Stop DMA activity */
423 - writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
423 + if (ep->epn.desc_mode)
424 + writel(VHUB_EP_DMA_CTRL_RESET, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
425 + else
426 + writel(0, ep->epn.regs + AST_VHUB_EP_DMA_CTLSTAT);
424 427
425 428 /* Wait for it to complete */
426 429 for (loops = 0; loops < 1000; loops++) {
+1 -1
drivers/usb/gadget/udc/bdc/Kconfig
··· 17 17 comment "Platform Support"
18 18 config USB_BDC_PCI
19 19 tristate "BDC support for PCIe based platforms"
20 - depends on USB_PCI
20 + depends on USB_PCI && BROKEN
21 21 default USB_BDC_UDC
22 22 help
23 23 Enable support for platforms which have BDC connected through PCIe, such as Lego3 FPGA platform.
+10 -3
drivers/usb/gadget/udc/core.c
··· 1529 1529 struct device_attribute *attr, const char *buf, size_t n)
1530 1530 {
1531 1531 struct usb_udc *udc = container_of(dev, struct usb_udc, dev);
1532 + ssize_t ret;
1532 1533
1534 + mutex_lock(&udc_lock);
1533 1535 if (!udc->driver) {
1534 1536 dev_err(dev, "soft-connect without a gadget driver\n");
1535 - return -EOPNOTSUPP;
1537 + ret = -EOPNOTSUPP;
1538 + goto out;
1536 1539 }
1537 1540
1538 1541 if (sysfs_streq(buf, "connect")) {
··· 1546 1543 usb_gadget_udc_stop(udc);
1547 1544 } else {
1548 1545 dev_err(dev, "unsupported command '%s'\n", buf);
1549 - return -EINVAL;
1546 + ret = -EINVAL;
1547 + goto out;
1550 1548 }
1551 1549
1552 - return n;
1550 + ret = n;
1551 + out:
1552 + mutex_unlock(&udc_lock);
1553 + return ret;
1553 1554 }
1554 1555 static DEVICE_ATTR_WO(soft_connect);
1555 1556
+7 -3
drivers/usb/gadget/udc/dummy_hcd.c
··· 2270 2270 }
2271 2271 fallthrough;
2272 2272 case USB_PORT_FEAT_RESET:
2273 + if (!(dum_hcd->port_status & USB_PORT_STAT_CONNECTION))
2274 + break;
2273 2275 /* if it's already enabled, disable */
2274 2276 if (hcd->speed == HCD_USB3) {
2275 - dum_hcd->port_status = 0;
2276 2277 dum_hcd->port_status =
2277 2278 (USB_SS_PORT_STAT_POWER |
2278 2279 USB_PORT_STAT_CONNECTION |
2279 2280 USB_PORT_STAT_RESET);
2280 - } else
2281 + } else {
2281 2282 dum_hcd->port_status &= ~(USB_PORT_STAT_ENABLE
2282 2283 | USB_PORT_STAT_LOW_SPEED
2283 2284 | USB_PORT_STAT_HIGH_SPEED);
2285 + dum_hcd->port_status |= USB_PORT_STAT_RESET;
2286 + }
2284 2287 /*
2285 2288 * We want to reset device status. All but the
2286 2289 * Self powered feature
··· 2295 2292 * interval? Is it still 50msec as for HS?
2296 2293 */
2297 2294 dum_hcd->re_timeout = jiffies + msecs_to_jiffies(50);
2298 - fallthrough;
2295 + set_link_state(dum_hcd);
2296 + break;
2299 2297 case USB_PORT_FEAT_C_CONNECTION:
2300 2298 case USB_PORT_FEAT_C_RESET:
2301 2299 case USB_PORT_FEAT_C_ENABLE:
+12
drivers/usb/host/ehci-hcd.c
··· 574 574 struct ehci_hcd *ehci = hcd_to_ehci (hcd);
575 575 u32 temp;
576 576 u32 hcc_params;
577 + int rc;
577 578
578 579 hcd->uses_new_polling = 1;
579 580
··· 630 629 down_write(&ehci_cf_port_reset_rwsem);
631 630 ehci->rh_state = EHCI_RH_RUNNING;
632 631 ehci_writel(ehci, FLAG_CF, &ehci->regs->configured_flag);
632 +
633 + /* Wait until HC become operational */
633 634 ehci_readl(ehci, &ehci->regs->command); /* unblock posted writes */
634 635 msleep(5);
636 + rc = ehci_handshake(ehci, &ehci->regs->status, STS_HALT, 0, 100 * 1000);
637 +
635 638 up_write(&ehci_cf_port_reset_rwsem);
639 +
640 + if (rc) {
641 + ehci_err(ehci, "USB %x.%x, controller refused to start: %d\n",
642 + ((ehci->sbrn & 0xf0)>>4), (ehci->sbrn & 0x0f), rc);
643 + return rc;
644 + }
645 +
636 646 ehci->last_periodic_enable = ktime_get_real();
637 647
638 648 temp = HC_VERSION(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase));
+3
drivers/usb/host/ehci-hub.c
··· 345 345
346 346 unlink_empty_async_suspended(ehci);
347 347
348 + /* Some Synopsys controllers mistakenly leave IAA turned on */
349 + ehci_writel(ehci, STS_IAA, &ehci->regs->status);
350 +
348 351 /* Any IAA cycle that started before the suspend is now invalid */
349 352 end_iaa_cycle(ehci);
350 353 ehci_handle_start_intr_unlinks(ehci);
+2
drivers/usb/host/xhci-ring.c
··· 2931 2931 trb->field[0] = cpu_to_le32(field1);
2932 2932 trb->field[1] = cpu_to_le32(field2);
2933 2933 trb->field[2] = cpu_to_le32(field3);
2934 + /* make sure TRB is fully written before giving it to the controller */
2935 + wmb();
2934 2936 trb->field[3] = cpu_to_le32(field4);
2935 2937
2936 2938 trace_xhci_queue_trb(ring, trb);
+7
drivers/usb/host/xhci-tegra.c
··· 623 623 enable);
624 624 if (err < 0)
625 625 break;
626 +
627 + /*
628 + * wait 500us for LFPS detector to be disabled before
629 + * sending ACK
630 + */
631 + if (!enable)
632 + usleep_range(500, 1000);
626 633 }
627 634
628 635 if (err < 0) {
+31
drivers/xen/xenbus/xenbus_probe.c
··· 714 714 #endif
715 715 }
716 716
717 + static int xenbus_probe_thread(void *unused)
718 + {
719 + DEFINE_WAIT(w);
720 +
721 + /*
722 + * We actually just want to wait for *any* trigger of xb_waitq,
723 + * and run xenbus_probe() the moment it occurs.
724 + */
725 + prepare_to_wait(&xb_waitq, &w, TASK_INTERRUPTIBLE);
726 + schedule();
727 + finish_wait(&xb_waitq, &w);
728 +
729 + DPRINTK("probing");
730 + xenbus_probe();
731 + return 0;
732 + }
733 +
717 734 static int __init xenbus_probe_initcall(void)
718 735 {
719 736 /*
··· 742 725 !xs_hvm_defer_init_for_callback()))
743 726 xenbus_probe();
744 727
728 + /*
729 + * For XS_LOCAL, spawn a thread which will wait for xenstored
730 + * or a xenstore-stubdom to be started, then probe. It will be
731 + * triggered when communication starts happening, by waiting
732 + * on xb_waitq.
733 + */
734 + if (xen_store_domain_type == XS_LOCAL) {
735 + struct task_struct *probe_task;
736 +
737 + probe_task = kthread_run(xenbus_probe_thread, NULL,
738 + "xenbus_probe");
739 + if (IS_ERR(probe_task))
740 + return PTR_ERR(probe_task);
741 + }
745 742 return 0;
746 743 }
747 744 device_initcall(xenbus_probe_initcall);
+1 -1
fs/btrfs/backref.c
··· 3117 3117 list_del_init(&lower->list);
3118 3118 if (lower == node)
3119 3119 node = NULL;
3120 - btrfs_backref_free_node(cache, lower);
3120 + btrfs_backref_drop_node(cache, lower);
3121 3121 }
3122 3122
3123 3123 btrfs_backref_cleanup_node(cache, node);
+2 -1
fs/btrfs/block-group.c
··· 2669 2669 * Go through delayed refs for all the stuff we've just kicked off
2670 2670 * and then loop back (just once)
2671 2671 */
2672 - ret = btrfs_run_delayed_refs(trans, 0);
2672 + if (!ret)
2673 + ret = btrfs_run_delayed_refs(trans, 0);
2673 2674 if (!ret && loops == 0) {
2674 2675 loops++;
2675 2676 spin_lock(&cur_trans->dirty_bgs_lock);
+9 -1
fs/btrfs/extent-tree.c
··· 5549 5549 goto out_free;
5550 5550 }
5551 5551
5552 - trans = btrfs_start_transaction(tree_root, 0);
5552 + /*
5553 + * Use join to avoid potential EINTR from transaction
5554 + * start. See wait_reserve_ticket and the whole
5555 + * reservation callchain.
5556 + */
5557 + if (for_reloc)
5558 + trans = btrfs_join_transaction(tree_root);
5559 + else
5560 + trans = btrfs_start_transaction(tree_root, 0);
5553 5561 if (IS_ERR(trans)) {
5554 5562 err = PTR_ERR(trans);
5555 5563 goto out_free;
+15
fs/btrfs/send.c
··· 5512 5512 break;
5513 5513 offset += clone_len;
5514 5514 clone_root->offset += clone_len;
5515 +
5516 + /*
5517 + * If we are cloning from the file we are currently processing,
5518 + * and using the send root as the clone root, we must stop once
5519 + * the current clone offset reaches the current eof of the file
5520 + * at the receiver, otherwise we would issue an invalid clone
5521 + * operation (source range going beyond eof) and cause the
5522 + * receiver to fail. So if we reach the current eof, bail out
5523 + * and fallback to a regular write.
5524 + */
5525 + if (clone_root->root == sctx->send_root &&
5526 + clone_root->ino == sctx->cur_ino &&
5527 + clone_root->offset >= sctx->cur_inode_next_write_offset)
5528 + break;
5529 +
5515 5530 data_offset += clone_len;
5516 5531 next:
5517 5532 path->slots[0]++;
-8
fs/btrfs/transaction.c
··· 2265 2265 btrfs_free_log_root_tree(trans, fs_info);
2266 2266
2267 2267 /*
2268 - * commit_fs_roots() can call btrfs_save_ino_cache(), which generates
2269 - * new delayed refs. Must handle them or qgroup can be wrong.
2270 - */
2271 - ret = btrfs_run_delayed_refs(trans, (unsigned long)-1);
2272 - if (ret)
2273 - goto unlock_tree_log;
2274 -
2275 - /*
2276 2268 * Since fs roots are all committed, we can get a quite accurate
2277 2269 * new_roots. So let's do quota accounting.
2278 2270 */
+2
fs/btrfs/volumes.c
··· 4317 4317 btrfs_warn(fs_info,
4318 4318 "balance: cannot set exclusive op status, resume manually");
4319 4319
4320 + btrfs_release_path(path);
4321 +
4320 4322 mutex_lock(&fs_info->balance_mutex);
4321 4323 BUG_ON(fs_info->balance_ctl);
4322 4324 spin_lock(&fs_info->balance_lock);
+17 -17
fs/ceph/mds_client.c
··· 5038 5038 return;
5039 5039 }
5040 5040
5041 - static struct ceph_connection *con_get(struct ceph_connection *con)
5041 + static struct ceph_connection *mds_get_con(struct ceph_connection *con)
5042 5042 {
5043 5043 struct ceph_mds_session *s = con->private;
··· 5047 5047 return NULL;
5048 5048 }
5049 5049
5050 - static void con_put(struct ceph_connection *con)
5050 + static void mds_put_con(struct ceph_connection *con)
5051 5051 {
5052 5052 struct ceph_mds_session *s = con->private;
··· 5058 5058 * if the client is unresponsive for long enough, the mds will kill
5059 5059 * the session entirely.
5060 5060 */
5061 - static void peer_reset(struct ceph_connection *con)
5061 + static void mds_peer_reset(struct ceph_connection *con)
5062 5062 {
5063 5063 struct ceph_mds_session *s = con->private;
5064 5064 struct ceph_mds_client *mdsc = s->s_mdsc;
··· 5067 5067 send_mds_reconnect(mdsc, s);
5068 5068 }
5069 5069
5070 - static void dispatch(struct ceph_connection *con, struct ceph_msg *msg)
5070 + static void mds_dispatch(struct ceph_connection *con, struct ceph_msg *msg)
5071 5071 {
5072 5072 struct ceph_mds_session *s = con->private;
5073 5073 struct ceph_mds_client *mdsc = s->s_mdsc;
··· 5125 5125 * Note: returned pointer is the address of a structure that's
5126 5126 * managed separately. Caller must *not* attempt to free it.
5127 5127 */
5128 - static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con,
5129 - int *proto, int force_new)
5128 + static struct ceph_auth_handshake *
5129 + mds_get_authorizer(struct ceph_connection *con, int *proto, int force_new)
5130 5130 {
5131 5131 struct ceph_mds_session *s = con->private;
5132 5132 struct ceph_mds_client *mdsc = s->s_mdsc;
··· 5142 5142 return auth;
5143 5143 }
5144 5144
5145 - static int add_authorizer_challenge(struct ceph_connection *con,
5145 + static int mds_add_authorizer_challenge(struct ceph_connection *con,
5146 5146 void *challenge_buf, int challenge_buf_len)
5147 5147 {
5148 5148 struct ceph_mds_session *s = con->private;
··· 5153 5153 challenge_buf, challenge_buf_len);
5154 5154 }
5155 5155
5156 - static int verify_authorizer_reply(struct ceph_connection *con)
5156 + static int mds_verify_authorizer_reply(struct ceph_connection *con)
5157 5157 {
5158 5158 struct ceph_mds_session *s = con->private;
5159 5159 struct ceph_mds_client *mdsc = s->s_mdsc;
··· 5165 5165 NULL, NULL, NULL, NULL);
5166 5166 }
5167 5167
5168 - static int invalidate_authorizer(struct ceph_connection *con)
5168 + static int mds_invalidate_authorizer(struct ceph_connection *con)
5169 5169 {
5170 5170 struct ceph_mds_session *s = con->private;
5171 5171 struct ceph_mds_client *mdsc = s->s_mdsc;
··· 5288 5288 }
5289 5289
5290 5290 static const struct ceph_connection_operations mds_con_ops = {
5291 - .get = con_get,
5292 - .put = con_put,
5293 - .dispatch = dispatch,
5294 - .get_authorizer = get_authorizer,
5295 - .add_authorizer_challenge = add_authorizer_challenge,
5296 - .verify_authorizer_reply = verify_authorizer_reply,
5297 - .invalidate_authorizer = invalidate_authorizer,
5298 - .peer_reset = peer_reset,
5291 + .get = mds_get_con,
5292 + .put = mds_put_con,
5299 5293 .alloc_msg = mds_alloc_msg,
5294 + .dispatch = mds_dispatch,
5295 + .peer_reset = mds_peer_reset,
5296 + .get_authorizer = mds_get_authorizer,
5297 + .add_authorizer_challenge = mds_add_authorizer_challenge,
5298 + .verify_authorizer_reply = mds_verify_authorizer_reply,
5299 + .invalidate_authorizer = mds_invalidate_authorizer,
5300 5300 .sign_message = mds_sign_message,
5301 5301 .check_message_signature = mds_check_message_signature,
5302 5302 .get_auth_request = mds_get_auth_request,
+2 -2
fs/cifs/connect.c
··· 2195 2195 if (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING)
2196 2196 tcon->nohandlecache = ctx->nohandlecache;
2197 2197 else
2198 - tcon->nohandlecache = 1;
2198 + tcon->nohandlecache = true;
2199 2199 tcon->nodelete = ctx->nodelete;
2200 2200 tcon->local_lease = ctx->local_lease;
2201 2201 INIT_LIST_HEAD(&tcon->pending_opens);
··· 2628 2628 } else if (ctx)
2629 2629 tcon->unix_ext = 1; /* Unix Extensions supported */
2630 2630
2631 - if (tcon->unix_ext == 0) {
2631 + if (!tcon->unix_ext) {
2632 2632 cifs_dbg(FYI, "Unix extensions disabled so not set on reconnect\n");
2633 2633 return;
2634 2634 }
+2 -2
fs/cifs/transport.c
··· 338 338 if (ssocket == NULL)
339 339 return -EAGAIN;
340 340
341 - if (signal_pending(current)) {
341 + if (fatal_signal_pending(current)) {
342 342 cifs_dbg(FYI, "signal pending before send request\n");
343 343 return -ERESTARTSYS;
344 344 }
··· 429 429
430 430 if (signal_pending(current) && (total_len != send_length)) {
431 431 cifs_dbg(FYI, "signal is pending after attempt to send\n");
432 - rc = -EINTR;
432 + rc = -ERESTARTSYS;
433 433 }
434 434
435 435 /* uncork it */
+13 -11
fs/fs-writeback.c
··· 1474 1474 }
1475 1475
1476 1476 /*
1477 + * If the inode has dirty timestamps and we need to write them, call
1478 + * mark_inode_dirty_sync() to notify the filesystem about it and to
1479 + * change I_DIRTY_TIME into I_DIRTY_SYNC.
1480 + */
1481 + if ((inode->i_state & I_DIRTY_TIME) &&
1482 + (wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync ||
1483 + time_after(jiffies, inode->dirtied_time_when +
1484 + dirtytime_expire_interval * HZ))) {
1485 + trace_writeback_lazytime(inode);
1486 + mark_inode_dirty_sync(inode);
1487 + }
1488 +
1489 + /*
1477 1490 * Some filesystems may redirty the inode during the writeback
1478 1491 * due to delalloc, clear dirty metadata flags right before
1479 1492 * write_inode()
1480 1493 */
1481 1494 spin_lock(&inode->i_lock);
1482 -
1483 1495 dirty = inode->i_state & I_DIRTY;
1484 - if ((inode->i_state & I_DIRTY_TIME) &&
1485 - ((dirty & I_DIRTY_INODE) ||
1486 - wbc->sync_mode == WB_SYNC_ALL || wbc->for_sync ||
1487 - time_after(jiffies, inode->dirtied_time_when +
1488 - dirtytime_expire_interval * HZ))) {
1489 - dirty |= I_DIRTY_TIME;
1490 - trace_writeback_lazytime(inode);
1491 - }
1492 1496 inode->i_state &= ~dirty;
1493 1497
1494 1498 /*
··· 1513 1509
1514 1510 spin_unlock(&inode->i_lock);
1515 1511
1516 - if (dirty & I_DIRTY_TIME)
1517 - mark_inode_dirty_sync(inode);
1518 1512 /* Don't write the inode if only I_DIRTY_PAGES was set */
1519 1513 if (dirty & ~I_DIRTY_PAGES) {
1520 1514 int err = write_inode(inode, wbc);
+47 -20
fs/io_uring.c
··· 1025 1025 static int io_setup_async_rw(struct io_kiocb *req, const struct iovec *iovec,
1026 1026 const struct iovec *fast_iov,
1027 1027 struct iov_iter *iter, bool force);
1028 + static void io_req_drop_files(struct io_kiocb *req);
1028 1029
1029 1030 static struct kmem_cache *req_cachep;
1030 1031
··· 1049 1048
1050 1049 static inline void io_clean_op(struct io_kiocb *req)
1051 1050 {
1052 - if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED |
1053 - REQ_F_INFLIGHT))
1051 + if (req->flags & (REQ_F_NEED_CLEANUP | REQ_F_BUFFER_SELECTED))
1054 1052 __io_clean_op(req);
1055 1053 }
1056 1054
··· 1075 1075 return true;
1076 1076
1077 1077 io_for_each_link(req, head) {
1078 - if ((req->flags & REQ_F_WORK_INITIALIZED) &&
1079 - (req->work.flags & IO_WQ_WORK_FILES) &&
1078 + if (!(req->flags & REQ_F_WORK_INITIALIZED))
1079 + continue;
1080 + if (req->file && req->file->f_op == &io_uring_fops)
1081 + return true;
1082 + if ((req->work.flags & IO_WQ_WORK_FILES) &&
1080 1083 req->work.identity->files == files)
1081 1084 return true;
1082 1085 }
··· 1397 1394 free_fs_struct(fs);
1398 1395 req->work.flags &= ~IO_WQ_WORK_FS;
1399 1396 }
1397 + if (req->flags & REQ_F_INFLIGHT)
1398 + io_req_drop_files(req);
1400 1399
1401 1400 io_put_identity(req->task->io_uring, req);
1402 1401 }
··· 1508 1503 return false;
1509 1504 atomic_inc(&id->files->count);
1510 1505 get_nsproxy(id->nsproxy);
1511 - req->flags |= REQ_F_INFLIGHT;
1512 1506
1513 - spin_lock_irq(&ctx->inflight_lock);
1514 - list_add(&req->inflight_entry, &ctx->inflight_list);
1515 - spin_unlock_irq(&ctx->inflight_lock);
1507 + if (!(req->flags & REQ_F_INFLIGHT)) {
1508 + req->flags |= REQ_F_INFLIGHT;
1509 +
1510 + spin_lock_irq(&ctx->inflight_lock);
1511 + list_add(&req->inflight_entry, &ctx->inflight_list);
1512 + spin_unlock_irq(&ctx->inflight_lock);
1513 + }
1516 1514 req->work.flags |= IO_WQ_WORK_FILES;
1517 1515 }
1518 1516 if (!(req->work.flags & IO_WQ_WORK_MM) &&
··· 2278 2270 struct io_uring_task *tctx = rb->task->io_uring;
2279 2271
2280 2272 percpu_counter_sub(&tctx->inflight, rb->task_refs);
2273 + if (atomic_read(&tctx->in_idle))
2274 + wake_up(&tctx->wait);
2281 2275 put_task_struct_many(rb->task, rb->task_refs);
2282 2276 rb->task = NULL;
2283 2277 }
··· 2298 2288 struct io_uring_task *tctx = rb->task->io_uring;
2299 2289
2300 2290 percpu_counter_sub(&tctx->inflight, rb->task_refs);
2291 + if (atomic_read(&tctx->in_idle))
2292 + wake_up(&tctx->wait);
2301 2293 put_task_struct_many(rb->task, rb->task_refs);
2302 2294 }
2303 2295 rb->task = req->task;
··· 3560 3548
3561 3549 /* read it all, or we did blocking attempt. no retry. */
3562 3550 if (!iov_iter_count(iter) || !force_nonblock ||
3563 - (req->file->f_flags & O_NONBLOCK))
3551 + (req->file->f_flags & O_NONBLOCK) || !(req->flags & REQ_F_ISREG))
3564 3552 goto done;
3565 3553
3566 3554 io_size -= ret;
··· 4480 4468 * io_wq_work.flags, so initialize io_wq_work firstly.
4481 4469 */
4482 4470 io_req_init_async(req);
4483 - req->work.flags |= IO_WQ_WORK_NO_CANCEL;
4484 4471
4485 4472 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
4486 4473 return -EINVAL;
··· 4512 4501
4513 4502 /* if the file has a flush method, be safe and punt to async */
4514 4503 if (close->put_file->f_op->flush && force_nonblock) {
4504 + /* not safe to cancel at this point */
4505 + req->work.flags |= IO_WQ_WORK_NO_CANCEL;
4515 4506 /* was never set, but play safe */
4516 4507 req->flags &= ~REQ_F_NOWAIT;
4517 4508 /* avoid grabbing files - we don't need the files */
··· 6170 6157 struct io_uring_task *tctx = req->task->io_uring;
6171 6158 unsigned long flags;
6172 6159
6173 - put_files_struct(req->work.identity->files);
6174 - put_nsproxy(req->work.identity->nsproxy);
6160 + if (req->work.flags & IO_WQ_WORK_FILES) {
6161 + put_files_struct(req->work.identity->files);
6162 + put_nsproxy(req->work.identity->nsproxy);
6163 + }
6175 6164 spin_lock_irqsave(&ctx->inflight_lock, flags);
6176 6165 list_del(&req->inflight_entry);
6177 6166 spin_unlock_irqrestore(&ctx->inflight_lock, flags);
··· 6240 6225 }
6241 6226 req->flags &= ~REQ_F_NEED_CLEANUP;
6242 6227 }
6243 -
6244 - if (req->flags & REQ_F_INFLIGHT)
6245 - io_req_drop_files(req);
6246 6228
6247 6229 static int io_issue_sqe(struct io_kiocb *req, bool force_nonblock,
··· 6456 6444 } else {
6457 6445 trace_io_uring_file_get(ctx, fd);
6458 6446 file = __io_file_get(state, fd);
6447 + }
6448 +
6449 + if (file && file->f_op == &io_uring_fops) {
6450 + io_req_init_async(req);
6451 + req->flags |= REQ_F_INFLIGHT;
6452 +
6453 + spin_lock_irq(&ctx->inflight_lock);
6454 + list_add(&req->inflight_entry, &ctx->inflight_list);
6455 + spin_unlock_irq(&ctx->inflight_lock);
6459 6456 }
6460 6457
6461 6458 return file;
··· 8877 8856
8878 8857 spin_lock_irq(&ctx->inflight_lock);
8879 8858 list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
8880 - if (req->task != task ||
8881 - req->work.identity->files != files)
8859 + if (!io_match_task(req, task, files))
8882 8860 continue;
8883 8861 found = true;
8884 8862 break;
··· 8894 8874 io_wq_cancel_cb(ctx->io_wq, io_cancel_task_cb, &cancel, true);
8895 8875 io_poll_remove_all(ctx, task, files);
8896 8876 io_kill_timeouts(ctx, task, files);
8877 + io_cqring_overflow_flush(ctx, true, task, files);
8897 8878 /* cancellations _may_ trigger task work */
8898 8879 io_run_task_work();
8899 8880 schedule();
··· 8935 8914
8936 8915 static void io_disable_sqo_submit(struct io_ring_ctx *ctx)
8937 8916 {
8938 - WARN_ON_ONCE(ctx->sqo_task != current);
8939 -
8940 8917 mutex_lock(&ctx->uring_lock);
8941 8918 ctx->sqo_dead = 1;
8942 8919 mutex_unlock(&ctx->uring_lock);
··· 8956 8937
8957 8938 if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
8958 8939 /* for SQPOLL only sqo_task has task notes */
8940 + WARN_ON_ONCE(ctx->sqo_task != current);
8959 8941 io_disable_sqo_submit(ctx);
8960 8942 task = ctx->sq_data->thread;
8961 8943 atomic_inc(&task->io_uring->in_idle);
··· 9102 9082 /* make sure overflow events are dropped */
9103 9083 atomic_inc(&tctx->in_idle);
9104 9084
9085 + /* trigger io_disable_sqo_submit() */
9086 + if (tctx->sqpoll)
9087 + __io_uring_files_cancel(NULL);
9088 +
9105 9089 do {
9106 9090 /* read completions before cancelations */
9107 9091 inflight = tctx_inflight(tctx);
··· 9152 9128
9153 9129 if (ctx->flags & IORING_SETUP_SQPOLL) {
9154 9130 /* there is only one file note, which is owned by sqo_task */
9155 - WARN_ON_ONCE((ctx->sqo_task == current) ==
9131 + WARN_ON_ONCE(ctx->sqo_task != current &&
9132 + xa_load(&tctx->xa, (unsigned long)file));
9133 + /* sqo_dead check is for when this happens after cancellation */
9134 + WARN_ON_ONCE(ctx->sqo_task == current && !ctx->sqo_dead &&
9156 9135 !xa_load(&tctx->xa, (unsigned long)file));
9157 9136
9158 9137 io_disable_sqo_submit(ctx);
+24 -41
fs/kernfs/file.c
··· 14 14 #include <linux/pagemap.h>
15 15 #include <linux/sched/mm.h>
16 16 #include <linux/fsnotify.h>
17 + #include <linux/uio.h>
17 18
18 19 #include "kernfs-internal.h"
19 20
··· 181 180 * it difficult to use seq_file. Implement simplistic custom buffering for
182 181 * bin files.
183 182 */
184 - static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
185 - char __user *user_buf, size_t count,
186 - loff_t *ppos)
183 + static ssize_t kernfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
187 184 {
188 - ssize_t len = min_t(size_t, count, PAGE_SIZE);
185 + struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
186 + ssize_t len = min_t(size_t, iov_iter_count(iter), PAGE_SIZE);
189 187 const struct kernfs_ops *ops;
190 188 char *buf;
191 189
··· 210 210 of->event = atomic_read(&of->kn->attr.open->event);
211 211 ops = kernfs_ops(of->kn);
212 212 if (ops->read)
213 - len = ops->read(of, buf, len, *ppos);
213 + len = ops->read(of, buf, len, iocb->ki_pos);
214 214 else
215 215 len = -EINVAL;
216 216
··· 220 220 if (len < 0)
221 221 goto out_free;
222 222
223 - if (copy_to_user(user_buf, buf, len)) {
223 + if (copy_to_iter(buf, len, iter) != len) {
224 224 len = -EFAULT;
225 225 goto out_free;
226 226 }
227 227
228 - *ppos += len;
228 + iocb->ki_pos += len;
229 229
230 230 out_free:
231 231 if (buf == of->prealloc_buf)
··· 235 235 return len;
236 236 }
237 237
238 - /**
239 - * kernfs_fop_read - kernfs vfs read callback
240 - * @file: file pointer
241 - * @user_buf: data to write
242 - * @count: number of bytes
243 - * @ppos: starting offset
244 - */
245 - static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf,
246 - size_t count, loff_t *ppos)
238 + static ssize_t kernfs_fop_read_iter(struct kiocb *iocb, struct iov_iter *iter)
247 239 {
248 - struct kernfs_open_file *of = kernfs_of(file);
249 -
250 - if (of->kn->flags & KERNFS_HAS_SEQ_SHOW)
251 - return seq_read(file, user_buf, count, ppos);
252 - else
253 - return kernfs_file_direct_read(of, user_buf, count, ppos);
240 + if (kernfs_of(iocb->ki_filp)->kn->flags & KERNFS_HAS_SEQ_SHOW)
241 + return seq_read_iter(iocb, iter);
242 + return kernfs_file_read_iter(iocb, iter);
254 243 }
255 244
256 - /**
257 - * kernfs_fop_write - kernfs vfs write callback
258 - * @file: file pointer
259 - * @user_buf: data to write
260 - * @count: number of bytes
261 - * @ppos: starting offset
262 - *
245 + /*
263 246 * Copy data in from userland and pass it to the matching kernfs write
264 247 * operation.
265 248 *
··· 252 269 * modify only the the value you're changing, then write entire buffer
253 270 * back.
254 271 */
255 - static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
256 - size_t count, loff_t *ppos)
272 + static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
257 273 {
258 - struct kernfs_open_file *of = kernfs_of(file);
274 + struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
275 + ssize_t len = iov_iter_count(iter);
259 276 const struct kernfs_ops *ops;
260 - ssize_t len;
261 277 char *buf;
262 278
263 279 if (of->atomic_write_len) {
264 - len = count;
265 280 if (len > of->atomic_write_len)
266 281 return -E2BIG;
267 282 } else {
268 - len = min_t(size_t, count, PAGE_SIZE);
283 + len = min_t(size_t, len, PAGE_SIZE);
269 284 }
270 285
271 286 buf = of->prealloc_buf;
··· 274 293 if (!buf)
275 294 return -ENOMEM;
276 295
277 - if (copy_from_user(buf, user_buf, len)) {
296 + if (copy_from_iter(buf, len, iter) != len) {
278 297 len = -EFAULT;
279 298 goto out_free;
280 299 }
··· 293 312
294 313 ops = kernfs_ops(of->kn);
295 314 if (ops->write)
296 - len = ops->write(of, buf, len, *ppos);
315 + len = ops->write(of, buf, len, iocb->ki_pos);
297 316 else
298 317 len = -EINVAL;
299 318
··· 301 320 mutex_unlock(&of->mutex);
302 321
303 322 if (len > 0)
304 - *ppos += len;
323 + iocb->ki_pos += len;
305 324
306 325 out_free:
307 326 if (buf == of->prealloc_buf)
··· 654 673
655 674 /*
656 675 * Write path needs to atomic_write_len outside active reference.
657 - * Cache it in open_file. See kernfs_fop_write() for details.
676 + * Cache it in open_file. See kernfs_fop_write_iter() for details.
658 677 */
659 678 of->atomic_write_len = ops->atomic_write_len;
660 679
··· 941 960 EXPORT_SYMBOL_GPL(kernfs_notify);
942 961
943 962 const struct file_operations kernfs_file_fops = {
944 - .read = kernfs_fop_read,
945 - .write = kernfs_fop_write,
963 + .read_iter = kernfs_fop_read_iter,
964 + .write_iter = kernfs_fop_write_iter,
946 965 .llseek = generic_file_llseek,
947 966 .mmap = kernfs_fop_mmap,
948 967 .open = kernfs_fop_open,
949 968 .release = kernfs_fop_release,
950 969 .poll = kernfs_fop_poll,
951 970 .fsync = noop_fsync,
971 + .splice_read = generic_file_splice_read,
972 + .splice_write = iter_file_splice_write,
952 973 };
953 974
954 975 /**
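The kernfs_fop_write_iter() hunk keeps the existing length policy while switching to the iterator API: with atomic_write_len set, an oversized write fails with -E2BIG so handlers always see a complete buffer; otherwise the copy is clamped to one page. A small userspace restatement of that policy (TOY_PAGE_SIZE, TOY_E2BIG, and the function name are local to this sketch, standing in for the kernel's PAGE_SIZE and -E2BIG):

```c
#include <stddef.h>

#define TOY_PAGE_SIZE	4096
#define TOY_E2BIG	7

/* Returns the number of bytes that would be copied from the iterator,
 * or -TOY_E2BIG when an atomic write limit is set and exceeded. */
static long write_len(size_t requested, size_t atomic_write_len)
{
	if (atomic_write_len) {
		if (requested > atomic_write_len)
			return -TOY_E2BIG;	/* reject partial atomic writes */
		return (long)requested;
	}
	/* no atomic limit: silently clamp to a single page */
	return (long)(requested < TOY_PAGE_SIZE ? requested : TOY_PAGE_SIZE);
}
```

The asymmetry is deliberate: an attribute that declares an atomic write length must never see a truncated buffer, whereas plain attributes tolerate the page-sized clamp.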
+1
fs/pipe.c
··· 1206 1206 .unlocked_ioctl = pipe_ioctl,
1207 1207 .release = pipe_release,
1208 1208 .fasync = pipe_fasync,
1209 + .splice_write = iter_file_splice_write,
1209 1210 };
1210 1211
1211 1212 /*
+6 -1
fs/proc/proc_sysctl.c
··· 1770 1770 return 0;
1771 1771 }
1772 1772
1773 + if (!val)
1774 + return -EINVAL;
1775 + len = strlen(val);
1776 + if (len == 0)
1777 + return -EINVAL;
1778 +
1773 1779 /*
1774 1780 * To set sysctl options, we use a temporary mount of proc, look up the
1775 1781 * respective sys/ file and write to it. To avoid mounting it when no
··· 1817 1811 file, param, val);
1818 1812 goto out;
1819 1813 }
1820 - len = strlen(val);
1821 1814 wret = kernel_write(file, val, len, &pos);
1822 1815 if (wret < 0) {
1823 1816 err = wret;
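The proc_sysctl.c hunk moves the value check before any file work: a `sysctl.*` boot parameter with a missing (NULL) or empty value is now rejected with -EINVAL up front. A trivial userspace restatement of just that guard (the TOY_EINVAL constant and function name are invented for the sketch; the kernel returns the real -EINVAL):

```c
#include <stddef.h>
#include <string.h>

#define TOY_EINVAL	22

/* Models the new early validation in process_sysctl_arg(): a NULL or
 * zero-length value is invalid before any /proc lookup is attempted. */
static int check_sysctl_val(const char *val)
{
	if (!val)
		return -TOY_EINVAL;
	if (strlen(val) == 0)
		return -TOY_EINVAL;
	return 0;
}
```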
+4 -3
fs/udf/super.c
··· 705 705 struct buffer_head *bh = NULL; 706 706 int nsr = 0; 707 707 struct udf_sb_info *sbi; 708 + loff_t session_offset; 708 709 709 710 sbi = UDF_SB(sb); 710 711 if (sb->s_blocksize < sizeof(struct volStructDesc)) ··· 713 712 else 714 713 sectorsize = sb->s_blocksize; 715 714 716 - sector += (((loff_t)sbi->s_session) << sb->s_blocksize_bits); 715 + session_offset = (loff_t)sbi->s_session << sb->s_blocksize_bits; 716 + sector += session_offset; 717 717 718 718 udf_debug("Starting at sector %u (%lu byte sectors)\n", 719 719 (unsigned int)(sector >> sb->s_blocksize_bits), ··· 759 757 760 758 if (nsr > 0) 761 759 return 1; 762 - else if (!bh && sector - (sbi->s_session << sb->s_blocksize_bits) == 763 - VSD_FIRST_SECTOR_OFFSET) 760 + else if (!bh && sector - session_offset == VSD_FIRST_SECTOR_OFFSET) 764 761 return -1; 765 762 else 766 763 return 0;
+3 -4
include/dt-bindings/sound/apq8016-lpass.h
··· 2 2 #ifndef __DT_APQ8016_LPASS_H 3 3 #define __DT_APQ8016_LPASS_H 4 4 5 - #define MI2S_PRIMARY 0 6 - #define MI2S_SECONDARY 1 7 - #define MI2S_TERTIARY 2 8 - #define MI2S_QUATERNARY 3 5 + #include <dt-bindings/sound/qcom,lpass.h> 6 + 7 + /* NOTE: Use qcom,lpass.h to define any AIF ID's for LPASS */ 9 8 10 9 #endif /* __DT_APQ8016_LPASS_H */
+15
include/dt-bindings/sound/qcom,lpass.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __DT_QCOM_LPASS_H 3 + #define __DT_QCOM_LPASS_H 4 + 5 + #define MI2S_PRIMARY 0 6 + #define MI2S_SECONDARY 1 7 + #define MI2S_TERTIARY 2 8 + #define MI2S_QUATERNARY 3 9 + #define MI2S_QUINARY 4 10 + 11 + #define LPASS_DP_RX 5 12 + 13 + #define LPASS_MCLK0 0 14 + 15 + #endif /* __DT_QCOM_LPASS_H */
+2 -4
include/dt-bindings/sound/sc7180-lpass.h
··· 2 2 #ifndef __DT_SC7180_LPASS_H 3 3 #define __DT_SC7180_LPASS_H 4 4 5 - #define MI2S_PRIMARY 0 6 - #define MI2S_SECONDARY 1 7 - #define LPASS_DP_RX 2 5 + #include <dt-bindings/sound/qcom,lpass.h> 8 6 9 - #define LPASS_MCLK0 0 7 + /* NOTE: Use qcom,lpass.h to define any AIF ID's for LPASS */ 10 8 11 9 #endif /* __DT_APQ8016_LPASS_H */
+12
include/linux/device.h
··· 609 609 return kobject_name(&dev->kobj); 610 610 } 611 611 612 + /** 613 + * dev_bus_name - Return a device's bus/class name, if at all possible 614 + * @dev: struct device to get the bus/class name of 615 + * 616 + * Will return the name of the bus/class the device is attached to. If it is 617 + * not attached to a bus/class, an empty string will be returned. 618 + */ 619 + static inline const char *dev_bus_name(const struct device *dev) 620 + { 621 + return dev->bus ? dev->bus->name : (dev->class ? dev->class->name : ""); 622 + } 623 + 612 624 __printf(2, 3) int dev_set_name(struct device *dev, const char *name, ...); 613 625 614 626 #ifdef CONFIG_NUMA
+3
include/linux/kthread.h
··· 33 33 unsigned int cpu, 34 34 const char *namefmt); 35 35 36 + void kthread_set_per_cpu(struct task_struct *k, int cpu); 37 + bool kthread_is_per_cpu(struct task_struct *k); 38 + 36 39 /** 37 40 * kthread_run - create and wake a thread. 38 41 * @threadfn: the function to run until signal_pending(current).
-1
include/linux/ktime.h
··· 230 230 } 231 231 232 232 # include <linux/timekeeping.h> 233 - # include <linux/timekeeping32.h> 234 233 235 234 #endif
-18
include/linux/mlx5/driver.h
··· 1239 1239 return val.vbool; 1240 1240 } 1241 1241 1242 - /** 1243 - * mlx5_core_net - Provide net namespace of the mlx5_core_dev 1244 - * @dev: mlx5 core device 1245 - * 1246 - * mlx5_core_net() returns the net namespace of mlx5 core device. 1247 - * This can be called only in below described limited context. 1248 - * (a) When a devlink instance for mlx5_core is registered and 1249 - * when devlink reload operation is disabled. 1250 - * or 1251 - * (b) during devlink reload reload_down() and reload_up callbacks 1252 - * where it is ensured that devlink instance's net namespace is 1253 - * stable. 1254 - */ 1255 - static inline struct net *mlx5_core_net(struct mlx5_core_dev *dev) 1256 - { 1257 - return devlink_net(priv_to_devlink(dev)); 1258 - } 1259 - 1260 1242 #endif /* MLX5_DRIVER_H */
+6
include/linux/nvme.h
··· 116 116 NVME_REG_BPMBL = 0x0048, /* Boot Partition Memory Buffer 117 117 * Location 118 118 */ 119 + NVME_REG_CMBMSC = 0x0050, /* Controller Memory Buffer Memory 120 + * Space Control 121 + */ 119 122 NVME_REG_PMRCAP = 0x0e00, /* Persistent Memory Capabilities */ 120 123 NVME_REG_PMRCTL = 0x0e04, /* Persistent Memory Region Control */ 121 124 NVME_REG_PMRSTS = 0x0e08, /* Persistent Memory Region Status */ ··· 138 135 #define NVME_CAP_CSS(cap) (((cap) >> 37) & 0xff) 139 136 #define NVME_CAP_MPSMIN(cap) (((cap) >> 48) & 0xf) 140 137 #define NVME_CAP_MPSMAX(cap) (((cap) >> 52) & 0xf) 138 + #define NVME_CAP_CMBS(cap) (((cap) >> 57) & 0x1) 141 139 142 140 #define NVME_CMB_BIR(cmbloc) ((cmbloc) & 0x7) 143 141 #define NVME_CMB_OFST(cmbloc) (((cmbloc) >> 12) & 0xfffff) ··· 196 192 NVME_CSTS_SHST_OCCUR = 1 << 2, 197 193 NVME_CSTS_SHST_CMPLT = 2 << 2, 198 194 NVME_CSTS_SHST_MASK = 3 << 2, 195 + NVME_CMBMSC_CRE = 1 << 0, 196 + NVME_CMBMSC_CMSE = 1 << 1, 199 197 }; 200 198 201 199 struct nvme_id_power_state {
+30
include/linux/regulator/consumer.h
··· 332 332 } 333 333 334 334 static inline struct regulator *__must_check 335 + devm_regulator_get_exclusive(struct device *dev, const char *id) 336 + { 337 + return ERR_PTR(-ENODEV); 338 + } 339 + 340 + static inline struct regulator *__must_check 335 341 regulator_get_optional(struct device *dev, const char *id) 336 342 { 337 343 return ERR_PTR(-ENODEV); ··· 492 486 return -EINVAL; 493 487 } 494 488 489 + static inline int regulator_sync_voltage(struct regulator *regulator) 490 + { 491 + return -EINVAL; 492 + } 493 + 495 494 static inline int regulator_is_supported_voltage(struct regulator *regulator, 496 495 int min_uV, int max_uV) 497 496 { ··· 587 576 struct notifier_block *nb) 588 577 { 589 578 return 0; 579 + } 580 + 581 + static inline int regulator_suspend_enable(struct regulator_dev *rdev, 582 + suspend_state_t state) 583 + { 584 + return -EINVAL; 585 + } 586 + 587 + static inline int regulator_suspend_disable(struct regulator_dev *rdev, 588 + suspend_state_t state) 589 + { 590 + return -EINVAL; 591 + } 592 + 593 + static inline int regulator_set_suspend_voltage(struct regulator *regulator, 594 + int min_uV, int max_uV, 595 + suspend_state_t state) 596 + { 597 + return -EINVAL; 590 598 } 591 599 592 600 static inline void *regulator_get_drvdata(struct regulator *regulator)
-14
include/linux/timekeeping32.h
··· 1 - #ifndef _LINUX_TIMEKEEPING32_H 2 - #define _LINUX_TIMEKEEPING32_H 3 - /* 4 - * These interfaces are all based on the old timespec type 5 - * and should get replaced with the timespec64 based versions 6 - * over time so we can remove the file here. 7 - */ 8 - 9 - static inline unsigned long get_seconds(void) 10 - { 11 - return ktime_get_real_seconds(); 12 - } 13 - 14 - #endif
+1
include/linux/tty.h
··· 421 421 extern int tty_dev_name_to_number(const char *name, dev_t *number); 422 422 extern int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout); 423 423 extern void tty_ldisc_unlock(struct tty_struct *tty); 424 + extern ssize_t redirected_tty_write(struct kiocb *, struct iov_iter *); 424 425 #else 425 426 static inline void tty_kref_put(struct tty_struct *tty) 426 427 { }
+2 -2
include/media/v4l2-common.h
··· 520 520 u32 width, u32 height); 521 521 522 522 /** 523 - * v4l2_get_link_rate - Get link rate from transmitter 523 + * v4l2_get_link_freq - Get link rate from transmitter 524 524 * 525 525 * @handler: The transmitter's control handler 526 526 * @mul: The multiplier between pixel rate and link frequency. Bits per pixel on ··· 537 537 * -ENOENT: Link frequency or pixel rate control not found 538 538 * -EINVAL: Invalid link frequency value 539 539 */ 540 - s64 v4l2_get_link_rate(struct v4l2_ctrl_handler *handler, unsigned int mul, 540 + s64 v4l2_get_link_freq(struct v4l2_ctrl_handler *handler, unsigned int mul, 541 541 unsigned int div); 542 542 543 543 static inline u64 v4l2_buffer_get_timestamp(const struct v4l2_buffer *buf)
+2
include/net/lapb.h
··· 92 92 unsigned short n2, n2count; 93 93 unsigned short t1, t2; 94 94 struct timer_list t1timer, t2timer; 95 + bool t1timer_stop, t2timer_stop; 95 96 96 97 /* Internal control information */ 97 98 struct sk_buff_head write_queue; ··· 104 103 struct lapb_frame frmr_data; 105 104 unsigned char frmr_type; 106 105 106 + spinlock_t lock; 107 107 refcount_t refcnt; 108 108 }; 109 109
+2
include/net/netfilter/nf_tables.h
··· 721 721 const struct nft_set_ext_tmpl *tmpl, 722 722 const u32 *key, const u32 *key_end, const u32 *data, 723 723 u64 timeout, u64 expiration, gfp_t gfp); 724 + int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set, 725 + struct nft_expr *expr_array[]); 724 726 void nft_set_elem_destroy(const struct nft_set *set, void *elem, 725 727 bool destroy_expr); 726 728
+2 -1
include/net/tcp.h
··· 630 630 631 631 unsigned int tcp_sync_mss(struct sock *sk, u32 pmtu); 632 632 unsigned int tcp_current_mss(struct sock *sk); 633 + u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when); 633 634 634 635 /* Bound MSS / TSO packet size with the half of the window */ 635 636 static inline int tcp_bound_to_half_wnd(struct tcp_sock *tp, int pktsize) ··· 2061 2060 void tcp_newreno_mark_lost(struct sock *sk, bool snd_una_advanced); 2062 2061 extern s32 tcp_rack_skb_timeout(struct tcp_sock *tp, struct sk_buff *skb, 2063 2062 u32 reo_wnd); 2064 - extern void tcp_rack_mark_lost(struct sock *sk); 2063 + extern bool tcp_rack_mark_lost(struct sock *sk); 2065 2064 extern void tcp_rack_advance(struct tcp_sock *tp, u8 sacked, u32 end_seq, 2066 2065 u64 xmit_time); 2067 2066 extern void tcp_rack_reo_timeout(struct sock *sk);
+1 -1
include/sound/pcm.h
··· 229 229 struct snd_pcm_hw_rule { 230 230 unsigned int cond; 231 231 int var; 232 - int deps[4]; 232 + int deps[5]; 233 233 234 234 snd_pcm_hw_rule_func_t func; 235 235 void *private;
+1 -1
include/trace/events/sched.h
··· 366 366 ); 367 367 368 368 /* 369 - * Tracepoint for do_fork: 369 + * Tracepoint for kernel_clone: 370 370 */ 371 371 TRACE_EVENT(sched_process_fork, 372 372
-86
include/uapi/linux/mrp_bridge.h
··· 71 71 BR_MRP_SUB_TLV_HEADER_TEST_AUTO_MGR = 0x3, 72 72 }; 73 73 74 - struct br_mrp_tlv_hdr { 75 - __u8 type; 76 - __u8 length; 77 - }; 78 - 79 - struct br_mrp_sub_tlv_hdr { 80 - __u8 type; 81 - __u8 length; 82 - }; 83 - 84 - struct br_mrp_end_hdr { 85 - struct br_mrp_tlv_hdr hdr; 86 - }; 87 - 88 - struct br_mrp_common_hdr { 89 - __be16 seq_id; 90 - __u8 domain[MRP_DOMAIN_UUID_LENGTH]; 91 - }; 92 - 93 - struct br_mrp_ring_test_hdr { 94 - __be16 prio; 95 - __u8 sa[ETH_ALEN]; 96 - __be16 port_role; 97 - __be16 state; 98 - __be16 transitions; 99 - __be32 timestamp; 100 - }; 101 - 102 - struct br_mrp_ring_topo_hdr { 103 - __be16 prio; 104 - __u8 sa[ETH_ALEN]; 105 - __be16 interval; 106 - }; 107 - 108 - struct br_mrp_ring_link_hdr { 109 - __u8 sa[ETH_ALEN]; 110 - __be16 port_role; 111 - __be16 interval; 112 - __be16 blocked; 113 - }; 114 - 115 - struct br_mrp_sub_opt_hdr { 116 - __u8 type; 117 - __u8 manufacture_data[MRP_MANUFACTURE_DATA_LENGTH]; 118 - }; 119 - 120 - struct br_mrp_test_mgr_nack_hdr { 121 - __be16 prio; 122 - __u8 sa[ETH_ALEN]; 123 - __be16 other_prio; 124 - __u8 other_sa[ETH_ALEN]; 125 - }; 126 - 127 - struct br_mrp_test_prop_hdr { 128 - __be16 prio; 129 - __u8 sa[ETH_ALEN]; 130 - __be16 other_prio; 131 - __u8 other_sa[ETH_ALEN]; 132 - }; 133 - 134 - struct br_mrp_oui_hdr { 135 - __u8 oui[MRP_OUI_LENGTH]; 136 - }; 137 - 138 - struct br_mrp_in_test_hdr { 139 - __be16 id; 140 - __u8 sa[ETH_ALEN]; 141 - __be16 port_role; 142 - __be16 state; 143 - __be16 transitions; 144 - __be32 timestamp; 145 - }; 146 - 147 - struct br_mrp_in_topo_hdr { 148 - __u8 sa[ETH_ALEN]; 149 - __be16 id; 150 - __be16 interval; 151 - }; 152 - 153 - struct br_mrp_in_link_hdr { 154 - __u8 sa[ETH_ALEN]; 155 - __be16 port_role; 156 - __be16 id; 157 - __be16 interval; 158 - }; 159 - 160 74 #endif
+3 -3
include/uapi/linux/rpl.h
··· 28 28 pad:4, 29 29 reserved1:16; 30 30 #elif defined(__BIG_ENDIAN_BITFIELD) 31 - __u32 reserved:20, 31 + __u32 cmpri:4, 32 + cmpre:4, 32 33 pad:4, 33 - cmpri:4, 34 - cmpre:4; 34 + reserved:20; 35 35 #else 36 36 #error "Please fix <asm/byteorder.h>" 37 37 #endif
+1 -1
include/uapi/linux/v4l2-subdev.h
··· 176 176 }; 177 177 178 178 /* The v4l2 sub-device video device node is registered in read-only mode. */ 179 - #define V4L2_SUBDEV_CAP_RO_SUBDEV BIT(0) 179 + #define V4L2_SUBDEV_CAP_RO_SUBDEV 0x00000001 180 180 181 181 /* Backwards compatibility define --- to be removed */ 182 182 #define v4l2_subdev_edid v4l2_edid
+7
include/uapi/rdma/vmw_pvrdma-abi.h
··· 133 133 PVRDMA_WC_FLAGS_MAX = PVRDMA_WC_WITH_NETWORK_HDR_TYPE, 134 134 }; 135 135 136 + enum pvrdma_network_type { 137 + PVRDMA_NETWORK_IB, 138 + PVRDMA_NETWORK_ROCE_V1 = PVRDMA_NETWORK_IB, 139 + PVRDMA_NETWORK_IPV4, 140 + PVRDMA_NETWORK_IPV6 141 + }; 142 + 136 143 struct pvrdma_alloc_ucontext_resp { 137 144 __u32 qp_tab_size; 138 145 __u32 reserved;
+2 -4
kernel/fork.c
··· 819 819 init_task.signal->rlim[RLIMIT_SIGPENDING] = 820 820 init_task.signal->rlim[RLIMIT_NPROC]; 821 821 822 - for (i = 0; i < UCOUNT_COUNTS; i++) { 822 + for (i = 0; i < UCOUNT_COUNTS; i++) 823 823 init_user_ns.ucount_max[i] = max_threads/2; 824 - } 825 824 826 825 #ifdef CONFIG_VMAP_STACK 827 826 cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "fork:vm_stack_cache", ··· 1653 1654 { 1654 1655 enum pid_type type; 1655 1656 1656 - for (type = PIDTYPE_PID; type < PIDTYPE_MAX; ++type) { 1657 + for (type = PIDTYPE_PID; type < PIDTYPE_MAX; ++type) 1657 1658 INIT_HLIST_NODE(&task->pid_links[type]); 1658 - } 1659 1659 } 1660 1660 1661 1661 static inline void
+96 -123
kernel/futex.c
··· 763 763 return pi_state; 764 764 } 765 765 766 + static void pi_state_update_owner(struct futex_pi_state *pi_state, 767 + struct task_struct *new_owner) 768 + { 769 + struct task_struct *old_owner = pi_state->owner; 770 + 771 + lockdep_assert_held(&pi_state->pi_mutex.wait_lock); 772 + 773 + if (old_owner) { 774 + raw_spin_lock(&old_owner->pi_lock); 775 + WARN_ON(list_empty(&pi_state->list)); 776 + list_del_init(&pi_state->list); 777 + raw_spin_unlock(&old_owner->pi_lock); 778 + } 779 + 780 + if (new_owner) { 781 + raw_spin_lock(&new_owner->pi_lock); 782 + WARN_ON(!list_empty(&pi_state->list)); 783 + list_add(&pi_state->list, &new_owner->pi_state_list); 784 + pi_state->owner = new_owner; 785 + raw_spin_unlock(&new_owner->pi_lock); 786 + } 787 + } 788 + 766 789 static void get_pi_state(struct futex_pi_state *pi_state) 767 790 { 768 791 WARN_ON_ONCE(!refcount_inc_not_zero(&pi_state->refcount)); ··· 808 785 * and has cleaned up the pi_state already 809 786 */ 810 787 if (pi_state->owner) { 811 - struct task_struct *owner; 812 788 unsigned long flags; 813 789 814 790 raw_spin_lock_irqsave(&pi_state->pi_mutex.wait_lock, flags); 815 - owner = pi_state->owner; 816 - if (owner) { 817 - raw_spin_lock(&owner->pi_lock); 818 - list_del_init(&pi_state->list); 819 - raw_spin_unlock(&owner->pi_lock); 820 - } 821 - rt_mutex_proxy_unlock(&pi_state->pi_mutex, owner); 791 + pi_state_update_owner(pi_state, NULL); 792 + rt_mutex_proxy_unlock(&pi_state->pi_mutex); 822 793 raw_spin_unlock_irqrestore(&pi_state->pi_mutex.wait_lock, flags); 823 794 } 824 795 ··· 958 941 * FUTEX_OWNER_DIED bit. See [4] 959 942 * 960 943 * [10] There is no transient state which leaves owner and user space 961 - * TID out of sync. 944 + * TID out of sync. Except one error case where the kernel is denied 945 + * write access to the user address, see fixup_pi_state_owner(). 
962 946 * 963 947 * 964 948 * Serialization and lifetime rules: ··· 1539 1521 ret = -EINVAL; 1540 1522 } 1541 1523 1542 - if (ret) 1543 - goto out_unlock; 1544 - 1545 - /* 1546 - * This is a point of no return; once we modify the uval there is no 1547 - * going back and subsequent operations must not fail. 1548 - */ 1549 - 1550 - raw_spin_lock(&pi_state->owner->pi_lock); 1551 - WARN_ON(list_empty(&pi_state->list)); 1552 - list_del_init(&pi_state->list); 1553 - raw_spin_unlock(&pi_state->owner->pi_lock); 1554 - 1555 - raw_spin_lock(&new_owner->pi_lock); 1556 - WARN_ON(!list_empty(&pi_state->list)); 1557 - list_add(&pi_state->list, &new_owner->pi_state_list); 1558 - pi_state->owner = new_owner; 1559 - raw_spin_unlock(&new_owner->pi_lock); 1560 - 1561 - postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q); 1524 + if (!ret) { 1525 + /* 1526 + * This is a point of no return; once we modified the uval 1527 + * there is no going back and subsequent operations must 1528 + * not fail. 
1529 + */ 1530 + pi_state_update_owner(pi_state, new_owner); 1531 + postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q); 1532 + } 1562 1533 1563 1534 out_unlock: 1564 1535 raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); ··· 2330 2323 spin_unlock(q->lock_ptr); 2331 2324 } 2332 2325 2333 - static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, 2334 - struct task_struct *argowner) 2326 + static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, 2327 + struct task_struct *argowner) 2335 2328 { 2336 2329 struct futex_pi_state *pi_state = q->pi_state; 2337 - u32 uval, curval, newval; 2338 2330 struct task_struct *oldowner, *newowner; 2339 - u32 newtid; 2340 - int ret, err = 0; 2341 - 2342 - lockdep_assert_held(q->lock_ptr); 2343 - 2344 - raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); 2331 + u32 uval, curval, newval, newtid; 2332 + int err = 0; 2345 2333 2346 2334 oldowner = pi_state->owner; 2347 2335 ··· 2370 2368 * We raced against a concurrent self; things are 2371 2369 * already fixed up. Nothing to do. 2372 2370 */ 2373 - ret = 0; 2374 - goto out_unlock; 2371 + return 0; 2375 2372 } 2376 2373 2377 2374 if (__rt_mutex_futex_trylock(&pi_state->pi_mutex)) { 2378 - /* We got the lock after all, nothing to fix. */ 2379 - ret = 0; 2380 - goto out_unlock; 2375 + /* We got the lock. pi_state is correct. Tell caller. */ 2376 + return 1; 2381 2377 } 2382 2378 2383 2379 /* ··· 2402 2402 * We raced against a concurrent self; things are 2403 2403 * already fixed up. Nothing to do. 2404 2404 */ 2405 - ret = 0; 2406 - goto out_unlock; 2405 + return 1; 2407 2406 } 2408 2407 newowner = argowner; 2409 2408 } ··· 2432 2433 * We fixed up user space. Now we need to fix the pi_state 2433 2434 * itself. 
2434 2435 */ 2435 - if (pi_state->owner != NULL) { 2436 - raw_spin_lock(&pi_state->owner->pi_lock); 2437 - WARN_ON(list_empty(&pi_state->list)); 2438 - list_del_init(&pi_state->list); 2439 - raw_spin_unlock(&pi_state->owner->pi_lock); 2440 - } 2436 + pi_state_update_owner(pi_state, newowner); 2441 2437 2442 - pi_state->owner = newowner; 2443 - 2444 - raw_spin_lock(&newowner->pi_lock); 2445 - WARN_ON(!list_empty(&pi_state->list)); 2446 - list_add(&pi_state->list, &newowner->pi_state_list); 2447 - raw_spin_unlock(&newowner->pi_lock); 2448 - raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); 2449 - 2450 - return 0; 2438 + return argowner == current; 2451 2439 2452 2440 /* 2453 2441 * In order to reschedule or handle a page fault, we need to drop the ··· 2455 2469 2456 2470 switch (err) { 2457 2471 case -EFAULT: 2458 - ret = fault_in_user_writeable(uaddr); 2472 + err = fault_in_user_writeable(uaddr); 2459 2473 break; 2460 2474 2461 2475 case -EAGAIN: 2462 2476 cond_resched(); 2463 - ret = 0; 2477 + err = 0; 2464 2478 break; 2465 2479 2466 2480 default: 2467 2481 WARN_ON_ONCE(1); 2468 - ret = err; 2469 2482 break; 2470 2483 } 2471 2484 ··· 2474 2489 /* 2475 2490 * Check if someone else fixed it for us: 2476 2491 */ 2477 - if (pi_state->owner != oldowner) { 2478 - ret = 0; 2479 - goto out_unlock; 2480 - } 2492 + if (pi_state->owner != oldowner) 2493 + return argowner == current; 2481 2494 2482 - if (ret) 2483 - goto out_unlock; 2495 + /* Retry if err was -EAGAIN or the fault in succeeded */ 2496 + if (!err) 2497 + goto retry; 2484 2498 2485 - goto retry; 2499 + /* 2500 + * fault_in_user_writeable() failed so user state is immutable. At 2501 + * best we can make the kernel state consistent but user state will 2502 + * be most likely hosed and any subsequent unlock operation will be 2503 + * rejected due to PI futex rule [10]. 2504 + * 2505 + * Ensure that the rtmutex owner is also the pi_state owner despite 2506 + * the user space value claiming something different. 
There is no 2507 + * point in unlocking the rtmutex if current is the owner as it 2508 + * would need to wait until the next waiter has taken the rtmutex 2509 + * to guarantee consistent state. Keep it simple. Userspace asked 2510 + * for this wreckaged state. 2511 + * 2512 + * The rtmutex has an owner - either current or some other 2513 + * task. See the EAGAIN loop above. 2514 + */ 2515 + pi_state_update_owner(pi_state, rt_mutex_owner(&pi_state->pi_mutex)); 2486 2516 2487 - out_unlock: 2517 + return err; 2518 + } 2519 + 2520 + static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, 2521 + struct task_struct *argowner) 2522 + { 2523 + struct futex_pi_state *pi_state = q->pi_state; 2524 + int ret; 2525 + 2526 + lockdep_assert_held(q->lock_ptr); 2527 + 2528 + raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); 2529 + ret = __fixup_pi_state_owner(uaddr, q, argowner); 2488 2530 raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); 2489 2531 return ret; 2490 2532 } ··· 2535 2523 */ 2536 2524 static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked) 2537 2525 { 2538 - int ret = 0; 2539 - 2540 2526 if (locked) { 2541 2527 /* 2542 2528 * Got the lock. We might not be the anticipated owner if we ··· 2545 2535 * stable state, anything else needs more attention. 2546 2536 */ 2547 2537 if (q->pi_state->owner != current) 2548 - ret = fixup_pi_state_owner(uaddr, q, current); 2549 - return ret ? ret : locked; 2538 + return fixup_pi_state_owner(uaddr, q, current); 2539 + return 1; 2550 2540 } 2551 2541 2552 2542 /* ··· 2557 2547 * Another speculative read; pi_state->owner == current is unstable 2558 2548 * but needs our attention. 2559 2549 */ 2560 - if (q->pi_state->owner == current) { 2561 - ret = fixup_pi_state_owner(uaddr, q, NULL); 2562 - return ret; 2563 - } 2550 + if (q->pi_state->owner == current) 2551 + return fixup_pi_state_owner(uaddr, q, NULL); 2564 2552 2565 2553 /* 2566 2554 * Paranoia check. 
If we did not take the lock, then we should not be 2567 - * the owner of the rt_mutex. 2555 + * the owner of the rt_mutex. Warn and establish consistent state. 2568 2556 */ 2569 - if (rt_mutex_owner(&q->pi_state->pi_mutex) == current) { 2570 - printk(KERN_ERR "fixup_owner: ret = %d pi-mutex: %p " 2571 - "pi-state %p\n", ret, 2572 - q->pi_state->pi_mutex.owner, 2573 - q->pi_state->owner); 2574 - } 2557 + if (WARN_ON_ONCE(rt_mutex_owner(&q->pi_state->pi_mutex) == current)) 2558 + return fixup_pi_state_owner(uaddr, q, current); 2575 2559 2576 - return ret; 2560 + return 0; 2577 2561 } 2578 2562 2579 2563 /** ··· 2775 2771 ktime_t *time, int trylock) 2776 2772 { 2777 2773 struct hrtimer_sleeper timeout, *to; 2778 - struct futex_pi_state *pi_state = NULL; 2779 2774 struct task_struct *exiting = NULL; 2780 2775 struct rt_mutex_waiter rt_waiter; 2781 2776 struct futex_hash_bucket *hb; ··· 2910 2907 if (res) 2911 2908 ret = (res < 0) ? res : 0; 2912 2909 2913 - /* 2914 - * If fixup_owner() faulted and was unable to handle the fault, unlock 2915 - * it and return the fault to userspace. 
2916 - */ 2917 - if (ret && (rt_mutex_owner(&q.pi_state->pi_mutex) == current)) { 2918 - pi_state = q.pi_state; 2919 - get_pi_state(pi_state); 2920 - } 2921 - 2922 2910 /* Unqueue and drop the lock */ 2923 2911 unqueue_me_pi(&q); 2924 - 2925 - if (pi_state) { 2926 - rt_mutex_futex_unlock(&pi_state->pi_mutex); 2927 - put_pi_state(pi_state); 2928 - } 2929 - 2930 2912 goto out; 2931 2913 2932 2914 out_unlock_put_key: ··· 3171 3183 u32 __user *uaddr2) 3172 3184 { 3173 3185 struct hrtimer_sleeper timeout, *to; 3174 - struct futex_pi_state *pi_state = NULL; 3175 3186 struct rt_mutex_waiter rt_waiter; 3176 3187 struct futex_hash_bucket *hb; 3177 3188 union futex_key key2 = FUTEX_KEY_INIT; ··· 3248 3261 if (q.pi_state && (q.pi_state->owner != current)) { 3249 3262 spin_lock(q.lock_ptr); 3250 3263 ret = fixup_pi_state_owner(uaddr2, &q, current); 3251 - if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) { 3252 - pi_state = q.pi_state; 3253 - get_pi_state(pi_state); 3254 - } 3255 3264 /* 3256 3265 * Drop the reference to the pi state which 3257 3266 * the requeue_pi() code acquired for us. 3258 3267 */ 3259 3268 put_pi_state(q.pi_state); 3260 3269 spin_unlock(q.lock_ptr); 3270 + /* 3271 + * Adjust the return value. It's either -EFAULT or 3272 + * success (1) but the caller expects 0 for success. 3273 + */ 3274 + ret = ret < 0 ? ret : 0; 3261 3275 } 3262 3276 } else { 3263 3277 struct rt_mutex *pi_mutex; ··· 3289 3301 if (res) 3290 3302 ret = (res < 0) ? res : 0; 3291 3303 3292 - /* 3293 - * If fixup_pi_state_owner() faulted and was unable to handle 3294 - * the fault, unlock the rt_mutex and return the fault to 3295 - * userspace. 3296 - */ 3297 - if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) { 3298 - pi_state = q.pi_state; 3299 - get_pi_state(pi_state); 3300 - } 3301 - 3302 3304 /* Unqueue and drop the lock. 
*/ 3303 3305 unqueue_me_pi(&q); 3304 - } 3305 - 3306 - if (pi_state) { 3307 - rt_mutex_futex_unlock(&pi_state->pi_mutex); 3308 - put_pi_state(pi_state); 3309 3306 } 3310 3307 3311 3308 if (ret == -EINTR) {
+1
kernel/irq/manage.c
··· 2859 2859 rcu_read_unlock(); 2860 2860 return res; 2861 2861 } 2862 + EXPORT_SYMBOL_GPL(irq_check_status_bit);
+1 -1
kernel/irq/msi.c
··· 402 402 struct msi_domain_ops *ops = info->ops; 403 403 struct irq_data *irq_data; 404 404 struct msi_desc *desc; 405 - msi_alloc_info_t arg; 405 + msi_alloc_info_t arg = { }; 406 406 int i, ret, virq; 407 407 bool can_reserve; 408 408
+27 -2
kernel/kthread.c
··· 294 294 do_exit(ret); 295 295 } 296 296 297 - /* called from do_fork() to get node information for about to be created task */ 297 + /* called from kernel_clone() to get node information for about to be created task */ 298 298 int tsk_fork_get_node(struct task_struct *tsk) 299 299 { 300 300 #ifdef CONFIG_NUMA ··· 493 493 return p; 494 494 kthread_bind(p, cpu); 495 495 /* CPU hotplug need to bind once again when unparking the thread. */ 496 - set_bit(KTHREAD_IS_PER_CPU, &to_kthread(p)->flags); 497 496 to_kthread(p)->cpu = cpu; 498 497 return p; 498 + } 499 + 500 + void kthread_set_per_cpu(struct task_struct *k, int cpu) 501 + { 502 + struct kthread *kthread = to_kthread(k); 503 + if (!kthread) 504 + return; 505 + 506 + WARN_ON_ONCE(!(k->flags & PF_NO_SETAFFINITY)); 507 + 508 + if (cpu < 0) { 509 + clear_bit(KTHREAD_IS_PER_CPU, &kthread->flags); 510 + return; 511 + } 512 + 513 + kthread->cpu = cpu; 514 + set_bit(KTHREAD_IS_PER_CPU, &kthread->flags); 515 + } 516 + 517 + bool kthread_is_per_cpu(struct task_struct *k) 518 + { 519 + struct kthread *kthread = to_kthread(k); 520 + if (!kthread) 521 + return false; 522 + 523 + return test_bit(KTHREAD_IS_PER_CPU, &kthread->flags); 499 524 } 500 525 501 526 /**
+7 -2
kernel/locking/lockdep.c
··· 79 79 DEFINE_PER_CPU(unsigned int, lockdep_recursion); 80 80 EXPORT_PER_CPU_SYMBOL_GPL(lockdep_recursion); 81 81 82 - static inline bool lockdep_enabled(void) 82 + static __always_inline bool lockdep_enabled(void) 83 83 { 84 84 if (!debug_locks) 85 85 return false; ··· 5271 5271 /* 5272 5272 * Check whether we follow the irq-flags state precisely: 5273 5273 */ 5274 - static void check_flags(unsigned long flags) 5274 + static noinstr void check_flags(unsigned long flags) 5275 5275 { 5276 5276 #if defined(CONFIG_PROVE_LOCKING) && defined(CONFIG_DEBUG_LOCKDEP) 5277 5277 if (!debug_locks) 5278 5278 return; 5279 + 5280 + /* Get the warning out.. */ 5281 + instrumentation_begin(); 5279 5282 5280 5283 if (irqs_disabled_flags(flags)) { 5281 5284 if (DEBUG_LOCKS_WARN_ON(lockdep_hardirqs_enabled())) { ··· 5307 5304 5308 5305 if (!debug_locks) 5309 5306 print_irqtrace_events(current); 5307 + 5308 + instrumentation_end(); 5310 5309 #endif 5311 5310 } 5312 5311
+1 -2
kernel/locking/rtmutex.c
··· 1716 1716 * possible because it belongs to the pi_state which is about to be freed 1717 1717 * and it is not longer visible to other tasks. 1718 1718 */ 1719 - void rt_mutex_proxy_unlock(struct rt_mutex *lock, 1720 - struct task_struct *proxy_owner) 1719 + void rt_mutex_proxy_unlock(struct rt_mutex *lock) 1721 1720 { 1722 1721 debug_rt_mutex_proxy_unlock(lock); 1723 1722 rt_mutex_set_owner(lock, NULL);
+1 -2
kernel/locking/rtmutex_common.h
··· 133 133 extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock); 134 134 extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, 135 135 struct task_struct *proxy_owner); 136 - extern void rt_mutex_proxy_unlock(struct rt_mutex *lock, 137 - struct task_struct *proxy_owner); 136 + extern void rt_mutex_proxy_unlock(struct rt_mutex *lock); 138 137 extern void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter); 139 138 extern int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, 140 139 struct rt_mutex_waiter *waiter,
+29 -11
kernel/printk/printk.c
··· 1291 1291 * done: 1292 1292 * 1293 1293 * - Add prefix for each line. 1294 + * - Drop truncated lines that no longer fit into the buffer. 1294 1295 * - Add the trailing newline that has been removed in vprintk_store(). 1295 - * - Drop truncated lines that do not longer fit into the buffer. 1296 + * - Add a string terminator. 1297 + * 1298 + * Since the produced string is always terminated, the maximum possible 1299 + * return value is @r->text_buf_size - 1; 1296 1300 * 1297 1301 * Return: The length of the updated/prepared text, including the added 1298 - * prefixes and the newline. The dropped line(s) are not counted. 1302 + * prefixes and the newline. The terminator is not counted. The dropped 1303 + * line(s) are not counted. 1299 1304 */ 1300 1305 static size_t record_print_text(struct printk_record *r, bool syslog, 1301 1306 bool time) ··· 1343 1338 1344 1339 /* 1345 1340 * Truncate the text if there is not enough space to add the 1346 - * prefix and a trailing newline. 1341 + * prefix and a trailing newline and a terminator. 1347 1342 */ 1348 - if (len + prefix_len + text_len + 1 > buf_size) { 1343 + if (len + prefix_len + text_len + 1 + 1 > buf_size) { 1349 1344 /* Drop even the current line if no space. */ 1350 - if (len + prefix_len + line_len + 1 > buf_size) 1345 + if (len + prefix_len + line_len + 1 + 1 > buf_size) 1351 1346 break; 1352 1347 1353 - text_len = buf_size - len - prefix_len - 1; 1348 + text_len = buf_size - len - prefix_len - 1 - 1; 1354 1349 truncated = true; 1355 1350 } 1356 1351 1357 1352 memmove(text + prefix_len, text, text_len); 1358 1353 memcpy(text, prefix, prefix_len); 1359 1354 1355 + /* 1356 + * Increment the prepared length to include the text and 1357 + * prefix that were just moved+copied. Also increment for the 1358 + * newline at the end of this line. If this is the last line, 1359 + * there is no newline, but it will be added immediately below. 
1360 + */ 1360 1361 len += prefix_len + line_len + 1; 1361 - 1362 1362 if (text_len == line_len) { 1363 1363 /* 1364 - * Add the trailing newline removed in 1365 - * vprintk_store(). 1364 + * This is the last line. Add the trailing newline 1365 + * removed in vprintk_store(). 1366 1366 */ 1367 1367 text[prefix_len + line_len] = '\n'; 1368 1368 break; ··· 1391 1381 */ 1392 1382 text_len -= line_len + 1; 1393 1383 } 1384 + 1385 + /* 1386 + * If a buffer was provided, it will be terminated. Space for the 1387 + * string terminator is guaranteed to be available. The terminator is 1388 + * not counted in the return value. 1389 + */ 1390 + if (buf_size > 0) 1391 + r->text_buf[len] = 0; 1394 1392 1395 1393 return len; 1396 1394 } ··· 3445 3427 while (prb_read_valid_info(prb, seq, &info, &line_count)) { 3446 3428 if (r.info->seq >= dumper->next_seq) 3447 3429 break; 3448 - l += get_record_print_text_size(&info, line_count, true, time); 3430 + l += get_record_print_text_size(&info, line_count, syslog, time); 3449 3431 seq = r.info->seq + 1; 3450 3432 } 3451 3433 ··· 3455 3437 &info, &line_count)) { 3456 3438 if (r.info->seq >= dumper->next_seq) 3457 3439 break; 3458 - l -= get_record_print_text_size(&info, line_count, true, time); 3440 + l -= get_record_print_text_size(&info, line_count, syslog, time); 3459 3441 seq = r.info->seq + 1; 3460 3442 } 3461 3443
+1 -1
kernel/printk/printk_ringbuffer.c
··· 1718 1718 1719 1719 /* Caller interested in the line count? */ 1720 1720 if (line_count) 1721 - *line_count = count_lines(data, data_size); 1721 + *line_count = count_lines(data, len); 1722 1722 1723 1723 /* Caller interested in the data content? */ 1724 1724 if (!buf || !buf_size)
+88 -23
kernel/sched/core.c
··· 1796 1796 */ 1797 1797 static inline bool is_cpu_allowed(struct task_struct *p, int cpu) 1798 1798 { 1799 + /* When not in the task's cpumask, no point in looking further. */ 1799 1800 if (!cpumask_test_cpu(cpu, p->cpus_ptr)) 1800 1801 return false; 1801 1802 1802 - if (is_per_cpu_kthread(p) || is_migration_disabled(p)) 1803 + /* migrate_disabled() must be allowed to finish. */ 1804 + if (is_migration_disabled(p)) 1803 1805 return cpu_online(cpu); 1804 1806 1805 - return cpu_active(cpu); 1807 + /* Non kernel threads are not allowed during either online or offline. */ 1808 + if (!(p->flags & PF_KTHREAD)) 1809 + return cpu_active(cpu); 1810 + 1811 + /* KTHREAD_IS_PER_CPU is always allowed. */ 1812 + if (kthread_is_per_cpu(p)) 1813 + return cpu_online(cpu); 1814 + 1815 + /* Regular kernel threads don't get to stay during offline. */ 1816 + if (cpu_rq(cpu)->balance_push) 1817 + return false; 1818 + 1819 + /* But are allowed during online. */ 1820 + return cpu_online(cpu); 1806 1821 } 1807 1822 1808 1823 /* ··· 2342 2327 2343 2328 if (p->flags & PF_KTHREAD || is_migration_disabled(p)) { 2344 2329 /* 2345 - * Kernel threads are allowed on online && !active CPUs. 2330 + * Kernel threads are allowed on online && !active CPUs, 2331 + * however, during cpu-hot-unplug, even these might get pushed 2332 + * away if not KTHREAD_IS_PER_CPU. 2346 2333 * 2347 2334 * Specifically, migration_disabled() tasks must not fail the 2348 2335 * cpumask_any_and_distribute() pick below, esp. so on ··· 2387 2370 } 2388 2371 2389 2372 __do_set_cpus_allowed(p, new_mask, flags); 2390 - 2391 - if (p->flags & PF_KTHREAD) { 2392 - /* 2393 - * For kernel threads that do indeed end up on online && 2394 - * !active we want to ensure they are strict per-CPU threads. 
2395 - */ 2396 - WARN_ON(cpumask_intersects(new_mask, cpu_online_mask) && 2397 - !cpumask_intersects(new_mask, cpu_active_mask) && 2398 - p->nr_cpus_allowed != 1); 2399 - } 2400 2373 2401 2374 return affine_move_task(rq, p, &rf, dest_cpu, flags); 2402 2375 ··· 3128 3121 3129 3122 static inline bool ttwu_queue_cond(int cpu, int wake_flags) 3130 3123 { 3124 + /* 3125 + * Do not complicate things with the async wake_list while the CPU is 3126 + * in hotplug state. 3127 + */ 3128 + if (!cpu_active(cpu)) 3129 + return false; 3130 + 3131 3131 /* 3132 3132 * If the CPU does not share cache, then queue the task on the 3133 3133 * remote rqs wakelist to avoid accessing remote data. ··· 7290 7276 /* 7291 7277 * Both the cpu-hotplug and stop task are in this case and are 7292 7278 * required to complete the hotplug process. 7279 + * 7280 + * XXX: the idle task does not match kthread_is_per_cpu() due to 7281 + * histerical raisins. 7293 7282 */ 7294 - if (is_per_cpu_kthread(push_task) || is_migration_disabled(push_task)) { 7283 + if (rq->idle == push_task || 7284 + ((push_task->flags & PF_KTHREAD) && kthread_is_per_cpu(push_task)) || 7285 + is_migration_disabled(push_task)) { 7286 + 7295 7287 /* 7296 7288 * If this is the idle task on the outgoing CPU try to wake 7297 7289 * up the hotplug control thread which might wait for the ··· 7329 7309 /* 7330 7310 * At this point need_resched() is true and we'll take the loop in 7331 7311 * schedule(). The next pick is obviously going to be the stop task 7332 - * which is_per_cpu_kthread() and will push this task away. 7312 + * which kthread_is_per_cpu() and will push this task away. 
7333 7313 */ 7334 7314 raw_spin_lock(&rq->lock); 7335 7315 } ··· 7340 7320 struct rq_flags rf; 7341 7321 7342 7322 rq_lock_irqsave(rq, &rf); 7343 - if (on) 7323 + rq->balance_push = on; 7324 + if (on) { 7325 + WARN_ON_ONCE(rq->balance_callback); 7344 7326 rq->balance_callback = &balance_push_callback; 7345 - else 7327 + } else if (rq->balance_callback == &balance_push_callback) { 7346 7328 rq->balance_callback = NULL; 7329 + } 7347 7330 rq_unlock_irqrestore(rq, &rf); 7348 7331 } 7349 7332 ··· 7464 7441 struct rq *rq = cpu_rq(cpu); 7465 7442 struct rq_flags rf; 7466 7443 7444 + /* 7445 + * Make sure that when the hotplug state machine does a roll-back 7446 + * we clear balance_push. Ideally that would happen earlier... 7447 + */ 7467 7448 balance_push_set(cpu, false); 7468 7449 7469 7450 #ifdef CONFIG_SCHED_SMT ··· 7510 7483 int ret; 7511 7484 7512 7485 set_cpu_active(cpu, false); 7486 + 7513 7487 /* 7514 - * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU 7515 - * users of this state to go away such that all new such users will 7516 - * observe it. 7488 + * From this point forward, this CPU will refuse to run any task that 7489 + * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively 7490 + * push those tasks away until this gets cleared, see 7491 + * sched_cpu_dying(). 7492 + */ 7493 + balance_push_set(cpu, true); 7494 + 7495 + /* 7496 + * We've cleared cpu_active_mask / set balance_push, wait for all 7497 + * preempt-disabled and RCU users of this state to go away such that 7498 + * all new such users will observe it. 7499 + * 7500 + * Specifically, we rely on ttwu to no longer target this CPU, see 7501 + * ttwu_queue_cond() and is_cpu_allowed(). 7517 7502 * 7518 7503 * Do sync before park smpboot threads to take care the rcu boost case. 
7519 7504 */ 7520 7505 synchronize_rcu(); 7521 - 7522 - balance_push_set(cpu, true); 7523 7506 7524 7507 rq_lock_irqsave(rq, &rf); 7525 7508 if (rq->rd) { ··· 7611 7574 atomic_long_add(delta, &calc_load_tasks); 7612 7575 } 7613 7576 7577 + static void dump_rq_tasks(struct rq *rq, const char *loglvl) 7578 + { 7579 + struct task_struct *g, *p; 7580 + int cpu = cpu_of(rq); 7581 + 7582 + lockdep_assert_held(&rq->lock); 7583 + 7584 + printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running); 7585 + for_each_process_thread(g, p) { 7586 + if (task_cpu(p) != cpu) 7587 + continue; 7588 + 7589 + if (!task_on_rq_queued(p)) 7590 + continue; 7591 + 7592 + printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm); 7593 + } 7594 + } 7595 + 7614 7596 int sched_cpu_dying(unsigned int cpu) 7615 7597 { 7616 7598 struct rq *rq = cpu_rq(cpu); ··· 7639 7583 sched_tick_stop(cpu); 7640 7584 7641 7585 rq_lock_irqsave(rq, &rf); 7642 - BUG_ON(rq->nr_running != 1 || rq_has_pinned_tasks(rq)); 7586 + if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) { 7587 + WARN(true, "Dying CPU not properly vacated!"); 7588 + dump_rq_tasks(rq, KERN_WARNING); 7589 + } 7643 7590 rq_unlock_irqrestore(rq, &rf); 7591 + 7592 + /* 7593 + * Now that the CPU is offline, make sure we're welcome 7594 + * to new tasks once we come back up. 7595 + */ 7596 + balance_push_set(cpu, false); 7644 7597 7645 7598 calc_load_migrate(rq); 7646 7599 update_max_interval();
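The reworked `is_cpu_allowed()` above becomes a priority-ordered decision ladder: cpumask first, then migration-disabled tasks, user tasks, per-CPU kthreads, and finally regular kthreads gated by `balance_push`. A userspace sketch of that ordering with the state flattened into plain booleans — both structs are hypothetical stand-ins, not kernel types:

```c
#include <assert.h>
#include <stdbool.h>

/* Flattened view of the checks; field names are illustrative. */
struct task_view {
    bool in_cpumask;        /* cpu set in p->cpus_ptr */
    bool migration_disabled;
    bool is_kthread;        /* p->flags & PF_KTHREAD */
    bool kthread_per_cpu;   /* kthread_is_per_cpu(p) */
};

struct cpu_view {
    bool online;
    bool active;
    bool balance_push;      /* rq->balance_push set during hot-unplug */
};

static bool cpu_allowed(const struct task_view *p, const struct cpu_view *c)
{
    if (!p->in_cpumask)
        return false;
    /* migrate_disable() regions must be allowed to finish. */
    if (p->migration_disabled)
        return c->online;
    /* User tasks need a fully active CPU. */
    if (!p->is_kthread)
        return c->active;
    /* KTHREAD_IS_PER_CPU is always allowed on an online CPU. */
    if (p->kthread_per_cpu)
        return c->online;
    /* Regular kthreads get pushed away while the CPU goes down. */
    if (c->balance_push)
        return false;
    return c->online;
}
```

The ordering is the point: a per-CPU kthread on a dying (online but inactive) CPU stays allowed, while an ordinary kthread on the same CPU is refused once `balance_push` is set.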
+1
kernel/sched/sched.h
··· 975 975 unsigned long cpu_capacity_orig; 976 976 977 977 struct callback_head *balance_callback; 978 + unsigned char balance_push; 978 979 979 980 unsigned char nohz_idle_balance; 980 981 unsigned char idle_balance;
+2 -1
kernel/signal.c
··· 3704 3704 return true; 3705 3705 } 3706 3706 3707 - static int copy_siginfo_from_user_any(kernel_siginfo_t *kinfo, siginfo_t *info) 3707 + static int copy_siginfo_from_user_any(kernel_siginfo_t *kinfo, 3708 + siginfo_t __user *info) 3708 3709 { 3709 3710 #ifdef CONFIG_COMPAT 3710 3711 /*
+1
kernel/smpboot.c
··· 188 188 kfree(td); 189 189 return PTR_ERR(tsk); 190 190 } 191 + kthread_set_per_cpu(tsk, cpu); 191 192 /* 192 193 * Park the thread so that it could start right on the CPU 193 194 * when it is available.
+2 -2
kernel/time/ntp.c
··· 498 498 static void sync_hw_clock(struct work_struct *work); 499 499 static DECLARE_WORK(sync_work, sync_hw_clock); 500 500 static struct hrtimer sync_hrtimer; 501 - #define SYNC_PERIOD_NS (11UL * 60 * NSEC_PER_SEC) 501 + #define SYNC_PERIOD_NS (11ULL * 60 * NSEC_PER_SEC) 502 502 503 503 static enum hrtimer_restart sync_timer_callback(struct hrtimer *timer) 504 504 { ··· 512 512 ktime_t exp = ktime_set(ktime_get_real_seconds(), 0); 513 513 514 514 if (retry) 515 - exp = ktime_add_ns(exp, 2 * NSEC_PER_SEC - offset_nsec); 515 + exp = ktime_add_ns(exp, 2ULL * NSEC_PER_SEC - offset_nsec); 516 516 else 517 517 exp = ktime_add_ns(exp, SYNC_PERIOD_NS - offset_nsec); 518 518
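The `11UL` to `11ULL` change above matters on 32-bit builds, where `unsigned long` is 32 bits: 11 minutes in nanoseconds (660 * 10^9) does not fit, so the constant silently wraps. A small demonstration using fixed-width types to mimic both widths:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000

/* 32-bit arithmetic, as "11UL * 60 * NSEC_PER_SEC" behaves on a 32-bit box. */
static uint32_t sync_period_32(void)
{
    uint32_t period = 11UL * 60;              /* 660 seconds */
    return period * (uint32_t)NSEC_PER_SEC;   /* wraps mod 2^32 */
}

/* 64-bit arithmetic, as with the fixed "11ULL * 60 * NSEC_PER_SEC". */
static uint64_t sync_period_64(void)
{
    return 11ULL * 60 * NSEC_PER_SEC;
}
```

Since ktime_t values are 64-bit, the ULL suffix keeps the whole expression in 64-bit arithmetic regardless of the platform's `unsigned long` width; the same reasoning applies to the `2ULL * NSEC_PER_SEC` retry offset.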
+1 -2
kernel/time/timekeeping.c
··· 991 991 /** 992 992 * ktime_get_real_seconds - Get the seconds portion of CLOCK_REALTIME 993 993 * 994 - * Returns the wall clock seconds since 1970. This replaces the 995 - * get_seconds() interface which is not y2038 safe on 32bit systems. 994 + * Returns the wall clock seconds since 1970. 996 995 * 997 996 * For 64bit systems the fast access to tk->xtime_sec is preserved. On 998 997 * 32bit systems the access must be protected with the sequence
+13 -9
kernel/workqueue.c
··· 1849 1849 mutex_lock(&wq_pool_attach_mutex); 1850 1850 1851 1851 /* 1852 - * set_cpus_allowed_ptr() will fail if the cpumask doesn't have any 1853 - * online CPUs. It'll be re-applied when any of the CPUs come up. 1854 - */ 1855 - set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask); 1856 - 1857 - /* 1858 1852 * The wq_pool_attach_mutex ensures %POOL_DISASSOCIATED remains 1859 1853 * stable across this function. See the comments above the flag 1860 1854 * definition for details. 1861 1855 */ 1862 1856 if (pool->flags & POOL_DISASSOCIATED) 1863 1857 worker->flags |= WORKER_UNBOUND; 1858 + else 1859 + kthread_set_per_cpu(worker->task, pool->cpu); 1860 + 1861 + if (worker->rescue_wq) 1862 + set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask); 1864 1863 1865 1864 list_add_tail(&worker->node, &pool->workers); 1866 1865 worker->pool = pool; ··· 1882 1883 1883 1884 mutex_lock(&wq_pool_attach_mutex); 1884 1885 1886 + kthread_set_per_cpu(worker->task, -1); 1885 1887 list_del(&worker->node); 1886 1888 worker->pool = NULL; 1887 1889 ··· 4919 4919 4920 4920 raw_spin_unlock_irq(&pool->lock); 4921 4921 4922 - for_each_pool_worker(worker, pool) 4923 - WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0); 4922 + for_each_pool_worker(worker, pool) { 4923 + kthread_set_per_cpu(worker->task, -1); 4924 + WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0); 4925 + } 4924 4926 4925 4927 mutex_unlock(&wq_pool_attach_mutex); 4926 4928 ··· 4974 4972 * of all workers first and then clear UNBOUND. As we're called 4975 4973 * from CPU_ONLINE, the following shouldn't fail. 4976 4974 */ 4977 - for_each_pool_worker(worker, pool) 4975 + for_each_pool_worker(worker, pool) { 4976 + kthread_set_per_cpu(worker->task, pool->cpu); 4978 4977 WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, 4979 4978 pool->attrs->cpumask) < 0); 4979 + } 4980 4980 4981 4981 raw_spin_lock_irq(&pool->lock); 4982 4982
+1
lib/Kconfig.ubsan
··· 123 123 config UBSAN_UNSIGNED_OVERFLOW 124 124 bool "Perform checking for unsigned arithmetic overflow" 125 125 depends on $(cc-option,-fsanitize=unsigned-integer-overflow) 126 + depends on !X86_32 # avoid excessive stack usage on x86-32/clang 126 127 help 127 128 This option enables -fsanitize=unsigned-integer-overflow which checks 128 129 for overflow of any arithmetic operations with unsigned integers. This
+6 -1
mm/highmem.c
··· 473 473 } 474 474 #endif 475 475 476 + #ifndef arch_kmap_local_set_pte 477 + #define arch_kmap_local_set_pte(mm, vaddr, ptep, ptev) \ 478 + set_pte_at(mm, vaddr, ptep, ptev) 479 + #endif 480 + 476 481 /* Unmap a local mapping which was obtained by kmap_high_get() */ 477 482 static inline bool kmap_high_unmap_local(unsigned long vaddr) 478 483 { ··· 520 515 vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); 521 516 BUG_ON(!pte_none(*(kmap_pte - idx))); 522 517 pteval = pfn_pte(pfn, prot); 523 - set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval); 518 + arch_kmap_local_set_pte(&init_mm, vaddr, kmap_pte - idx, pteval); 524 519 arch_kmap_local_post_map(vaddr, pteval); 525 520 current->kmap_ctrl.pteval[kmap_local_idx()] = pteval; 526 521 preempt_enable();
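The `#ifndef arch_kmap_local_set_pte` hunk above uses the common kernel idiom of a generic default macro that an architecture header may pre-define to override. A generic sketch of the pattern with hypothetical hook names (`ARCH_DOUBLE`, `ARCH_SCALE` are illustrative only):

```c
#include <assert.h>

/* An "architecture" that needs its own hook defines it first: */
#define ARCH_DOUBLE(x) ((x) * 2)

#ifndef ARCH_DOUBLE              /* skipped: arch already provided it */
#define ARCH_DOUBLE(x) (x)
#endif

#ifndef ARCH_SCALE               /* taken: generic default applies */
#define ARCH_SCALE(x) (x)
#endif

static int via_arch(int x)    { return ARCH_DOUBLE(x); }
static int via_default(int x) { return ARCH_SCALE(x); }
```

Generic code calls the macro unconditionally; whether the arch override or the default (here, `set_pte_at()` in the real hunk) runs is decided entirely at preprocessing time, with no runtime indirection.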
+32 -45
mm/kasan/hw_tags.c
··· 19 19 20 20 #include "kasan.h" 21 21 22 - enum kasan_arg_mode { 23 - KASAN_ARG_MODE_DEFAULT, 24 - KASAN_ARG_MODE_OFF, 25 - KASAN_ARG_MODE_PROD, 26 - KASAN_ARG_MODE_FULL, 22 + enum kasan_arg { 23 + KASAN_ARG_DEFAULT, 24 + KASAN_ARG_OFF, 25 + KASAN_ARG_ON, 27 26 }; 28 27 29 28 enum kasan_arg_stacktrace { ··· 37 38 KASAN_ARG_FAULT_PANIC, 38 39 }; 39 40 40 - static enum kasan_arg_mode kasan_arg_mode __ro_after_init; 41 + static enum kasan_arg kasan_arg __ro_after_init; 41 42 static enum kasan_arg_stacktrace kasan_arg_stacktrace __ro_after_init; 42 43 static enum kasan_arg_fault kasan_arg_fault __ro_after_init; 43 44 ··· 51 52 /* Whether panic or disable tag checking on fault. */ 52 53 bool kasan_flag_panic __ro_after_init; 53 54 54 - /* kasan.mode=off/prod/full */ 55 - static int __init early_kasan_mode(char *arg) 55 + /* kasan=off/on */ 56 + static int __init early_kasan_flag(char *arg) 56 57 { 57 58 if (!arg) 58 59 return -EINVAL; 59 60 60 61 if (!strcmp(arg, "off")) 61 - kasan_arg_mode = KASAN_ARG_MODE_OFF; 62 - else if (!strcmp(arg, "prod")) 63 - kasan_arg_mode = KASAN_ARG_MODE_PROD; 64 - else if (!strcmp(arg, "full")) 65 - kasan_arg_mode = KASAN_ARG_MODE_FULL; 62 + kasan_arg = KASAN_ARG_OFF; 63 + else if (!strcmp(arg, "on")) 64 + kasan_arg = KASAN_ARG_ON; 66 65 else 67 66 return -EINVAL; 68 67 69 68 return 0; 70 69 } 71 - early_param("kasan.mode", early_kasan_mode); 70 + early_param("kasan", early_kasan_flag); 72 71 73 - /* kasan.stack=off/on */ 72 + /* kasan.stacktrace=off/on */ 74 73 static int __init early_kasan_flag_stacktrace(char *arg) 75 74 { 76 75 if (!arg) ··· 110 113 * as this function is only called for MTE-capable hardware. 111 114 */ 112 115 113 - /* If KASAN is disabled, do nothing. */ 114 - if (kasan_arg_mode == KASAN_ARG_MODE_OFF) 116 + /* If KASAN is disabled via command line, don't initialize it. 
*/ 117 + if (kasan_arg == KASAN_ARG_OFF) 115 118 return; 116 119 117 120 hw_init_tags(KASAN_TAG_MAX); ··· 121 124 /* kasan_init_hw_tags() is called once on boot CPU. */ 122 125 void __init kasan_init_hw_tags(void) 123 126 { 124 - /* If hardware doesn't support MTE, do nothing. */ 127 + /* If hardware doesn't support MTE, don't initialize KASAN. */ 125 128 if (!system_supports_mte()) 126 129 return; 127 130 128 - /* Choose KASAN mode if kasan boot parameter is not provided. */ 129 - if (kasan_arg_mode == KASAN_ARG_MODE_DEFAULT) { 130 - if (IS_ENABLED(CONFIG_DEBUG_KERNEL)) 131 - kasan_arg_mode = KASAN_ARG_MODE_FULL; 132 - else 133 - kasan_arg_mode = KASAN_ARG_MODE_PROD; 134 - } 135 - 136 - /* Preset parameter values based on the mode. */ 137 - switch (kasan_arg_mode) { 138 - case KASAN_ARG_MODE_DEFAULT: 139 - /* Shouldn't happen as per the check above. */ 140 - WARN_ON(1); 131 + /* If KASAN is disabled via command line, don't initialize it. */ 132 + if (kasan_arg == KASAN_ARG_OFF) 141 133 return; 142 - case KASAN_ARG_MODE_OFF: 143 - /* If KASAN is disabled, do nothing. */ 144 - return; 145 - case KASAN_ARG_MODE_PROD: 146 - static_branch_enable(&kasan_flag_enabled); 147 - break; 148 - case KASAN_ARG_MODE_FULL: 149 - static_branch_enable(&kasan_flag_enabled); 150 - static_branch_enable(&kasan_flag_stacktrace); 151 - break; 152 - } 153 134 154 - /* Now, optionally override the presets. */ 135 + /* Enable KASAN. */ 136 + static_branch_enable(&kasan_flag_enabled); 155 137 156 138 switch (kasan_arg_stacktrace) { 157 139 case KASAN_ARG_STACKTRACE_DEFAULT: 140 + /* 141 + * Default to enabling stack trace collection for 142 + * debug kernels. 143 + */ 144 + if (IS_ENABLED(CONFIG_DEBUG_KERNEL)) 145 + static_branch_enable(&kasan_flag_stacktrace); 158 146 break; 159 147 case KASAN_ARG_STACKTRACE_OFF: 160 - static_branch_disable(&kasan_flag_stacktrace); 148 + /* Do nothing, kasan_flag_stacktrace keeps its default value. 
*/ 161 149 break; 162 150 case KASAN_ARG_STACKTRACE_ON: 163 151 static_branch_enable(&kasan_flag_stacktrace); ··· 151 169 152 170 switch (kasan_arg_fault) { 153 171 case KASAN_ARG_FAULT_DEFAULT: 172 + /* 173 + * Default to no panic on report. 174 + * Do nothing, kasan_flag_panic keeps its default value. 175 + */ 154 176 break; 155 177 case KASAN_ARG_FAULT_REPORT: 156 - kasan_flag_panic = false; 178 + /* Do nothing, kasan_flag_panic keeps its default value. */ 157 179 break; 158 180 case KASAN_ARG_FAULT_PANIC: 181 + /* Enable panic on report. */ 159 182 kasan_flag_panic = true; 160 183 break; 161 184 }
+13 -10
mm/kasan/init.c
··· 373 373 374 374 if (kasan_pte_table(*pmd)) { 375 375 if (IS_ALIGNED(addr, PMD_SIZE) && 376 - IS_ALIGNED(next, PMD_SIZE)) 376 + IS_ALIGNED(next, PMD_SIZE)) { 377 377 pmd_clear(pmd); 378 - continue; 378 + continue; 379 + } 379 380 } 380 381 pte = pte_offset_kernel(pmd, addr); 381 382 kasan_remove_pte_table(pte, addr, next); ··· 399 398 400 399 if (kasan_pmd_table(*pud)) { 401 400 if (IS_ALIGNED(addr, PUD_SIZE) && 402 - IS_ALIGNED(next, PUD_SIZE)) 401 + IS_ALIGNED(next, PUD_SIZE)) { 403 402 pud_clear(pud); 404 - continue; 403 + continue; 404 + } 405 405 } 406 406 pmd = pmd_offset(pud, addr); 407 407 pmd_base = pmd_offset(pud, 0); ··· 426 424 427 425 if (kasan_pud_table(*p4d)) { 428 426 if (IS_ALIGNED(addr, P4D_SIZE) && 429 - IS_ALIGNED(next, P4D_SIZE)) 427 + IS_ALIGNED(next, P4D_SIZE)) { 430 428 p4d_clear(p4d); 431 - continue; 429 + continue; 430 + } 432 431 } 433 432 pud = pud_offset(p4d, addr); 434 433 kasan_remove_pud_table(pud, addr, next); ··· 460 457 461 458 if (kasan_p4d_table(*pgd)) { 462 459 if (IS_ALIGNED(addr, PGDIR_SIZE) && 463 - IS_ALIGNED(next, PGDIR_SIZE)) 460 + IS_ALIGNED(next, PGDIR_SIZE)) { 464 461 pgd_clear(pgd); 465 - continue; 462 + continue; 463 + } 466 464 } 467 465 468 466 p4d = p4d_offset(pgd, addr); ··· 486 482 487 483 ret = kasan_populate_early_shadow(shadow_start, shadow_end); 488 484 if (ret) 489 - kasan_remove_zero_shadow(shadow_start, 490 - size >> KASAN_SHADOW_SCALE_SHIFT); 485 + kasan_remove_zero_shadow(start, size); 491 486 return ret; 492 487 }
+1 -1
mm/memblock.c
··· 1427 1427 } 1428 1428 1429 1429 /** 1430 - * memblock_phys_alloc_try_nid - allocate a memory block from specified MUMA node 1430 + * memblock_phys_alloc_try_nid - allocate a memory block from specified NUMA node 1431 1431 * @size: size of memory block to be allocated in bytes 1432 1432 * @align: alignment of the region and block's size 1433 1433 * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
+1 -3
mm/memcontrol.c
··· 3115 3115 if (!cgroup_subsys_on_dfl(memory_cgrp_subsys)) 3116 3116 page_counter_uncharge(&memcg->kmem, nr_pages); 3117 3117 3118 - page_counter_uncharge(&memcg->memory, nr_pages); 3119 - if (do_memsw_account()) 3120 - page_counter_uncharge(&memcg->memsw, nr_pages); 3118 + refill_stock(memcg, nr_pages); 3121 3119 } 3122 3120 3123 3121 /**
+16 -4
mm/memory-failure.c
··· 1885 1885 return rc; 1886 1886 } 1887 1887 1888 + static void put_ref_page(struct page *page) 1889 + { 1890 + if (page) 1891 + put_page(page); 1892 + } 1893 + 1888 1894 /** 1889 1895 * soft_offline_page - Soft offline a page. 1890 1896 * @pfn: pfn to soft-offline ··· 1916 1910 int soft_offline_page(unsigned long pfn, int flags) 1917 1911 { 1918 1912 int ret; 1919 - struct page *page; 1920 1913 bool try_again = true; 1914 + struct page *page, *ref_page = NULL; 1915 + 1916 + WARN_ON_ONCE(!pfn_valid(pfn) && (flags & MF_COUNT_INCREASED)); 1921 1917 1922 1918 if (!pfn_valid(pfn)) 1923 1919 return -ENXIO; 1920 + if (flags & MF_COUNT_INCREASED) 1921 + ref_page = pfn_to_page(pfn); 1922 + 1924 1923 /* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */ 1925 1924 page = pfn_to_online_page(pfn); 1926 - if (!page) 1925 + if (!page) { 1926 + put_ref_page(ref_page); 1927 1927 return -EIO; 1928 + } 1928 1929 1929 1930 if (PageHWPoison(page)) { 1930 1931 pr_info("%s: %#lx page already poisoned\n", __func__, pfn); 1931 - if (flags & MF_COUNT_INCREASED) 1932 - put_page(page); 1932 + put_ref_page(ref_page); 1933 1933 return 0; 1934 1934 } 1935 1935
+12 -11
mm/migrate.c
··· 402 402 struct zone *oldzone, *newzone; 403 403 int dirty; 404 404 int expected_count = expected_page_refs(mapping, page) + extra_count; 405 + int nr = thp_nr_pages(page); 405 406 406 407 if (!mapping) { 407 408 /* Anonymous page without mapping */ ··· 438 437 */ 439 438 newpage->index = page->index; 440 439 newpage->mapping = page->mapping; 441 - page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */ 440 + page_ref_add(newpage, nr); /* add cache reference */ 442 441 if (PageSwapBacked(page)) { 443 442 __SetPageSwapBacked(newpage); 444 443 if (PageSwapCache(page)) { ··· 460 459 if (PageTransHuge(page)) { 461 460 int i; 462 461 463 - for (i = 1; i < HPAGE_PMD_NR; i++) { 462 + for (i = 1; i < nr; i++) { 464 463 xas_next(&xas); 465 464 xas_store(&xas, newpage); 466 465 } ··· 471 470 * to one less reference. 472 471 * We know this isn't the last reference. 473 472 */ 474 - page_ref_unfreeze(page, expected_count - thp_nr_pages(page)); 473 + page_ref_unfreeze(page, expected_count - nr); 475 474 476 475 xas_unlock(&xas); 477 476 /* Leave irq disabled to prevent preemption while updating stats */ ··· 494 493 old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat); 495 494 new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat); 496 495 497 - __dec_lruvec_state(old_lruvec, NR_FILE_PAGES); 498 - __inc_lruvec_state(new_lruvec, NR_FILE_PAGES); 496 + __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr); 497 + __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr); 499 498 if (PageSwapBacked(page) && !PageSwapCache(page)) { 500 - __dec_lruvec_state(old_lruvec, NR_SHMEM); 501 - __inc_lruvec_state(new_lruvec, NR_SHMEM); 499 + __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr); 500 + __mod_lruvec_state(new_lruvec, NR_SHMEM, nr); 502 501 } 503 502 if (dirty && mapping_can_writeback(mapping)) { 504 - __dec_node_state(oldzone->zone_pgdat, NR_FILE_DIRTY); 505 - __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING); 506 - __inc_node_state(newzone->zone_pgdat, NR_FILE_DIRTY); 507 
- __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING); 503 + __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr); 504 + __mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr); 505 + __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr); 506 + __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr); 508 507 } 509 508 } 510 509 local_irq_enable();
+2
mm/page_alloc.c
··· 1207 1207 /* s390's use of memset() could override KASAN redzones. */ 1208 1208 kasan_disable_current(); 1209 1209 for (i = 0; i < numpages; i++) { 1210 + u8 tag = page_kasan_tag(page + i); 1210 1211 page_kasan_tag_reset(page + i); 1211 1212 clear_highpage(page + i); 1213 + page_kasan_tag_set(page + i, tag); 1212 1214 } 1213 1215 kasan_enable_current(); 1214 1216 }
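The page_alloc.c hunk above changes an unconditional tag reset into a save/reset/restore sequence around the clear, so the page's KASAN tag survives the `memset()`-based `clear_highpage()`. A userspace sketch of that bracketing — `struct page` and the helpers here are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for a page with a KASAN memory tag. */
struct page { uint8_t tag; char data[64]; };

#define KASAN_TAG_KERNEL 0xff   /* "match-all" value used while reset */

static void clear_page_preserving_tag(struct page *pg)
{
    uint8_t tag = pg->tag;            /* save the current tag */
    pg->tag = KASAN_TAG_KERNEL;       /* reset so memset can't trip KASAN */
    memset(pg->data, 0, sizeof(pg->data));
    pg->tag = tag;                    /* restore, as the hunk now does */
}
```

Without the restore, every cleared page would lose its tag and later tag-checked accesses would be checked against the wrong value; the reset is still needed because s390's `memset()` could otherwise override KASAN redzones, as the surrounding comment notes.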
+5 -6
mm/slub.c
··· 2791 2791 void *obj) 2792 2792 { 2793 2793 if (unlikely(slab_want_init_on_free(s)) && obj) 2794 - memset((void *)((char *)obj + s->offset), 0, sizeof(void *)); 2794 + memset((void *)((char *)kasan_reset_tag(obj) + s->offset), 2795 + 0, sizeof(void *)); 2795 2796 } 2796 2797 2797 2798 /* ··· 2884 2883 stat(s, ALLOC_FASTPATH); 2885 2884 } 2886 2885 2887 - maybe_wipe_obj_freeptr(s, kasan_reset_tag(object)); 2886 + maybe_wipe_obj_freeptr(s, object); 2888 2887 2889 2888 if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object) 2890 2889 memset(kasan_reset_tag(object), 0, s->object_size); ··· 3330 3329 int j; 3331 3330 3332 3331 for (j = 0; j < i; j++) 3333 - memset(p[j], 0, s->object_size); 3332 + memset(kasan_reset_tag(p[j]), 0, s->object_size); 3334 3333 } 3335 3334 3336 3335 /* memcg and kmem_cache debug support */ ··· 5625 5624 5626 5625 s->kobj.kset = kset; 5627 5626 err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name); 5628 - if (err) { 5629 - kobject_put(&s->kobj); 5627 + if (err) 5630 5628 goto out; 5631 - } 5632 5629 5633 5630 err = sysfs_create_group(&s->kobj, &slab_attr_group); 5634 5631 if (err)
+29
net/bridge/br_private_mrp.h
··· 88 88 int br_mrp_ring_port_open(struct net_device *dev, u8 loc); 89 89 int br_mrp_in_port_open(struct net_device *dev, u8 loc); 90 90 91 + /* MRP protocol data units */ 92 + struct br_mrp_tlv_hdr { 93 + __u8 type; 94 + __u8 length; 95 + }; 96 + 97 + struct br_mrp_common_hdr { 98 + __be16 seq_id; 99 + __u8 domain[MRP_DOMAIN_UUID_LENGTH]; 100 + }; 101 + 102 + struct br_mrp_ring_test_hdr { 103 + __be16 prio; 104 + __u8 sa[ETH_ALEN]; 105 + __be16 port_role; 106 + __be16 state; 107 + __be16 transitions; 108 + __be32 timestamp; 109 + } __attribute__((__packed__)); 110 + 111 + struct br_mrp_in_test_hdr { 112 + __be16 id; 113 + __u8 sa[ETH_ALEN]; 114 + __be16 port_role; 115 + __be16 state; 116 + __be16 transitions; 117 + __be32 timestamp; 118 + } __attribute__((__packed__)); 119 + 91 120 #endif /* _BR_PRIVATE_MRP_H */
+34 -23
net/ceph/auth_x.c
··· 569 569 return -ERANGE; 570 570 } 571 571 572 + static int decode_con_secret(void **p, void *end, u8 *con_secret, 573 + int *con_secret_len) 574 + { 575 + int len; 576 + 577 + ceph_decode_32_safe(p, end, len, bad); 578 + ceph_decode_need(p, end, len, bad); 579 + 580 + dout("%s len %d\n", __func__, len); 581 + if (con_secret) { 582 + if (len > CEPH_MAX_CON_SECRET_LEN) { 583 + pr_err("connection secret too big %d\n", len); 584 + goto bad_memzero; 585 + } 586 + memcpy(con_secret, *p, len); 587 + *con_secret_len = len; 588 + } 589 + memzero_explicit(*p, len); 590 + *p += len; 591 + return 0; 592 + 593 + bad_memzero: 594 + memzero_explicit(*p, len); 595 + bad: 596 + pr_err("failed to decode connection secret\n"); 597 + return -EINVAL; 598 + } 599 + 572 600 static int handle_auth_session_key(struct ceph_auth_client *ac, 573 601 void **p, void *end, 574 602 u8 *session_key, int *session_key_len, ··· 640 612 dout("%s decrypted %d bytes\n", __func__, ret); 641 613 dend = dp + ret; 642 614 643 - ceph_decode_32_safe(&dp, dend, len, e_inval); 644 - if (len > CEPH_MAX_CON_SECRET_LEN) { 645 - pr_err("connection secret too big %d\n", len); 646 - return -EINVAL; 647 - } 648 - 649 - dout("%s connection secret len %d\n", __func__, len); 650 - if (con_secret) { 651 - memcpy(con_secret, dp, len); 652 - *con_secret_len = len; 653 - } 615 + ret = decode_con_secret(&dp, dend, con_secret, con_secret_len); 616 + if (ret) 617 + return ret; 654 618 } 655 619 656 620 /* service tickets */ ··· 848 828 { 849 829 void *dp, *dend; 850 830 u8 struct_v; 851 - int len; 852 831 int ret; 853 832 854 833 dp = *p + ceph_x_encrypt_offset(); ··· 862 843 ceph_decode_64_safe(&dp, dend, *nonce_plus_one, e_inval); 863 844 dout("%s nonce_plus_one %llu\n", __func__, *nonce_plus_one); 864 845 if (struct_v >= 2) { 865 - ceph_decode_32_safe(&dp, dend, len, e_inval); 866 - if (len > CEPH_MAX_CON_SECRET_LEN) { 867 - pr_err("connection secret too big %d\n", len); 868 - return -EINVAL; 869 - } 870 - 871 - dout("%s 
connection secret len %d\n", __func__, len); 872 - if (con_secret) { 873 - memcpy(con_secret, dp, len); 874 - *con_secret_len = len; 875 - } 846 + ret = decode_con_secret(&dp, dend, con_secret, con_secret_len); 847 + if (ret) 848 + return ret; 876 849 } 877 850 878 851 return 0;
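The `decode_con_secret()` helper factored out above bounds-checks a length-prefixed secret, copies it out, and zeroes the source bytes on both the success and the over-length error path. A userspace sketch of the same shape — the one-byte length prefix and `MAX_SECRET_LEN` are simplifications of the real wire format and of `CEPH_MAX_CON_SECRET_LEN`:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_SECRET_LEN 16   /* stand-in for CEPH_MAX_CON_SECRET_LEN */

/* Best-effort memzero_explicit() substitute for userspace. */
static void wipe(void *p, size_t n)
{
    volatile unsigned char *v = p;
    while (n--)
        *v++ = 0;
}

/*
 * Copy a length-prefixed secret out of buf, zeroing the source bytes on
 * both the success and the too-big error path. Returns the secret
 * length, or -1 on error.
 */
static int decode_secret(unsigned char *buf, size_t buf_len,
                         unsigned char *out, size_t out_len)
{
    size_t len;

    if (buf_len < 1)
        return -1;
    len = buf[0];
    if (len > buf_len - 1)          /* truncated input: nothing to wipe */
        return -1;
    if (len > MAX_SECRET_LEN || len > out_len) {
        wipe(buf + 1, len);         /* error path still wipes */
        return -1;
    }
    memcpy(out, buf + 1, len);
    wipe(buf + 1, len);
    return (int)len;
}
```

The point of the refactor is exactly this: key material must not linger in the decrypted scratch buffer on any exit path, which is also why the crypto.c hunk below switches to `kfree_sensitive()`.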
+2 -1
net/ceph/crypto.c
··· 96 96 key->len = ceph_decode_16(p); 97 97 ceph_decode_need(p, end, key->len, bad); 98 98 ret = set_secret(key, *p); 99 + memzero_explicit(*p, key->len); 99 100 *p += key->len; 100 101 return ret; 101 102 ··· 135 134 void ceph_crypto_key_destroy(struct ceph_crypto_key *key) 136 135 { 137 136 if (key) { 138 - kfree(key->key); 137 + kfree_sensitive(key->key); 139 138 key->key = NULL; 140 139 if (key->tfm) { 141 140 crypto_free_sync_skcipher(key->tfm);
+1 -1
net/ceph/messenger_v1.c
··· 1100 1100 if (ret < 0) 1101 1101 return ret; 1102 1102 1103 - BUG_ON(!con->in_msg ^ skip); 1103 + BUG_ON((!con->in_msg) ^ skip); 1104 1104 if (skip) { 1105 1105 /* skip this message */ 1106 1106 dout("alloc_msg said skip message\n");
+26 -19
net/ceph/messenger_v2.c
··· 689 689 } 690 690 691 691 static int setup_crypto(struct ceph_connection *con, 692 - u8 *session_key, int session_key_len, 693 - u8 *con_secret, int con_secret_len) 692 + const u8 *session_key, int session_key_len, 693 + const u8 *con_secret, int con_secret_len) 694 694 { 695 695 unsigned int noio_flag; 696 - void *p; 697 696 int ret; 698 697 699 698 dout("%s con %p con_mode %d session_key_len %d con_secret_len %d\n", ··· 750 751 return ret; 751 752 } 752 753 753 - p = con_secret; 754 - WARN_ON((unsigned long)p & crypto_aead_alignmask(con->v2.gcm_tfm)); 755 - ret = crypto_aead_setkey(con->v2.gcm_tfm, p, CEPH_GCM_KEY_LEN); 754 + WARN_ON((unsigned long)con_secret & 755 + crypto_aead_alignmask(con->v2.gcm_tfm)); 756 + ret = crypto_aead_setkey(con->v2.gcm_tfm, con_secret, CEPH_GCM_KEY_LEN); 756 757 if (ret) { 757 758 pr_err("failed to set gcm key: %d\n", ret); 758 759 return ret; 759 760 } 760 761 761 - p += CEPH_GCM_KEY_LEN; 762 762 WARN_ON(crypto_aead_ivsize(con->v2.gcm_tfm) != CEPH_GCM_IV_LEN); 763 763 ret = crypto_aead_setauthsize(con->v2.gcm_tfm, CEPH_GCM_TAG_LEN); 764 764 if (ret) { ··· 775 777 aead_request_set_callback(con->v2.gcm_req, CRYPTO_TFM_REQ_MAY_BACKLOG, 776 778 crypto_req_done, &con->v2.gcm_wait); 777 779 778 - memcpy(&con->v2.in_gcm_nonce, p, CEPH_GCM_IV_LEN); 779 - memcpy(&con->v2.out_gcm_nonce, p + CEPH_GCM_IV_LEN, CEPH_GCM_IV_LEN); 780 + memcpy(&con->v2.in_gcm_nonce, con_secret + CEPH_GCM_KEY_LEN, 781 + CEPH_GCM_IV_LEN); 782 + memcpy(&con->v2.out_gcm_nonce, 783 + con_secret + CEPH_GCM_KEY_LEN + CEPH_GCM_IV_LEN, 784 + CEPH_GCM_IV_LEN); 780 785 return 0; /* auth_x, secure mode */ 781 786 } 782 787 ··· 801 800 desc->tfm = con->v2.hmac_tfm; 802 801 ret = crypto_shash_init(desc); 803 802 if (ret) 804 - return ret; 803 + goto out; 805 804 806 805 for (i = 0; i < kvec_cnt; i++) { 807 806 WARN_ON((unsigned long)kvecs[i].iov_base & ··· 809 808 ret = crypto_shash_update(desc, kvecs[i].iov_base, 810 809 kvecs[i].iov_len); 811 810 if (ret) 812 - return 
ret; 811 + goto out; 813 812 } 814 813 815 814 ret = crypto_shash_final(desc, hmac); 816 - if (ret) 817 - return ret; 818 815 816 + out: 819 817 shash_desc_zero(desc); 820 - return 0; /* auth_x, both plain and secure modes */ 818 + return ret; /* auth_x, both plain and secure modes */ 821 819 } 822 820 823 821 static void gcm_inc_nonce(struct ceph_gcm_nonce *nonce) ··· 2072 2072 if (con->state != CEPH_CON_S_V2_AUTH) { 2073 2073 dout("%s con %p state changed to %d\n", __func__, con, 2074 2074 con->state); 2075 - return -EAGAIN; 2075 + ret = -EAGAIN; 2076 + goto out; 2076 2077 } 2077 2078 2078 2079 dout("%s con %p handle_auth_done ret %d\n", __func__, con, ret); 2079 2080 if (ret) 2080 - return ret; 2081 + goto out; 2081 2082 2082 2083 ret = setup_crypto(con, session_key, session_key_len, con_secret, 2083 2084 con_secret_len); 2084 2085 if (ret) 2085 - return ret; 2086 + goto out; 2086 2087 2087 2088 reset_out_kvecs(con); 2088 2089 ret = prepare_auth_signature(con); 2089 2090 if (ret) { 2090 2091 pr_err("prepare_auth_signature failed: %d\n", ret); 2091 - return ret; 2092 + goto out; 2092 2093 } 2093 2094 2094 2095 con->state = CEPH_CON_S_V2_AUTH_SIGNATURE; 2095 - return 0; 2096 + 2097 + out: 2098 + memzero_explicit(session_key_buf, sizeof(session_key_buf)); 2099 + memzero_explicit(con_secret_buf, sizeof(con_secret_buf)); 2100 + return ret; 2096 2101 2097 2102 bad: 2098 2103 pr_err("failed to decode auth_done\n"); ··· 3441 3436 } 3442 3437 3443 3438 con->v2.con_mode = CEPH_CON_MODE_UNKNOWN; 3439 + memzero_explicit(&con->v2.in_gcm_nonce, CEPH_GCM_IV_LEN); 3440 + memzero_explicit(&con->v2.out_gcm_nonce, CEPH_GCM_IV_LEN); 3444 3441 3445 3442 if (con->v2.hmac_tfm) { 3446 3443 crypto_free_shash(con->v2.hmac_tfm);
+7 -7
net/ceph/mon_client.c
··· 1433 1433 /* 1434 1434 * handle incoming message 1435 1435 */ 1436 - static void dispatch(struct ceph_connection *con, struct ceph_msg *msg) 1436 + static void mon_dispatch(struct ceph_connection *con, struct ceph_msg *msg) 1437 1437 { 1438 1438 struct ceph_mon_client *monc = con->private; 1439 1439 int type = le16_to_cpu(msg->hdr.type); ··· 1565 1565 * will come from the messenger workqueue, which is drained prior to 1566 1566 * mon_client destruction. 1567 1567 */ 1568 - static struct ceph_connection *con_get(struct ceph_connection *con) 1568 + static struct ceph_connection *mon_get_con(struct ceph_connection *con) 1569 1569 { 1570 1570 return con; 1571 1571 } 1572 1572 1573 - static void con_put(struct ceph_connection *con) 1573 + static void mon_put_con(struct ceph_connection *con) 1574 1574 { 1575 1575 } 1576 1576 1577 1577 static const struct ceph_connection_operations mon_con_ops = { 1578 - .get = con_get, 1579 - .put = con_put, 1580 - .dispatch = dispatch, 1581 - .fault = mon_fault, 1578 + .get = mon_get_con, 1579 + .put = mon_put_con, 1582 1580 .alloc_msg = mon_alloc_msg, 1581 + .dispatch = mon_dispatch, 1582 + .fault = mon_fault, 1583 1583 .get_auth_request = mon_get_auth_request, 1584 1584 .handle_auth_reply_more = mon_handle_auth_reply_more, 1585 1585 .handle_auth_done = mon_handle_auth_done,
+20 -20
net/ceph/osd_client.c
··· 5412 5412 /* 5413 5413 * handle incoming message 5414 5414 */ 5415 - static void dispatch(struct ceph_connection *con, struct ceph_msg *msg) 5415 + static void osd_dispatch(struct ceph_connection *con, struct ceph_msg *msg) 5416 5416 { 5417 5417 struct ceph_osd *osd = con->private; 5418 5418 struct ceph_osd_client *osdc = osd->o_osdc; ··· 5534 5534 return m; 5535 5535 } 5536 5536 5537 - static struct ceph_msg *alloc_msg(struct ceph_connection *con, 5538 - struct ceph_msg_header *hdr, 5539 - int *skip) 5537 + static struct ceph_msg *osd_alloc_msg(struct ceph_connection *con, 5538 + struct ceph_msg_header *hdr, 5539 + int *skip) 5540 5540 { 5541 5541 struct ceph_osd *osd = con->private; 5542 5542 int type = le16_to_cpu(hdr->type); ··· 5560 5560 /* 5561 5561 * Wrappers to refcount containing ceph_osd struct 5562 5562 */ 5563 - static struct ceph_connection *get_osd_con(struct ceph_connection *con) 5563 + static struct ceph_connection *osd_get_con(struct ceph_connection *con) 5564 5564 { 5565 5565 struct ceph_osd *osd = con->private; 5566 5566 if (get_osd(osd)) ··· 5568 5568 return NULL; 5569 5569 } 5570 5570 5571 - static void put_osd_con(struct ceph_connection *con) 5571 + static void osd_put_con(struct ceph_connection *con) 5572 5572 { 5573 5573 struct ceph_osd *osd = con->private; 5574 5574 put_osd(osd); ··· 5582 5582 * Note: returned pointer is the address of a structure that's 5583 5583 * managed separately. Caller must *not* attempt to free it. 
5584 5584 */ 5585 - static struct ceph_auth_handshake *get_authorizer(struct ceph_connection *con, 5586 - int *proto, int force_new) 5585 + static struct ceph_auth_handshake * 5586 + osd_get_authorizer(struct ceph_connection *con, int *proto, int force_new) 5587 5587 { 5588 5588 struct ceph_osd *o = con->private; 5589 5589 struct ceph_osd_client *osdc = o->o_osdc; ··· 5599 5599 return auth; 5600 5600 } 5601 5601 5602 - static int add_authorizer_challenge(struct ceph_connection *con, 5602 + static int osd_add_authorizer_challenge(struct ceph_connection *con, 5603 5603 void *challenge_buf, int challenge_buf_len) 5604 5604 { 5605 5605 struct ceph_osd *o = con->private; ··· 5610 5610 challenge_buf, challenge_buf_len); 5611 5611 } 5612 5612 5613 - static int verify_authorizer_reply(struct ceph_connection *con) 5613 + static int osd_verify_authorizer_reply(struct ceph_connection *con) 5614 5614 { 5615 5615 struct ceph_osd *o = con->private; 5616 5616 struct ceph_osd_client *osdc = o->o_osdc; ··· 5622 5622 NULL, NULL, NULL, NULL); 5623 5623 } 5624 5624 5625 - static int invalidate_authorizer(struct ceph_connection *con) 5625 + static int osd_invalidate_authorizer(struct ceph_connection *con) 5626 5626 { 5627 5627 struct ceph_osd *o = con->private; 5628 5628 struct ceph_osd_client *osdc = o->o_osdc; ··· 5731 5731 } 5732 5732 5733 5733 static const struct ceph_connection_operations osd_con_ops = { 5734 - .get = get_osd_con, 5735 - .put = put_osd_con, 5736 - .dispatch = dispatch, 5737 - .get_authorizer = get_authorizer, 5738 - .add_authorizer_challenge = add_authorizer_challenge, 5739 - .verify_authorizer_reply = verify_authorizer_reply, 5740 - .invalidate_authorizer = invalidate_authorizer, 5741 - .alloc_msg = alloc_msg, 5734 + .get = osd_get_con, 5735 + .put = osd_put_con, 5736 + .alloc_msg = osd_alloc_msg, 5737 + .dispatch = osd_dispatch, 5738 + .fault = osd_fault, 5742 5739 .reencode_message = osd_reencode_message, 5740 + .get_authorizer = osd_get_authorizer, 5741 + 
.add_authorizer_challenge = osd_add_authorizer_challenge, 5742 + .verify_authorizer_reply = osd_verify_authorizer_reply, 5743 + .invalidate_authorizer = osd_invalidate_authorizer, 5743 5744 .sign_message = osd_sign_message, 5744 5745 .check_message_signature = osd_check_message_signature, 5745 - .fault = osd_fault, 5746 5746 .get_auth_request = osd_get_auth_request, 5747 5747 .handle_auth_reply_more = osd_handle_auth_reply_more, 5748 5748 .handle_auth_done = osd_handle_auth_done,
+1 -1
net/decnet/dn_route.c
··· 1035 1035 fld.saddr = dnet_select_source(dev_out, 0, 1036 1036 RT_SCOPE_HOST); 1037 1037 if (!fld.daddr) 1038 - goto out; 1038 + goto done; 1039 1039 } 1040 1040 fld.flowidn_oif = LOOPBACK_IFINDEX; 1041 1041 res.type = RTN_LOCAL;
+8 -6
net/ipv4/tcp_input.c
··· 2859 2859 } else if (tcp_is_rack(sk)) { 2860 2860 u32 prior_retrans = tp->retrans_out; 2861 2861 2862 - tcp_rack_mark_lost(sk); 2862 + if (tcp_rack_mark_lost(sk)) 2863 + *ack_flag &= ~FLAG_SET_XMIT_TIMER; 2863 2864 if (prior_retrans > tp->retrans_out) 2864 2865 *ack_flag |= FLAG_LOST_RETRANS; 2865 2866 } ··· 3393 3392 } else { 3394 3393 unsigned long when = tcp_probe0_when(sk, TCP_RTO_MAX); 3395 3394 3396 - tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, 3397 - when, TCP_RTO_MAX); 3395 + when = tcp_clamp_probe0_to_user_timeout(sk, when); 3396 + tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, when, TCP_RTO_MAX); 3398 3397 } 3399 3398 } 3400 3399 ··· 3817 3816 3818 3817 if (tp->tlp_high_seq) 3819 3818 tcp_process_tlp_ack(sk, ack, flag); 3820 - /* If needed, reset TLP/RTO timer; RACK may later override this. */ 3821 - if (flag & FLAG_SET_XMIT_TIMER) 3822 - tcp_set_xmit_timer(sk); 3823 3819 3824 3820 if (tcp_ack_is_dubious(sk, flag)) { 3825 3821 if (!(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP))) { ··· 3828 3830 tcp_fastretrans_alert(sk, prior_snd_una, num_dupack, &flag, 3829 3831 &rexmit); 3830 3832 } 3833 + 3834 + /* If needed, reset TLP/RTO timer when RACK doesn't set. */ 3835 + if (flag & FLAG_SET_XMIT_TIMER) 3836 + tcp_set_xmit_timer(sk); 3831 3837 3832 3838 if ((flag & FLAG_FORWARD_PROGRESS) || !(flag & FLAG_NOT_DUP)) 3833 3839 sk_dst_confirm(sk);
+2
net/ipv4/tcp_output.c
··· 4099 4099 */ 4100 4100 timeout = TCP_RESOURCE_PROBE_INTERVAL; 4101 4101 } 4102 + 4103 + timeout = tcp_clamp_probe0_to_user_timeout(sk, timeout); 4102 4104 tcp_reset_xmit_timer(sk, ICSK_TIME_PROBE0, timeout, TCP_RTO_MAX); 4103 4105 } 4104 4106
+3 -2
net/ipv4/tcp_recovery.c
··· 96 96 } 97 97 } 98 98 99 - void tcp_rack_mark_lost(struct sock *sk) 99 + bool tcp_rack_mark_lost(struct sock *sk) 100 100 { 101 101 struct tcp_sock *tp = tcp_sk(sk); 102 102 u32 timeout; 103 103 104 104 if (!tp->rack.advanced) 105 - return; 105 + return false; 106 106 107 107 /* Reset the advanced flag to avoid unnecessary queue scanning */ 108 108 tp->rack.advanced = 0; ··· 112 112 inet_csk_reset_xmit_timer(sk, ICSK_TIME_REO_TIMEOUT, 113 113 timeout, inet_csk(sk)->icsk_rto); 114 114 } 115 + return !!timeout; 115 116 } 116 117 117 118 /* Record the most recently (re)sent time among the (s)acked packets
+18
net/ipv4/tcp_timer.c
··· 40 40 return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining)); 41 41 } 42 42 43 + u32 tcp_clamp_probe0_to_user_timeout(const struct sock *sk, u32 when) 44 + { 45 + struct inet_connection_sock *icsk = inet_csk(sk); 46 + u32 remaining; 47 + s32 elapsed; 48 + 49 + if (!icsk->icsk_user_timeout || !icsk->icsk_probes_tstamp) 50 + return when; 51 + 52 + elapsed = tcp_jiffies32 - icsk->icsk_probes_tstamp; 53 + if (unlikely(elapsed < 0)) 54 + elapsed = 0; 55 + remaining = msecs_to_jiffies(icsk->icsk_user_timeout) - elapsed; 56 + remaining = max_t(u32, remaining, TCP_TIMEOUT_MIN); 57 + 58 + return min_t(u32, remaining, when); 59 + } 60 + 43 61 /** 44 62 * tcp_write_err() - close socket and save error info 45 63 * @sk: The socket the error has appeared on.
+3 -3
net/key/af_key.c
··· 2902 2902 break; 2903 2903 if (!aalg->pfkey_supported) 2904 2904 continue; 2905 - if (aalg_tmpl_set(t, aalg) && aalg->available) 2905 + if (aalg_tmpl_set(t, aalg)) 2906 2906 sz += sizeof(struct sadb_comb); 2907 2907 } 2908 2908 return sz + sizeof(struct sadb_prop); ··· 2920 2920 if (!ealg->pfkey_supported) 2921 2921 continue; 2922 2922 2923 - if (!(ealg_tmpl_set(t, ealg) && ealg->available)) 2923 + if (!(ealg_tmpl_set(t, ealg))) 2924 2924 continue; 2925 2925 2926 2926 for (k = 1; ; k++) { ··· 2931 2931 if (!aalg->pfkey_supported) 2932 2932 continue; 2933 2933 2934 - if (aalg_tmpl_set(t, aalg) && aalg->available) 2934 + if (aalg_tmpl_set(t, aalg)) 2935 2935 sz += sizeof(struct sadb_comb); 2936 2936 } 2937 2937 }
+54 -16
net/lapb/lapb_iface.c
··· 122 122 123 123 timer_setup(&lapb->t1timer, NULL, 0); 124 124 timer_setup(&lapb->t2timer, NULL, 0); 125 + lapb->t1timer_stop = true; 126 + lapb->t2timer_stop = true; 125 127 126 128 lapb->t1 = LAPB_DEFAULT_T1; 127 129 lapb->t2 = LAPB_DEFAULT_T2; ··· 131 129 lapb->mode = LAPB_DEFAULT_MODE; 132 130 lapb->window = LAPB_DEFAULT_WINDOW; 133 131 lapb->state = LAPB_STATE_0; 132 + 133 + spin_lock_init(&lapb->lock); 134 134 refcount_set(&lapb->refcnt, 1); 135 135 out: 136 136 return lapb; ··· 182 178 goto out; 183 179 lapb_put(lapb); 184 180 181 + /* Wait for other refs to "lapb" to drop */ 182 + while (refcount_read(&lapb->refcnt) > 2) 183 + usleep_range(1, 10); 184 + 185 + spin_lock_bh(&lapb->lock); 186 + 185 187 lapb_stop_t1timer(lapb); 186 188 lapb_stop_t2timer(lapb); 187 189 188 190 lapb_clear_queues(lapb); 191 + 192 + spin_unlock_bh(&lapb->lock); 193 + 194 + /* Wait for running timers to stop */ 195 + del_timer_sync(&lapb->t1timer); 196 + del_timer_sync(&lapb->t2timer); 189 197 190 198 __lapb_remove_cb(lapb); 191 199 ··· 217 201 if (!lapb) 218 202 goto out; 219 203 204 + spin_lock_bh(&lapb->lock); 205 + 220 206 parms->t1 = lapb->t1 / HZ; 221 207 parms->t2 = lapb->t2 / HZ; 222 208 parms->n2 = lapb->n2; ··· 237 219 else 238 220 parms->t2timer = (lapb->t2timer.expires - jiffies) / HZ; 239 221 222 + spin_unlock_bh(&lapb->lock); 240 223 lapb_put(lapb); 241 224 rc = LAPB_OK; 242 225 out: ··· 252 233 253 234 if (!lapb) 254 235 goto out; 236 + 237 + spin_lock_bh(&lapb->lock); 255 238 256 239 rc = LAPB_INVALUE; 257 240 if (parms->t1 < 1 || parms->t2 < 1 || parms->n2 < 1) ··· 277 256 278 257 rc = LAPB_OK; 279 258 out_put: 259 + spin_unlock_bh(&lapb->lock); 280 260 lapb_put(lapb); 281 261 out: 282 262 return rc; ··· 291 269 292 270 if (!lapb) 293 271 goto out; 272 + 273 + spin_lock_bh(&lapb->lock); 294 274 295 275 rc = LAPB_OK; 296 276 if (lapb->state == LAPB_STATE_1) ··· 309 285 310 286 rc = LAPB_OK; 311 287 out_put: 288 + spin_unlock_bh(&lapb->lock); 312 289 
lapb_put(lapb); 313 290 out: 314 291 return rc; 315 292 } 316 293 EXPORT_SYMBOL(lapb_connect_request); 317 294 318 - int lapb_disconnect_request(struct net_device *dev) 295 + static int __lapb_disconnect_request(struct lapb_cb *lapb) 319 296 { 320 - struct lapb_cb *lapb = lapb_devtostruct(dev); 321 - int rc = LAPB_BADTOKEN; 322 - 323 - if (!lapb) 324 - goto out; 325 - 326 297 switch (lapb->state) { 327 298 case LAPB_STATE_0: 328 - rc = LAPB_NOTCONNECTED; 329 - goto out_put; 299 + return LAPB_NOTCONNECTED; 330 300 331 301 case LAPB_STATE_1: 332 302 lapb_dbg(1, "(%p) S1 TX DISC(1)\n", lapb->dev); ··· 328 310 lapb_send_control(lapb, LAPB_DISC, LAPB_POLLON, LAPB_COMMAND); 329 311 lapb->state = LAPB_STATE_0; 330 312 lapb_start_t1timer(lapb); 331 - rc = LAPB_NOTCONNECTED; 332 - goto out_put; 313 + return LAPB_NOTCONNECTED; 333 314 334 315 case LAPB_STATE_2: 335 - rc = LAPB_OK; 336 - goto out_put; 316 + return LAPB_OK; 337 317 } 338 318 339 319 lapb_clear_queues(lapb); ··· 344 328 lapb_dbg(1, "(%p) S3 DISC(1)\n", lapb->dev); 345 329 lapb_dbg(0, "(%p) S3 -> S2\n", lapb->dev); 346 330 347 - rc = LAPB_OK; 348 - out_put: 331 + return LAPB_OK; 332 + } 333 + 334 + int lapb_disconnect_request(struct net_device *dev) 335 + { 336 + struct lapb_cb *lapb = lapb_devtostruct(dev); 337 + int rc = LAPB_BADTOKEN; 338 + 339 + if (!lapb) 340 + goto out; 341 + 342 + spin_lock_bh(&lapb->lock); 343 + 344 + rc = __lapb_disconnect_request(lapb); 345 + 346 + spin_unlock_bh(&lapb->lock); 349 347 lapb_put(lapb); 350 348 out: 351 349 return rc; ··· 374 344 if (!lapb) 375 345 goto out; 376 346 347 + spin_lock_bh(&lapb->lock); 348 + 377 349 rc = LAPB_NOTCONNECTED; 378 350 if (lapb->state != LAPB_STATE_3 && lapb->state != LAPB_STATE_4) 379 351 goto out_put; ··· 384 352 lapb_kick(lapb); 385 353 rc = LAPB_OK; 386 354 out_put: 355 + spin_unlock_bh(&lapb->lock); 387 356 lapb_put(lapb); 388 357 out: 389 358 return rc; ··· 397 364 int rc = LAPB_BADTOKEN; 398 365 399 366 if (lapb) { 367 + 
spin_lock_bh(&lapb->lock); 400 368 lapb_data_input(lapb, skb); 369 + spin_unlock_bh(&lapb->lock); 401 370 lapb_put(lapb); 402 371 rc = LAPB_OK; 403 372 } ··· 470 435 if (!lapb) 471 436 return NOTIFY_DONE; 472 437 438 + spin_lock_bh(&lapb->lock); 439 + 473 440 switch (event) { 474 441 case NETDEV_UP: 475 442 lapb_dbg(0, "(%p) Interface up: %s\n", dev, dev->name); ··· 491 454 break; 492 455 case NETDEV_GOING_DOWN: 493 456 if (netif_carrier_ok(dev)) 494 - lapb_disconnect_request(dev); 457 + __lapb_disconnect_request(lapb); 495 458 break; 496 459 case NETDEV_DOWN: 497 460 lapb_dbg(0, "(%p) Interface down: %s\n", dev, dev->name); ··· 526 489 break; 527 490 } 528 491 492 + spin_unlock_bh(&lapb->lock); 529 493 lapb_put(lapb); 530 494 return NOTIFY_DONE; 531 495 }
+26 -4
net/lapb/lapb_timer.c
··· 40 40 lapb->t1timer.function = lapb_t1timer_expiry; 41 41 lapb->t1timer.expires = jiffies + lapb->t1; 42 42 43 + lapb->t1timer_stop = false; 43 44 add_timer(&lapb->t1timer); 44 45 } 45 46 ··· 51 50 lapb->t2timer.function = lapb_t2timer_expiry; 52 51 lapb->t2timer.expires = jiffies + lapb->t2; 53 52 53 + lapb->t2timer_stop = false; 54 54 add_timer(&lapb->t2timer); 55 55 } 56 56 57 57 void lapb_stop_t1timer(struct lapb_cb *lapb) 58 58 { 59 + lapb->t1timer_stop = true; 59 60 del_timer(&lapb->t1timer); 60 61 } 61 62 62 63 void lapb_stop_t2timer(struct lapb_cb *lapb) 63 64 { 65 + lapb->t2timer_stop = true; 64 66 del_timer(&lapb->t2timer); 65 67 } 66 68 ··· 76 72 { 77 73 struct lapb_cb *lapb = from_timer(lapb, t, t2timer); 78 74 75 + spin_lock_bh(&lapb->lock); 76 + if (timer_pending(&lapb->t2timer)) /* A new timer has been set up */ 77 + goto out; 78 + if (lapb->t2timer_stop) /* The timer has been stopped */ 79 + goto out; 80 + 79 81 if (lapb->condition & LAPB_ACK_PENDING_CONDITION) { 80 82 lapb->condition &= ~LAPB_ACK_PENDING_CONDITION; 81 83 lapb_timeout_response(lapb); 82 84 } 85 + 86 + out: 87 + spin_unlock_bh(&lapb->lock); 83 88 } 84 89 85 90 static void lapb_t1timer_expiry(struct timer_list *t) 86 91 { 87 92 struct lapb_cb *lapb = from_timer(lapb, t, t1timer); 93 + 94 + spin_lock_bh(&lapb->lock); 95 + if (timer_pending(&lapb->t1timer)) /* A new timer has been set up */ 96 + goto out; 97 + if (lapb->t1timer_stop) /* The timer has been stopped */ 98 + goto out; 88 99 89 100 switch (lapb->state) { 90 101 ··· 127 108 lapb->state = LAPB_STATE_0; 128 109 lapb_disconnect_indication(lapb, LAPB_TIMEDOUT); 129 110 lapb_dbg(0, "(%p) S1 -> S0\n", lapb->dev); 130 - return; 111 + goto out; 131 112 } else { 132 113 lapb->n2count++; 133 114 if (lapb->mode & LAPB_EXTENDED) { ··· 151 132 lapb->state = LAPB_STATE_0; 152 133 lapb_disconnect_confirmation(lapb, LAPB_TIMEDOUT); 153 134 lapb_dbg(0, "(%p) S2 -> S0\n", lapb->dev); 154 - return; 135 + goto out; 155 136 } else { 156 137 
lapb->n2count++; 157 138 lapb_dbg(1, "(%p) S2 TX DISC(1)\n", lapb->dev); ··· 169 150 lapb_stop_t2timer(lapb); 170 151 lapb_disconnect_indication(lapb, LAPB_TIMEDOUT); 171 152 lapb_dbg(0, "(%p) S3 -> S0\n", lapb->dev); 172 - return; 153 + goto out; 173 154 } else { 174 155 lapb->n2count++; 175 156 lapb_requeue_frames(lapb); ··· 186 167 lapb->state = LAPB_STATE_0; 187 168 lapb_disconnect_indication(lapb, LAPB_TIMEDOUT); 188 169 lapb_dbg(0, "(%p) S4 -> S0\n", lapb->dev); 189 - return; 170 + goto out; 190 171 } else { 191 172 lapb->n2count++; 192 173 lapb_transmit_frmr(lapb); ··· 195 176 } 196 177 197 178 lapb_start_t1timer(lapb); 179 + 180 + out: 181 + spin_unlock_bh(&lapb->lock); 198 182 }
+1
net/mac80211/ieee80211_i.h
··· 1077 1077 IEEE80211_QUEUE_STOP_REASON_FLUSH, 1078 1078 IEEE80211_QUEUE_STOP_REASON_TDLS_TEARDOWN, 1079 1079 IEEE80211_QUEUE_STOP_REASON_RESERVE_TID, 1080 + IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE, 1080 1081 1081 1082 IEEE80211_QUEUE_STOP_REASONS, 1082 1083 };
+6
net/mac80211/iface.c
··· 1633 1633 if (ret) 1634 1634 return ret; 1635 1635 1636 + ieee80211_stop_vif_queues(local, sdata, 1637 + IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE); 1638 + synchronize_net(); 1639 + 1636 1640 ieee80211_do_stop(sdata, false); 1637 1641 1638 1642 ieee80211_teardown_sdata(sdata); ··· 1659 1655 err = ieee80211_do_open(&sdata->wdev, false); 1660 1656 WARN(err, "type change: do_open returned %d", err); 1661 1657 1658 + ieee80211_wake_vif_queues(local, sdata, 1659 + IEEE80211_QUEUE_STOP_REASON_IFTYPE_CHANGE); 1662 1660 return ret; 1663 1661 } 1664 1662
+7 -3
net/mac80211/spectmgmt.c
··· 133 133 } 134 134 135 135 if (wide_bw_chansw_ie) { 136 + u8 new_seg1 = wide_bw_chansw_ie->new_center_freq_seg1; 136 137 struct ieee80211_vht_operation vht_oper = { 137 138 .chan_width = 138 139 wide_bw_chansw_ie->new_channel_width, 139 140 .center_freq_seg0_idx = 140 141 wide_bw_chansw_ie->new_center_freq_seg0, 141 - .center_freq_seg1_idx = 142 - wide_bw_chansw_ie->new_center_freq_seg1, 142 + .center_freq_seg1_idx = new_seg1, 143 143 /* .basic_mcs_set doesn't matter */ 144 144 }; 145 - struct ieee80211_ht_operation ht_oper = {}; 145 + struct ieee80211_ht_operation ht_oper = { 146 + .operation_mode = 147 + cpu_to_le16(new_seg1 << 148 + IEEE80211_HT_OP_MODE_CCFS2_SHIFT), 149 + }; 146 150 147 151 /* default, for the case of IEEE80211_VHT_CHANWIDTH_USE_HT, 148 152 * to the previously parsed chandef
+2 -3
net/netfilter/nf_tables_api.c
··· 5235 5235 kfree(elem); 5236 5236 } 5237 5237 5238 - static int nft_set_elem_expr_clone(const struct nft_ctx *ctx, 5239 - struct nft_set *set, 5240 - struct nft_expr *expr_array[]) 5238 + int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set, 5239 + struct nft_expr *expr_array[]) 5241 5240 { 5242 5241 struct nft_expr *expr; 5243 5242 int err, i, k;
+26 -15
net/netfilter/nft_dynset.c
··· 295 295 err = -EOPNOTSUPP; 296 296 goto err_expr_free; 297 297 } 298 + } else if (set->num_exprs > 0) { 299 + err = nft_set_elem_expr_clone(ctx, set, priv->expr_array); 300 + if (err < 0) 301 + return err; 302 + 303 + priv->num_exprs = set->num_exprs; 298 304 } 299 305 300 306 nft_set_ext_prepare(&priv->tmpl); ··· 312 306 nft_dynset_ext_add_expr(priv); 313 307 314 308 if (set->flags & NFT_SET_TIMEOUT) { 315 - if (timeout || set->timeout) 309 + if (timeout || set->timeout) { 310 + nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_TIMEOUT); 316 311 nft_set_ext_add(&priv->tmpl, NFT_SET_EXT_EXPIRATION); 312 + } 317 313 } 318 314 319 315 priv->timeout = timeout; ··· 384 376 nf_jiffies64_to_msecs(priv->timeout), 385 377 NFTA_DYNSET_PAD)) 386 378 goto nla_put_failure; 387 - if (priv->num_exprs == 1) { 388 - if (nft_expr_dump(skb, NFTA_DYNSET_EXPR, priv->expr_array[0])) 389 - goto nla_put_failure; 390 - } else if (priv->num_exprs > 1) { 391 - struct nlattr *nest; 392 - 393 - nest = nla_nest_start_noflag(skb, NFTA_DYNSET_EXPRESSIONS); 394 - if (!nest) 395 - goto nla_put_failure; 396 - 397 - for (i = 0; i < priv->num_exprs; i++) { 398 - if (nft_expr_dump(skb, NFTA_LIST_ELEM, 399 - priv->expr_array[i])) 379 + if (priv->set->num_exprs == 0) { 380 + if (priv->num_exprs == 1) { 381 + if (nft_expr_dump(skb, NFTA_DYNSET_EXPR, 382 + priv->expr_array[0])) 400 383 goto nla_put_failure; 384 + } else if (priv->num_exprs > 1) { 385 + struct nlattr *nest; 386 + 387 + nest = nla_nest_start_noflag(skb, NFTA_DYNSET_EXPRESSIONS); 388 + if (!nest) 389 + goto nla_put_failure; 390 + 391 + for (i = 0; i < priv->num_exprs; i++) { 392 + if (nft_expr_dump(skb, NFTA_LIST_ELEM, 393 + priv->expr_array[i])) 394 + goto nla_put_failure; 395 + } 396 + nla_nest_end(skb, nest); 401 397 } 402 - nla_nest_end(skb, nest); 403 398 } 404 399 if (nla_put_be32(skb, NFTA_DYNSET_FLAGS, htonl(flags))) 405 400 goto nla_put_failure;
+1
net/nfc/netlink.c
··· 852 852 853 853 if (!dev->polling) { 854 854 device_unlock(&dev->dev); 855 + nfc_put_device(dev); 855 856 return -EINVAL; 856 857 } 857 858
+1 -1
net/nfc/rawsock.c
··· 105 105 if (addr->target_idx > dev->target_next_idx - 1 || 106 106 addr->target_idx < dev->target_next_idx - dev->n_targets) { 107 107 rc = -EINVAL; 108 - goto error; 108 + goto put_dev; 109 109 } 110 110 111 111 rc = nfc_activate_target(dev, addr->target_idx, addr->nfc_protocol);
+1
net/rxrpc/call_accept.c
··· 197 197 tail = b->peer_backlog_tail; 198 198 while (CIRC_CNT(head, tail, size) > 0) { 199 199 struct rxrpc_peer *peer = b->peer_backlog[tail]; 200 + rxrpc_put_local(peer->local); 200 201 kfree(peer); 201 202 tail = (tail + 1) & (size - 1); 202 203 }
+12 -8
net/switchdev/switchdev.c
··· 388 388 extack = switchdev_notifier_info_to_extack(&port_obj_info->info); 389 389 390 390 if (check_cb(dev)) { 391 - /* This flag is only checked if the return value is success. */ 392 - port_obj_info->handled = true; 393 - return add_cb(dev, port_obj_info->obj, extack); 391 + err = add_cb(dev, port_obj_info->obj, extack); 392 + if (err != -EOPNOTSUPP) 393 + port_obj_info->handled = true; 394 + return err; 394 395 } 395 396 396 397 /* Switch ports might be stacked under e.g. a LAG. Ignore the ··· 442 441 int err = -EOPNOTSUPP; 443 442 444 443 if (check_cb(dev)) { 445 - /* This flag is only checked if the return value is success. */ 446 - port_obj_info->handled = true; 447 - return del_cb(dev, port_obj_info->obj); 444 + err = del_cb(dev, port_obj_info->obj); 445 + if (err != -EOPNOTSUPP) 446 + port_obj_info->handled = true; 447 + return err; 448 448 } 449 449 450 450 /* Switch ports might be stacked under e.g. a LAG. Ignore the ··· 495 493 int err = -EOPNOTSUPP; 496 494 497 495 if (check_cb(dev)) { 498 - port_attr_info->handled = true; 499 - return set_cb(dev, port_attr_info->attr); 496 + err = set_cb(dev, port_attr_info->attr); 497 + if (err != -EOPNOTSUPP) 498 + port_attr_info->handled = true; 499 + return err; 500 500 } 501 501 502 502 /* Switch ports might be stacked under e.g. a LAG. Ignore the
+3 -2
net/wireless/wext-core.c
··· 896 896 int call_commit_handler(struct net_device *dev) 897 897 { 898 898 #ifdef CONFIG_WIRELESS_EXT 899 - if ((netif_running(dev)) && 900 - (dev->wireless_handlers->standard[0] != NULL)) 899 + if (netif_running(dev) && 900 + dev->wireless_handlers && 901 + dev->wireless_handlers->standard[0]) 901 902 /* Call the commit handler on the driver */ 902 903 return dev->wireless_handlers->standard[0](dev, NULL, 903 904 NULL, NULL);
+1 -1
net/xfrm/xfrm_input.c
··· 660 660 /* only the first xfrm gets the encap type */ 661 661 encap_type = 0; 662 662 663 - if (async && x->repl->recheck(x, skb, seq)) { 663 + if (x->repl->recheck(x, skb, seq)) { 664 664 XFRM_INC_STATS(net, LINUX_MIB_XFRMINSTATESEQERROR); 665 665 goto drop_unlock; 666 666 }
+20 -10
net/xfrm/xfrm_policy.c
··· 793 793 const xfrm_address_t *b, 794 794 u8 prefixlen, u16 family) 795 795 { 796 + u32 ma, mb, mask; 796 797 unsigned int pdw, pbi; 797 798 int delta = 0; 798 799 799 800 switch (family) { 800 801 case AF_INET: 801 - if (sizeof(long) == 4 && prefixlen == 0) 802 - return ntohl(a->a4) - ntohl(b->a4); 803 - return (ntohl(a->a4) & ((~0UL << (32 - prefixlen)))) - 804 - (ntohl(b->a4) & ((~0UL << (32 - prefixlen)))); 802 + if (prefixlen == 0) 803 + return 0; 804 + mask = ~0U << (32 - prefixlen); 805 + ma = ntohl(a->a4) & mask; 806 + mb = ntohl(b->a4) & mask; 807 + if (ma < mb) 808 + delta = -1; 809 + else if (ma > mb) 810 + delta = 1; 811 + break; 805 812 case AF_INET6: 806 813 pdw = prefixlen >> 5; 807 814 pbi = prefixlen & 0x1f; ··· 819 812 return delta; 820 813 } 821 814 if (pbi) { 822 - u32 mask = ~0u << (32 - pbi); 823 - 824 - delta = (ntohl(a->a6[pdw]) & mask) - 825 - (ntohl(b->a6[pdw]) & mask); 815 + mask = ~0U << (32 - pbi); 816 + ma = ntohl(a->a6[pdw]) & mask; 817 + mb = ntohl(b->a6[pdw]) & mask; 818 + if (ma < mb) 819 + delta = -1; 820 + else if (ma > mb) 821 + delta = 1; 826 822 } 827 823 break; 828 824 default: ··· 3088 3078 xflo.flags = flags; 3089 3079 3090 3080 /* To accelerate a bit... */ 3091 - if ((dst_orig->flags & DST_NOXFRM) || 3092 - !net->xfrm.policy_count[XFRM_POLICY_OUT]) 3081 + if (!if_id && ((dst_orig->flags & DST_NOXFRM) || 3082 + !net->xfrm.policy_count[XFRM_POLICY_OUT])) 3093 3083 goto nopol; 3094 3084 3095 3085 xdst = xfrm_bundle_lookup(net, fl, family, dir, &xflo, if_id);
+2 -2
sound/core/pcm_native.c
··· 382 382 continue; 383 383 384 384 /* 385 - * The 'deps' array includes maximum three dependencies 386 - * to SNDRV_PCM_HW_PARAM_XXXs for this rule. The fourth 385 + * The 'deps' array includes maximum four dependencies 386 + * to SNDRV_PCM_HW_PARAM_XXXs for this rule. The fifth 387 387 * member of this array is a sentinel and should be 388 388 * negative value. 389 389 *
+2 -1
sound/core/seq/oss/seq_oss_synth.c
··· 611 611 612 612 if (info->is_midi) { 613 613 struct midi_info minf; 614 - snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf); 614 + if (snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf)) 615 + return -ENXIO; 615 616 inf->synth_type = SYNTH_TYPE_MIDI; 616 617 inf->synth_subtype = 0; 617 618 inf->nr_voices = 16;
+4
sound/hda/intel-dsp-config.c
··· 307 307 .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 308 308 .device = 0xa0c8, 309 309 }, 310 + { 311 + .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 312 + .device = 0x43c8, 313 + }, 310 314 #endif 311 315 312 316 /* Elkhart Lake */
+7 -17
sound/pci/hda/hda_codec.c
··· 2934 2934 snd_hdac_leave_pm(&codec->core); 2935 2935 } 2936 2936 2937 - static int hda_codec_suspend(struct device *dev) 2937 + static int hda_codec_runtime_suspend(struct device *dev) 2938 2938 { 2939 2939 struct hda_codec *codec = dev_to_hda_codec(dev); 2940 2940 unsigned int state; ··· 2953 2953 return 0; 2954 2954 } 2955 2955 2956 - static int hda_codec_resume(struct device *dev) 2956 + static int hda_codec_runtime_resume(struct device *dev) 2957 2957 { 2958 2958 struct hda_codec *codec = dev_to_hda_codec(dev); 2959 2959 ··· 2966 2966 hda_call_codec_resume(codec); 2967 2967 pm_runtime_mark_last_busy(dev); 2968 2968 return 0; 2969 - } 2970 - 2971 - static int hda_codec_runtime_suspend(struct device *dev) 2972 - { 2973 - return hda_codec_suspend(dev); 2974 - } 2975 - 2976 - static int hda_codec_runtime_resume(struct device *dev) 2977 - { 2978 - return hda_codec_resume(dev); 2979 2969 } 2980 2970 2981 2971 #endif /* CONFIG_PM */ ··· 2988 2998 static int hda_codec_pm_suspend(struct device *dev) 2989 2999 { 2990 3000 dev->power.power_state = PMSG_SUSPEND; 2991 - return hda_codec_suspend(dev); 3001 + return pm_runtime_force_suspend(dev); 2992 3002 } 2993 3003 2994 3004 static int hda_codec_pm_resume(struct device *dev) 2995 3005 { 2996 3006 dev->power.power_state = PMSG_RESUME; 2997 - return hda_codec_resume(dev); 3007 + return pm_runtime_force_resume(dev); 2998 3008 } 2999 3009 3000 3010 static int hda_codec_pm_freeze(struct device *dev) 3001 3011 { 3002 3012 dev->power.power_state = PMSG_FREEZE; 3003 - return hda_codec_suspend(dev); 3013 + return pm_runtime_force_suspend(dev); 3004 3014 } 3005 3015 3006 3016 static int hda_codec_pm_thaw(struct device *dev) 3007 3017 { 3008 3018 dev->power.power_state = PMSG_THAW; 3009 - return hda_codec_resume(dev); 3019 + return pm_runtime_force_resume(dev); 3010 3020 } 3011 3021 3012 3022 static int hda_codec_pm_restore(struct device *dev) 3013 3023 { 3014 3024 dev->power.power_state = PMSG_RESTORE; 3015 - return 
hda_codec_resume(dev); 3025 + return pm_runtime_force_resume(dev); 3016 3026 } 3017 3027 #endif /* CONFIG_PM_SLEEP */ 3018 3028
+6
sound/pci/hda/hda_intel.c
··· 2484 2484 /* CometLake-S */ 2485 2485 { PCI_DEVICE(0x8086, 0xa3f0), 2486 2486 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2487 + /* CometLake-R */ 2488 + { PCI_DEVICE(0x8086, 0xf0c8), 2489 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2487 2490 /* Icelake */ 2488 2491 { PCI_DEVICE(0x8086, 0x34c8), 2489 2492 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, ··· 2509 2506 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2510 2507 /* Alderlake-S */ 2511 2508 { PCI_DEVICE(0x8086, 0x7ad0), 2509 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2510 + /* Alderlake-P */ 2511 + { PCI_DEVICE(0x8086, 0x51c8), 2512 2512 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2513 2513 /* Elkhart Lake */ 2514 2514 { PCI_DEVICE(0x8086, 0x4b55),
+1
sound/pci/hda/patch_hdmi.c
··· 4346 4346 HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi), 4347 4347 HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI", patch_i915_tgl_hdmi), 4348 4348 HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI", patch_i915_tgl_hdmi), 4349 + HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi), 4349 4350 HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi), 4350 4351 HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi), 4351 4352 HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi),
+9
sound/pci/hda/patch_realtek.c
··· 6371 6371 ALC256_FIXUP_HP_HEADSET_MIC, 6372 6372 ALC236_FIXUP_DELL_AIO_HEADSET_MIC, 6373 6373 ALC282_FIXUP_ACER_DISABLE_LINEOUT, 6374 + ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST, 6374 6375 }; 6375 6376 6376 6377 static const struct hda_fixup alc269_fixups[] = { ··· 7809 7808 .chained = true, 7810 7809 .chain_id = ALC269_FIXUP_HEADSET_MODE 7811 7810 }, 7811 + [ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST] = { 7812 + .type = HDA_FIXUP_FUNC, 7813 + .v.func = alc269_fixup_limit_int_mic_boost, 7814 + .chained = true, 7815 + .chain_id = ALC255_FIXUP_ACER_MIC_NO_PRESENCE, 7816 + }, 7812 7817 }; 7813 7818 7814 7819 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 7833 7826 SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), 7834 7827 SND_PCI_QUIRK(0x1025, 0x1065, "Acer Aspire C20-820", ALC269VC_FIXUP_ACER_HEADSET_MIC), 7835 7828 SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK), 7829 + SND_PCI_QUIRK(0x1025, 0x1094, "Acer Aspire E5-575T", ALC255_FIXUP_ACER_LIMIT_INT_MIC_BOOST), 7836 7830 SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 7837 7831 SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 7838 7832 SND_PCI_QUIRK(0x1025, 0x1166, "Acer Veriton N4640G", ALC269_FIXUP_LIFEBOOK), ··· 8006 7998 SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC), 8007 7999 SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC), 8008 8000 SND_PCI_QUIRK(0x1043, 0x194e, "ASUS UX563FD", ALC294_FIXUP_ASUS_HPE), 8001 + SND_PCI_QUIRK(0x1043, 0x1982, "ASUS B1400CEPE", ALC256_FIXUP_ASUS_HPE), 8009 8002 SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE), 8010 8003 SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE), 8011 8004 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+2 -1
sound/pci/hda/patch_via.c
··· 113 113 spec->codec_type = VT1708S; 114 114 spec->gen.indep_hp = 1; 115 115 spec->gen.keep_eapd_on = 1; 116 + spec->gen.dac_min_mute = 1; 116 117 spec->gen.pcm_playback_hook = via_playback_pcm_hook; 117 118 spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO; 118 119 codec->power_save_node = 1; ··· 1043 1042 static const struct snd_pci_quirk vt2002p_fixups[] = { 1044 1043 SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75), 1045 1044 SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST), 1046 - SND_PCI_QUIRK(0x1558, 0x3501, "Clevo W35xSS_370SS", VIA_FIXUP_POWER_SAVE), 1045 + SND_PCI_QUIRK_VENDOR(0x1558, "Clevo", VIA_FIXUP_POWER_SAVE), 1047 1046 {} 1048 1047 }; 1049 1048
+16 -2
sound/soc/amd/renoir/rn-pci-acp3x.c
··· 165 165 166 166 static const struct dmi_system_id rn_acp_quirk_table[] = { 167 167 { 168 - /* Lenovo IdeaPad Flex 5 14ARE05, IdeaPad 5 15ARE05 */ 168 + /* Lenovo IdeaPad S340-14API */ 169 169 .matches = { 170 170 DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 171 - DMI_EXACT_MATCH(DMI_BOARD_NAME, "LNVNB161216"), 171 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81NB"), 172 + } 173 + }, 174 + { 175 + /* Lenovo IdeaPad Flex 5 14ARE05 */ 176 + .matches = { 177 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 178 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81X2"), 179 + } 180 + }, 181 + { 182 + /* Lenovo IdeaPad 5 15ARE05 */ 183 + .matches = { 184 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 185 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "81YQ"), 172 186 } 173 187 }, 174 188 {
+7 -15
sound/soc/codecs/ak4458.c
··· 595 595 .ops = &ak4458_dai_ops, 596 596 }; 597 597 598 - static void ak4458_power_off(struct ak4458_priv *ak4458) 598 + static void ak4458_reset(struct ak4458_priv *ak4458, bool active) 599 599 { 600 600 if (ak4458->reset_gpiod) { 601 - gpiod_set_value_cansleep(ak4458->reset_gpiod, 0); 602 - usleep_range(1000, 2000); 603 - } 604 - } 605 - 606 - static void ak4458_power_on(struct ak4458_priv *ak4458) 607 - { 608 - if (ak4458->reset_gpiod) { 609 - gpiod_set_value_cansleep(ak4458->reset_gpiod, 1); 601 + gpiod_set_value_cansleep(ak4458->reset_gpiod, active); 610 602 usleep_range(1000, 2000); 611 603 } 612 604 } ··· 612 620 if (ak4458->mute_gpiod) 613 621 gpiod_set_value_cansleep(ak4458->mute_gpiod, 1); 614 622 615 - ak4458_power_on(ak4458); 623 + ak4458_reset(ak4458, false); 616 624 617 625 ret = snd_soc_component_update_bits(component, AK4458_00_CONTROL1, 618 626 0x80, 0x80); /* ACKS bit = 1; 10000000 */ ··· 642 650 { 643 651 struct ak4458_priv *ak4458 = snd_soc_component_get_drvdata(component); 644 652 645 - ak4458_power_off(ak4458); 653 + ak4458_reset(ak4458, true); 646 654 } 647 655 648 656 #ifdef CONFIG_PM ··· 652 660 653 661 regcache_cache_only(ak4458->regmap, true); 654 662 655 - ak4458_power_off(ak4458); 663 + ak4458_reset(ak4458, true); 656 664 657 665 if (ak4458->mute_gpiod) 658 666 gpiod_set_value_cansleep(ak4458->mute_gpiod, 0); ··· 677 685 if (ak4458->mute_gpiod) 678 686 gpiod_set_value_cansleep(ak4458->mute_gpiod, 1); 679 687 680 - ak4458_power_off(ak4458); 681 - ak4458_power_on(ak4458); 688 + ak4458_reset(ak4458, true); 689 + ak4458_reset(ak4458, false); 682 690 683 691 regcache_cache_only(ak4458->regmap, false); 684 692 regcache_mark_dirty(ak4458->regmap);
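The ak4458 hunk folds separate `power_on()`/`power_off()` helpers into a single `reset(priv, active)` function. The same refactor, sketched with a dummy GPIO setter (all names here are made up; in the driver, the gpiod layer applies the line's active-low polarity from the devicetree, so the helper just passes the logical state through):

```c
#include <assert.h>
#include <stdbool.h>

/* Dummy stand-in for gpiod_set_value_cansleep(), recording the last value. */
static int last_gpio_value = -1;

static void fake_gpiod_set(int value)
{
	last_gpio_value = value;
}

struct dac_priv {
	bool have_reset_gpio;
};

/*
 * One helper instead of power_on()/power_off(): "active" means the reset
 * state is asserted. Collapsing the pair removes duplicated sleep logic.
 */
static void dac_reset(struct dac_priv *p, bool active)
{
	if (p->have_reset_gpio)
		fake_gpiod_set(active ? 1 : 0);
}
```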
+1 -1
sound/soc/codecs/hdmi-codec.c
··· 717 717 void *data) 718 718 { 719 719 struct hdmi_codec_priv *hcp = snd_soc_component_get_drvdata(component); 720 - int ret = -EOPNOTSUPP; 720 + int ret = -ENOTSUPP; 721 721 722 722 if (hcp->hcd.ops->hook_plugged_cb) { 723 723 hcp->jack = jack;
+3
sound/soc/codecs/wm_adsp.c
··· 2031 2031 unsigned int alg) 2032 2032 { 2033 2033 struct wm_coeff_ctl *pos, *rslt = NULL; 2034 + const char *fw_txt = wm_adsp_fw_text[dsp->fw]; 2034 2035 2035 2036 list_for_each_entry(pos, &dsp->ctl_list, list) { 2036 2037 if (!pos->subname) 2037 2038 continue; 2038 2039 if (strncmp(pos->subname, name, pos->subname_len) == 0 && 2040 + strncmp(pos->fw_name, fw_txt, 2041 + SNDRV_CTL_ELEM_ID_NAME_MAXLEN) == 0 && 2039 2042 pos->alg_region.alg == alg && 2040 2043 pos->alg_region.type == type) { 2041 2044 rslt = pos;
+1 -1
sound/soc/fsl/imx-hdmi.c
··· 90 90 } 91 91 92 92 ret = snd_soc_component_set_jack(component, &data->hdmi_jack, NULL); 93 - if (ret && ret != -EOPNOTSUPP) { 93 + if (ret && ret != -ENOTSUPP) { 94 94 dev_err(card->dev, "Can't set HDMI Jack %d\n", ret); 95 95 return ret; 96 96 }
+10
sound/soc/intel/boards/sof_sdw.c
··· 71 71 .callback = sof_sdw_quirk_cb, 72 72 .matches = { 73 73 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), 74 + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E") 75 + }, 76 + .driver_data = (void *)(SOF_RT711_JD_SRC_JD2 | 77 + SOF_RT715_DAI_ID_FIX | 78 + SOF_SDW_FOUR_SPK), 79 + }, 80 + { 81 + .callback = sof_sdw_quirk_cb, 82 + .matches = { 83 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), 74 84 DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "09C6") 75 85 }, 76 86 .driver_data = (void *)(SOF_RT711_JD_SRC_JD2 |
+8 -7
sound/soc/intel/skylake/skl-topology.c
··· 3619 3619 3620 3620 list_for_each_entry(dobj, &component->dobj_list, list) { 3621 3621 struct snd_kcontrol *kcontrol = dobj->control.kcontrol; 3622 - struct soc_enum *se = 3623 - (struct soc_enum *)kcontrol->private_value; 3624 - char **texts = dobj->control.dtexts; 3622 + struct soc_enum *se; 3623 + char **texts; 3625 3624 char chan_text[4]; 3626 3625 3627 - if (dobj->type != SND_SOC_DOBJ_ENUM || 3628 - dobj->control.kcontrol->put != 3629 - skl_tplg_multi_config_set_dmic) 3626 + if (dobj->type != SND_SOC_DOBJ_ENUM || !kcontrol || 3627 + kcontrol->put != skl_tplg_multi_config_set_dmic) 3630 3628 continue; 3629 + 3630 + se = (struct soc_enum *)kcontrol->private_value; 3631 + texts = dobj->control.dtexts; 3631 3632 sprintf(chan_text, "c%d", mach->mach_params.dmic_num); 3632 3633 3633 3634 for (i = 0; i < se->items; i++) { 3634 - struct snd_ctl_elem_value val; 3635 + struct snd_ctl_elem_value val = {}; 3635 3636 3636 3637 if (strstr(texts[i], chan_text)) { 3637 3638 val.value.enumerated.item[0] = i;
+4 -1
sound/soc/mediatek/mt8183/mt8183-da7219-max98357.c
··· 532 532 .dpcm_playback = 1, 533 533 .ignore_suspend = 1, 534 534 .be_hw_params_fixup = mt8183_i2s_hw_params_fixup, 535 + .ignore = 1, 535 536 .init = mt8183_da7219_max98357_hdmi_init, 536 537 SND_SOC_DAILINK_REG(tdm), 537 538 }, ··· 755 754 } 756 755 } 757 756 758 - if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0) 757 + if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0) { 759 758 dai_link->codecs->of_node = hdmi_codec; 759 + dai_link->ignore = 0; 760 + } 760 761 761 762 if (!dai_link->platforms->name) 762 763 dai_link->platforms->of_node = platform_node;
+4 -1
sound/soc/mediatek/mt8183/mt8183-mt6358-ts3a227-max98357.c
··· 515 515 .ignore_suspend = 1, 516 516 .be_hw_params_fixup = mt8183_i2s_hw_params_fixup, 517 517 .ops = &mt8183_mt6358_tdm_ops, 518 + .ignore = 1, 518 519 .init = mt8183_mt6358_ts3a227_max98357_hdmi_init, 519 520 SND_SOC_DAILINK_REG(tdm), 520 521 }, ··· 662 661 SND_SOC_DAIFMT_CBM_CFM; 663 662 } 664 663 665 - if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0) 664 + if (hdmi_codec && strcmp(dai_link->name, "TDM") == 0) { 666 665 dai_link->codecs->of_node = hdmi_codec; 666 + dai_link->ignore = 0; 667 + } 667 668 668 669 if (!dai_link->platforms->name) 669 670 dai_link->platforms->of_node = platform_node;
+49
sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
··· 401 401 .startup = mt8192_mt6359_rt1015_rt5682_cap1_startup, 402 402 }; 403 403 404 + static int 405 + mt8192_mt6359_rt5682_startup(struct snd_pcm_substream *substream) 406 + { 407 + static const unsigned int channels[] = { 408 + 1, 2 409 + }; 410 + static const struct snd_pcm_hw_constraint_list constraints_channels = { 411 + .count = ARRAY_SIZE(channels), 412 + .list = channels, 413 + .mask = 0, 414 + }; 415 + static const unsigned int rates[] = { 416 + 48000 417 + }; 418 + static const struct snd_pcm_hw_constraint_list constraints_rates = { 419 + .count = ARRAY_SIZE(rates), 420 + .list = rates, 421 + .mask = 0, 422 + }; 423 + 424 + struct snd_soc_pcm_runtime *rtd = asoc_substream_to_rtd(substream); 425 + struct snd_pcm_runtime *runtime = substream->runtime; 426 + int ret; 427 + 428 + ret = snd_pcm_hw_constraint_list(runtime, 0, 429 + SNDRV_PCM_HW_PARAM_CHANNELS, 430 + &constraints_channels); 431 + if (ret < 0) { 432 + dev_err(rtd->dev, "hw_constraint_list channels failed\n"); 433 + return ret; 434 + } 435 + 436 + ret = snd_pcm_hw_constraint_list(runtime, 0, 437 + SNDRV_PCM_HW_PARAM_RATE, 438 + &constraints_rates); 439 + if (ret < 0) { 440 + dev_err(rtd->dev, "hw_constraint_list rate failed\n"); 441 + return ret; 442 + } 443 + 444 + return 0; 445 + } 446 + 447 + static const struct snd_soc_ops mt8192_mt6359_rt5682_ops = { 448 + .startup = mt8192_mt6359_rt5682_startup, 449 + }; 450 + 404 451 /* FE */ 405 452 SND_SOC_DAILINK_DEFS(playback1, 406 453 DAILINK_COMP_ARRAY(COMP_CPU("DL1")), ··· 695 648 SND_SOC_DPCM_TRIGGER_PRE}, 696 649 .dynamic = 1, 697 650 .dpcm_playback = 1, 651 + .ops = &mt8192_mt6359_rt5682_ops, 698 652 SND_SOC_DAILINK_REG(playback3), 699 653 }, 700 654 { ··· 769 721 SND_SOC_DPCM_TRIGGER_PRE}, 770 722 .dynamic = 1, 771 723 .dpcm_capture = 1, 724 + .ops = &mt8192_mt6359_rt5682_ops, 772 725 SND_SOC_DAILINK_REG(capture2), 773 726 }, 774 727 {
+22
sound/soc/qcom/lpass-cpu.c
··· 344 344 } 345 345 EXPORT_SYMBOL_GPL(asoc_qcom_lpass_cpu_dai_probe); 346 346 347 + static int asoc_qcom_of_xlate_dai_name(struct snd_soc_component *component, 348 + struct of_phandle_args *args, 349 + const char **dai_name) 350 + { 351 + struct lpass_data *drvdata = snd_soc_component_get_drvdata(component); 352 + struct lpass_variant *variant = drvdata->variant; 353 + int id = args->args[0]; 354 + int ret = -EINVAL; 355 + int i; 356 + 357 + for (i = 0; i < variant->num_dai; i++) { 358 + if (variant->dai_driver[i].id == id) { 359 + *dai_name = variant->dai_driver[i].name; 360 + ret = 0; 361 + break; 362 + } 363 + } 364 + 365 + return ret; 366 + } 367 + 347 368 static const struct snd_soc_component_driver lpass_cpu_comp_driver = { 348 369 .name = "lpass-cpu", 370 + .of_xlate_dai_name = asoc_qcom_of_xlate_dai_name, 349 371 }; 350 372 351 373 static bool lpass_cpu_regmap_writeable(struct device *dev, unsigned int reg)
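The new `of_xlate_dai_name` callback above resolves a phandle argument (a DAI id) to a driver name by scanning the variant's DAI table, since the ids are sparse. A minimal standalone sketch of that lookup (table contents are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative DAI descriptor table; ids are sparse, so a scan is needed. */
struct dai_driver {
	int id;
	const char *name;
};

static const struct dai_driver dai_tbl[] = {
	{ 0, "Primary MI2S" },
	{ 5, "Hdmi" },
};

/* Resolve a DAI id to its driver name; returns 0 on success,
 * -1 (standing in for -EINVAL) when no entry matches. */
static int xlate_dai_name(int id, const char **name)
{
	size_t i;

	for (i = 0; i < sizeof(dai_tbl) / sizeof(dai_tbl[0]); i++) {
		if (dai_tbl[i].id == id) {
			*name = dai_tbl[i].name;
			return 0;
		}
	}
	return -1;
}
```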
+1 -1
sound/soc/qcom/lpass-ipq806x.c
··· 131 131 .micmode = REG_FIELD_ID(0x0010, 4, 7, 5, 0x4), 132 132 .micmono = REG_FIELD_ID(0x0010, 3, 3, 5, 0x4), 133 133 .wssrc = REG_FIELD_ID(0x0010, 2, 2, 5, 0x4), 134 - .bitwidth = REG_FIELD_ID(0x0010, 0, 0, 5, 0x4), 134 + .bitwidth = REG_FIELD_ID(0x0010, 0, 1, 5, 0x4), 135 135 136 136 .rdma_dyncclk = REG_FIELD_ID(0x6000, 12, 12, 4, 0x1000), 137 137 .rdma_bursten = REG_FIELD_ID(0x6000, 11, 11, 4, 0x1000),
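The ipq806x fix widens the `bitwidth` register field from bit 0 alone to bits 0..1. Why the mask width matters can be shown with a small REG_FIELD-style extractor (helper name is made up):

```c
#include <assert.h>

/* Extract bits [lsb, msb] of a register value, REG_FIELD-style. */
static unsigned int field_get(unsigned int reg, unsigned int lsb,
			      unsigned int msb)
{
	unsigned int mask = ((1u << (msb - lsb + 1)) - 1) << lsb;

	return (reg & mask) >> lsb;
}
```

With a one-bit field, a two-bit encoded value is silently truncated; widening the field to `[0, 1]` preserves it.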
+1 -1
sound/soc/qcom/lpass-lpaif-reg.h
··· 133 133 #define LPAIF_WRDMAPERCNT_REG(v, chan) LPAIF_WRDMA_REG_ADDR(v, 0x14, (chan)) 134 134 135 135 #define LPAIF_INTFDMA_REG(v, chan, reg, dai_id) \ 136 - ((v->dai_driver[dai_id].id == LPASS_DP_RX) ? \ 136 + ((dai_id == LPASS_DP_RX) ? \ 137 137 LPAIF_HDMI_RDMA##reg##_REG(v, chan) : \ 138 138 LPAIF_RDMA##reg##_REG(v, chan)) 139 139
+12
sound/soc/qcom/lpass-platform.c
··· 257 257 break; 258 258 case MI2S_PRIMARY: 259 259 case MI2S_SECONDARY: 260 + case MI2S_TERTIARY: 261 + case MI2S_QUATERNARY: 262 + case MI2S_QUINARY: 260 263 ret = regmap_fields_write(dmactl->intf, id, 261 264 LPAIF_DMACTL_AUDINTF(dma_port)); 262 265 if (ret) { ··· 510 507 break; 511 508 case MI2S_PRIMARY: 512 509 case MI2S_SECONDARY: 510 + case MI2S_TERTIARY: 511 + case MI2S_QUATERNARY: 512 + case MI2S_QUINARY: 513 513 reg_irqclr = LPAIF_IRQCLEAR_REG(v, LPAIF_IRQ_PORT_HOST); 514 514 val_irqclr = LPAIF_IRQ_ALL(ch); 515 515 ··· 565 559 break; 566 560 case MI2S_PRIMARY: 567 561 case MI2S_SECONDARY: 562 + case MI2S_TERTIARY: 563 + case MI2S_QUATERNARY: 564 + case MI2S_QUINARY: 568 565 reg_irqen = LPAIF_IRQEN_REG(v, LPAIF_IRQ_PORT_HOST); 569 566 val_mask = LPAIF_IRQ_ALL(ch); 570 567 val_irqen = 0; ··· 664 655 break; 665 656 case MI2S_PRIMARY: 666 657 case MI2S_SECONDARY: 658 + case MI2S_TERTIARY: 659 + case MI2S_QUATERNARY: 660 + case MI2S_QUINARY: 667 661 map = drvdata->lpaif_map; 668 662 reg = LPAIF_IRQCLEAR_REG(v, LPAIF_IRQ_PORT_HOST); 669 663 val = 0;
+4 -7
sound/soc/qcom/lpass-sc7180.c
··· 20 20 #include "lpass.h" 21 21 22 22 static struct snd_soc_dai_driver sc7180_lpass_cpu_dai_driver[] = { 23 - [MI2S_PRIMARY] = { 23 + { 24 24 .id = MI2S_PRIMARY, 25 25 .name = "Primary MI2S", 26 26 .playback = { ··· 44 44 }, 45 45 .probe = &asoc_qcom_lpass_cpu_dai_probe, 46 46 .ops = &asoc_qcom_lpass_cpu_dai_ops, 47 - }, 48 - 49 - [MI2S_SECONDARY] = { 47 + }, { 50 48 .id = MI2S_SECONDARY, 51 49 .name = "Secondary MI2S", 52 50 .playback = { ··· 58 60 }, 59 61 .probe = &asoc_qcom_lpass_cpu_dai_probe, 60 62 .ops = &asoc_qcom_lpass_cpu_dai_ops, 61 - }, 62 - [LPASS_DP_RX] = { 63 + }, { 63 64 .id = LPASS_DP_RX, 64 65 .name = "Hdmi", 65 66 .playback = { ··· 171 174 .rdma_channels = 5, 172 175 .hdmi_rdma_reg_base = 0x64000, 173 176 .hdmi_rdma_reg_stride = 0x1000, 174 - .hdmi_rdma_channels = 4, 177 + .hdmi_rdma_channels = 3, 175 178 .dmactl_audif_start = 1, 176 179 .wrdma_reg_base = 0x18000, 177 180 .wrdma_reg_stride = 0x1000,
+1 -1
sound/soc/qcom/lpass.h
··· 12 12 #include <linux/compiler.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/regmap.h> 15 - #include <dt-bindings/sound/sc7180-lpass.h> 15 + #include <dt-bindings/sound/qcom,lpass.h> 16 16 #include "lpass-hdmi.h" 17 17 18 18 #define LPASS_AHBIX_CLOCK_FREQUENCY 131072000
+6 -5
sound/soc/soc-topology.c
··· 447 447 { 448 448 struct snd_soc_dai_driver *dai_drv = 449 449 container_of(dobj, struct snd_soc_dai_driver, dobj); 450 - struct snd_soc_dai *dai; 450 + struct snd_soc_dai *dai, *_dai; 451 451 452 452 if (pass != SOC_TPLG_PASS_PCM_DAI) 453 453 return; ··· 455 455 if (dobj->ops && dobj->ops->dai_unload) 456 456 dobj->ops->dai_unload(comp, dobj); 457 457 458 - for_each_component_dais(comp, dai) 458 + for_each_component_dais_safe(comp, dai, _dai) 459 459 if (dai->driver == dai_drv) 460 - dai->driver = NULL; 460 + snd_soc_unregister_dai(dai); 461 461 462 462 list_del(&dobj->list); 463 463 } ··· 902 902 return -EINVAL; 903 903 904 904 se->dobj.control.dvalues = devm_kcalloc(tplg->dev, le32_to_cpu(ec->items), 905 - sizeof(u32), 905 + sizeof(*se->dobj.control.dvalues), 906 906 GFP_KERNEL); 907 907 if (!se->dobj.control.dvalues) 908 908 return -ENOMEM; ··· 1742 1742 list_add(&dai_drv->dobj.list, &tplg->comp->dobj_list); 1743 1743 1744 1744 /* register the DAI to the component */ 1745 - dai = devm_snd_soc_register_dai(tplg->dev, tplg->comp, dai_drv, false); 1745 + dai = snd_soc_register_dai(tplg->comp, dai_drv, false); 1746 1746 if (!dai) 1747 1747 return -ENOMEM; 1748 1748 ··· 1750 1750 ret = snd_soc_dapm_new_dai_widgets(dapm, dai); 1751 1751 if (ret != 0) { 1752 1752 dev_err(dai->dev, "Failed to create DAI widgets %d\n", ret); 1753 + snd_soc_unregister_dai(dai); 1753 1754 return ret; 1754 1755 } 1755 1756
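The soc-topology hunk switches to `for_each_component_dais_safe()` because `snd_soc_unregister_dai()` frees the entry being visited. The hazard, sketched with a plain singly linked list (illustrative, not the kernel's `list.h`): the next pointer must be taken before the current node is freed, which is exactly what the `_safe` iterators do.

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	int val;
	struct node *next;
};

/* Delete all matching nodes; the next link is read (via *pp) before the
 * current node is freed, so the walk never touches freed memory. */
static void delete_matching(struct node **head, int val)
{
	struct node **pp = head;

	while (*pp) {
		struct node *cur = *pp;

		if (cur->val == val) {
			*pp = cur->next;	/* unlink first ... */
			free(cur);		/* ... then free */
		} else {
			pp = &cur->next;
		}
	}
}

static struct node *push(struct node *head, int val)
{
	struct node *n = malloc(sizeof(*n));

	n->val = val;
	n->next = head;
	return n;
}

static int list_len(const struct node *head)
{
	int n = 0;

	for (; head; head = head->next)
		n++;
	return n;
}
```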
+2 -1
sound/soc/sof/intel/Kconfig
··· 355 355 356 356 config SND_SOC_SOF_INTEL_SOUNDWIRE_LINK 357 357 bool "SOF support for SoundWire" 358 - depends on SOUNDWIRE && ACPI 358 + depends on ACPI 359 359 help 360 360 This adds support for SoundWire with Sound Open Firmware 361 361 for Intel(R) platforms. ··· 371 371 372 372 config SND_SOC_SOF_INTEL_SOUNDWIRE 373 373 tristate 374 + select SOUNDWIRE 374 375 select SOUNDWIRE_INTEL 375 376 help 376 377 This option is not user-selectable but automagically handled by
+17 -20
sound/soc/sof/intel/hda-codec.c
··· 63 63 } 64 64 65 65 /* enable controller wake up event for all codecs with jack connectors */ 66 - void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev) 66 + void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev, bool enable) 67 67 { 68 68 struct hda_bus *hbus = sof_to_hbus(sdev); 69 69 struct hdac_bus *bus = sof_to_bus(sdev); 70 70 struct hda_codec *codec; 71 71 unsigned int mask = 0; 72 72 73 - list_for_each_codec(codec, hbus) 74 - if (codec->jacktbl.used) 75 - mask |= BIT(codec->core.addr); 73 + if (enable) { 74 + list_for_each_codec(codec, hbus) 75 + if (codec->jacktbl.used) 76 + mask |= BIT(codec->core.addr); 77 + } 76 78 77 79 snd_hdac_chip_updatew(bus, WAKEEN, STATESTS_INT_MASK, mask); 78 80 } ··· 83 81 void hda_codec_jack_check(struct snd_sof_dev *sdev) 84 82 { 85 83 struct hda_bus *hbus = sof_to_hbus(sdev); 86 - struct hdac_bus *bus = sof_to_bus(sdev); 87 84 struct hda_codec *codec; 88 - 89 - /* disable controller Wake Up event*/ 90 - snd_hdac_chip_updatew(bus, WAKEEN, STATESTS_INT_MASK, 0); 91 85 92 86 list_for_each_codec(codec, hbus) 93 87 /* ··· 91 93 * has been recorded in STATESTS 92 94 */ 93 95 if (codec->jacktbl.used) 94 - schedule_delayed_work(&codec->jackpoll_work, 95 - codec->jackpoll_interval); 96 + pm_request_resume(&codec->core.dev); 96 97 } 97 98 #else 98 - void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev) {} 99 + void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev, bool enable) {} 99 100 void hda_codec_jack_check(struct snd_sof_dev *sdev) {} 100 101 #endif /* CONFIG_SND_SOC_SOF_HDA_AUDIO_CODEC */ 101 102 EXPORT_SYMBOL_NS(hda_codec_jack_wake_enable, SND_SOC_SOF_HDA_AUDIO_CODEC); ··· 153 156 if (!hdev->bus->audio_component) { 154 157 dev_dbg(sdev->dev, 155 158 "iDisp hw present but no driver\n"); 156 - goto error; 159 + ret = -ENOENT; 160 + goto out; 157 161 } 158 162 hda_priv->need_display_power = true; 159 163 } ··· 171 173 * other return codes without modification 172 174 */ 173 175 if (ret == 0) 174 - goto error; 176 + ret = -ENOENT; 175 177 } 176 178 177 - return ret; 178 - 179 - error: 180 - snd_hdac_ext_bus_device_exit(hdev); 181 - return -ENOENT; 182 - 179 + out: 180 + if (ret < 0) { 181 + snd_hdac_device_unregister(hdev); 182 + put_device(&hdev->dev); 183 + } 183 184 #else 184 185 hdev = devm_kzalloc(sdev->dev, sizeof(*hdev), GFP_KERNEL); 185 186 if (!hdev) 186 187 return -ENOMEM; 187 188 188 189 ret = snd_hdac_ext_bus_device_init(&hbus->core, address, hdev, HDA_DEV_ASOC); 190 + #endif 189 191 190 192 return ret; 191 - #endif 192 193 } 193 194 194 195 /* Codec initialization */
+6 -3
sound/soc/sof/intel/hda-dsp.c
··· 617 617 618 618 #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 619 619 if (runtime_suspend) 620 - hda_codec_jack_wake_enable(sdev); 620 + hda_codec_jack_wake_enable(sdev, true); 621 621 622 622 /* power down all hda link */ 623 623 snd_hdac_ext_bus_link_power_down_all(bus); ··· 683 683 684 684 #if IS_ENABLED(CONFIG_SND_SOC_SOF_HDA) 685 685 /* check jack status */ 686 - if (runtime_resume) 687 - hda_codec_jack_check(sdev); 686 + if (runtime_resume) { 687 + hda_codec_jack_wake_enable(sdev, false); 688 + if (sdev->system_suspend_target == SOF_SUSPEND_NONE) 689 + hda_codec_jack_check(sdev); 690 + } 688 691 689 692 /* turn off the links that were off before suspend */ 690 693 list_for_each_entry(hlink, &bus->hlink_list, list) {
+1 -1
sound/soc/sof/intel/hda.h
··· 650 650 */ 651 651 void hda_codec_probe_bus(struct snd_sof_dev *sdev, 652 652 bool hda_codec_use_common_hdmi); 653 - void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev); 653 + void hda_codec_jack_wake_enable(struct snd_sof_dev *sdev, bool enable); 654 654 void hda_codec_jack_check(struct snd_sof_dev *sdev); 655 655 656 656 #endif /* CONFIG_SND_SOC_SOF_HDA */
+6 -5
sound/soc/sof/sof-acpi-dev.c
··· 131 131 if (!id) 132 132 return -ENODEV; 133 133 134 - ret = snd_intel_acpi_dsp_driver_probe(dev, id->id); 135 - if (ret != SND_INTEL_DSP_DRIVER_ANY && ret != SND_INTEL_DSP_DRIVER_SOF) { 136 - dev_dbg(dev, "SOF ACPI driver not selected, aborting probe\n"); 137 - return -ENODEV; 134 + if (IS_REACHABLE(CONFIG_SND_INTEL_DSP_CONFIG)) { 135 + ret = snd_intel_acpi_dsp_driver_probe(dev, id->id); 136 + if (ret != SND_INTEL_DSP_DRIVER_ANY && ret != SND_INTEL_DSP_DRIVER_SOF) { 137 + dev_dbg(dev, "SOF ACPI driver not selected, aborting probe\n"); 138 + return -ENODEV; 139 + } 138 140 } 139 - 140 141 dev_dbg(dev, "ACPI DSP detected"); 141 142 142 143 sof_pdata = devm_kzalloc(dev, sizeof(*sof_pdata), GFP_KERNEL);
+6 -4
sound/soc/sof/sof-pci-dev.c
··· 344 344 const struct snd_sof_dsp_ops *ops; 345 345 int ret; 346 346 347 - ret = snd_intel_dsp_driver_probe(pci); 348 - if (ret != SND_INTEL_DSP_DRIVER_ANY && ret != SND_INTEL_DSP_DRIVER_SOF) { 349 - dev_dbg(&pci->dev, "SOF PCI driver not selected, aborting probe\n"); 350 - return -ENODEV; 347 + if (IS_REACHABLE(CONFIG_SND_INTEL_DSP_CONFIG)) { 348 + ret = snd_intel_dsp_driver_probe(pci); 349 + if (ret != SND_INTEL_DSP_DRIVER_ANY && ret != SND_INTEL_DSP_DRIVER_SOF) { 350 + dev_dbg(&pci->dev, "SOF PCI driver not selected, aborting probe\n"); 351 + return -ENODEV; 352 + } 351 353 } 352 354 dev_dbg(&pci->dev, "PCI DSP detected"); 353 355
+6 -15
sound/usb/clock.c
··· 485 485 const struct audioformat *fmt, int rate) 486 486 { 487 487 struct usb_device *dev = chip->dev; 488 - struct usb_host_interface *alts; 489 - unsigned int ep; 490 488 unsigned char data[3]; 491 489 int err, crate; 492 - 493 - alts = snd_usb_get_host_interface(chip, fmt->iface, fmt->altsetting); 494 - if (!alts) 495 - return -EINVAL; 496 - if (get_iface_desc(alts)->bNumEndpoints < 1) 497 - return -EINVAL; 498 - ep = get_endpoint(alts, 0)->bEndpointAddress; 499 490 500 491 /* if endpoint doesn't have sampling rate control, bail out */ 501 492 if (!(fmt->attributes & UAC_EP_CS_ATTR_SAMPLE_RATE)) ··· 497 506 data[2] = rate >> 16; 498 507 err = snd_usb_ctl_msg(dev, usb_sndctrlpipe(dev, 0), UAC_SET_CUR, 499 508 USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_OUT, 500 - UAC_EP_CS_ATTR_SAMPLE_RATE << 8, ep, 501 - data, sizeof(data)); 509 + UAC_EP_CS_ATTR_SAMPLE_RATE << 8, 510 + fmt->endpoint, data, sizeof(data)); 502 511 if (err < 0) { 503 512 dev_err(&dev->dev, "%d:%d: cannot set freq %d to ep %#x\n", 504 - fmt->iface, fmt->altsetting, rate, ep); 513 + fmt->iface, fmt->altsetting, rate, fmt->endpoint); 505 514 return err; 506 515 } 507 516 ··· 515 524 516 525 err = snd_usb_ctl_msg(dev, usb_rcvctrlpipe(dev, 0), UAC_GET_CUR, 517 526 USB_TYPE_CLASS | USB_RECIP_ENDPOINT | USB_DIR_IN, 518 - UAC_EP_CS_ATTR_SAMPLE_RATE << 8, ep, 519 - data, sizeof(data)); 527 + UAC_EP_CS_ATTR_SAMPLE_RATE << 8, 528 + fmt->endpoint, data, sizeof(data)); 520 529 if (err < 0) { 521 530 dev_err(&dev->dev, "%d:%d: cannot get freq at ep %#x\n", 522 - fmt->iface, fmt->altsetting, ep); 531 + fmt->iface, fmt->altsetting, fmt->endpoint); 523 532 chip->sample_rate_read_error++; 524 533 return 0; /* some devices don't support reading */ 525 534 }
+9
sound/usb/endpoint.c
··· 1252 1252 1253 1253 /* If the interface has been already set up, just set EP parameters */ 1254 1254 if (!ep->iface_ref->need_setup) { 1255 + /* sample rate setup of UAC1 is per endpoint, and we need 1256 + * to update at each EP configuration 1257 + */ 1258 + if (ep->cur_audiofmt->protocol == UAC_VERSION_1) { 1259 + err = snd_usb_init_sample_rate(chip, ep->cur_audiofmt, 1260 + ep->cur_rate); 1261 + if (err < 0) 1262 + goto unlock; 1263 + } 1255 1264 err = snd_usb_endpoint_set_params(chip, ep); 1256 1265 if (err < 0) 1257 1266 goto unlock;
+11
sound/usb/format.c
··· 466 466 unsigned int nr_rates; 467 467 int i, err; 468 468 469 + /* performing the rate verification may lead to unexpected USB bus 470 + * behavior afterwards by some unknown reason. Do this only for the 471 + * known devices. 472 + */ 473 + switch (USB_ID_VENDOR(chip->usb_id)) { 474 + case 0x07fd: /* MOTU */ 475 + break; 476 + default: 477 + return 0; /* don't perform the validation as default */ 478 + } 479 + 469 480 table = kcalloc(fp->nr_rates, sizeof(*table), GFP_KERNEL); 470 481 if (!table) 471 482 return -ENOMEM;
+9 -8
sound/usb/implicit.c
··· 175 175 ifnum, alts); 176 176 } 177 177 178 - /* Pioneer devices: playback and capture streams sharing the same iface/altset 178 + /* Playback and capture EPs on Pioneer devices share the same iface/altset, 179 + * but they don't seem working with the implicit fb mode well, hence we 180 + * just return as if the sync were already set up. 179 181 */ 180 - static int add_pioneer_implicit_fb(struct snd_usb_audio *chip, 181 - struct audioformat *fmt, 182 - struct usb_host_interface *alts) 182 + static int skip_pioneer_sync_ep(struct snd_usb_audio *chip, 183 + struct audioformat *fmt, 184 + struct usb_host_interface *alts) 183 185 { 184 186 struct usb_endpoint_descriptor *epd; 185 187 ··· 196 194 (epd->bmAttributes & USB_ENDPOINT_USAGE_MASK) != 197 195 USB_ENDPOINT_USAGE_IMPLICIT_FB)) 198 196 return 0; 199 - return add_implicit_fb_sync_ep(chip, fmt, epd->bEndpointAddress, 1, 200 - alts->desc.bInterfaceNumber, alts); 197 + return 1; /* don't handle with the implicit fb, just skip sync EP */ 201 198 } 202 199 203 200 static int __add_generic_implicit_fb(struct snd_usb_audio *chip, ··· 299 298 return 1; 300 299 } 301 300 302 - /* Pioneer devices implicit feedback with vendor spec class */ 301 + /* Pioneer devices with vendor spec class */ 303 302 if (attr == USB_ENDPOINT_SYNC_ASYNC && 304 303 alts->desc.bInterfaceClass == USB_CLASS_VENDOR_SPEC && 305 304 USB_ID_VENDOR(chip->usb_id) == 0x2b73 /* Pioneer */) { 306 - if (add_pioneer_implicit_fb(chip, fmt, alts)) 305 + if (skip_pioneer_sync_ep(chip, fmt, alts)) 307 306 return 1; 308 307 } 309 308
+111 -63
sound/usb/pcm.c
··· 663 663 check_fmts.bits[1] = (u32)(fp->formats >> 32); 664 664 snd_mask_intersect(&check_fmts, fmts); 665 665 if (snd_mask_empty(&check_fmts)) { 666 - hwc_debug(" > check: no supported format %d\n", fp->format); 666 + hwc_debug(" > check: no supported format 0x%llx\n", fp->formats); 667 667 return 0; 668 668 } 669 669 /* check the channels */ ··· 775 775 return apply_hw_params_minmax(it, rmin, rmax); 776 776 } 777 777 778 - static int hw_rule_format(struct snd_pcm_hw_params *params, 779 - struct snd_pcm_hw_rule *rule) 778 + static int apply_hw_params_format_bits(struct snd_mask *fmt, u64 fbits) 780 779 { 781 - struct snd_usb_substream *subs = rule->private; 782 - const struct audioformat *fp; 783 - struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); 784 - u64 fbits; 785 780 u32 oldbits[2]; 786 781 int changed; 787 - 788 - hwc_debug("hw_rule_format: %x:%x\n", fmt->bits[0], fmt->bits[1]); 789 - fbits = 0; 790 - list_for_each_entry(fp, &subs->fmt_list, list) { 791 - if (!hw_check_valid_format(subs, params, fp)) 792 - continue; 793 - fbits |= fp->formats; 794 - } 795 782 796 783 oldbits[0] = fmt->bits[0]; 797 784 oldbits[1] = fmt->bits[1]; ··· 791 804 changed = (oldbits[0] != fmt->bits[0] || oldbits[1] != fmt->bits[1]); 792 805 hwc_debug(" --> %x:%x (changed = %d)\n", fmt->bits[0], fmt->bits[1], changed); 793 806 return changed; 807 + } 808 + 809 + static int hw_rule_format(struct snd_pcm_hw_params *params, 810 + struct snd_pcm_hw_rule *rule) 811 + { 812 + struct snd_usb_substream *subs = rule->private; 813 + const struct audioformat *fp; 814 + struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); 815 + u64 fbits; 816 + 817 + hwc_debug("hw_rule_format: %x:%x\n", fmt->bits[0], fmt->bits[1]); 818 + fbits = 0; 819 + list_for_each_entry(fp, &subs->fmt_list, list) { 820 + if (!hw_check_valid_format(subs, params, fp)) 821 + continue; 822 + fbits |= fp->formats; 823 + } 824 + return apply_hw_params_format_bits(fmt, fbits); 794 825 } 
795 826 796 827 static int hw_rule_period_time(struct snd_pcm_hw_params *params, ··· 838 833 return apply_hw_params_minmax(it, pmin, UINT_MAX); 839 834 } 840 835 841 - /* apply PCM hw constraints from the concurrent sync EP */ 842 - static int apply_hw_constraint_from_sync(struct snd_pcm_runtime *runtime, 843 - struct snd_usb_substream *subs) 836 + /* get the EP or the sync EP for implicit fb when it's already set up */ 837 + static const struct snd_usb_endpoint * 838 + get_sync_ep_from_substream(struct snd_usb_substream *subs) 844 839 { 845 840 struct snd_usb_audio *chip = subs->stream->chip; 846 - struct snd_usb_endpoint *ep; 847 841 const struct audioformat *fp; 848 - int err; 842 + const struct snd_usb_endpoint *ep; 849 843 850 844 list_for_each_entry(fp, &subs->fmt_list, list) { 851 845 ep = snd_usb_get_endpoint(chip, fp->endpoint); 852 846 if (ep && ep->cur_rate) 853 - goto found; 847 + return ep; 854 848 if (!fp->implicit_fb) 855 849 continue; 856 850 /* for the implicit fb, check the sync ep as well */ 857 851 ep = snd_usb_get_endpoint(chip, fp->sync_ep); 858 852 if (ep && ep->cur_rate) 859 - goto found; 853 + return ep; 860 854 } 861 - return 0; 855 + return NULL; 856 + } 862 857 863 - found: 864 - if (!find_format(&subs->fmt_list, ep->cur_format, ep->cur_rate, 865 - ep->cur_channels, false, NULL)) { 866 - usb_audio_dbg(chip, "EP 0x%x being used, but not applicable\n", 867 - ep->ep_num); 858 + /* additional hw constraints for implicit feedback mode */ 859 + static int hw_rule_format_implicit_fb(struct snd_pcm_hw_params *params, 860 + struct snd_pcm_hw_rule *rule) 861 + { 862 + struct snd_usb_substream *subs = rule->private; 863 + const struct snd_usb_endpoint *ep; 864 + struct snd_mask *fmt = hw_param_mask(params, SNDRV_PCM_HW_PARAM_FORMAT); 865 + 866 + ep = get_sync_ep_from_substream(subs); 867 + if (!ep) 868 868 return 0; 869 - } 870 869 871 - usb_audio_dbg(chip, "EP 0x%x being used, using fixed params:\n", 872 - ep->ep_num); 873 - usb_audio_dbg(chip, 
"rate=%d, period_size=%d, periods=%d\n", 874 - ep->cur_rate, ep->cur_period_frames, 875 - ep->cur_buffer_periods); 870 + hwc_debug("applying %s\n", __func__); 871 + return apply_hw_params_format_bits(fmt, pcm_format_to_bits(ep->cur_format)); 872 + } 876 873 877 - runtime->hw.formats = subs->formats; 878 - runtime->hw.rate_min = runtime->hw.rate_max = ep->cur_rate; 879 - runtime->hw.rates = SNDRV_PCM_RATE_KNOT; 880 - runtime->hw.periods_min = runtime->hw.periods_max = 881 - ep->cur_buffer_periods; 874 + static int hw_rule_rate_implicit_fb(struct snd_pcm_hw_params *params, 875 + struct snd_pcm_hw_rule *rule) 876 + { 877 + struct snd_usb_substream *subs = rule->private; 878 + const struct snd_usb_endpoint *ep; 879 + struct snd_interval *it; 882 880 883 - err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS, 884 - hw_rule_channels, subs, 885 - SNDRV_PCM_HW_PARAM_FORMAT, 886 - SNDRV_PCM_HW_PARAM_RATE, 887 - -1); 888 - if (err < 0) 889 - return err; 881 + ep = get_sync_ep_from_substream(subs); 882 + if (!ep) 883 + return 0; 890 884 891 - err = snd_pcm_hw_constraint_minmax(runtime, 892 - SNDRV_PCM_HW_PARAM_PERIOD_SIZE, 893 - ep->cur_period_frames, 894 - ep->cur_period_frames); 895 - if (err < 0) 896 - return err; 885 + hwc_debug("applying %s\n", __func__); 886 + it = hw_param_interval(params, SNDRV_PCM_HW_PARAM_RATE); 887 + return apply_hw_params_minmax(it, ep->cur_rate, ep->cur_rate); 888 + } 897 889 898 - return 1; /* notify the finding */ 890 + static int hw_rule_period_size_implicit_fb(struct snd_pcm_hw_params *params, 891 + struct snd_pcm_hw_rule *rule) 892 + { 893 + struct snd_usb_substream *subs = rule->private; 894 + const struct snd_usb_endpoint *ep; 895 + struct snd_interval *it; 896 + 897 + ep = get_sync_ep_from_substream(subs); 898 + if (!ep) 899 + return 0; 900 + 901 + hwc_debug("applying %s\n", __func__); 902 + it = hw_param_interval(params, SNDRV_PCM_HW_PARAM_PERIOD_SIZE); 903 + return apply_hw_params_minmax(it, ep->cur_period_frames, 904 + 
ep->cur_period_frames); 905 + } 906 + 907 + static int hw_rule_periods_implicit_fb(struct snd_pcm_hw_params *params, 908 + struct snd_pcm_hw_rule *rule) 909 + { 910 + struct snd_usb_substream *subs = rule->private; 911 + const struct snd_usb_endpoint *ep; 912 + struct snd_interval *it; 913 + 914 + ep = get_sync_ep_from_substream(subs); 915 + if (!ep) 916 + return 0; 917 + 918 + hwc_debug("applying %s\n", __func__); 919 + it = hw_param_interval(params, SNDRV_PCM_HW_PARAM_PERIODS); 920 + return apply_hw_params_minmax(it, ep->cur_buffer_periods, 921 + ep->cur_buffer_periods); 899 922 } 900 923 901 924 /* ··· 932 899 933 900 static int setup_hw_info(struct snd_pcm_runtime *runtime, struct snd_usb_substream *subs) 934 901 { 935 - struct snd_usb_audio *chip = subs->stream->chip; 936 902 const struct audioformat *fp; 937 903 unsigned int pt, ptmin; 938 904 int param_period_time_if_needed = -1; 939 905 int err; 940 - 941 - mutex_lock(&chip->mutex); 942 - err = apply_hw_constraint_from_sync(runtime, subs); 943 - mutex_unlock(&chip->mutex); 944 - if (err < 0) 945 - return err; 946 - if (err > 0) /* found the matching? 
*/ 947 - goto add_extra_rules; 948 906 949 907 runtime->hw.formats = subs->formats; 950 908 ··· 981 957 982 958 err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, 983 959 hw_rule_rate, subs, 960 + SNDRV_PCM_HW_PARAM_RATE, 984 961 SNDRV_PCM_HW_PARAM_FORMAT, 985 962 SNDRV_PCM_HW_PARAM_CHANNELS, 986 963 param_period_time_if_needed, ··· 989 964 if (err < 0) 990 965 return err; 991 966 992 - add_extra_rules: 993 967 err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_CHANNELS, 994 968 hw_rule_channels, subs, 969 + SNDRV_PCM_HW_PARAM_CHANNELS, 995 970 SNDRV_PCM_HW_PARAM_FORMAT, 996 971 SNDRV_PCM_HW_PARAM_RATE, 997 972 param_period_time_if_needed, ··· 1000 975 return err; 1001 976 err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_FORMAT, 1002 977 hw_rule_format, subs, 978 + SNDRV_PCM_HW_PARAM_FORMAT, 1003 979 SNDRV_PCM_HW_PARAM_RATE, 1004 980 SNDRV_PCM_HW_PARAM_CHANNELS, 1005 981 param_period_time_if_needed, ··· 1018 992 if (err < 0) 1019 993 return err; 1020 994 } 995 + 996 + /* additional hw constraints for implicit fb */ 997 + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_FORMAT, 998 + hw_rule_format_implicit_fb, subs, 999 + SNDRV_PCM_HW_PARAM_FORMAT, -1); 1000 + if (err < 0) 1001 + return err; 1002 + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_RATE, 1003 + hw_rule_rate_implicit_fb, subs, 1004 + SNDRV_PCM_HW_PARAM_RATE, -1); 1005 + if (err < 0) 1006 + return err; 1007 + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, 1008 + hw_rule_period_size_implicit_fb, subs, 1009 + SNDRV_PCM_HW_PARAM_PERIOD_SIZE, -1); 1010 + if (err < 0) 1011 + return err; 1012 + err = snd_pcm_hw_rule_add(runtime, 0, SNDRV_PCM_HW_PARAM_PERIODS, 1013 + hw_rule_periods_implicit_fb, subs, 1014 + SNDRV_PCM_HW_PARAM_PERIODS, -1); 1015 + if (err < 0) 1016 + return err; 1021 1017 1022 1018 return 0; 1023 1019 }
-28
sound/usb/quirks.c
··· 1470 1470 subs->pkt_offset_adj = (emu_samplerate_id >= EMU_QUIRK_SR_176400HZ) ? 4 : 0;
1471 1471 }
1472 1472
1473 -
1474 - /*
1475 - * Pioneer DJ DJM-900NXS2
1476 - * Device needs to know the sample rate each time substream is started
1477 - */
1478 - static int pioneer_djm_set_format_quirk(struct snd_usb_substream *subs)
1479 - {
1480 - unsigned int cur_rate = subs->data_endpoint->cur_rate;
1481 - /* Convert sample rate value to little endian */
1482 - u8 sr[3];
1483 -
1484 - sr[0] = cur_rate & 0xff;
1485 - sr[1] = (cur_rate >> 8) & 0xff;
1486 - sr[2] = (cur_rate >> 16) & 0xff;
1487 -
1488 - /* Configure device */
1489 - usb_set_interface(subs->dev, 0, 1);
1490 - snd_usb_ctl_msg(subs->stream->chip->dev,
1491 - usb_rcvctrlpipe(subs->stream->chip->dev, 0),
1492 - 0x01, 0x22, 0x0100, 0x0082, &sr, 0x0003);
1493 -
1494 - return 0;
1495 - }
1496 -
1497 1473 void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
1498 1474 const struct audioformat *fmt)
1499 1475 {
··· 1479 1503 case USB_ID(0x041e, 0x3f0a): /* E-Mu Tracker Pre */
1480 1504 case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */
1481 1505 set_format_emu_quirk(subs, fmt);
1482 - break;
1483 - case USB_ID(0x2b73, 0x000a): /* Pioneer DJ DJM-900NXS2 */
1484 - case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
1485 - pioneer_djm_set_format_quirk(subs);
1486 1506 break;
1487 1507 case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
1488 1508 subs->stream_offset_adj = 2;
+2 -2
tools/gpio/gpio-event-mon.c
··· 107 107 ret = -EIO;
108 108 break;
109 109 }
110 - fprintf(stdout, "GPIO EVENT at %llu on line %d (%d|%d) ",
111 - event.timestamp_ns, event.offset, event.line_seqno,
110 + fprintf(stdout, "GPIO EVENT at %" PRIu64 " on line %d (%d|%d) ",
111 + (uint64_t)event.timestamp_ns, event.offset, event.line_seqno,
112 112 event.seqno);
113 113 switch (event.id) {
114 114 case GPIO_V2_LINE_EVENT_RISING_EDGE:
+3 -2
tools/gpio/gpio-watch.c
··· 10 10 #include <ctype.h>
11 11 #include <errno.h>
12 12 #include <fcntl.h>
13 + #include <inttypes.h>
13 14 #include <linux/gpio.h>
14 15 #include <poll.h>
15 16 #include <stdbool.h>
··· 87 86 return EXIT_FAILURE;
88 87 }
89 88
90 - printf("line %u: %s at %llu\n",
91 - chg.info.offset, event, chg.timestamp_ns);
89 + printf("line %u: %s at %" PRIu64 "\n",
90 + chg.info.offset, event, (uint64_t)chg.timestamp_ns);
92 91 }
93 92 }
94 93
+4 -13
tools/lib/perf/evlist.c
··· 367 367 return map;
368 368 }
369 369
370 - static void perf_evlist__set_sid_idx(struct perf_evlist *evlist,
371 - struct perf_evsel *evsel, int idx, int cpu,
372 - int thread)
370 + static void perf_evsel__set_sid_idx(struct perf_evsel *evsel, int idx, int cpu, int thread)
373 371 {
374 372 struct perf_sample_id *sid = SID(evsel, cpu, thread);
375 373
376 374 sid->idx = idx;
377 - if (evlist->cpus && cpu >= 0)
378 - sid->cpu = evlist->cpus->map[cpu];
379 - else
380 - sid->cpu = -1;
381 - if (!evsel->system_wide && evlist->threads && thread >= 0)
382 - sid->tid = perf_thread_map__pid(evlist->threads, thread);
383 - else
384 - sid->tid = -1;
375 + sid->cpu = perf_cpu_map__cpu(evsel->cpus, cpu);
376 + sid->tid = perf_thread_map__pid(evsel->threads, thread);
385 377 }
386 378
387 379 static struct perf_mmap*
··· 492 500 if (perf_evlist__id_add_fd(evlist, evsel, cpu, thread,
493 501 fd) < 0)
494 502 return -1;
495 - perf_evlist__set_sid_idx(evlist, evsel, idx, cpu,
496 - thread);
503 + perf_evsel__set_sid_idx(evsel, idx, cpu, thread);
497 504 }
498 505 }
499 506
+5 -9
tools/objtool/check.c
··· 2928 2928 warnings += ret;
2929 2929
2930 2930 out:
2931 - if (ret < 0) {
2932 - /*
2933 - * Fatal error. The binary is corrupt or otherwise broken in
2934 - * some way, or objtool itself is broken. Fail the kernel
2935 - * build.
2936 - */
2937 - return ret;
2938 - }
2939 -
2931 + /*
2932 + * For now, don't fail the kernel build on fatal warnings. These
2933 + * errors are still fairly common due to the growing matrix of
2934 + * supported toolchains and their recent pace of change.
2935 + */
2940 2936 return 0;
2941 2937 }
+12 -2
tools/objtool/elf.c
··· 380 380
381 381 symtab = find_section_by_name(elf, ".symtab");
382 382 if (!symtab) {
383 - WARN("missing symbol table");
384 - return -1;
383 + /*
384 + * A missing symbol table is actually possible if it's an empty
385 + * .o file. This can happen for thunk_64.o.
386 + */
387 + return 0;
385 388 }
386 389
387 390 symtab_shndx = find_section_by_name(elf, ".symtab_shndx");
··· 451 448 list_add(&sym->list, entry);
452 449 elf_hash_add(elf->symbol_hash, &sym->hash, sym->idx);
453 450 elf_hash_add(elf->symbol_name_hash, &sym->name_hash, str_hash(sym->name));
451 +
452 + /*
453 + * Don't store empty STT_NOTYPE symbols in the rbtree. They
454 + * can exist within a function, confusing the sorting.
455 + */
456 + if (!sym->len)
457 + rb_erase(&sym->node, &sym->sec->symbol_tree);
454 458 }
455 459
456 460 if (stats)
+17 -1
tools/perf/builtin-script.c
··· 186 186
187 187 enum {
188 188 OUTPUT_TYPE_SYNTH = PERF_TYPE_MAX,
189 + OUTPUT_TYPE_OTHER,
189 190 OUTPUT_TYPE_MAX
190 191 };
191 192
··· 284 283
285 284 .invalid_fields = PERF_OUTPUT_TRACE | PERF_OUTPUT_BPF_OUTPUT,
286 285 },
286 +
287 + [OUTPUT_TYPE_OTHER] = {
288 + .user_set = false,
289 +
290 + .fields = PERF_OUTPUT_COMM | PERF_OUTPUT_TID |
291 + PERF_OUTPUT_CPU | PERF_OUTPUT_TIME |
292 + PERF_OUTPUT_EVNAME | PERF_OUTPUT_IP |
293 + PERF_OUTPUT_SYM | PERF_OUTPUT_SYMOFFSET |
294 + PERF_OUTPUT_DSO | PERF_OUTPUT_PERIOD,
295 +
296 + .invalid_fields = PERF_OUTPUT_TRACE | PERF_OUTPUT_BPF_OUTPUT,
297 + },
287 298 };
288 300
289 301 struct evsel_script {
··· 356 343 case PERF_TYPE_SYNTH:
357 344 return OUTPUT_TYPE_SYNTH;
358 345 default:
359 - return type;
346 + if (type < PERF_TYPE_MAX)
347 + return type;
360 348 }
349 +
350 + return OUTPUT_TYPE_OTHER;
361 351 }
362 352
363 353 static inline unsigned int attr_type(unsigned int type)
+11 -5
tools/perf/util/metricgroup.c
··· 162 162 return false;
163 163 }
164 164
165 + static bool evsel_same_pmu(struct evsel *ev1, struct evsel *ev2)
166 + {
167 + if (!ev1->pmu_name || !ev2->pmu_name)
168 + return false;
169 +
170 + return !strcmp(ev1->pmu_name, ev2->pmu_name);
171 + }
172 +
165 173 /**
166 174 * Find a group of events in perf_evlist that correspond to those from a parsed
167 175 * metric expression. Note, as find_evsel_group is called in the same order as
··· 288 280 */
289 281 if (!has_constraint &&
290 282 ev->leader != metric_events[i]->leader &&
291 - !strcmp(ev->leader->pmu_name,
292 - metric_events[i]->leader->pmu_name))
283 + evsel_same_pmu(ev->leader, metric_events[i]->leader))
293 284 break;
294 285 if (!strcmp(metric_events[i]->name, ev->name)) {
295 286 set_bit(ev->idx, evlist_used);
··· 773 766 struct metricgroup_add_iter_data {
774 767 struct list_head *metric_list;
775 768 const char *metric;
776 - struct metric **m;
777 769 struct expr_ids *ids;
778 770 int *ret;
779 771 bool *has_match;
··· 1064 1058 void *data)
1065 1059 {
1066 1060 struct metricgroup_add_iter_data *d = data;
1061 + struct metric *m = NULL;
1067 1062 int ret;
1068 1063
1069 1064 if (!match_pe_metric(pe, d->metric))
1070 1065 return 0;
1071 1066
1072 - ret = add_metric(d->metric_list, pe, d->metric_no_group, d->m, NULL, d->ids);
1067 + ret = add_metric(d->metric_list, pe, d->metric_no_group, &m, NULL, d->ids);
1073 1068 if (ret)
1074 1069 return ret;
1075 1070
··· 1121 1114 .metric_list = &list,
1122 1115 .metric = metric,
1123 1116 .metric_no_group = metric_no_group,
1124 - .m = &m,
1125 1117 .ids = &ids,
1126 1118 .has_match = &has_match,
1127 1119 .ret = &ret,
+32
tools/power/x86/intel-speed-select/isst-config.c
··· 1249 1249 isst_ctdp_display_information_end(outf);
1250 1250 }
1251 1251
1252 + static void adjust_scaling_max_from_base_freq(int cpu);
1253 +
1252 1254 static void set_tdp_level_for_cpu(int cpu, void *arg1, void *arg2, void *arg3,
1253 1255 void *arg4)
1254 1256 {
··· 1269 1267 int pkg_id = get_physical_package_id(cpu);
1270 1268 int die_id = get_physical_die_id(cpu);
1271 1269
1270 + /* Wait for updated base frequencies */
1271 + usleep(2000);
1272 +
1272 1273 fprintf(stderr, "Option is set to online/offline\n");
1273 1274 ctdp_level.core_cpumask_size =
1274 1275 alloc_cpu_set(&ctdp_level.core_cpumask);
··· 1288 1283 if (CPU_ISSET_S(i, ctdp_level.core_cpumask_size, ctdp_level.core_cpumask)) {
1289 1284 fprintf(stderr, "online cpu %d\n", i);
1290 1285 set_cpu_online_offline(i, 1);
1286 + adjust_scaling_max_from_base_freq(i);
1291 1287 } else {
1292 1288 fprintf(stderr, "offline cpu %d\n", i);
1293 1289 set_cpu_online_offline(i, 0);
··· 1446 1440 return 0;
1447 1441 }
1448 1442
1443 + static int no_turbo(void)
1444 + {
1445 + return parse_int_file(0, "/sys/devices/system/cpu/intel_pstate/no_turbo");
1446 + }
1447 +
1448 + static void adjust_scaling_max_from_base_freq(int cpu)
1449 + {
1450 + int base_freq, scaling_max_freq;
1451 +
1452 + scaling_max_freq = parse_int_file(0, "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_max_freq", cpu);
1453 + base_freq = get_cpufreq_base_freq(cpu);
1454 + if (scaling_max_freq < base_freq || no_turbo())
1455 + set_cpufreq_scaling_min_max(cpu, 1, base_freq);
1456 + }
1457 +
1458 + static void adjust_scaling_min_from_base_freq(int cpu)
1459 + {
1460 + int base_freq, scaling_min_freq;
1461 +
1462 + scaling_min_freq = parse_int_file(0, "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_min_freq", cpu);
1463 + base_freq = get_cpufreq_base_freq(cpu);
1464 + if (scaling_min_freq < base_freq)
1465 + set_cpufreq_scaling_min_max(cpu, 0, base_freq);
1466 + }
1467 +
1449 1468 static int set_clx_pbf_cpufreq_scaling_min_max(int cpu)
{
1451 1470 struct isst_pkg_ctdp_level_info *ctdp_level;
··· 1568 1537 continue;
1569 1538
1570 1539 set_cpufreq_scaling_min_max_from_cpuinfo(i, 1, 0);
1540 + adjust_scaling_min_from_base_freq(i);
1571 1541 }
1572 1542 }
1573 1543
+11 -23
tools/testing/kunit/kunit.py
··· 43 43 BUILD_FAILURE = auto()
44 44 TEST_FAILURE = auto()
45 45
46 - def get_kernel_root_path():
47 - parts = sys.argv[0] if not __file__ else __file__
48 - parts = os.path.realpath(parts).split('tools/testing/kunit')
46 + def get_kernel_root_path() -> str:
47 + path = sys.argv[0] if not __file__ else __file__
48 + parts = os.path.realpath(path).split('tools/testing/kunit')
49 49 if len(parts) != 2:
50 50 sys.exit(1)
51 51 return parts[0]
··· 171 171 exec_result.elapsed_time))
172 172 return parse_result
173 173
174 - def add_common_opts(parser):
174 + def add_common_opts(parser) -> None:
175 175 parser.add_argument('--build_dir',
176 176 help='As in the make command, it specifies the build '
177 177 'directory.',
··· 183 183 help='Run all KUnit tests through allyesconfig',
184 184 action='store_true')
185 185
186 - def add_build_opts(parser):
186 + def add_build_opts(parser) -> None:
187 187 parser.add_argument('--jobs',
188 188 help='As in the make command, "Specifies the number of '
189 189 'jobs (commands) to run simultaneously."',
190 190 type=int, default=8, metavar='jobs')
191 191
192 - def add_exec_opts(parser):
192 + def add_exec_opts(parser) -> None:
193 193 parser.add_argument('--timeout',
194 194 help='maximum number of seconds to allow for all tests '
195 195 'to run. This does not include time taken to build the '
··· 198 198 default=300,
199 199 metavar='timeout')
200 200
201 - def add_parse_opts(parser):
201 + def add_parse_opts(parser) -> None:
202 202 parser.add_argument('--raw_output', help='don\'t format output from kernel',
203 203 action='store_true')
204 204 parser.add_argument('--json',
··· 256 256 os.mkdir(cli_args.build_dir)
257 257
258 258 if not linux:
259 - linux = kunit_kernel.LinuxSourceTree()
260 -
261 - linux.create_kunitconfig(cli_args.build_dir)
262 - linux.read_kunitconfig(cli_args.build_dir)
259 + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir)
263 260
264 261 request = KunitRequest(cli_args.raw_output,
265 262 cli_args.timeout,
··· 274 277 os.mkdir(cli_args.build_dir)
275 278
276 279 if not linux:
277 - linux = kunit_kernel.LinuxSourceTree()
278 -
279 - linux.create_kunitconfig(cli_args.build_dir)
280 - linux.read_kunitconfig(cli_args.build_dir)
280 + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir)
281 281
282 282 request = KunitConfigRequest(cli_args.build_dir,
283 283 cli_args.make_options)
··· 286 292 sys.exit(1)
287 293 elif cli_args.subcommand == 'build':
288 294 if not linux:
289 - linux = kunit_kernel.LinuxSourceTree()
290 -
291 - linux.create_kunitconfig(cli_args.build_dir)
292 - linux.read_kunitconfig(cli_args.build_dir)
295 + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir)
293 296
294 297 request = KunitBuildRequest(cli_args.jobs,
295 298 cli_args.build_dir,
··· 300 309 sys.exit(1)
301 310 elif cli_args.subcommand == 'exec':
302 311 if not linux:
303 - linux = kunit_kernel.LinuxSourceTree()
304 -
305 - linux.create_kunitconfig(cli_args.build_dir)
306 - linux.read_kunitconfig(cli_args.build_dir)
312 + linux = kunit_kernel.LinuxSourceTree(cli_args.build_dir)
307 313
308 314 exec_request = KunitExecRequest(cli_args.timeout,
309 315 cli_args.build_dir,
+4 -3
tools/testing/kunit/kunit_config.py
··· 8 8
9 9 import collections
10 10 import re
11 + from typing import List, Set
11 12
12 13 CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$'
13 14 CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'
··· 31 30 class Kconfig(object):
32 31 """Represents defconfig or .config specified using the Kconfig language."""
33 32
34 - def __init__(self):
35 - self._entries = []
33 + def __init__(self) -> None:
34 + self._entries = []  # type: List[KconfigEntry]
36 35
37 - def entries(self):
36 + def entries(self) -> Set[KconfigEntry]:
38 37 return set(self._entries)
39 38
40 39 def add_entry(self, entry: KconfigEntry) -> None:
+1 -1
tools/testing/kunit/kunit_json.py
··· 13 13
14 14 from kunit_parser import TestStatus
15 15
16 - def get_json_result(test_result, def_config, build_dir, json_path):
16 + def get_json_result(test_result, def_config, build_dir, json_path) -> str:
17 17 sub_groups = []
18 18
19 19 # Each test suite is mapped to a KernelCI sub_group
+29 -27
tools/testing/kunit/kunit_kernel.py
··· 11 11 import os
12 12 import shutil
13 13 import signal
14 + from typing import Iterator
14 15
15 16 from contextlib import ExitStack
16 17
··· 40 39 class LinuxSourceTreeOperations(object):
41 40 """An abstraction over command line operations performed on a source tree."""
42 41
43 - def make_mrproper(self):
42 + def make_mrproper(self) -> None:
44 43 try:
45 44 subprocess.check_output(['make', 'mrproper'], stderr=subprocess.STDOUT)
46 45 except OSError as e:
··· 48 47 except subprocess.CalledProcessError as e:
49 48 raise ConfigError(e.output.decode())
50 49
51 - def make_olddefconfig(self, build_dir, make_options):
50 + def make_olddefconfig(self, build_dir, make_options) -> None:
52 51 command = ['make', 'ARCH=um', 'olddefconfig']
53 52 if make_options:
54 53 command.extend(make_options)
··· 61 60 except subprocess.CalledProcessError as e:
62 61 raise ConfigError(e.output.decode())
63 62
64 - def make_allyesconfig(self, build_dir, make_options):
63 + def make_allyesconfig(self, build_dir, make_options) -> None:
65 64 kunit_parser.print_with_timestamp(
66 65 'Enabling all CONFIGs for UML...')
67 66 command = ['make', 'ARCH=um', 'allyesconfig']
··· 83 82 kunit_parser.print_with_timestamp(
84 83 'Starting Kernel with all configs takes a few minutes...')
85 84
86 - def make(self, jobs, build_dir, make_options):
85 + def make(self, jobs, build_dir, make_options) -> None:
87 86 command = ['make', 'ARCH=um', '--jobs=' + str(jobs)]
88 87 if make_options:
89 88 command.extend(make_options)
··· 101 100 if stderr:  # likely only due to build warnings
102 101 print(stderr.decode())
103 102
104 - def linux_bin(self, params, timeout, build_dir):
103 + def linux_bin(self, params, timeout, build_dir) -> None:
105 104 """Runs the Linux UML binary. Must be named 'linux'."""
106 105 linux_bin = get_file_path(build_dir, 'linux')
107 106 outfile = get_outfile_path(build_dir)
··· 111 110 stderr=subprocess.STDOUT)
112 111 process.wait(timeout)
113 112
114 - def get_kconfig_path(build_dir):
113 + def get_kconfig_path(build_dir) -> str:
115 114 return get_file_path(build_dir, KCONFIG_PATH)
116 115
117 - def get_kunitconfig_path(build_dir):
116 + def get_kunitconfig_path(build_dir) -> str:
118 117 return get_file_path(build_dir, KUNITCONFIG_PATH)
119 118
120 - def get_outfile_path(build_dir):
119 + def get_outfile_path(build_dir) -> str:
121 120 return get_file_path(build_dir, OUTFILE_PATH)
122 121
123 122 class LinuxSourceTree(object):
124 123 """Represents a Linux kernel source tree with KUnit tests."""
125 124
126 - def __init__(self):
127 - self._ops = LinuxSourceTreeOperations()
125 + def __init__(self, build_dir: str, load_config=True, defconfig=DEFAULT_KUNITCONFIG_PATH) -> None:
128 126 signal.signal(signal.SIGINT, self.signal_handler)
129 127
130 - def clean(self):
128 + self._ops = LinuxSourceTreeOperations()
129 +
130 + if not load_config:
131 + return
132 +
133 + kunitconfig_path = get_kunitconfig_path(build_dir)
134 + if not os.path.exists(kunitconfig_path):
135 + shutil.copyfile(defconfig, kunitconfig_path)
136 +
137 + self._kconfig = kunit_config.Kconfig()
138 + self._kconfig.read_from_file(kunitconfig_path)
139 +
140 + def clean(self) -> bool:
131 141 try:
132 142 self._ops.make_mrproper()
133 143 except ConfigError as e:
··· 146 134 return False
147 135 return True
148 136
149 - def create_kunitconfig(self, build_dir, defconfig=DEFAULT_KUNITCONFIG_PATH):
150 - kunitconfig_path = get_kunitconfig_path(build_dir)
151 - if not os.path.exists(kunitconfig_path):
152 - shutil.copyfile(defconfig, kunitconfig_path)
153 -
154 - def read_kunitconfig(self, build_dir):
155 - kunitconfig_path = get_kunitconfig_path(build_dir)
156 - self._kconfig = kunit_config.Kconfig()
157 - self._kconfig.read_from_file(kunitconfig_path)
158 -
159 - def validate_config(self, build_dir):
137 + def validate_config(self, build_dir) -> bool:
160 138 kconfig_path = get_kconfig_path(build_dir)
161 139 validated_kconfig = kunit_config.Kconfig()
162 140 validated_kconfig.read_from_file(kconfig_path)
··· 160 158 return False
161 159 return True
162 160
163 - def build_config(self, build_dir, make_options):
161 + def build_config(self, build_dir, make_options) -> bool:
164 162 kconfig_path = get_kconfig_path(build_dir)
165 163 if build_dir and not os.path.exists(build_dir):
166 164 os.mkdir(build_dir)
··· 172 170 return False
173 171 return self.validate_config(build_dir)
174 172
175 - def build_reconfig(self, build_dir, make_options):
173 + def build_reconfig(self, build_dir, make_options) -> bool:
176 174 """Creates a new .config if it is not a subset of the .kunitconfig."""
177 175 kconfig_path = get_kconfig_path(build_dir)
178 176 if os.path.exists(kconfig_path):
··· 188 186 print('Generating .config ...')
189 187 return self.build_config(build_dir, make_options)
190 188
191 - def build_um_kernel(self, alltests, jobs, build_dir, make_options):
189 + def build_um_kernel(self, alltests, jobs, build_dir, make_options) -> bool:
192 190 try:
193 191 if alltests:
194 192 self._ops.make_allyesconfig(build_dir, make_options)
··· 199 197 return False
200 198 return self.validate_config(build_dir)
201 199
202 - def run_kernel(self, args=[], build_dir='', timeout=None):
200 + def run_kernel(self, args=[], build_dir='', timeout=None) -> Iterator[str]:
203 201 args.extend(['mem=1G', 'console=tty'])
204 202 self._ops.linux_bin(args, timeout, build_dir)
205 203 outfile = get_outfile_path(build_dir)
··· 208 206 for line in file:
209 207 yield line
210 208
211 - def signal_handler(self, sig, frame):
209 + def signal_handler(self, sig, frame) -> None:
212 210 logging.error('Build interruption occurred. Cleaning console.')
213 211 subprocess.call(['stty', 'sane'])
+40 -41
tools/testing/kunit/kunit_parser.py
··· 12 12 from datetime import datetime
13 13 from enum import Enum, auto
14 14 from functools import reduce
15 - from typing import List, Optional, Tuple
15 + from typing import Iterable, Iterator, List, Optional, Tuple
16 16
17 17 TestResult = namedtuple('TestResult', ['status','suites','log'])
18 18
19 19 class TestSuite(object):
20 - def __init__(self):
21 - self.status = None
22 - self.name = None
23 - self.cases = []
20 + def __init__(self) -> None:
21 + self.status = TestStatus.SUCCESS
22 + self.name = ''
23 + self.cases = []  # type: List[TestCase]
24 24
25 - def __str__(self):
26 - return 'TestSuite(' + self.status + ',' + self.name + ',' + str(self.cases) + ')'
25 + def __str__(self) -> str:
26 + return 'TestSuite(' + str(self.status) + ',' + self.name + ',' + str(self.cases) + ')'
27 27
28 - def __repr__(self):
28 + def __repr__(self) -> str:
29 29 return str(self)
30 30
31 31 class TestCase(object):
32 - def __init__(self):
33 - self.status = None
32 + def __init__(self) -> None:
33 + self.status = TestStatus.SUCCESS
34 34 self.name = ''
35 - self.log = []
35 + self.log = []  # type: List[str]
36 36
37 - def __str__(self):
38 - return 'TestCase(' + self.status + ',' + self.name + ',' + str(self.log) + ')'
37 + def __str__(self) -> str:
38 + return 'TestCase(' + str(self.status) + ',' + self.name + ',' + str(self.log) + ')'
39 39
40 - def __repr__(self):
40 + def __repr__(self) -> str:
41 41 return str(self)
42 42
43 43 class TestStatus(Enum):
··· 51 51 kunit_end_re = re.compile('(List of all partitions:|'
52 52 'Kernel panic - not syncing: VFS:)')
53 53
54 - def isolate_kunit_output(kernel_output):
54 + def isolate_kunit_output(kernel_output) -> Iterator[str]:
55 55 started = False
56 56 for line in kernel_output:
57 57 line = line.rstrip()  # line always has a trailing \n
··· 64 64 elif started:
65 65 yield line[prefix_len:] if prefix_len > 0 else line
66 66
67 - def raw_output(kernel_output):
67 + def raw_output(kernel_output) -> None:
68 68 for line in kernel_output:
69 69 print(line.rstrip())
70 70
··· 72 72
73 73 RESET = '\033[0;0m'
74 74
75 - def red(text):
75 + def red(text) -> str:
76 76 return '\033[1;31m' + text + RESET
77 77
78 - def yellow(text):
78 + def yellow(text) -> str:
79 79 return '\033[1;33m' + text + RESET
80 80
81 - def green(text):
81 + def green(text) -> str:
82 82 return '\033[1;32m' + text + RESET
83 83
84 - def print_with_timestamp(message):
84 + def print_with_timestamp(message) -> None:
85 85 print('[%s] %s' % (datetime.now().strftime('%H:%M:%S'), message))
86 86
87 - def format_suite_divider(message):
87 + def format_suite_divider(message) -> str:
88 88 return '======== ' + message + ' ========'
89 89
90 - def print_suite_divider(message):
90 + def print_suite_divider(message) -> None:
91 91 print_with_timestamp(DIVIDER)
92 92 print_with_timestamp(format_suite_divider(message))
93 93
94 - def print_log(log):
94 + def print_log(log) -> None:
95 95 for m in log:
96 96 print_with_timestamp(m)
97 97
98 98 TAP_ENTRIES = re.compile(r'^(TAP|[\s]*ok|[\s]*not ok|[\s]*[0-9]+\.\.[0-9]+|[\s]*#).*$')
99 99
100 - def consume_non_diagnositic(lines: List[str]) -> None:
100 + def consume_non_diagnostic(lines: List[str]) -> None:
101 101 while lines and not TAP_ENTRIES.match(lines[0]):
102 102 lines.pop(0)
103 103
104 - def save_non_diagnositic(lines: List[str], test_case: TestCase) -> None:
104 + def save_non_diagnostic(lines: List[str], test_case: TestCase) -> None:
105 105 while lines and not TAP_ENTRIES.match(lines[0]):
106 106 test_case.log.append(lines[0])
107 107 lines.pop(0)
··· 113 113 OK_NOT_OK_MODULE = re.compile(r'^(ok|not ok) ([0-9]+) - (.*)$')
114 114
115 115 def parse_ok_not_ok_test_case(lines: List[str], test_case: TestCase) -> bool:
116 - save_non_diagnositic(lines, test_case)
116 + save_non_diagnostic(lines, test_case)
117 117 if not lines:
118 118 test_case.status = TestStatus.TEST_CRASHED
119 119 return True
··· 139 139 DIAGNOSTIC_CRASH_MESSAGE = re.compile(r'^[\s]+# .*?: kunit test case crashed!$')
140 140
141 141 def parse_diagnostic(lines: List[str], test_case: TestCase) -> bool:
142 - save_non_diagnositic(lines, test_case)
142 + save_non_diagnostic(lines, test_case)
143 143 if not lines:
144 144 return False
145 145 line = lines[0]
··· 155 155
156 156 def parse_test_case(lines: List[str]) -> Optional[TestCase]:
157 157 test_case = TestCase()
158 - save_non_diagnositic(lines, test_case)
158 + save_non_diagnostic(lines, test_case)
159 159 while parse_diagnostic(lines, test_case):
160 160 pass
161 161 if parse_ok_not_ok_test_case(lines, test_case):
··· 166 166 SUBTEST_HEADER = re.compile(r'^[\s]+# Subtest: (.*)$')
167 167
168 168 def parse_subtest_header(lines: List[str]) -> Optional[str]:
169 - consume_non_diagnositic(lines)
169 + consume_non_diagnostic(lines)
170 170 if not lines:
171 171 return None
172 172 match = SUBTEST_HEADER.match(lines[0])
··· 179 179 SUBTEST_PLAN = re.compile(r'[\s]+[0-9]+\.\.([0-9]+)')
180 180
181 181 def parse_subtest_plan(lines: List[str]) -> Optional[int]:
182 - consume_non_diagnositic(lines)
182 + consume_non_diagnostic(lines)
183 183 match = SUBTEST_PLAN.match(lines[0])
184 184 if match:
185 185 lines.pop(0)
··· 202 202 def parse_ok_not_ok_test_suite(lines: List[str],
203 203 test_suite: TestSuite,
204 204 expected_suite_index: int) -> bool:
205 - consume_non_diagnositic(lines)
205 + consume_non_diagnostic(lines)
206 206 if not lines:
207 207 test_suite.status = TestStatus.TEST_CRASHED
208 208 return False
··· 224 224 else:
225 225 return False
226 226
227 - def bubble_up_errors(to_status, status_container_list) -> TestStatus:
228 - status_list = map(to_status, status_container_list)
229 - return reduce(max_status, status_list, TestStatus.SUCCESS)
227 + def bubble_up_errors(statuses: Iterable[TestStatus]) -> TestStatus:
228 + return reduce(max_status, statuses, TestStatus.SUCCESS)
230 229
231 230 def bubble_up_test_case_errors(test_suite: TestSuite) -> TestStatus:
232 - max_test_case_status = bubble_up_errors(lambda x: x.status, test_suite.cases)
231 + max_test_case_status = bubble_up_errors(x.status for x in test_suite.cases)
233 232 return max_status(max_test_case_status, test_suite.status)
234 233
235 234 def parse_test_suite(lines: List[str], expected_suite_index: int) -> Optional[TestSuite]:
236 235 if not lines:
237 236 return None
238 237 consume_non_diagnostic(lines)
239 238 test_suite = TestSuite()
240 239 test_suite.status = TestStatus.SUCCESS
241 240 name = parse_subtest_header(lines)
··· 263 264 TAP_HEADER = re.compile(r'^TAP version 14$')
264 265
265 266 def parse_tap_header(lines: List[str]) -> bool:
266 - consume_non_diagnositic(lines)
267 + consume_non_diagnostic(lines)
267 268 if TAP_HEADER.match(lines[0]):
268 269 lines.pop(0)
269 270 return True
··· 273 274 TEST_PLAN = re.compile(r'[0-9]+\.\.([0-9]+)')
274 275
275 276 def parse_test_plan(lines: List[str]) -> Optional[int]:
276 - consume_non_diagnositic(lines)
277 + consume_non_diagnostic(lines)
277 278 match = TEST_PLAN.match(lines[0])
278 279 if match:
279 280 lines.pop(0)
··· 281 282 else:
282 283 return None
283 284
284 - def bubble_up_suite_errors(test_suite_list: List[TestSuite]) -> TestStatus:
285 - return bubble_up_errors(lambda x: x.status, test_suite_list)
285 + def bubble_up_suite_errors(test_suites: Iterable[TestSuite]) -> TestStatus:
286 + return bubble_up_errors(x.status for x in test_suites)
286 287
287 288 def parse_test_result(lines: List[str]) -> TestResult:
288 - consume_non_diagnositic(lines)
289 + consume_non_diagnostic(lines)
289 290 if not lines or not parse_tap_header(lines):
290 291 return TestResult(TestStatus.NO_TESTS, [], lines)
291 292 expected_test_suite_num = parse_test_plan(lines)
+1 -1
tools/testing/selftests/net/forwarding/router_mpath_nh.sh
··· 203 203 t0_rp12=$(link_stats_tx_packets_get $rp12)
204 204 t0_rp13=$(link_stats_tx_packets_get $rp13)
205 205
206 - ip vrf exec vrf-h1 $MZ -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
206 + ip vrf exec vrf-h1 $MZ $h1 -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
207 207 -d 1msec -t udp "sp=1024,dp=0-32768"
208 208
209 209 t1_rp12=$(link_stats_tx_packets_get $rp12)
+1 -1
tools/testing/selftests/net/forwarding/router_multipath.sh
··· 178 178 t0_rp12=$(link_stats_tx_packets_get $rp12)
179 179 t0_rp13=$(link_stats_tx_packets_get $rp13)
180 180
181 - ip vrf exec vrf-h1 $MZ -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
181 + ip vrf exec vrf-h1 $MZ $h1 -q -p 64 -A 192.0.2.2 -B 198.51.100.2 \
182 182 -d 1msec -t udp "sp=1024,dp=0-32768"
183 183
184 184 t1_rp12=$(link_stats_tx_packets_get $rp12)
+44 -1
tools/testing/selftests/net/xfrm_policy.sh
··· 202 202 # 1: iptables -m policy rule count != 0
203 203 rval=$1
204 204 ip=$2
205 - lret=0
205 + local lret=0
206 206
207 207 ip netns exec ns1 ping -q -c 1 10.0.2.$ip > /dev/null
208 208
··· 282 282 ret=1
283 283 return 1
284 284 fi
285 +
286 + echo "PASS: $log"
287 + return 0
288 + }
289 +
290 + # insert non-overlapping policies in a random order and check that
291 + # all of them can be fetched using the traffic selectors.
292 + check_random_order()
293 + {
294 + local ns=$1
295 + local log=$2
296 +
297 + for i in $(seq 100); do
298 + ip -net $ns xfrm policy flush
299 + for j in $(seq 0 16 255 | sort -R); do
300 + ip -net $ns xfrm policy add dst $j.0.0.0/24 dir out priority 10 action allow
301 + done
302 + for j in $(seq 0 16 255); do
303 + if ! ip -net $ns xfrm policy get dst $j.0.0.0/24 dir out > /dev/null; then
304 + echo "FAIL: $log" 1>&2
305 + return 1
306 + fi
307 + done
308 + done
309 +
310 + for i in $(seq 100); do
311 + ip -net $ns xfrm policy flush
312 + for j in $(seq 0 16 255 | sort -R); do
313 + local addr=$(printf "e000:0000:%02x00::/56" $j)
314 + ip -net $ns xfrm policy add dst $addr dir out priority 10 action allow
315 + done
316 + for j in $(seq 0 16 255); do
317 + local addr=$(printf "e000:0000:%02x00::/56" $j)
318 + if ! ip -net $ns xfrm policy get dst $addr dir out > /dev/null; then
319 + echo "FAIL: $log" 1>&2
320 + return 1
321 + fi
322 + done
323 + done
324 +
325 + ip -net $ns xfrm policy flush
285 326
286 327 echo "PASS: $log"
287 328 return 0
··· 478 437 check_exceptions "exceptions and block policies after htresh change to normal"
479 438
480 439 check_hthresh_repeat "policies with repeated htresh change"
440 +
441 + check_random_order ns3 "policies inserted in random order"
481 442
482 443 for i in 1 2 3 4;do ip netns del ns$i;done
483 444
+4 -1
tools/testing/selftests/powerpc/alignment/alignment_handler.c
··· 443 443 LOAD_DFORM_TEST(ldu);
444 444 LOAD_XFORM_TEST(ldx);
445 445 LOAD_XFORM_TEST(ldux);
446 - LOAD_DFORM_TEST(lmw);
447 446 STORE_DFORM_TEST(stb);
448 447 STORE_XFORM_TEST(stbx);
449 448 STORE_DFORM_TEST(stbu);
··· 461 462 STORE_XFORM_TEST(stdx);
462 463 STORE_DFORM_TEST(stdu);
463 464 STORE_XFORM_TEST(stdux);
465 +
466 + #ifdef __BIG_ENDIAN__
467 + LOAD_DFORM_TEST(lmw);
464 468 STORE_DFORM_TEST(stmw);
469 + #endif
465 470
466 471 return rc;
467 472 }
+1 -1
tools/testing/selftests/powerpc/mm/pkey_exec_prot.c
··· 290 290
291 291 int main(void)
292 292 {
293 - test_harness(test, "pkey_exec_prot");
293 + return test_harness(test, "pkey_exec_prot");
294 294 }
+1 -1
tools/testing/selftests/powerpc/mm/pkey_siginfo.c
··· 329 329
330 330 int main(void)
331 331 {
332 - test_harness(test, "pkey_siginfo");
332 + return test_harness(test, "pkey_siginfo");
333 333 }
+1
virt/kvm/kvm_main.c
··· 1292 1292 return -EINVAL;
1293 1293 /* We can read the guest memory with __xxx_user() later on. */
1294 1294 if ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
1295 + (mem->userspace_addr != untagged_addr(mem->userspace_addr)) ||
1295 1296 !access_ok((void __user *)(unsigned long)mem->userspace_addr,
1296 1297 mem->memory_size))
1297 1298 return -EINVAL;