Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.2-rc7 into usb-next

We need the USB fixes in here, and this resolves a merge conflict with
the i915 driver as reported in linux-next

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4373 -2192
+2
.mailmap
···
 Douglas Gilbert <dougg@torque.net>
 Ed L. Cashin <ecashin@coraid.com>
 Erik Kaneda <erik.kaneda@intel.com> <erik.schmauss@intel.com>
+Eugen Hristev <eugen.hristev@collabora.com> <eugen.hristev@microchip.com>
 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
 Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> <ezequiel@collabora.com>
 Felipe W Damasio <felipewd@terra.com.br>
···
 Jisheng Zhang <jszhang@kernel.org> <Jisheng.Zhang@synaptics.com>
 Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>
+John Crispin <john@phrozen.org> <blogic@openwrt.org>
 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
 John Stultz <johnstul@us.ibm.com>
 Jordan Crouse <jordan@cosmicpenguin.net> <jcrouse@codeaurora.org>
+15
CREDITS
···
 D: APM driver (early port)
 D: DRM drivers (author of several)
 
+N: Veaceslav Falico
+E: vfalico@gmail.com
+D: Co-maintainer and co-author of the network bonding driver.
+
 N: János Farkas
 E: chexum@shadow.banki.hu
 D: romfs, various (mostly networking) fixes
···
 D: XF86_Mach8
 D: XF86_8514
 D: cfdisk (curses based disk partitioning program)
+
+N: Mat Martineau
+E: mat@martineau.name
+D: MPTCP subsystem co-maintainer 2020-2023
+D: Keyctl restricted keyring and Diffie-Hellman UAPI
+D: Bluetooth L2CAP ERTM mode and AMP
+S: USA
 
 N: John S. Marvin
 E: jsm@fc.hp.com
···
 S: B-1206 Jingmao Guojigongyu
 S: 16 Baliqiao Nanjie, Beijing 101100
 S: People's Repulic of China
+
+N: Vlad Yasevich
+E: vyasevich@gmail.com
+D: SCTP protocol maintainer.
 
 N: Aviad Yehezkel
 E: aviadye@nvidia.com
+6 -9
Documentation/admin-guide/cgroup-v2.rst
···
 	This is a simple interface to trigger memory reclaim in the
 	target cgroup.
 
-	This file accepts a string which contains the number of bytes to
-	reclaim.
+	This file accepts a single key, the number of bytes to reclaim.
+	No nested keys are currently supported.
 
 	Example::
 
 	  echo "1G" > memory.reclaim
+
+	The interface can be later extended with nested keys to
+	configure the reclaim behavior. For example, specify the
+	type of memory to reclaim from (anon, file, ..).
 
 	Please note that the kernel can over or under reclaim from
 	the target cgroup. If less bytes are reclaimed than the
···
 	the memory reclaim normally is not exercised in this case.
 	This means that the networking layer will not adapt based on
 	reclaim induced by memory.reclaim.
-
-	This file also allows the user to specify the nodes to reclaim from,
-	via the 'nodes=' key, for example::
-
-	  echo "1G nodes=0,1" > memory.reclaim
-
-	The above instructs the kernel to reclaim memory from nodes 0,1.
 
 memory.peak
 	A read-only single value file which exists on non-root
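As context for the `echo "1G" > memory.reclaim` example above, a user-space sketch of how a human-readable size string maps to a byte count. The kernel parses such values with its own memparse() helper; `parse_size()` here is a hypothetical illustration using binary (power-of-two) multiples.

```python
# Hypothetical helper: convert a "1G"-style string, as written to
# memory.reclaim, into a byte count. Binary multiples, like memparse().
SUFFIXES = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

def parse_size(value: str) -> int:
    """Convert e.g. '1G' or '512M' to bytes; a bare number is bytes."""
    value = value.strip()
    if value and value[-1].upper() in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1].upper()]
    return int(value)

print(parse_size("1G"))   # 1073741824
```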
+2 -2
Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
···
   compatible:
     items:
       - enum:
-          - renesas,i2c-r9a09g011  # RZ/V2M
+          - renesas,r9a09g011-i2c  # RZ/V2M
       - const: renesas,rzv2m-i2c
 
   reg:
···
     #include <dt-bindings/interrupt-controller/arm-gic.h>
 
     i2c0: i2c@a4030000 {
-        compatible = "renesas,i2c-r9a09g011", "renesas,rzv2m-i2c";
+        compatible = "renesas,r9a09g011-i2c", "renesas,rzv2m-i2c";
         reg = <0xa4030000 0x80>;
         interrupts = <GIC_SPI 232 IRQ_TYPE_EDGE_RISING>,
                      <GIC_SPI 236 IRQ_TYPE_EDGE_RISING>;
+19 -2
Documentation/devicetree/bindings/regulator/samsung,s2mps14.yaml
···
     additional information and example.
 
 patternProperties:
-  # 25 LDOs
-  "^LDO([1-9]|[1][0-9]|2[0-5])$":
+  # 25 LDOs, without LDO10-12
+  "^LDO([1-9]|1[3-9]|2[0-5])$":
     type: object
     $ref: regulator.yaml#
     unevaluatedProperties: false
     description:
       Properties for single LDO regulator.
+
+    required:
+      - regulator-name
+
+  "^LDO(1[0-2])$":
+    type: object
+    $ref: regulator.yaml#
+    unevaluatedProperties: false
+    description:
+      Properties for single LDO regulator.
+
+    properties:
+      samsung,ext-control-gpios:
+        maxItems: 1
+        description:
+          LDO10, LDO11 and LDO12 can be configured to external control over
+          GPIO.
 
     required:
       - regulator-name
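The s2mps14 hunk splits one LDO pattern into two so that only LDO10-LDO12 accept `samsung,ext-control-gpios`. A quick sanity check of the two regexes (regulator names constructed here purely for illustration) confirms they partition the 25 LDOs with no gap or overlap:

```python
import re

# The two patterns from the binding change above.
plain = re.compile(r"^LDO([1-9]|1[3-9]|2[0-5])$")      # LDO1-9, 13-25
ext_ctrl = re.compile(r"^LDO(1[0-2])$")                # LDO10-12

names = [f"LDO{n}" for n in range(1, 26)]
# Every LDO matches exactly one of the two patterns.
matched = [n for n in names if bool(plain.match(n)) != bool(ext_ctrl.match(n))]
assert len(matched) == 25
assert ext_ctrl.match("LDO11") and not plain.match("LDO11")
```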
+1 -1
Documentation/devicetree/bindings/riscv/cpus.yaml
···
     insensitive, letters in the riscv,isa string must be all
     lowercase to simplify parsing.
   $ref: "/schemas/types.yaml#/definitions/string"
-  pattern: ^rv(?:64|32)imaf?d?q?c?b?v?k?h?(?:_[hsxz](?:[a-z])+)*$
+  pattern: ^rv(?:64|32)imaf?d?q?c?b?k?j?p?v?h?(?:[hsxz](?:[a-z])+)?(?:_[hsxz](?:[a-z])+)*$
 
 # RISC-V requires 'timebase-frequency' in /cpus, so disallow it here
 timebase-frequency: false
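The updated riscv,isa pattern above adds the `j` and `p` single-letter extensions and permits one unprefixed multi-letter extension before the underscore-separated ones. A small check against sample ISA strings (chosen here for illustration, not taken from the binding):

```python
import re

# The new pattern from the hunk above, split for readability.
ISA = re.compile(
    r"^rv(?:64|32)imaf?d?q?c?b?k?j?p?v?h?"
    r"(?:[hsxz](?:[a-z])+)?(?:_[hsxz](?:[a-z])+)*$"
)

assert ISA.match("rv64imafdc")
assert ISA.match("rv32imac_zicsr_zifencei")
assert ISA.match("rv64imaczicsr")      # one unprefixed z-extension allowed
assert not ISA.match("RV64IMAFDC")     # uppercase rejected by design
```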
+2
Documentation/devicetree/bindings/rtc/qcom-pm8xxx-rtc.yaml
···
   description:
     Indicates that the setting of RTC time is allowed by the host CPU.
 
+  wakeup-source: true
+
 required:
   - compatible
   - reg
+1 -1
Documentation/devicetree/bindings/sound/everest,es8326.yaml
Documentation/networking/bridge.rst
···
 userspace tools.
 
 Documentation for Linux bridging is on:
-   http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge
+   https://wiki.linuxfoundation.org/networking/bridge
 
 The bridge-utilities are maintained at:
    git://git.kernel.org/pub/scm/linux/kernel/git/shemminger/bridge-utils.git
+1 -1
Documentation/networking/device_drivers/ethernet/intel/ice.rst
···
 ----
 This driver supports NAPI (Rx polling mode).
 For more information on NAPI, see
-https://www.linuxfoundation.org/collaborate/workgroups/networking/napi
+https://wiki.linuxfoundation.org/networking/napi
 
 
 MACVLAN
+3 -7
Documentation/networking/nf_conntrack-sysctl.rst
···
 	default 3
 
 nf_conntrack_sctp_timeout_established - INTEGER (seconds)
-	default 432000 (5 days)
+	default 210
+
+	Default is set to (hb_interval * path_max_retrans + rto_max)
 
 nf_conntrack_sctp_timeout_shutdown_sent - INTEGER (seconds)
 	default 0.3
···
 
 	This timeout is used to setup conntrack entry on secondary paths.
 	Default is set to hb_interval.
-
-nf_conntrack_sctp_timeout_heartbeat_acked - INTEGER (seconds)
-	default 210
-
-	This timeout is used to setup conntrack entry on secondary paths.
-	Default is set to (hb_interval * path_max_retrans + rto_max)
 
 nf_conntrack_udp_timeout - INTEGER (seconds)
 	default 30
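The new 210-second default for nf_conntrack_sctp_timeout_established follows from the formula in the hunk above, assuming the stock Linux SCTP parameters (hb_interval = 30 s, path_max_retrans = 5, rto_max = 60 s, the RFC 4960 defaults):

```python
# Worked instance of: default = hb_interval * path_max_retrans + rto_max
hb_interval = 30        # seconds, SCTP HB.interval default
path_max_retrans = 5    # Path.Max.Retrans default
rto_max = 60            # seconds, RTO.Max default

default_established = hb_interval * path_max_retrans + rto_max
assert default_established == 210
```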
+7 -3
Documentation/virt/kvm/api.rst
···
 state is final and avoid missing dirty pages from another ioctl ordered
 after the bitmap collection.
 
-NOTE: One example of using the backup bitmap is saving arm64 vgic/its
-tables through KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} command on
-KVM device "kvm-arm-vgic-its" when dirty ring is enabled.
+NOTE: Multiple examples of using the backup bitmap: (1) save vgic/its
+tables through command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} on
+KVM device "kvm-arm-vgic-its". (2) restore vgic/its tables through
+command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_RESTORE_TABLES} on KVM device
+"kvm-arm-vgic-its". VGICv3 LPI pending status is restored. (3) save
+vgic3 pending table through KVM_DEV_ARM_VGIC_{GRP_CTRL, SAVE_PENDING_TABLES}
+command on KVM device "kvm-arm-vgic-v3".
 
 8.30 KVM_CAP_XEN_HVM
 --------------------
+36
Documentation/x86/amd-memory-encryption.rst
···
 not enable SME, then Linux will not be able to activate memory encryption, even
 if configured to do so by default or the mem_encrypt=on command line parameter
 is specified.
+
+Secure Nested Paging (SNP)
+==========================
+
+SEV-SNP introduces new features (SEV_FEATURES[1:63]) which can be enabled
+by the hypervisor for security enhancements. Some of these features need
+guest side implementation to function correctly. The below table lists the
+expected guest behavior with various possible scenarios of guest/hypervisor
+SNP feature support.
+
++-----------------+---------------+---------------+------------------+
+| Feature Enabled | Guest needs   | Guest has     | Guest boot       |
+| by the HV       | implementation| implementation| behaviour        |
++=================+===============+===============+==================+
+| No              | No            | No            | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| No              | Yes           | No            | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| No              | Yes           | Yes           | Boot             |
+|                 |               |               |                  |
++-----------------+---------------+---------------+------------------+
+| Yes             | No            | No            | Boot with        |
+|                 |               |               | feature enabled  |
++-----------------+---------------+---------------+------------------+
+| Yes             | Yes           | No            | Graceful boot    |
+|                 |               |               | failure          |
++-----------------+---------------+---------------+------------------+
+| Yes             | Yes           | Yes           | Boot with        |
+|                 |               |               | feature enabled  |
++-----------------+---------------+---------------+------------------+
+
+More details in AMD64 APM[1] Vol 2: 15.34.10 SEV_STATUS MSR
+
+[1] https://www.amd.com/system/files/TechDocs/40332.pdf
+23 -15
MAINTAINERS
···
 F: drivers/dma/ptdma/
 
 AMD SEATTLE DEVICE TREE SUPPORT
-M: Brijesh Singh <brijeshkumar.singh@amd.com>
 M: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
 M: Tom Lendacky <thomas.lendacky@amd.com>
 S: Supported
···
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux.git
 X: drivers/media/i2c/
+F: arch/arm64/boot/dts/freescale/
+X: arch/arm64/boot/dts/freescale/fsl-*
+X: arch/arm64/boot/dts/freescale/qoriq-*
 N: imx
 N: mxs
···
 
 ARM/Mediatek SoC support
 M: Matthias Brugger <matthias.bgg@gmail.com>
+R: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
+L: linux-kernel@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 W: https://mtk.wiki.kernel.org/
-C: irc://chat.freenode.net/linux-mediatek
+C: irc://irc.libera.chat/linux-mediatek
+F: arch/arm/boot/dts/mt2*
 F: arch/arm/boot/dts/mt6*
 F: arch/arm/boot/dts/mt7*
 F: arch/arm/boot/dts/mt8*
···
 F: arch/arm64/boot/dts/mediatek/
 F: drivers/soc/mediatek/
 N: mtk
-N: mt[678]
+N: mt[2678]
 K: mediatek
 
 ARM/Mediatek USB3 PHY DRIVER
···
 
 BONDING DRIVER
 M: Jay Vosburgh <j.vosburgh@gmail.com>
-M: Veaceslav Falico <vfalico@gmail.com>
 M: Andy Gospodarek <andy@greyhouse.net>
 L: netdev@vger.kernel.org
 S: Supported
···
 F: drivers/firmware/efi/test/
 
 EFI VARIABLE FILESYSTEM
-M: Matthew Garrett <matthew.garrett@nebula.com>
 M: Jeremy Kerr <jk@ozlabs.org>
 M: Ard Biesheuvel <ardb@kernel.org>
 L: linux-efi@vger.kernel.org
···
 
 EXTRA BOOT CONFIG
 M: Masami Hiramatsu <mhiramat@kernel.org>
+L: linux-kernel@vger.kernel.org
+L: linux-trace-kernel@vger.kernel.org
+Q: https://patchwork.kernel.org/project/linux-trace-kernel/list/
 S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
 F: Documentation/admin-guide/bootconfig.rst
 F: fs/proc/bootconfig.c
 F: include/linux/bootconfig.h
···
 F: include/linux/fscache*.h
 
 FSCRYPT: FILE SYSTEM LEVEL ENCRYPTION SUPPORT
+M: Eric Biggers <ebiggers@kernel.org>
 M: Theodore Y. Ts'o <tytso@mit.edu>
 M: Jaegeuk Kim <jaegeuk@kernel.org>
-M: Eric Biggers <ebiggers@kernel.org>
 L: linux-fscrypt@vger.kernel.org
 S: Supported
 Q: https://patchwork.kernel.org/project/linux-fscrypt/list/
-T: git git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git
+T: git https://git.kernel.org/pub/scm/fs/fscrypt/linux.git
 F: Documentation/filesystems/fscrypt.rst
 F: fs/crypto/
-F: include/linux/fscrypt*.h
+F: include/linux/fscrypt.h
 F: include/uapi/linux/fscrypt.h
 
 FSI SUBSYSTEM
···
 FSVERITY: READ-ONLY FILE-BASED AUTHENTICITY PROTECTION
 M: Eric Biggers <ebiggers@kernel.org>
 M: Theodore Y. Ts'o <tytso@mit.edu>
-L: linux-fscrypt@vger.kernel.org
+L: fsverity@lists.linux.dev
 S: Supported
-Q: https://patchwork.kernel.org/project/linux-fscrypt/list/
-T: git git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt.git fsverity
+Q: https://patchwork.kernel.org/project/fsverity/list/
+T: git https://git.kernel.org/pub/scm/fs/fsverity/linux.git
 F: Documentation/filesystems/fsverity.rst
 F: fs/verity/
 F: include/linux/fsverity.h
···
 F: arch/*/*/*/*ftrace*
 F: arch/*/*/*ftrace*
 F: include/*/ftrace.h
+F: samples/ftrace
 
 FUNGIBLE ETHERNET DRIVERS
 M: Dimitris Michailidis <dmichail@fungible.com>
···
 
 NETWORKING [IPv4/IPv6]
 M: "David S. Miller" <davem@davemloft.net>
-M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
 M: David Ahern <dsahern@kernel.org>
 L: netdev@vger.kernel.org
 S: Maintained
···
 F: net/netlabel/
 
 NETWORKING [MPTCP]
-M: Mat Martineau <mathew.j.martineau@linux.intel.com>
 M: Matthieu Baerts <matthieu.baerts@tessares.net>
 L: netdev@vger.kernel.org
 L: mptcp@lists.linux.dev
···
 M: Jonas Bonn <jonas@southpole.se>
 M: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
 M: Stafford Horne <shorne@gmail.com>
-L: openrisc@lists.librecores.org
+L: linux-openrisc@vger.kernel.org
 S: Maintained
 W: http://openrisc.io
 T: git https://github.com/openrisc/linux.git
···
 L: linux-riscv@lists.infradead.org
 S: Supported
 Q: https://patchwork.kernel.org/project/linux-riscv/list/
+C: irc://irc.libera.chat/riscv
 P: Documentation/riscv/patch-acceptance.rst
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git
 F: arch/riscv/
···
 F: include/target/
 
 SCTP PROTOCOL
-M: Vlad Yasevich <vyasevich@gmail.com>
 M: Neil Horman <nhorman@tuxdriver.com>
 M: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
+M: Xin Long <lucien.xin@gmail.com>
 L: linux-sctp@vger.kernel.org
 S: Maintained
 W: http://lksctp.sourceforge.net
···
 
 USB WEBCAM GADGET
 M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+M: Daniel Scally <dan.scally@ideasonboard.com>
 L: linux-usb@vger.kernel.org
 S: Maintained
 F: drivers/usb/gadget/function/*uvc*
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 2
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc7
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
+1 -1
arch/arm/Makefile
···
 
 ifeq ($(CONFIG_THUMB2_KERNEL),y)
 CFLAGS_ISA	:=-Wa,-mimplicit-it=always $(AFLAGS_NOWARN)
-AFLAGS_ISA	:=$(CFLAGS_ISA) -Wa$(comma)-mthumb -D__thumb2__=2
+AFLAGS_ISA	:=$(CFLAGS_ISA) -Wa$(comma)-mthumb
 CFLAGS_ISA	+=-mthumb
 else
 CFLAGS_ISA	:=$(call cc-option,-marm,) $(AFLAGS_NOWARN)
+1 -1
arch/arm/boot/dts/aspeed-bmc-ibm-bonnell.dts
···
 };
 
 pca9849@75 {
-	compatible = "nxp,pca849";
+	compatible = "nxp,pca9849";
 	reg = <0x75>;
 	#address-cells = <1>;
 	#size-cells = <0>;
+2 -1
arch/arm/boot/dts/imx7d-smegw01.dts
···
 &usbotg2 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usbotg2>;
+	over-current-active-low;
 	dr_mode = "host";
 	status = "okay";
 };
···
 
 	pinctrl_usbotg2: usbotg2grp {
 		fsl,pins = <
-			MX7D_PAD_UART3_RTS_B__USB_OTG2_OC	0x04
+			MX7D_PAD_UART3_RTS_B__USB_OTG2_OC	0x5c
 		>;
 	};
 
+1
arch/arm/boot/dts/nuvoton-wpcm450.dtsi
···
 	reg = <0xc8000000 0x1000>, <0xc0000000 0x4000000>;
 	reg-names = "control", "memory";
 	clocks = <&clk 0>;
+	nuvoton,shm = <&shm>;
 	status = "disabled";
 };
 
+6 -1
arch/arm/crypto/Makefile
···
 
 clean-files += poly1305-core.S sha256-core.S sha512-core.S
 
+aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1
+
+AFLAGS_sha256-core.o += $(aflags-thumb2-y)
+AFLAGS_sha512-core.o += $(aflags-thumb2-y)
+
 # massage the perlasm code a bit so we only get the NEON routine if we need it
 poly1305-aflags-$(CONFIG_CPU_V7) := -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_ARCH__=5
 poly1305-aflags-$(CONFIG_KERNEL_MODE_NEON) := -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_ARCH__=7
-AFLAGS_poly1305-core.o += $(poly1305-aflags-y)
+AFLAGS_poly1305-core.o += $(poly1305-aflags-y) $(aflags-thumb2-y)
+1 -1
arch/arm/mm/nommu.c
···
 	mpu_setup();
 
 	/* allocate the zero page. */
-	zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+	zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
 	if (!zero_page)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
 		      __func__, PAGE_SIZE, PAGE_SIZE);
+1
arch/arm/mm/proc-macros.S
···
 *  VM_EXEC
 */
 #include <asm/asm-offsets.h>
+#include <asm/pgtable.h>
 #include <asm/thread_info.h>
 
 #ifdef CONFIG_CPU_V7M
+1 -1
arch/arm64/boot/dts/freescale/imx8dxl.dtsi
···
 
 	sc_pwrkey: keys {
 		compatible = "fsl,imx8qxp-sc-key", "fsl,imx-sc-key";
-		linux,keycode = <KEY_POWER>;
+		linux,keycodes = <KEY_POWER>;
 		wakeup-source;
 	};
 
+1
arch/arm64/boot/dts/freescale/imx8mm-data-modul-edm-sbc.dts
···
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_watchdog_gpio>;
 	compatible = "linux,wdt-gpio";
+	always-running;
 	gpios = <&gpio1 8 GPIO_ACTIVE_HIGH>;
 	hw_algo = "level";
 	/* Reset triggers in 2..3 seconds */
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
···
 #define MX8MM_IOMUXC_UART1_RXD_GPIO5_IO22      0x234 0x49C 0x000 0x5 0x0
 #define MX8MM_IOMUXC_UART1_RXD_TPSMP_HDATA24   0x234 0x49C 0x000 0x7 0x0
 #define MX8MM_IOMUXC_UART1_TXD_UART1_DCE_TX    0x238 0x4A0 0x000 0x0 0x0
-#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX    0x238 0x4A0 0x4F4 0x0 0x0
+#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX    0x238 0x4A0 0x4F4 0x0 0x1
 #define MX8MM_IOMUXC_UART1_TXD_ECSPI3_MOSI     0x238 0x4A0 0x000 0x1 0x0
 #define MX8MM_IOMUXC_UART1_TXD_GPIO5_IO23      0x238 0x4A0 0x000 0x5 0x0
 #define MX8MM_IOMUXC_UART1_TXD_TPSMP_HDATA25   0x238 0x4A0 0x000 0x7 0x0
-1
arch/arm64/boot/dts/freescale/imx8mm-venice-gw72xx-0x-rs232-rts.dtso
···
 	pinctrl-0 = <&pinctrl_uart2>;
 	rts-gpios = <&gpio5 29 GPIO_ACTIVE_LOW>;
 	cts-gpios = <&gpio5 28 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
-1
arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx-0x-rs232-rts.dtso
···
 	pinctrl-0 = <&pinctrl_uart2>;
 	rts-gpios = <&gpio5 29 GPIO_ACTIVE_LOW>;
 	cts-gpios = <&gpio5 28 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
-1
arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx.dtsi
···
 	pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_bten>;
 	cts-gpios = <&gpio5 8 GPIO_ACTIVE_LOW>;
 	rts-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 
 	bluetooth {
-3
arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
···
 	dtr-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
 	dsr-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
 	dcd-gpios = <&gpio1 11 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
···
 	pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
 	cts-gpios = <&gpio4 10 GPIO_ACTIVE_LOW>;
 	rts-gpios = <&gpio4 9 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
···
 	pinctrl-0 = <&pinctrl_uart4>, <&pinctrl_uart4_gpio>;
 	cts-gpios = <&gpio5 11 GPIO_ACTIVE_LOW>;
 	rts-gpios = <&gpio5 12 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
-3
arch/arm64/boot/dts/freescale/imx8mm-venice-gw7902.dts
···
 	pinctrl-0 = <&pinctrl_uart1>, <&pinctrl_uart1_gpio>;
 	rts-gpios = <&gpio4 10 GPIO_ACTIVE_LOW>;
 	cts-gpios = <&gpio4 24 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
···
 	pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
 	rts-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
 	cts-gpios = <&gpio2 0 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 
 	bluetooth {
···
 	dtr-gpios = <&gpio4 3 GPIO_ACTIVE_LOW>;
 	dsr-gpios = <&gpio4 4 GPIO_ACTIVE_LOW>;
 	dcd-gpios = <&gpio4 6 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
-1
arch/arm64/boot/dts/freescale/imx8mm-venice-gw7903.dts
···
 	dtr-gpios = <&gpio1 0 GPIO_ACTIVE_LOW>;
 	dsr-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
 	dcd-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 };
 
+1
arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
···
 	off-on-delay = <500000>;
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_reg_eth>;
+	regulator-always-on;
 	regulator-boot-on;
 	regulator-max-microvolt = <3300000>;
 	regulator-min-microvolt = <3300000>;
-1
arch/arm64/boot/dts/freescale/imx8mn-venice-gw7902.dts
···
 	pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
 	rts-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
 	cts-gpios = <&gpio2 0 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 
 	bluetooth {
-1
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
···
 	pinctrl-0 = <&pinctrl_uart3>, <&pinctrl_uart3_gpio>;
 	cts-gpios = <&gpio3 21 GPIO_ACTIVE_LOW>;
 	rts-gpios = <&gpio3 22 GPIO_ACTIVE_LOW>;
-	uart-has-rtscts;
 	status = "okay";
 
 	bluetooth {
+9
arch/arm64/include/asm/efi.h
···
 })
 
 extern spinlock_t efi_rt_lock;
+extern u64 *efi_rt_stack_top;
 efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
+
+/*
+ * efi_rt_stack_top[-1] contains the value the stack pointer had before
+ * switching to the EFI runtime stack.
+ */
+#define current_in_efi()						\
+	(!preemptible() && efi_rt_stack_top != NULL &&			\
+	 on_task_stack(current, READ_ONCE(efi_rt_stack_top[-1]), 1))
 
 #define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
 
+15
arch/arm64/include/asm/stacktrace.h
···
 #define stackinfo_get_sdei_critical() stackinfo_get_unknown()
 #endif
 
+#ifdef CONFIG_EFI
+extern u64 *efi_rt_stack_top;
+
+static inline struct stack_info stackinfo_get_efi(void)
+{
+	unsigned long high = (u64)efi_rt_stack_top;
+	unsigned long low = high - THREAD_SIZE;
+
+	return (struct stack_info) {
+		.low = low,
+		.high = high,
+	};
+}
+#endif
+
 #endif	/* __ASM_STACKTRACE_H */
+6
arch/arm64/kernel/efi-rt-wrapper.S
···
 	mov	x4, x6
 	blr	x8
 
+	mov	x16, sp
 	mov	sp, x29
+	str	xzr, [x16, #8]			// clear recorded task SP value
+
 	ldp	x1, x2, [sp, #16]
 	cmp	x2, x18
 	ldp	x29, x30, [sp], #112
···
 
 SYM_CODE_START(__efi_rt_asm_recover)
 	mov	sp, x30
+
+	ldr_l	x16, efi_rt_stack_top		// clear recorded task SP value
+	str	xzr, [x16, #-8]
 
 	ldp	x19, x20, [sp, #32]
 	ldp	x21, x22, [sp, #48]
+2 -1
arch/arm64/kernel/efi.c
···
 #include <linux/init.h>
 
 #include <asm/efi.h>
+#include <asm/stacktrace.h>
 
 static bool region_is_misaligned(const efi_memory_desc_t *md)
 {
···
 bool efi_runtime_fixup_exception(struct pt_regs *regs, const char *msg)
 {
 	/* Check whether the exception occurred while running the firmware */
-	if (current_work() != &efi_rts_work.work || regs->pc >= TASK_SIZE_64)
+	if (!current_in_efi() || regs->pc >= TASK_SIZE_64)
 		return false;
 
 	pr_err(FW_BUG "Unable to handle %s in EFI runtime service\n", msg);
+12
arch/arm64/kernel/stacktrace.c
···
 * Copyright (C) 2012 ARM Ltd.
 */
 #include <linux/kernel.h>
+#include <linux/efi.h>
 #include <linux/export.h>
 #include <linux/ftrace.h>
 #include <linux/sched.h>
···
 #include <linux/sched/task_stack.h>
 #include <linux/stacktrace.h>
 
+#include <asm/efi.h>
 #include <asm/irq.h>
 #include <asm/stack_pointer.h>
 #include <asm/stacktrace.h>
···
 		: stackinfo_get_unknown();		\
 	})
 
+#define STACKINFO_EFI					\
+	({						\
+		((task == current) && current_in_efi())	\
+			? stackinfo_get_efi()		\
+			: stackinfo_get_unknown();	\
+	})
+
 noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
···
 #if defined(CONFIG_VMAP_STACK) && defined(CONFIG_ARM_SDE_INTERFACE)
 		STACKINFO_SDEI(normal),
 		STACKINFO_SDEI(critical),
+#endif
+#ifdef CONFIG_EFI
+		STACKINFO_EFI,
 #endif
 	};
 	struct unwind_state state = {
+1 -1
arch/arm64/kvm/guest.c
···
 
 	/* uaccess failed, don't leave stale tags */
 	if (num_tags != MTE_GRANULES_PER_PAGE)
-		mte_clear_page_tags(page);
+		mte_clear_page_tags(maddr);
 	set_page_mte_tagged(page);
 
 	kvm_release_pfn_dirty(pfn);
+5 -8
arch/arm64/kvm/vgic/vgic-its.c
···
 	       ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
 		ite->collection->collection_id;
 	val = cpu_to_le64(val);
-	return kvm_write_guest_lock(kvm, gpa, &val, ite_esz);
+	return vgic_write_guest_lock(kvm, gpa, &val, ite_esz);
 }
 
 /**
···
 	       (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
 		(dev->num_eventid_bits - 1));
 	val = cpu_to_le64(val);
-	return kvm_write_guest_lock(kvm, ptr, &val, dte_esz);
+	return vgic_write_guest_lock(kvm, ptr, &val, dte_esz);
 }
 
 /**
···
 	       ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
 		collection->collection_id);
 	val = cpu_to_le64(val);
-	return kvm_write_guest_lock(its->dev->kvm, gpa, &val, esz);
+	return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz);
 }
 
 /*
···
 	 */
 	val = 0;
 	BUG_ON(cte_esz > sizeof(val));
-	ret = kvm_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
+	ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
 	return ret;
 }
 
···
 static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
 {
 	const struct vgic_its_abi *abi = vgic_its_get_abi(its);
-	struct vgic_dist *dist = &kvm->arch.vgic;
 	int ret = 0;
 
 	if (attr == KVM_DEV_ARM_VGIC_CTRL_INIT) /* Nothing to do */
···
 		vgic_its_reset(kvm, its);
 		break;
 	case KVM_DEV_ARM_ITS_SAVE_TABLES:
-		dist->save_its_tables_in_progress = true;
 		ret = abi->save_tables(its);
-		dist->save_its_tables_in_progress = false;
 		break;
 	case KVM_DEV_ARM_ITS_RESTORE_TABLES:
 		ret = abi->restore_tables(its);
···
 {
 	struct vgic_dist *dist = &kvm->arch.vgic;
 
-	return dist->save_its_tables_in_progress;
+	return dist->table_write_in_progress;
 }
 
 static int vgic_its_set_attr(struct kvm_device *dev,
+13 -16
arch/arm64/kvm/vgic/vgic-v3.c
···
 	if (status) {
 		/* clear consumed data */
 		val &= ~(1 << bit_nr);
-		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
+		ret = vgic_write_guest_lock(kvm, ptr, &val, 1);
 		if (ret)
 			return ret;
 	}
···
 * The deactivation of the doorbell interrupt will trigger the
 * unmapping of the associated vPE.
 */
-static void unmap_all_vpes(struct vgic_dist *dist)
+static void unmap_all_vpes(struct kvm *kvm)
 {
-	struct irq_desc *desc;
+	struct vgic_dist *dist = &kvm->arch.vgic;
 	int i;
 
-	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
-		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
-		irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
-	}
+	for (i = 0; i < dist->its_vm.nr_vpes; i++)
+		free_irq(dist->its_vm.vpes[i]->irq, kvm_get_vcpu(kvm, i));
 }
 
-static void map_all_vpes(struct vgic_dist *dist)
+static void map_all_vpes(struct kvm *kvm)
 {
-	struct irq_desc *desc;
+	struct vgic_dist *dist = &kvm->arch.vgic;
 	int i;
 
-	for (i = 0; i < dist->its_vm.nr_vpes; i++) {
-		desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
-		irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
-	}
+	for (i = 0; i < dist->its_vm.nr_vpes; i++)
+		WARN_ON(vgic_v4_request_vpe_irq(kvm_get_vcpu(kvm, i),
+						dist->its_vm.vpes[i]->irq));
 }
 
 /**
···
 	 * and enabling of the doorbells have already been done.
 	 */
 	if (kvm_vgic_global_state.has_gicv4_1) {
-		unmap_all_vpes(dist);
+		unmap_all_vpes(kvm);
 		vlpi_avail = true;
 	}
···
 		else
 			val &= ~(1 << bit_nr);
 
-		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
+		ret = vgic_write_guest_lock(kvm, ptr, &val, 1);
 		if (ret)
 			goto out;
 	}
 
 out:
 	if (vlpi_avail)
-		map_all_vpes(dist);
+		map_all_vpes(kvm);
 
 	return ret;
 }
+6 -2
arch/arm64/kvm/vgic/vgic-v4.c
···
 	*val = !!(*ptr & mask);
 }
 
+int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq)
+{
+	return request_irq(irq, vgic_v4_doorbell_handler, 0, "vcpu", vcpu);
+}
+
 /**
  * vgic_v4_init - Initialize the GICv4 data structures
  * @kvm:	Pointer to the VM being initialized
···
 	irq_flags &= ~IRQ_NOAUTOEN;
 	irq_set_status_flags(irq, irq_flags);
 
-	ret = request_irq(irq, vgic_v4_doorbell_handler,
-			  0, "vcpu", vcpu);
+	ret = vgic_v4_request_vpe_irq(vcpu, irq);
 	if (ret) {
 		kvm_err("failed to allocate vcpu IRQ%d\n", irq);
 		/*
+15
arch/arm64/kvm/vgic/vgic.h
··· 6 6 #define __KVM_ARM_VGIC_NEW_H__ 7 7 8 8 #include <linux/irqchip/arm-gic-common.h> 9 + #include <asm/kvm_mmu.h> 9 10 10 11 #define PRODUCT_ID_KVM 0x4b /* ASCII code K */ 11 12 #define IMPLEMENTER_ARM 0x43b ··· 130 129 static inline bool vgic_irq_is_multi_sgi(struct vgic_irq *irq) 131 130 { 132 131 return vgic_irq_get_lr_count(irq) > 1; 132 + } 133 + 134 + static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa, 135 + const void *data, unsigned long len) 136 + { 137 + struct vgic_dist *dist = &kvm->arch.vgic; 138 + int ret; 139 + 140 + dist->table_write_in_progress = true; 141 + ret = kvm_write_guest_lock(kvm, gpa, data, len); 142 + dist->table_write_in_progress = false; 143 + 144 + return ret; 133 145 } 134 146 135 147 /* ··· 345 331 void vgic_v4_teardown(struct kvm *kvm); 346 332 void vgic_v4_configure_vsgis(struct kvm *kvm); 347 333 void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val); 334 + int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq); 348 335 349 336 #endif
+5 -2
arch/ia64/kernel/sys_ia64.c
··· 170 170 asmlinkage long 171 171 ia64_clock_getres(const clockid_t which_clock, struct __kernel_timespec __user *tp) 172 172 { 173 + struct timespec64 rtn_tp; 174 + s64 tick_ns; 175 + 173 176 /* 174 177 * ia64's clock_gettime() syscall is implemented as a vdso call 175 178 * fsys_clock_gettime(). Currently it handles only ··· 188 185 switch (which_clock) { 189 186 case CLOCK_REALTIME: 190 187 case CLOCK_MONOTONIC: 191 - s64 tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq); 192 - struct timespec64 rtn_tp = ns_to_timespec64(tick_ns); 188 + tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq); 189 + rtn_tp = ns_to_timespec64(tick_ns); 193 190 return put_timespec64(&rtn_tp, tp); 194 191 } 195 192
+3 -2
arch/parisc/kernel/firmware.c
··· 1303 1303 */ 1304 1304 int pdc_iodc_print(const unsigned char *str, unsigned count) 1305 1305 { 1306 - unsigned int i; 1306 + unsigned int i, found = 0; 1307 1307 unsigned long flags; 1308 1308 1309 1309 count = min_t(unsigned int, count, sizeof(iodc_dbuf)); ··· 1315 1315 iodc_dbuf[i+0] = '\r'; 1316 1316 iodc_dbuf[i+1] = '\n'; 1317 1317 i += 2; 1318 + found = 1; 1318 1319 goto print; 1319 1320 default: 1320 1321 iodc_dbuf[i] = str[i]; ··· 1331 1330 __pa(pdc_result), 0, __pa(iodc_dbuf), i, 0); 1332 1331 spin_unlock_irqrestore(&pdc_lock, flags); 1333 1332 1334 - return i; 1333 + return i - found; 1335 1334 } 1336 1335 1337 1336 #if !defined(BOOTLOADER)
+16 -5
arch/parisc/kernel/ptrace.c
··· 126 126 unsigned long tmp; 127 127 long ret = -EIO; 128 128 129 + unsigned long user_regs_struct_size = sizeof(struct user_regs_struct); 130 + #ifdef CONFIG_64BIT 131 + if (is_compat_task()) 132 + user_regs_struct_size /= 2; 133 + #endif 134 + 129 135 switch (request) { 130 136 131 137 /* Read the word at location addr in the USER area. For ptraced ··· 172 166 addr >= sizeof(struct pt_regs)) 173 167 break; 174 168 if (addr == PT_IAOQ0 || addr == PT_IAOQ1) { 175 - data |= 3; /* ensure userspace privilege */ 169 + data |= PRIV_USER; /* ensure userspace privilege */ 176 170 } 177 171 if ((addr >= PT_GR1 && addr <= PT_GR31) || 178 172 addr == PT_IAOQ0 || addr == PT_IAOQ1 || ··· 187 181 return copy_regset_to_user(child, 188 182 task_user_regset_view(current), 189 183 REGSET_GENERAL, 190 - 0, sizeof(struct user_regs_struct), 184 + 0, user_regs_struct_size, 191 185 datap); 192 186 193 187 case PTRACE_SETREGS: /* Set all gp regs in the child. */ 194 188 return copy_regset_from_user(child, 195 189 task_user_regset_view(current), 196 190 REGSET_GENERAL, 197 - 0, sizeof(struct user_regs_struct), 191 + 0, user_regs_struct_size, 198 192 datap); 199 193 200 194 case PTRACE_GETFPREGS: /* Get the child FPU state. 
*/ ··· 291 285 if (addr >= sizeof(struct pt_regs)) 292 286 break; 293 287 if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) { 294 - data |= 3; /* ensure userspace privilege */ 288 + data |= PRIV_USER; /* ensure userspace privilege */ 295 289 } 296 290 if (addr >= PT_FR0 && addr <= PT_FR31 + 4) { 297 291 /* Special case, fp regs are 64 bits anyway */ ··· 308 302 } 309 303 } 310 304 break; 305 + case PTRACE_GETREGS: 306 + case PTRACE_SETREGS: 307 + case PTRACE_GETFPREGS: 308 + case PTRACE_SETFPREGS: 309 + return arch_ptrace(child, request, addr, data); 311 310 312 311 default: 313 312 ret = compat_ptrace_request(child, request, addr, data); ··· 495 484 case RI(iaoq[0]): 496 485 case RI(iaoq[1]): 497 486 /* set 2 lowest bits to ensure userspace privilege: */ 498 - regs->iaoq[num - RI(iaoq[0])] = val | 3; 487 + regs->iaoq[num - RI(iaoq[0])] = val | PRIV_USER; 499 488 return; 500 489 case RI(sar): regs->sar = val; 501 490 return;
+2
arch/powerpc/include/asm/book3s/64/tlbflush.h
··· 97 97 { 98 98 if (radix_enabled()) 99 99 radix__tlb_flush(tlb); 100 + 101 + return hash__tlb_flush(tlb); 100 102 } 101 103 102 104 #ifdef CONFIG_SMP
+30 -13
arch/powerpc/include/asm/hw_irq.h
··· 173 173 return flags; 174 174 } 175 175 176 + static inline notrace unsigned long irq_soft_mask_andc_return(unsigned long mask) 177 + { 178 + unsigned long flags = irq_soft_mask_return(); 179 + 180 + irq_soft_mask_set(flags & ~mask); 181 + 182 + return flags; 183 + } 184 + 176 185 static inline unsigned long arch_local_save_flags(void) 177 186 { 178 187 return irq_soft_mask_return(); ··· 201 192 202 193 static inline unsigned long arch_local_irq_save(void) 203 194 { 204 - return irq_soft_mask_set_return(IRQS_DISABLED); 195 + return irq_soft_mask_or_return(IRQS_DISABLED); 205 196 } 206 197 207 198 static inline bool arch_irqs_disabled_flags(unsigned long flags) ··· 340 331 * is a different soft-masked interrupt pending that requires hard 341 332 * masking. 342 333 */ 343 - static inline bool should_hard_irq_enable(void) 334 + static inline bool should_hard_irq_enable(struct pt_regs *regs) 344 335 { 345 336 if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { 346 - WARN_ON(irq_soft_mask_return() == IRQS_ENABLED); 337 + WARN_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED); 338 + WARN_ON(!(get_paca()->irq_happened & PACA_IRQ_HARD_DIS)); 347 339 WARN_ON(mfmsr() & MSR_EE); 348 340 } 349 341 ··· 357 347 * 358 348 * TODO: Add test for 64e 359 349 */ 360 - if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !power_pmu_wants_prompt_pmi()) 361 - return false; 350 + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) { 351 + if (!power_pmu_wants_prompt_pmi()) 352 + return false; 353 + /* 354 + * If PMIs are disabled then IRQs should be disabled as well, 355 + * so we shouldn't see this condition, check for it just in 356 + * case because we are about to enable PMIs. 357 + */ 358 + if (WARN_ON_ONCE(regs->softe & IRQS_PMI_DISABLED)) 359 + return false; 360 + } 362 361 363 362 if (get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK) 364 363 return false; ··· 377 358 378 359 /* 379 360 * Do the hard enabling, only call this if should_hard_irq_enable is true. 
361 + * This allows PMI interrupts to profile irq handlers. 380 362 */ 381 363 static inline void do_hard_irq_enable(void) 382 364 { 383 - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { 384 - WARN_ON(irq_soft_mask_return() == IRQS_ENABLED); 385 - WARN_ON(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK); 386 - WARN_ON(mfmsr() & MSR_EE); 387 - } 388 365 /* 389 - * This allows PMI interrupts (and watchdog soft-NMIs) through. 390 - * There is no other reason to enable this way. 366 + * Asynch interrupts come in with IRQS_ALL_DISABLED, 367 + * PACA_IRQ_HARD_DIS, and MSR[EE]=0. 391 368 */ 369 + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) 370 + irq_soft_mask_andc_return(IRQS_PMI_DISABLED); 392 371 get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS; 393 372 __hard_irq_enable(); 394 373 } ··· 469 452 return !(regs->msr & MSR_EE); 470 453 } 471 454 472 - static __always_inline bool should_hard_irq_enable(void) 455 + static __always_inline bool should_hard_irq_enable(struct pt_regs *regs) 473 456 { 474 457 return false; 475 458 }
+1 -1
arch/powerpc/kernel/dbell.c
··· 27 27 28 28 ppc_msgsync(); 29 29 30 - if (should_hard_irq_enable()) 30 + if (should_hard_irq_enable(regs)) 31 31 do_hard_irq_enable(); 32 32 33 33 kvmppc_clear_host_ipi(smp_processor_id());
+2 -1
arch/powerpc/kernel/head_85xx.S
··· 864 864 * SPE unavailable trap from kernel - print a message, but let 865 865 * the task use SPE in the kernel until it returns to user mode. 866 866 */ 867 - KernelSPE: 867 + SYM_FUNC_START_LOCAL(KernelSPE) 868 868 lwz r3,_MSR(r1) 869 869 oris r3,r3,MSR_SPE@h 870 870 stw r3,_MSR(r1) /* enable use of SPE after return */ ··· 881 881 #endif 882 882 .align 4,0 883 883 884 + SYM_FUNC_END(KernelSPE) 884 885 #endif /* CONFIG_SPE */ 885 886 886 887 /*
+1 -1
arch/powerpc/kernel/irq.c
··· 238 238 irq = static_call(ppc_get_irq)(); 239 239 240 240 /* We can hard enable interrupts now to allow perf interrupts */ 241 - if (should_hard_irq_enable()) 241 + if (should_hard_irq_enable(regs)) 242 242 do_hard_irq_enable(); 243 243 244 244 /* And finally process it */
+1 -1
arch/powerpc/kernel/time.c
··· 515 515 } 516 516 517 517 /* Conditionally hard-enable interrupts. */ 518 - if (should_hard_irq_enable()) { 518 + if (should_hard_irq_enable(regs)) { 519 519 /* 520 520 * Ensure a positive value is written to the decrementer, or 521 521 * else some CPUs will continue to take decrementer exceptions.
+7 -4
arch/powerpc/kexec/file_load_64.c
··· 989 989 * linux,drconf-usable-memory properties. Get an approximate on the 990 990 * number of usable memory entries and use for FDT size estimation. 991 991 */ 992 - usm_entries = ((memblock_end_of_DRAM() / drmem_lmb_size()) + 993 - (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); 994 - 995 - extra_size = (unsigned int)(usm_entries * sizeof(u64)); 992 + if (drmem_lmb_size()) { 993 + usm_entries = ((memory_hotplug_max() / drmem_lmb_size()) + 994 + (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); 995 + extra_size = (unsigned int)(usm_entries * sizeof(u64)); 996 + } else { 997 + extra_size = 0; 998 + } 996 999 997 1000 /* 998 1001 * Get the number of CPU nodes in the current DT. This allows to
+2 -3
arch/powerpc/kvm/booke.c
··· 912 912 913 913 static void kvmppc_fill_pt_regs(struct pt_regs *regs) 914 914 { 915 - ulong r1, ip, msr, lr; 915 + ulong r1, msr, lr; 916 916 917 917 asm("mr %0, 1" : "=r"(r1)); 918 918 asm("mflr %0" : "=r"(lr)); 919 919 asm("mfmsr %0" : "=r"(msr)); 920 - asm("bl 1f; 1: mflr %0" : "=r"(ip)); 921 920 922 921 memset(regs, 0, sizeof(*regs)); 923 922 regs->gpr[1] = r1; 924 - regs->nip = ip; 923 + regs->nip = _THIS_IP_; 925 924 regs->msr = msr; 926 925 regs->link = lr; 927 926 }
+24
arch/powerpc/mm/book3s64/radix_pgtable.c
··· 234 234 end = (unsigned long)__end_rodata; 235 235 236 236 radix__change_memory_range(start, end, _PAGE_WRITE); 237 + 238 + for (start = PAGE_OFFSET; start < (unsigned long)_stext; start += PAGE_SIZE) { 239 + end = start + PAGE_SIZE; 240 + if (overlaps_interrupt_vector_text(start, end)) 241 + radix__change_memory_range(start, end, _PAGE_WRITE); 242 + else 243 + break; 244 + } 237 245 } 238 246 239 247 void radix__mark_initmem_nx(void) ··· 270 262 static unsigned long next_boundary(unsigned long addr, unsigned long end) 271 263 { 272 264 #ifdef CONFIG_STRICT_KERNEL_RWX 265 + unsigned long stext_phys; 266 + 267 + stext_phys = __pa_symbol(_stext); 268 + 269 + // Relocatable kernel running at non-zero real address 270 + if (stext_phys != 0) { 271 + // The end of interrupts code at zero is a rodata boundary 272 + unsigned long end_intr = __pa_symbol(__end_interrupts) - stext_phys; 273 + if (addr < end_intr) 274 + return end_intr; 275 + 276 + // Start of relocated kernel text is a rodata boundary 277 + if (addr < stext_phys) 278 + return stext_phys; 279 + } 280 + 273 281 if (addr < __pa_symbol(__srwx_boundary)) 274 282 return __pa_symbol(__srwx_boundary); 275 283 #endif
+7 -7
arch/powerpc/perf/imc-pmu.c
··· 22 22 * Used to avoid races in counting the nest-pmu units during hotplug 23 23 * register and unregister 24 24 */ 25 - static DEFINE_SPINLOCK(nest_init_lock); 25 + static DEFINE_MUTEX(nest_init_lock); 26 26 static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc); 27 27 static struct imc_pmu **per_nest_pmu_arr; 28 28 static cpumask_t nest_imc_cpumask; ··· 1629 1629 static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr) 1630 1630 { 1631 1631 if (pmu_ptr->domain == IMC_DOMAIN_NEST) { 1632 - spin_lock(&nest_init_lock); 1632 + mutex_lock(&nest_init_lock); 1633 1633 if (nest_pmus == 1) { 1634 1634 cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE); 1635 1635 kfree(nest_imc_refc); ··· 1639 1639 1640 1640 if (nest_pmus > 0) 1641 1641 nest_pmus--; 1642 - spin_unlock(&nest_init_lock); 1642 + mutex_unlock(&nest_init_lock); 1643 1643 } 1644 1644 1645 1645 /* Free core_imc memory */ ··· 1796 1796 * rest. To handle the cpuhotplug callback unregister, we track 1797 1797 * the number of nest pmus in "nest_pmus". 1798 1798 */ 1799 - spin_lock(&nest_init_lock); 1799 + mutex_lock(&nest_init_lock); 1800 1800 if (nest_pmus == 0) { 1801 1801 ret = init_nest_pmu_ref(); 1802 1802 if (ret) { 1803 - spin_unlock(&nest_init_lock); 1803 + mutex_unlock(&nest_init_lock); 1804 1804 kfree(per_nest_pmu_arr); 1805 1805 per_nest_pmu_arr = NULL; 1806 1806 goto err_free_mem; ··· 1808 1808 /* Register for cpu hotplug notification. */ 1809 1809 ret = nest_pmu_cpumask_init(); 1810 1810 if (ret) { 1811 - spin_unlock(&nest_init_lock); 1811 + mutex_unlock(&nest_init_lock); 1812 1812 kfree(nest_imc_refc); 1813 1813 kfree(per_nest_pmu_arr); 1814 1814 per_nest_pmu_arr = NULL; ··· 1816 1816 } 1817 1817 } 1818 1818 nest_pmus++; 1819 - spin_unlock(&nest_init_lock); 1819 + mutex_unlock(&nest_init_lock); 1820 1820 break; 1821 1821 case IMC_DOMAIN_CORE: 1822 1822 ret = core_imc_pmu_cpumask_init();
+3
arch/riscv/Makefile
··· 80 80 KBUILD_CFLAGS += -fno-omit-frame-pointer 81 81 endif 82 82 83 + # Avoid generating .eh_frame sections. 84 + KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables 85 + 83 86 KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax) 84 87 KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax) 85 88
+1 -1
arch/riscv/include/asm/alternative-macros.h
··· 46 46 47 47 .macro ALTERNATIVE_CFG_2 old_c, new_c_1, vendor_id_1, errata_id_1, enable_1, \ 48 48 new_c_2, vendor_id_2, errata_id_2, enable_2 49 - ALTERNATIVE_CFG \old_c, \new_c_1, \vendor_id_1, \errata_id_1, \enable_1 49 + ALTERNATIVE_CFG "\old_c", "\new_c_1", \vendor_id_1, \errata_id_1, \enable_1 50 50 ALT_NEW_CONTENT \vendor_id_2, \errata_id_2, \enable_2, \new_c_2 51 51 .endm 52 52
-3
arch/riscv/include/asm/hwcap.h
··· 70 70 */ 71 71 enum riscv_isa_ext_key { 72 72 RISCV_ISA_EXT_KEY_FPU, /* For 'F' and 'D' */ 73 - RISCV_ISA_EXT_KEY_ZIHINTPAUSE, 74 73 RISCV_ISA_EXT_KEY_SVINVAL, 75 74 RISCV_ISA_EXT_KEY_MAX, 76 75 }; ··· 90 91 return RISCV_ISA_EXT_KEY_FPU; 91 92 case RISCV_ISA_EXT_d: 92 93 return RISCV_ISA_EXT_KEY_FPU; 93 - case RISCV_ISA_EXT_ZIHINTPAUSE: 94 - return RISCV_ISA_EXT_KEY_ZIHINTPAUSE; 95 94 case RISCV_ISA_EXT_SVINVAL: 96 95 return RISCV_ISA_EXT_KEY_SVINVAL; 97 96 default:
+12 -16
arch/riscv/include/asm/vdso/processor.h
··· 4 4 5 5 #ifndef __ASSEMBLY__ 6 6 7 - #include <linux/jump_label.h> 8 7 #include <asm/barrier.h> 9 - #include <asm/hwcap.h> 10 8 11 9 static inline void cpu_relax(void) 12 10 { 13 - if (!static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_ZIHINTPAUSE])) { 14 11 #ifdef __riscv_muldiv 15 - int dummy; 16 - /* In lieu of a halt instruction, induce a long-latency stall. */ 17 - __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy)); 12 + int dummy; 13 + /* In lieu of a halt instruction, induce a long-latency stall. */ 14 + __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy)); 18 15 #endif 19 - } else { 20 - /* 21 - * Reduce instruction retirement. 22 - * This assumes the PC changes. 23 - */ 24 - #ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE 25 - __asm__ __volatile__ ("pause"); 16 + 17 + #ifdef __riscv_zihintpause 18 + /* 19 + * Reduce instruction retirement. 20 + * This assumes the PC changes. 21 + */ 22 + __asm__ __volatile__ ("pause"); 26 23 #else 27 - /* Encoding of the pause instruction */ 28 - __asm__ __volatile__ (".4byte 0x100000F"); 24 + /* Encoding of the pause instruction */ 25 + __asm__ __volatile__ (".4byte 0x100000F"); 29 26 #endif 30 - } 31 27 barrier(); 32 28 } 33 29
+1 -1
arch/riscv/kernel/head.S
··· 326 326 call soc_early_init 327 327 tail start_kernel 328 328 329 - #if CONFIG_RISCV_BOOT_SPINWAIT 329 + #ifdef CONFIG_RISCV_BOOT_SPINWAIT 330 330 .Lsecondary_start: 331 331 /* Set trap vector to spin forever to help debug */ 332 332 la a3, .Lsecondary_park
+18
arch/riscv/kernel/probes/kprobes.c
··· 48 48 post_kprobe_handler(p, kcb, regs); 49 49 } 50 50 51 + static bool __kprobes arch_check_kprobe(struct kprobe *p) 52 + { 53 + unsigned long tmp = (unsigned long)p->addr - p->offset; 54 + unsigned long addr = (unsigned long)p->addr; 55 + 56 + while (tmp <= addr) { 57 + if (tmp == addr) 58 + return true; 59 + 60 + tmp += GET_INSN_LENGTH(*(u16 *)tmp); 61 + } 62 + 63 + return false; 64 + } 65 + 51 66 int __kprobes arch_prepare_kprobe(struct kprobe *p) 52 67 { 53 68 unsigned long probe_addr = (unsigned long)p->addr; 54 69 55 70 if (probe_addr & 0x1) 71 + return -EILSEQ; 72 + 73 + if (!arch_check_kprobe(p)) 56 74 return -EILSEQ; 57 75 58 76 /* copy instruction */
+2 -2
arch/riscv/kernel/probes/simulate-insn.c
··· 71 71 u32 rd_index = (opcode >> 7) & 0x1f; 72 72 u32 rs1_index = (opcode >> 15) & 0x1f; 73 73 74 - ret = rv_insn_reg_set_val(regs, rd_index, addr + 4); 74 + ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr); 75 75 if (!ret) 76 76 return ret; 77 77 78 - ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr); 78 + ret = rv_insn_reg_set_val(regs, rd_index, addr + 4); 79 79 if (!ret) 80 80 return ret; 81 81
+2 -1
arch/riscv/kernel/smpboot.c
··· 39 39 40 40 void __init smp_prepare_boot_cpu(void) 41 41 { 42 - init_cpu_topology(); 43 42 } 44 43 45 44 void __init smp_prepare_cpus(unsigned int max_cpus) ··· 46 47 int cpuid; 47 48 int ret; 48 49 unsigned int curr_cpuid; 50 + 51 + init_cpu_topology(); 49 52 50 53 curr_cpuid = smp_processor_id(); 51 54 store_cpu_topology(curr_cpuid);
+1 -1
arch/s390/boot/decompressor.c
··· 80 80 void *output = (void *)decompress_offset; 81 81 82 82 __decompress(_compressed_start, _compressed_end - _compressed_start, 83 - NULL, NULL, output, 0, NULL, error); 83 + NULL, NULL, output, vmlinux.image_size, NULL, error); 84 84 return output; 85 85 }
+1
arch/sh/kernel/vmlinux.lds.S
··· 4 4 * Written by Niibe Yutaka and Paul Mundt 5 5 */ 6 6 OUTPUT_ARCH(sh) 7 + #define RUNTIME_DISCARD_EXIT 7 8 #include <asm/thread_info.h> 8 9 #include <asm/cache.h> 9 10 #include <asm/vmlinux.lds.h>
+1 -1
arch/x86/Makefile
··· 14 14 15 15 ifdef CONFIG_CC_IS_GCC 16 16 RETPOLINE_CFLAGS := $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register) 17 - RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix) 18 17 RETPOLINE_VDSO_CFLAGS := $(call cc-option,-mindirect-branch=thunk-inline -mindirect-branch-register) 19 18 endif 20 19 ifdef CONFIG_CC_IS_CLANG 21 20 RETPOLINE_CFLAGS := -mretpoline-external-thunk 22 21 RETPOLINE_VDSO_CFLAGS := -mretpoline 23 22 endif 23 + RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch-cs-prefix) 24 24 25 25 ifdef CONFIG_RETHUNK 26 26 RETHUNK_CFLAGS := -mfunction-return=thunk-extern
+6
arch/x86/boot/compressed/ident_map_64.c
··· 180 180 181 181 /* Load the new page-table. */ 182 182 write_cr3(top_level_pgt); 183 + 184 + /* 185 + * Now that the required page table mappings are established and a 186 + * GHCB can be used, check for SNP guest/HV feature compatibility. 187 + */ 188 + snp_check_features(); 183 189 } 184 190 185 191 static pte_t *split_large_pmd(struct x86_mapping_info *info,
+2
arch/x86/boot/compressed/misc.h
··· 126 126 127 127 #ifdef CONFIG_AMD_MEM_ENCRYPT 128 128 void sev_enable(struct boot_params *bp); 129 + void snp_check_features(void); 129 130 void sev_es_shutdown_ghcb(void); 130 131 extern bool sev_es_check_ghcb_fault(unsigned long address); 131 132 void snp_set_page_private(unsigned long paddr); ··· 144 143 if (bp) 145 144 bp->cc_blob_address = 0; 146 145 } 146 + static inline void snp_check_features(void) { } 147 147 static inline void sev_es_shutdown_ghcb(void) { } 148 148 static inline bool sev_es_check_ghcb_fault(unsigned long address) 149 149 {
+70
arch/x86/boot/compressed/sev.c
··· 208 208 error("Can't unmap GHCB page"); 209 209 } 210 210 211 + static void __noreturn sev_es_ghcb_terminate(struct ghcb *ghcb, unsigned int set, 212 + unsigned int reason, u64 exit_info_2) 213 + { 214 + u64 exit_info_1 = SVM_VMGEXIT_TERM_REASON(set, reason); 215 + 216 + vc_ghcb_invalidate(ghcb); 217 + ghcb_set_sw_exit_code(ghcb, SVM_VMGEXIT_TERM_REQUEST); 218 + ghcb_set_sw_exit_info_1(ghcb, exit_info_1); 219 + ghcb_set_sw_exit_info_2(ghcb, exit_info_2); 220 + 221 + sev_es_wr_ghcb_msr(__pa(ghcb)); 222 + VMGEXIT(); 223 + 224 + while (true) 225 + asm volatile("hlt\n" : : : "memory"); 226 + } 227 + 211 228 bool sev_es_check_ghcb_fault(unsigned long address) 212 229 { 213 230 /* Check whether the fault was on the GHCB page */ ··· 285 268 attrs = 1; 286 269 if (rmpadjust((unsigned long)&boot_ghcb_page, RMP_PG_SIZE_4K, attrs)) 287 270 sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_NOT_VMPL0); 271 + } 272 + 273 + /* 274 + * SNP_FEATURES_IMPL_REQ is the mask of SNP features that will need 275 + * guest side implementation for proper functioning of the guest. If any 276 + * of these features are enabled in the hypervisor but are lacking guest 277 + * side implementation, the behavior of the guest will be undefined. The 278 + * guest could fail in non-obvious way making it difficult to debug. 279 + * 280 + * As the behavior of reserved feature bits is unknown to be on the 281 + * safe side add them to the required features mask. 
282 + */ 283 + #define SNP_FEATURES_IMPL_REQ (MSR_AMD64_SNP_VTOM | \ 284 + MSR_AMD64_SNP_REFLECT_VC | \ 285 + MSR_AMD64_SNP_RESTRICTED_INJ | \ 286 + MSR_AMD64_SNP_ALT_INJ | \ 287 + MSR_AMD64_SNP_DEBUG_SWAP | \ 288 + MSR_AMD64_SNP_VMPL_SSS | \ 289 + MSR_AMD64_SNP_SECURE_TSC | \ 290 + MSR_AMD64_SNP_VMGEXIT_PARAM | \ 291 + MSR_AMD64_SNP_VMSA_REG_PROTECTION | \ 292 + MSR_AMD64_SNP_RESERVED_BIT13 | \ 293 + MSR_AMD64_SNP_RESERVED_BIT15 | \ 294 + MSR_AMD64_SNP_RESERVED_MASK) 295 + 296 + /* 297 + * SNP_FEATURES_PRESENT is the mask of SNP features that are implemented 298 + * by the guest kernel. As and when a new feature is implemented in the 299 + * guest kernel, a corresponding bit should be added to the mask. 300 + */ 301 + #define SNP_FEATURES_PRESENT (0) 302 + 303 + void snp_check_features(void) 304 + { 305 + u64 unsupported; 306 + 307 + if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED)) 308 + return; 309 + 310 + /* 311 + * Terminate the boot if hypervisor has enabled any feature lacking 312 + * guest side implementation. Pass on the unsupported features mask through 313 + * EXIT_INFO_2 of the GHCB protocol so that those features can be reported 314 + * as part of the guest boot failure. 315 + */ 316 + unsupported = sev_status & SNP_FEATURES_IMPL_REQ & ~SNP_FEATURES_PRESENT; 317 + if (unsupported) { 318 + if (ghcb_version < 2 || (!boot_ghcb && !early_setup_ghcb())) 319 + sev_es_terminate(SEV_TERM_SET_GEN, GHCB_SNP_UNSUPPORTED); 320 + 321 + sev_es_ghcb_terminate(boot_ghcb, SEV_TERM_SET_GEN, 322 + GHCB_SNP_UNSUPPORTED, unsupported); 323 + } 288 324 } 289 325 290 326 void sev_enable(struct boot_params *bp)
+1
arch/x86/events/intel/core.c
··· 6339 6339 break; 6340 6340 6341 6341 case INTEL_FAM6_SAPPHIRERAPIDS_X: 6342 + case INTEL_FAM6_EMERALDRAPIDS_X: 6342 6343 pmem = true; 6343 6344 x86_pmu.late_ack = true; 6344 6345 memcpy(hw_cache_event_ids, spr_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+1
arch/x86/events/intel/cstate.c
··· 677 677 X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, &icx_cstates), 678 678 X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, &icx_cstates), 679 679 X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &icx_cstates), 680 + X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &icx_cstates), 680 681 681 682 X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &icl_cstates), 682 683 X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &icl_cstates),
+8
arch/x86/include/asm/acpi.h
··· 14 14 #include <asm/mmu.h> 15 15 #include <asm/mpspec.h> 16 16 #include <asm/x86_init.h> 17 + #include <asm/cpufeature.h> 17 18 18 19 #ifdef CONFIG_ACPI_APEI 19 20 # include <asm/pgtable_types.h> ··· 63 62 64 63 /* Physical address to resume after wakeup */ 65 64 unsigned long acpi_get_wakeup_address(void); 65 + 66 + static inline bool acpi_skip_set_wakeup_address(void) 67 + { 68 + return cpu_feature_enabled(X86_FEATURE_XENPV); 69 + } 70 + 71 + #define acpi_skip_set_wakeup_address acpi_skip_set_wakeup_address 66 72 67 73 /* 68 74 * Check if the CPU can handle C2 and deeper
+24 -2
arch/x86/include/asm/debugreg.h
··· 39 39 asm("mov %%db6, %0" :"=r" (val)); 40 40 break; 41 41 case 7: 42 - asm("mov %%db7, %0" :"=r" (val)); 42 + /* 43 + * Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them 44 + * with other code. 45 + * 46 + * This is needed because a DR7 access can cause a #VC exception 47 + * when running under SEV-ES. Taking a #VC exception is not a 48 + * safe thing to do just anywhere in the entry code and 49 + * re-ordering might place the access into an unsafe location. 50 + * 51 + * This happened in the NMI handler, where the DR7 read was 52 + * re-ordered to happen before the call to sev_es_ist_enter(), 53 + * causing stack recursion. 54 + */ 55 + asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER); 43 56 break; 44 57 default: 45 58 BUG(); ··· 79 66 asm("mov %0, %%db6" ::"r" (value)); 80 67 break; 81 68 case 7: 82 - asm("mov %0, %%db7" ::"r" (value)); 69 + /* 70 + * Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them 71 + * with other code. 72 + * 73 + * While it didn't happen with a DR7 write (see the DR7 read 74 + * comment above which explains where it happened), add the 75 + * __FORCE_ORDER here too to avoid similar problems in the 76 + * future. 77 + */ 78 + asm volatile("mov %0, %%db7" ::"r" (value), __FORCE_ORDER); 83 79 break; 84 80 default: 85 81 BUG();
+20
arch/x86/include/asm/msr-index.h
··· 566 566 #define MSR_AMD64_SEV_ES_ENABLED BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT) 567 567 #define MSR_AMD64_SEV_SNP_ENABLED BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT) 568 568 569 + /* SNP feature bits enabled by the hypervisor */ 570 + #define MSR_AMD64_SNP_VTOM BIT_ULL(3) 571 + #define MSR_AMD64_SNP_REFLECT_VC BIT_ULL(4) 572 + #define MSR_AMD64_SNP_RESTRICTED_INJ BIT_ULL(5) 573 + #define MSR_AMD64_SNP_ALT_INJ BIT_ULL(6) 574 + #define MSR_AMD64_SNP_DEBUG_SWAP BIT_ULL(7) 575 + #define MSR_AMD64_SNP_PREVENT_HOST_IBS BIT_ULL(8) 576 + #define MSR_AMD64_SNP_BTB_ISOLATION BIT_ULL(9) 577 + #define MSR_AMD64_SNP_VMPL_SSS BIT_ULL(10) 578 + #define MSR_AMD64_SNP_SECURE_TSC BIT_ULL(11) 579 + #define MSR_AMD64_SNP_VMGEXIT_PARAM BIT_ULL(12) 580 + #define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(14) 581 + #define MSR_AMD64_SNP_VMSA_REG_PROTECTION BIT_ULL(16) 582 + #define MSR_AMD64_SNP_SMT_PROTECTION BIT_ULL(17) 583 + 584 + /* SNP feature bits reserved for future use. */ 585 + #define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13) 586 + #define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15) 587 + #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 18) 588 + 569 589 #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f 570 590 571 591 /* AMD Collaborative Processor Performance Control MSRs */
+6
arch/x86/include/uapi/asm/svm.h
··· 116 116 #define SVM_VMGEXIT_AP_CREATE 1 117 117 #define SVM_VMGEXIT_AP_DESTROY 2 118 118 #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd 119 + #define SVM_VMGEXIT_TERM_REQUEST 0x8000fffe 120 + #define SVM_VMGEXIT_TERM_REASON(reason_set, reason_code) \ 121 + /* SW_EXITINFO1[3:0] */ \ 122 + (((((u64)reason_set) & 0xf)) | \ 123 + /* SW_EXITINFO1[11:4] */ \ 124 + ((((u64)reason_code) & 0xff) << 4)) 119 125 #define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff 120 126 121 127 /* Exit code reserved for hypervisor/software use */
+9
arch/x86/kernel/cpu/aperfmperf.c
··· 330 330 331 331 static void disable_freq_invariance_workfn(struct work_struct *work) 332 332 { 333 + int cpu; 334 + 333 335 static_branch_disable(&arch_scale_freq_key); 336 + 337 + /* 338 + * Set arch_freq_scale to a default value on all cpus 339 + * This negates the effect of scaling 340 + */ 341 + for_each_possible_cpu(cpu) 342 + per_cpu(arch_freq_scale, cpu) = SCHED_CAPACITY_SCALE; 334 343 } 335 344 336 345 static DECLARE_WORK(disable_freq_invariance_work,
+1
arch/x86/kernel/i8259.c
··· 114 114 disable_irq_nosync(irq); 115 115 io_apic_irqs &= ~(1<<irq); 116 116 irq_set_chip_and_handler(irq, &i8259A_chip, handle_level_irq); 117 + irq_set_status_flags(irq, IRQ_LEVEL); 117 118 enable_irq(irq); 118 119 lapic_assign_legacy_vector(irq, true); 119 120 }
+3 -1
arch/x86/kernel/irqinit.c
··· 65 65 66 66 legacy_pic->init(0); 67 67 68 - for (i = 0; i < nr_legacy_irqs(); i++) 68 + for (i = 0; i < nr_legacy_irqs(); i++) { 69 69 irq_set_chip_and_handler(i, chip, handle_level_irq); 70 + irq_set_status_flags(i, IRQ_LEVEL); 71 + } 70 72 } 71 73 72 74 void __init init_IRQ(void)
+9 -12
arch/x86/kvm/vmx/vmx.c
··· 3440 3440 { 3441 3441 u32 ar; 3442 3442 3443 - if (var->unusable || !var->present) 3444 - ar = 1 << 16; 3445 - else { 3446 - ar = var->type & 15; 3447 - ar |= (var->s & 1) << 4; 3448 - ar |= (var->dpl & 3) << 5; 3449 - ar |= (var->present & 1) << 7; 3450 - ar |= (var->avl & 1) << 12; 3451 - ar |= (var->l & 1) << 13; 3452 - ar |= (var->db & 1) << 14; 3453 - ar |= (var->g & 1) << 15; 3454 - } 3443 + ar = var->type & 15; 3444 + ar |= (var->s & 1) << 4; 3445 + ar |= (var->dpl & 3) << 5; 3446 + ar |= (var->present & 1) << 7; 3447 + ar |= (var->avl & 1) << 12; 3448 + ar |= (var->l & 1) << 13; 3449 + ar |= (var->db & 1) << 14; 3450 + ar |= (var->g & 1) << 15; 3451 + ar |= (var->unusable || !var->present) << 16; 3455 3452 3456 3453 return ar; 3457 3454 }
+2
arch/x86/pci/xen.c
··· 392 392 msi_for_each_desc(msidesc, &dev->dev, MSI_DESC_ASSOCIATED) { 393 393 for (i = 0; i < msidesc->nvec_used; i++) 394 394 xen_destroy_irq(msidesc->irq + i); 395 + msidesc->irq = 0; 395 396 } 396 397 } 397 398 ··· 434 433 }; 435 434 436 435 static struct msi_domain_info xen_pci_msi_domain_info = { 436 + .flags = MSI_FLAG_PCI_MSIX | MSI_FLAG_FREE_MSI_DESCS | MSI_FLAG_DEV_SYSFS, 437 437 .ops = &xen_pci_msi_domain_ops, 438 438 }; 439 439
+1 -1
block/bfq-cgroup.c
··· 769 769 * request from the old cgroup. 770 770 */ 771 771 bfq_put_cooperator(sync_bfqq); 772 - bfq_release_process_ref(bfqd, sync_bfqq); 773 772 bic_set_bfqq(bic, NULL, true); 773 + bfq_release_process_ref(bfqd, sync_bfqq); 774 774 } 775 775 } 776 776 }
+3 -1
block/bfq-iosched.c
··· 5425 5425 5426 5426 bfqq = bic_to_bfqq(bic, false); 5427 5427 if (bfqq) { 5428 - bfq_release_process_ref(bfqd, bfqq); 5428 + struct bfq_queue *old_bfqq = bfqq; 5429 + 5429 5430 bfqq = bfq_get_queue(bfqd, bio, false, bic, true); 5430 5431 bic_set_bfqq(bic, bfqq, false); 5432 + bfq_release_process_ref(bfqd, old_bfqq); 5431 5433 } 5432 5434 5433 5435 bfqq = bic_to_bfqq(bic, true);
+4
block/blk-cgroup.c
··· 2001 2001 struct blkg_iostat_set *bis; 2002 2002 unsigned long flags; 2003 2003 2004 + /* Root-level stats are sourced from system-wide IO stats */ 2005 + if (!cgroup_parent(blkcg->css.cgroup)) 2006 + return; 2007 + 2004 2008 cpu = get_cpu(); 2005 2009 bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu); 2006 2010 flags = u64_stats_update_begin_irqsave(&bis->sync);
+3 -2
block/blk-mq.c
··· 4069 4069 * blk_mq_destroy_queue - shutdown a request queue 4070 4070 * @q: request queue to shutdown 4071 4071 * 4072 - * This shuts down a request queue allocated by blk_mq_init_queue() and drops 4073 - * the initial reference. All future requests will failed with -ENODEV. 4072 + * This shuts down a request queue allocated by blk_mq_init_queue(). All future 4073 + * requests will be failed with -ENODEV. The caller is responsible for dropping 4074 + * the reference from blk_mq_init_queue() by calling blk_put_queue(). 4074 4075 * 4075 4076 * Context: can sleep 4076 4077 */
+2 -2
certs/Makefile
··· 23 23 targets += blacklist_hash_list 24 24 25 25 quiet_cmd_extract_certs = CERT $@ 26 - cmd_extract_certs = $(obj)/extract-cert $(extract-cert-in) $@ 27 - extract-cert-in = $(or $(filter-out $(obj)/extract-cert, $(real-prereqs)),"") 26 + cmd_extract_certs = $(obj)/extract-cert "$(extract-cert-in)" $@ 27 + extract-cert-in = $(filter-out $(obj)/extract-cert, $(real-prereqs)) 28 28 29 29 $(obj)/system_certificates.o: $(obj)/x509_certificate_list 30 30
+5 -1
drivers/acpi/sleep.c
··· 60 60 .priority = 0, 61 61 }; 62 62 63 + #ifndef acpi_skip_set_wakeup_address 64 + #define acpi_skip_set_wakeup_address() false 65 + #endif 66 + 63 67 static int acpi_sleep_prepare(u32 acpi_state) 64 68 { 65 69 #ifdef CONFIG_ACPI_SLEEP 66 70 unsigned long acpi_wakeup_address; 67 71 68 72 /* do we have a wakeup address for S2 and S3? */ 69 - if (acpi_state == ACPI_STATE_S3) { 73 + if (acpi_state == ACPI_STATE_S3 && !acpi_skip_set_wakeup_address()) { 70 74 acpi_wakeup_address = acpi_get_wakeup_address(); 71 75 if (!acpi_wakeup_address) 72 76 return -EFAULT;
+28 -21
drivers/acpi/video_detect.c
··· 110 110 } 111 111 #endif 112 112 113 - static bool apple_gmux_backlight_present(void) 114 - { 115 - struct acpi_device *adev; 116 - struct device *dev; 117 - 118 - adev = acpi_dev_get_first_match_dev(GMUX_ACPI_HID, NULL, -1); 119 - if (!adev) 120 - return false; 121 - 122 - dev = acpi_get_first_physical_node(adev); 123 - if (!dev) 124 - return false; 125 - 126 - /* 127 - * drivers/platform/x86/apple-gmux.c only supports old style 128 - * Apple GMUX with an IO-resource. 129 - */ 130 - return pnp_get_resource(to_pnp_dev(dev), IORESOURCE_IO, 0) != NULL; 131 - } 132 - 133 113 /* Force to use vendor driver when the ACPI device is known to be 134 114 * buggy */ 135 115 static int video_detect_force_vendor(const struct dmi_system_id *d) ··· 592 612 }, 593 613 { 594 614 .callback = video_detect_force_native, 615 + /* Asus U46E */ 616 + .matches = { 617 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."), 618 + DMI_MATCH(DMI_PRODUCT_NAME, "U46E"), 619 + }, 620 + }, 621 + { 622 + .callback = video_detect_force_native, 595 623 /* Asus UX303UB */ 596 624 .matches = { 597 625 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 598 626 DMI_MATCH(DMI_PRODUCT_NAME, "UX303UB"), 627 + }, 628 + }, 629 + { 630 + .callback = video_detect_force_native, 631 + /* HP EliteBook 8460p */ 632 + .matches = { 633 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 634 + DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 8460p"), 635 + }, 636 + }, 637 + { 638 + .callback = video_detect_force_native, 639 + /* HP Pavilion g6-1d80nr / B4U19UA */ 640 + .matches = { 641 + DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 642 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion g6 Notebook PC"), 643 + DMI_MATCH(DMI_PRODUCT_SKU, "B4U19UA"), 599 644 }, 600 645 }, 601 646 { ··· 771 766 { 772 767 static DEFINE_MUTEX(init_mutex); 773 768 static bool nvidia_wmi_ec_present; 769 + static bool apple_gmux_present; 774 770 static bool native_available; 775 771 static bool init_done; 776 772 static long video_caps; ··· 785 779 
ACPI_UINT32_MAX, find_video, NULL, 786 780 &video_caps, NULL); 787 781 nvidia_wmi_ec_present = nvidia_wmi_ec_supported(); 782 + apple_gmux_present = apple_gmux_detect(NULL, NULL); 788 783 init_done = true; 789 784 } 790 785 if (native) ··· 807 800 if (nvidia_wmi_ec_present) 808 801 return acpi_backlight_nvidia_wmi_ec; 809 802 810 - if (apple_gmux_backlight_present()) 803 + if (apple_gmux_present) 811 804 return acpi_backlight_apple_gmux; 812 805 813 806 /* Use ACPI video if available, except when native should be preferred. */
+1 -1
drivers/ata/libata-core.c
··· 3109 3109 */ 3110 3110 if (spd > 1) 3111 3111 mask &= (1 << (spd - 1)) - 1; 3112 - else 3112 + else if (link->sata_spd) 3113 3113 return -EINVAL; 3114 3114 3115 3115 /* were we already at the bottom? */
+4 -5
drivers/block/ublk_drv.c
··· 137 137 138 138 char *__queues; 139 139 140 - unsigned short queue_size; 140 + unsigned int queue_size; 141 141 struct ublksrv_ctrl_dev_info dev_info; 142 142 143 143 struct blk_mq_tag_set tag_set; ··· 2092 2092 struct ublk_device *ub; 2093 2093 int id; 2094 2094 2095 - class_destroy(ublk_chr_class); 2096 - 2097 - misc_deregister(&ublk_misc); 2098 - 2099 2095 idr_for_each_entry(&ublk_index_idr, ub, id) 2100 2096 ublk_remove(ub); 2097 + 2098 + class_destroy(ublk_chr_class); 2099 + misc_deregister(&ublk_misc); 2101 2100 2102 2101 idr_destroy(&ublk_index_idr); 2103 2102 unregister_chrdev_region(ublk_chr_devt, UBLK_MINORS);
+7 -1
drivers/bus/sunxi-rsb.c
··· 857 857 return ret; 858 858 } 859 859 860 - return platform_driver_register(&sunxi_rsb_driver); 860 + ret = platform_driver_register(&sunxi_rsb_driver); 861 + if (ret) { 862 + bus_unregister(&sunxi_rsb_bus); 863 + return ret; 864 + } 865 + 866 + return 0; 861 867 } 862 868 module_init(sunxi_rsb_init); 863 869
-1
drivers/cxl/acpi.c
··· 736 736 MODULE_LICENSE("GPL v2"); 737 737 MODULE_IMPORT_NS(CXL); 738 738 MODULE_IMPORT_NS(ACPI); 739 - MODULE_SOFTDEP("pre: cxl_pmem");
+5 -39
drivers/cxl/core/pmem.c
··· 227 227 return cxl_nvd; 228 228 } 229 229 230 - static void cxl_nvd_unregister(void *_cxl_nvd) 231 - { 232 - struct cxl_nvdimm *cxl_nvd = _cxl_nvd; 233 - struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; 234 - struct cxl_nvdimm_bridge *cxl_nvb = cxlmd->cxl_nvb; 235 - 236 - /* 237 - * Either the bridge is in ->remove() context under the device_lock(), 238 - * or cxlmd_release_nvdimm() is cancelling the bridge's release action 239 - * for @cxl_nvd and doing it itself (while manually holding the bridge 240 - * lock). 241 - */ 242 - device_lock_assert(&cxl_nvb->dev); 243 - cxl_nvd->cxlmd = NULL; 244 - cxlmd->cxl_nvd = NULL; 245 - device_unregister(&cxl_nvd->dev); 246 - } 247 - 248 230 static void cxlmd_release_nvdimm(void *_cxlmd) 249 231 { 250 232 struct cxl_memdev *cxlmd = _cxlmd; 233 + struct cxl_nvdimm *cxl_nvd = cxlmd->cxl_nvd; 251 234 struct cxl_nvdimm_bridge *cxl_nvb = cxlmd->cxl_nvb; 252 235 253 - device_lock(&cxl_nvb->dev); 254 - if (cxlmd->cxl_nvd) 255 - devm_release_action(&cxl_nvb->dev, cxl_nvd_unregister, 256 - cxlmd->cxl_nvd); 257 - device_unlock(&cxl_nvb->dev); 236 + cxl_nvd->cxlmd = NULL; 237 + cxlmd->cxl_nvd = NULL; 238 + cxlmd->cxl_nvb = NULL; 239 + device_unregister(&cxl_nvd->dev); 258 240 put_device(&cxl_nvb->dev); 259 241 } 260 242 ··· 274 292 goto err; 275 293 276 294 dev_dbg(&cxlmd->dev, "register %s\n", dev_name(dev)); 277 - 278 - /* 279 - * The two actions below arrange for @cxl_nvd to be deleted when either 280 - * the top-level PMEM bridge goes down, or the endpoint device goes 281 - * through ->remove(). 282 - */ 283 - device_lock(&cxl_nvb->dev); 284 - if (cxl_nvb->dev.driver) 285 - rc = devm_add_action_or_reset(&cxl_nvb->dev, cxl_nvd_unregister, 286 - cxl_nvd); 287 - else 288 - rc = -ENXIO; 289 - device_unlock(&cxl_nvb->dev); 290 - 291 - if (rc) 292 - goto err_alloc; 293 295 294 296 /* @cxlmd carries a reference on @cxl_nvb until cxlmd_release_nvdimm */ 295 297 return devm_add_action_or_reset(&cxlmd->dev, cxlmd_release_nvdimm, cxlmd);
+5 -2
drivers/cxl/pci.c
··· 554 554 555 555 /* If multiple errors, log header points to first error from ctrl reg */ 556 556 if (hweight32(status) > 1) { 557 - addr = cxlds->regs.ras + CXL_RAS_CAP_CONTROL_OFFSET; 558 - fe = BIT(FIELD_GET(CXL_RAS_CAP_CONTROL_FE_MASK, readl(addr))); 557 + void __iomem *rcc_addr = 558 + cxlds->regs.ras + CXL_RAS_CAP_CONTROL_OFFSET; 559 + 560 + fe = BIT(FIELD_GET(CXL_RAS_CAP_CONTROL_FE_MASK, 561 + readl(rcc_addr))); 559 562 } else { 560 563 fe = status; 561 564 }
+24
drivers/cxl/pmem.c
··· 225 225 return cxl_pmem_nvdimm_ctl(nvdimm, cmd, buf, buf_len); 226 226 } 227 227 228 + static int detach_nvdimm(struct device *dev, void *data) 229 + { 230 + struct cxl_nvdimm *cxl_nvd; 231 + bool release = false; 232 + 233 + if (!is_cxl_nvdimm(dev)) 234 + return 0; 235 + 236 + device_lock(dev); 237 + if (!dev->driver) 238 + goto out; 239 + 240 + cxl_nvd = to_cxl_nvdimm(dev); 241 + if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data) 242 + release = true; 243 + out: 244 + device_unlock(dev); 245 + if (release) 246 + device_release_driver(dev); 247 + return 0; 248 + } 249 + 228 250 static void unregister_nvdimm_bus(void *_cxl_nvb) 229 251 { 230 252 struct cxl_nvdimm_bridge *cxl_nvb = _cxl_nvb; 231 253 struct nvdimm_bus *nvdimm_bus = cxl_nvb->nvdimm_bus; 254 + 255 + bus_for_each_dev(&cxl_bus_type, NULL, cxl_nvb, detach_nvdimm); 232 256 233 257 cxl_nvb->nvdimm_bus = NULL; 234 258 nvdimm_bus_unregister(nvdimm_bus);
+1 -1
drivers/dma-buf/dma-fence.c
··· 167 167 0, 0); 168 168 169 169 set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, 170 - &dma_fence_stub.flags); 170 + &fence->flags); 171 171 172 172 dma_fence_signal(fence); 173 173
+7 -8
drivers/edac/edac_device.c
··· 34 34 static DEFINE_MUTEX(device_ctls_mutex); 35 35 static LIST_HEAD(edac_device_list); 36 36 37 + /* Default workqueue processing interval on this instance, in msecs */ 38 + #define DEFAULT_POLL_INTERVAL 1000 39 + 37 40 #ifdef CONFIG_EDAC_DEBUG 38 41 static void edac_device_dump_device(struct edac_device_ctl_info *edac_dev) 39 42 { ··· 339 336 * whole one second to save timers firing all over the period 340 337 * between integral seconds 341 338 */ 342 - if (edac_dev->poll_msec == 1000) 339 + if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL) 343 340 edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay)); 344 341 else 345 342 edac_queue_work(&edac_dev->work, edac_dev->delay); ··· 369 366 * timers firing on sub-second basis, while they are happy 370 367 * to fire together on the 1 second exactly 371 368 */ 372 - if (edac_dev->poll_msec == 1000) 369 + if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL) 373 370 edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay)); 374 371 else 375 372 edac_queue_work(&edac_dev->work, edac_dev->delay); ··· 403 400 edac_dev->delay = msecs_to_jiffies(msec); 404 401 405 402 /* See comment in edac_device_workq_setup() above */ 406 - if (edac_dev->poll_msec == 1000) 403 + if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL) 407 404 edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay)); 408 405 else 409 406 edac_mod_work(&edac_dev->work, edac_dev->delay); ··· 445 442 /* This instance is NOW RUNNING */ 446 443 edac_dev->op_state = OP_RUNNING_POLL; 447 444 448 - /* 449 - * enable workq processing on this instance, 450 - * default = 1000 msec 451 - */ 452 - edac_device_workq_setup(edac_dev, 1000); 445 + edac_device_workq_setup(edac_dev, edac_dev->poll_msec ?: DEFAULT_POLL_INTERVAL); 453 446 } else { 454 447 edac_dev->op_state = OP_RUNNING_INTERRUPT; 455 448 }
+2 -3
drivers/edac/qcom_edac.c
··· 252 252 static int 253 253 dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank) 254 254 { 255 - struct llcc_drv_data *drv = edev_ctl->pvt_info; 255 + struct llcc_drv_data *drv = edev_ctl->dev->platform_data; 256 256 int ret; 257 257 258 258 ret = dump_syn_reg_values(drv, bank, err_type); ··· 289 289 llcc_ecc_irq_handler(int irq, void *edev_ctl) 290 290 { 291 291 struct edac_device_ctl_info *edac_dev_ctl = edev_ctl; 292 - struct llcc_drv_data *drv = edac_dev_ctl->pvt_info; 292 + struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data; 293 293 irqreturn_t irq_rc = IRQ_NONE; 294 294 u32 drp_error, trp_error, i; 295 295 int ret; ··· 358 358 edev_ctl->dev_name = dev_name(dev); 359 359 edev_ctl->ctl_name = "llcc"; 360 360 edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE; 361 - edev_ctl->pvt_info = llcc_driv_data; 362 361 363 362 rc = edac_device_add_device(edev_ctl); 364 363 if (rc)
+3 -1
drivers/firewire/core-cdev.c
··· 819 819 820 820 r = container_of(resource, struct inbound_transaction_resource, 821 821 resource); 822 - if (is_fcp_request(r->request)) 822 + if (is_fcp_request(r->request)) { 823 + kfree(r->data); 823 824 goto out; 825 + } 824 826 825 827 if (a->length != fw_get_response_length(r->request)) { 826 828 ret = -EINVAL;
+2
drivers/firmware/efi/efi.c
··· 1007 1007 /* first try to find a slot in an existing linked list entry */ 1008 1008 for (prsv = efi_memreserve_root->next; prsv; ) { 1009 1009 rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB); 1010 + if (!rsv) 1011 + return -ENOMEM; 1010 1012 index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size); 1011 1013 if (index < rsv->size) { 1012 1014 rsv->entry[index].base = addr;
+1 -1
drivers/firmware/efi/memattr.c
··· 33 33 return -ENOMEM; 34 34 } 35 35 36 - if (tbl->version > 1) { 36 + if (tbl->version > 2) { 37 37 pr_warn("Unexpected EFI Memory Attributes table version %d\n", 38 38 tbl->version); 39 39 goto unmap;
+12 -5
drivers/fpga/intel-m10-bmc-sec-update.c
··· 574 574 len = scnprintf(buf, SEC_UPDATE_LEN_MAX, "secure-update%d", 575 575 sec->fw_name_id); 576 576 sec->fw_name = kmemdup_nul(buf, len, GFP_KERNEL); 577 - if (!sec->fw_name) 578 - return -ENOMEM; 577 + if (!sec->fw_name) { 578 + ret = -ENOMEM; 579 + goto fw_name_fail; 580 + } 579 581 580 582 fwl = firmware_upload_register(THIS_MODULE, sec->dev, sec->fw_name, 581 583 &m10bmc_ops, sec); 582 584 if (IS_ERR(fwl)) { 583 585 dev_err(sec->dev, "Firmware Upload driver failed to start\n"); 584 - kfree(sec->fw_name); 585 - xa_erase(&fw_upload_xa, sec->fw_name_id); 586 - return PTR_ERR(fwl); 586 + ret = PTR_ERR(fwl); 587 + goto fw_uploader_fail; 587 588 } 588 589 589 590 sec->fwl = fwl; 590 591 return 0; 592 + 593 + fw_uploader_fail: 594 + kfree(sec->fw_name); 595 + fw_name_fail: 596 + xa_erase(&fw_upload_xa, sec->fw_name_id); 597 + return ret; 591 598 } 592 599 593 600 static int m10bmc_sec_remove(struct platform_device *pdev)
+2 -2
drivers/fpga/stratix10-soc.c
··· 213 213 /* Allocate buffers from the service layer's pool. */ 214 214 for (i = 0; i < NUM_SVC_BUFS; i++) { 215 215 kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE); 216 - if (!kbuf) { 216 + if (IS_ERR(kbuf)) { 217 217 s10_free_buffers(mgr); 218 - ret = -ENOMEM; 218 + ret = PTR_ERR(kbuf); 219 219 goto init_done; 220 220 } 221 221
+22 -16
drivers/gpio/gpio-ep93xx.c
··· 17 17 #include <linux/slab.h> 18 18 #include <linux/gpio/driver.h> 19 19 #include <linux/bitops.h> 20 + #include <linux/seq_file.h> 20 21 21 22 #define EP93XX_GPIO_F_INT_STATUS 0x5c 22 23 #define EP93XX_GPIO_A_INT_STATUS 0xa0 ··· 41 40 #define EP93XX_GPIO_F_IRQ_BASE 80 42 41 43 42 struct ep93xx_gpio_irq_chip { 44 - struct irq_chip ic; 45 43 u8 irq_offset; 46 44 u8 int_unmasked; 47 45 u8 int_enabled; ··· 148 148 */ 149 149 struct irq_chip *irqchip = irq_desc_get_chip(desc); 150 150 unsigned int irq = irq_desc_get_irq(desc); 151 - int port_f_idx = ((irq + 1) & 7) ^ 4; /* {19..22,47..50} -> {0..7} */ 151 + int port_f_idx = (irq & 7) ^ 4; /* {20..23,48..51} -> {0..7} */ 152 152 int gpio_irq = EP93XX_GPIO_F_IRQ_BASE + port_f_idx; 153 153 154 154 chained_irq_enter(irqchip, desc); ··· 185 185 ep93xx_gpio_update_int_params(epg, eic); 186 186 187 187 writeb(port_mask, epg->base + eic->irq_offset + EP93XX_INT_EOI_OFFSET); 188 + gpiochip_disable_irq(gc, irqd_to_hwirq(d)); 188 189 } 189 190 190 191 static void ep93xx_gpio_irq_mask(struct irq_data *d) ··· 196 195 197 196 eic->int_unmasked &= ~BIT(d->irq & 7); 198 197 ep93xx_gpio_update_int_params(epg, eic); 198 + gpiochip_disable_irq(gc, irqd_to_hwirq(d)); 199 199 } 200 200 201 201 static void ep93xx_gpio_irq_unmask(struct irq_data *d) ··· 205 203 struct ep93xx_gpio_irq_chip *eic = to_ep93xx_gpio_irq_chip(gc); 206 204 struct ep93xx_gpio *epg = gpiochip_get_data(gc); 207 205 206 + gpiochip_enable_irq(gc, irqd_to_hwirq(d)); 208 207 eic->int_unmasked |= BIT(d->irq & 7); 209 208 ep93xx_gpio_update_int_params(epg, eic); 210 209 } ··· 323 320 return 0; 324 321 } 325 322 326 - static void ep93xx_init_irq_chip(struct device *dev, struct irq_chip *ic) 323 + static void ep93xx_irq_print_chip(struct irq_data *data, struct seq_file *p) 327 324 { 328 - ic->irq_ack = ep93xx_gpio_irq_ack; 329 - ic->irq_mask_ack = ep93xx_gpio_irq_mask_ack; 330 - ic->irq_mask = ep93xx_gpio_irq_mask; 331 - ic->irq_unmask = ep93xx_gpio_irq_unmask; 332 - 
ic->irq_set_type = ep93xx_gpio_irq_type; 325 + struct gpio_chip *gc = irq_data_get_irq_chip_data(data); 326 + 327 + seq_printf(p, dev_name(gc->parent)); 333 328 } 329 + 330 + static const struct irq_chip gpio_eic_irq_chip = { 331 + .name = "ep93xx-gpio-eic", 332 + .irq_ack = ep93xx_gpio_irq_ack, 333 + .irq_mask = ep93xx_gpio_irq_mask, 334 + .irq_unmask = ep93xx_gpio_irq_unmask, 335 + .irq_mask_ack = ep93xx_gpio_irq_mask_ack, 336 + .irq_set_type = ep93xx_gpio_irq_type, 337 + .irq_print_chip = ep93xx_irq_print_chip, 338 + .flags = IRQCHIP_IMMUTABLE, 339 + GPIOCHIP_IRQ_RESOURCE_HELPERS, 340 + }; 334 341 335 342 static int ep93xx_gpio_add_bank(struct ep93xx_gpio_chip *egc, 336 343 struct platform_device *pdev, ··· 363 350 364 351 girq = &gc->irq; 365 352 if (bank->has_irq || bank->has_hierarchical_irq) { 366 - struct irq_chip *ic; 367 - 368 353 gc->set_config = ep93xx_gpio_set_config; 369 354 egc->eic = devm_kcalloc(dev, 1, 370 355 sizeof(*egc->eic), ··· 370 359 if (!egc->eic) 371 360 return -ENOMEM; 372 361 egc->eic->irq_offset = bank->irq; 373 - ic = &egc->eic->ic; 374 - ic->name = devm_kasprintf(dev, GFP_KERNEL, "gpio-irq-%s", bank->label); 375 - if (!ic->name) 376 - return -ENOMEM; 377 - ep93xx_init_irq_chip(dev, ic); 378 - girq->chip = ic; 362 + gpio_irq_chip_set_chip(girq, &gpio_eic_irq_chip); 379 363 } 380 364 381 365 if (bank->has_irq) {
+2 -1
drivers/gpio/gpio-mxc.c
··· 249 249 } else { 250 250 pr_err("mxc: invalid configuration for GPIO %d: %x\n", 251 251 gpio, edge); 252 - return; 252 + goto unlock; 253 253 } 254 254 writel(val | (edge << (bit << 1)), reg); 255 255 256 + unlock: 256 257 raw_spin_unlock_irqrestore(&port->gc.bgpio_lock, flags); 257 258 } 258 259
+2 -1
drivers/gpio/gpiolib-acpi.c
··· 1104 1104 dev_dbg(&adev->dev, "IRQ %d already in use\n", irq); 1105 1105 } 1106 1106 1107 - if (wake_capable) 1107 + /* avoid suspend issues with GPIOs when systems are using S3 */ 1108 + if (wake_capable && acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) 1108 1109 *wake_capable = info.wake_capable; 1109 1110 1110 1111 return irq;
+2 -2
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 790 790 * zero here */ 791 791 WARN_ON(simd != 0); 792 792 793 - /* type 2 wave data */ 794 - dst[(*no_fields)++] = 2; 793 + /* type 3 wave data */ 794 + dst[(*no_fields)++] = 3; 795 795 dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS); 796 796 dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO); 797 797 dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI);
+1
drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
··· 35 35 MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin"); 36 36 MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin"); 37 37 MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin"); 38 + MODULE_FIRMWARE("amdgpu/gc_11_0_4_imu.bin"); 38 39 39 40 static int imu_v11_0_init_microcode(struct amdgpu_device *adev) 40 41 {
+2 -1
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 40 40 MODULE_FIRMWARE("amdgpu/gc_11_0_2_mes1.bin"); 41 41 MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes.bin"); 42 42 MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin"); 43 + MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes.bin"); 44 + MODULE_FIRMWARE("amdgpu/gc_11_0_4_mes1.bin"); 43 45 44 46 static int mes_v11_0_hw_fini(void *handle); 45 47 static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev); ··· 198 196 mes_add_queue_pkt.trap_handler_addr = input->tba_addr; 199 197 mes_add_queue_pkt.tma_addr = input->tma_addr; 200 198 mes_add_queue_pkt.is_kfd_process = input->is_kfd_process; 201 - mes_add_queue_pkt.trap_en = 1; 202 199 203 200 /* For KFD, gds_size is re-used for queue size (needed in MES for AQL queues) */ 204 201 mes_add_queue_pkt.is_aql_queue = input->is_aql_queue;
+7 -1
drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c
··· 337 337 338 338 static void nbio_v4_3_init_registers(struct amdgpu_device *adev) 339 339 { 340 - return; 340 + if (adev->ip_versions[NBIO_HWIP][0] == IP_VERSION(4, 3, 0)) { 341 + uint32_t data; 342 + 343 + data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2); 344 + data &= ~RCC_DEV0_EPF2_STRAP2__STRAP_NO_SOFT_RESET_DEV0_F2_MASK; 345 + WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2, data); 346 + } 341 347 } 342 348 343 349 static u32 nbio_v4_3_get_rom_offset(struct amdgpu_device *adev)
+2 -1
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 640 640 AMD_CG_SUPPORT_GFX_CGCG | 641 641 AMD_CG_SUPPORT_GFX_CGLS | 642 642 AMD_CG_SUPPORT_REPEATER_FGCG | 643 - AMD_CG_SUPPORT_GFX_MGCG; 643 + AMD_CG_SUPPORT_GFX_MGCG | 644 + AMD_CG_SUPPORT_HDP_SD; 644 645 adev->pg_flags = AMD_PG_SUPPORT_VCN | 645 646 AMD_PG_SUPPORT_VCN_DPG | 646 647 AMD_PG_SUPPORT_JPEG;
+42
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4501 4501 static int dm_early_init(void *handle) 4502 4502 { 4503 4503 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 4504 + struct amdgpu_mode_info *mode_info = &adev->mode_info; 4505 + struct atom_context *ctx = mode_info->atom_context; 4506 + int index = GetIndexIntoMasterTable(DATA, Object_Header); 4507 + u16 data_offset; 4508 + 4509 + /* if there is no object header, skip DM */ 4510 + if (!amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) { 4511 + adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; 4512 + dev_info(adev->dev, "No object header, skipping DM\n"); 4513 + return -ENOENT; 4514 + } 4504 4515 4505 4516 switch (adev->asic_type) { 4506 4517 #if defined(CONFIG_DRM_AMD_DC_SI) ··· 8892 8881 if (!dm_old_crtc_state->stream) 8893 8882 goto skip_modeset; 8894 8883 8884 + /* Unset freesync video if it was active before */ 8885 + if (dm_old_crtc_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED) { 8886 + dm_new_crtc_state->freesync_config.state = VRR_STATE_INACTIVE; 8887 + dm_new_crtc_state->freesync_config.fixed_refresh_in_uhz = 0; 8888 + } 8889 + 8890 + /* Now check if we should set freesync video mode */ 8895 8891 if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream && 8896 8892 is_timing_unchanged_for_freesync(new_crtc_state, 8897 8893 old_crtc_state)) { ··· 9515 9497 bool lock_and_validation_needed = false; 9516 9498 struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; 9517 9499 #if defined(CONFIG_DRM_AMD_DC_DCN) 9500 + struct drm_dp_mst_topology_mgr *mgr; 9501 + struct drm_dp_mst_topology_state *mst_state; 9518 9502 struct dsc_mst_fairness_vars vars[MAX_PIPES]; 9519 9503 #endif 9520 9504 ··· 9764 9744 9765 9745 lock_and_validation_needed = true; 9766 9746 } 9747 + 9748 + #if defined(CONFIG_DRM_AMD_DC_DCN) 9749 + /* set the slot info for each mst_state based on the link encoding format */ 9750 + for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { 9751 + struct amdgpu_dm_connector *aconnector; 9752 + struct drm_connector *connector; 9753 + struct drm_connector_list_iter iter; 9754 + u8 link_coding_cap; 9755 + 9756 + drm_connector_list_iter_begin(dev, &iter); 9757 + drm_for_each_connector_iter(connector, &iter) { 9758 + if (connector->index == mst_state->mgr->conn_base_id) { 9759 + aconnector = to_amdgpu_dm_connector(connector); 9760 + link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link); 9761 + drm_dp_mst_update_slots(mst_state, link_coding_cap); 9762 + 9763 + break; 9764 + } 9765 + } 9766 + drm_connector_list_iter_end(&iter); 9767 + } 9768 + #endif 9767 9769 9768 9770 /** 9769 9771 * Streams and planes are reset when there are changes that affect
+39 -12
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 120 120 } 121 121 122 122 static void 123 - fill_dc_mst_payload_table_from_drm(struct drm_dp_mst_topology_state *mst_state, 124 - struct amdgpu_dm_connector *aconnector, 123 + fill_dc_mst_payload_table_from_drm(struct dc_link *link, 124 + bool enable, 125 + struct drm_dp_mst_atomic_payload *target_payload, 125 126 struct dc_dp_mst_stream_allocation_table *table) 126 127 { 127 128 struct dc_dp_mst_stream_allocation_table new_table = { 0 }; 128 129 struct dc_dp_mst_stream_allocation *sa; 129 - struct drm_dp_mst_atomic_payload *payload; 130 + struct link_mst_stream_allocation_table copy_of_link_table = 131 + link->mst_stream_alloc_table; 132 + 133 + int i; 134 + int current_hw_table_stream_cnt = copy_of_link_table.stream_count; 135 + struct link_mst_stream_allocation *dc_alloc; 136 + 137 + /* TODO: refactor to set link->mst_stream_alloc_table directly if possible.*/ 138 + if (enable) { 139 + dc_alloc = 140 + &copy_of_link_table.stream_allocations[current_hw_table_stream_cnt]; 141 + dc_alloc->vcp_id = target_payload->vcpi; 142 + dc_alloc->slot_count = target_payload->time_slots; 143 + } else { 144 + for (i = 0; i < copy_of_link_table.stream_count; i++) { 145 + dc_alloc = 146 + &copy_of_link_table.stream_allocations[i]; 147 + 148 + if (dc_alloc->vcp_id == target_payload->vcpi) { 149 + dc_alloc->vcp_id = 0; 150 + dc_alloc->slot_count = 0; 151 + break; 152 + } 153 + } 154 + ASSERT(i != copy_of_link_table.stream_count); 155 + } 130 156 131 157 /* Fill payload info*/ 132 - list_for_each_entry(payload, &mst_state->payloads, next) { 133 - if (payload->delete) 134 - continue; 135 - 136 - sa = &new_table.stream_allocations[new_table.stream_count]; 137 - sa->slot_count = payload->time_slots; 138 - sa->vcp_id = payload->vcpi; 139 - new_table.stream_count++; 158 + for (i = 0; i < MAX_CONTROLLER_NUM; i++) { 159 + dc_alloc = 160 + &copy_of_link_table.stream_allocations[i]; 161 + if (dc_alloc->vcp_id > 0 && dc_alloc->slot_count > 0) { 162 + sa = &new_table.stream_allocations[new_table.stream_count]; 163 + sa->slot_count = dc_alloc->slot_count; 164 + sa->vcp_id = dc_alloc->vcp_id; 165 + new_table.stream_count++; 166 + } 140 167 } 141 168 142 169 /* Overwrite the old table */ ··· 212 185 * AUX message. The sequence is slot 1-63 allocated sequence for each 213 186 * stream. AMD ASIC stream slot allocation should follow the same 214 187 * sequence. copy DRM MST allocation to dc */ 215 - fill_dc_mst_payload_table_from_drm(mst_state, aconnector, proposed_table); 188 + fill_dc_mst_payload_table_from_drm(stream->link, enable, payload, proposed_table); 216 189 217 190 return true; 218 191 }
-5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 903 903 if (IS_ERR(mst_state)) 904 904 return PTR_ERR(mst_state); 905 905 906 - mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link); 907 - #if defined(CONFIG_DRM_AMD_DC_DCN) 908 - drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link)); 909 - #endif 910 - 911 906 /* Set up params */ 912 907 for (i = 0; i < dc_state->stream_count; i++) { 913 908 struct dc_dsc_policy dsc_policy = {0};
+12 -2
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 3995 3995 struct fixed31_32 avg_time_slots_per_mtp = dc_fixpt_from_int(0); 3996 3996 int i; 3997 3997 bool mst_mode = (link->type == dc_connection_mst_branch); 3998 + /* adjust for drm changes*/ 3999 + bool update_drm_mst_state = true; 3998 4000 const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res); 3999 4001 const struct dc_link_settings empty_link_settings = {0}; 4000 4002 DC_LOGGER_INIT(link->ctx->logger); 4003 + 4001 4004 4002 4005 /* deallocate_mst_payload is called before disable link. When mode or 4003 4006 * disable/enable monitor, new stream is created which is not in link ··· 4017 4014 &empty_link_settings, 4018 4015 avg_time_slots_per_mtp); 4019 4016 4020 - if (mst_mode) { 4017 + if (mst_mode || update_drm_mst_state) { 4021 4018 /* when link is in mst mode, reply on mst manager to remove 4022 4019 * payload 4023 4020 */ ··· 4080 4077 stream->ctx, 4081 4078 stream); 4082 4079 4080 + if (!update_drm_mst_state) 4081 + dm_helpers_dp_mst_send_payload_allocation( 4082 + stream->ctx, 4083 + stream, 4084 + false); 4085 + } 4086 + 4087 + if (update_drm_mst_state) 4083 4088 dm_helpers_dp_mst_send_payload_allocation( 4084 4089 stream->ctx, 4085 4090 stream, 4086 4091 false); 4087 - } 4088 4092 4089 4093 return DC_OK; 4090 4094 }
+3 -2
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
··· 874 874 }, 875 875 876 876 // 6:1 downscaling ratio: 1000/6 = 166.666 877 + // 4:1 downscaling ratio for ARGB888 to prevent underflow during P010 playback: 1000/4 = 250 877 878 .max_downscale_factor = { 878 - .argb8888 = 167, 879 + .argb8888 = 250, 879 880 .nv12 = 167, 880 881 .fp16 = 167 881 882 }, ··· 1764 1763 pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE; 1765 1764 pool->base.pipe_count = pool->base.res_cap->num_timing_generator; 1766 1765 pool->base.mpcc_count = pool->base.res_cap->num_timing_generator; 1767 - dc->caps.max_downscale_ratio = 600; 1766 + dc->caps.max_downscale_ratio = 400; 1768 1767 dc->caps.i2c_speed_in_khz = 100; 1769 1768 dc->caps.i2c_speed_in_khz_hdcp = 100; 1770 1769 dc->caps.max_cursor_size = 256;
+1 -1
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
··· 94 94 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 95 95 .calc_vupdate_position = dcn10_calc_vupdate_position, 96 96 .apply_idle_power_optimizations = dcn32_apply_idle_power_optimizations, 97 - .does_plane_fit_in_mall = dcn30_does_plane_fit_in_mall, 97 + .does_plane_fit_in_mall = NULL, 98 98 .set_backlight_level = dcn21_set_backlight_level, 99 99 .set_abm_immediate_disable = dcn21_set_abm_immediate_disable, 100 100 .hardware_release = dcn30_hardware_release,
+1 -1
drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
··· 3183 3183 } else { 3184 3184 v->MIN_DST_Y_NEXT_START[k] = v->VTotal[k] - v->VFrontPorch[k] + v->VTotal[k] - v->VActive[k] - v->VStartup[k]; 3185 3185 } 3186 - v->MIN_DST_Y_NEXT_START[k] += dml_floor(4.0 * v->TSetup[k] / (double)v->HTotal[k] / v->PixelClock[k], 1.0) / 4.0; 3186 + v->MIN_DST_Y_NEXT_START[k] += dml_floor(4.0 * v->TSetup[k] / ((double)v->HTotal[k] / v->PixelClock[k]), 1.0) / 4.0; 3187 3187 if (((v->VUpdateOffsetPix[k] + v->VUpdateWidthPix[k] + v->VReadyOffsetPix[k]) / v->HTotal[k]) 3188 3188 <= (isInterlaceTiming ? 3189 3189 dml_floor((v->VTotal[k] - v->VActive[k] - v->VFrontPorch[k] - v->VStartup[k]) / 2.0, 1.0) :
+12
drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
··· 532 532 if (dmub->hw_funcs.reset) 533 533 dmub->hw_funcs.reset(dmub); 534 534 535 + /* reset the cache of the last wptr as well now that hw is reset */ 536 + dmub->inbox1_last_wptr = 0; 537 + 535 538 cw0.offset.quad_part = inst_fb->gpu_addr; 536 539 cw0.region.base = DMUB_CW0_BASE; 537 540 cw0.region.top = cw0.region.base + inst_fb->size - 1; ··· 651 648 652 649 if (dmub->hw_funcs.reset) 653 650 dmub->hw_funcs.reset(dmub); 651 + 652 + /* mailboxes have been reset in hw, so reset the sw state as well */ 653 + dmub->inbox1_last_wptr = 0; 654 + dmub->inbox1_rb.wrpt = 0; 655 + dmub->inbox1_rb.rptr = 0; 656 + dmub->outbox0_rb.wrpt = 0; 657 + dmub->outbox0_rb.rptr = 0; 658 + dmub->outbox1_rb.wrpt = 0; 659 + dmub->outbox1_rb.rptr = 0; 654 660 655 661 dmub->hw_init = false; 656 662
+4 -2
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 2007 2007 gc_ver == IP_VERSION(10, 3, 0) || 2008 2008 gc_ver == IP_VERSION(10, 1, 2) || 2009 2009 gc_ver == IP_VERSION(11, 0, 0) || 2010 - gc_ver == IP_VERSION(11, 0, 2))) 2010 + gc_ver == IP_VERSION(11, 0, 2) || 2011 + gc_ver == IP_VERSION(11, 0, 3))) 2011 2012 *states = ATTR_STATE_UNSUPPORTED; 2012 2013 } else if (DEVICE_ATTR_IS(pp_dpm_dclk)) { 2013 2014 if (!(gc_ver == IP_VERSION(10, 3, 1) || 2014 2015 gc_ver == IP_VERSION(10, 3, 0) || 2015 2016 gc_ver == IP_VERSION(10, 1, 2) || 2016 2017 gc_ver == IP_VERSION(11, 0, 0) || 2017 - gc_ver == IP_VERSION(11, 0, 2))) 2018 + gc_ver == IP_VERSION(11, 0, 2) || 2019 + gc_ver == IP_VERSION(11, 0, 3))) 2018 2020 *states = ATTR_STATE_UNSUPPORTED; 2019 2021 } else if (DEVICE_ATTR_IS(pp_power_profile_mode)) { 2020 2022 if (amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
+14
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1500 1500 } 1501 1501 1502 1502 /* 1503 + * For SMU 13.0.4/11, PMFW will handle the features disablement properly 1504 + * for gpu reset case. Driver involvement is unnecessary. 1505 + */ 1506 + if (amdgpu_in_reset(adev)) { 1507 + switch (adev->ip_versions[MP1_HWIP][0]) { 1508 + case IP_VERSION(13, 0, 4): 1509 + case IP_VERSION(13, 0, 11): 1510 + return 0; 1511 + default: 1512 + break; 1513 + } 1514 + } 1515 + 1516 + /* 1503 1517 * For gpu reset, runpm and hibernation through BACO, 1504 1518 * BACO feature has to be kept enabled. 1505 1519 */
+1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 145 145 MSG_MAP(SetBadMemoryPagesRetiredFlagsPerChannel, 146 146 PPSMC_MSG_SetBadMemoryPagesRetiredFlagsPerChannel, 0), 147 147 MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0), 148 + MSG_MAP(AllowIHHostInterrupt, PPSMC_MSG_AllowIHHostInterrupt, 0), 148 149 }; 149 150 150 151 static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = {
+1
drivers/gpu/drm/bridge/synopsys/dw-hdmi-i2s-audio.c
··· 193 193 struct hdmi_codec_pdata pdata; 194 194 struct platform_device *platform; 195 195 196 + memset(&pdata, 0, sizeof(pdata)); 196 197 pdata.ops = &dw_hdmi_i2s_ops; 197 198 pdata.i2s = 1; 198 199 pdata.max_i2s_channels = 8;
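The dw-hdmi-i2s fix above zeroes `pdata` before filling in selected members, so any field the function does not set explicitly reads as zero instead of stack garbage. A minimal standalone sketch of that pattern — the `codec_pdata` struct and field names here are illustrative, not the real `hdmi_codec_pdata` layout:

```c
#include <string.h>

/* Illustrative stand-in for a platform-data struct: only some
 * members are set explicitly, so the rest must be well-defined. */
struct codec_pdata {
    int i2s;
    int max_i2s_channels;
    int spdif;   /* never set below: must not hold stack garbage */
};

/* Fill a caller-provided struct; zero it first so unset members
 * are deterministically zero rather than whatever was on the stack. */
void codec_pdata_fill(struct codec_pdata *pdata)
{
    memset(pdata, 0, sizeof(*pdata));
    pdata->i2s = 1;
    pdata->max_i2s_channels = 8;
}
```

Without the `memset()`, a consumer testing `pdata.spdif` would see an uninitialized value whenever the caller's stack happened to be dirty.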
+3 -1
drivers/gpu/drm/display/drm_dp_mst_topology.c
··· 3372 3372 3373 3373 mgr->payload_count--; 3374 3374 mgr->next_start_slot -= payload->time_slots; 3375 + 3376 + if (payload->delete) 3377 + drm_dp_mst_put_port_malloc(payload->port); 3375 3378 } 3376 3379 EXPORT_SYMBOL(drm_dp_remove_payload); 3377 3380 ··· 4330 4327 4331 4328 drm_dbg_atomic(mgr->dev, "[MST PORT:%p] TU %d -> 0\n", port, payload->time_slots); 4332 4329 if (!payload->delete) { 4333 - drm_dp_mst_put_port_malloc(port); 4334 4330 payload->pbn = 0; 4335 4331 payload->delete = true; 4336 4332 topology_state->payload_mask &= ~BIT(payload->vcpi - 1);
+8 -7
drivers/gpu/drm/drm_fbdev_generic.c
··· 171 171 .fb_imageblit = drm_fbdev_fb_imageblit, 172 172 }; 173 173 174 - static struct fb_deferred_io drm_fbdev_defio = { 175 - .delay = HZ / 20, 176 - .deferred_io = drm_fb_helper_deferred_io, 177 - }; 178 - 179 174 /* 180 175 * This function uses the client API to create a framebuffer backed by a dumb buffer. 181 176 */ ··· 217 222 return -ENOMEM; 218 223 fbi->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST; 219 224 220 - fbi->fbdefio = &drm_fbdev_defio; 221 - fb_deferred_io_init(fbi); 225 + /* Set a default deferred I/O handler */ 226 + fb_helper->fbdefio.delay = HZ / 20; 227 + fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io; 228 + 229 + fbi->fbdefio = &fb_helper->fbdefio; 230 + ret = fb_deferred_io_init(fbi); 231 + if (ret) 232 + return ret; 222 233 } else { 223 234 /* buffer is mapped for HW framebuffer */ 224 235 ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+54 -22
drivers/gpu/drm/drm_vma_manager.c
··· 240 240 } 241 241 EXPORT_SYMBOL(drm_vma_offset_remove); 242 242 243 - /** 244 - * drm_vma_node_allow - Add open-file to list of allowed users 245 - * @node: Node to modify 246 - * @tag: Tag of file to remove 247 - * 248 - * Add @tag to the list of allowed open-files for this node. If @tag is 249 - * already on this list, the ref-count is incremented. 250 - * 251 - * The list of allowed-users is preserved across drm_vma_offset_add() and 252 - * drm_vma_offset_remove() calls. You may even call it if the node is currently 253 - * not added to any offset-manager. 254 - * 255 - * You must remove all open-files the same number of times as you added them 256 - * before destroying the node. Otherwise, you will leak memory. 257 - * 258 - * This is locked against concurrent access internally. 259 - * 260 - * RETURNS: 261 - * 0 on success, negative error code on internal failure (out-of-mem) 262 - */ 263 - int drm_vma_node_allow(struct drm_vma_offset_node *node, struct drm_file *tag) 243 + static int vma_node_allow(struct drm_vma_offset_node *node, 244 + struct drm_file *tag, bool ref_counted) 264 245 { 265 246 struct rb_node **iter; 266 247 struct rb_node *parent = NULL; ··· 263 282 entry = rb_entry(*iter, struct drm_vma_offset_file, vm_rb); 264 283 265 284 if (tag == entry->vm_tag) { 266 - entry->vm_count++; 285 + if (ref_counted) 286 + entry->vm_count++; 267 287 goto unlock; 268 288 } else if (tag > entry->vm_tag) { 269 289 iter = &(*iter)->rb_right; ··· 289 307 kfree(new); 290 308 return ret; 291 309 } 310 + 311 + /** 312 + * drm_vma_node_allow - Add open-file to list of allowed users 313 + * @node: Node to modify 314 + * @tag: Tag of file to remove 315 + * 316 + * Add @tag to the list of allowed open-files for this node. If @tag is 317 + * already on this list, the ref-count is incremented. 318 + * 319 + * The list of allowed-users is preserved across drm_vma_offset_add() and 320 + * drm_vma_offset_remove() calls. 
You may even call it if the node is currently 321 + * not added to any offset-manager. 322 + * 323 + * You must remove all open-files the same number of times as you added them 324 + * before destroying the node. Otherwise, you will leak memory. 325 + * 326 + * This is locked against concurrent access internally. 327 + * 328 + * RETURNS: 329 + * 0 on success, negative error code on internal failure (out-of-mem) 330 + */ 331 + int drm_vma_node_allow(struct drm_vma_offset_node *node, struct drm_file *tag) 332 + { 333 + return vma_node_allow(node, tag, true); 334 + } 292 335 EXPORT_SYMBOL(drm_vma_node_allow); 336 + 337 + /** 338 + * drm_vma_node_allow_once - Add open-file to list of allowed users 339 + * @node: Node to modify 340 + * @tag: Tag of file to remove 341 + * 342 + * Add @tag to the list of allowed open-files for this node. 343 + * 344 + * The list of allowed-users is preserved across drm_vma_offset_add() and 345 + * drm_vma_offset_remove() calls. You may even call it if the node is currently 346 + * not added to any offset-manager. 347 + * 348 + * This is not ref-counted unlike drm_vma_node_allow() hence drm_vma_node_revoke() 349 + * should only be called once after this. 350 + * 351 + * This is locked against concurrent access internally. 352 + * 353 + * RETURNS: 354 + * 0 on success, negative error code on internal failure (out-of-mem) 355 + */ 356 + int drm_vma_node_allow_once(struct drm_vma_offset_node *node, struct drm_file *tag) 357 + { 358 + return vma_node_allow(node, tag, false); 359 + } 360 + EXPORT_SYMBOL(drm_vma_node_allow_once); 293 361 294 362 /** 295 363 * drm_vma_node_revoke - Remove open-file from list of allowed users
+1 -1
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 1319 1319 { .refclk = 24000, .cdclk = 192000, .divider = 2, .ratio = 16 }, 1320 1320 { .refclk = 24000, .cdclk = 312000, .divider = 2, .ratio = 26 }, 1321 1321 { .refclk = 24000, .cdclk = 552000, .divider = 2, .ratio = 46 }, 1322 - { .refclk = 24400, .cdclk = 648000, .divider = 2, .ratio = 54 }, 1322 + { .refclk = 24000, .cdclk = 648000, .divider = 2, .ratio = 54 }, 1323 1323 1324 1324 { .refclk = 38400, .cdclk = 179200, .divider = 3, .ratio = 14 }, 1325 1325 { .refclk = 38400, .cdclk = 192000, .divider = 2, .ratio = 10 },
+12 -4
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 1861 1861 vm = ctx->vm; 1862 1862 GEM_BUG_ON(!vm); 1863 1863 1864 - err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL); 1865 - if (err) 1866 - return err; 1867 - 1864 + /* 1865 + * Get a reference for the allocated handle. Once the handle is 1866 + * visible in the vm_xa table, userspace could try to close it 1867 + * from under our feet, so we need to hold the extra reference 1868 + * first. 1869 + */ 1868 1870 i915_vm_get(vm); 1871 + 1872 + err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL); 1873 + if (err) { 1874 + i915_vm_put(vm); 1875 + return err; 1876 + } 1869 1877 1870 1878 GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */ 1871 1879 args->value = id;
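The comment in the i915 hunk above spells out the rule: take the reference *before* `xa_alloc()` makes the handle visible, because userspace could close it the moment it is published, and drop that reference again if publishing fails. A toy refcount sketch of the same ordering — the names and the `fail` flag are illustrative, not the i915 or XArray API:

```c
#include <stddef.h>

struct obj { int refs; };

void obj_get(struct obj *o) { o->refs++; }
void obj_put(struct obj *o) { o->refs--; }

/* Publish o into *slot; the table owns one reference. Take that
 * reference before the object becomes visible, and unwind it if
 * publishing fails, so the count is correct in both outcomes. */
int publish(struct obj *o, struct obj **slot, int fail)
{
    obj_get(o);          /* reference for the table, taken up front */
    if (fail) {          /* stand-in for xa_alloc() returning an error */
        obj_put(o);      /* unwind: nothing was published */
        return -1;
    }
    *slot = o;           /* from here on, others can find (and drop) o */
    return 0;
}
```

Taking the reference after publication would leave a window in which a concurrent close drops the last reference while this path still assumes the object is alive.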
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_mman.c
··· 697 697 GEM_BUG_ON(lookup_mmo(obj, mmap_type) != mmo); 698 698 out: 699 699 if (file) 700 - drm_vma_node_allow(&mmo->vma_node, file); 700 + drm_vma_node_allow_once(&mmo->vma_node, file); 701 701 return mmo; 702 702 703 703 err:
+5 -4
drivers/gpu/drm/i915/gem/i915_gem_tiling.c
··· 305 305 spin_unlock(&obj->vma.lock); 306 306 307 307 obj->tiling_and_stride = tiling | stride; 308 - i915_gem_object_unlock(obj); 309 - 310 - /* Force the fence to be reacquired for GTT access */ 311 - i915_gem_object_release_mmap_gtt(obj); 312 308 313 309 /* Try to preallocate memory required to save swizzling on put-pages */ 314 310 if (i915_gem_object_needs_bit17_swizzle(obj)) { ··· 316 320 bitmap_free(obj->bit_17); 317 321 obj->bit_17 = NULL; 318 322 } 323 + 324 + i915_gem_object_unlock(obj); 325 + 326 + /* Force the fence to be reacquired for GTT access */ 327 + i915_gem_object_release_mmap_gtt(obj); 319 328 320 329 return 0; 321 330 }
+3 -1
drivers/gpu/drm/i915/gt/intel_context.c
··· 528 528 return rq; 529 529 } 530 530 531 - struct i915_request *intel_context_find_active_request(struct intel_context *ce) 531 + struct i915_request *intel_context_get_active_request(struct intel_context *ce) 532 532 { 533 533 struct intel_context *parent = intel_context_to_parent(ce); 534 534 struct i915_request *rq, *active = NULL; ··· 552 552 553 553 active = rq; 554 554 } 555 + if (active) 556 + active = i915_request_get_rcu(active); 555 557 spin_unlock_irqrestore(&parent->guc_state.lock, flags); 556 558 557 559 return active;
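The rename from `find` to `get` above reflects a semantic change: the request's reference is now taken while `guc_state.lock` is still held, so the caller can never be handed a pointer that is freed the instant the lock drops. A simplified sketch of that rule, with the locking reduced to comments so it stays testable — the `req` type and helpers are illustrative:

```c
#include <stddef.h>

struct req { int refs; int active; struct req *next; };

struct req *req_get(struct req *r) { r->refs++; return r; }

/* Scan a list for an active request and return it with an extra
 * reference already taken. In the kernel the get happens before the
 * protecting spinlock is released, closing the window between
 * "found it" and "caller uses it". */
struct req *find_active_get(struct req *head)
{
    struct req *r, *active = NULL;

    /* spin_lock_irqsave(&lock, flags) would go here */
    for (r = head; r; r = r->next)
        if (r->active)
            active = r;
    if (active)
        active = req_get(active);   /* reference taken under the lock */
    /* spin_unlock_irqrestore(&lock, flags) */

    return active;   /* caller owns a reference and must put it */
}
```

This is why the callers in the i915_gpu_error.c and GuC hunks below gained matching `i915_request_put()` calls.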
+1 -2
drivers/gpu/drm/i915/gt/intel_context.h
··· 268 268 269 269 struct i915_request *intel_context_create_request(struct intel_context *ce); 270 270 271 - struct i915_request * 272 - intel_context_find_active_request(struct intel_context *ce); 271 + struct i915_request *intel_context_get_active_request(struct intel_context *ce); 273 272 274 273 static inline bool intel_context_is_barrier(const struct intel_context *ce) 275 274 {
+2 -2
drivers/gpu/drm/i915/gt/intel_engine.h
··· 248 248 ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, 249 249 ktime_t *now); 250 250 251 - struct i915_request * 252 - intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine); 251 + void intel_engine_get_hung_entity(struct intel_engine_cs *engine, 252 + struct intel_context **ce, struct i915_request **rq); 253 253 254 254 u32 intel_engine_context_size(struct intel_gt *gt, u8 class); 255 255 struct intel_context *
+39 -24
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 2185 2185 } 2186 2186 } 2187 2187 2188 - static void engine_dump_active_requests(struct intel_engine_cs *engine, struct drm_printer *m) 2188 + static void engine_dump_active_requests(struct intel_engine_cs *engine, 2189 + struct drm_printer *m) 2189 2190 { 2191 + struct intel_context *hung_ce = NULL; 2190 2192 struct i915_request *hung_rq = NULL; 2191 - struct intel_context *ce; 2192 - bool guc; 2193 2193 2194 2194 /* 2195 2195 * No need for an engine->irq_seqno_barrier() before the seqno reads. ··· 2198 2198 * But the intention here is just to report an instantaneous snapshot 2199 2199 * so that's fine. 2200 2200 */ 2201 - lockdep_assert_held(&engine->sched_engine->lock); 2201 + intel_engine_get_hung_entity(engine, &hung_ce, &hung_rq); 2202 2202 2203 2203 drm_printf(m, "\tRequests:\n"); 2204 2204 2205 - guc = intel_uc_uses_guc_submission(&engine->gt->uc); 2206 - if (guc) { 2207 - ce = intel_engine_get_hung_context(engine); 2208 - if (ce) 2209 - hung_rq = intel_context_find_active_request(ce); 2210 - } else { 2211 - hung_rq = intel_engine_execlist_find_hung_request(engine); 2212 - } 2213 - 2214 2205 if (hung_rq) 2215 2206 engine_dump_request(hung_rq, m, "\t\thung"); 2207 + else if (hung_ce) 2208 + drm_printf(m, "\t\tGot hung ce but no hung rq!\n"); 2216 2209 2217 - if (guc) 2210 + if (intel_uc_uses_guc_submission(&engine->gt->uc)) 2218 2211 intel_guc_dump_active_requests(engine, hung_rq, m); 2219 2212 else 2220 - intel_engine_dump_active_requests(&engine->sched_engine->requests, 2221 - hung_rq, m); 2213 + intel_execlists_dump_active_requests(engine, hung_rq, m); 2214 + 2215 + if (hung_rq) 2216 + i915_request_put(hung_rq); 2222 2217 } 2223 2218 2224 2219 void intel_engine_dump(struct intel_engine_cs *engine, ··· 2223 2228 struct i915_gpu_error * const error = &engine->i915->gpu_error; 2224 2229 struct i915_request *rq; 2225 2230 intel_wakeref_t wakeref; 2226 - unsigned long flags; 2227 2231 ktime_t dummy; 2228 2232 2229 2233 if (header) { ··· 2259 2265 
i915_reset_count(error)); 2260 2266 print_properties(engine, m); 2261 2267 2262 - spin_lock_irqsave(&engine->sched_engine->lock, flags); 2263 2268 engine_dump_active_requests(engine, m); 2264 - 2265 - drm_printf(m, "\tOn hold?: %zu\n", 2266 - list_count_nodes(&engine->sched_engine->hold)); 2267 - spin_unlock_irqrestore(&engine->sched_engine->lock, flags); 2268 2269 2269 2270 drm_printf(m, "\tMMIO base: 0x%08x\n", engine->mmio_base); 2270 2271 wakeref = intel_runtime_pm_get_if_in_use(engine->uncore->rpm); ··· 2306 2317 return siblings[0]->cops->create_virtual(siblings, count, flags); 2307 2318 } 2308 2319 2309 - struct i915_request * 2310 - intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine) 2320 + static struct i915_request *engine_execlist_find_hung_request(struct intel_engine_cs *engine) 2311 2321 { 2312 2322 struct i915_request *request, *active = NULL; 2313 2323 ··· 2356 2368 } 2357 2369 2358 2370 return active; 2371 + } 2372 + 2373 + void intel_engine_get_hung_entity(struct intel_engine_cs *engine, 2374 + struct intel_context **ce, struct i915_request **rq) 2375 + { 2376 + unsigned long flags; 2377 + 2378 + *ce = intel_engine_get_hung_context(engine); 2379 + if (*ce) { 2380 + intel_engine_clear_hung_context(engine); 2381 + 2382 + *rq = intel_context_get_active_request(*ce); 2383 + return; 2384 + } 2385 + 2386 + /* 2387 + * Getting here with GuC enabled means it is a forced error capture 2388 + * with no actual hang. So, no need to attempt the execlist search. 2389 + */ 2390 + if (intel_uc_uses_guc_submission(&engine->gt->uc)) 2391 + return; 2392 + 2393 + spin_lock_irqsave(&engine->sched_engine->lock, flags); 2394 + *rq = engine_execlist_find_hung_request(engine); 2395 + if (*rq) 2396 + *rq = i915_request_get_rcu(*rq); 2397 + spin_unlock_irqrestore(&engine->sched_engine->lock, flags); 2359 2398 } 2360 2399 2361 2400 void xehp_enable_ccs_engines(struct intel_engine_cs *engine)
+16
drivers/gpu/drm/i915/gt/intel_execlists_submission.c
··· 4148 4148 spin_unlock_irqrestore(&sched_engine->lock, flags); 4149 4149 } 4150 4150 4151 + void intel_execlists_dump_active_requests(struct intel_engine_cs *engine, 4152 + struct i915_request *hung_rq, 4153 + struct drm_printer *m) 4154 + { 4155 + unsigned long flags; 4156 + 4157 + spin_lock_irqsave(&engine->sched_engine->lock, flags); 4158 + 4159 + intel_engine_dump_active_requests(&engine->sched_engine->requests, hung_rq, m); 4160 + 4161 + drm_printf(m, "\tOn hold?: %lu\n", 4162 + list_count_nodes(&engine->sched_engine->hold)); 4163 + 4164 + spin_unlock_irqrestore(&engine->sched_engine->lock, flags); 4165 + } 4166 + 4151 4167 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) 4152 4168 #include "selftest_execlists.c" 4153 4169 #endif
+4
drivers/gpu/drm/i915/gt/intel_execlists_submission.h
··· 32 32 int indent), 33 33 unsigned int max); 34 34 35 + void intel_execlists_dump_active_requests(struct intel_engine_cs *engine, 36 + struct i915_request *hung_rq, 37 + struct drm_printer *m); 38 + 35 39 bool 36 40 intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine); 37 41
+1 -36
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 288 288 END 289 289 }; 290 290 291 - static const u8 mtl_xcs_offsets[] = { 292 - NOP(1), 293 - LRI(13, POSTED), 294 - REG16(0x244), 295 - REG(0x034), 296 - REG(0x030), 297 - REG(0x038), 298 - REG(0x03c), 299 - REG(0x168), 300 - REG(0x140), 301 - REG(0x110), 302 - REG(0x1c0), 303 - REG(0x1c4), 304 - REG(0x1c8), 305 - REG(0x180), 306 - REG16(0x2b4), 307 - NOP(4), 308 - 309 - NOP(1), 310 - LRI(9, POSTED), 311 - REG16(0x3a8), 312 - REG16(0x28c), 313 - REG16(0x288), 314 - REG16(0x284), 315 - REG16(0x280), 316 - REG16(0x27c), 317 - REG16(0x278), 318 - REG16(0x274), 319 - REG16(0x270), 320 - 321 - END 322 - }; 323 - 324 291 static const u8 gen8_rcs_offsets[] = { 325 292 NOP(1), 326 293 LRI(14, POSTED), ··· 706 739 else 707 740 return gen8_rcs_offsets; 708 741 } else { 709 - if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 70)) 710 - return mtl_xcs_offsets; 711 - else if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 742 + if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) 712 743 return dg2_xcs_offsets; 713 744 else if (GRAPHICS_VER(engine->i915) >= 12) 714 745 return gen12_xcs_offsets;
+13 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1702 1702 goto next_context; 1703 1703 1704 1704 guilty = false; 1705 - rq = intel_context_find_active_request(ce); 1705 + rq = intel_context_get_active_request(ce); 1706 1706 if (!rq) { 1707 1707 head = ce->ring->tail; 1708 1708 goto out_replay; ··· 1715 1715 head = intel_ring_wrap(ce->ring, rq->head); 1716 1716 1717 1717 __i915_request_reset(rq, guilty); 1718 + i915_request_put(rq); 1718 1719 out_replay: 1719 1720 guc_reset_state(ce, head, guilty); 1720 1721 next_context: ··· 4818 4817 4819 4818 xa_lock_irqsave(&guc->context_lookup, flags); 4820 4819 xa_for_each(&guc->context_lookup, index, ce) { 4820 + bool found; 4821 + 4821 4822 if (!kref_get_unless_zero(&ce->ref)) 4822 4823 continue; 4823 4824 ··· 4836 4833 goto next; 4837 4834 } 4838 4835 4836 + found = false; 4837 + spin_lock(&ce->guc_state.lock); 4839 4838 list_for_each_entry(rq, &ce->guc_state.requests, sched.link) { 4840 4839 if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE) 4841 4840 continue; 4842 4841 4842 + found = true; 4843 + break; 4844 + } 4845 + spin_unlock(&ce->guc_state.lock); 4846 + 4847 + if (found) { 4843 4848 intel_engine_set_hung_context(engine, ce); 4844 4849 4845 4850 /* Can only cope with one hang at a time... */ ··· 4855 4844 xa_lock(&guc->context_lookup); 4856 4845 goto done; 4857 4846 } 4847 + 4858 4848 next: 4859 4849 intel_context_put(ce); 4860 4850 xa_lock(&guc->context_lookup);
+6 -27
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1596 1596 { 1597 1597 struct intel_engine_capture_vma *capture = NULL; 1598 1598 struct intel_engine_coredump *ee; 1599 - struct intel_context *ce; 1599 + struct intel_context *ce = NULL; 1600 1600 struct i915_request *rq = NULL; 1601 - unsigned long flags; 1602 1601 1603 1602 ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL, dump_flags); 1604 1603 if (!ee) 1605 1604 return NULL; 1606 1605 1607 - ce = intel_engine_get_hung_context(engine); 1608 - if (ce) { 1609 - intel_engine_clear_hung_context(engine); 1610 - rq = intel_context_find_active_request(ce); 1611 - if (!rq || !i915_request_started(rq)) 1612 - goto no_request_capture; 1613 - } else { 1614 - /* 1615 - * Getting here with GuC enabled means it is a forced error capture 1616 - * with no actual hang. So, no need to attempt the execlist search. 1617 - */ 1618 - if (!intel_uc_uses_guc_submission(&engine->gt->uc)) { 1619 - spin_lock_irqsave(&engine->sched_engine->lock, flags); 1620 - rq = intel_engine_execlist_find_hung_request(engine); 1621 - spin_unlock_irqrestore(&engine->sched_engine->lock, 1622 - flags); 1623 - } 1624 - } 1625 - if (rq) 1626 - rq = i915_request_get_rcu(rq); 1627 - 1628 - if (!rq) 1606 + intel_engine_get_hung_entity(engine, &ce, &rq); 1607 + if (!rq || !i915_request_started(rq)) 1629 1608 goto no_request_capture; 1630 1609 1631 1610 capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL); 1632 - if (!capture) { 1633 - i915_request_put(rq); 1611 + if (!capture) 1634 1612 goto no_request_capture; 1635 - } 1636 1613 if (dump_flags & CORE_DUMP_FLAG_IS_GUC_CAPTURE) 1637 1614 intel_guc_capture_get_matching_node(engine->gt, ee, ce); 1638 1615 ··· 1619 1642 return ee; 1620 1643 1621 1644 no_request_capture: 1645 + if (rq) 1646 + i915_request_put(rq); 1622 1647 kfree(ee); 1623 1648 return NULL; 1624 1649 }
+1 -2
drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
··· 28 28 29 29 int intel_selftest_modify_policy(struct intel_engine_cs *engine, 30 30 struct intel_selftest_saved_policy *saved, 31 - u32 modify_type) 32 - 31 + enum selftest_scheduler_modify modify_type) 33 32 { 34 33 int err; 35 34
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
··· 97 97 int gp102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 98 98 int gp10b_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 99 99 int gv100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 100 + int tu102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 100 101 int ga100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 101 102 int ga102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 102 103
+3
drivers/gpu/drm/nouveau/nvkm/core/firmware.c
··· 151 151 static enum nvkm_memory_target 152 152 nvkm_firmware_mem_target(struct nvkm_memory *memory) 153 153 { 154 + if (nvkm_firmware_mem(memory)->device->func->tegra) 155 + return NVKM_MEM_TARGET_NCOH; 156 + 154 157 return NVKM_MEM_TARGET_HOST; 155 158 } 156 159
+5 -5
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 2405 2405 .bus = { 0x00000001, gf100_bus_new }, 2406 2406 .devinit = { 0x00000001, tu102_devinit_new }, 2407 2407 .fault = { 0x00000001, tu102_fault_new }, 2408 - .fb = { 0x00000001, gv100_fb_new }, 2408 + .fb = { 0x00000001, tu102_fb_new }, 2409 2409 .fuse = { 0x00000001, gm107_fuse_new }, 2410 2410 .gpio = { 0x00000001, gk104_gpio_new }, 2411 2411 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2440 2440 .bus = { 0x00000001, gf100_bus_new }, 2441 2441 .devinit = { 0x00000001, tu102_devinit_new }, 2442 2442 .fault = { 0x00000001, tu102_fault_new }, 2443 - .fb = { 0x00000001, gv100_fb_new }, 2443 + .fb = { 0x00000001, tu102_fb_new }, 2444 2444 .fuse = { 0x00000001, gm107_fuse_new }, 2445 2445 .gpio = { 0x00000001, gk104_gpio_new }, 2446 2446 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2475 2475 .bus = { 0x00000001, gf100_bus_new }, 2476 2476 .devinit = { 0x00000001, tu102_devinit_new }, 2477 2477 .fault = { 0x00000001, tu102_fault_new }, 2478 - .fb = { 0x00000001, gv100_fb_new }, 2478 + .fb = { 0x00000001, tu102_fb_new }, 2479 2479 .fuse = { 0x00000001, gm107_fuse_new }, 2480 2480 .gpio = { 0x00000001, gk104_gpio_new }, 2481 2481 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2510 2510 .bus = { 0x00000001, gf100_bus_new }, 2511 2511 .devinit = { 0x00000001, tu102_devinit_new }, 2512 2512 .fault = { 0x00000001, tu102_fault_new }, 2513 - .fb = { 0x00000001, gv100_fb_new }, 2513 + .fb = { 0x00000001, tu102_fb_new }, 2514 2514 .fuse = { 0x00000001, gm107_fuse_new }, 2515 2515 .gpio = { 0x00000001, gk104_gpio_new }, 2516 2516 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2545 2545 .bus = { 0x00000001, gf100_bus_new }, 2546 2546 .devinit = { 0x00000001, tu102_devinit_new }, 2547 2547 .fault = { 0x00000001, tu102_fault_new }, 2548 - .fb = { 0x00000001, gv100_fb_new }, 2548 + .fb = { 0x00000001, tu102_fb_new }, 2549 2549 .fuse = { 0x00000001, gm107_fuse_new }, 2550 2550 .gpio = { 0x00000001, gk104_gpio_new }, 2551 2551 .gsp = { 0x00000001, gv100_gsp_new },
+13 -1
drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c
··· 48 48 img += 4; 49 49 len -= 4; 50 50 } 51 + 52 + /* Sigh. Tegra PMU FW's init message... */ 53 + if (len) { 54 + u32 data = nvkm_falcon_rd32(falcon, 0x1c4 + (port * 8)); 55 + 56 + while (len--) { 57 + *(u8 *)img++ = data & 0xff; 58 + data >>= 8; 59 + } 60 + } 51 61 } 52 62 53 63 static void ··· 74 64 img += 4; 75 65 len -= 4; 76 66 } 67 + 68 + WARN_ON(len); 77 69 } 78 70 79 71 static void ··· 86 74 87 75 const struct nvkm_falcon_func_pio 88 76 gm200_flcn_dmem_pio = { 89 - .min = 4, 77 + .min = 1, 90 78 .max = 0x100, 91 79 .wr_init = gm200_flcn_pio_dmem_wr_init, 92 80 .wr = gm200_flcn_pio_dmem_wr,
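The gm200 DMEM-read fix above (paired with lowering `.min` to 1) handles a transfer whose length is not a multiple of four: whole 32-bit words are copied first, then one extra word is read and its remaining 1–3 bytes are peeled off least-significant byte first. A standalone, endian-independent sketch of that unpacking:

```c
#include <stdint.h>

/* Copy len bytes out of a stream of 32-bit words, as the falcon PIO
 * path does: whole words first, then the 1-3 byte tail peeled from
 * one final word, least-significant byte first. */
void copy_from_words(uint8_t *img, const uint32_t *words, int len)
{
    while (len >= 4) {
        uint32_t data = *words++;

        img[0] = data & 0xff;
        img[1] = (data >> 8) & 0xff;
        img[2] = (data >> 16) & 0xff;
        img[3] = data >> 24;
        img += 4;
        len -= 4;
    }
    if (len) {
        uint32_t data = *words;   /* one last word holds the tail */

        while (len--) {
            *img++ = data & 0xff;
            data >>= 8;
        }
    }
}
```

Without the tail handling, a 5-byte read would either stop a byte short or over-read past the caller's buffer by copying a full word.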
+23
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c
··· 65 65 return ret; 66 66 } 67 67 68 + static int 69 + tu102_devinit_wait(struct nvkm_device *device) 70 + { 71 + unsigned timeout = 50 + 2000; 72 + 73 + do { 74 + if (nvkm_rd32(device, 0x118128) & 0x00000001) { 75 + if ((nvkm_rd32(device, 0x118234) & 0x000000ff) == 0xff) 76 + return 0; 77 + } 78 + 79 + usleep_range(1000, 2000); 80 + } while (timeout--); 81 + 82 + return -ETIMEDOUT; 83 + } 84 + 68 85 int 69 86 tu102_devinit_post(struct nvkm_devinit *base, bool post) 70 87 { 71 88 struct nv50_devinit *init = nv50_devinit(base); 89 + int ret; 90 + 91 + ret = tu102_devinit_wait(init->base.subdev.device); 92 + if (ret) 93 + return ret; 94 + 72 95 gm200_devinit_preos(init, post); 73 96 return 0; 74 97 }
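`tu102_devinit_wait()` above is an instance of the bounded-poll pattern: re-read a condition, sleep between attempts, and give up with a timeout error after a fixed number of tries. A generic sketch with a callback standing in for the register reads and the sleep elided so it runs instantly — the helper names are illustrative:

```c
/* Poll cond(ctx) until it returns nonzero or 'tries' extra attempts
 * are exhausted. In the kernel the loop body would usleep_range()
 * between reads; returns 0 on success, -1 (-ETIMEDOUT in the kernel)
 * on timeout. */
int poll_timeout(int (*cond)(void *), void *ctx, int tries)
{
    do {
        if (cond(ctx))
            return 0;
        /* usleep_range(1000, 2000) would go here */
    } while (tries--);
    return -1;
}

/* Example condition: becomes true after a few polls, the way a
 * status register eventually reports firmware completion. */
int ready_after(void *ctx)
{
    int *countdown = ctx;

    return --(*countdown) <= 0;
}
```

Note the `do { } while (tries--)` shape mirrors the hunk above: the condition is always checked at least once, and a starting budget of N yields N+1 reads.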
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild
··· 32 32 nvkm-y += nvkm/subdev/fb/gp102.o 33 33 nvkm-y += nvkm/subdev/fb/gp10b.o 34 34 nvkm-y += nvkm/subdev/fb/gv100.o 35 + nvkm-y += nvkm/subdev/fb/tu102.o 35 36 nvkm-y += nvkm/subdev/fb/ga100.o 36 37 nvkm-y += nvkm/subdev/fb/ga102.o 37 38
+1 -7
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c
··· 40 40 return ret; 41 41 } 42 42 43 - static bool 44 - ga102_fb_vpr_scrub_required(struct nvkm_fb *fb) 45 - { 46 - return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0; 47 - } 48 - 49 43 static const struct nvkm_fb_func 50 44 ga102_fb = { 51 45 .dtor = gf100_fb_dtor, ··· 50 56 .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 51 57 .ram_new = ga102_ram_new, 52 58 .default_bigpage = 16, 53 - .vpr.scrub_required = ga102_fb_vpr_scrub_required, 59 + .vpr.scrub_required = tu102_fb_vpr_scrub_required, 54 60 .vpr.scrub = ga102_fb_vpr_scrub, 55 61 }; 56 62
-5
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
··· 49 49 } 50 50 51 51 MODULE_FIRMWARE("nvidia/gv100/nvdec/scrubber.bin"); 52 - MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin"); 53 - MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin"); 54 - MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin"); 55 - MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin"); 56 - MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin");
+2
drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
··· 89 89 int gp102_fb_vpr_scrub(struct nvkm_fb *); 90 90 91 91 int gv100_fb_init_page(struct nvkm_fb *); 92 + 93 + bool tu102_fb_vpr_scrub_required(struct nvkm_fb *); 92 94 #endif
+55
drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c
··· 1 + /* 2 + * Copyright 2018 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "gf100.h" 23 + #include "ram.h" 24 + 25 + bool 26 + tu102_fb_vpr_scrub_required(struct nvkm_fb *fb) 27 + { 28 + return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0; 29 + } 30 + 31 + static const struct nvkm_fb_func 32 + tu102_fb = { 33 + .dtor = gf100_fb_dtor, 34 + .oneinit = gf100_fb_oneinit, 35 + .init = gm200_fb_init, 36 + .init_page = gv100_fb_init_page, 37 + .init_unkn = gp100_fb_init_unkn, 38 + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 39 + .vpr.scrub_required = tu102_fb_vpr_scrub_required, 40 + .vpr.scrub = gp102_fb_vpr_scrub, 41 + .ram_new = gp100_ram_new, 42 + .default_bigpage = 16, 43 + }; 44 + 45 + int 46 + tu102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb) 47 + { 48 + return gp102_fb_new_(&tu102_fb, device, type, inst, pfb); 49 + } 50 + 51 + MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin"); 52 + MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin"); 53 + MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin"); 54 + MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin"); 55 + MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin");
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
··· 225 225 226 226 pmu->initmsg_received = false; 227 227 228 - nvkm_falcon_load_dmem(falcon, &args, addr_args, sizeof(args), 0); 228 + nvkm_falcon_pio_wr(falcon, (u8 *)&args, 0, 0, DMEM, addr_args, sizeof(args), 0, false); 229 229 nvkm_falcon_start(falcon); 230 230 return 0; 231 231 }
+12 -4
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
··· 1193 1193 return 0; 1194 1194 } 1195 1195 1196 - static int boe_panel_unprepare(struct drm_panel *panel) 1196 + static int boe_panel_disable(struct drm_panel *panel) 1197 1197 { 1198 1198 struct boe_panel *boe = to_boe_panel(panel); 1199 1199 int ret; 1200 - 1201 - if (!boe->prepared) 1202 - return 0; 1203 1200 1204 1201 ret = boe_panel_enter_sleep_mode(boe); 1205 1202 if (ret < 0) { ··· 1205 1208 } 1206 1209 1207 1210 msleep(150); 1211 + 1212 + return 0; 1213 + } 1214 + 1215 + static int boe_panel_unprepare(struct drm_panel *panel) 1216 + { 1217 + struct boe_panel *boe = to_boe_panel(panel); 1218 + 1219 + if (!boe->prepared) 1220 + return 0; 1208 1221 1209 1222 if (boe->desc->discharge_on_disable) { 1210 1223 regulator_disable(boe->avee); ··· 1535 1528 } 1536 1529 1537 1530 static const struct drm_panel_funcs boe_panel_funcs = { 1531 + .disable = boe_panel_disable, 1538 1532 .unprepare = boe_panel_unprepare, 1539 1533 .prepare = boe_panel_prepare, 1540 1534 .enable = boe_panel_enable,
+7 -11
drivers/gpu/drm/solomon/ssd130x.c
··· 656 656 .atomic_check = drm_crtc_helper_atomic_check, 657 657 }; 658 658 659 - static void ssd130x_crtc_reset(struct drm_crtc *crtc) 660 - { 661 - struct drm_device *drm = crtc->dev; 662 - struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 663 - 664 - ssd130x_init(ssd130x); 665 - 666 - drm_atomic_helper_crtc_reset(crtc); 667 - } 668 - 669 659 static const struct drm_crtc_funcs ssd130x_crtc_funcs = { 670 - .reset = ssd130x_crtc_reset, 660 + .reset = drm_atomic_helper_crtc_reset, 671 661 .destroy = drm_crtc_cleanup, 672 662 .set_config = drm_atomic_helper_set_config, 673 663 .page_flip = drm_atomic_helper_page_flip, ··· 675 685 ret = ssd130x_power_on(ssd130x); 676 686 if (ret) 677 687 return; 688 + 689 + ret = ssd130x_init(ssd130x); 690 + if (ret) { 691 + ssd130x_power_off(ssd130x); 692 + return; 693 + } 678 694 679 695 ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON); 680 696
+2 -1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 3018 3018 } 3019 3019 3020 3020 vc4_hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops, 3021 - vc4_hdmi, "vc4", 3021 + vc4_hdmi, 3022 + vc4_hdmi->variant->card_name, 3022 3023 CEC_CAP_DEFAULTS | 3023 3024 CEC_CAP_CONNECTOR_INFO, 1); 3024 3025 ret = PTR_ERR_OR_ZERO(vc4_hdmi->cec_adap);
drivers/gpu/drm/vmwgfx/vmwgfx_msg_arm64.h
+1 -1
drivers/hv/hv_balloon.c
··· 1963 1963 1964 1964 static void hv_balloon_debugfs_exit(struct hv_dynmem_device *b) 1965 1965 { 1966 - debugfs_remove(debugfs_lookup("hv-balloon", NULL)); 1966 + debugfs_lookup_and_remove("hv-balloon", NULL); 1967 1967 } 1968 1968 1969 1969 #else
+1 -1
drivers/i2c/busses/i2c-axxia.c
··· 118 118 #define SDA_HOLD_TIME 0x90 119 119 120 120 /** 121 - * axxia_i2c_dev - I2C device context 121 + * struct axxia_i2c_dev - I2C device context 122 122 * @base: pointer to register struct 123 123 * @msg: pointer to current message 124 124 * @msg_r: pointer to current read message (sequence transfer)
+6 -3
drivers/i2c/busses/i2c-designware-common.c
··· 351 351 * 352 352 * If your hardware is free from tHD;STA issue, try this one. 353 353 */ 354 - return DIV_ROUND_CLOSEST(ic_clk * tSYMBOL, MICRO) - 8 + offset; 354 + return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * tSYMBOL, MICRO) - 355 + 8 + offset; 355 356 else 356 357 /* 357 358 * Conditional expression: ··· 368 367 * The reason why we need to take into account "tf" here, 369 368 * is the same as described in i2c_dw_scl_lcnt(). 370 369 */ 371 - return DIV_ROUND_CLOSEST(ic_clk * (tSYMBOL + tf), MICRO) - 3 + offset; 370 + return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * (tSYMBOL + tf), MICRO) - 371 + 3 + offset; 372 372 } 373 373 374 374 u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset) ··· 385 383 * account the fall time of SCL signal (tf). Default tf value 386 384 * should be 0.3 us, for safety. 387 385 */ 388 - return DIV_ROUND_CLOSEST(ic_clk * (tLOW + tf), MICRO) - 1 + offset; 386 + return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * (tLOW + tf), MICRO) - 387 + 1 + offset; 389 388 } 390 389 391 390 int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
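The designware change above widens to `u64` *before* multiplying, because `ic_clk * tSYMBOL` can exceed 2^32 and silently wrap if computed in 32 bits (unsigned overflow wraps by definition, so the bug is a wrong answer, not a crash). A minimal demonstration of the difference, with a round-to-nearest divide standing in for `DIV_ROUND_CLOSEST_ULL` (which the kernel macro matches for unsigned operands):

```c
#include <stdint.h>

/* Round-to-nearest unsigned division. */
uint64_t div_round_closest_u64(uint64_t x, uint64_t d)
{
    return (x + d / 2) / d;
}

/* Buggy variant: the product wraps in 32 bits before the divide. */
uint32_t scl_cnt_32(uint32_t ic_clk, uint32_t t_ns)
{
    uint32_t wrapped = ic_clk * t_ns;   /* 32-bit multiply, may wrap */

    return (uint32_t)div_round_closest_u64(wrapped, 1000000);
}

/* Fixed variant: widen one operand first, as the patch does. */
uint32_t scl_cnt_64(uint32_t ic_clk, uint32_t t_ns)
{
    return (uint32_t)div_round_closest_u64((uint64_t)ic_clk * t_ns, 1000000);
}
```

With an illustrative 200 MHz clock (200000 kHz) and a 40000 ns interval, the true product is 8×10^9, well past 2^32, so the 32-bit variant returns a nonsense count while the widened one is correct.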
+2
drivers/i2c/busses/i2c-designware-pcidrv.c
··· 396 396 { PCI_VDEVICE(ATI, 0x73a4), navi_amd }, 397 397 { PCI_VDEVICE(ATI, 0x73e4), navi_amd }, 398 398 { PCI_VDEVICE(ATI, 0x73c4), navi_amd }, 399 + { PCI_VDEVICE(ATI, 0x7444), navi_amd }, 400 + { PCI_VDEVICE(ATI, 0x7464), navi_amd }, 399 401 { 0,} 400 402 }; 401 403 MODULE_DEVICE_TABLE(pci, i2_designware_pci_ids);
+2 -18
drivers/i2c/busses/i2c-designware-platdrv.c
··· 351 351 352 352 if (dev->flags & ACCESS_NO_IRQ_SUSPEND) { 353 353 dev_pm_set_driver_flags(&pdev->dev, 354 - DPM_FLAG_SMART_PREPARE | 355 - DPM_FLAG_MAY_SKIP_RESUME); 354 + DPM_FLAG_SMART_PREPARE); 356 355 } else { 357 356 dev_pm_set_driver_flags(&pdev->dev, 358 357 DPM_FLAG_SMART_PREPARE | 359 - DPM_FLAG_SMART_SUSPEND | 360 - DPM_FLAG_MAY_SKIP_RESUME); 358 + DPM_FLAG_SMART_SUSPEND); 361 359 } 362 360 363 361 device_enable_async_suspend(&pdev->dev); ··· 417 419 */ 418 420 return !has_acpi_companion(dev); 419 421 } 420 - 421 - static void dw_i2c_plat_complete(struct device *dev) 422 - { 423 - /* 424 - * The device can only be in runtime suspend at this point if it has not 425 - * been resumed throughout the ending system suspend/resume cycle, so if 426 - * the platform firmware might mess up with it, request the runtime PM 427 - * framework to resume it. 428 - */ 429 - if (pm_runtime_suspended(dev) && pm_resume_via_firmware()) 430 - pm_request_resume(dev); 431 - } 432 422 #else 433 423 #define dw_i2c_plat_prepare NULL 434 - #define dw_i2c_plat_complete NULL 435 424 #endif 436 425 437 426 #ifdef CONFIG_PM ··· 468 483 469 484 static const struct dev_pm_ops dw_i2c_dev_pm_ops = { 470 485 .prepare = dw_i2c_plat_prepare, 471 - .complete = dw_i2c_plat_complete, 472 486 SET_LATE_SYSTEM_SLEEP_PM_OPS(dw_i2c_plat_suspend, dw_i2c_plat_resume) 473 487 SET_RUNTIME_PM_OPS(dw_i2c_plat_runtime_suspend, dw_i2c_plat_runtime_resume, NULL) 474 488 };
+2 -2
drivers/i2c/busses/i2c-mxs.c
··· 826 826 /* Setup the DMA */ 827 827 i2c->dmach = dma_request_chan(dev, "rx-tx"); 828 828 if (IS_ERR(i2c->dmach)) { 829 - dev_err(dev, "Failed to request dma\n"); 830 - return PTR_ERR(i2c->dmach); 829 + return dev_err_probe(dev, PTR_ERR(i2c->dmach), 830 + "Failed to request dma\n"); 831 831 } 832 832 833 833 platform_set_drvdata(pdev, i2c);
+22 -22
drivers/i2c/busses/i2c-rk3x.c
··· 80 80 #define DEFAULT_SCL_RATE (100 * 1000) /* Hz */ 81 81 82 82 /** 83 - * struct i2c_spec_values: 83 + * struct i2c_spec_values - I2C specification values for various modes 84 84 * @min_hold_start_ns: min hold time (repeated) START condition 85 85 * @min_low_ns: min LOW period of the SCL clock 86 86 * @min_high_ns: min HIGH period of the SCL clock ··· 136 136 }; 137 137 138 138 /** 139 - * struct rk3x_i2c_calced_timings: 139 + * struct rk3x_i2c_calced_timings - calculated V1 timings 140 140 * @div_low: Divider output for low 141 141 * @div_high: Divider output for high 142 142 * @tuning: Used to adjust setup/hold data time, ··· 159 159 }; 160 160 161 161 /** 162 - * struct rk3x_i2c_soc_data: 162 + * struct rk3x_i2c_soc_data - SOC-specific data 163 163 * @grf_offset: offset inside the grf regmap for setting the i2c type 164 164 * @calc_timings: Callback function for i2c timing information calculated 165 165 */ ··· 239 239 } 240 240 241 241 /** 242 - * Generate a START condition, which triggers a REG_INT_START interrupt. 242 + * rk3x_i2c_start - Generate a START condition, which triggers a REG_INT_START interrupt. 243 + * @i2c: target controller data 243 244 */ 244 245 static void rk3x_i2c_start(struct rk3x_i2c *i2c) 245 246 { ··· 259 258 } 260 259 261 260 /** 262 - * Generate a STOP condition, which triggers a REG_INT_STOP interrupt. 263 - * 261 + * rk3x_i2c_stop - Generate a STOP condition, which triggers a REG_INT_STOP interrupt. 
262 + * @i2c: target controller data 264 263 * @error: Error code to return in rk3x_i2c_xfer 265 264 */ 266 265 static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error) ··· 299 298 } 300 299 301 300 /** 302 - * Setup a read according to i2c->msg 301 + * rk3x_i2c_prepare_read - Setup a read according to i2c->msg 302 + * @i2c: target controller data 303 303 */ 304 304 static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c) 305 305 { ··· 331 329 } 332 330 333 331 /** 334 - * Fill the transmit buffer with data from i2c->msg 332 + * rk3x_i2c_fill_transmit_buf - Fill the transmit buffer with data from i2c->msg 333 + * @i2c: target controller data 335 334 */ 336 335 static void rk3x_i2c_fill_transmit_buf(struct rk3x_i2c *i2c) 337 336 { ··· 535 532 } 536 533 537 534 /** 538 - * Get timing values of I2C specification 539 - * 535 + * rk3x_i2c_get_spec - Get timing values of I2C specification 540 536 * @speed: Desired SCL frequency 541 537 * 542 - * Returns: Matched i2c spec values. 538 + * Return: Matched i2c_spec_values. 543 539 */ 544 540 static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed) 545 541 { ··· 551 549 } 552 550 553 551 /** 554 - * Calculate divider values for desired SCL frequency 555 - * 552 + * rk3x_i2c_v0_calc_timings - Calculate divider values for desired SCL frequency 556 553 * @clk_rate: I2C input clock rate 557 554 * @t: Known I2C timing information 558 555 * @t_calc: Caculated rk3x private timings that would be written into regs 559 556 * 560 - * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case 557 + * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case 561 558 * a best-effort divider value is returned in divs. If the target rate is 562 559 * too high, we silently use the highest possible rate. 
563 560 */ ··· 711 710 } 712 711 713 712 /** 714 - * Calculate timing values for desired SCL frequency 715 - * 713 + * rk3x_i2c_v1_calc_timings - Calculate timing values for desired SCL frequency 716 714 * @clk_rate: I2C input clock rate 717 715 * @t: Known I2C timing information 718 716 * @t_calc: Caculated rk3x private timings that would be written into regs 719 717 * 720 - * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case 718 + * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case 721 719 * a best-effort divider value is returned in divs. If the target rate is 722 720 * too high, we silently use the highest possible rate. 723 721 * The following formulas are v1's method to calculate timings. ··· 960 960 } 961 961 962 962 /** 963 - * Setup I2C registers for an I2C operation specified by msgs, num. 964 - * 965 - * Must be called with i2c->lock held. 966 - * 963 + * rk3x_i2c_setup - Setup I2C registers for an I2C operation specified by msgs, num. 964 + * @i2c: target controller data 967 965 * @msgs: I2C msgs to process 968 966 * @num: Number of msgs 969 967 * 970 - * returns: Number of I2C msgs processed or negative in case of error 968 + * Must be called with i2c->lock held. 969 + * 970 + * Return: Number of I2C msgs processed or negative in case of error 971 971 */ 972 972 static int rk3x_i2c_setup(struct rk3x_i2c *i2c, struct i2c_msg *msgs, int num) 973 973 {
+1
drivers/iio/accel/hid-sensor-accel-3d.c
··· 280 280 hid_sensor_convert_timestamp( 281 281 &accel_state->common_attributes, 282 282 *(int64_t *)raw_data); 283 + ret = 0; 283 284 break; 284 285 default: 285 286 break;
+3 -1
drivers/iio/adc/berlin2-adc.c
··· 298 298 int ret; 299 299 300 300 indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv)); 301 - if (!indio_dev) 301 + if (!indio_dev) { 302 + of_node_put(parent_np); 302 303 return -ENOMEM; 304 + } 303 305 304 306 priv = iio_priv(indio_dev); 305 307
+9 -2
drivers/iio/adc/imx8qxp-adc.c
··· 86 86 87 87 #define IMX8QXP_ADC_TIMEOUT msecs_to_jiffies(100) 88 88 89 + #define IMX8QXP_ADC_MAX_FIFO_SIZE 16 90 + 89 91 struct imx8qxp_adc { 90 92 struct device *dev; 91 93 void __iomem *regs; ··· 97 95 /* Serialise ADC channel reads */ 98 96 struct mutex lock; 99 97 struct completion completion; 98 + u32 fifo[IMX8QXP_ADC_MAX_FIFO_SIZE]; 100 99 }; 101 100 102 101 #define IMX8QXP_ADC_CHAN(_idx) { \ ··· 241 238 return ret; 242 239 } 243 240 244 - *val = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK, 245 - readl(adc->regs + IMX8QXP_ADR_ADC_RESFIFO)); 241 + *val = adc->fifo[0]; 246 242 247 243 mutex_unlock(&adc->lock); 248 244 return IIO_VAL_INT; ··· 267 265 { 268 266 struct imx8qxp_adc *adc = dev_id; 269 267 u32 fifo_count; 268 + int i; 270 269 271 270 fifo_count = FIELD_GET(IMX8QXP_ADC_FCTRL_FCOUNT_MASK, 272 271 readl(adc->regs + IMX8QXP_ADR_ADC_FCTRL)); 272 + 273 + for (i = 0; i < fifo_count; i++) 274 + adc->fifo[i] = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK, 275 + readl_relaxed(adc->regs + IMX8QXP_ADR_ADC_RESFIFO)); 273 276 274 277 if (fifo_count) 275 278 complete(&adc->completion);
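The imx8qxp-adc hunk above moves the result-register reads into the IRQ handler so every pending FIFO entry is drained into a driver-side buffer, instead of leaving stale samples to be misread as the next conversion's result. A toy userspace model of that pattern (the pop-on-read FIFO and the 12-bit field mask are assumptions for illustration, not the real register layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define FIFO_DEPTH 16
#define RESFIFO_VAL_MASK 0xfffu /* hypothetical result field mask */

/* Toy model of the ADC: each read of the result register pops one
 * entry, like the hardware RESFIFO. */
struct toy_adc {
	uint32_t hw_fifo[FIFO_DEPTH];
	unsigned int hw_count;
	uint32_t fifo[FIFO_DEPTH]; /* driver copy, as in the patch */
};

static uint32_t toy_read_resfifo(struct toy_adc *adc)
{
	uint32_t val = adc->hw_fifo[0];

	memmove(adc->hw_fifo, adc->hw_fifo + 1,
		--adc->hw_count * sizeof(adc->hw_fifo[0]));
	return val;
}

/* IRQ-handler sketch mirroring the patch: drain every pending sample
 * so none is left behind for the next conversion. */
static unsigned int toy_irq_handler(struct toy_adc *adc)
{
	unsigned int i, count = adc->hw_count;

	for (i = 0; i < count; i++)
		adc->fifo[i] = toy_read_resfifo(adc) & RESFIFO_VAL_MASK;
	return count;
}
```

After the handler runs, the read path only needs `fifo[0]`, matching the `*val = adc->fifo[0];` line in the hunk.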
+1
drivers/iio/adc/stm32-dfsdm-adc.c
··· 1520 1520 }, 1521 1521 {} 1522 1522 }; 1523 + MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match); 1523 1524 1524 1525 static int stm32_dfsdm_adc_probe(struct platform_device *pdev) 1525 1526 {
+32
drivers/iio/adc/twl6030-gpadc.c
··· 57 57 #define TWL6030_GPADCS BIT(1) 58 58 #define TWL6030_GPADCR BIT(0) 59 59 60 + #define USB_VBUS_CTRL_SET 0x04 61 + #define USB_ID_CTRL_SET 0x06 62 + 63 + #define TWL6030_MISC1 0xE4 64 + #define VBUS_MEAS 0x01 65 + #define ID_MEAS 0x01 66 + 67 + #define VAC_MEAS 0x04 68 + #define VBAT_MEAS 0x02 69 + #define BB_MEAS 0x01 70 + 71 + 60 72 /** 61 73 * struct twl6030_chnl_calib - channel calibration 62 74 * @gain: slope coefficient for ideal curve ··· 936 924 TWL6030_REG_TOGGLE1); 937 925 if (ret < 0) { 938 926 dev_err(dev, "failed to enable GPADC module\n"); 927 + return ret; 928 + } 929 + 930 + ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET); 931 + if (ret < 0) { 932 + dev_err(dev, "failed to wire up inputs\n"); 933 + return ret; 934 + } 935 + 936 + ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET); 937 + if (ret < 0) { 938 + dev_err(dev, "failed to wire up inputs\n"); 939 + return ret; 940 + } 941 + 942 + ret = twl_i2c_write_u8(TWL6030_MODULE_ID0, 943 + VBAT_MEAS | BB_MEAS | VAC_MEAS, 944 + TWL6030_MISC1); 945 + if (ret < 0) { 946 + dev_err(dev, "failed to wire up inputs\n"); 939 947 return ret; 940 948 } 941 949
+1 -1
drivers/iio/adc/xilinx-ams.c
··· 1329 1329 1330 1330 dev_channels = devm_krealloc(dev, ams_channels, dev_size, GFP_KERNEL); 1331 1331 if (!dev_channels) 1332 - ret = -ENOMEM; 1332 + return -ENOMEM; 1333 1333 1334 1334 indio_dev->channels = dev_channels; 1335 1335 indio_dev->num_channels = num_channels;
+1
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 231 231 gyro_state->timestamp = 232 232 hid_sensor_convert_timestamp(&gyro_state->common_attributes, 233 233 *(s64 *)raw_data); 234 + ret = 0; 234 235 break; 235 236 default: 236 237 break;
+89 -22
drivers/iio/imu/fxos8700_core.c
··· 10 10 #include <linux/regmap.h> 11 11 #include <linux/acpi.h> 12 12 #include <linux/bitops.h> 13 + #include <linux/bitfield.h> 13 14 14 15 #include <linux/iio/iio.h> 15 16 #include <linux/iio/sysfs.h> ··· 145 144 #define FXOS8700_NVM_DATA_BNK0 0xa7 146 145 147 146 /* Bit definitions for FXOS8700_CTRL_REG1 */ 148 - #define FXOS8700_CTRL_ODR_MSK 0x38 149 147 #define FXOS8700_CTRL_ODR_MAX 0x00 150 - #define FXOS8700_CTRL_ODR_MIN GENMASK(4, 3) 148 + #define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3) 151 149 152 150 /* Bit definitions for FXOS8700_M_CTRL_REG1 */ 153 151 #define FXOS8700_HMS_MASK GENMASK(1, 0) ··· 320 320 switch (iio_type) { 321 321 case IIO_ACCEL: 322 322 return FXOS8700_ACCEL; 323 - case IIO_ANGL_VEL: 323 + case IIO_MAGN: 324 324 return FXOS8700_MAGN; 325 325 default: 326 326 return -EINVAL; ··· 345 345 static int fxos8700_set_scale(struct fxos8700_data *data, 346 346 enum fxos8700_sensor t, int uscale) 347 347 { 348 - int i; 348 + int i, ret, val; 349 + bool active_mode; 349 350 static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale); 350 351 struct device *dev = regmap_get_device(data->regmap); 351 352 352 353 if (t == FXOS8700_MAGN) { 353 - dev_err(dev, "Magnetometer scale is locked at 1200uT\n"); 354 + dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n"); 354 355 return -EINVAL; 356 + } 357 + 358 + /* 359 + * When the device is in active mode, it fails to set an ACCEL 360 + * full-scale range (2g/4g/8g) in FXOS8700_XYZ_DATA_CFG. 361 + * This does not align with the datasheet, but it is fxos8700 362 + * chip behavior. Set the device in standby mode before setting 363 + * an ACCEL full-scale range. 
364 + */ 365 + ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val); 366 + if (ret) 367 + return ret; 368 + 369 + active_mode = val & FXOS8700_ACTIVE; 370 + if (active_mode) { 371 + ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1, 372 + val & ~FXOS8700_ACTIVE); 373 + if (ret) 374 + return ret; 355 375 } 356 376 357 377 for (i = 0; i < scale_num; i++) ··· 381 361 if (i == scale_num) 382 362 return -EINVAL; 383 363 384 - return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, 364 + ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, 385 365 fxos8700_accel_scale[i].bits); 366 + if (ret) 367 + return ret; 368 + return regmap_write(data->regmap, FXOS8700_CTRL_REG1, 369 + active_mode); 386 370 } 387 371 388 372 static int fxos8700_get_scale(struct fxos8700_data *data, ··· 396 372 static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale); 397 373 398 374 if (t == FXOS8700_MAGN) { 399 - *uscale = 1200; /* Magnetometer is locked at 1200uT */ 375 + *uscale = 1000; /* Magnetometer is locked at 0.001Gs */ 400 376 return 0; 401 377 } 402 378 ··· 418 394 int axis, int *val) 419 395 { 420 396 u8 base, reg; 397 + s16 tmp; 421 398 int ret; 422 - enum fxos8700_sensor type = fxos8700_to_sensor(chan_type); 423 399 424 - base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB; 400 + /* 401 + * The register base address varies with the channel type. This 402 + * bug hasn't been noticed before because using an enum is 403 + * really hard to read. Use a switch statement instead. 
404 + */ 405 + switch (chan_type) { 406 + case IIO_ACCEL: 407 + base = FXOS8700_OUT_X_MSB; 408 + break; 409 + case IIO_MAGN: 410 + base = FXOS8700_M_OUT_X_MSB; 411 + break; 412 + default: 413 + return -EINVAL; 414 + } 425 415 426 416 /* Block read 6 bytes of device output registers to avoid data loss */ 427 417 ret = regmap_bulk_read(data->regmap, base, data->buf, 428 - FXOS8700_DATA_BUF_SIZE); 418 + sizeof(data->buf)); 429 419 if (ret) 430 420 return ret; 431 421 432 422 /* Convert axis to buffer index */ 433 423 reg = axis - IIO_MOD_X; 434 424 425 + /* 426 + * Convert to native endianness. The accel data and magn data 427 + * are signed, so a forced type conversion is needed. 428 + */ 429 + tmp = be16_to_cpu(data->buf[reg]); 430 + 431 + /* 432 + * ACCEL output data registers contain the X-axis, Y-axis, and Z-axis 433 + * 14-bit left-justified sample data and MAGN output data registers 434 + * contain the X-axis, Y-axis, and Z-axis 16-bit sample data. Apply 435 + * a signed 2-bit right shift to the raw data read back from the ACCEL 436 + * output data registers and keep the MAGN sensor data as-is. The 437 + * value is then sign-extended to 32 bits. 
438 + */ 439 + switch (chan_type) { 440 + case IIO_ACCEL: 441 + tmp = tmp >> 2; 442 + break; 443 + case IIO_MAGN: 444 + /* Nothing to do */ 445 + break; 446 + default: 447 + return -EINVAL; 448 + } 449 + 435 450 /* Convert to native endianness */ 436 - *val = sign_extend32(be16_to_cpu(data->buf[reg]), 15); 451 + *val = sign_extend32(tmp, 15); 437 452 438 453 return 0; 439 454 } ··· 508 445 if (i >= odr_num) 509 446 return -EINVAL; 510 447 511 - return regmap_update_bits(data->regmap, 512 - FXOS8700_CTRL_REG1, 513 - FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE, 514 - fxos8700_odr[i].bits << 3 | active_mode); 448 + val &= ~FXOS8700_CTRL_ODR_MSK; 449 + val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE; 450 + return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val); 515 451 } 516 452 517 453 static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t, ··· 523 461 if (ret) 524 462 return ret; 525 463 526 - val &= FXOS8700_CTRL_ODR_MSK; 464 + val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val); 527 465 528 466 for (i = 0; i < odr_num; i++) 529 467 if (val == fxos8700_odr[i].bits) ··· 588 526 static IIO_CONST_ATTR(in_magn_sampling_frequency_available, 589 527 "1.5625 6.25 12.5 50 100 200 400 800"); 590 528 static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976"); 591 - static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200"); 529 + static IIO_CONST_ATTR(in_magn_scale_available, "0.001000"); 592 530 593 531 static struct attribute *fxos8700_attrs[] = { 594 532 &iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr, ··· 654 592 if (ret) 655 593 return ret; 656 594 657 - /* Max ODR (800Hz individual or 400Hz hybrid), active mode */ 658 - ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1, 659 - FXOS8700_CTRL_ODR_MAX | FXOS8700_ACTIVE); 595 + /* 596 + * Set max full-scale range (+/-8G) for ACCEL sensor in chip 597 + * initialization then activate the device. 
598 + */ 599 + ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G); 660 600 if (ret) 661 601 return ret; 662 602 663 - /* Set for max full-scale range (+/-8G) */ 664 - return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G); 603 + /* Max ODR (800Hz individual or 400Hz hybrid), active mode */ 604 + return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1, 605 + FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE, 606 + FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) | 607 + FXOS8700_ACTIVE); 665 608 } 666 609 667 610 static void fxos8700_chip_uninit(void *data)
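The fxos8700 conversion logic above hinges on the accelerometer sample being 14-bit two's-complement, left-justified in a 16-bit word, while the magnetometer uses the full word. A userspace sketch of that conversion (`sign_extend32` is reimplemented locally; the arithmetic right shift of a negative value matches what the kernel code relies on):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace equivalent of the kernel's sign_extend32(). */
static int32_t sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}

/* The accel sample is 14-bit two's-complement, left-justified in a
 * 16-bit word, so it needs a signed shift right by two before sign
 * extension; the word is assumed already byte-swapped to host order. */
static int32_t fxos8700_accel_val(uint16_t word)
{
	int16_t tmp = (int16_t)word; /* forced signed conversion */

	tmp >>= 2; /* drop the two left-justification pad bits */
	return sign_extend32((uint16_t)tmp, 15);
}

/* The magnetometer sample uses all 16 bits as-is. */
static int32_t fxos8700_magn_val(uint16_t word)
{
	return sign_extend32(word, 15);
}
```

So the full-scale raw word 0x8000 decodes to -8192 and 0x7ffc to 8191, the expected 14-bit range.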
+1
drivers/iio/imu/st_lsm6dsx/Kconfig
··· 4 4 tristate "ST_LSM6DSx driver for STM 6-axis IMU MEMS sensors" 5 5 depends on (I2C || SPI || I3C) 6 6 select IIO_BUFFER 7 + select IIO_TRIGGERED_BUFFER 7 8 select IIO_KFIFO_BUF 8 9 select IIO_ST_LSM6DSX_I2C if (I2C) 9 10 select IIO_ST_LSM6DSX_SPI if (SPI_MASTER)
+5 -4
drivers/iio/light/cm32181.c
··· 440 440 if (!indio_dev) 441 441 return -ENOMEM; 442 442 443 + i2c_set_clientdata(client, indio_dev); 444 + 443 445 /* 444 446 * Some ACPI systems list 2 I2C resources for the CM3218 sensor, the 445 447 * SMBus Alert Response Address (ARA, 0x0c) and the actual I2C address. ··· 461 459 if (IS_ERR(client)) 462 460 return PTR_ERR(client); 463 461 } 464 - 465 - i2c_set_clientdata(client, indio_dev); 466 462 467 463 cm32181 = iio_priv(indio_dev); 468 464 cm32181->client = client; ··· 490 490 491 491 static int cm32181_suspend(struct device *dev) 492 492 { 493 - struct i2c_client *client = to_i2c_client(dev); 493 + struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev)); 494 + struct i2c_client *client = cm32181->client; 494 495 495 496 return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD, 496 497 CM32181_CMD_ALS_DISABLE); ··· 499 498 500 499 static int cm32181_resume(struct device *dev) 501 500 { 502 - struct i2c_client *client = to_i2c_client(dev); 503 501 struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev)); 502 + struct i2c_client *client = cm32181->client; 504 503 505 504 return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD, 506 505 cm32181->conf_regs[CM32181_REG_ADDR_CMD]);
-1
drivers/input/mouse/synaptics.c
··· 192 192 "SYN3221", /* HP 15-ay000 */ 193 193 "SYN323d", /* HP Spectre X360 13-w013dx */ 194 194 "SYN3257", /* HP Envy 13-ad105ng */ 195 - "SYN3286", /* HP Laptop 15-da3001TU */ 196 195 NULL 197 196 }; 198 197
+7
drivers/input/serio/i8042-acpipnpio.h
··· 1240 1240 }, 1241 1241 { 1242 1242 .matches = { 1243 + DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"), 1244 + }, 1245 + .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1246 + SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1247 + }, 1248 + { 1249 + .matches = { 1243 1250 DMI_MATCH(DMI_BOARD_NAME, "X170SM"), 1244 1251 }, 1245 1252 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+2 -1
drivers/md/bcache/bcache_ondisk.h
··· 106 106 return bkey_u64s(k) * sizeof(__u64); 107 107 } 108 108 109 - #define bkey_copy(_dest, _src) memcpy(_dest, _src, bkey_bytes(_src)) 109 + #define bkey_copy(_dest, _src) unsafe_memcpy(_dest, _src, bkey_bytes(_src), \ 110 + /* bkey is always padded */) 110 111 111 112 static inline void bkey_copy_key(struct bkey *dest, const struct bkey *src) 112 113 {
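The bcache hunk above swaps `memcpy` for `unsafe_memcpy` because FORTIFY_SOURCE cannot see the runtime size of a key whose length lives in its own header. A cut-down userspace model of such a copy (the header encoding here is invented for illustration; the real bkey layout differs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Cut-down model of struct bkey: a fixed header followed by a
 * flexible array whose length is encoded in the header. */
struct toy_bkey {
	uint64_t header; /* low byte: number of ptr entries (hypothetical) */
	uint64_t key;
	uint64_t ptr[];
};

static size_t toy_bkey_bytes(const struct toy_bkey *k)
{
	return sizeof(*k) + (k->header & 0xff) * sizeof(uint64_t);
}

/* FORTIFY_SOURCE only sees sizeof(*dest) == 16 here and would flag
 * the larger runtime length, which is why the patch annotates the
 * kernel copy with unsafe_memcpy() and a "bkey is always padded"
 * justification; plain memcpy() expresses the same copy here. */
static void toy_bkey_copy(struct toy_bkey *dest, const struct toy_bkey *src)
{
	memcpy(dest, src, toy_bkey_bytes(src));
}
```

The annotation changes nothing at runtime; it only tells the fortify machinery that the caller has vouched for the destination size.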
+2 -1
drivers/md/bcache/journal.c
··· 149 149 bytes, GFP_KERNEL); 150 150 if (!i) 151 151 return -ENOMEM; 152 - memcpy(&i->j, j, bytes); 152 + unsafe_memcpy(&i->j, j, bytes, 153 + /* "bytes" was calculated by set_bytes() above */); 153 154 /* Add to the location after 'where' points to */ 154 155 list_add(&i->list, where); 155 156 ret = 1;
+2 -3
drivers/media/common/videobuf2/videobuf2-core.c
··· 2149 2149 if (ret) 2150 2150 return ret; 2151 2151 2152 - q->streaming = 1; 2153 - 2154 2152 /* 2155 2153 * Tell driver to start streaming provided sufficient buffers 2156 2154 * are available. ··· 2159 2161 goto unprepare; 2160 2162 } 2161 2163 2164 + q->streaming = 1; 2165 + 2162 2166 dprintk(q, 3, "successful\n"); 2163 2167 return 0; 2164 2168 2165 2169 unprepare: 2166 2170 call_void_qop(q, unprepare_streaming, q); 2167 - q->streaming = 0; 2168 2171 return ret; 2169 2172 } 2170 2173 EXPORT_SYMBOL_GPL(vb2_core_streamon);
+1 -1
drivers/media/v4l2-core/v4l2-ctrls-api.c
··· 150 150 * then return an error. 151 151 */ 152 152 if (strlen(ctrl->p_new.p_char) == ctrl->maximum && last) 153 - ctrl->is_new = 1; 154 153 return -ERANGE; 154 + ctrl->is_new = 1; 155 155 } 156 156 return ret; 157 157 default:
+1
drivers/net/can/spi/mcp251xfd/mcp251xfd-ethtool.c
··· 48 48 priv->rx_obj_num = layout.cur_rx; 49 49 priv->rx_obj_num_coalesce_irq = layout.rx_coalesce; 50 50 priv->tx->obj_num = layout.cur_tx; 51 + priv->tx_obj_num_coalesce_irq = layout.tx_coalesce; 51 52 52 53 return 0; 53 54 }
+4 -3
drivers/net/dsa/Kconfig
··· 35 35 the xrx200 / VR9 SoC. 36 36 37 37 config NET_DSA_MT7530 38 - tristate "MediaTek MT753x and MT7621 Ethernet switch support" 38 + tristate "MediaTek MT7530 and MT7531 Ethernet switch support" 39 39 select NET_DSA_TAG_MTK 40 40 select MEDIATEK_GE_PHY 41 41 help 42 - This enables support for the MediaTek MT7530, MT7531, and MT7621 43 - Ethernet switch chips. 42 + This enables support for the MediaTek MT7530 and MT7531 Ethernet 43 + switch chips. Multi-chip module MT7530 in MT7621AT, MT7621DAT, 44 + MT7621ST and MT7623AI SoCs is supported. 44 45 45 46 config NET_DSA_MV88E6060 46 47 tristate "Marvell 88E6060 ethernet switch chip support"
+1 -1
drivers/net/dsa/microchip/ksz9477_i2c.c
··· 104 104 }, 105 105 { 106 106 .compatible = "microchip,ksz8563", 107 - .data = &ksz_switch_chips[KSZ9893] 107 + .data = &ksz_switch_chips[KSZ8563] 108 108 }, 109 109 { 110 110 .compatible = "microchip,ksz9567",
+1 -1
drivers/net/ethernet/adi/adin1110.c
··· 356 356 357 357 if ((port_priv->flags & IFF_ALLMULTI && rxb->pkt_type == PACKET_MULTICAST) || 358 358 (port_priv->flags & IFF_BROADCAST && rxb->pkt_type == PACKET_BROADCAST)) 359 - rxb->offload_fwd_mark = 1; 359 + rxb->offload_fwd_mark = port_priv->priv->forwarding; 360 360 361 361 netif_rx(rxb); 362 362
+4 -4
drivers/net/ethernet/broadcom/tg3.c
··· 11166 11166 rtnl_lock(); 11167 11167 tg3_full_lock(tp, 0); 11168 11168 11169 - if (!netif_running(tp->dev)) { 11169 + if (tp->pcierr_recovery || !netif_running(tp->dev)) { 11170 11170 tg3_flag_clear(tp, RESET_TASK_PENDING); 11171 11171 tg3_full_unlock(tp); 11172 11172 rtnl_unlock(); ··· 18101 18101 18102 18102 netdev_info(netdev, "PCI I/O error detected\n"); 18103 18103 18104 + /* Want to make sure that the reset task doesn't run */ 18105 + tg3_reset_task_cancel(tp); 18106 + 18104 18107 rtnl_lock(); 18105 18108 18106 18109 /* Could be second call or maybe we don't have netdev yet */ ··· 18119 18116 tg3_netif_stop(tp); 18120 18117 18121 18118 tg3_timer_stop(tp); 18122 - 18123 - /* Want to make sure that the reset task doesn't run */ 18124 - tg3_reset_task_cancel(tp); 18125 18119 18126 18120 netif_device_detach(netdev); 18127 18121
+9 -6
drivers/net/ethernet/engleder/tsnep_main.c
··· 450 450 /* ring full, shall not happen because queue is stopped if full 451 451 * below 452 452 */ 453 - netif_stop_queue(tx->adapter->netdev); 453 + netif_stop_subqueue(tx->adapter->netdev, tx->queue_index); 454 454 455 455 spin_unlock_irqrestore(&tx->lock, flags); 456 456 ··· 493 493 494 494 if (tsnep_tx_desc_available(tx) < (MAX_SKB_FRAGS + 1)) { 495 495 /* ring can get full with next frame */ 496 - netif_stop_queue(tx->adapter->netdev); 496 + netif_stop_subqueue(tx->adapter->netdev, tx->queue_index); 497 497 } 498 498 499 499 spin_unlock_irqrestore(&tx->lock, flags); ··· 503 503 504 504 static bool tsnep_tx_poll(struct tsnep_tx *tx, int napi_budget) 505 505 { 506 + struct tsnep_tx_entry *entry; 507 + struct netdev_queue *nq; 506 508 unsigned long flags; 507 509 int budget = 128; 508 - struct tsnep_tx_entry *entry; 509 - int count; 510 510 int length; 511 + int count; 512 + 513 + nq = netdev_get_tx_queue(tx->adapter->netdev, tx->queue_index); 511 514 512 515 spin_lock_irqsave(&tx->lock, flags); 513 516 ··· 567 564 } while (likely(budget)); 568 565 569 566 if ((tsnep_tx_desc_available(tx) >= ((MAX_SKB_FRAGS + 1) * 2)) && 570 - netif_queue_stopped(tx->adapter->netdev)) { 571 - netif_wake_queue(tx->adapter->netdev); 567 + netif_tx_queue_stopped(nq)) { 568 + netif_tx_wake_queue(nq); 572 569 } 573 570 574 571 spin_unlock_irqrestore(&tx->lock, flags);
+3 -3
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 2410 2410 2411 2411 cleaned = qman_p_poll_dqrr(np->p, budget); 2412 2412 2413 + if (np->xdp_act & XDP_REDIRECT) 2414 + xdp_do_flush(); 2415 + 2413 2416 if (cleaned < budget) { 2414 2417 napi_complete_done(napi, cleaned); 2415 2418 qman_p_irqsource_add(np->p, QM_PIRQ_DQRI); 2416 2419 } else if (np->down) { 2417 2420 qman_p_irqsource_add(np->p, QM_PIRQ_DQRI); 2418 2421 } 2419 - 2420 - if (np->xdp_act & XDP_REDIRECT) 2421 - xdp_do_flush(); 2422 2422 2423 2423 return cleaned; 2424 2424 }
+6 -3
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 1993 1993 if (rx_cleaned >= budget || 1994 1994 txconf_cleaned >= DPAA2_ETH_TXCONF_PER_NAPI) { 1995 1995 work_done = budget; 1996 + if (ch->xdp.res & XDP_REDIRECT) 1997 + xdp_do_flush(); 1996 1998 goto out; 1997 1999 } 1998 2000 } while (store_cleaned); 2001 + 2002 + if (ch->xdp.res & XDP_REDIRECT) 2003 + xdp_do_flush(); 1999 2004 2000 2005 /* Update NET DIM with the values for this CDAN */ 2001 2006 dpaa2_io_update_net_dim(ch->dpio, ch->stats.frames_per_cdan, ··· 2037 2032 txc_fq->dq_bytes = 0; 2038 2033 } 2039 2034 2040 - if (ch->xdp.res & XDP_REDIRECT) 2041 - xdp_do_flush_map(); 2042 - else if (rx_cleaned && ch->xdp.res & XDP_TX) 2035 + if (rx_cleaned && ch->xdp.res & XDP_TX) 2043 2036 dpaa2_eth_xdp_tx_flush(priv, ch, &priv->fq[flowid]); 2044 2037 2045 2038 return work_done;
+1 -1
drivers/net/ethernet/freescale/fec_main.c
··· 3191 3191 for (q = 0; q < fep->num_rx_queues; q++) { 3192 3192 rxq = fep->rx_queue[q]; 3193 3193 for (i = 0; i < rxq->bd.ring_size; i++) 3194 - page_pool_release_page(rxq->page_pool, rxq->rx_skb_info[i].page); 3194 + page_pool_put_full_page(rxq->page_pool, rxq->rx_skb_info[i].page, false); 3195 3195 3196 3196 for (i = 0; i < XDP_STATS_TOTAL; i++) 3197 3197 rxq->stats[i] = 0;
+3
drivers/net/ethernet/freescale/fman/fman_memac.c
··· 1055 1055 return ERR_PTR(-EPROBE_DEFER); 1056 1056 1057 1057 pcs = lynx_pcs_create(mdiodev); 1058 + if (!pcs) 1059 + mdio_device_free(mdiodev); 1060 + 1058 1061 return pcs; 1059 1062 } 1060 1063
+1 -1
drivers/net/ethernet/intel/iavf/iavf.h
··· 249 249 250 250 /* board specific private data structure */ 251 251 struct iavf_adapter { 252 + struct workqueue_struct *wq; 252 253 struct work_struct reset_task; 253 254 struct work_struct adminq_task; 254 255 struct delayed_work client_task; ··· 460 459 461 460 /* needed by iavf_ethtool.c */ 462 461 extern char iavf_driver_name[]; 463 - extern struct workqueue_struct *iavf_wq; 464 462 465 463 static inline const char *iavf_state_str(enum iavf_state_t state) 466 464 {
+5 -5
drivers/net/ethernet/intel/iavf/iavf_ethtool.c
··· 532 532 if (changed_flags & IAVF_FLAG_LEGACY_RX) { 533 533 if (netif_running(netdev)) { 534 534 adapter->flags |= IAVF_FLAG_RESET_NEEDED; 535 - queue_work(iavf_wq, &adapter->reset_task); 535 + queue_work(adapter->wq, &adapter->reset_task); 536 536 } 537 537 } 538 538 ··· 672 672 673 673 if (netif_running(netdev)) { 674 674 adapter->flags |= IAVF_FLAG_RESET_NEEDED; 675 - queue_work(iavf_wq, &adapter->reset_task); 675 + queue_work(adapter->wq, &adapter->reset_task); 676 676 } 677 677 678 678 return 0; ··· 1433 1433 adapter->aq_required |= IAVF_FLAG_AQ_ADD_FDIR_FILTER; 1434 1434 spin_unlock_bh(&adapter->fdir_fltr_lock); 1435 1435 1436 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 1436 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 1437 1437 1438 1438 ret: 1439 1439 if (err && fltr) ··· 1474 1474 spin_unlock_bh(&adapter->fdir_fltr_lock); 1475 1475 1476 1476 if (fltr && fltr->state == IAVF_FDIR_FLTR_DEL_REQUEST) 1477 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 1477 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 1478 1478 1479 1479 return err; 1480 1480 } ··· 1658 1658 spin_unlock_bh(&adapter->adv_rss_lock); 1659 1659 1660 1660 if (!err) 1661 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 1661 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 1662 1662 1663 1663 mutex_unlock(&adapter->crit_lock); 1664 1664
+51 -62
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 49 49 MODULE_LICENSE("GPL v2"); 50 50 51 51 static const struct net_device_ops iavf_netdev_ops; 52 - struct workqueue_struct *iavf_wq; 53 52 54 53 int iavf_status_to_errno(enum iavf_status status) 55 54 { ··· 276 277 if (!(adapter->flags & 277 278 (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED))) { 278 279 adapter->flags |= IAVF_FLAG_RESET_NEEDED; 279 - queue_work(iavf_wq, &adapter->reset_task); 280 + queue_work(adapter->wq, &adapter->reset_task); 280 281 } 281 282 } 282 283 ··· 290 291 void iavf_schedule_request_stats(struct iavf_adapter *adapter) 291 292 { 292 293 adapter->aq_required |= IAVF_FLAG_AQ_REQUEST_STATS; 293 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 294 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 294 295 } 295 296 296 297 /** ··· 410 411 411 412 if (adapter->state != __IAVF_REMOVE) 412 413 /* schedule work on the private workqueue */ 413 - queue_work(iavf_wq, &adapter->adminq_task); 414 + queue_work(adapter->wq, &adapter->adminq_task); 414 415 415 416 return IRQ_HANDLED; 416 417 } ··· 1033 1034 1034 1035 /* schedule the watchdog task to immediately process the request */ 1035 1036 if (f) { 1036 - queue_work(iavf_wq, &adapter->watchdog_task.work); 1037 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 1037 1038 return 0; 1038 1039 } 1039 1040 return -ENOMEM; ··· 1256 1257 adapter->aq_required |= IAVF_FLAG_AQ_ENABLE_QUEUES; 1257 1258 if (CLIENT_ENABLED(adapter)) 1258 1259 adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_OPEN; 1259 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 1260 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 1260 1261 } 1261 1262 1262 1263 /** ··· 1413 1414 adapter->aq_required |= IAVF_FLAG_AQ_DISABLE_QUEUES; 1414 1415 } 1415 1416 1416 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 1417 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 1417 1418 } 1418 1419 1419 1420 /** ··· 2247 2248 2248 2249 if (aq_required) { 2249 2250 adapter->aq_required |= 
aq_required; 2250 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 0); 2251 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); 2251 2252 } 2252 2253 } 2253 2254 ··· 2692 2693 goto restart_watchdog; 2693 2694 } 2694 2695 2696 + if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) && 2697 + adapter->netdev_registered && 2698 + !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section) && 2699 + rtnl_trylock()) { 2700 + netdev_update_features(adapter->netdev); 2701 + rtnl_unlock(); 2702 + adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES; 2703 + } 2704 + 2695 2705 if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) 2696 2706 iavf_change_state(adapter, __IAVF_COMM_FAILED); 2697 2707 ··· 2708 2700 adapter->aq_required = 0; 2709 2701 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2710 2702 mutex_unlock(&adapter->crit_lock); 2711 - queue_work(iavf_wq, &adapter->reset_task); 2703 + queue_work(adapter->wq, &adapter->reset_task); 2712 2704 return; 2713 2705 } 2714 2706 ··· 2716 2708 case __IAVF_STARTUP: 2717 2709 iavf_startup(adapter); 2718 2710 mutex_unlock(&adapter->crit_lock); 2719 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2711 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2720 2712 msecs_to_jiffies(30)); 2721 2713 return; 2722 2714 case __IAVF_INIT_VERSION_CHECK: 2723 2715 iavf_init_version_check(adapter); 2724 2716 mutex_unlock(&adapter->crit_lock); 2725 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2717 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2726 2718 msecs_to_jiffies(30)); 2727 2719 return; 2728 2720 case __IAVF_INIT_GET_RESOURCES: 2729 2721 iavf_init_get_resources(adapter); 2730 2722 mutex_unlock(&adapter->crit_lock); 2731 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2723 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2732 2724 msecs_to_jiffies(1)); 2733 2725 return; 2734 2726 case __IAVF_INIT_EXTENDED_CAPS: 2735 2727 iavf_init_process_extended_caps(adapter); 2736 2728 
mutex_unlock(&adapter->crit_lock); 2737 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2729 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2738 2730 msecs_to_jiffies(1)); 2739 2731 return; 2740 2732 case __IAVF_INIT_CONFIG_ADAPTER: 2741 2733 iavf_init_config_adapter(adapter); 2742 2734 mutex_unlock(&adapter->crit_lock); 2743 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2735 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2744 2736 msecs_to_jiffies(1)); 2745 2737 return; 2746 2738 case __IAVF_INIT_FAILED: ··· 2759 2751 adapter->flags |= IAVF_FLAG_PF_COMMS_FAILED; 2760 2752 iavf_shutdown_adminq(hw); 2761 2753 mutex_unlock(&adapter->crit_lock); 2762 - queue_delayed_work(iavf_wq, 2754 + queue_delayed_work(adapter->wq, 2763 2755 &adapter->watchdog_task, (5 * HZ)); 2764 2756 return; 2765 2757 } 2766 2758 /* Try again from failed step*/ 2767 2759 iavf_change_state(adapter, adapter->last_state); 2768 2760 mutex_unlock(&adapter->crit_lock); 2769 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ); 2761 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, HZ); 2770 2762 return; 2771 2763 case __IAVF_COMM_FAILED: 2772 2764 if (test_bit(__IAVF_IN_REMOVE_TASK, ··· 2797 2789 adapter->aq_required = 0; 2798 2790 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2799 2791 mutex_unlock(&adapter->crit_lock); 2800 - queue_delayed_work(iavf_wq, 2792 + queue_delayed_work(adapter->wq, 2801 2793 &adapter->watchdog_task, 2802 2794 msecs_to_jiffies(10)); 2803 2795 return; 2804 2796 case __IAVF_RESETTING: 2805 2797 mutex_unlock(&adapter->crit_lock); 2806 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2); 2798 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2799 + HZ * 2); 2807 2800 return; 2808 2801 case __IAVF_DOWN: 2809 2802 case __IAVF_DOWN_PENDING: ··· 2843 2834 adapter->aq_required = 0; 2844 2835 adapter->current_op = VIRTCHNL_OP_UNKNOWN; 2845 2836 dev_err(&adapter->pdev->dev, "Hardware reset detected\n"); 2846 - 
queue_work(iavf_wq, &adapter->reset_task); 2837 + queue_work(adapter->wq, &adapter->reset_task); 2847 2838 mutex_unlock(&adapter->crit_lock); 2848 - queue_delayed_work(iavf_wq, 2839 + queue_delayed_work(adapter->wq, 2849 2840 &adapter->watchdog_task, HZ * 2); 2850 2841 return; 2851 2842 } ··· 2854 2845 mutex_unlock(&adapter->crit_lock); 2855 2846 restart_watchdog: 2856 2847 if (adapter->state >= __IAVF_DOWN) 2857 - queue_work(iavf_wq, &adapter->adminq_task); 2848 + queue_work(adapter->wq, &adapter->adminq_task); 2858 2849 if (adapter->aq_required) 2859 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 2850 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2860 2851 msecs_to_jiffies(20)); 2861 2852 else 2862 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, HZ * 2); 2853 + queue_delayed_work(adapter->wq, &adapter->watchdog_task, 2854 + HZ * 2); 2863 2855 } 2864 2856 2865 2857 /** ··· 2962 2952 */ 2963 2953 if (!mutex_trylock(&adapter->crit_lock)) { 2964 2954 if (adapter->state != __IAVF_REMOVE) 2965 - queue_work(iavf_wq, &adapter->reset_task); 2955 + queue_work(adapter->wq, &adapter->reset_task); 2966 2956 2967 2957 goto reset_finish; 2968 2958 } ··· 3126 3116 bitmap_clear(adapter->vsi.active_cvlans, 0, VLAN_N_VID); 3127 3117 bitmap_clear(adapter->vsi.active_svlans, 0, VLAN_N_VID); 3128 3118 3129 - mod_delayed_work(iavf_wq, &adapter->watchdog_task, 2); 3119 + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 2); 3130 3120 3131 3121 /* We were running when the reset started, so we need to restore some 3132 3122 * state here. 
··· 3218 3208 if (adapter->state == __IAVF_REMOVE) 3219 3209 return; 3220 3210 3221 - queue_work(iavf_wq, &adapter->adminq_task); 3211 + queue_work(adapter->wq, &adapter->adminq_task); 3222 3212 goto out; 3223 3213 } 3224 3214 ··· 3242 3232 } while (pending); 3243 3233 mutex_unlock(&adapter->crit_lock); 3244 3234 3245 - if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES)) { 3246 - if (adapter->netdev_registered || 3247 - !test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) { 3248 - struct net_device *netdev = adapter->netdev; 3249 - 3250 - rtnl_lock(); 3251 - netdev_update_features(netdev); 3252 - rtnl_unlock(); 3253 - /* Request VLAN offload settings */ 3254 - if (VLAN_V2_ALLOWED(adapter)) 3255 - iavf_set_vlan_offload_features 3256 - (adapter, 0, netdev->features); 3257 - 3258 - iavf_set_queue_vlan_tag_loc(adapter); 3259 - } 3260 - 3261 - adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES; 3262 - } 3263 3235 if ((adapter->flags & 3264 3236 (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) || 3265 3237 adapter->state == __IAVF_RESETTING) ··· 4341 4349 4342 4350 if (netif_running(netdev)) { 4343 4351 adapter->flags |= IAVF_FLAG_RESET_NEEDED; 4344 - queue_work(iavf_wq, &adapter->reset_task); 4352 + queue_work(adapter->wq, &adapter->reset_task); 4345 4353 } 4346 4354 4347 4355 return 0; ··· 4890 4898 hw = &adapter->hw; 4891 4899 hw->back = adapter; 4892 4900 4901 + adapter->wq = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, 4902 + iavf_driver_name); 4903 + if (!adapter->wq) { 4904 + err = -ENOMEM; 4905 + goto err_alloc_wq; 4906 + } 4907 + 4893 4908 adapter->msg_enable = BIT(DEFAULT_DEBUG_LEVEL_SHIFT) - 1; 4894 4909 iavf_change_state(adapter, __IAVF_STARTUP); 4895 4910 ··· 4941 4942 INIT_WORK(&adapter->adminq_task, iavf_adminq_task); 4942 4943 INIT_DELAYED_WORK(&adapter->watchdog_task, iavf_watchdog_task); 4943 4944 INIT_DELAYED_WORK(&adapter->client_task, iavf_client_task); 4944 - queue_delayed_work(iavf_wq, &adapter->watchdog_task, 4945 + 
queue_delayed_work(adapter->wq, &adapter->watchdog_task, 4945 4946 msecs_to_jiffies(5 * (pdev->devfn & 0x07))); 4946 4947 4947 4948 /* Setup the wait queue for indicating transition to down status */ ··· 4953 4954 return 0; 4954 4955 4955 4956 err_ioremap: 4957 + destroy_workqueue(adapter->wq); 4958 + err_alloc_wq: 4956 4959 free_netdev(netdev); 4957 4960 err_alloc_etherdev: 4958 4961 pci_disable_pcie_error_reporting(pdev); ··· 5024 5023 return err; 5025 5024 } 5026 5025 5027 - queue_work(iavf_wq, &adapter->reset_task); 5026 + queue_work(adapter->wq, &adapter->reset_task); 5028 5027 5029 5028 netif_device_attach(adapter->netdev); 5030 5029 ··· 5171 5170 } 5172 5171 spin_unlock_bh(&adapter->adv_rss_lock); 5173 5172 5173 + destroy_workqueue(adapter->wq); 5174 + 5174 5175 free_netdev(netdev); 5175 5176 5176 5177 pci_disable_pcie_error_reporting(pdev); ··· 5199 5196 **/ 5200 5197 static int __init iavf_init_module(void) 5201 5198 { 5202 - int ret; 5203 - 5204 5199 pr_info("iavf: %s\n", iavf_driver_string); 5205 5200 5206 5201 pr_info("%s\n", iavf_copyright); 5207 5202 5208 - iavf_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM, 1, 5209 - iavf_driver_name); 5210 - if (!iavf_wq) { 5211 - pr_err("%s: Failed to create workqueue\n", iavf_driver_name); 5212 - return -ENOMEM; 5213 - } 5214 - 5215 - ret = pci_register_driver(&iavf_driver); 5216 - if (ret) 5217 - destroy_workqueue(iavf_wq); 5218 - 5219 - return ret; 5203 + return pci_register_driver(&iavf_driver); 5220 5204 } 5221 5205 5222 5206 module_init(iavf_init_module); ··· 5217 5227 static void __exit iavf_exit_module(void) 5218 5228 { 5219 5229 pci_unregister_driver(&iavf_driver); 5220 - destroy_workqueue(iavf_wq); 5221 5230 } 5222 5231 5223 5232 module_exit(iavf_exit_module);
+9 -1
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 1952 1952 if (!(adapter->flags & IAVF_FLAG_RESET_PENDING)) { 1953 1953 adapter->flags |= IAVF_FLAG_RESET_PENDING; 1954 1954 dev_info(&adapter->pdev->dev, "Scheduling reset task\n"); 1955 - queue_work(iavf_wq, &adapter->reset_task); 1955 + queue_work(adapter->wq, &adapter->reset_task); 1956 1956 } 1957 1957 break; 1958 1958 default: ··· 2226 2226 2227 2227 iavf_process_config(adapter); 2228 2228 adapter->flags |= IAVF_FLAG_SETUP_NETDEV_FEATURES; 2229 + 2230 + /* Request VLAN offload settings */ 2231 + if (VLAN_V2_ALLOWED(adapter)) 2232 + iavf_set_vlan_offload_features(adapter, 0, 2233 + netdev->features); 2234 + 2235 + iavf_set_queue_vlan_tag_loc(adapter); 2236 + 2229 2237 was_mac_changed = !ether_addr_equal(netdev->dev_addr, 2230 2238 adapter->hw.mac.addr); 2231 2239
+1 -1
drivers/net/ethernet/intel/ice/ice.h
··· 880 880 void ice_set_ethtool_safe_mode_ops(struct net_device *netdev); 881 881 u16 ice_get_avail_txq_count(struct ice_pf *pf); 882 882 u16 ice_get_avail_rxq_count(struct ice_pf *pf); 883 - int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx); 883 + int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked); 884 884 void ice_update_vsi_stats(struct ice_vsi *vsi); 885 885 void ice_update_pf_stats(struct ice_pf *pf); 886 886 void
+13 -10
drivers/net/ethernet/intel/ice/ice_dcb_lib.c
··· 441 441 goto out; 442 442 } 443 443 444 - ice_pf_dcb_recfg(pf); 444 + ice_pf_dcb_recfg(pf, false); 445 445 446 446 out: 447 447 /* enable previously downed VSIs */ ··· 731 731 /** 732 732 * ice_pf_dcb_recfg - Reconfigure all VEBs and VSIs 733 733 * @pf: pointer to the PF struct 734 + * @locked: is adev device lock held 734 735 * 735 736 * Assumed caller has already disabled all VSIs before 736 737 * calling this function. Reconfiguring DCB based on 737 738 * local_dcbx_cfg. 738 739 */ 739 - void ice_pf_dcb_recfg(struct ice_pf *pf) 740 + void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked) 740 741 { 741 742 struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg; 742 743 struct iidc_event *event; ··· 784 783 if (vsi->type == ICE_VSI_PF) 785 784 ice_dcbnl_set_all(vsi); 786 785 } 787 - /* Notify the AUX drivers that TC change is finished */ 788 - event = kzalloc(sizeof(*event), GFP_KERNEL); 789 - if (!event) 790 - return; 786 + if (!locked) { 787 + /* Notify the AUX drivers that TC change is finished */ 788 + event = kzalloc(sizeof(*event), GFP_KERNEL); 789 + if (!event) 790 + return; 791 791 792 - set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type); 793 - ice_send_event_to_aux(pf, event); 794 - kfree(event); 792 + set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type); 793 + ice_send_event_to_aux(pf, event); 794 + kfree(event); 795 + } 795 796 } 796 797 797 798 /** ··· 1047 1044 } 1048 1045 1049 1046 /* changes in configuration update VSI */ 1050 - ice_pf_dcb_recfg(pf); 1047 + ice_pf_dcb_recfg(pf, false); 1051 1048 1052 1049 /* enable previously downed VSIs */ 1053 1050 ice_dcb_ena_dis_vsi(pf, true, true);
+2 -2
drivers/net/ethernet/intel/ice/ice_dcb_lib.h
··· 23 23 int 24 24 ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked); 25 25 int ice_dcb_bwchk(struct ice_pf *pf, struct ice_dcbx_cfg *dcbcfg); 26 - void ice_pf_dcb_recfg(struct ice_pf *pf); 26 + void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked); 27 27 void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi); 28 28 int ice_init_pf_dcb(struct ice_pf *pf, bool locked); 29 29 void ice_update_dcb_stats(struct ice_pf *pf); ··· 128 128 return 0; 129 129 } 130 130 131 - static inline void ice_pf_dcb_recfg(struct ice_pf *pf) { } 131 + static inline void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked) { } 132 132 static inline void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi) { } 133 133 static inline void ice_update_dcb_stats(struct ice_pf *pf) { } 134 134 static inline void
+24 -4
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 3641 3641 struct ice_vsi *vsi = np->vsi; 3642 3642 struct ice_pf *pf = vsi->back; 3643 3643 int new_rx = 0, new_tx = 0; 3644 + bool locked = false; 3644 3645 u32 curr_combined; 3646 + int ret = 0; 3645 3647 3646 3648 /* do not support changing channels in Safe Mode */ 3647 3649 if (ice_is_safe_mode(pf)) { ··· 3707 3705 return -EINVAL; 3708 3706 } 3709 3707 3710 - ice_vsi_recfg_qs(vsi, new_rx, new_tx); 3708 + if (pf->adev) { 3709 + mutex_lock(&pf->adev_mutex); 3710 + device_lock(&pf->adev->dev); 3711 + locked = true; 3712 + if (pf->adev->dev.driver) { 3713 + netdev_err(dev, "Cannot change channels when RDMA is active\n"); 3714 + ret = -EBUSY; 3715 + goto adev_unlock; 3716 + } 3717 + } 3711 3718 3712 - if (!netif_is_rxfh_configured(dev)) 3713 - return ice_vsi_set_dflt_rss_lut(vsi, new_rx); 3719 + ice_vsi_recfg_qs(vsi, new_rx, new_tx, locked); 3720 + 3721 + if (!netif_is_rxfh_configured(dev)) { 3722 + ret = ice_vsi_set_dflt_rss_lut(vsi, new_rx); 3723 + goto adev_unlock; 3724 + } 3714 3725 3715 3726 /* Update rss_size due to change in Rx queues */ 3716 3727 vsi->rss_size = ice_get_valid_rss_size(&pf->hw, new_rx); 3717 3728 3718 - return 0; 3729 + adev_unlock: 3730 + if (locked) { 3731 + device_unlock(&pf->adev->dev); 3732 + mutex_unlock(&pf->adev_mutex); 3733 + } 3734 + return ret; 3719 3735 } 3720 3736 3721 3737 /**
-3
drivers/net/ethernet/intel/ice/ice_lib.c
··· 3235 3235 } 3236 3236 } 3237 3237 3238 - if (vsi->type == ICE_VSI_PF) 3239 - ice_devlink_destroy_pf_port(pf); 3240 - 3241 3238 if (vsi->type == ICE_VSI_VF && 3242 3239 vsi->agg_node && vsi->agg_node->valid) 3243 3240 vsi->agg_node->num_vsis--;
+20 -10
drivers/net/ethernet/intel/ice/ice_main.c
··· 4195 4195 * @vsi: VSI being changed 4196 4196 * @new_rx: new number of Rx queues 4197 4197 * @new_tx: new number of Tx queues 4198 + * @locked: is adev device_lock held 4198 4199 * 4199 4200 * Only change the number of queues if new_tx, or new_rx is non-0. 4200 4201 * 4201 4202 * Returns 0 on success. 4202 4203 */ 4203 - int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx) 4204 + int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked) 4204 4205 { 4205 4206 struct ice_pf *pf = vsi->back; 4206 4207 int err = 0, timeout = 50; ··· 4230 4229 4231 4230 ice_vsi_close(vsi); 4232 4231 ice_vsi_rebuild(vsi, false); 4233 - ice_pf_dcb_recfg(pf); 4232 + ice_pf_dcb_recfg(pf, locked); 4234 4233 ice_vsi_open(vsi); 4235 4234 done: 4236 4235 clear_bit(ICE_CFG_BUSY, pf->state); ··· 4591 4590 } 4592 4591 4593 4592 /** 4594 - * ice_register_netdev - register netdev and devlink port 4593 + * ice_register_netdev - register netdev 4595 4594 * @pf: pointer to the PF struct 4596 4595 */ 4597 4596 static int ice_register_netdev(struct ice_pf *pf) ··· 4603 4602 if (!vsi || !vsi->netdev) 4604 4603 return -EIO; 4605 4604 4606 - err = ice_devlink_create_pf_port(pf); 4607 - if (err) 4608 - goto err_devlink_create; 4609 - 4610 - SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port); 4611 4605 err = register_netdev(vsi->netdev); 4612 4606 if (err) 4613 4607 goto err_register_netdev; ··· 4613 4617 4614 4618 return 0; 4615 4619 err_register_netdev: 4616 - ice_devlink_destroy_pf_port(pf); 4617 - err_devlink_create: 4618 4620 free_netdev(vsi->netdev); 4619 4621 vsi->netdev = NULL; 4620 4622 clear_bit(ICE_VSI_NETDEV_ALLOCD, vsi->state); ··· 4630 4636 ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent) 4631 4637 { 4632 4638 struct device *dev = &pdev->dev; 4639 + struct ice_vsi *vsi; 4633 4640 struct ice_pf *pf; 4634 4641 struct ice_hw *hw; 4635 4642 int i, err; ··· 4913 4918 pcie_print_link_status(pf->pdev); 4914 4919 4915 4920 
probe_done: 4921 + err = ice_devlink_create_pf_port(pf); 4922 + if (err) 4923 + goto err_create_pf_port; 4924 + 4925 + vsi = ice_get_main_vsi(pf); 4926 + if (!vsi || !vsi->netdev) { 4927 + err = -EINVAL; 4928 + goto err_netdev_reg; 4929 + } 4930 + 4931 + SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port); 4932 + 4916 4933 err = ice_register_netdev(pf); 4917 4934 if (err) 4918 4935 goto err_netdev_reg; ··· 4962 4955 err_devlink_reg_param: 4963 4956 ice_devlink_unregister_params(pf); 4964 4957 err_netdev_reg: 4958 + ice_devlink_destroy_pf_port(pf); 4959 + err_create_pf_port: 4965 4960 err_send_version_unroll: 4966 4961 ice_vsi_release_all(pf); 4967 4962 err_alloc_sw_unroll: ··· 5092 5083 ice_setup_mc_magic_wake(pf); 5093 5084 ice_vsi_release_all(pf); 5094 5085 mutex_destroy(&(&pf->hw)->fdir_fltr_lock); 5086 + ice_devlink_destroy_pf_port(pf); 5095 5087 ice_set_wake(pf); 5096 5088 ice_free_irq_msix_misc(pf); 5097 5089 ice_for_each_vsi(pf, i) {
+9 -5
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 417 417 * 418 418 * We need to convert the system time value stored in the RX/TXSTMP registers 419 419 * into a hwtstamp which can be used by the upper level timestamping functions. 420 + * 421 + * Returns 0 on success. 420 422 **/ 421 - static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter, 422 - struct skb_shared_hwtstamps *hwtstamps, 423 - u64 systim) 423 + static int igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter, 424 + struct skb_shared_hwtstamps *hwtstamps, 425 + u64 systim) 424 426 { 425 427 switch (adapter->hw.mac.type) { 426 428 case igc_i225: ··· 432 430 systim & 0xFFFFFFFF); 433 431 break; 434 432 default: 435 - break; 433 + return -EINVAL; 436 434 } 435 + return 0; 437 436 } 438 437 439 438 /** ··· 655 652 656 653 regval = rd32(IGC_TXSTMPL); 657 654 regval |= (u64)rd32(IGC_TXSTMPH) << 32; 658 - igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval); 655 + if (igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval)) 656 + return; 659 657 660 658 switch (adapter->link_speed) { 661 659 case SPEED_10:
+28 -9
drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
··· 1500 1500 BIT(DEVLINK_PARAM_CMODE_RUNTIME), 1501 1501 rvu_af_dl_dwrr_mtu_get, rvu_af_dl_dwrr_mtu_set, 1502 1502 rvu_af_dl_dwrr_mtu_validate), 1503 + }; 1504 + 1505 + static const struct devlink_param rvu_af_dl_param_exact_match[] = { 1503 1506 DEVLINK_PARAM_DRIVER(RVU_AF_DEVLINK_PARAM_ID_NPC_EXACT_FEATURE_DISABLE, 1504 1507 "npc_exact_feature_disable", DEVLINK_PARAM_TYPE_STRING, 1505 1508 BIT(DEVLINK_PARAM_CMODE_RUNTIME), ··· 1559 1556 { 1560 1557 struct rvu_devlink *rvu_dl; 1561 1558 struct devlink *dl; 1562 - size_t size; 1563 1559 int err; 1564 1560 1565 1561 dl = devlink_alloc(&rvu_devlink_ops, sizeof(struct rvu_devlink), ··· 1580 1578 goto err_dl_health; 1581 1579 } 1582 1580 1583 - /* Register exact match devlink only for CN10K-B */ 1584 - size = ARRAY_SIZE(rvu_af_dl_params); 1585 - if (!rvu_npc_exact_has_match_table(rvu)) 1586 - size -= 1; 1587 - 1588 - err = devlink_params_register(dl, rvu_af_dl_params, size); 1581 + err = devlink_params_register(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params)); 1589 1582 if (err) { 1590 1583 dev_err(rvu->dev, 1591 1584 "devlink params register failed with error %d", err); 1592 1585 goto err_dl_health; 1593 1586 } 1594 1587 1588 + /* Register exact match devlink only for CN10K-B */ 1589 + if (!rvu_npc_exact_has_match_table(rvu)) 1590 + goto done; 1591 + 1592 + err = devlink_params_register(dl, rvu_af_dl_param_exact_match, 1593 + ARRAY_SIZE(rvu_af_dl_param_exact_match)); 1594 + if (err) { 1595 + dev_err(rvu->dev, 1596 + "devlink exact match params register failed with error %d", err); 1597 + goto err_dl_exact_match; 1598 + } 1599 + 1600 + done: 1595 1601 devlink_register(dl); 1596 1602 return 0; 1603 + 1604 + err_dl_exact_match: 1605 + devlink_params_unregister(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params)); 1597 1606 1598 1607 err_dl_health: 1599 1608 rvu_health_reporters_destroy(rvu); ··· 1618 1605 struct devlink *dl = rvu_dl->dl; 1619 1606 1620 1607 devlink_unregister(dl); 1621 - devlink_params_unregister(dl, 
rvu_af_dl_params, 1622 - ARRAY_SIZE(rvu_af_dl_params)); 1608 + 1609 + devlink_params_unregister(dl, rvu_af_dl_params, ARRAY_SIZE(rvu_af_dl_params)); 1610 + 1611 + /* Unregister exact match devlink only for CN10K-B */ 1612 + if (rvu_npc_exact_has_match_table(rvu)) 1613 + devlink_params_unregister(dl, rvu_af_dl_param_exact_match, 1614 + ARRAY_SIZE(rvu_af_dl_param_exact_match)); 1615 + 1623 1616 rvu_health_reporters_destroy(rvu); 1624 1617 devlink_free(dl); 1625 1618 }
+4 -2
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 3177 3177 struct mtk_eth *eth = mac->hw; 3178 3178 int i, err; 3179 3179 3180 - if (mtk_uses_dsa(dev) && !eth->prog) { 3180 + if ((mtk_uses_dsa(dev) && !eth->prog) && 3181 + !(mac->id == 1 && MTK_HAS_CAPS(eth->soc->caps, MTK_GMAC1_TRGMII))) { 3181 3182 for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) { 3182 3183 struct metadata_dst *md_dst = eth->dsa_meta[i]; 3183 3184 ··· 3195 3194 } 3196 3195 } else { 3197 3196 /* Hardware special tag parsing needs to be disabled if at least 3198 - * one MAC does not use DSA. 3197 + * one MAC does not use DSA, or the second MAC of the MT7621 and 3198 + * MT7623 SoCs is being used. 3199 3199 */ 3200 3200 u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL); 3201 3201 val &= ~MTK_CDMP_STAG_EN;
+3 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.h
··· 519 519 #define SGMII_SPEED_10 FIELD_PREP(SGMII_SPEED_MASK, 0) 520 520 #define SGMII_SPEED_100 FIELD_PREP(SGMII_SPEED_MASK, 1) 521 521 #define SGMII_SPEED_1000 FIELD_PREP(SGMII_SPEED_MASK, 2) 522 - #define SGMII_DUPLEX_FULL BIT(4) 522 + #define SGMII_DUPLEX_HALF BIT(4) 523 523 #define SGMII_IF_MODE_BIT5 BIT(5) 524 524 #define SGMII_REMOTE_FAULT_DIS BIT(8) 525 525 #define SGMII_CODE_SYNC_SET_VAL BIT(9) ··· 1036 1036 * @regmap: The register map pointing at the range used to setup 1037 1037 * SGMII modes 1038 1038 * @ana_rgc3: The offset refers to register ANA_RGC3 related to regmap 1039 + * @interface: Currently configured interface mode 1039 1040 * @pcs: Phylink PCS structure 1040 1041 */ 1041 1042 struct mtk_pcs { 1042 1043 struct regmap *regmap; 1043 1044 u32 ana_rgc3; 1045 + phy_interface_t interface; 1044 1046 struct phylink_pcs pcs; 1045 1047 }; 1046 1048
+1 -2
drivers/net/ethernet/mediatek/mtk_ppe.c
··· 615 615 u32 ib1_mask = mtk_get_ib1_pkt_type_mask(ppe->eth) | MTK_FOE_IB1_UDP; 616 616 int type; 617 617 618 - flow_info = kzalloc(offsetof(struct mtk_flow_entry, l2_data.end), 619 - GFP_ATOMIC); 618 + flow_info = kzalloc(sizeof(*flow_info), GFP_ATOMIC); 620 619 if (!flow_info) 621 620 return; 622 621
-1
drivers/net/ethernet/mediatek/mtk_ppe.h
··· 279 279 struct { 280 280 struct mtk_flow_entry *base_flow; 281 281 struct hlist_node list; 282 - struct {} end; 283 282 } l2_data; 284 283 }; 285 284 struct rhash_head node;
+32 -14
drivers/net/ethernet/mediatek/mtk_sgmii.c
··· 43 43 int advertise, link_timer; 44 44 bool changed, use_an; 45 45 46 - if (interface == PHY_INTERFACE_MODE_2500BASEX) 47 - rgc3 = RG_PHY_SPEED_3_125G; 48 - else 49 - rgc3 = 0; 50 - 51 46 advertise = phylink_mii_c22_pcs_encode_advertisement(interface, 52 47 advertising); 53 48 if (advertise < 0) ··· 83 88 bmcr = 0; 84 89 } 85 90 86 - /* Configure the underlying interface speed */ 87 - regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3, 88 - RG_PHY_SPEED_3_125G, rgc3); 91 + if (mpcs->interface != interface) { 92 + /* PHYA power down */ 93 + regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 94 + SGMII_PHYA_PWD, SGMII_PHYA_PWD); 95 + 96 + if (interface == PHY_INTERFACE_MODE_2500BASEX) 97 + rgc3 = RG_PHY_SPEED_3_125G; 98 + else 99 + rgc3 = 0; 100 + 101 + /* Configure the underlying interface speed */ 102 + regmap_update_bits(mpcs->regmap, mpcs->ana_rgc3, 103 + RG_PHY_SPEED_3_125G, rgc3); 104 + 105 + mpcs->interface = interface; 106 + } 89 107 90 108 /* Update the advertisement, noting whether it has changed */ 91 109 regmap_update_bits_check(mpcs->regmap, SGMSYS_PCS_ADVERTISE, ··· 116 108 regmap_update_bits(mpcs->regmap, SGMSYS_PCS_CONTROL_1, 117 109 SGMII_AN_RESTART | SGMII_AN_ENABLE, bmcr); 118 110 119 - /* Release PHYA power down state */ 120 - regmap_update_bits(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 121 - SGMII_PHYA_PWD, 0); 111 + /* Release PHYA power down state 112 + * Only removing bit SGMII_PHYA_PWD isn't enough. 113 + * There are cases when the SGMII_PHYA_PWD register contains 0x9 which 114 + * prevents SGMII from working. The SGMII still shows link but no traffic 115 + * can flow. Writing 0x0 to the PHYA_PWD register fix the issue. 0x0 was 116 + * taken from a good working state of the SGMII interface. 117 + * Unknown how much the QPHY needs but it is racy without a sleep. 118 + * Tested on mt7622 & mt7986. 
119 + */ 120 + usleep_range(50, 100); 121 + regmap_write(mpcs->regmap, SGMSYS_QPHY_PWR_STATE_CTRL, 0); 122 122 123 123 return changed; 124 124 } ··· 154 138 else 155 139 sgm_mode = SGMII_SPEED_1000; 156 140 157 - if (duplex == DUPLEX_FULL) 158 - sgm_mode |= SGMII_DUPLEX_FULL; 141 + if (duplex != DUPLEX_FULL) 142 + sgm_mode |= SGMII_DUPLEX_HALF; 159 143 160 144 regmap_update_bits(mpcs->regmap, SGMSYS_SGMII_MODE, 161 - SGMII_DUPLEX_FULL | SGMII_SPEED_MASK, 145 + SGMII_DUPLEX_HALF | SGMII_SPEED_MASK, 162 146 sgm_mode); 163 147 } 164 148 } ··· 187 171 return PTR_ERR(ss->pcs[i].regmap); 188 172 189 173 ss->pcs[i].pcs.ops = &mtk_pcs_ops; 174 + ss->pcs[i].pcs.poll = true; 175 + ss->pcs[i].interface = PHY_INTERFACE_MODE_NA; 190 176 } 191 177 192 178 return 0;
+3 -3
drivers/net/ethernet/microchip/lan966x/lan966x_fdma.c
··· 608 608 lan966x_fdma_rx_reload(rx); 609 609 } 610 610 611 - if (counter < weight && napi_complete_done(napi, counter)) 612 - lan_wr(0xff, lan966x, FDMA_INTR_DB_ENA); 613 - 614 611 if (redirect) 615 612 xdp_do_flush(); 613 + 614 + if (counter < weight && napi_complete_done(napi, counter)) 615 + lan_wr(0xff, lan966x, FDMA_INTR_DB_ENA); 616 616 617 617 return counter; 618 618 }
+8 -1
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 1259 1259 gic->handler = NULL; 1260 1260 gic->arg = NULL; 1261 1261 1262 + if (!i) 1263 + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_hwc@pci:%s", 1264 + pci_name(pdev)); 1265 + else 1266 + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s", 1267 + i - 1, pci_name(pdev)); 1268 + 1262 1269 irq = pci_irq_vector(pdev, i); 1263 1270 if (irq < 0) { 1264 1271 err = irq; 1265 1272 goto free_mask; 1266 1273 } 1267 1274 1268 - err = request_irq(irq, mana_gd_intr, 0, "mana_intr", gic); 1275 + err = request_irq(irq, mana_gd_intr, 0, gic->name, gic); 1269 1276 if (err) 1270 1277 goto free_mask; 1271 1278 irq_set_affinity_and_hint(irq, req_mask);
+7 -1
drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
··· 460 460 sizeof(struct nfp_tun_neigh_v4); 461 461 unsigned long cookie = (unsigned long)neigh; 462 462 struct nfp_flower_priv *priv = app->priv; 463 + struct nfp_tun_neigh_lag lag_info; 463 464 struct nfp_neigh_entry *nn_entry; 464 465 u32 port_id; 465 466 u8 mtype; ··· 468 467 port_id = nfp_flower_get_port_id_from_netdev(app, netdev); 469 468 if (!port_id) 470 469 return; 470 + 471 + if ((port_id & NFP_FL_LAG_OUT) == NFP_FL_LAG_OUT) { 472 + memset(&lag_info, 0, sizeof(struct nfp_tun_neigh_lag)); 473 + nfp_flower_lag_get_info_from_netdev(app, netdev, &lag_info); 474 + } 471 475 472 476 spin_lock_bh(&priv->predt_lock); 473 477 nn_entry = rhashtable_lookup_fast(&priv->neigh_table, &cookie, ··· 521 515 neigh_ha_snapshot(common->dst_addr, neigh, netdev); 522 516 523 517 if ((port_id & NFP_FL_LAG_OUT) == NFP_FL_LAG_OUT) 524 - nfp_flower_lag_get_info_from_netdev(app, netdev, lag); 518 + memcpy(lag, &lag_info, sizeof(struct nfp_tun_neigh_lag)); 525 519 common->port_id = cpu_to_be32(port_id); 526 520 527 521 if (rhashtable_insert_fast(&priv->neigh_table,
+4 -3
drivers/net/ethernet/qlogic/qede/qede_fp.c
··· 1438 1438 rx_work_done = (likely(fp->type & QEDE_FASTPATH_RX) && 1439 1439 qede_has_rx_work(fp->rxq)) ? 1440 1440 qede_rx_int(fp, budget) : 0; 1441 + 1442 + if (fp->xdp_xmit & QEDE_XDP_REDIRECT) 1443 + xdp_do_flush(); 1444 + 1441 1445 /* Handle case where we are called by netpoll with a budget of 0 */ 1442 1446 if (rx_work_done < budget || !budget) { 1443 1447 if (!qede_poll_is_more_work(fp)) { ··· 1460 1456 fp->xdp_tx->tx_db.data.bd_prod = cpu_to_le16(xdp_prod); 1461 1457 qede_update_tx_producer(fp->xdp_tx); 1462 1458 } 1463 - 1464 - if (fp->xdp_xmit & QEDE_XDP_REDIRECT) 1465 - xdp_do_flush_map(); 1466 1459 1467 1460 return rx_work_done; 1468 1461 }
+8 -2
drivers/net/ethernet/renesas/ravb_main.c
··· 1101 1101 ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS); 1102 1102 if (eis & EIS_QFS) { 1103 1103 ris2 = ravb_read(ndev, RIS2); 1104 - ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED), 1104 + ravb_write(ndev, ~(RIS2_QFF0 | RIS2_QFF1 | RIS2_RFFF | RIS2_RESERVED), 1105 1105 RIS2); 1106 1106 1107 1107 /* Receive Descriptor Empty int */ 1108 1108 if (ris2 & RIS2_QFF0) 1109 1109 priv->stats[RAVB_BE].rx_over_errors++; 1110 1110 1111 - /* Receive Descriptor Empty int */ 1111 + /* Receive Descriptor Empty int */ 1112 1112 if (ris2 & RIS2_QFF1) 1113 1113 priv->stats[RAVB_NC].rx_over_errors++; 1114 1114 ··· 2973 2973 else 2974 2974 ret = ravb_close(ndev); 2975 2975 2976 + if (priv->info->ccc_gac) 2977 + ravb_ptp_stop(ndev); 2978 + 2976 2979 return ret; 2977 2980 } 2978 2981 ··· 3013 3010 3014 3011 /* Restore descriptor base address table */ 3015 3012 ravb_write(ndev, priv->desc_bat_dma, DBAT); 3013 + 3014 + if (priv->info->ccc_gac) 3015 + ravb_ptp_init(ndev, priv->pdev); 3016 3016 3017 3017 if (netif_running(ndev)) { 3018 3018 if (priv->wol_enabled) {
+13 -9
drivers/net/ethernet/renesas/rswitch.c
··· 1074 1074 port = NULL; 1075 1075 goto out; 1076 1076 } 1077 - if (index == rdev->etha->index) 1077 + if (index == rdev->etha->index) { 1078 + if (!of_device_is_available(port)) 1079 + port = NULL; 1078 1080 break; 1081 + } 1079 1082 } 1080 1083 1081 1084 out: ··· 1109 1106 1110 1107 port = rswitch_get_port_node(rdev); 1111 1108 if (!port) 1112 - return -ENODEV; 1109 + return 0; /* ignored */ 1113 1110 1114 1111 err = of_get_phy_mode(port, &rdev->etha->phy_interface); 1115 1112 of_node_put(port); ··· 1327 1324 { 1328 1325 int i, err; 1329 1326 1330 - for (i = 0; i < RSWITCH_NUM_PORTS; i++) { 1327 + rswitch_for_each_enabled_port(priv, i) { 1331 1328 err = rswitch_ether_port_init_one(priv->rdev[i]); 1332 1329 if (err) 1333 1330 goto err_init_one; 1334 1331 } 1335 1332 1336 - for (i = 0; i < RSWITCH_NUM_PORTS; i++) { 1333 + rswitch_for_each_enabled_port(priv, i) { 1337 1334 err = rswitch_serdes_init(priv->rdev[i]); 1338 1335 if (err) 1339 1336 goto err_serdes; ··· 1342 1339 return 0; 1343 1340 1344 1341 err_serdes: 1345 - for (i--; i >= 0; i--) 1342 + rswitch_for_each_enabled_port_continue_reverse(priv, i) 1346 1343 rswitch_serdes_deinit(priv->rdev[i]); 1347 1344 i = RSWITCH_NUM_PORTS; 1348 1345 1349 1346 err_init_one: 1350 - for (i--; i >= 0; i--) 1347 + rswitch_for_each_enabled_port_continue_reverse(priv, i) 1351 1348 rswitch_ether_port_deinit_one(priv->rdev[i]); 1352 1349 1353 1350 return err; ··· 1611 1608 netif_napi_add(ndev, &rdev->napi, rswitch_poll); 1612 1609 1613 1610 port = rswitch_get_port_node(rdev); 1611 + rdev->disabled = !port; 1614 1612 err = of_get_ethdev_address(port, ndev); 1615 1613 of_node_put(port); 1616 1614 if (err) { ··· 1711 1707 if (err) 1712 1708 goto err_ether_port_init_all; 1713 1709 1714 - for (i = 0; i < RSWITCH_NUM_PORTS; i++) { 1710 + rswitch_for_each_enabled_port(priv, i) { 1715 1711 err = register_netdev(priv->rdev[i]->ndev); 1716 1712 if (err) { 1717 - for (i--; i >= 0; i--) 1713 + 
rswitch_for_each_enabled_port_continue_reverse(priv, i) 1718 1714 unregister_netdev(priv->rdev[i]->ndev); 1719 1715 goto err_register_netdev; 1720 1716 } 1721 1717 } 1722 1718 1723 - for (i = 0; i < RSWITCH_NUM_PORTS; i++) 1719 + rswitch_for_each_enabled_port(priv, i) 1724 1720 netdev_info(priv->rdev[i]->ndev, "MAC address %pM\n", 1725 1721 priv->rdev[i]->ndev->dev_addr); 1726 1722
+12
drivers/net/ethernet/renesas/rswitch.h
··· 13 13 #define RSWITCH_MAX_NUM_QUEUES 128 14 14 15 15 #define RSWITCH_NUM_PORTS 3 16 + #define rswitch_for_each_enabled_port(priv, i) \ 17 + for (i = 0; i < RSWITCH_NUM_PORTS; i++) \ 18 + if (priv->rdev[i]->disabled) \ 19 + continue; \ 20 + else 21 + 22 + #define rswitch_for_each_enabled_port_continue_reverse(priv, i) \ 23 + for (i--; i >= 0; i--) \ 24 + if (priv->rdev[i]->disabled) \ 25 + continue; \ 26 + else 16 27 17 28 #define TX_RING_SIZE 1024 18 29 #define RX_RING_SIZE 1024 ··· 949 938 struct rswitch_gwca_queue *tx_queue; 950 939 struct rswitch_gwca_queue *rx_queue; 951 940 u8 ts_tag; 941 + bool disabled; 952 942 953 943 int port; 954 944 struct rswitch_etha *etha;
+4 -1
drivers/net/ethernet/sfc/efx.c
··· 1003 1003 /* Determine netdevice features */ 1004 1004 net_dev->features |= (efx->type->offload_features | NETIF_F_SG | 1005 1005 NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_RXALL); 1006 - if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM)) 1006 + if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM)) { 1007 1007 net_dev->features |= NETIF_F_TSO6; 1008 + if (efx_has_cap(efx, TX_TSO_V2_ENCAP)) 1009 + net_dev->hw_enc_features |= NETIF_F_TSO6; 1010 + } 1008 1011 /* Check whether device supports TSO */ 1009 1012 if (!efx->type->tso_versions || !efx->type->tso_versions(efx)) 1010 1013 net_dev->features &= ~NETIF_F_ALL_TSO;
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
··· 560 560 plat_dat->has_gmac4 = 1; 561 561 plat_dat->pmt = 1; 562 562 plat_dat->tso_en = of_property_read_bool(np, "snps,tso"); 563 + if (of_device_is_compatible(np, "qcom,qcs404-ethqos")) 564 + plat_dat->rx_clk_runs_in_lpi = 1; 563 565 564 566 ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res); 565 567 if (ret)
+2 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1080 1080 1081 1081 stmmac_mac_set(priv, priv->ioaddr, true); 1082 1082 if (phy && priv->dma_cap.eee) { 1083 - priv->eee_active = phy_init_eee(phy, 1) >= 0; 1083 + priv->eee_active = 1084 + phy_init_eee(phy, !priv->plat->rx_clk_runs_in_lpi) >= 0; 1084 1085 priv->eee_enabled = stmmac_eee_init(priv); 1085 1086 priv->tx_lpi_enabled = priv->eee_enabled; 1086 1087 stmmac_set_eee_pls(priv, priv->hw, true);
+2 -7
drivers/net/hyperv/netvsc.c
··· 987 987 void netvsc_dma_unmap(struct hv_device *hv_dev, 988 988 struct hv_netvsc_packet *packet) 989 989 { 990 - u32 page_count = packet->cp_partial ? 991 - packet->page_buf_cnt - packet->rmsg_pgcnt : 992 - packet->page_buf_cnt; 993 990 int i; 994 991 995 992 if (!hv_is_isolation_supported()) ··· 995 998 if (!packet->dma_range) 996 999 return; 997 1000 998 - for (i = 0; i < page_count; i++) 1001 + for (i = 0; i < packet->page_buf_cnt; i++) 999 1002 dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma, 1000 1003 packet->dma_range[i].mapping_size, 1001 1004 DMA_TO_DEVICE); ··· 1025 1028 struct hv_netvsc_packet *packet, 1026 1029 struct hv_page_buffer *pb) 1027 1030 { 1028 - u32 page_count = packet->cp_partial ? 1029 - packet->page_buf_cnt - packet->rmsg_pgcnt : 1030 - packet->page_buf_cnt; 1031 + u32 page_count = packet->page_buf_cnt; 1031 1032 dma_addr_t dma; 1032 1033 int i; 1033 1034
+16 -7
drivers/net/mdio/mdio-mux-meson-g12a.c
··· 4 4 */ 5 5 6 6 #include <linux/bitfield.h> 7 + #include <linux/delay.h> 7 8 #include <linux/clk.h> 8 9 #include <linux/clk-provider.h> 9 10 #include <linux/device.h> ··· 151 150 152 151 static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv) 153 152 { 153 + u32 value; 154 154 int ret; 155 155 156 156 /* Enable the phy clock */ ··· 165 163 166 164 /* Initialize ephy control */ 167 165 writel(EPHY_G12A_ID, priv->regs + ETH_PHY_CNTL0); 168 - writel(FIELD_PREP(PHY_CNTL1_ST_MODE, 3) | 169 - FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) | 170 - FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) | 171 - PHY_CNTL1_CLK_EN | 172 - PHY_CNTL1_CLKFREQ | 173 - PHY_CNTL1_PHY_ENB, 174 - priv->regs + ETH_PHY_CNTL1); 166 + 167 + /* Make sure we get a 0 -> 1 transition on the enable bit */ 168 + value = FIELD_PREP(PHY_CNTL1_ST_MODE, 3) | 169 + FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) | 170 + FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) | 171 + PHY_CNTL1_CLK_EN | 172 + PHY_CNTL1_CLKFREQ; 173 + writel(value, priv->regs + ETH_PHY_CNTL1); 175 174 writel(PHY_CNTL2_USE_INTERNAL | 176 175 PHY_CNTL2_SMI_SRC_MAC | 177 176 PHY_CNTL2_RX_CLK_EPHY, 178 177 priv->regs + ETH_PHY_CNTL2); 178 + 179 + value |= PHY_CNTL1_PHY_ENB; 180 + writel(value, priv->regs + ETH_PHY_CNTL1); 181 + 182 + /* The phy needs a bit of time to power up */ 183 + mdelay(10); 179 184 180 185 return 0; 181 186 }
+4 -2
drivers/net/phy/dp83822.c
··· 233 233 DP83822_ENERGY_DET_INT_EN | 234 234 DP83822_LINK_QUAL_INT_EN); 235 235 236 - if (!dp83822->fx_enabled) 236 + /* Private data pointer is NULL on DP83825/26 */ 237 + if (!dp83822 || !dp83822->fx_enabled) 237 238 misr_status |= DP83822_ANEG_COMPLETE_INT_EN | 238 239 DP83822_DUP_MODE_CHANGE_INT_EN | 239 240 DP83822_SPEED_CHANGED_INT_EN; ··· 254 253 DP83822_PAGE_RX_INT_EN | 255 254 DP83822_EEE_ERROR_CHANGE_INT_EN); 256 255 257 - if (!dp83822->fx_enabled) 256 + /* Private data pointer is NULL on DP83825/26 */ 257 + if (!dp83822 || !dp83822->fx_enabled) 258 258 misr_status |= DP83822_ANEG_ERR_INT_EN | 259 259 DP83822_WOL_PKT_INT_EN; 260 260
+2
drivers/net/phy/meson-gxl.c
··· 271 271 .handle_interrupt = meson_gxl_handle_interrupt, 272 272 .suspend = genphy_suspend, 273 273 .resume = genphy_resume, 274 + .read_mmd = genphy_read_mmd_unsupported, 275 + .write_mmd = genphy_write_mmd_unsupported, 274 276 }, 275 277 }; 276 278
+1 -1
drivers/net/phy/phy_device.c
··· 1517 1517 * another mac interface, so we should create a device link between 1518 1518 * phy dev and mac dev. 1519 1519 */ 1520 - if (phydev->mdio.bus->parent && dev->dev.parent != phydev->mdio.bus->parent) 1520 + if (dev && phydev->mdio.bus->parent && dev->dev.parent != phydev->mdio.bus->parent) 1521 1521 phydev->devlink = device_link_add(dev->dev.parent, &phydev->mdio.dev, 1522 1522 DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS); 1523 1523
+4 -4
drivers/net/virtio_net.c
··· 1677 1677 1678 1678 received = virtnet_receive(rq, budget, &xdp_xmit); 1679 1679 1680 + if (xdp_xmit & VIRTIO_XDP_REDIR) 1681 + xdp_do_flush(); 1682 + 1680 1683 /* Out of packets? */ 1681 1684 if (received < budget) 1682 1685 virtqueue_napi_complete(napi, rq->vq, received); 1683 - 1684 - if (xdp_xmit & VIRTIO_XDP_REDIR) 1685 - xdp_do_flush(); 1686 1686 1687 1687 if (xdp_xmit & VIRTIO_XDP_TX) { 1688 1688 sq = virtnet_xdp_get_sq(vi); ··· 2158 2158 cancel_delayed_work_sync(&vi->refill); 2159 2159 2160 2160 for (i = 0; i < vi->max_queue_pairs; i++) { 2161 - xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq); 2162 2161 napi_disable(&vi->rq[i].napi); 2162 + xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq); 2163 2163 virtnet_napi_tx_disable(&vi->sq[i].napi); 2164 2164 } 2165 2165
+10 -1
drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
··· 152 152 } 153 153 154 154 t7xx_pcie_mac_clear_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int); 155 + 156 + return IRQ_WAKE_THREAD; 157 + } 158 + 159 + static irqreturn_t t7xx_dpmaif_isr_thread(int irq, void *data) 160 + { 161 + struct dpmaif_isr_para *isr_para = data; 162 + struct dpmaif_ctrl *dpmaif_ctrl = isr_para->dpmaif_ctrl; 163 + 155 164 t7xx_dpmaif_irq_cb(isr_para); 156 165 t7xx_pcie_mac_set_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int); 157 166 return IRQ_HANDLED; ··· 197 188 t7xx_pcie_mac_clear_int(t7xx_dev, int_type); 198 189 199 190 t7xx_dev->intr_handler[int_type] = t7xx_dpmaif_isr_handler; 200 - t7xx_dev->intr_thread[int_type] = NULL; 191 + t7xx_dev->intr_thread[int_type] = t7xx_dpmaif_isr_thread; 201 192 t7xx_dev->callback_param[int_type] = isr_para; 202 193 203 194 t7xx_pcie_mac_clear_int_status(t7xx_dev, int_type);
+20 -9
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
··· 840 840 841 841 if (!rxq->que_started) { 842 842 atomic_set(&rxq->rx_processing, 0); 843 + pm_runtime_put_autosuspend(rxq->dpmaif_ctrl->dev); 843 844 dev_err(rxq->dpmaif_ctrl->dev, "Work RXQ: %d has not been started\n", rxq->index); 844 845 return work_done; 845 846 } 846 847 847 - if (!rxq->sleep_lock_pending) { 848 - pm_runtime_get_noresume(rxq->dpmaif_ctrl->dev); 848 + if (!rxq->sleep_lock_pending) 849 849 t7xx_pci_disable_sleep(t7xx_dev); 850 - } 851 850 852 851 ret = try_wait_for_completion(&t7xx_dev->sleep_lock_acquire); 853 852 if (!ret) { ··· 875 876 napi_complete_done(napi, work_done); 876 877 t7xx_dpmaif_clr_ip_busy_sts(&rxq->dpmaif_ctrl->hw_info); 877 878 t7xx_dpmaif_dlq_unmask_rx_done(&rxq->dpmaif_ctrl->hw_info, rxq->index); 879 + t7xx_pci_enable_sleep(rxq->dpmaif_ctrl->t7xx_dev); 880 + pm_runtime_mark_last_busy(rxq->dpmaif_ctrl->dev); 881 + pm_runtime_put_autosuspend(rxq->dpmaif_ctrl->dev); 882 + atomic_set(&rxq->rx_processing, 0); 878 883 } else { 879 884 t7xx_dpmaif_clr_ip_busy_sts(&rxq->dpmaif_ctrl->hw_info); 880 885 } 881 886 882 - t7xx_pci_enable_sleep(rxq->dpmaif_ctrl->t7xx_dev); 883 - pm_runtime_mark_last_busy(rxq->dpmaif_ctrl->dev); 884 - pm_runtime_put_noidle(rxq->dpmaif_ctrl->dev); 885 - atomic_set(&rxq->rx_processing, 0); 886 886 887 887 return work_done; 888 888 } ··· 889 891 void t7xx_dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask) 890 892 { 891 893 struct dpmaif_rx_queue *rxq; 892 - int qno; 894 + struct dpmaif_ctrl *ctrl; 895 + int qno, ret; 893 896 894 897 qno = ffs(que_mask) - 1; 895 898 if (qno < 0 || qno > DPMAIF_RXQ_NUM - 1) { ··· 899 900 } 900 901 901 902 rxq = &dpmaif_ctrl->rxq[qno]; 903 + ctrl = rxq->dpmaif_ctrl; 904 + /* We need to make sure that the modem has been resumed before 905 + calling napi. This can't be done inside the polling function 906 + as we could be blocked waiting for device to be resumed, 907 + which can't be done from softirq context the poll function 908 + is running in. 909 + */ 910 + ret = pm_runtime_resume_and_get(ctrl->dev); 911 + if (ret < 0 && ret != -EACCES) { 912 + dev_err(ctrl->dev, "Failed to resume device: %d\n", ret); 913 + return; 914 + } 902 915 napi_schedule(&rxq->napi); 903 916 }
+15 -1
drivers/net/wwan/t7xx/t7xx_netdev.c
··· 27 27 #include <linux/list.h> 28 28 #include <linux/netdev_features.h> 29 29 #include <linux/netdevice.h> 30 + #include <linux/pm_runtime.h> 30 31 #include <linux/skbuff.h> 31 32 #include <linux/types.h> 32 33 #include <linux/wwan.h> ··· 46 45 47 46 static void t7xx_ccmni_enable_napi(struct t7xx_ccmni_ctrl *ctlb) 48 47 { 49 - int i; 48 + struct dpmaif_ctrl *ctrl; 49 + int i, ret; 50 + 51 + ctrl = ctlb->hif_ctrl; 50 52 51 53 if (ctlb->is_napi_en) 52 54 return; 53 55 54 56 for (i = 0; i < RXQ_NUM; i++) { 57 + /* The usage count has to be bumped every time before calling 58 + * napi_schedule. It will be decreased in the poll routine, 59 + * right after napi_complete_done is called. 60 + */ 61 + ret = pm_runtime_resume_and_get(ctrl->dev); 62 + if (ret < 0) { 63 + dev_err(ctrl->dev, "Failed to resume device: %d\n", 64 + ret); 65 + return; 66 + } 55 67 napi_enable(ctlb->napi[i]); 56 68 napi_schedule(ctlb->napi[i]); 57 69 }
+2
drivers/net/wwan/t7xx/t7xx_pci.c
··· 121 121 iowrite32(T7XX_L1_BIT(0), IREG_BASE(t7xx_dev) + ENABLE_ASPM_LOWPWR); 122 122 atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED); 123 123 124 + pm_runtime_mark_last_busy(&t7xx_dev->pdev->dev); 125 + pm_runtime_allow(&t7xx_dev->pdev->dev); 124 126 pm_runtime_put_noidle(&t7xx_dev->pdev->dev); 125 127 } 126 128
+12 -2
drivers/nvme/host/auth.c
··· 45 45 int sess_key_len; 46 46 }; 47 47 48 + struct workqueue_struct *nvme_auth_wq; 49 + 48 50 #define nvme_auth_flags_from_qid(qid) \ 49 51 (qid == 0) ? 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED 50 52 #define nvme_auth_queue_from_qid(ctrl, qid) \ ··· 868 866 869 867 chap = &ctrl->dhchap_ctxs[qid]; 870 868 cancel_work_sync(&chap->auth_work); 871 - queue_work(nvme_wq, &chap->auth_work); 869 + queue_work(nvme_auth_wq, &chap->auth_work); 872 870 return 0; 873 871 } 874 872 EXPORT_SYMBOL_GPL(nvme_auth_negotiate); ··· 1010 1008 1011 1009 int __init nvme_init_auth(void) 1012 1010 { 1011 + nvme_auth_wq = alloc_workqueue("nvme-auth-wq", 1012 + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0); 1013 + if (!nvme_auth_wq) 1014 + return -ENOMEM; 1015 + 1013 1016 nvme_chap_buf_cache = kmem_cache_create("nvme-chap-buf-cache", 1014 1017 CHAP_BUF_SIZE, 0, SLAB_HWCACHE_ALIGN, NULL); 1015 1018 if (!nvme_chap_buf_cache) 1016 - return -ENOMEM; 1019 + goto err_destroy_workqueue; 1017 1020 1018 1021 nvme_chap_buf_pool = mempool_create(16, mempool_alloc_slab, 1019 1022 mempool_free_slab, nvme_chap_buf_cache); ··· 1028 1021 return 0; 1029 1022 err_destroy_chap_buf_cache: 1030 1023 kmem_cache_destroy(nvme_chap_buf_cache); 1024 + err_destroy_workqueue: 1025 + destroy_workqueue(nvme_auth_wq); 1031 1026 return -ENOMEM; 1032 1027 } 1033 1028 ··· 1037 1028 { 1038 1029 mempool_destroy(nvme_chap_buf_pool); 1039 1030 kmem_cache_destroy(nvme_chap_buf_cache); 1031 + destroy_workqueue(nvme_auth_wq); 1040 1032 }
+5 -2
drivers/nvme/host/core.c
··· 1093 1093 if (ns) { 1094 1094 if (ns->head->effects) 1095 1095 effects = le32_to_cpu(ns->head->effects->iocs[opcode]); 1096 - if (ns->head->ids.csi == NVME_CAP_CSS_NVM) 1096 + if (ns->head->ids.csi == NVME_CSI_NVM) 1097 1097 effects |= nvme_known_nvm_effects(opcode); 1098 1098 if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC)) 1099 1099 dev_warn_once(ctrl->device, ··· 4921 4921 blk_mq_destroy_queue(ctrl->admin_q); 4922 4922 blk_put_queue(ctrl->admin_q); 4923 4923 out_free_tagset: 4924 - blk_mq_free_tag_set(ctrl->admin_tagset); 4924 + blk_mq_free_tag_set(set); 4925 + ctrl->admin_q = NULL; 4926 + ctrl->fabrics_q = NULL; 4925 4927 return ret; 4926 4928 } 4927 4929 EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set); ··· 4985 4983 4986 4984 out_free_tag_set: 4987 4985 blk_mq_free_tag_set(set); 4986 + ctrl->connect_q = NULL; 4988 4987 return ret; 4989 4988 } 4990 4989 EXPORT_SYMBOL_GPL(nvme_alloc_io_tag_set);
+8 -10
drivers/nvme/host/fc.c
··· 3521 3521 3522 3522 nvme_fc_init_queue(ctrl, 0); 3523 3523 3524 - ret = nvme_alloc_admin_tag_set(&ctrl->ctrl, &ctrl->admin_tag_set, 3525 - &nvme_fc_admin_mq_ops, 3526 - struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv, 3527 - ctrl->lport->ops->fcprqst_priv_sz)); 3528 - if (ret) 3529 - goto out_free_queues; 3530 - 3531 3524 /* 3532 3525 * Would have been nice to init io queues tag set as well. 3533 3526 * However, we require interaction from the controller ··· 3530 3537 3531 3538 ret = nvme_init_ctrl(&ctrl->ctrl, dev, &nvme_fc_ctrl_ops, 0); 3532 3539 if (ret) 3533 - goto out_cleanup_tagset; 3540 + goto out_free_queues; 3534 3541 3535 3542 /* at this point, teardown path changes to ref counting on nvme ctrl */ 3543 + 3544 + ret = nvme_alloc_admin_tag_set(&ctrl->ctrl, &ctrl->admin_tag_set, 3545 + &nvme_fc_admin_mq_ops, 3546 + struct_size((struct nvme_fcp_op_w_sgl *)NULL, priv, 3547 + ctrl->lport->ops->fcprqst_priv_sz)); 3548 + if (ret) 3549 + goto fail_ctrl; 3536 3550 3537 3551 spin_lock_irqsave(&rport->lock, flags); 3538 3552 list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list); ··· 3592 3592 3593 3593 return ERR_PTR(-EIO); 3594 3594 3595 - out_cleanup_tagset: 3596 - nvme_remove_admin_tag_set(&ctrl->ctrl); 3597 3595 out_free_queues: 3598 3596 kfree(ctrl->queues); 3599 3597 out_free_ida:
+1
drivers/nvme/host/pci.c
··· 3102 3102 3103 3103 nvme_start_ctrl(&dev->ctrl); 3104 3104 nvme_put_ctrl(&dev->ctrl); 3105 + flush_work(&dev->ctrl.scan_work); 3105 3106 return 0; 3106 3107 3107 3108 out_disable:
+3 -1
drivers/nvme/target/fc.c
··· 1685 1685 else { 1686 1686 queue = nvmet_fc_alloc_target_queue(iod->assoc, 0, 1687 1687 be16_to_cpu(rqst->assoc_cmd.sqsize)); 1688 - if (!queue) 1688 + if (!queue) { 1689 1689 ret = VERR_QUEUE_ALLOC_FAIL; 1690 + nvmet_fc_tgt_a_put(iod->assoc); 1691 + } 1690 1692 } 1691 1693 } 1692 1694
+3
drivers/nvmem/brcm_nvram.c
··· 98 98 len = le32_to_cpu(header.len); 99 99 100 100 data = kzalloc(len, GFP_KERNEL); 101 + if (!data) 102 + return -ENOMEM; 103 + 101 104 memcpy_fromio(data, priv->base, len); 102 105 data[len - 1] = '\0'; 103 106
+30 -30
drivers/nvmem/core.c
··· 770 770 return ERR_PTR(rval); 771 771 } 772 772 773 - if (config->wp_gpio) 774 - nvmem->wp_gpio = config->wp_gpio; 775 - else if (!config->ignore_wp) 773 + nvmem->id = rval; 774 + 775 + nvmem->dev.type = &nvmem_provider_type; 776 + nvmem->dev.bus = &nvmem_bus_type; 777 + nvmem->dev.parent = config->dev; 778 + 779 + device_initialize(&nvmem->dev); 780 + 781 + if (!config->ignore_wp) 776 782 nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp", 777 783 GPIOD_OUT_HIGH); 778 784 if (IS_ERR(nvmem->wp_gpio)) { 779 - ida_free(&nvmem_ida, nvmem->id); 780 785 rval = PTR_ERR(nvmem->wp_gpio); 781 - kfree(nvmem); 782 - return ERR_PTR(rval); 786 + nvmem->wp_gpio = NULL; 787 + goto err_put_device; 783 788 } 784 789 785 790 kref_init(&nvmem->refcnt); 786 791 INIT_LIST_HEAD(&nvmem->cells); 787 792 788 - nvmem->id = rval; 789 793 nvmem->owner = config->owner; 790 794 if (!nvmem->owner && config->dev->driver) 791 795 nvmem->owner = config->dev->driver->owner; 792 796 nvmem->stride = config->stride ?: 1; 793 797 nvmem->word_size = config->word_size ?: 1; 794 798 nvmem->size = config->size; 795 - nvmem->dev.type = &nvmem_provider_type; 796 - nvmem->dev.bus = &nvmem_bus_type; 797 - nvmem->dev.parent = config->dev; 798 799 nvmem->root_only = config->root_only; 799 800 nvmem->priv = config->priv; 800 801 nvmem->type = config->type; ··· 823 822 break; 824 823 } 825 824 826 - if (rval) { 827 - ida_free(&nvmem_ida, nvmem->id); 828 - kfree(nvmem); 829 - return ERR_PTR(rval); 830 - } 825 + if (rval) 826 + goto err_put_device; 831 827 832 828 nvmem->read_only = device_property_present(config->dev, "read-only") || 833 829 config->read_only || !nvmem->reg_write; ··· 833 835 nvmem->dev.groups = nvmem_dev_groups; 834 836 #endif 835 837 836 - dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name); 837 - 838 - rval = device_register(&nvmem->dev); 839 - if (rval) 840 - goto err_put_device; 841 - 842 838 if (nvmem->nkeepout) { 843 839 rval = nvmem_validate_keepouts(nvmem); 844 840 if (rval) 845 - goto err_device_del; 841 + goto err_put_device; 846 842 } 847 843 848 844 if (config->compat) { 849 845 rval = nvmem_sysfs_setup_compat(nvmem, config); 850 846 if (rval) 851 - goto err_device_del; 847 + goto err_put_device; 852 848 } 853 849 854 850 if (config->cells) { 855 851 rval = nvmem_add_cells(nvmem, config->cells, config->ncells); 856 852 if (rval) 857 - goto err_teardown_compat; 853 + goto err_remove_cells; 858 854 } 859 855 860 856 rval = nvmem_add_cells_from_table(nvmem); ··· 859 867 if (rval) 860 868 goto err_remove_cells; 861 869 870 + dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name); 871 + 872 + rval = device_add(&nvmem->dev); 873 + if (rval) 874 + goto err_remove_cells; 875 + 862 876 blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem); 863 877 864 878 return nvmem; 865 879 866 880 err_remove_cells: 867 881 nvmem_device_remove_all_cells(nvmem); 868 - err_teardown_compat: 869 882 if (config->compat) 870 883 nvmem_sysfs_remove_compat(nvmem, config); 871 - err_device_del: 872 - device_del(&nvmem->dev); 873 884 err_put_device: 874 885 put_device(&nvmem->dev); 875 886 ··· 1237 1242 if (!cell_np) 1238 1243 return ERR_PTR(-ENOENT); 1239 1244 1240 - nvmem_np = of_get_next_parent(cell_np); 1241 - if (!nvmem_np) 1245 + nvmem_np = of_get_parent(cell_np); 1246 + if (!nvmem_np) { 1247 + of_node_put(cell_np); 1242 1248 return ERR_PTR(-EINVAL); 1249 + } 1243 1250 1244 1251 nvmem = __nvmem_device_get(nvmem_np, device_match_of_node); 1245 1252 of_node_put(nvmem_np); 1246 - if (IS_ERR(nvmem)) 1253 + if (IS_ERR(nvmem)) { 1254 + of_node_put(cell_np); 1247 1255 return ERR_CAST(nvmem); 1256 + } 1248 1257 1249 1258 cell_entry = nvmem_find_cell_entry_by_node(nvmem, cell_np); 1259 + of_node_put(cell_np); 1250 1260 if (!cell_entry) { 1251 1261 __nvmem_device_put(nvmem); 1252 1262 return ERR_PTR(-ENOENT);
+1
drivers/nvmem/qcom-spmi-sdam.c
··· 166 166 { .compatible = "qcom,spmi-sdam" }, 167 167 {}, 168 168 }; 169 + MODULE_DEVICE_TABLE(of, sdam_match_table); 169 170 170 171 static struct platform_driver sdam_driver = { 171 172 .driver = {
+14 -1
drivers/nvmem/sunxi_sid.c
··· 41 41 void *val, size_t bytes) 42 42 { 43 43 struct sunxi_sid *sid = context; 44 + u32 word; 44 45 45 - memcpy_fromio(val, sid->base + sid->value_offset + offset, bytes); 46 + /* .stride = 4 so offset is guaranteed to be aligned */ 47 + __ioread32_copy(val, sid->base + sid->value_offset + offset, bytes / 4); 48 + 49 + val += round_down(bytes, 4); 50 + offset += round_down(bytes, 4); 51 + bytes = bytes % 4; 52 + 53 + if (!bytes) 54 + return 0; 55 + 56 + /* Handle any trailing bytes */ 57 + word = readl_relaxed(sid->base + sid->value_offset + offset); 58 + memcpy(val, &word, bytes); 46 59 47 60 return 0; 48 61 }
+1 -5
drivers/of/fdt.c
··· 26 26 #include <linux/serial_core.h> 27 27 #include <linux/sysfs.h> 28 28 #include <linux/random.h> 29 - #include <linux/kmemleak.h> 30 29 31 30 #include <asm/setup.h> /* for COMMAND_LINE_SIZE */ 32 31 #include <asm/page.h> ··· 524 525 size = dt_mem_next_cell(dt_root_size_cells, &prop); 525 526 526 527 if (size && 527 - early_init_dt_reserve_memory(base, size, nomap) == 0) { 528 + early_init_dt_reserve_memory(base, size, nomap) == 0) 528 529 pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n", 529 530 uname, &base, (unsigned long)(size / SZ_1M)); 530 - if (!nomap) 531 - kmemleak_alloc_phys(base, size, 0); 532 - } 533 531 else 534 532 pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n", 535 533 uname, &base, (unsigned long)(size / SZ_1M));
+3 -6
drivers/parisc/pdc_stable.c
··· 274 274 275 275 /* We'll use a local copy of buf */ 276 276 count = min_t(size_t, count, sizeof(in)-1); 277 - strncpy(in, buf, count); 278 - in[count] = '\0'; 277 + strscpy(in, buf, count + 1); 279 278 280 279 /* Let's clean up the target. 0xff is a blank pattern */ 281 280 memset(&hwpath, 0xff, sizeof(hwpath)); ··· 387 388 388 389 /* We'll use a local copy of buf */ 389 390 count = min_t(size_t, count, sizeof(in)-1); 390 - strncpy(in, buf, count); 391 - in[count] = '\0'; 391 + strscpy(in, buf, count + 1); 392 392 393 393 /* Let's clean up the target. 0 is a blank pattern */ 394 394 memset(&layers, 0, sizeof(layers)); ··· 754 756 755 757 /* We'll use a local copy of buf */ 756 758 count = min_t(size_t, count, sizeof(in)-1); 757 - strncpy(in, buf, count); 758 - in[count] = '\0'; 759 + strscpy(in, buf, count + 1); 759 760 760 761 /* Current flags are stored in primary boot path entry */ 761 762 pathentry = &pdcspath_entry_primary;
+6 -1
drivers/perf/arm-cmn.c
··· 1576 1576 hw->dn++; 1577 1577 continue; 1578 1578 } 1579 - hw->dtcs_used |= arm_cmn_node_to_xp(cmn, dn)->dtc; 1580 1579 hw->num_dns++; 1581 1580 if (bynodeid) 1582 1581 break; ··· 1588 1589 nodeid, nid.x, nid.y, nid.port, nid.dev, type); 1589 1590 return -EINVAL; 1590 1591 } 1592 + /* 1593 + * Keep assuming non-cycles events count in all DTC domains; turns out 1594 + * it's hard to make a worthwhile optimisation around this, short of 1595 + * going all-in with domain-local counter allocation as well. 1596 + */ 1597 + hw->dtcs_used = (1U << cmn->num_dtcs) - 1; 1591 1598 1592 1599 return arm_cmn_validate_group(cmn, event); 1593 1600 }
+1
drivers/platform/x86/amd/Kconfig
··· 8 8 config AMD_PMC 9 9 tristate "AMD SoC PMC driver" 10 10 depends on ACPI && PCI && RTC_CLASS 11 + select SERIO 11 12 help 12 13 The driver provides support for AMD Power Management Controller 13 14 primarily responsible for S2Idle transactions that are driven from
+56 -2
drivers/platform/x86/amd/pmc.c
··· 22 22 #include <linux/pci.h> 23 23 #include <linux/platform_device.h> 24 24 #include <linux/rtc.h> 25 + #include <linux/serio.h> 25 26 #include <linux/suspend.h> 26 27 #include <linux/seq_file.h> 27 28 #include <linux/uaccess.h> ··· 160 159 static bool enable_stb; 161 160 module_param(enable_stb, bool, 0644); 162 161 MODULE_PARM_DESC(enable_stb, "Enable the STB debug mechanism"); 162 + 163 + static bool disable_workarounds; 164 + module_param(disable_workarounds, bool, 0644); 165 + MODULE_PARM_DESC(disable_workarounds, "Disable workarounds for platform bugs"); 163 166 164 167 static struct amd_pmc_dev pmc; 165 168 static int amd_pmc_send_cmd(struct amd_pmc_dev *dev, u32 arg, u32 *data, u8 msg, bool ret); ··· 658 653 return -EINVAL; 659 654 } 660 655 656 + static int amd_pmc_czn_wa_irq1(struct amd_pmc_dev *pdev) 657 + { 658 + struct device *d; 659 + int rc; 660 + 661 + if (!pdev->major) { 662 + rc = amd_pmc_get_smu_version(pdev); 663 + if (rc) 664 + return rc; 665 + } 666 + 667 + if (pdev->major > 64 || (pdev->major == 64 && pdev->minor > 65)) 668 + return 0; 669 + 670 + d = bus_find_device_by_name(&serio_bus, NULL, "serio0"); 671 + if (!d) 672 + return 0; 673 + if (device_may_wakeup(d)) { 674 + dev_info_once(d, "Disabling IRQ1 wakeup source to avoid platform firmware bug\n"); 675 + disable_irq_wake(1); 676 + device_set_wakeup_enable(d, false); 677 + } 678 + put_device(d); 679 + 680 + return 0; 681 + } 682 + 661 683 static int amd_pmc_verify_czn_rtc(struct amd_pmc_dev *pdev, u32 *arg) 662 684 { 663 685 struct rtc_device *rtc_device; ··· 747 715 /* Reset and Start SMU logging - to monitor the s0i3 stats */ 748 716 amd_pmc_setup_smu_logging(pdev); 749 717 750 - /* Activate CZN specific RTC functionality */ 751 - if (pdev->cpu_id == AMD_CPU_ID_CZN) { 718 + /* Activate CZN specific platform bug workarounds */ 719 + if (pdev->cpu_id == AMD_CPU_ID_CZN && !disable_workarounds) { 752 720 rc = amd_pmc_verify_czn_rtc(pdev, &arg); 753 721 if (rc) { 754 722 dev_err(pdev->dev, "failed to set RTC: %d\n", rc); ··· 814 782 .check = amd_pmc_s2idle_check, 815 783 .restore = amd_pmc_s2idle_restore, 816 784 }; 785 + 786 + static int __maybe_unused amd_pmc_suspend_handler(struct device *dev) 787 + { 788 + struct amd_pmc_dev *pdev = dev_get_drvdata(dev); 789 + 790 + if (pdev->cpu_id == AMD_CPU_ID_CZN && !disable_workarounds) { 791 + int rc = amd_pmc_czn_wa_irq1(pdev); 792 + 793 + if (rc) { 794 + dev_err(pdev->dev, "failed to adjust keyboard wakeup: %d\n", rc); 795 + return rc; 796 + } 797 + } 798 + 799 + return 0; 800 + } 801 + 802 + static SIMPLE_DEV_PM_OPS(amd_pmc_pm, amd_pmc_suspend_handler, NULL); 803 + 817 804 #endif 818 805 819 806 static const struct pci_device_id pmc_pci_ids[] = { ··· 1031 980 .name = "amd_pmc", 1032 981 .acpi_match_table = amd_pmc_acpi_ids, 1033 982 .dev_groups = pmc_groups, 983 + #ifdef CONFIG_SUSPEND 984 + .pm = &amd_pmc_pm, 985 + #endif 1034 986 }, 1035 987 .probe = amd_pmc_probe, 1036 988 .remove = amd_pmc_remove,
+1 -8
drivers/platform/x86/amd/pmf/auto-mode.c
··· 275 275 */ 276 276 277 277 if (is_apmf_func_supported(dev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) { 278 - int mode = amd_pmf_get_pprof_modes(dev); 279 - 280 - if (mode < 0) 281 - return mode; 282 - 283 278 dev_dbg(dev->dev, "resetting AMT thermals\n"); 284 - amd_pmf_update_slider(dev, SLIDER_OP_SET, mode, NULL); 279 + amd_pmf_set_sps_power_limits(dev); 285 280 } 286 281 return 0; 287 282 } ··· 294 299 void amd_pmf_init_auto_mode(struct amd_pmf_dev *dev) 295 300 { 296 301 amd_pmf_load_defaults_auto_mode(dev); 297 - /* update the thermal limits for Automode */ 298 - amd_pmf_set_automode(dev, config_store.current_mode, NULL); 299 302 amd_pmf_init_metrics_table(dev); 300 303 }
+5 -9
drivers/platform/x86/amd/pmf/cnqf.c
··· 103 103 104 104 src = amd_pmf_cnqf_get_power_source(dev); 105 105 106 - if (dev->current_profile == PLATFORM_PROFILE_BALANCED) { 106 + if (is_pprof_balanced(dev)) { 107 107 amd_pmf_set_cnqf(dev, src, config_store.current_mode, NULL); 108 108 } else { 109 109 /* ··· 307 307 const char *buf, size_t count) 308 308 { 309 309 struct amd_pmf_dev *pdev = dev_get_drvdata(dev); 310 - int mode, result, src; 310 + int result, src; 311 311 bool input; 312 - 313 - mode = amd_pmf_get_pprof_modes(pdev); 314 - if (mode < 0) 315 - return mode; 316 312 317 313 result = kstrtobool(buf, &input); 318 314 if (result) ··· 317 321 src = amd_pmf_cnqf_get_power_source(pdev); 318 322 pdev->cnqf_enabled = input; 319 323 320 - if (pdev->cnqf_enabled && pdev->current_profile == PLATFORM_PROFILE_BALANCED) { 324 + if (pdev->cnqf_enabled && is_pprof_balanced(pdev)) { 321 325 amd_pmf_set_cnqf(pdev, src, config_store.current_mode, NULL); 322 326 } else { 323 327 if (is_apmf_func_supported(pdev, APMF_FUNC_STATIC_SLIDER_GRANULAR)) 324 - amd_pmf_update_slider(pdev, SLIDER_OP_SET, mode, NULL); 328 + amd_pmf_set_sps_power_limits(pdev); 325 329 } 326 330 327 331 dev_dbg(pdev->dev, "Received CnQF %s\n", input ? "on" : "off"); ··· 382 386 dev->cnqf_enabled = amd_pmf_check_flags(dev); 383 387 384 388 /* update the thermal for CnQF */ 385 - if (dev->cnqf_enabled && dev->current_profile == PLATFORM_PROFILE_BALANCED) { 389 + if (dev->cnqf_enabled && is_pprof_balanced(dev)) { 386 390 src = amd_pmf_cnqf_get_power_source(dev); 387 391 amd_pmf_set_cnqf(dev, src, config_store.current_mode, NULL); 388 392 }
+28 -4
drivers/platform/x86/amd/pmf/core.c
··· 58 58 module_param(force_load, bool, 0444); 59 59 MODULE_PARM_DESC(force_load, "Force load this driver on supported older platforms (experimental)"); 60 60 61 + static int amd_pmf_pwr_src_notify_call(struct notifier_block *nb, unsigned long event, void *data) 62 + { 63 + struct amd_pmf_dev *pmf = container_of(nb, struct amd_pmf_dev, pwr_src_notifier); 64 + 65 + if (event != PSY_EVENT_PROP_CHANGED) 66 + return NOTIFY_OK; 67 + 68 + if (is_apmf_func_supported(pmf, APMF_FUNC_AUTO_MODE) || 69 + is_apmf_func_supported(pmf, APMF_FUNC_DYN_SLIDER_DC) || 70 + is_apmf_func_supported(pmf, APMF_FUNC_DYN_SLIDER_AC)) { 71 + if ((pmf->amt_enabled || pmf->cnqf_enabled) && is_pprof_balanced(pmf)) 72 + return NOTIFY_DONE; 73 + } 74 + 75 + amd_pmf_set_sps_power_limits(pmf); 76 + 77 + return NOTIFY_OK; 78 + } 79 + 61 80 static int current_power_limits_show(struct seq_file *seq, void *unused) 62 81 { 63 82 struct amd_pmf_dev *dev = seq->private; ··· 385 366 if (!dev->regbase) 386 367 return -ENOMEM; 387 368 369 + mutex_init(&dev->lock); 370 + mutex_init(&dev->update_mutex); 371 + 388 372 apmf_acpi_init(dev); 389 373 platform_set_drvdata(pdev, dev); 390 374 amd_pmf_init_features(dev); 391 375 apmf_install_handler(dev); 392 376 amd_pmf_dbgfs_register(dev); 393 377 394 - mutex_init(&dev->lock); 395 - mutex_init(&dev->update_mutex); 378 + dev->pwr_src_notifier.notifier_call = amd_pmf_pwr_src_notify_call; 379 + power_supply_reg_notifier(&dev->pwr_src_notifier); 380 + 396 381 dev_info(dev->dev, "registered PMF device successfully\n"); 397 382 398 383 return 0; ··· 406 383 { 407 384 struct amd_pmf_dev *dev = platform_get_drvdata(pdev); 408 385 409 - mutex_destroy(&dev->lock); 410 - mutex_destroy(&dev->update_mutex); 386 + power_supply_unreg_notifier(&dev->pwr_src_notifier); 411 387 amd_pmf_deinit_features(dev); 412 388 apmf_acpi_deinit(dev); 413 389 amd_pmf_dbgfs_unregister(dev); 390 + mutex_destroy(&dev->lock); 391 + mutex_destroy(&dev->update_mutex); 414 392 kfree(dev->buf); 415 393 return 0; 416 394 }
+3
drivers/platform/x86/amd/pmf/pmf.h
··· 169 169 struct mutex update_mutex; /* protects race between ACPI handler and metrics thread */ 170 170 bool cnqf_enabled; 171 171 bool cnqf_supported; 172 + struct notifier_block pwr_src_notifier; 172 173 }; 173 174 174 175 struct apmf_sps_prop_granular { ··· 392 391 void amd_pmf_deinit_sps(struct amd_pmf_dev *dev); 393 392 int apmf_get_static_slider_granular(struct amd_pmf_dev *pdev, 394 393 struct apmf_static_slider_granular_output *output); 394 + bool is_pprof_balanced(struct amd_pmf_dev *pmf); 395 395 396 396 397 397 int apmf_update_fan_idx(struct amd_pmf_dev *pdev, bool manual, u32 idx); 398 + int amd_pmf_set_sps_power_limits(struct amd_pmf_dev *pmf); 398 399 399 400 /* Auto Mode Layer */ 400 401 int apmf_get_auto_mode_def(struct amd_pmf_dev *pdev, struct apmf_auto_mode *data);
+22 -6
drivers/platform/x86/amd/pmf/sps.c
··· 70 70 } 71 71 } 72 72 73 + int amd_pmf_set_sps_power_limits(struct amd_pmf_dev *pmf) 74 + { 75 + int mode; 76 + 77 + mode = amd_pmf_get_pprof_modes(pmf); 78 + if (mode < 0) 79 + return mode; 80 + 81 + amd_pmf_update_slider(pmf, SLIDER_OP_SET, mode, NULL); 82 + 83 + return 0; 84 + } 85 + 86 + bool is_pprof_balanced(struct amd_pmf_dev *pmf) 87 + { 88 + return (pmf->current_profile == PLATFORM_PROFILE_BALANCED) ? true : false; 89 + } 90 + 73 91 static int amd_pmf_profile_get(struct platform_profile_handler *pprof, 74 92 enum platform_profile_option *profile) 75 93 { ··· 123 105 enum platform_profile_option profile) 124 106 { 125 107 struct amd_pmf_dev *pmf = container_of(pprof, struct amd_pmf_dev, pprof); 126 - int mode; 127 108 128 109 pmf->current_profile = profile; 129 - mode = amd_pmf_get_pprof_modes(pmf); 130 - if (mode < 0) 131 - return mode; 132 110 133 - amd_pmf_update_slider(pmf, SLIDER_OP_SET, mode, NULL); 134 - return 0; 111 + return amd_pmf_set_sps_power_limits(pmf); 135 112 } 136 113 137 114 int amd_pmf_init_sps(struct amd_pmf_dev *dev) ··· 135 122 136 123 dev->current_profile = PLATFORM_PROFILE_BALANCED; 137 124 amd_pmf_load_defaults_sps(dev); 125 + 126 + /* update SPS balanced power mode thermals */ 127 + amd_pmf_set_sps_power_limits(dev); 138 128 139 129 dev->pprof.profile_get = amd_pmf_profile_get; 140 130 dev->pprof.profile_set = amd_pmf_profile_set;
+18 -75
drivers/platform/x86/apple-gmux.c
··· 64 64 65 65 static struct apple_gmux_data *apple_gmux_data; 66 66 67 - /* 68 - * gmux port offsets. Many of these are not yet used, but may be in the 69 - * future, and it's useful to have them documented here anyhow. 70 - */ 71 - #define GMUX_PORT_VERSION_MAJOR 0x04 72 - #define GMUX_PORT_VERSION_MINOR 0x05 73 - #define GMUX_PORT_VERSION_RELEASE 0x06 74 - #define GMUX_PORT_SWITCH_DISPLAY 0x10 75 - #define GMUX_PORT_SWITCH_GET_DISPLAY 0x11 76 - #define GMUX_PORT_INTERRUPT_ENABLE 0x14 77 - #define GMUX_PORT_INTERRUPT_STATUS 0x16 78 - #define GMUX_PORT_SWITCH_DDC 0x28 79 - #define GMUX_PORT_SWITCH_EXTERNAL 0x40 80 - #define GMUX_PORT_SWITCH_GET_EXTERNAL 0x41 81 - #define GMUX_PORT_DISCRETE_POWER 0x50 82 - #define GMUX_PORT_MAX_BRIGHTNESS 0x70 83 - #define GMUX_PORT_BRIGHTNESS 0x74 84 - #define GMUX_PORT_VALUE 0xc2 85 - #define GMUX_PORT_READ 0xd0 86 - #define GMUX_PORT_WRITE 0xd4 87 - 88 - #define GMUX_MIN_IO_LEN (GMUX_PORT_BRIGHTNESS + 4) 89 - 90 67 #define GMUX_INTERRUPT_ENABLE 0xff 91 68 #define GMUX_INTERRUPT_DISABLE 0x00 92 69 ··· 224 247 gmux_index_write32(gmux_data, port, val); 225 248 else 226 249 gmux_pio_write32(gmux_data, port, val); 227 - } 228 - 229 - static bool gmux_is_indexed(struct apple_gmux_data *gmux_data) 230 - { 231 - u16 val; 232 - 233 - outb(0xaa, gmux_data->iostart + 0xcc); 234 - outb(0x55, gmux_data->iostart + 0xcd); 235 - outb(0x00, gmux_data->iostart + 0xce); 236 - 237 - val = inb(gmux_data->iostart + 0xcc) | 238 - (inb(gmux_data->iostart + 0xcd) << 8); 239 - 240 - if (val == 0x55aa) 241 - return true; 242 - 243 - return false; 244 250 } 245 251 246 252 /** ··· 565 605 int ret = -ENXIO; 566 606 acpi_status status; 567 607 unsigned long long gpe; 608 + bool indexed = false; 609 + u32 version; 568 610 569 611 if (apple_gmux_data) 570 612 return -EBUSY; 613 + 614 + if (!apple_gmux_detect(pnp, &indexed)) { 615 + pr_info("gmux device not present\n"); 616 + return -ENODEV; 617 + } 571 618 572 619 gmux_data = kzalloc(sizeof(*gmux_data), GFP_KERNEL); 573 620 if (!gmux_data) ··· 582 615 pnp_set_drvdata(pnp, gmux_data); 583 616 584 617 res = pnp_get_resource(pnp, IORESOURCE_IO, 0); 585 - if (!res) { 586 - pr_err("Failed to find gmux I/O resource\n"); 587 - goto err_free; 588 - } 589 - 590 618 gmux_data->iostart = res->start; 591 619 gmux_data->iolen = resource_size(res); 592 - 593 - if (gmux_data->iolen < GMUX_MIN_IO_LEN) { 594 - pr_err("gmux I/O region too small (%lu < %u)\n", 595 - gmux_data->iolen, GMUX_MIN_IO_LEN); 596 - goto err_free; 597 - } 598 620 599 621 if (!request_region(gmux_data->iostart, gmux_data->iolen, 600 622 "Apple gmux")) { ··· 591 635 goto err_free; 592 636 } 593 637 594 - /* 595 - * Invalid version information may indicate either that the gmux 596 - * device isn't present or that it's a new one that uses indexed 597 - * io 598 - */ 599 - 600 - ver_major = gmux_read8(gmux_data, GMUX_PORT_VERSION_MAJOR); 601 - ver_minor = gmux_read8(gmux_data, GMUX_PORT_VERSION_MINOR); 602 - ver_release = gmux_read8(gmux_data, GMUX_PORT_VERSION_RELEASE); 603 - if (ver_major == 0xff && ver_minor == 0xff && ver_release == 0xff) { 604 - if (gmux_is_indexed(gmux_data)) { 605 - u32 version; 606 - mutex_init(&gmux_data->index_lock); 607 - gmux_data->indexed = true; 608 - version = gmux_read32(gmux_data, 609 - GMUX_PORT_VERSION_MAJOR); 610 - ver_major = (version >> 24) & 0xff; 611 - ver_minor = (version >> 16) & 0xff; 612 - ver_release = (version >> 8) & 0xff; 613 - } else { 614 - pr_info("gmux device not present\n"); 615 - ret = -ENODEV; 616 - goto err_release; 617 - } 638 + if (indexed) { 639 + mutex_init(&gmux_data->index_lock); 640 + gmux_data->indexed = true; 641 + version = gmux_read32(gmux_data, GMUX_PORT_VERSION_MAJOR); 642 + ver_major = (version >> 24) & 0xff; 643 + ver_minor = (version >> 16) & 0xff; 644 + ver_release = (version >> 8) & 0xff; 645 + } else { 646 + ver_major = gmux_read8(gmux_data, GMUX_PORT_VERSION_MAJOR); 647 + ver_minor = gmux_read8(gmux_data, GMUX_PORT_VERSION_MINOR); 648 + ver_release = gmux_read8(gmux_data, GMUX_PORT_VERSION_RELEASE); 618 649 } 619 650 pr_info("Found gmux version %d.%d.%d [%s]\n", ver_major, ver_minor, 620 651 ver_release, (gmux_data->indexed ? "indexed" : "classic"));
+12 -5
drivers/platform/x86/asus-wmi.c
··· 225 225 226 226 int tablet_switch_event_code; 227 227 u32 tablet_switch_dev_id; 228 + bool tablet_switch_inverted; 228 229 229 230 enum fan_type fan_type; 230 231 enum fan_type gpu_fan_type; ··· 494 493 } 495 494 496 495 /* Input **********************************************************************/ 496 + static void asus_wmi_tablet_sw_report(struct asus_wmi *asus, bool value) 497 + { 498 + input_report_switch(asus->inputdev, SW_TABLET_MODE, 499 + asus->tablet_switch_inverted ? !value : value); 500 + input_sync(asus->inputdev); 501 + } 502 + 497 503 static void asus_wmi_tablet_sw_init(struct asus_wmi *asus, u32 dev_id, int event_code) 498 504 { 499 505 struct device *dev = &asus->platform_device->dev; ··· 509 501 result = asus_wmi_get_devstate_simple(asus, dev_id); 510 502 if (result >= 0) { 511 503 input_set_capability(asus->inputdev, EV_SW, SW_TABLET_MODE); 512 - input_report_switch(asus->inputdev, SW_TABLET_MODE, result); 504 + asus_wmi_tablet_sw_report(asus, result); 513 505 asus->tablet_switch_dev_id = dev_id; 514 506 asus->tablet_switch_event_code = event_code; 515 507 } else if (result == -ENODEV) { ··· 542 534 case asus_wmi_no_tablet_switch: 543 535 break; 544 536 case asus_wmi_kbd_dock_devid: 537 + asus->tablet_switch_inverted = true; 545 538 asus_wmi_tablet_sw_init(asus, ASUS_WMI_DEVID_KBD_DOCK, NOTIFY_KBD_DOCK_CHANGE); 546 539 break; 547 540 case asus_wmi_lid_flip_devid: ··· 582 573 return; 583 574 584 575 result = asus_wmi_get_devstate_simple(asus, asus->tablet_switch_dev_id); 585 - if (result >= 0) { 586 - input_report_switch(asus->inputdev, SW_TABLET_MODE, result); 587 - input_sync(asus->inputdev); 588 - } 576 + if (result >= 0) 577 + asus_wmi_tablet_sw_report(asus, result); 589 578 } 590 579 591 580 /* dGPU ********************************************************************/
+3
drivers/platform/x86/dell/dell-wmi-base.c
··· 261 261 { KE_KEY, 0x57, { KEY_BRIGHTNESSDOWN } }, 262 262 { KE_KEY, 0x58, { KEY_BRIGHTNESSUP } }, 263 263 264 + /*Speaker Mute*/ 265 + { KE_KEY, 0x109, { KEY_MUTE} }, 266 + 264 267 /* Mic mute */ 265 268 { KE_KEY, 0x150, { KEY_MICMUTE } }, 266 269
+1
drivers/platform/x86/gigabyte-wmi.c
··· 141 141 142 142 static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { 143 143 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H-CF"), 144 + DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H WIFI-CF"), 144 145 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"), 145 146 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE AX V2"), 146 147 DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
+5 -1
drivers/platform/x86/hp/hp-wmi.c
··· 90 90 HPWMI_PEAKSHIFT_PERIOD = 0x0F, 91 91 HPWMI_BATTERY_CHARGE_PERIOD = 0x10, 92 92 HPWMI_SANITIZATION_MODE = 0x17, 93 + HPWMI_OMEN_KEY = 0x1D, 93 94 HPWMI_SMART_EXPERIENCE_APP = 0x21, 94 95 }; 95 96 ··· 217 216 { KE_KEY, 0x213b, { KEY_INFO } }, 218 217 { KE_KEY, 0x2169, { KEY_ROTATE_DISPLAY } }, 219 218 { KE_KEY, 0x216a, { KEY_SETUP } }, 219 + { KE_KEY, 0x21a5, { KEY_PROG2 } }, /* HP Omen Key */ 220 + { KE_KEY, 0x21a7, { KEY_FN_ESC } }, 220 221 { KE_KEY, 0x21a9, { KEY_TOUCHPAD_OFF } }, 221 222 { KE_KEY, 0x121a9, { KEY_TOUCHPAD_ON } }, 222 223 { KE_KEY, 0x231b, { KEY_HELP } }, ··· 551 548 552 549 static int hp_wmi_set_block(void *data, bool blocked) 553 550 { 554 - enum hp_wmi_radio r = (enum hp_wmi_radio) data; 551 + enum hp_wmi_radio r = (long)data; 555 552 int query = BIT(r + 8) | ((!blocked) << r); 556 553 int ret; 557 554 ··· 813 810 case HPWMI_SMART_ADAPTER: 814 811 break; 815 812 case HPWMI_BEZEL_BUTTON: 813 + case HPWMI_OMEN_KEY: 816 814 key_code = hp_wmi_read_int(HPWMI_HOTKEY_QUERY); 817 815 if (key_code < 0) 818 816 break;
+7 -6
drivers/platform/x86/thinkpad_acpi.c
··· 5563 5563 5564 5564 static enum led_brightness light_sysfs_get(struct led_classdev *led_cdev) 5565 5565 { 5566 - return (light_get_status() == 1) ? LED_FULL : LED_OFF; 5566 + return (light_get_status() == 1) ? LED_ON : LED_OFF; 5567 5567 } 5568 5568 5569 5569 static struct tpacpi_led_classdev tpacpi_led_thinklight = { ··· 10496 10496 if (err) 10497 10497 goto unlock; 10498 10498 } 10499 - } 10500 - if (dytc_capabilities & BIT(DYTC_FC_PSC)) { 10499 + } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { 10501 10500 err = dytc_command(DYTC_SET_COMMAND(DYTC_FUNCTION_PSC, perfmode, 1), &output); 10502 10501 if (err) 10503 10502 goto unlock; ··· 10524 10525 err = dytc_command(DYTC_CMD_MMC_GET, &output); 10525 10526 else 10526 10527 err = dytc_cql_command(DYTC_CMD_GET, &output); 10527 - } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) 10528 + funcmode = DYTC_FUNCTION_MMC; 10529 + } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { 10528 10530 err = dytc_command(DYTC_CMD_GET, &output); 10529 - 10531 + /* Check if we are in PSC mode, or have AMT enabled */ 10532 + funcmode = (output >> DYTC_GET_FUNCTION_BIT) & 0xF; 10533 + } 10530 10534 mutex_unlock(&dytc_mutex); 10531 10535 if (err) 10532 10536 return; 10533 10537 10534 - funcmode = (output >> DYTC_GET_FUNCTION_BIT) & 0xF; 10535 10538 perfmode = (output >> DYTC_GET_MODE_BIT) & 0xF; 10536 10539 convert_dytc_to_profile(funcmode, perfmode, &profile); 10537 10540 if (profile != dytc_current_profile) {
+9
drivers/platform/x86/touchscreen_dmi.c
··· 1098 1098 }, 1099 1099 }, 1100 1100 { 1101 + /* Chuwi Vi8 (CWI501) */ 1102 + .driver_data = (void *)&chuwi_vi8_data, 1103 + .matches = { 1104 + DMI_MATCH(DMI_SYS_VENDOR, "Insyde"), 1105 + DMI_MATCH(DMI_PRODUCT_NAME, "i86"), 1106 + DMI_MATCH(DMI_BIOS_VERSION, "CHUWI.W86JLBNR01"), 1107 + }, 1108 + }, 1109 + { 1101 1110 /* Chuwi Vi8 (CWI506) */ 1102 1111 .driver_data = (void *)&chuwi_vi8_data, 1103 1112 .matches = {
+26 -20
drivers/rtc/rtc-efi.c
··· 188 188 189 189 static int efi_procfs(struct device *dev, struct seq_file *seq) 190 190 { 191 - efi_time_t eft, alm; 192 - efi_time_cap_t cap; 193 - efi_bool_t enabled, pending; 191 + efi_time_t eft, alm; 192 + efi_time_cap_t cap; 193 + efi_bool_t enabled, pending; 194 + struct rtc_device *rtc = dev_get_drvdata(dev); 194 195 195 196 memset(&eft, 0, sizeof(eft)); 196 197 memset(&alm, 0, sizeof(alm)); ··· 214 213 /* XXX fixme: convert to string? */ 215 214 seq_printf(seq, "Timezone\t: %u\n", eft.timezone); 216 215 217 - seq_printf(seq, 218 - "Alarm Time\t: %u:%u:%u.%09u\n" 219 - "Alarm Date\t: %u-%u-%u\n" 220 - "Alarm Daylight\t: %u\n" 221 - "Enabled\t\t: %s\n" 222 - "Pending\t\t: %s\n", 223 - alm.hour, alm.minute, alm.second, alm.nanosecond, 224 - alm.year, alm.month, alm.day, 225 - alm.daylight, 226 - enabled == 1 ? "yes" : "no", 227 - pending == 1 ? "yes" : "no"); 216 + if (test_bit(RTC_FEATURE_ALARM, rtc->features)) { 217 + seq_printf(seq, 218 + "Alarm Time\t: %u:%u:%u.%09u\n" 219 + "Alarm Date\t: %u-%u-%u\n" 220 + "Alarm Daylight\t: %u\n" 221 + "Enabled\t\t: %s\n" 222 + "Pending\t\t: %s\n", 223 + alm.hour, alm.minute, alm.second, alm.nanosecond, 224 + alm.year, alm.month, alm.day, 225 + alm.daylight, 226 + enabled == 1 ? "yes" : "no", 227 + pending == 1 ? "yes" : "no"); 228 228 229 - if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE) 230 - seq_puts(seq, "Timezone\t: unspecified\n"); 231 - else 232 - /* XXX fixme: convert to string? */ 233 - seq_printf(seq, "Timezone\t: %u\n", alm.timezone); 229 + if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE) 230 + seq_puts(seq, "Timezone\t: unspecified\n"); 231 + else 232 + /* XXX fixme: convert to string? */ 233 + seq_printf(seq, "Timezone\t: %u\n", alm.timezone); 234 + } 234 235 235 236 /* 236 237 * now prints the capabilities ··· 272 269 273 270 rtc->ops = &efi_rtc_ops; 274 271 clear_bit(RTC_FEATURE_UPDATE_INTERRUPT, rtc->features); 275 - set_bit(RTC_FEATURE_ALARM_WAKEUP_ONLY, rtc->features); 272 + if (efi_rt_services_supported(EFI_RT_SUPPORTED_WAKEUP_SERVICES)) 273 + set_bit(RTC_FEATURE_ALARM_WAKEUP_ONLY, rtc->features); 274 + else 275 + clear_bit(RTC_FEATURE_ALARM, rtc->features); 276 276 277 277 device_init_wakeup(&dev->dev, true); 278 278
+2 -2
drivers/rtc/rtc-sunplus.c
··· 240 240 if (IS_ERR(sp_rtc->reg_base)) 241 241 return dev_err_probe(&plat_dev->dev, PTR_ERR(sp_rtc->reg_base), 242 242 "%s devm_ioremap_resource fail\n", RTC_REG_NAME); 243 - dev_dbg(&plat_dev->dev, "res = 0x%x, reg_base = 0x%lx\n", 244 - sp_rtc->res->start, (unsigned long)sp_rtc->reg_base); 243 + dev_dbg(&plat_dev->dev, "res = %pR, reg_base = %p\n", 244 + sp_rtc->res, sp_rtc->reg_base); 245 245 246 246 sp_rtc->irq = platform_get_irq(plat_dev, 0); 247 247 if (sp_rtc->irq < 0)
+3 -2
drivers/scsi/device_handler/scsi_dh_alua.c
··· 981 981 * 982 982 * Returns true if and only if alua_rtpg_work() will be called asynchronously. 983 983 * That function is responsible for calling @qdata->fn(). 984 + * 985 + * Context: may be called from atomic context (alua_check()) only if the caller 986 + * holds an sdev reference. 984 987 */ 985 988 static bool alua_rtpg_queue(struct alua_port_group *pg, 986 989 struct scsi_device *sdev, ··· 991 988 { 992 989 int start_queue = 0; 993 990 unsigned long flags; 994 - 995 - might_sleep(); 996 991 997 992 if (WARN_ON_ONCE(!pg) || scsi_device_get(sdev)) 998 993 return false;
+1 -1
drivers/scsi/hpsa.c
··· 5850 5850 { 5851 5851 struct Scsi_Host *sh; 5852 5852 5853 - sh = scsi_host_alloc(&hpsa_driver_template, sizeof(h)); 5853 + sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info)); 5854 5854 if (sh == NULL) { 5855 5855 dev_err(&h->pdev->dev, "scsi_host_alloc failed\n"); 5856 5856 return -ENOMEM;
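The hpsa one-liner fixes a classic C pitfall: `sizeof(h)` on a pointer yields the size of the pointer itself, not of the structure it points to, so the per-host private area was under-allocated. A minimal userspace sketch of the difference (the `ctlr_info` layout here is a made-up stand-in, not the real driver struct):

```c
#include <stddef.h>

/* Hypothetical stand-in for the driver's per-controller private data. */
struct ctlr_info {
	char data[4096];
};

/* sizeof on the pointer parameter: only 4 or 8 bytes. */
size_t buggy_priv_size(struct ctlr_info *h)
{
	return sizeof(h);
}

/* sizeof on the type itself: the full size the driver actually needs. */
size_t fixed_priv_size(void)
{
	return sizeof(struct ctlr_info);
}
```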
+16 -6
drivers/scsi/iscsi_tcp.c
··· 849 849 enum iscsi_host_param param, char *buf) 850 850 { 851 851 struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost); 852 - struct iscsi_session *session = tcp_sw_host->session; 852 + struct iscsi_session *session; 853 853 struct iscsi_conn *conn; 854 854 struct iscsi_tcp_conn *tcp_conn; 855 855 struct iscsi_sw_tcp_conn *tcp_sw_conn; ··· 859 859 860 860 switch (param) { 861 861 case ISCSI_HOST_PARAM_IPADDRESS: 862 + session = tcp_sw_host->session; 862 863 if (!session) 863 864 return -ENOTCONN; 864 865 ··· 960 959 if (!cls_session) 961 960 goto remove_host; 962 961 session = cls_session->dd_data; 963 - tcp_sw_host = iscsi_host_priv(shost); 964 - tcp_sw_host->session = session; 965 962 966 963 if (iscsi_tcp_r2tpool_alloc(session)) 967 964 goto remove_session; 965 + 966 + /* We are now fully setup so expose the session to sysfs. */ 967 + tcp_sw_host = iscsi_host_priv(shost); 968 + tcp_sw_host->session = session; 968 969 return cls_session; 969 970 970 971 remove_session: ··· 986 983 if (WARN_ON_ONCE(session->leadconn)) 987 984 return; 988 985 989 - iscsi_tcp_r2tpool_free(cls_session->dd_data); 990 - iscsi_session_teardown(cls_session); 991 - 986 + iscsi_session_remove(cls_session); 987 + /* 988 + * Our get_host_param needs to access the session, so remove the 989 + * host from sysfs before freeing the session to make sure userspace 990 + * is no longer accessing the callout. 991 + */ 992 992 iscsi_host_remove(shost, false); 993 + 994 + iscsi_tcp_r2tpool_free(cls_session->dd_data); 995 + 996 + iscsi_session_free(cls_session); 993 997 iscsi_host_free(shost); 994 998 } 995 999
+31 -7
drivers/scsi/libiscsi.c
··· 3104 3104 } 3105 3105 EXPORT_SYMBOL_GPL(iscsi_session_setup); 3106 3106 3107 - /** 3108 - * iscsi_session_teardown - destroy session, host, and cls_session 3109 - * @cls_session: iscsi session 3107 + /* 3108 + * iscsi_session_remove - Remove session from iSCSI class. 3110 3109 */ 3111 - void iscsi_session_teardown(struct iscsi_cls_session *cls_session) 3110 + void iscsi_session_remove(struct iscsi_cls_session *cls_session) 3112 3111 { 3113 3112 struct iscsi_session *session = cls_session->dd_data; 3114 - struct module *owner = cls_session->transport->owner; 3115 3113 struct Scsi_Host *shost = session->host; 3116 3114 3117 3115 iscsi_remove_session(cls_session); 3116 + /* 3117 + * host removal only has to wait for its children to be removed from 3118 + * sysfs, and iscsi_tcp needs to do iscsi_host_remove before freeing 3119 + * the session, so drop the session count here. 3120 + */ 3121 + iscsi_host_dec_session_cnt(shost); 3122 + } 3123 + EXPORT_SYMBOL_GPL(iscsi_session_remove); 3124 + 3125 + /** 3126 + * iscsi_session_free - Free iscsi session and its resources 3127 + * @cls_session: iscsi session 3128 + */ 3129 + void iscsi_session_free(struct iscsi_cls_session *cls_session) 3130 + { 3131 + struct iscsi_session *session = cls_session->dd_data; 3132 + struct module *owner = cls_session->transport->owner; 3118 3133 3119 3134 iscsi_pool_free(&session->cmdpool); 3120 3135 kfree(session->password); ··· 3147 3132 kfree(session->discovery_parent_type); 3148 3133 3149 3134 iscsi_free_session(cls_session); 3150 - 3151 - iscsi_host_dec_session_cnt(shost); 3152 3135 module_put(owner); 3136 + } 3137 + EXPORT_SYMBOL_GPL(iscsi_session_free); 3138 + 3139 + /** 3140 + * iscsi_session_teardown - destroy session and cls_session 3141 + * @cls_session: iscsi session 3142 + */ 3143 + void iscsi_session_teardown(struct iscsi_cls_session *cls_session) 3144 + { 3145 + iscsi_session_remove(cls_session); 3146 + iscsi_session_free(cls_session); 3153 3147 } 3154 3148 EXPORT_SYMBOL_GPL(iscsi_session_teardown); 3155 3149
-2
drivers/scsi/scsi.c
··· 588 588 { 589 589 struct module *mod = sdev->host->hostt->module; 590 590 591 - might_sleep(); 592 - 593 591 put_device(&sdev->sdev_gendev); 594 592 module_put(mod); 595 593 }
+3 -4
drivers/scsi/scsi_scan.c
··· 1232 1232 * that no LUN is present, so don't add sdev in these cases. 1233 1233 * Two specific examples are: 1234 1234 * 1) NetApp targets: return PQ=1, PDT=0x1f 1235 - * 2) IBM/2145 targets: return PQ=1, PDT=0 1236 - * 3) USB UFI: returns PDT=0x1f, with the PQ bits being "reserved" 1235 + * 2) USB UFI: returns PDT=0x1f, with the PQ bits being "reserved" 1237 1236 * in the UFI 1.0 spec (we cannot rely on reserved bits). 1238 1237 * 1239 1238 * References: ··· 1246 1247 * PDT=00h Direct-access device (floppy) 1247 1248 * PDT=1Fh none (no FDD connected to the requested logical unit) 1248 1249 */ 1249 - if (((result[0] >> 5) == 1 || 1250 - (starget->pdt_1f_for_no_lun && (result[0] & 0x1f) == 0x1f)) && 1250 + if (((result[0] >> 5) == 1 || starget->pdt_1f_for_no_lun) && 1251 + (result[0] & 0x1f) == 0x1f && 1251 1252 !scsi_is_wlun(lun)) { 1252 1253 SCSI_LOG_SCAN_BUS(3, sdev_printk(KERN_INFO, sdev, 1253 1254 "scsi scan: peripheral device type"
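The scsi_scan change tightens how INQUIRY data byte 0 is decoded: the peripheral qualifier (PQ) lives in bits 7-5 and the peripheral device type (PDT) in bits 4-0, and the new condition only skips the LUN when PDT is 0x1f. A small sketch of that decoding (helper names are mine, not the kernel's):

```c
/* SCSI INQUIRY data byte 0:
 *   bits 7-5 = peripheral qualifier (PQ)
 *   bits 4-0 = peripheral device type (PDT)
 */
static unsigned int inq_pq(unsigned char byte0)
{
	return byte0 >> 5;
}

static unsigned int inq_pdt(unsigned char byte0)
{
	return byte0 & 0x1f;
}
```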
+2
drivers/scsi/scsi_sysfs.c
··· 451 451 struct scsi_vpd *vpd_pgb0 = NULL, *vpd_pgb1 = NULL, *vpd_pgb2 = NULL; 452 452 unsigned long flags; 453 453 454 + might_sleep(); 455 + 454 456 scsi_dh_release_device(sdev); 455 457 456 458 parent = sdev->sdev_gendev.parent;
+2 -2
drivers/target/target_core_tmr.c
··· 73 73 { 74 74 struct se_session *sess = se_cmd->se_sess; 75 75 76 - assert_spin_locked(&sess->sess_cmd_lock); 77 - WARN_ON_ONCE(!irqs_disabled()); 76 + lockdep_assert_held(&sess->sess_cmd_lock); 77 + 78 78 /* 79 79 * If command already reached CMD_T_COMPLETE state within 80 80 * target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
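The target_core_tmr hunk replaces `assert_spin_locked()` plus an explicit `irqs_disabled()` check with `lockdep_assert_held()`, which verifies the current context holds the lock and compiles away when lockdep is disabled. A rough userspace analogue of "assert the lock is held" with a POSIX mutex (illustrative only; `pthread_mutex_trylock` returns EBUSY on a locked mutex, while lockdep is stronger and knows *who* holds the lock):

```c
#include <pthread.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 1 if the mutex is currently locked, 0 otherwise: trylock
 * succeeds only when the lock is free, in which case we drop it again. */
static int mutex_is_held(pthread_mutex_t *m)
{
	if (pthread_mutex_trylock(m) == 0) {
		pthread_mutex_unlock(m);
		return 0;
	}
	return 1;
}

static int take_demo_lock(void)
{
	return pthread_mutex_lock(&demo_lock);
}
```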
+22 -6
drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
··· 44 44 int trip, int *temp) 45 45 { 46 46 struct int34x_thermal_zone *d = zone->devdata; 47 - int i; 47 + int i, ret = 0; 48 48 49 49 if (d->override_ops && d->override_ops->get_trip_temp) 50 50 return d->override_ops->get_trip_temp(zone, trip, temp); 51 + 52 + mutex_lock(&d->trip_mutex); 51 53 52 54 if (trip < d->aux_trip_nr) 53 55 *temp = d->aux_trips[trip]; ··· 68 66 } 69 67 } 70 68 if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT) 71 - return -EINVAL; 69 + ret = -EINVAL; 72 70 } 73 71 74 - return 0; 72 + mutex_unlock(&d->trip_mutex); 73 + 74 + return ret; 75 75 } 76 76 77 77 static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone, ··· 81 77 enum thermal_trip_type *type) 82 78 { 83 79 struct int34x_thermal_zone *d = zone->devdata; 84 - int i; 80 + int i, ret = 0; 85 81 86 82 if (d->override_ops && d->override_ops->get_trip_type) 87 83 return d->override_ops->get_trip_type(zone, trip, type); 84 + 85 + mutex_lock(&d->trip_mutex); 88 86 89 87 if (trip < d->aux_trip_nr) 90 88 *type = THERMAL_TRIP_PASSIVE; ··· 105 99 } 106 100 } 107 101 if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT) 108 - return -EINVAL; 102 + ret = -EINVAL; 109 103 } 110 104 111 - return 0; 105 + mutex_unlock(&d->trip_mutex); 106 + 107 + return ret; 112 108 } 113 109 114 110 static int int340x_thermal_set_trip_temp(struct thermal_zone_device *zone, ··· 188 180 int trip_cnt = int34x_zone->aux_trip_nr; 189 181 int i; 190 182 183 + mutex_lock(&int34x_zone->trip_mutex); 184 + 191 185 int34x_zone->crt_trip_id = -1; 192 186 if (!int340x_thermal_get_trip_config(int34x_zone->adev->handle, "_CRT", 193 187 &int34x_zone->crt_temp)) ··· 217 207 int34x_zone->act_trips[i].valid = true; 218 208 } 219 209 210 + mutex_unlock(&int34x_zone->trip_mutex); 211 + 220 212 return trip_cnt; 221 213 } 222 214 EXPORT_SYMBOL_GPL(int340x_thermal_read_trips); ··· 241 229 GFP_KERNEL); 242 230 if (!int34x_thermal_zone) 243 231 return ERR_PTR(-ENOMEM); 232 + 233 + mutex_init(&int34x_thermal_zone->trip_mutex); 244 234 
245 235 int34x_thermal_zone->adev = adev; 246 236 int34x_thermal_zone->override_ops = override_ops; ··· 295 281 acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table); 296 282 kfree(int34x_thermal_zone->aux_trips); 297 283 err_trip_alloc: 284 + mutex_destroy(&int34x_thermal_zone->trip_mutex); 298 285 kfree(int34x_thermal_zone); 299 286 return ERR_PTR(ret); 300 287 } ··· 307 292 thermal_zone_device_unregister(int34x_thermal_zone->zone); 308 293 acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table); 309 294 kfree(int34x_thermal_zone->aux_trips); 295 + mutex_destroy(&int34x_thermal_zone->trip_mutex); 310 296 kfree(int34x_thermal_zone); 311 297 } 312 298 EXPORT_SYMBOL_GPL(int340x_thermal_zone_remove);
+1
drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
··· 32 32 struct thermal_zone_device_ops *override_ops; 33 33 void *priv_data; 34 34 struct acpi_lpat_conversion_table *lpat_table; 35 + struct mutex trip_mutex; 35 36 }; 36 37 37 38 struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *,
+17 -4
drivers/tty/serial/8250/8250_dma.c
··· 43 43 struct uart_8250_dma *dma = p->dma; 44 44 struct tty_port *tty_port = &p->port.state->port; 45 45 struct dma_tx_state state; 46 + enum dma_status dma_status; 46 47 int count; 47 48 48 - dma->rx_running = 0; 49 - dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); 49 + /* 50 + * New DMA Rx can be started during the completion handler before it 51 + * could acquire port's lock and it might still be ongoing. Don't do 52 + * anything in that case. 53 + */ 54 + dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); 55 + if (dma_status == DMA_IN_PROGRESS) 56 + return; 50 57 51 58 count = dma->rx_size - state.residue; 52 59 53 60 tty_insert_flip_string(tty_port, dma->rx_buf, count); 54 61 p->port.icount.rx += count; 62 + dma->rx_running = 0; 55 63 56 64 tty_flip_buffer_push(tty_port); 57 65 } ··· 70 62 struct uart_8250_dma *dma = p->dma; 71 63 unsigned long flags; 72 64 73 - __dma_rx_complete(p); 74 - 75 65 spin_lock_irqsave(&p->port.lock, flags); 66 + if (dma->rx_running) 67 + __dma_rx_complete(p); 68 + 69 + /* 70 + * Cannot be combined with the previous check because __dma_rx_complete() 71 + * changes dma->rx_running. 72 + */ 76 73 if (!dma->rx_running && (serial_lsr_in(p) & UART_LSR_DR)) 77 74 p->dma->rx_dma(p); 78 75 spin_unlock_irqrestore(&p->port.lock, flags);
+5 -28
drivers/tty/serial/stm32-usart.c
··· 797 797 spin_unlock(&port->lock); 798 798 } 799 799 800 - if (stm32_usart_rx_dma_enabled(port)) 801 - return IRQ_WAKE_THREAD; 802 - else 803 - return IRQ_HANDLED; 804 - } 805 - 806 - static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr) 807 - { 808 - struct uart_port *port = ptr; 809 - struct tty_port *tport = &port->state->port; 810 - struct stm32_port *stm32_port = to_stm32_port(port); 811 - unsigned int size; 812 - unsigned long flags; 813 - 814 800 /* Receiver timeout irq for DMA RX */ 815 - if (!stm32_port->throttled) { 816 - spin_lock_irqsave(&port->lock, flags); 801 + if (stm32_usart_rx_dma_enabled(port) && !stm32_port->throttled) { 802 + spin_lock(&port->lock); 817 803 size = stm32_usart_receive_chars(port, false); 818 - uart_unlock_and_check_sysrq_irqrestore(port, flags); 804 + uart_unlock_and_check_sysrq(port); 819 805 if (size) 820 806 tty_flip_buffer_push(tport); 821 807 } ··· 1001 1015 u32 val; 1002 1016 int ret; 1003 1017 1004 - ret = request_threaded_irq(port->irq, stm32_usart_interrupt, 1005 - stm32_usart_threaded_interrupt, 1006 - IRQF_ONESHOT | IRQF_NO_SUSPEND, 1007 - name, port); 1018 + ret = request_irq(port->irq, stm32_usart_interrupt, 1019 + IRQF_NO_SUSPEND, name, port); 1008 1020 if (ret) 1009 1021 return ret; 1010 1022 ··· 1584 1600 struct device *dev = &pdev->dev; 1585 1601 struct dma_slave_config config; 1586 1602 int ret; 1587 - 1588 - /* 1589 - * Using DMA and threaded handler for the console could lead to 1590 - * deadlocks. 1591 - */ 1592 - if (uart_console(port)) 1593 - return -ENODEV; 1594 1603 1595 1604 stm32port->rx_buf = dma_alloc_coherent(dev, RX_BUF_L, 1596 1605 &stm32port->rx_dma_buf,
+5 -4
drivers/tty/vt/vc_screen.c
··· 386 386 387 387 uni_mode = use_unicode(inode); 388 388 attr = use_attributes(inode); 389 - ret = -ENXIO; 390 - vc = vcs_vc(inode, &viewed); 391 - if (!vc) 392 - goto unlock_out; 393 389 394 390 ret = -EINVAL; 395 391 if (pos < 0) ··· 402 406 while (count) { 403 407 unsigned int this_round, skip = 0; 404 408 int size; 409 + 410 + ret = -ENXIO; 411 + vc = vcs_vc(inode, &viewed); 412 + if (!vc) 413 + goto unlock_out; 405 414 406 415 /* Check whether we are above size each round, 407 416 * as copy_to_user at the end of this loop
+15 -14
drivers/ufs/core/ufshcd.c
··· 1234 1234 * clock scaling is in progress 1235 1235 */ 1236 1236 ufshcd_scsi_block_requests(hba); 1237 + mutex_lock(&hba->wb_mutex); 1237 1238 down_write(&hba->clk_scaling_lock); 1238 1239 1239 1240 if (!hba->clk_scaling.is_allowed || 1240 1241 ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) { 1241 1242 ret = -EBUSY; 1242 1243 up_write(&hba->clk_scaling_lock); 1244 + mutex_unlock(&hba->wb_mutex); 1243 1245 ufshcd_scsi_unblock_requests(hba); 1244 1246 goto out; 1245 1247 } ··· 1253 1251 return ret; 1254 1252 } 1255 1253 1256 - static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock) 1254 + static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err, bool scale_up) 1257 1255 { 1258 - if (writelock) 1259 - up_write(&hba->clk_scaling_lock); 1260 - else 1261 - up_read(&hba->clk_scaling_lock); 1256 + up_write(&hba->clk_scaling_lock); 1257 + 1258 + /* Enable Write Booster if we have scaled up else disable it */ 1259 + if (ufshcd_enable_wb_if_scaling_up(hba) && !err) 1260 + ufshcd_wb_toggle(hba, scale_up); 1261 + 1262 + mutex_unlock(&hba->wb_mutex); 1263 + 1262 1264 ufshcd_scsi_unblock_requests(hba); 1263 1265 ufshcd_release(hba); 1264 1266 } ··· 1279 1273 static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up) 1280 1274 { 1281 1275 int ret = 0; 1282 - bool is_writelock = true; 1283 1276 1284 1277 ret = ufshcd_clock_scaling_prepare(hba); 1285 1278 if (ret) ··· 1307 1302 } 1308 1303 } 1309 1304 1310 - /* Enable Write Booster if we have scaled up else disable it */ 1311 - if (ufshcd_enable_wb_if_scaling_up(hba)) { 1312 - downgrade_write(&hba->clk_scaling_lock); 1313 - is_writelock = false; 1314 - ufshcd_wb_toggle(hba, scale_up); 1315 - } 1316 - 1317 1305 out_unprepare: 1318 - ufshcd_clock_scaling_unprepare(hba, is_writelock); 1306 + ufshcd_clock_scaling_unprepare(hba, ret, scale_up); 1319 1307 return ret; 1320 1308 } 1321 1309 ··· 6064 6066 6065 6067 static void ufshcd_clk_scaling_allow(struct ufs_hba *hba, bool allow) 6066 6068 { 6069 + mutex_lock(&hba->wb_mutex); 6067 6070 down_write(&hba->clk_scaling_lock); 6068 6071 hba->clk_scaling.is_allowed = allow; 6069 6072 up_write(&hba->clk_scaling_lock); 6073 + mutex_unlock(&hba->wb_mutex); 6070 6074 } 6071 6075 6072 6076 static void ufshcd_clk_scaling_suspend(struct ufs_hba *hba, bool suspend) ··· 9793 9793 /* Initialize mutex for exception event control */ 9794 9794 mutex_init(&hba->ee_ctrl_mutex); 9795 9795 9796 + mutex_init(&hba->wb_mutex); 9796 9797 init_rwsem(&hba->clk_scaling_lock); 9797 9798 9798 9799 ufshcd_init_clk_gating(hba);
+1 -1
drivers/usb/dwc3/dwc3-qcom.c
··· 901 901 qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev); 902 902 903 903 /* enable vbus override for device mode */ 904 - if (qcom->mode == USB_DR_MODE_PERIPHERAL) 904 + if (qcom->mode != USB_DR_MODE_HOST) 905 905 dwc3_qcom_vbus_override_enable(qcom, true); 906 906 907 907 /* register extcon to override sw_vbus on Vbus change later */
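The dwc3-qcom one-liner widens the vbus-override condition from "peripheral only" to "anything that is not host", which also covers OTG/dual-role controllers. A sketch of the check (the enum mirrors the kernel's `usb_dr_mode` values but is redeclared locally for illustration):

```c
/* Local redeclaration for illustration; the kernel defines this
 * in include/linux/usb/otg.h. */
enum usb_dr_mode {
	USB_DR_MODE_UNKNOWN,
	USB_DR_MODE_HOST,
	USB_DR_MODE_PERIPHERAL,
	USB_DR_MODE_OTG,
};

/* Old check (mode == USB_DR_MODE_PERIPHERAL) missed OTG mode.
 * New check: enable the override for every non-host mode. */
static int needs_vbus_override(enum usb_dr_mode mode)
{
	return mode != USB_DR_MODE_HOST;
}
```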
-1
drivers/usb/fotg210/fotg210-udc.c
··· 1010 1010 int ret; 1011 1011 1012 1012 /* hook up the driver */ 1013 - driver->driver.bus = NULL; 1014 1013 fotg210->driver = driver; 1015 1014 fotg210->gadget.dev.of_node = fotg210->dev->of_node; 1016 1015 fotg210->gadget.speed = USB_SPEED_UNKNOWN;
+3 -1
drivers/usb/gadget/function/f_fs.c
··· 279 279 struct usb_request *req = ffs->ep0req; 280 280 int ret; 281 281 282 - if (!req) 282 + if (!req) { 283 + spin_unlock_irq(&ffs->ev.waitq.lock); 283 284 return -EINVAL; 285 + } 284 286 285 287 req->zero = len < le16_to_cpu(ffs->ev.setup.wLength); 286 288
+1
drivers/usb/gadget/function/f_uac2.c
··· 1142 1142 } 1143 1143 std_as_out_if0_desc.bInterfaceNumber = ret; 1144 1144 std_as_out_if1_desc.bInterfaceNumber = ret; 1145 + std_as_out_if1_desc.bNumEndpoints = 1; 1145 1146 uac2->as_out_intf = ret; 1146 1147 uac2->as_out_alt = 0; 1147 1148
-1
drivers/usb/gadget/udc/bcm63xx_udc.c
··· 1830 1830 bcm63xx_select_phy_mode(udc, true); 1831 1831 1832 1832 udc->driver = driver; 1833 - driver->driver.bus = NULL; 1834 1833 udc->gadget.dev.of_node = udc->dev->of_node; 1835 1834 1836 1835 spin_unlock_irqrestore(&udc->lock, flags);
-1
drivers/usb/gadget/udc/fsl_qe_udc.c
··· 2285 2285 /* lock is needed but whether should use this lock or another */ 2286 2286 spin_lock_irqsave(&udc->lock, flags); 2287 2287 2288 - driver->driver.bus = NULL; 2289 2288 /* hook up the driver */ 2290 2289 udc->driver = driver; 2291 2290 udc->gadget.speed = driver->max_speed;
-1
drivers/usb/gadget/udc/fsl_udc_core.c
··· 1943 1943 /* lock is needed but whether should use this lock or another */ 1944 1944 spin_lock_irqsave(&udc_controller->lock, flags); 1945 1945 1946 - driver->driver.bus = NULL; 1947 1946 /* hook up the driver */ 1948 1947 udc_controller->driver = driver; 1949 1948 spin_unlock_irqrestore(&udc_controller->lock, flags);
-1
drivers/usb/gadget/udc/fusb300_udc.c
··· 1311 1311 struct fusb300 *fusb300 = to_fusb300(g); 1312 1312 1313 1313 /* hook up the driver */ 1314 - driver->driver.bus = NULL; 1315 1314 fusb300->driver = driver; 1316 1315 1317 1316 return 0;
-1
drivers/usb/gadget/udc/goku_udc.c
··· 1375 1375 struct goku_udc *dev = to_goku_udc(g); 1376 1376 1377 1377 /* hook up the driver */ 1378 - driver->driver.bus = NULL; 1379 1378 dev->driver = driver; 1380 1379 1381 1380 /*
-1
drivers/usb/gadget/udc/gr_udc.c
··· 1906 1906 spin_lock(&dev->lock); 1907 1907 1908 1908 /* Hook up the driver */ 1909 - driver->driver.bus = NULL; 1910 1909 dev->driver = driver; 1911 1910 1912 1911 /* Get ready for host detection */
-1
drivers/usb/gadget/udc/m66592-udc.c
··· 1454 1454 struct m66592 *m66592 = to_m66592(g); 1455 1455 1456 1456 /* hook up the driver */ 1457 - driver->driver.bus = NULL; 1458 1457 m66592->driver = driver; 1459 1458 1460 1459 m66592_bset(m66592, M66592_VBSE | M66592_URST, M66592_INTENB0);
-1
drivers/usb/gadget/udc/max3420_udc.c
··· 1108 1108 1109 1109 spin_lock_irqsave(&udc->lock, flags); 1110 1110 /* hook up the driver */ 1111 - driver->driver.bus = NULL; 1112 1111 udc->driver = driver; 1113 1112 udc->gadget.speed = USB_SPEED_FULL; 1114 1113
-1
drivers/usb/gadget/udc/mv_u3d_core.c
··· 1243 1243 } 1244 1244 1245 1245 /* hook up the driver ... */ 1246 - driver->driver.bus = NULL; 1247 1246 u3d->driver = driver; 1248 1247 1249 1248 u3d->ep0_dir = USB_DIR_OUT;
-1
drivers/usb/gadget/udc/mv_udc_core.c
··· 1359 1359 spin_lock_irqsave(&udc->lock, flags); 1360 1360 1361 1361 /* hook up the driver ... */ 1362 - driver->driver.bus = NULL; 1363 1362 udc->driver = driver; 1364 1363 1365 1364 udc->usb_state = USB_STATE_ATTACHED;
-1
drivers/usb/gadget/udc/net2272.c
··· 1451 1451 dev->ep[i].irqs = 0; 1452 1452 /* hook up the driver ... */ 1453 1453 dev->softconnect = 1; 1454 - driver->driver.bus = NULL; 1455 1454 dev->driver = driver; 1456 1455 1457 1456 /* ... then enable host detection and ep0; and we're ready
-1
drivers/usb/gadget/udc/net2280.c
··· 2423 2423 dev->ep[i].irqs = 0; 2424 2424 2425 2425 /* hook up the driver ... */ 2426 - driver->driver.bus = NULL; 2427 2426 dev->driver = driver; 2428 2427 2429 2428 retval = device_create_file(&dev->pdev->dev, &dev_attr_function);
-1
drivers/usb/gadget/udc/omap_udc.c
··· 2066 2066 udc->softconnect = 1; 2067 2067 2068 2068 /* hook up the driver */ 2069 - driver->driver.bus = NULL; 2070 2069 udc->driver = driver; 2071 2070 spin_unlock_irqrestore(&udc->lock, flags); 2072 2071
-1
drivers/usb/gadget/udc/pch_udc.c
··· 2908 2908 { 2909 2909 struct pch_udc_dev *dev = to_pch_udc(g); 2910 2910 2911 - driver->driver.bus = NULL; 2912 2911 dev->driver = driver; 2913 2912 2914 2913 /* get ready for ep0 traffic */
-1
drivers/usb/gadget/udc/snps_udc_core.c
··· 1933 1933 struct udc *dev = to_amd5536_udc(g); 1934 1934 u32 tmp; 1935 1935 1936 - driver->driver.bus = NULL; 1937 1936 dev->driver = driver; 1938 1937 1939 1938 /* Some gadget drivers use both ep0 directions.
+8 -1
drivers/usb/typec/ucsi/ucsi.c
··· 1400 1400 con->port = NULL; 1401 1401 } 1402 1402 1403 + kfree(ucsi->connector); 1404 + ucsi->connector = NULL; 1405 + 1403 1406 err_reset: 1404 1407 memset(&ucsi->cap, 0, sizeof(ucsi->cap)); 1405 1408 ucsi_reset_ppm(ucsi); ··· 1434 1431 1435 1432 int ucsi_resume(struct ucsi *ucsi) 1436 1433 { 1437 - queue_work(system_long_wq, &ucsi->resume_work); 1434 + if (ucsi->connector) 1435 + queue_work(system_long_wq, &ucsi->resume_work); 1438 1436 return 0; 1439 1437 } 1440 1438 EXPORT_SYMBOL_GPL(ucsi_resume); ··· 1554 1550 1555 1551 /* Disable notifications */ 1556 1552 ucsi->ops->async_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd)); 1553 + 1554 + if (!ucsi->connector) 1555 + return; 1557 1556 1558 1557 for (i = 0; i < ucsi->cap.num_connectors; i++) { 1559 1558 cancel_work_sync(&ucsi->connector[i].work);
+1 -1
drivers/vdpa/ifcvf/ifcvf_main.c
··· 849 849 ret = ifcvf_init_hw(vf, pdev); 850 850 if (ret) { 851 851 IFCVF_ERR(pdev, "Failed to init IFCVF hw\n"); 852 - return ret; 852 + goto err; 853 853 } 854 854 855 855 for (i = 0; i < vf->nr_vring; i++)
+20 -11
drivers/vfio/vfio_iommu_type1.c
··· 1856 1856 * significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when 1857 1857 * hugetlbfs is in use. 1858 1858 */ 1859 - static void vfio_test_domain_fgsp(struct vfio_domain *domain) 1859 + static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions) 1860 1860 { 1861 - struct page *pages; 1862 1861 int ret, order = get_order(PAGE_SIZE * 2); 1862 + struct vfio_iova *region; 1863 + struct page *pages; 1864 + dma_addr_t start; 1863 1865 1864 1866 pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); 1865 1867 if (!pages) 1866 1868 return; 1867 1869 1868 - ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2, 1869 - IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE); 1870 - if (!ret) { 1871 - size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE); 1870 + list_for_each_entry(region, regions, list) { 1871 + start = ALIGN(region->start, PAGE_SIZE * 2); 1872 + if (start >= region->end || (region->end - start < PAGE_SIZE * 2)) 1873 + continue; 1872 1874 1873 - if (unmapped == PAGE_SIZE) 1874 - iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE); 1875 - else 1876 - domain->fgsp = true; 1875 + ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2, 1876 + IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE); 1877 + if (!ret) { 1878 + size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE); 1879 + 1880 + if (unmapped == PAGE_SIZE) 1881 + iommu_unmap(domain->domain, start + PAGE_SIZE, PAGE_SIZE); 1882 + else 1883 + domain->fgsp = true; 1884 + } 1885 + break; 1877 1886 } 1878 1887 1879 1888 __free_pages(pages, order); ··· 2335 2326 } 2336 2327 } 2337 2328 2338 - vfio_test_domain_fgsp(domain); 2329 + vfio_test_domain_fgsp(domain, &iova_copy); 2339 2330 2340 2331 /* replay mappings on new domains */ 2341 2332 ret = vfio_iommu_replay(iommu, domain);
+3
drivers/vhost/net.c
··· 1511 1511 nvq = &n->vqs[index]; 1512 1512 mutex_lock(&vq->mutex); 1513 1513 1514 + if (fd == -1) 1515 + vhost_clear_msg(&n->dev); 1516 + 1514 1517 /* Verify that ring has been setup correctly. */ 1515 1518 if (!vhost_vq_access_ok(vq)) { 1516 1519 r = -EFAULT;
+17 -4
drivers/vhost/scsi.c
··· 80 80 struct scatterlist *tvc_prot_sgl; 81 81 struct page **tvc_upages; 82 82 /* Pointer to response header iovec */ 83 - struct iovec tvc_resp_iov; 83 + struct iovec *tvc_resp_iov; 84 84 /* Pointer to vhost_scsi for our device */ 85 85 struct vhost_scsi *tvc_vhost; 86 86 /* Pointer to vhost_virtqueue for the cmd */ ··· 563 563 memcpy(v_rsp.sense, cmd->tvc_sense_buf, 564 564 se_cmd->scsi_sense_length); 565 565 566 - iov_iter_init(&iov_iter, ITER_DEST, &cmd->tvc_resp_iov, 566 + iov_iter_init(&iov_iter, ITER_DEST, cmd->tvc_resp_iov, 567 567 cmd->tvc_in_iovs, sizeof(v_rsp)); 568 568 ret = copy_to_iter(&v_rsp, sizeof(v_rsp), &iov_iter); 569 569 if (likely(ret == sizeof(v_rsp))) { ··· 594 594 struct vhost_scsi_cmd *cmd; 595 595 struct vhost_scsi_nexus *tv_nexus; 596 596 struct scatterlist *sg, *prot_sg; 597 + struct iovec *tvc_resp_iov; 597 598 struct page **pages; 598 599 int tag; 599 600 ··· 614 613 sg = cmd->tvc_sgl; 615 614 prot_sg = cmd->tvc_prot_sgl; 616 615 pages = cmd->tvc_upages; 616 + tvc_resp_iov = cmd->tvc_resp_iov; 617 617 memset(cmd, 0, sizeof(*cmd)); 618 618 cmd->tvc_sgl = sg; 619 619 cmd->tvc_prot_sgl = prot_sg; ··· 627 625 cmd->tvc_data_direction = data_direction; 628 626 cmd->tvc_nexus = tv_nexus; 629 627 cmd->inflight = vhost_scsi_get_inflight(vq); 628 + cmd->tvc_resp_iov = tvc_resp_iov; 630 629 631 630 memcpy(cmd->tvc_cdb, cdb, VHOST_SCSI_MAX_CDB_SIZE); 632 631 ··· 938 935 struct iov_iter in_iter, prot_iter, data_iter; 939 936 u64 tag; 940 937 u32 exp_data_len, data_direction; 941 - int ret, prot_bytes, c = 0; 938 + int ret, prot_bytes, i, c = 0; 942 939 u16 lun; 943 940 u8 task_attr; 944 941 bool t10_pi = vhost_has_feature(vq, VIRTIO_SCSI_F_T10_PI); ··· 1095 1092 } 1096 1093 cmd->tvc_vhost = vs; 1097 1094 cmd->tvc_vq = vq; 1098 - cmd->tvc_resp_iov = vq->iov[vc.out]; 1095 + for (i = 0; i < vc.in ; i++) 1096 + cmd->tvc_resp_iov[i] = vq->iov[vc.out + i]; 1099 1097 cmd->tvc_in_iovs = vc.in; 1100 1098 1101 1099 pr_debug("vhost_scsi got command opcode: %#02x, lun: %d\n", ··· 1465 1461 kfree(tv_cmd->tvc_sgl); 1466 1462 kfree(tv_cmd->tvc_prot_sgl); 1467 1463 kfree(tv_cmd->tvc_upages); 1464 + kfree(tv_cmd->tvc_resp_iov); 1468 1465 } 1469 1466 1470 1467 sbitmap_free(&svq->scsi_tags); ··· 1510 1505 GFP_KERNEL); 1511 1506 if (!tv_cmd->tvc_upages) { 1512 1507 pr_err("Unable to allocate tv_cmd->tvc_upages\n"); 1508 + goto out; 1509 + } 1510 + 1511 + tv_cmd->tvc_resp_iov = kcalloc(UIO_MAXIOV, 1512 + sizeof(struct iovec), 1513 + GFP_KERNEL); 1514 + if (!tv_cmd->tvc_resp_iov) { 1515 + pr_err("Unable to allocate tv_cmd->tvc_resp_iov\n"); 1513 1516 goto out; 1514 1517 } 1515 1518
+2 -1
drivers/vhost/vhost.c
··· 661 661 } 662 662 EXPORT_SYMBOL_GPL(vhost_dev_stop); 663 663 664 - static void vhost_clear_msg(struct vhost_dev *dev) 664 + void vhost_clear_msg(struct vhost_dev *dev) 665 665 { 666 666 struct vhost_msg_node *node, *n; 667 667 ··· 679 679 680 680 spin_unlock(&dev->iotlb_lock); 681 681 } 682 + EXPORT_SYMBOL_GPL(vhost_clear_msg); 682 683 683 684 void vhost_dev_cleanup(struct vhost_dev *dev) 684 685 {
+1
drivers/vhost/vhost.h
··· 181 181 long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp); 182 182 bool vhost_vq_access_ok(struct vhost_virtqueue *vq); 183 183 bool vhost_log_access_ok(struct vhost_dev *); 184 + void vhost_clear_msg(struct vhost_dev *dev); 184 185 185 186 int vhost_get_vq_desc(struct vhost_virtqueue *, 186 187 struct iovec iov[], unsigned int iov_count,
+1 -21
drivers/video/fbdev/atmel_lcdfb.c
··· 49 49 struct clk *lcdc_clk; 50 50 51 51 struct backlight_device *backlight; 52 - u8 bl_power; 53 52 u8 saved_lcdcon; 54 53 55 54 u32 pseudo_palette[16]; ··· 108 109 static int atmel_bl_update_status(struct backlight_device *bl) 109 110 { 110 111 struct atmel_lcdfb_info *sinfo = bl_get_data(bl); 111 - int power = sinfo->bl_power; 112 - int brightness = bl->props.brightness; 113 - 114 - /* REVISIT there may be a meaningful difference between 115 - * fb_blank and power ... there seem to be some cases 116 - * this doesn't handle correctly. 117 - */ 118 - if (bl->props.fb_blank != sinfo->bl_power) 119 - power = bl->props.fb_blank; 120 - else if (bl->props.power != sinfo->bl_power) 121 - power = bl->props.power; 122 - 123 - if (brightness < 0 && power == FB_BLANK_UNBLANK) 124 - brightness = lcdc_readl(sinfo, ATMEL_LCDC_CONTRAST_VAL); 125 - else if (power != FB_BLANK_UNBLANK) 126 - brightness = 0; 112 + int brightness = backlight_get_brightness(bl); 127 113 128 114 lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_VAL, brightness); 129 115 if (contrast_ctr & ATMEL_LCDC_POL_POSITIVE) ··· 116 132 brightness ? contrast_ctr : 0); 117 133 else 118 134 lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_CTR, contrast_ctr); 119 - 120 - bl->props.fb_blank = bl->props.power = sinfo->bl_power = power; 121 135 122 136 return 0; 123 137 } ··· 136 154 { 137 155 struct backlight_properties props; 138 156 struct backlight_device *bl; 139 - 140 - sinfo->bl_power = FB_BLANK_UNBLANK; 141 157 142 158 if (sinfo->backlight) 143 159 return;
+2 -4
drivers/video/fbdev/aty/aty128fb.c
··· 1766 1766 unsigned int reg = aty_ld_le32(LVDS_GEN_CNTL); 1767 1767 int level; 1768 1768 1769 - if (bd->props.power != FB_BLANK_UNBLANK || 1770 - bd->props.fb_blank != FB_BLANK_UNBLANK || 1771 - !par->lcd_on) 1769 + if (!par->lcd_on) 1772 1770 level = 0; 1773 1771 else 1774 - level = bd->props.brightness; 1772 + level = backlight_get_brightness(bd); 1775 1773 1776 1774 reg |= LVDS_BL_MOD_EN | LVDS_BLON; 1777 1775 if (level > 0) {
+1 -7
drivers/video/fbdev/aty/atyfb_base.c
··· 2219 2219 { 2220 2220 struct atyfb_par *par = bl_get_data(bd); 2221 2221 unsigned int reg = aty_ld_lcd(LCD_MISC_CNTL, par); 2222 - int level; 2223 - 2224 - if (bd->props.power != FB_BLANK_UNBLANK || 2225 - bd->props.fb_blank != FB_BLANK_UNBLANK) 2226 - level = 0; 2227 - else 2228 - level = bd->props.brightness; 2222 + int level = backlight_get_brightness(bd); 2229 2223 2230 2224 reg |= (BLMOD_EN | BIASMOD_EN); 2231 2225 if (level > 0) {
+1 -5
drivers/video/fbdev/aty/radeon_backlight.c
··· 57 57 * backlight. This provides some greater power saving and the display 58 58 * is useless without backlight anyway. 59 59 */ 60 - if (bd->props.power != FB_BLANK_UNBLANK || 61 - bd->props.fb_blank != FB_BLANK_UNBLANK) 62 - level = 0; 63 - else 64 - level = bd->props.brightness; 60 + level = backlight_get_brightness(bd); 65 61 66 62 del_timer_sync(&rinfo->lvds_timer); 67 63 radeon_engine_idle();
+5 -2
drivers/video/fbdev/core/fbcon.c
··· 2495 2495 h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres)) 2496 2496 return -EINVAL; 2497 2497 2498 + if (font->width > 32 || font->height > 32) 2499 + return -EINVAL; 2500 + 2498 2501 /* Make sure drawing engine can handle the font */ 2499 - if (!(info->pixmap.blit_x & (1 << (font->width - 1))) || 2500 - !(info->pixmap.blit_y & (1 << (font->height - 1)))) 2502 + if (!(info->pixmap.blit_x & BIT(font->width - 1)) || 2503 + !(info->pixmap.blit_y & BIT(font->height - 1))) 2501 2504 return -EINVAL; 2502 2505 2503 2506 /* Make sure driver can handle the font length */
+1 -1
drivers/video/fbdev/core/fbmon.c
··· 1050 1050 } 1051 1051 1052 1052 /** 1053 - * fb_get_hblank_by_freq - get horizontal blank time given hfreq 1053 + * fb_get_hblank_by_hfreq - get horizontal blank time given hfreq 1054 1054 * @hfreq: horizontal freq 1055 1055 * @xres: horizontal resolution in pixels 1056 1056 *
+1 -6
drivers/video/fbdev/mx3fb.c
··· 283 283 static int mx3fb_bl_update_status(struct backlight_device *bl) 284 284 { 285 285 struct mx3fb_data *fbd = bl_get_data(bl); 286 - int brightness = bl->props.brightness; 287 - 288 - if (bl->props.power != FB_BLANK_UNBLANK) 289 - brightness = 0; 290 - if (bl->props.fb_blank != FB_BLANK_UNBLANK) 291 - brightness = 0; 286 + int brightness = backlight_get_brightness(bl); 292 287 293 288 fbd->backlight_level = (fbd->backlight_level & ~0xFF) | brightness; 294 289
+1 -7
drivers/video/fbdev/nvidia/nv_backlight.c
··· 49 49 { 50 50 struct nvidia_par *par = bl_get_data(bd); 51 51 u32 tmp_pcrt, tmp_pmc, fpcontrol; 52 - int level; 52 + int level = backlight_get_brightness(bd); 53 53 54 54 if (!par->FlatPanel) 55 55 return 0; 56 - 57 - if (bd->props.power != FB_BLANK_UNBLANK || 58 - bd->props.fb_blank != FB_BLANK_UNBLANK) 59 - level = 0; 60 - else 61 - level = bd->props.brightness; 62 56 63 57 tmp_pmc = NV_RD32(par->PMC, 0x10F0) & 0x0000FFFF; 64 58 tmp_pcrt = NV_RD32(par->PCRTC0, 0x081C) & 0xFFFFFFFC;
+1 -7
drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
··· 331 331 struct panel_drv_data *ddata = dev_get_drvdata(&dev->dev); 332 332 struct omap_dss_device *in = ddata->in; 333 333 int r; 334 - int level; 335 - 336 - if (dev->props.fb_blank == FB_BLANK_UNBLANK && 337 - dev->props.power == FB_BLANK_UNBLANK) 338 - level = dev->props.brightness; 339 - else 340 - level = 0; 334 + int level = backlight_get_brightness(dev); 341 335 342 336 dev_dbg(&ddata->pdev->dev, "update brightness to %d\n", level); 343 337
+4 -3
drivers/video/fbdev/omap2/omapfb/dss/display-sysfs.c
··· 10 10 #define DSS_SUBSYS_NAME "DISPLAY" 11 11 12 12 #include <linux/kernel.h> 13 + #include <linux/kstrtox.h> 13 14 #include <linux/module.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/sysfs.h> ··· 37 36 int r; 38 37 bool enable; 39 38 40 - r = strtobool(buf, &enable); 39 + r = kstrtobool(buf, &enable); 41 40 if (r) 42 41 return r; 43 42 ··· 74 73 if (!dssdev->driver->enable_te || !dssdev->driver->get_te) 75 74 return -ENOENT; 76 75 77 - r = strtobool(buf, &te); 76 + r = kstrtobool(buf, &te); 78 77 if (r) 79 78 return r; 80 79 ··· 184 183 if (!dssdev->driver->set_mirror || !dssdev->driver->get_mirror) 185 184 return -ENOENT; 186 185 187 - r = strtobool(buf, &mirror); 186 + r = kstrtobool(buf, &mirror); 188 187 if (r) 189 188 return r; 190 189
+4 -3
drivers/video/fbdev/omap2/omapfb/dss/manager-sysfs.c
··· 10 10 #define DSS_SUBSYS_NAME "MANAGER" 11 11 12 12 #include <linux/kernel.h> 13 + #include <linux/kstrtox.h> 13 14 #include <linux/slab.h> 14 15 #include <linux/module.h> 15 16 #include <linux/platform_device.h> ··· 247 246 bool enable; 248 247 int r; 249 248 250 - r = strtobool(buf, &enable); 249 + r = kstrtobool(buf, &enable); 251 250 if (r) 252 251 return r; 253 252 ··· 291 290 if(!dss_has_feature(FEAT_ALPHA_FIXED_ZORDER)) 292 291 return -ENODEV; 293 292 294 - r = strtobool(buf, &enable); 293 + r = kstrtobool(buf, &enable); 295 294 if (r) 296 295 return r; 297 296 ··· 330 329 if (!dss_has_feature(FEAT_CPR)) 331 330 return -ENODEV; 332 331 333 - r = strtobool(buf, &enable); 332 + r = kstrtobool(buf, &enable); 334 333 if (r) 335 334 return r; 336 335
+2 -1
drivers/video/fbdev/omap2/omapfb/dss/overlay-sysfs.c
··· 13 13 #include <linux/err.h> 14 14 #include <linux/sysfs.h> 15 15 #include <linux/kobject.h> 16 + #include <linux/kstrtox.h> 16 17 #include <linux/platform_device.h> 17 18 18 19 #include <video/omapfb_dss.h> ··· 211 210 int r; 212 211 bool enable; 213 212 214 - r = strtobool(buf, &enable); 213 + r = kstrtobool(buf, &enable); 215 214 if (r) 216 215 return r; 217 216
+2 -1
drivers/video/fbdev/omap2/omapfb/omapfb-sysfs.c
··· 15 15 #include <linux/uaccess.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/kernel.h> 18 + #include <linux/kstrtox.h> 18 19 #include <linux/mm.h> 19 20 #include <linux/omapfb.h> 20 21 ··· 97 96 int r; 98 97 struct fb_var_screeninfo new_var; 99 98 100 - r = strtobool(buf, &mirror); 99 + r = kstrtobool(buf, &mirror); 101 100 if (r) 102 101 return r; 103 102
+1 -7
drivers/video/fbdev/riva/fbdev.c
··· 293 293 { 294 294 struct riva_par *par = bl_get_data(bd); 295 295 U032 tmp_pcrt, tmp_pmc; 296 - int level; 297 - 298 - if (bd->props.power != FB_BLANK_UNBLANK || 299 - bd->props.fb_blank != FB_BLANK_UNBLANK) 300 - level = 0; 301 - else 302 - level = bd->props.brightness; 296 + int level = backlight_get_brightness(bd); 303 297 304 298 tmp_pmc = NV_RD32(par->riva.PMC, 0x10F0) & 0x0000FFFF; 305 299 tmp_pcrt = NV_RD32(par->riva.PCRTC0, 0x081C) & 0xFFFFFFFC;
+12 -3
drivers/watchdog/diag288_wdt.c
··· 86 86 "1:\n" 87 87 EX_TABLE(0b, 1b) 88 88 : "+d" (err) : "d"(__func), "d"(__timeout), 89 - "d"(__action), "d"(__len) : "1", "cc"); 89 + "d"(__action), "d"(__len) : "1", "cc", "memory"); 90 90 return err; 91 91 } 92 92 ··· 268 268 char ebc_begin[] = { 269 269 194, 197, 199, 201, 213 270 270 }; 271 + char *ebc_cmd; 271 272 272 273 watchdog_set_nowayout(&wdt_dev, nowayout_info); 273 274 274 275 if (MACHINE_IS_VM) { 275 - if (__diag288_vm(WDT_FUNC_INIT, 15, 276 - ebc_begin, sizeof(ebc_begin)) != 0) { 276 + ebc_cmd = kmalloc(sizeof(ebc_begin), GFP_KERNEL); 277 + if (!ebc_cmd) { 278 + pr_err("The watchdog cannot be initialized\n"); 279 + return -ENOMEM; 280 + } 281 + memcpy(ebc_cmd, ebc_begin, sizeof(ebc_begin)); 282 + ret = __diag288_vm(WDT_FUNC_INIT, 15, 283 + ebc_cmd, sizeof(ebc_begin)); 284 + kfree(ebc_cmd); 285 + if (ret != 0) { 277 286 pr_err("The watchdog cannot be initialized\n"); 278 287 return -EINVAL; 279 288 }
+15 -2
fs/ceph/addr.c
··· 305 305 struct inode *inode = rreq->inode; 306 306 struct ceph_inode_info *ci = ceph_inode(inode); 307 307 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 308 - struct ceph_osd_request *req; 308 + struct ceph_osd_request *req = NULL; 309 309 struct ceph_vino vino = ceph_vino(inode); 310 310 struct iov_iter iter; 311 311 struct page **pages; 312 312 size_t page_off; 313 313 int err = 0; 314 314 u64 len = subreq->len; 315 + 316 + if (ceph_inode_is_shutdown(inode)) { 317 + err = -EIO; 318 + goto out; 319 + } 315 320 316 321 if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) 317 322 return; ··· 567 562 bool caching = ceph_is_cache_enabled(inode); 568 563 569 564 dout("writepage %p idx %lu\n", page, page->index); 565 + 566 + if (ceph_inode_is_shutdown(inode)) 567 + return -EIO; 570 568 571 569 /* verify this is a writeable snap context */ 572 570 snapc = page_snap_context(page); ··· 1651 1643 struct ceph_inode_info *ci = ceph_inode(inode); 1652 1644 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 1653 1645 struct ceph_osd_request *req = NULL; 1654 - struct ceph_cap_flush *prealloc_cf; 1646 + struct ceph_cap_flush *prealloc_cf = NULL; 1655 1647 struct folio *folio = NULL; 1656 1648 u64 inline_version = CEPH_INLINE_NONE; 1657 1649 struct page *pages[1]; ··· 1664 1656 1665 1657 dout("uninline_data %p %llx.%llx inline_version %llu\n", 1666 1658 inode, ceph_vinop(inode), inline_version); 1659 + 1660 + if (ceph_inode_is_shutdown(inode)) { 1661 + err = -EIO; 1662 + goto out; 1663 + } 1667 1664 1668 1665 if (inline_version == CEPH_INLINE_NONE) 1669 1666 return 0;
+13 -3
fs/ceph/caps.c
··· 4078 4078 void *p, *end; 4079 4079 struct cap_extra_info extra_info = {}; 4080 4080 bool queue_trunc; 4081 + bool close_sessions = false; 4081 4082 4082 4083 dout("handle_caps from mds%d\n", session->s_mds); 4083 4084 ··· 4216 4215 realm = NULL; 4217 4216 if (snaptrace_len) { 4218 4217 down_write(&mdsc->snap_rwsem); 4219 - ceph_update_snap_trace(mdsc, snaptrace, 4220 - snaptrace + snaptrace_len, 4221 - false, &realm); 4218 + if (ceph_update_snap_trace(mdsc, snaptrace, 4219 + snaptrace + snaptrace_len, 4220 + false, &realm)) { 4221 + up_write(&mdsc->snap_rwsem); 4222 + close_sessions = true; 4223 + goto done; 4224 + } 4222 4225 downgrade_write(&mdsc->snap_rwsem); 4223 4226 } else { 4224 4227 down_read(&mdsc->snap_rwsem); ··· 4282 4277 iput(inode); 4283 4278 out: 4284 4279 ceph_put_string(extra_info.pool_ns); 4280 + 4281 + /* Defer closing the sessions after s_mutex lock being released */ 4282 + if (close_sessions) 4283 + ceph_mdsc_close_sessions(mdsc); 4284 + 4285 4285 return; 4286 4286 4287 4287 flush_cap_releases:
+3
fs/ceph/file.c
··· 2011 2011 loff_t zero = 0; 2012 2012 int op; 2013 2013 2014 + if (ceph_inode_is_shutdown(inode)) 2015 + return -EIO; 2016 + 2014 2017 if (!length) { 2015 2018 op = offset ? CEPH_OSD_OP_DELETE : CEPH_OSD_OP_TRUNCATE; 2016 2019 length = &zero;
+27 -3
fs/ceph/mds_client.c
··· 806 806 { 807 807 struct ceph_mds_session *s; 808 808 809 + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) 810 + return ERR_PTR(-EIO); 811 + 809 812 if (mds >= mdsc->mdsmap->possible_max_rank) 810 813 return ERR_PTR(-EINVAL); 811 814 ··· 1480 1477 struct ceph_msg *msg; 1481 1478 int mstate; 1482 1479 int mds = session->s_mds; 1480 + 1481 + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) 1482 + return -EIO; 1483 1483 1484 1484 /* wait for mds to go active? */ 1485 1485 mstate = ceph_mdsmap_get_state(mdsc->mdsmap, mds); ··· 2866 2860 return; 2867 2861 } 2868 2862 2863 + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) { 2864 + dout("do_request metadata corrupted\n"); 2865 + err = -EIO; 2866 + goto finish; 2867 + } 2869 2868 if (req->r_timeout && 2870 2869 time_after_eq(jiffies, req->r_started + req->r_timeout)) { 2871 2870 dout("do_request timed out\n"); ··· 3256 3245 u64 tid; 3257 3246 int err, result; 3258 3247 int mds = session->s_mds; 3248 + bool close_sessions = false; 3259 3249 3260 3250 if (msg->front.iov_len < sizeof(*head)) { 3261 3251 pr_err("mdsc_handle_reply got corrupt (short) reply\n"); ··· 3363 3351 realm = NULL; 3364 3352 if (rinfo->snapblob_len) { 3365 3353 down_write(&mdsc->snap_rwsem); 3366 - ceph_update_snap_trace(mdsc, rinfo->snapblob, 3354 + err = ceph_update_snap_trace(mdsc, rinfo->snapblob, 3367 3355 rinfo->snapblob + rinfo->snapblob_len, 3368 3356 le32_to_cpu(head->op) == CEPH_MDS_OP_RMSNAP, 3369 3357 &realm); 3358 + if (err) { 3359 + up_write(&mdsc->snap_rwsem); 3360 + close_sessions = true; 3361 + if (err == -EIO) 3362 + ceph_msg_dump(msg); 3363 + goto out_err; 3364 + } 3370 3365 downgrade_write(&mdsc->snap_rwsem); 3371 3366 } else { 3372 3367 down_read(&mdsc->snap_rwsem); ··· 3431 3412 req->r_end_latency, err); 3432 3413 out: 3433 3414 ceph_mdsc_put_request(req); 3415 + 3416 + /* Defer closing the sessions after s_mutex lock being released */ 3417 + if (close_sessions) 3418 + ceph_mdsc_close_sessions(mdsc); 3434 3419 return; 3435 3420 } 3436 3421 ··· 5034 5011 } 5035 5012 5036 5013 /* 5037 - * called after sb is ro. 5014 + * called after sb is ro or when metadata corrupted. 5038 5015 */ 5039 5016 void ceph_mdsc_close_sessions(struct ceph_mds_client *mdsc) 5040 5017 { ··· 5324 5301 struct ceph_mds_client *mdsc = s->s_mdsc; 5325 5302 5326 5303 pr_warn("mds%d closed our session\n", s->s_mds); 5327 - send_mds_reconnect(mdsc, s); 5304 + if (READ_ONCE(mdsc->fsc->mount_state) != CEPH_MOUNT_FENCE_IO) 5305 + send_mds_reconnect(mdsc, s); 5328 5306 } 5329 5307 5330 5308 static void mds_dispatch(struct ceph_connection *con, struct ceph_msg *msg)
+34 -2
fs/ceph/snap.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/ceph/ceph_debug.h> 3 3 4 + #include <linux/fs.h> 4 5 #include <linux/sort.h> 5 6 #include <linux/slab.h> 6 7 #include <linux/iversion.h> ··· 767 766 struct ceph_snap_realm *realm; 768 767 struct ceph_snap_realm *first_realm = NULL; 769 768 struct ceph_snap_realm *realm_to_rebuild = NULL; 769 + struct ceph_client *client = mdsc->fsc->client; 770 770 int rebuild_snapcs; 771 771 int err = -ENOMEM; 772 + int ret; 772 773 LIST_HEAD(dirty_realms); 773 774 774 775 lockdep_assert_held_write(&mdsc->snap_rwsem); ··· 887 884 if (first_realm) 888 885 ceph_put_snap_realm(mdsc, first_realm); 889 886 pr_err("%s error %d\n", __func__, err); 887 + 888 + /* 889 + * When receiving a corrupted snap trace we don't know what 890 + * exactly has happened in MDS side. And we shouldn't continue 891 + * writing to OSD, which may corrupt the snapshot contents. 892 + * 893 + * Just try to blocklist this kclient and then this kclient 894 + * must be remounted to continue after the corrupted metadata 895 + * fixed in the MDS side. 896 + */ 897 + WRITE_ONCE(mdsc->fsc->mount_state, CEPH_MOUNT_FENCE_IO); 898 + ret = ceph_monc_blocklist_add(&client->monc, &client->msgr.inst.addr); 899 + if (ret) 900 + pr_err("%s failed to blocklist %s: %d\n", __func__, 901 + ceph_pr_addr(&client->msgr.inst.addr), ret); 902 + 903 + WARN(1, "%s: %s%sdo remount to continue%s", 904 + __func__, ret ? "" : ceph_pr_addr(&client->msgr.inst.addr), 905 + ret ? "" : " was blocklisted, ", 906 + err == -EIO ? " after corrupted snaptrace is fixed" : ""); 907 + 890 908 return err; 891 909 } ··· 1008 984 __le64 *split_inos = NULL, *split_realms = NULL; 1009 985 int i; 1010 986 int locked_rwsem = 0; 987 + bool close_sessions = false; 1011 988 1012 989 /* decode */ 1013 990 if (msg->front.iov_len < sizeof(*h)) ··· 1117 1092 * update using the provided snap trace. if we are deleting a 1118 1093 * snap, we can avoid queueing cap_snaps. 1119 1094 */ 1120 - ceph_update_snap_trace(mdsc, p, e, 1121 - op == CEPH_SNAP_OP_DESTROY, NULL); 1095 + if (ceph_update_snap_trace(mdsc, p, e, 1096 + op == CEPH_SNAP_OP_DESTROY, 1097 + NULL)) { 1098 + close_sessions = true; 1099 + goto bad; 1100 + } 1122 1101 1123 1102 if (op == CEPH_SNAP_OP_SPLIT) 1124 1103 /* we took a reference when we created the realm, above */ ··· 1141 1112 out: 1142 1113 if (locked_rwsem) 1143 1114 up_write(&mdsc->snap_rwsem); 1115 + 1116 + if (close_sessions) 1117 + ceph_mdsc_close_sessions(mdsc); 1144 1118 return; 1145 1119 }
+11
fs/ceph/super.h
··· 100 100 char *mon_addr; 101 101 }; 102 102 103 + /* mount state */ 104 + enum { 105 + CEPH_MOUNT_MOUNTING, 106 + CEPH_MOUNT_MOUNTED, 107 + CEPH_MOUNT_UNMOUNTING, 108 + CEPH_MOUNT_UNMOUNTED, 109 + CEPH_MOUNT_SHUTDOWN, 110 + CEPH_MOUNT_RECOVER, 111 + CEPH_MOUNT_FENCE_IO, 112 + }; 113 + 103 114 #define CEPH_ASYNC_CREATE_CONFLICT_BITS 8 104 115 105 116 struct ceph_fs_client {
+1
fs/cifs/smbdirect.c
··· 1405 1405 destroy_workqueue(info->workqueue); 1406 1406 log_rdma_event(INFO, "rdma session destroyed\n"); 1407 1407 kfree(info); 1408 + server->smbd_conn = NULL; 1408 1409 } 1409 1410 1410 1411 /*
+6 -5
fs/ext4/xattr.c
··· 482 482 */ 483 483 e_hash = ext4_xattr_hash_entry_signed(entry->e_name, entry->e_name_len, 484 484 &tmp_data, 1); 485 - if (e_hash == entry->e_hash) 486 - return 0; 487 - 488 485 /* Still no match - bad */ 489 - return -EFSCORRUPTED; 486 + if (e_hash != entry->e_hash) 487 + return -EFSCORRUPTED; 488 + 489 + /* Let people know about old hash */ 490 + pr_warn_once("ext4: filesystem with signed xattr name hash"); 490 491 } 491 492 return 0; 492 493 } ··· 3097 3096 while (name_len--) { 3098 3097 hash = (hash << NAME_HASH_SHIFT) ^ 3099 3098 (hash >> (8*sizeof(hash) - NAME_HASH_SHIFT)) ^ 3100 - *name++; 3099 + (unsigned char)*name++; 3101 3100 } 3102 3101 while (value_count--) { 3103 3102 hash = (hash << VALUE_HASH_SHIFT) ^
+1 -1
fs/freevxfs/Kconfig
··· 8 8 of SCO UnixWare (and possibly others) and optionally available 9 9 for Sunsoft Solaris, HP-UX and many other operating systems. However 10 10 these particular OS implementations of vxfs may differ in on-disk 11 - data endianess and/or superblock offset. The vxfs module has been 11 + data endianness and/or superblock offset. The vxfs module has been 12 12 tested with SCO UnixWare and HP-UX B.10.20 (pa-risc 1.1 arch.) 13 13 Currently only readonly access is supported and VxFX versions 14 14 2, 3 and 4. Tests were performed with HP-UX VxFS version 3.
+7 -7
fs/fscache/volume.c
··· 141 141 static void fscache_wait_on_volume_collision(struct fscache_volume *candidate, 142 142 unsigned int collidee_debug_id) 143 143 { 144 - wait_var_event_timeout(&candidate->flags, 145 - !fscache_is_acquire_pending(candidate), 20 * HZ); 144 + wait_on_bit_timeout(&candidate->flags, FSCACHE_VOLUME_ACQUIRE_PENDING, 145 + TASK_UNINTERRUPTIBLE, 20 * HZ); 146 146 if (fscache_is_acquire_pending(candidate)) { 147 147 pr_notice("Potential volume collision new=%08x old=%08x", 148 148 candidate->debug_id, collidee_debug_id); 149 149 fscache_stat(&fscache_n_volumes_collision); 150 - wait_var_event(&candidate->flags, !fscache_is_acquire_pending(candidate)); 150 + wait_on_bit(&candidate->flags, FSCACHE_VOLUME_ACQUIRE_PENDING, 151 + TASK_UNINTERRUPTIBLE); 151 152 } 152 153 } 153 154 ··· 280 279 fscache_end_cache_access(volume->cache, 281 280 fscache_access_acquire_volume_end); 282 281 283 - clear_bit_unlock(FSCACHE_VOLUME_CREATING, &volume->flags); 284 - wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING); 282 + clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags); 285 283 fscache_put_volume(volume, fscache_volume_put_create_work); 286 284 } 287 285 ··· 347 347 hlist_bl_for_each_entry(cursor, p, h, hash_link) { 348 348 if (fscache_volume_same(cursor, volume)) { 349 349 fscache_see_volume(cursor, fscache_volume_see_hash_wake); 350 - clear_bit(FSCACHE_VOLUME_ACQUIRE_PENDING, &cursor->flags); 351 - wake_up_bit(&cursor->flags, FSCACHE_VOLUME_ACQUIRE_PENDING); 350 + clear_and_wake_up_bit(FSCACHE_VOLUME_ACQUIRE_PENDING, 351 + &cursor->flags); 352 352 return; 353 353 } 354 354 }
+61 -7
fs/fuse/acl.c
··· 11 11 #include <linux/posix_acl.h> 12 12 #include <linux/posix_acl_xattr.h> 13 13 14 - struct posix_acl *fuse_get_acl(struct inode *inode, int type, bool rcu) 14 + static struct posix_acl *__fuse_get_acl(struct fuse_conn *fc, 15 + struct user_namespace *mnt_userns, 16 + struct inode *inode, int type, bool rcu) 15 17 { 16 - struct fuse_conn *fc = get_fuse_conn(inode); 17 18 int size; 18 19 const char *name; 19 20 void *value = NULL; ··· 26 25 if (fuse_is_bad(inode)) 27 26 return ERR_PTR(-EIO); 28 27 29 - if (!fc->posix_acl || fc->no_getxattr) 28 + if (fc->no_getxattr) 30 29 return NULL; 31 30 32 31 if (type == ACL_TYPE_ACCESS) ··· 54 53 return acl; 55 54 } 56 55 56 + static inline bool fuse_no_acl(const struct fuse_conn *fc, 57 + const struct inode *inode) 58 + { 59 + /* 60 + * Refuse interacting with POSIX ACLs for daemons that 61 + * don't support FUSE_POSIX_ACL and are not mounted on 62 + * the host to retain backwards compatibility. 63 + */ 64 + return !fc->posix_acl && (i_user_ns(inode) != &init_user_ns); 65 + } 66 + 67 + struct posix_acl *fuse_get_acl(struct user_namespace *mnt_userns, 68 + struct dentry *dentry, int type) 69 + { 70 + struct inode *inode = d_inode(dentry); 71 + struct fuse_conn *fc = get_fuse_conn(inode); 72 + 73 + if (fuse_no_acl(fc, inode)) 74 + return ERR_PTR(-EOPNOTSUPP); 75 + 76 + return __fuse_get_acl(fc, mnt_userns, inode, type, false); 77 + } 78 + 79 + struct posix_acl *fuse_get_inode_acl(struct inode *inode, int type, bool rcu) 80 + { 81 + struct fuse_conn *fc = get_fuse_conn(inode); 82 + 83 + /* 84 + * FUSE daemons before FUSE_POSIX_ACL was introduced could get and set 85 + * POSIX ACLs without them being used for permission checking by the 86 + * vfs. Retain that behavior for backwards compatibility as there are 87 + * filesystems that do all permission checking for acls in the daemon 88 + * and not in the kernel. 89 + */ 90 + if (!fc->posix_acl) 91 + return NULL; 92 + 93 + return __fuse_get_acl(fc, &init_user_ns, inode, type, rcu); 94 + } 95 + 57 96 int fuse_set_acl(struct user_namespace *mnt_userns, struct dentry *dentry, 58 97 struct posix_acl *acl, int type) 59 98 { ··· 105 64 if (fuse_is_bad(inode)) 106 65 return -EIO; 107 66 108 - if (!fc->posix_acl || fc->no_setxattr) 67 + if (fc->no_setxattr || fuse_no_acl(fc, inode)) 109 68 return -EOPNOTSUPP; 110 69 111 70 if (type == ACL_TYPE_ACCESS) ··· 140 99 return ret; 141 100 } 142 101 143 - if (!vfsgid_in_group_p(i_gid_into_vfsgid(&init_user_ns, inode)) && 102 + /* 103 + * Fuse daemons without FUSE_POSIX_ACL never changed the passed 104 + * through POSIX ACLs. Such daemons don't expect setgid bits to 105 + * be stripped. 106 + */ 107 + if (fc->posix_acl && 108 + !vfsgid_in_group_p(i_gid_into_vfsgid(&init_user_ns, inode)) && 144 109 !capable_wrt_inode_uidgid(&init_user_ns, inode, CAP_FSETID)) 145 110 extra_flags |= FUSE_SETXATTR_ACL_KILL_SGID; ··· 155 108 } else { 156 109 ret = fuse_removexattr(inode, name); 157 110 } 158 - forget_all_cached_acls(inode); 159 - fuse_invalidate_attr(inode); 111 + 112 + if (fc->posix_acl) { 113 + /* 114 + * Fuse daemons without FUSE_POSIX_ACL never cached POSIX ACLs 115 + * and didn't invalidate attributes. Retain that behavior. 116 + */ 117 + forget_all_cached_acls(inode); 118 + fuse_invalidate_attr(inode); 119 + } 160 120 161 121 return ret; 162 122 }
+4 -2
fs/fuse/dir.c
··· 1942 1942 .permission = fuse_permission, 1943 1943 .getattr = fuse_getattr, 1944 1944 .listxattr = fuse_listxattr, 1945 - .get_inode_acl = fuse_get_acl, 1945 + .get_inode_acl = fuse_get_inode_acl, 1946 + .get_acl = fuse_get_acl, 1946 1947 .set_acl = fuse_set_acl, 1947 1948 .fileattr_get = fuse_fileattr_get, 1948 1949 .fileattr_set = fuse_fileattr_set, ··· 1965 1964 .permission = fuse_permission, 1966 1965 .getattr = fuse_getattr, 1967 1966 .listxattr = fuse_listxattr, 1968 - .get_inode_acl = fuse_get_acl, 1967 + .get_inode_acl = fuse_get_inode_acl, 1968 + .get_acl = fuse_get_acl, 1969 1969 .set_acl = fuse_set_acl, 1970 1970 .fileattr_get = fuse_fileattr_get, 1971 1971 .fileattr_set = fuse_fileattr_set,
+3 -3
fs/fuse/fuse_i.h
··· 1264 1264 ssize_t fuse_listxattr(struct dentry *entry, char *list, size_t size); 1265 1265 int fuse_removexattr(struct inode *inode, const char *name); 1266 1266 extern const struct xattr_handler *fuse_xattr_handlers[]; 1267 - extern const struct xattr_handler *fuse_acl_xattr_handlers[]; 1268 - extern const struct xattr_handler *fuse_no_acl_xattr_handlers[]; 1269 1267 1270 1268 struct posix_acl; 1271 - struct posix_acl *fuse_get_acl(struct inode *inode, int type, bool rcu); 1269 + struct posix_acl *fuse_get_inode_acl(struct inode *inode, int type, bool rcu); 1270 + struct posix_acl *fuse_get_acl(struct user_namespace *mnt_userns, 1271 + struct dentry *dentry, int type); 1272 1272 int fuse_set_acl(struct user_namespace *mnt_userns, struct dentry *dentry, 1273 1273 struct posix_acl *acl, int type); 1274 1274
+10 -11
fs/fuse/inode.c
··· 311 311 fuse_dax_dontcache(inode, attr->flags); 312 312 } 313 313 314 - static void fuse_init_inode(struct inode *inode, struct fuse_attr *attr) 314 + static void fuse_init_inode(struct inode *inode, struct fuse_attr *attr, 315 + struct fuse_conn *fc) 315 316 { 316 317 inode->i_mode = attr->mode & S_IFMT; 317 318 inode->i_size = attr->size; ··· 334 333 new_decode_dev(attr->rdev)); 335 334 } else 336 335 BUG(); 336 + /* 337 + * Ensure that we don't cache acls for daemons without FUSE_POSIX_ACL 338 + * so they see the exact same behavior as before. 339 + */ 340 + if (!fc->posix_acl) 341 + inode->i_acl = inode->i_default_acl = ACL_DONT_CACHE; 337 342 } 338 343 339 344 static int fuse_inode_eq(struct inode *inode, void *_nodeidp) ··· 379 372 if (!inode) 380 373 return NULL; 381 374 382 - fuse_init_inode(inode, attr); 375 + fuse_init_inode(inode, attr, fc); 383 376 get_fuse_inode(inode)->nodeid = nodeid; 384 377 inode->i_flags |= S_AUTOMOUNT; 385 378 goto done; ··· 395 388 if (!fc->writeback_cache || !S_ISREG(attr->mode)) 396 389 inode->i_flags |= S_NOCMTIME; 397 390 inode->i_generation = generation; 398 - fuse_init_inode(inode, attr); 391 + fuse_init_inode(inode, attr, fc); 399 392 unlock_new_inode(inode); 400 393 } else if (fuse_stale_inode(inode, generation, attr)) { 401 394 /* nodeid was reused, any I/O on the old inode should fail */ ··· 1181 1174 if ((flags & FUSE_POSIX_ACL)) { 1182 1175 fc->default_permissions = 1; 1183 1176 fc->posix_acl = 1; 1184 - fm->sb->s_xattr = fuse_acl_xattr_handlers; 1185 1177 } 1186 1178 if (flags & FUSE_CACHE_SYMLINKS) 1187 1179 fc->cache_symlinks = 1; ··· 1426 1420 if (sb->s_user_ns != &init_user_ns) 1427 1421 sb->s_iflags |= SB_I_UNTRUSTED_MOUNTER; 1428 1422 sb->s_flags &= ~(SB_NOSEC | SB_I_VERSION); 1429 - 1430 - /* 1431 - * If we are not in the initial user namespace posix 1432 - * acls must be translated. 
1433 - */ 1434 - if (sb->s_user_ns != &init_user_ns) 1435 - sb->s_xattr = fuse_no_acl_xattr_handlers; 1436 1423 } 1437 1424 1438 1425 static int fuse_fill_super_submount(struct super_block *sb,
-51
fs/fuse/xattr.c
··· 203 203 return fuse_setxattr(inode, name, value, size, flags, 0); 204 204 } 205 205 206 - static bool no_xattr_list(struct dentry *dentry) 207 - { 208 - return false; 209 - } 210 - 211 - static int no_xattr_get(const struct xattr_handler *handler, 212 - struct dentry *dentry, struct inode *inode, 213 - const char *name, void *value, size_t size) 214 - { 215 - return -EOPNOTSUPP; 216 - } 217 - 218 - static int no_xattr_set(const struct xattr_handler *handler, 219 - struct user_namespace *mnt_userns, 220 - struct dentry *dentry, struct inode *nodee, 221 - const char *name, const void *value, 222 - size_t size, int flags) 223 - { 224 - return -EOPNOTSUPP; 225 - } 226 - 227 206 static const struct xattr_handler fuse_xattr_handler = { 228 207 .prefix = "", 229 208 .get = fuse_xattr_get, ··· 210 231 }; 211 232 212 233 const struct xattr_handler *fuse_xattr_handlers[] = { 213 - &fuse_xattr_handler, 214 - NULL 215 - }; 216 - 217 - const struct xattr_handler *fuse_acl_xattr_handlers[] = { 218 - &posix_acl_access_xattr_handler, 219 - &posix_acl_default_xattr_handler, 220 - &fuse_xattr_handler, 221 - NULL 222 - }; 223 - 224 - static const struct xattr_handler fuse_no_acl_access_xattr_handler = { 225 - .name = XATTR_NAME_POSIX_ACL_ACCESS, 226 - .flags = ACL_TYPE_ACCESS, 227 - .list = no_xattr_list, 228 - .get = no_xattr_get, 229 - .set = no_xattr_set, 230 - }; 231 - 232 - static const struct xattr_handler fuse_no_acl_default_xattr_handler = { 233 - .name = XATTR_NAME_POSIX_ACL_DEFAULT, 234 - .flags = ACL_TYPE_ACCESS, 235 - .list = no_xattr_list, 236 - .get = no_xattr_get, 237 - .set = no_xattr_set, 238 - }; 239 - 240 - const struct xattr_handler *fuse_no_acl_xattr_handlers[] = { 241 - &fuse_no_acl_access_xattr_handler, 242 - &fuse_no_acl_default_xattr_handler, 243 234 &fuse_xattr_handler, 244 235 NULL 245 236 };
+10 -1
fs/gfs2/log.c
··· 80 80 brelse(bd->bd_bh); 81 81 } 82 82 83 + static int __gfs2_writepage(struct page *page, struct writeback_control *wbc, 84 + void *data) 85 + { 86 + struct address_space *mapping = data; 87 + int ret = mapping->a_ops->writepage(page, wbc); 88 + mapping_set_error(mapping, ret); 89 + return ret; 90 + } 91 + 83 92 /** 84 93 * gfs2_ail1_start_one - Start I/O on a transaction 85 94 * @sdp: The superblock ··· 140 131 if (!mapping) 141 132 continue; 142 133 spin_unlock(&sdp->sd_ail_lock); 143 - ret = filemap_fdatawrite_wbc(mapping, wbc); 134 + ret = write_cache_pages(mapping, wbc, __gfs2_writepage, mapping); 144 135 if (need_resched()) { 145 136 blk_finish_plug(plug); 146 137 cond_resched();
+15 -2
fs/ksmbd/connection.c
··· 280 280 { 281 281 struct ksmbd_conn *conn = (struct ksmbd_conn *)p; 282 282 struct ksmbd_transport *t = conn->transport; 283 - unsigned int pdu_size; 283 + unsigned int pdu_size, max_allowed_pdu_size; 284 284 char hdr_buf[4] = {0,}; 285 285 int size; 286 286 ··· 305 305 pdu_size = get_rfc1002_len(hdr_buf); 306 306 ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size); 307 307 308 + if (conn->status == KSMBD_SESS_GOOD) 309 + max_allowed_pdu_size = 310 + SMB3_MAX_MSGSIZE + conn->vals->max_write_size; 311 + else 312 + max_allowed_pdu_size = SMB3_MAX_MSGSIZE; 313 + 314 + if (pdu_size > max_allowed_pdu_size) { 315 + pr_err_ratelimited("PDU length(%u) excceed maximum allowed pdu size(%u) on connection(%d)\n", 316 + pdu_size, max_allowed_pdu_size, 317 + conn->status); 318 + break; 319 + } 320 + 308 321 /* 309 322 * Check if pdu size is valid (min : smb header size, 310 323 * max : 0x00FFFFFF). 311 324 */ 312 325 if (pdu_size < __SMB2_HEADER_STRUCTURE_SIZE || 313 326 pdu_size > MAX_STREAM_PROT_LEN) { 314 - continue; 327 + break; 315 328 } 316 329 317 330 /* 4 for rfc1002 length field */
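The connection.c hunk above caps the accepted PDU size depending on session state. A hedged stand-alone sketch of the check (the 8 MiB write size is illustrative; the real limit comes from `conn->vals->max_write_size`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* SMB3_MAX_MSGSIZE matches the smb2pdu.h hunk; the write size here is an
 * assumed example value standing in for conn->vals->max_write_size. */
#define SMB3_MAX_MSGSIZE	(4 * 4096)
#define EXAMPLE_MAX_WRITE_SIZE	(8 * 1024 * 1024)

/*
 * Sketch of the new admission check: before the session reaches
 * KSMBD_SESS_GOOD only small negotiate/setup PDUs are allowed; once
 * authenticated, write payloads up to the negotiated write size plus
 * message-header room are accepted.
 */
static bool pdu_size_allowed(uint32_t pdu_size, bool sess_good)
{
	uint32_t max = sess_good ? SMB3_MAX_MSGSIZE + EXAMPLE_MAX_WRITE_SIZE
				 : SMB3_MAX_MSGSIZE;

	return pdu_size <= max;
}
```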
+2 -1
fs/ksmbd/ksmbd_netlink.h
··· 106 106 __u32 sub_auth[3]; /* Subauth value for Security ID */ 107 107 __u32 smb2_max_credits; /* MAX credits */ 108 108 __u32 smbd_max_io_size; /* smbd read write size */ 109 - __u32 reserved[127]; /* Reserved room */ 109 + __u32 max_connections; /* Number of maximum simultaneous connections */ 110 + __u32 reserved[126]; /* Reserved room */ 110 111 __u32 ifc_list_sz; /* interfaces list size */ 111 112 __s8 ____payload[]; 112 113 };
+4 -4
fs/ksmbd/ndr.c
··· 242 242 return ret; 243 243 244 244 if (da->version != 3 && da->version != 4) { 245 - pr_err("v%d version is not supported\n", da->version); 245 + ksmbd_debug(VFS, "v%d version is not supported\n", da->version); 246 246 return -EINVAL; 247 247 } 248 248 ··· 251 251 return ret; 252 252 253 253 if (da->version != version2) { 254 - pr_err("ndr version mismatched(version: %d, version2: %d)\n", 254 + ksmbd_debug(VFS, "ndr version mismatched(version: %d, version2: %d)\n", 255 255 da->version, version2); 256 256 return -EINVAL; 257 257 } ··· 457 457 if (ret) 458 458 return ret; 459 459 if (acl->version != 4) { 460 - pr_err("v%d version is not supported\n", acl->version); 460 + ksmbd_debug(VFS, "v%d version is not supported\n", acl->version); 461 461 return -EINVAL; 462 462 } 463 463 ··· 465 465 if (ret) 466 466 return ret; 467 467 if (acl->version != version2) { 468 - pr_err("ndr version mismatched(version: %d, version2: %d)\n", 468 + ksmbd_debug(VFS, "ndr version mismatched(version: %d, version2: %d)\n", 469 469 acl->version, version2); 470 470 return -EINVAL; 471 471 }
+1
fs/ksmbd/server.h
··· 41 41 unsigned int share_fake_fscaps; 42 42 struct smb_sid domain_sid; 43 43 unsigned int auth_mechs; 44 + unsigned int max_connections; 44 45 45 46 char *conf[SERVER_CONF_WORK_GROUP + 1]; 46 47 };
+2
fs/ksmbd/smb2pdu.c
··· 8663 8663 bool smb3_11_final_sess_setup_resp(struct ksmbd_work *work) 8664 8664 { 8665 8665 struct ksmbd_conn *conn = work->conn; 8666 + struct ksmbd_session *sess = work->sess; 8666 8667 struct smb2_hdr *rsp = smb2_get_msg(work->response_buf); 8667 8668 8668 8669 if (conn->dialect < SMB30_PROT_ID) ··· 8673 8672 rsp = ksmbd_resp_buf_next(work); 8674 8673 8675 8674 if (le16_to_cpu(rsp->Command) == SMB2_SESSION_SETUP_HE && 8675 + sess->user && !user_guest(sess->user) && 8676 8676 rsp->Status == STATUS_SUCCESS) 8677 8677 return true; 8678 8678 return false;
+3 -2
fs/ksmbd/smb2pdu.h
··· 24 24 25 25 #define SMB21_DEFAULT_IOSIZE (1024 * 1024) 26 26 #define SMB3_DEFAULT_TRANS_SIZE (1024 * 1024) 27 - #define SMB3_MIN_IOSIZE (64 * 1024) 28 - #define SMB3_MAX_IOSIZE (8 * 1024 * 1024) 27 + #define SMB3_MIN_IOSIZE (64 * 1024) 28 + #define SMB3_MAX_IOSIZE (8 * 1024 * 1024) 29 + #define SMB3_MAX_MSGSIZE (4 * 4096) 29 30 30 31 /* 31 32 * Definitions for SMB2 Protocol Data Units (network frames)
+3
fs/ksmbd/transport_ipc.c
··· 308 308 if (req->smbd_max_io_size) 309 309 init_smbd_max_io_size(req->smbd_max_io_size); 310 310 311 + if (req->max_connections) 312 + server_conf.max_connections = req->max_connections; 313 + 311 314 ret = ksmbd_set_netbios_name(req->netbios_name); 312 315 ret |= ksmbd_set_server_string(req->server_string); 313 316 ret |= ksmbd_set_work_group(req->work_group);
+16 -1
fs/ksmbd/transport_tcp.c
··· 15 15 #define IFACE_STATE_DOWN BIT(0) 16 16 #define IFACE_STATE_CONFIGURED BIT(1) 17 17 18 + static atomic_t active_num_conn; 19 + 18 20 struct interface { 19 21 struct task_struct *ksmbd_kthread; 20 22 struct socket *ksmbd_socket; ··· 187 185 struct tcp_transport *t; 188 186 189 187 t = alloc_transport(client_sk); 190 - if (!t) 188 + if (!t) { 189 + sock_release(client_sk); 191 190 return -ENOMEM; 191 + } 192 192 193 193 csin = KSMBD_TCP_PEER_SOCKADDR(KSMBD_TRANS(t)->conn); 194 194 if (kernel_getpeername(client_sk, csin) < 0) { ··· 240 236 if (ret == -EAGAIN) 241 237 /* check for new connections every 100 msecs */ 242 238 schedule_timeout_interruptible(HZ / 10); 239 + continue; 240 + } 241 + 242 + if (server_conf.max_connections && 243 + atomic_inc_return(&active_num_conn) >= server_conf.max_connections) { 244 + pr_info_ratelimited("Limit the maximum number of connections(%u)\n", 245 + atomic_read(&active_num_conn)); 246 + atomic_dec(&active_num_conn); 247 + sock_release(client_sk); 243 248 continue; 244 249 } 245 250 ··· 381 368 static void ksmbd_tcp_disconnect(struct ksmbd_transport *t) 382 369 { 383 370 free_transport(TCP_TRANS(t)); 371 + if (server_conf.max_connections) 372 + atomic_dec(&active_num_conn); 384 373 } 385 374 386 375 static void tcp_destroy_socket(struct socket *ksmbd_socket)
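The transport_tcp.c hunk above enforces `max_connections` with an increment-then-back-out pattern on an atomic counter. A user-space sketch of the same pattern using C11 atomics (function names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int active_num_conn;
static int max_connections = 2;	/* illustrative limit */

/*
 * Mirror of the hunk's atomic_inc_return()/atomic_dec() pair: optimistically
 * count the new connection, then back the increment out and reject if the
 * limit was reached. A limit of 0 means "unlimited".
 */
static bool try_accept(void)
{
	if (max_connections &&
	    atomic_fetch_add(&active_num_conn, 1) + 1 >= max_connections) {
		atomic_fetch_sub(&active_num_conn, 1);
		return false;
	}
	return true;
}

/* Counterpart of the ksmbd_tcp_disconnect() change. */
static void drop_connection(void)
{
	if (max_connections)
		atomic_fetch_sub(&active_num_conn, 1);
}
```

Note the increment happens before the comparison, so under concurrency two racing accepts cannot both slip under the limit.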
+36 -25
fs/nfsd/filecache.c
··· 662 662 }; 663 663 664 664 /** 665 + * nfsd_file_cond_queue - conditionally unhash and queue a nfsd_file 666 + * @nf: nfsd_file to attempt to queue 667 + * @dispose: private list to queue successfully-put objects 668 + * 669 + * Unhash an nfsd_file, try to get a reference to it, and then put that 670 + * reference. If it's the last reference, queue it to the dispose list. 671 + */ 672 + static void 673 + nfsd_file_cond_queue(struct nfsd_file *nf, struct list_head *dispose) 674 + __must_hold(RCU) 675 + { 676 + int decrement = 1; 677 + 678 + /* If we raced with someone else unhashing, ignore it */ 679 + if (!nfsd_file_unhash(nf)) 680 + return; 681 + 682 + /* If we can't get a reference, ignore it */ 683 + if (!nfsd_file_get(nf)) 684 + return; 685 + 686 + /* Extra decrement if we remove from the LRU */ 687 + if (nfsd_file_lru_remove(nf)) 688 + ++decrement; 689 + 690 + /* If refcount goes to 0, then put on the dispose list */ 691 + if (refcount_sub_and_test(decrement, &nf->nf_ref)) { 692 + list_add(&nf->nf_lru, dispose); 693 + trace_nfsd_file_closing(nf); 694 + } 695 + } 696 + 697 + /** 665 698 * nfsd_file_queue_for_close: try to close out any open nfsd_files for an inode 666 699 * @inode: inode on which to close out nfsd_files 667 700 * @dispose: list on which to gather nfsd_files to close out ··· 721 688 722 689 rcu_read_lock(); 723 690 do { 724 - int decrement = 1; 725 - 726 691 nf = rhashtable_lookup(&nfsd_file_rhash_tbl, &key, 727 692 nfsd_file_rhash_params); 728 693 if (!nf) 729 694 break; 730 - 731 - /* If we raced with someone else unhashing, ignore it */ 732 - if (!nfsd_file_unhash(nf)) 733 - continue; 734 - 735 - /* If we can't get a reference, ignore it */ 736 - if (!nfsd_file_get(nf)) 737 - continue; 738 - 739 - /* Extra decrement if we remove from the LRU */ 740 - if (nfsd_file_lru_remove(nf)) 741 - ++decrement; 742 - 743 - /* If refcount goes to 0, then put on the dispose list */ 744 - if (refcount_sub_and_test(decrement, &nf->nf_ref)) { 745 - 
list_add(&nf->nf_lru, dispose); 746 - trace_nfsd_file_closing(nf); 747 - } 695 + nfsd_file_cond_queue(nf, dispose); 748 696 } while (1); 749 697 rcu_read_unlock(); 750 698 } ··· 942 928 943 929 nf = rhashtable_walk_next(&iter); 944 930 while (!IS_ERR_OR_NULL(nf)) { 945 - if (!net || nf->nf_net == net) { 946 - nfsd_file_unhash(nf); 947 - nfsd_file_lru_remove(nf); 948 - list_add(&nf->nf_lru, &dispose); 949 - } 931 + if (!net || nf->nf_net == net) 932 + nfsd_file_cond_queue(nf, &dispose); 950 933 nf = rhashtable_walk_next(&iter); 951 934 } 952 935
+5 -1
fs/overlayfs/copy_up.c
··· 792 792 if (!c->metacopy && c->stat.size) { 793 793 err = ovl_copy_up_file(ofs, c->dentry, tmpfile, c->stat.size); 794 794 if (err) 795 - return err; 795 + goto out_fput; 796 796 } 797 797 798 798 err = ovl_copy_up_metadata(c, temp); ··· 1010 1010 STATX_BASIC_STATS, AT_STATX_SYNC_AS_STAT); 1011 1011 if (err) 1012 1012 return err; 1013 + 1014 + if (!kuid_has_mapping(current_user_ns(), ctx.stat.uid) || 1015 + !kgid_has_mapping(current_user_ns(), ctx.stat.gid)) 1016 + return -EOVERFLOW; 1013 1017 1014 1018 ctx.metacopy = ovl_need_meta_copy_up(dentry, ctx.stat.mode, flags); 1015 1019
+1 -3
fs/proc/task_mmu.c
··· 745 745 page = pfn_swap_entry_to_page(swpent); 746 746 } 747 747 if (page) { 748 - int mapcount = page_mapcount(page); 749 - 750 - if (mapcount >= 2) 748 + if (page_mapcount(page) >= 2 || hugetlb_pmd_shared(pte)) 751 749 mss->shared_hugetlb += huge_page_size(hstate_vma(vma)); 752 750 else 753 751 mss->private_hugetlb += huge_page_size(hstate_vma(vma));
+1 -1
fs/squashfs/squashfs_fs.h
··· 183 183 #define SQUASHFS_ID_BLOCK_BYTES(A) (SQUASHFS_ID_BLOCKS(A) *\ 184 184 sizeof(u64)) 185 185 /* xattr id lookup table defines */ 186 - #define SQUASHFS_XATTR_BYTES(A) ((A) * sizeof(struct squashfs_xattr_id)) 186 + #define SQUASHFS_XATTR_BYTES(A) (((u64) (A)) * sizeof(struct squashfs_xattr_id)) 187 187 188 188 #define SQUASHFS_XATTR_BLOCK(A) (SQUASHFS_XATTR_BYTES(A) / \ 189 189 SQUASHFS_METADATA_SIZE)
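The squashfs_fs.h hunk above widens the `SQUASHFS_XATTR_BYTES()` multiplication to u64. A small sketch of the overflow being fixed (the 16-byte entry size stands in for `sizeof(struct squashfs_xattr_id)`, which is an assumption here):

```c
#include <assert.h>
#include <stdint.h>

#define XATTR_ENTRY_SIZE 16	/* illustrative entry size */

/* Before the fix: the multiply is done in 32-bit arithmetic and wraps
 * for a sufficiently large (attacker-controlled) id count. */
static uint32_t xattr_bytes_32(unsigned int ids)
{
	return ids * XATTR_ENTRY_SIZE;
}

/* After the fix: widen one operand to u64 first, so the product cannot
 * wrap for any 32-bit id count. */
static uint64_t xattr_bytes_64(unsigned int ids)
{
	return (uint64_t)ids * XATTR_ENTRY_SIZE;
}
```

This pairs with the `xattr_ids` type change from `int` to `unsigned int` in the neighboring squashfs hunks.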
+1 -1
fs/squashfs/squashfs_fs_sb.h
··· 63 63 long long bytes_used; 64 64 unsigned int inodes; 65 65 unsigned int fragments; 66 - int xattr_ids; 66 + unsigned int xattr_ids; 67 67 unsigned int ids; 68 68 bool panic_on_errors; 69 69 const struct squashfs_decompressor_thread_ops *thread_ops;
+2 -2
fs/squashfs/xattr.h
··· 10 10 11 11 #ifdef CONFIG_SQUASHFS_XATTR 12 12 extern __le64 *squashfs_read_xattr_id_table(struct super_block *, u64, 13 - u64 *, int *); 13 + u64 *, unsigned int *); 14 14 extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *, 15 15 unsigned int *, unsigned long long *); 16 16 #else 17 17 static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb, 18 - u64 start, u64 *xattr_table_start, int *xattr_ids) 18 + u64 start, u64 *xattr_table_start, unsigned int *xattr_ids) 19 19 { 20 20 struct squashfs_xattr_id_table *id_table; 21 21
+2 -2
fs/squashfs/xattr_id.c
··· 56 56 * Read uncompressed xattr id lookup table indexes from disk into memory 57 57 */ 58 58 __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start, 59 - u64 *xattr_table_start, int *xattr_ids) 59 + u64 *xattr_table_start, unsigned int *xattr_ids) 60 60 { 61 61 struct squashfs_sb_info *msblk = sb->s_fs_info; 62 62 unsigned int len, indexes; ··· 76 76 /* Sanity check values */ 77 77 78 78 /* there is always at least one xattr id */ 79 - if (*xattr_ids == 0) 79 + if (*xattr_ids <= 0) 80 80 return ERR_PTR(-EINVAL); 81 81 82 82 len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
+12
include/drm/drm_fb_helper.h
··· 208 208 * the smem_start field should always be cleared to zero. 209 209 */ 210 210 bool hint_leak_smem_start; 211 + 212 + #ifdef CONFIG_FB_DEFERRED_IO 213 + /** 214 + * @fbdefio: 215 + * 216 + * Temporary storage for the driver's FB deferred I/O handler. If the 217 + * driver uses the DRM fbdev emulation layer, this is set by the core 218 + * to a generic deferred I/O handler if a driver is preferring to use 219 + * a shadow buffer. 220 + */ 221 + struct fb_deferred_io fbdefio; 222 + #endif 211 223 }; 212 224 213 225 static inline struct drm_fb_helper *
+1
include/drm/drm_vma_manager.h
··· 74 74 struct drm_vma_offset_node *node); 75 75 76 76 int drm_vma_node_allow(struct drm_vma_offset_node *node, struct drm_file *tag); 77 + int drm_vma_node_allow_once(struct drm_vma_offset_node *node, struct drm_file *tag); 77 78 void drm_vma_node_revoke(struct drm_vma_offset_node *node, 78 79 struct drm_file *tag); 79 80 bool drm_vma_node_is_allowed(struct drm_vma_offset_node *node,
+3 -3
include/kunit/test.h
··· 303 303 */ 304 304 #define kunit_test_init_section_suites(__suites...) \ 305 305 __kunit_test_suites(CONCATENATE(__UNIQUE_ID(array), _probe), \ 306 - CONCATENATE(__UNIQUE_ID(suites), _probe), \ 307 306 ##__suites) 308 307 309 308 #define kunit_test_init_section_suite(suite) \ ··· 682 683 .right_text = #right, \ 683 684 }; \ 684 685 \ 685 - if (likely(memcmp(__left, __right, __size) op 0)) \ 686 - break; \ 686 + if (likely(__left && __right)) \ 687 + if (likely(memcmp(__left, __right, __size) op 0)) \ 688 + break; \ 687 689 \ 688 690 _KUNIT_FAILED(test, \ 689 691 assert_type, \
+1 -1
include/kvm/arm_vgic.h
··· 263 263 struct vgic_io_device dist_iodev; 264 264 265 265 bool has_its; 266 - bool save_its_tables_in_progress; 266 + bool table_write_in_progress; 267 267 268 268 /* 269 269 * Contains the attributes and gpa of the LPI configuration table.
+107 -2
include/linux/apple-gmux.h
··· 8 8 #define LINUX_APPLE_GMUX_H 9 9 10 10 #include <linux/acpi.h> 11 + #include <linux/io.h> 12 + #include <linux/pnp.h> 11 13 12 14 #define GMUX_ACPI_HID "APP000B" 13 15 16 + /* 17 + * gmux port offsets. Many of these are not yet used, but may be in the 18 + * future, and it's useful to have them documented here anyhow. 19 + */ 20 + #define GMUX_PORT_VERSION_MAJOR 0x04 21 + #define GMUX_PORT_VERSION_MINOR 0x05 22 + #define GMUX_PORT_VERSION_RELEASE 0x06 23 + #define GMUX_PORT_SWITCH_DISPLAY 0x10 24 + #define GMUX_PORT_SWITCH_GET_DISPLAY 0x11 25 + #define GMUX_PORT_INTERRUPT_ENABLE 0x14 26 + #define GMUX_PORT_INTERRUPT_STATUS 0x16 27 + #define GMUX_PORT_SWITCH_DDC 0x28 28 + #define GMUX_PORT_SWITCH_EXTERNAL 0x40 29 + #define GMUX_PORT_SWITCH_GET_EXTERNAL 0x41 30 + #define GMUX_PORT_DISCRETE_POWER 0x50 31 + #define GMUX_PORT_MAX_BRIGHTNESS 0x70 32 + #define GMUX_PORT_BRIGHTNESS 0x74 33 + #define GMUX_PORT_VALUE 0xc2 34 + #define GMUX_PORT_READ 0xd0 35 + #define GMUX_PORT_WRITE 0xd4 36 + 37 + #define GMUX_MIN_IO_LEN (GMUX_PORT_BRIGHTNESS + 4) 38 + 14 39 #if IS_ENABLED(CONFIG_APPLE_GMUX) 40 + static inline bool apple_gmux_is_indexed(unsigned long iostart) 41 + { 42 + u16 val; 43 + 44 + outb(0xaa, iostart + 0xcc); 45 + outb(0x55, iostart + 0xcd); 46 + outb(0x00, iostart + 0xce); 47 + 48 + val = inb(iostart + 0xcc) | (inb(iostart + 0xcd) << 8); 49 + if (val == 0x55aa) 50 + return true; 51 + 52 + return false; 53 + } 15 54 16 55 /** 17 - * apple_gmux_present() - detect if gmux is built into the machine 56 + * apple_gmux_detect() - detect if gmux is built into the machine 57 + * 58 + * @pnp_dev: Device to probe or NULL to use the first matching device 59 + * @indexed_ret: Returns (by reference) if the gmux is indexed or not 60 + * 61 + * Detect if a supported gmux device is present by actually probing it. 62 + * This avoids the false positives returned on some models by 63 + * apple_gmux_present(). 
64 + * 65 + * Return: %true if a supported gmux ACPI device is detected and the kernel 66 + * was configured with CONFIG_APPLE_GMUX, %false otherwise. 67 + */ 68 + static inline bool apple_gmux_detect(struct pnp_dev *pnp_dev, bool *indexed_ret) 69 + { 70 + u8 ver_major, ver_minor, ver_release; 71 + struct device *dev = NULL; 72 + struct acpi_device *adev; 73 + struct resource *res; 74 + bool indexed = false; 75 + bool ret = false; 76 + 77 + if (!pnp_dev) { 78 + adev = acpi_dev_get_first_match_dev(GMUX_ACPI_HID, NULL, -1); 79 + if (!adev) 80 + return false; 81 + 82 + dev = get_device(acpi_get_first_physical_node(adev)); 83 + acpi_dev_put(adev); 84 + if (!dev) 85 + return false; 86 + 87 + pnp_dev = to_pnp_dev(dev); 88 + } 89 + 90 + res = pnp_get_resource(pnp_dev, IORESOURCE_IO, 0); 91 + if (!res || resource_size(res) < GMUX_MIN_IO_LEN) 92 + goto out; 93 + 94 + /* 95 + * Invalid version information may indicate either that the gmux 96 + * device isn't present or that it's a new one that uses indexed io. 97 + */ 98 + ver_major = inb(res->start + GMUX_PORT_VERSION_MAJOR); 99 + ver_minor = inb(res->start + GMUX_PORT_VERSION_MINOR); 100 + ver_release = inb(res->start + GMUX_PORT_VERSION_RELEASE); 101 + if (ver_major == 0xff && ver_minor == 0xff && ver_release == 0xff) { 102 + indexed = apple_gmux_is_indexed(res->start); 103 + if (!indexed) 104 + goto out; 105 + } 106 + 107 + if (indexed_ret) 108 + *indexed_ret = indexed; 109 + 110 + ret = true; 111 + out: 112 + put_device(dev); 113 + return ret; 114 + } 115 + 116 + /** 117 + * apple_gmux_present() - check if gmux ACPI device is present 18 118 * 19 119 * Drivers may use this to activate quirks specific to dual GPU MacBook Pros 20 120 * and Mac Pros, e.g. for deferred probing, runtime pm and backlight. 21 121 * 22 - * Return: %true if gmux is present and the kernel was configured 122 + * Return: %true if gmux ACPI device is present and the kernel was configured 23 123 * with CONFIG_APPLE_GMUX, %false otherwise. 
24 124 */ 25 125 static inline bool apple_gmux_present(void) ··· 130 30 #else /* !CONFIG_APPLE_GMUX */ 131 31 132 32 static inline bool apple_gmux_present(void) 33 + { 34 + return false; 35 + } 36 + 37 + static inline bool apple_gmux_detect(struct pnp_dev *pnp_dev, bool *indexed_ret) 133 38 { 134 39 return false; 135 40 }
-10
include/linux/ceph/libceph.h
··· 99 99 100 100 #define CEPH_AUTH_NAME_DEFAULT "guest" 101 101 102 - /* mount state */ 103 - enum { 104 - CEPH_MOUNT_MOUNTING, 105 - CEPH_MOUNT_MOUNTED, 106 - CEPH_MOUNT_UNMOUNTING, 107 - CEPH_MOUNT_UNMOUNTED, 108 - CEPH_MOUNT_SHUTDOWN, 109 - CEPH_MOUNT_RECOVER, 110 - }; 111 - 112 102 static inline unsigned long ceph_timeout_jiffies(unsigned long timeout) 113 103 { 114 104 return timeout ?: MAX_SCHEDULE_TIMEOUT;
+2 -1
include/linux/efi.h
··· 668 668 669 669 #define EFI_RT_SUPPORTED_ALL 0x3fff 670 670 671 - #define EFI_RT_SUPPORTED_TIME_SERVICES 0x000f 671 + #define EFI_RT_SUPPORTED_TIME_SERVICES 0x0003 672 + #define EFI_RT_SUPPORTED_WAKEUP_SERVICES 0x000c 672 673 #define EFI_RT_SUPPORTED_VARIABLE_SERVICES 0x0070 673 674 674 675 extern struct mm_struct efi_mm;
+2 -2
include/linux/highmem-internal.h
··· 200 200 static inline void __kunmap_local(const void *addr) 201 201 { 202 202 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP 203 - kunmap_flush_on_unmap(addr); 203 + kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE)); 204 204 #endif 205 205 } 206 206 ··· 227 227 static inline void __kunmap_atomic(const void *addr) 228 228 { 229 229 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP 230 - kunmap_flush_on_unmap(addr); 230 + kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE)); 231 231 #endif 232 232 pagefault_enable(); 233 233 if (IS_ENABLED(CONFIG_PREEMPT_RT))
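The highmem-internal.h hunk above aligns the address down to a page boundary before calling `kunmap_flush_on_unmap()`, since callers may unmap an address offset into the page. A user-space sketch of the alignment step (macro name here is a stand-in for the kernel's `PTR_ALIGN_DOWN()`):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* User-space equivalent of PTR_ALIGN_DOWN(addr, PAGE_SIZE): clear the
 * low bits so the pointer lands on the start of its page. The alignment
 * must be a power of two for the mask trick to work. */
#define ALIGN_DOWN_PTR(p, a) \
	((void *)((uintptr_t)(p) & ~((uintptr_t)(a) - 1)))
```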
+13
include/linux/hugetlb.h
··· 7 7 #include <linux/fs.h> 8 8 #include <linux/hugetlb_inline.h> 9 9 #include <linux/cgroup.h> 10 + #include <linux/page_ref.h> 10 11 #include <linux/list.h> 11 12 #include <linux/kref.h> 12 13 #include <linux/pgtable.h> ··· 1185 1184 #else 1186 1185 static inline __init void hugetlb_cma_reserve(int order) 1187 1186 { 1187 + } 1188 + #endif 1189 + 1190 + #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE 1191 + static inline bool hugetlb_pmd_shared(pte_t *pte) 1192 + { 1193 + return page_count(virt_to_page(pte)) > 1; 1194 + } 1195 + #else 1196 + static inline bool hugetlb_pmd_shared(pte_t *pte) 1197 + { 1198 + return false; 1188 1199 } 1189 1200 #endif 1190 1201
+4 -1
include/linux/memcontrol.h
··· 1666 1666 static inline void mem_cgroup_track_foreign_dirty(struct folio *folio, 1667 1667 struct bdi_writeback *wb) 1668 1668 { 1669 + struct mem_cgroup *memcg; 1670 + 1669 1671 if (mem_cgroup_disabled()) 1670 1672 return; 1671 1673 1672 - if (unlikely(&folio_memcg(folio)->css != wb->memcg_css)) 1674 + memcg = folio_memcg(folio); 1675 + if (unlikely(memcg && &memcg->css != wb->memcg_css)) 1673 1676 mem_cgroup_track_foreign_dirty_slowpath(folio, wb); 1674 1677 } 1675 1678
-2
include/linux/nvmem-provider.h
··· 70 70 * @word_size: Minimum read/write access granularity. 71 71 * @stride: Minimum read/write access stride. 72 72 * @priv: User context passed to read/write callbacks. 73 - * @wp-gpio: Write protect pin 74 73 * @ignore_wp: Write Protect pin is managed by the provider. 75 74 * 76 75 * Note: A default "nvmem<id>" name will be assigned to the device if ··· 84 85 const char *name; 85 86 int id; 86 87 struct module *owner; 87 - struct gpio_desc *wp_gpio; 88 88 const struct nvmem_cell_info *cells; 89 89 int ncells; 90 90 const struct nvmem_keepout *keepout;
+9
include/linux/spinlock.h
··· 476 476 #define atomic_dec_and_lock_irqsave(atomic, lock, flags) \ 477 477 __cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) 478 478 479 + extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock); 480 + #define atomic_dec_and_raw_lock(atomic, lock) \ 481 + __cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock)) 482 + 483 + extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, 484 + unsigned long *flags); 485 + #define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \ 486 + __cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) 487 + 479 488 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask, 480 489 size_t max_size, unsigned int cpu_mult, 481 490 gfp_t gfp, const char *name,
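The spinlock.h hunk above adds raw-spinlock variants of the classic dec-and-lock primitive. A hedged user-space sketch of the pattern these helpers implement, using a pthread mutex in place of a raw spinlock:

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int refs;
static pthread_mutex_t reflock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Dec-and-lock pattern: decrement lock-free while the count stays above 1;
 * only take the lock when the count might drop to zero, and return true
 * (with the lock held) exactly when it did.
 */
static bool dec_and_lock(atomic_int *cnt, pthread_mutex_t *lock)
{
	int v = atomic_load(cnt);

	while (v > 1) {
		if (atomic_compare_exchange_weak(cnt, &v, v - 1))
			return false;	/* fast path, lock not taken */
	}

	pthread_mutex_lock(lock);
	if (atomic_fetch_sub(cnt, 1) == 1)
		return true;		/* hit zero: caller holds the lock */
	pthread_mutex_unlock(lock);
	return false;
}

/* Exercise the two paths: 2 -> 1 without the lock, then 1 -> 0 locked. */
static int demo(void)
{
	atomic_store(&refs, 2);
	if (dec_and_lock(&refs, &reflock))
		return 0;
	if (!dec_and_lock(&refs, &reflock))
		return 0;
	pthread_mutex_unlock(&reflock);
	return atomic_load(&refs) == 0;
}
```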
+1
include/linux/stmmac.h
··· 252 252 int rss_en; 253 253 int mac_port_sel_speed; 254 254 bool en_tx_lpi_clockgating; 255 + bool rx_clk_runs_in_lpi; 255 256 int has_xgmac; 256 257 bool vlan_fail_q_en; 257 258 u8 vlan_fail_q;
+1 -2
include/linux/swap.h
··· 418 418 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, 419 419 unsigned long nr_pages, 420 420 gfp_t gfp_mask, 421 - unsigned int reclaim_options, 422 - nodemask_t *nodemask); 421 + unsigned int reclaim_options); 423 422 extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, 424 423 gfp_t gfp_mask, bool noswap, 425 424 pg_data_t *pgdat,
+12
include/linux/util_macros.h
··· 38 38 */ 39 39 #define find_closest_descending(x, a, as) __find_closest(x, a, as, >=) 40 40 41 + /** 42 + * is_insidevar - check if the @ptr points inside the @var memory range. 43 + * @ptr: the pointer to a memory address. 44 + * @var: the variable which address and size identify the memory range. 45 + * 46 + * Evaluates to true if the address in @ptr lies within the memory 47 + * range allocated to @var. 48 + */ 49 + #define is_insidevar(ptr, var) \ 50 + ((uintptr_t)(ptr) >= (uintptr_t)(var) && \ 51 + (uintptr_t)(ptr) < (uintptr_t)(var) + sizeof(var)) 52 + 41 53 #endif
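The new `is_insidevar()` macro above can be exercised directly in user space; the macro body below is copied from the hunk and the array is an illustrative stand-in for a variable whose bounds are being checked:

```c
#include <assert.h>
#include <stdint.h>

/* Verbatim from the include/linux/util_macros.h hunk. */
#define is_insidevar(ptr, var) \
	((uintptr_t)(ptr) >= (uintptr_t)(var) && \
	 (uintptr_t)(ptr) < (uintptr_t)(var) + sizeof(var))

static char stackbuf[64];	/* illustrative variable */
```

The upper bound is exclusive: a pointer one past the end of the variable is not "inside" it.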
+3
include/net/mana/gdma.h
··· 336 336 }; 337 337 }; 338 338 339 + #define MANA_IRQ_NAME_SZ 32 340 + 339 341 struct gdma_irq_context { 340 342 void (*handler)(void *arg); 341 343 void *arg; 344 + char name[MANA_IRQ_NAME_SZ]; 342 345 }; 343 346 344 347 struct gdma_context {
+2
include/scsi/libiscsi.h
··· 422 422 extern struct iscsi_cls_session * 423 423 iscsi_session_setup(struct iscsi_transport *, struct Scsi_Host *shost, 424 424 uint16_t, int, int, uint32_t, unsigned int); 425 + void iscsi_session_remove(struct iscsi_cls_session *cls_session); 426 + void iscsi_session_free(struct iscsi_cls_session *cls_session); 425 427 extern void iscsi_session_teardown(struct iscsi_cls_session *); 426 428 extern void iscsi_session_recovery_timedout(struct iscsi_cls_session *); 427 429 extern int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
+1 -2
include/uapi/linux/netfilter/nf_conntrack_sctp.h
··· 15 15 SCTP_CONNTRACK_SHUTDOWN_RECD, 16 16 SCTP_CONNTRACK_SHUTDOWN_ACK_SENT, 17 17 SCTP_CONNTRACK_HEARTBEAT_SENT, 18 - SCTP_CONNTRACK_HEARTBEAT_ACKED, 19 - SCTP_CONNTRACK_DATA_SENT, 18 + SCTP_CONNTRACK_HEARTBEAT_ACKED, /* no longer used */ 20 19 SCTP_CONNTRACK_MAX 21 20 }; 22 21
+2
include/ufs/ufshcd.h
··· 808 808 * @urgent_bkops_lvl: keeps track of urgent bkops level for device 809 809 * @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for 810 810 * device is known or not. 811 + * @wb_mutex: used to serialize devfreq and sysfs write booster toggling 811 812 * @clk_scaling_lock: used to serialize device commands and clock scaling 812 813 * @desc_size: descriptor sizes reported by device 813 814 * @scsi_block_reqs_cnt: reference counting for scsi block requests ··· 952 951 enum bkops_status urgent_bkops_lvl; 953 952 bool is_urgent_bkops_lvl_checked; 954 953 954 + struct mutex wb_mutex; 955 955 struct rw_semaphore clk_scaling_lock; 956 956 unsigned char desc_size[QUERY_DESC_IDN_MAX]; 957 957 atomic_t scsi_block_reqs_cnt;
+8 -10
io_uring/io_uring.c
··· 1765 1765 } 1766 1766 spin_unlock(&ctx->completion_lock); 1767 1767 1768 - ret = io_req_prep_async(req); 1769 - if (ret) { 1770 - fail: 1771 - io_req_defer_failed(req, ret); 1772 - return; 1773 - } 1774 1768 io_prep_async_link(req); 1775 1769 de = kmalloc(sizeof(*de), GFP_KERNEL); 1776 1770 if (!de) { 1777 1771 ret = -ENOMEM; 1778 - goto fail; 1772 + io_req_defer_failed(req, ret); 1773 + return; 1779 1774 } 1780 1775 1781 1776 spin_lock(&ctx->completion_lock); ··· 2043 2048 req->flags &= ~REQ_F_HARDLINK; 2044 2049 req->flags |= REQ_F_LINK; 2045 2050 io_req_defer_failed(req, req->cqe.res); 2046 - } else if (unlikely(req->ctx->drain_active)) { 2047 - io_drain_req(req); 2048 2051 } else { 2049 2052 int ret = io_req_prep_async(req); 2050 2053 2051 - if (unlikely(ret)) 2054 + if (unlikely(ret)) { 2052 2055 io_req_defer_failed(req, ret); 2056 + return; 2057 + } 2058 + 2059 + if (unlikely(req->ctx->drain_active)) 2060 + io_drain_req(req); 2053 2061 else 2054 2062 io_queue_iowq(req, NULL); 2055 2063 }
+11
io_uring/net.c
··· 62 62 u16 flags; 63 63 /* initialised and used only by !msg send variants */ 64 64 u16 addr_len; 65 + u16 buf_group; 65 66 void __user *addr; 66 67 /* used only for send zerocopy */ 67 68 struct io_kiocb *notif; ··· 581 580 if (req->opcode == IORING_OP_RECV && sr->len) 582 581 return -EINVAL; 583 582 req->flags |= REQ_F_APOLL_MULTISHOT; 583 + /* 584 + * Store the buffer group for this multishot receive separately, 585 + * as if we end up doing an io-wq based issue that selects a 586 + * buffer, it has to be committed immediately and that will 587 + * clear ->buf_list. This means we lose the link to the buffer 588 + * list, and the eventual buffer put on completion then cannot 589 + * restore it. 590 + */ 591 + sr->buf_group = req->buf_index; 584 592 } 585 593 586 594 #ifdef CONFIG_COMPAT ··· 606 596 607 597 sr->done_io = 0; 608 598 sr->len = 0; /* get from the provided buffer */ 599 + req->buf_index = sr->buf_group; 609 600 } 610 601 611 602 /*
-1
kernel/bpf/bpf_lsm.c
··· 51 51 */ 52 52 BTF_SET_START(bpf_lsm_locked_sockopt_hooks) 53 53 #ifdef CONFIG_SECURITY_NETWORK 54 - BTF_ID(func, bpf_lsm_socket_sock_rcv_skb) 55 54 BTF_ID(func, bpf_lsm_sock_graft) 56 55 BTF_ID(func, bpf_lsm_inet_csk_clone) 57 56 BTF_ID(func, bpf_lsm_inet_conn_established)
+2 -2
kernel/bpf/btf.c
··· 7782 7782 7783 7783 sort(tab->dtors, tab->cnt, sizeof(tab->dtors[0]), btf_id_cmp_func, NULL); 7784 7784 7785 - return 0; 7786 7785 end: 7787 - btf_free_dtor_kfunc_tab(btf); 7786 + if (ret) 7787 + btf_free_dtor_kfunc_tab(btf); 7788 7788 btf_put(btf); 7789 7789 return ret; 7790 7790 }
+1 -1
kernel/bpf/memalloc.c
··· 71 71 if (size <= 192) 72 72 return size_index[(size - 1) / 8] - 1; 73 73 74 - return fls(size - 1) - 1; 74 + return fls(size - 1) - 2; 75 75 } 76 76 77 77 #define NUM_CACHES 11
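The memalloc.c hunk above corrects the bias in the power-of-two bucket index for sizes beyond the `size_index[]` table. A sketch of the corrected mapping, with a user-space stand-in for the kernel's `fls()`:

```c
#include <assert.h>

/* User-space stand-in for the kernel's fls(): index of the highest set
 * bit, 1-based; fls(0) == 0. */
static int fls_u(unsigned int x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* With the corrected -2 bias, sizes in (192, 256] share one index,
 * (256, 512] the next, and so on; which allocator cache each index
 * selects is kernel-internal and not modeled here. */
static int bucket_index(unsigned int size)
{
	return fls_u(size - 1) - 2;
}
```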
+18 -7
kernel/bpf/verifier.c
··· 3243 3243 return reg->type != SCALAR_VALUE; 3244 3244 } 3245 3245 3246 + /* Copy src state preserving dst->parent and dst->live fields */ 3247 + static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src) 3248 + { 3249 + struct bpf_reg_state *parent = dst->parent; 3250 + enum bpf_reg_liveness live = dst->live; 3251 + 3252 + *dst = *src; 3253 + dst->parent = parent; 3254 + dst->live = live; 3255 + } 3256 + 3246 3257 static void save_register_state(struct bpf_func_state *state, 3247 3258 int spi, struct bpf_reg_state *reg, 3248 3259 int size) 3249 3260 { 3250 3261 int i; 3251 3262 3252 - state->stack[spi].spilled_ptr = *reg; 3263 + copy_register_state(&state->stack[spi].spilled_ptr, reg); 3253 3264 if (size == BPF_REG_SIZE) 3254 3265 state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN; 3255 3266 ··· 3588 3577 */ 3589 3578 s32 subreg_def = state->regs[dst_regno].subreg_def; 3590 3579 3591 - state->regs[dst_regno] = *reg; 3580 + copy_register_state(&state->regs[dst_regno], reg); 3592 3581 state->regs[dst_regno].subreg_def = subreg_def; 3593 3582 } else { 3594 3583 for (i = 0; i < size; i++) { ··· 3609 3598 3610 3599 if (dst_regno >= 0) { 3611 3600 /* restore register state from stack */ 3612 - state->regs[dst_regno] = *reg; 3601 + copy_register_state(&state->regs[dst_regno], reg); 3613 3602 /* mark reg as written since spilled pointer state likely 3614 3603 * has its liveness marks cleared by is_state_visited() 3615 3604 * which resets stack/reg liveness for state transitions ··· 9603 9592 */ 9604 9593 if (!ptr_is_dst_reg) { 9605 9594 tmp = *dst_reg; 9606 - *dst_reg = *ptr_reg; 9595 + copy_register_state(dst_reg, ptr_reg); 9607 9596 } 9608 9597 ret = sanitize_speculative_path(env, NULL, env->insn_idx + 1, 9609 9598 env->insn_idx); ··· 10856 10845 * to propagate min/max range. 
10857 10846 */ 10858 10847 src_reg->id = ++env->id_gen; 10859 - *dst_reg = *src_reg; 10848 + copy_register_state(dst_reg, src_reg); 10860 10849 dst_reg->live |= REG_LIVE_WRITTEN; 10861 10850 dst_reg->subreg_def = DEF_NOT_SUBREG; 10862 10851 } else { ··· 10867 10856 insn->src_reg); 10868 10857 return -EACCES; 10869 10858 } else if (src_reg->type == SCALAR_VALUE) { 10870 - *dst_reg = *src_reg; 10859 + copy_register_state(dst_reg, src_reg); 10871 10860 /* Make sure ID is cleared otherwise 10872 10861 * dst_reg min/max could be incorrectly 10873 10862 * propagated into src_reg by find_equal_scalars() ··· 11666 11655 11667 11656 bpf_for_each_reg_in_vstate(vstate, state, reg, ({ 11668 11657 if (reg->type == SCALAR_VALUE && reg->id == known_reg->id) 11669 - *reg = *known_reg; 11658 + copy_register_state(reg, known_reg); 11670 11659 })); 11671 11660 } 11672 11661
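The verifier hunk above replaces whole-struct assignments with `copy_register_state()`, which copies everything but restores `->parent` and `->live` so liveness tracking survives the copy. A minimal userspace sketch of that pattern follows; `struct reg` is a hypothetical reduction (only the two preserved fields mirror the hunk), not the real `bpf_reg_state`.

```c
#include <assert.h>

/* Reduced stand-in for bpf_reg_state: one payload field plus the
 * two bookkeeping fields the kernel patch preserves across copies. */
struct reg {
	int value;          /* payload: copied from the source */
	struct reg *parent; /* liveness chain: must survive the copy */
	int live;           /* liveness flags: must survive the copy */
};

/* Same shape as copy_register_state(): stash the destination's
 * bookkeeping, do the bulk struct copy, then put it back. */
static void copy_reg(struct reg *dst, const struct reg *src)
{
	struct reg *parent = dst->parent;
	int live = dst->live;

	*dst = *src;        /* clobbers every field, including the two above */
	dst->parent = parent;
	dst->live = live;
}
```

A plain `*dst = *src` would silently rewire `dst` into `src`'s liveness chain, which is exactly the bug class the kernel change closes.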
+2 -1
kernel/cgroup/cpuset.c
··· 1346 1346 * A parent can be left with no CPU as long as there is no 1347 1347 * task directly associated with the parent partition. 1348 1348 */ 1349 - if (!cpumask_intersects(cs->cpus_allowed, parent->effective_cpus) && 1349 + if (cpumask_subset(parent->effective_cpus, cs->cpus_allowed) && 1350 1350 partition_is_populated(parent, cs)) 1351 1351 return PERR_NOCPUS; 1352 1352 ··· 2324 2324 new_prs = -new_prs; 2325 2325 spin_lock_irq(&callback_lock); 2326 2326 cs->partition_root_state = new_prs; 2327 + WRITE_ONCE(cs->prs_err, err); 2327 2328 spin_unlock_irq(&callback_lock); 2328 2329 /* 2329 2330 * Update child cpusets, if present.
+17 -22
kernel/events/core.c
··· 4813 4813 4814 4814 cpc = per_cpu_ptr(pmu->cpu_pmu_context, event->cpu); 4815 4815 epc = &cpc->epc; 4816 - 4816 + raw_spin_lock_irq(&ctx->lock); 4817 4817 if (!epc->ctx) { 4818 4818 atomic_set(&epc->refcount, 1); 4819 4819 epc->embedded = 1; 4820 - raw_spin_lock_irq(&ctx->lock); 4821 4820 list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list); 4822 4821 epc->ctx = ctx; 4823 - raw_spin_unlock_irq(&ctx->lock); 4824 4822 } else { 4825 4823 WARN_ON_ONCE(epc->ctx != ctx); 4826 4824 atomic_inc(&epc->refcount); 4827 4825 } 4828 - 4826 + raw_spin_unlock_irq(&ctx->lock); 4829 4827 return epc; 4830 4828 } 4831 4829 ··· 4894 4896 4895 4897 static void put_pmu_ctx(struct perf_event_pmu_context *epc) 4896 4898 { 4899 + struct perf_event_context *ctx = epc->ctx; 4897 4900 unsigned long flags; 4898 4901 4899 - if (!atomic_dec_and_test(&epc->refcount)) 4902 + /* 4903 + * XXX 4904 + * 4905 + * lockdep_assert_held(&ctx->mutex); 4906 + * 4907 + * can't because of the call-site in _free_event()/put_event() 4908 + * which isn't always called under ctx->mutex. 4909 + */ 4910 + if (!atomic_dec_and_raw_lock_irqsave(&epc->refcount, &ctx->lock, flags)) 4900 4911 return; 4901 4912 4902 - if (epc->ctx) { 4903 - struct perf_event_context *ctx = epc->ctx; 4913 + WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry)); 4904 4914 4905 - /* 4906 - * XXX 4907 - * 4908 - * lockdep_assert_held(&ctx->mutex); 4909 - * 4910 - * can't because of the call-site in _free_event()/put_event() 4911 - * which isn't always called under ctx->mutex. 
4912 - */ 4913 - 4914 - WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry)); 4915 - raw_spin_lock_irqsave(&ctx->lock, flags); 4916 - list_del_init(&epc->pmu_ctx_entry); 4917 - epc->ctx = NULL; 4918 - raw_spin_unlock_irqrestore(&ctx->lock, flags); 4919 - } 4915 + list_del_init(&epc->pmu_ctx_entry); 4916 + epc->ctx = NULL; 4920 4917 4921 4918 WARN_ON_ONCE(!list_empty(&epc->pinned_active)); 4922 4919 WARN_ON_ONCE(!list_empty(&epc->flexible_active)); 4920 + 4921 + raw_spin_unlock_irqrestore(&ctx->lock, flags); 4923 4922 4924 4923 if (epc->embedded) 4925 4924 return;
+2 -2
kernel/irq/irqdomain.c
··· 114 114 { 115 115 struct irqchip_fwid *fwid; 116 116 117 - if (WARN_ON(!is_fwnode_irqchip(fwnode))) 117 + if (!fwnode || WARN_ON(!is_fwnode_irqchip(fwnode))) 118 118 return; 119 119 120 120 fwid = container_of(fwnode, struct irqchip_fwid, fwnode); ··· 1915 1915 1916 1916 static void debugfs_remove_domain_dir(struct irq_domain *d) 1917 1917 { 1918 - debugfs_remove(debugfs_lookup(d->name, domain_dir)); 1918 + debugfs_lookup_and_remove(d->name, domain_dir); 1919 1919 } 1920 1920 1921 1921 void __init irq_domain_debugfs_init(struct dentry *root)
+5 -1
kernel/irq/msi.c
··· 1000 1000 fail: 1001 1001 msi_unlock_descs(dev); 1002 1002 free_fwnode: 1003 - kfree(fwnode); 1003 + irq_domain_free_fwnode(fwnode); 1004 1004 free_bundle: 1005 1005 kfree(bundle); 1006 1006 return false; ··· 1013 1013 */ 1014 1014 void msi_remove_device_irq_domain(struct device *dev, unsigned int domid) 1015 1015 { 1016 + struct fwnode_handle *fwnode = NULL; 1016 1017 struct msi_domain_info *info; 1017 1018 struct irq_domain *domain; 1018 1019 ··· 1026 1025 1027 1026 dev->msi.data->__domains[domid].domain = NULL; 1028 1027 info = domain->host_data; 1028 + if (irq_domain_is_msi_device(domain)) 1029 + fwnode = domain->fwnode; 1029 1030 irq_domain_remove(domain); 1031 + irq_domain_free_fwnode(fwnode); 1030 1032 kfree(container_of(info, struct msi_domain_template, info)); 1031 1033 1032 1034 unlock:
+21 -5
kernel/module/main.c
··· 2393 2393 sched_annotate_sleep(); 2394 2394 mutex_lock(&module_mutex); 2395 2395 mod = find_module_all(name, strlen(name), true); 2396 - ret = !mod || mod->state == MODULE_STATE_LIVE; 2396 + ret = !mod || mod->state == MODULE_STATE_LIVE 2397 + || mod->state == MODULE_STATE_GOING; 2397 2398 mutex_unlock(&module_mutex); 2398 2399 2399 2400 return ret; ··· 2570 2569 2571 2570 mod->state = MODULE_STATE_UNFORMED; 2572 2571 2573 - again: 2574 2572 mutex_lock(&module_mutex); 2575 2573 old = find_module_all(mod->name, strlen(mod->name), true); 2576 2574 if (old != NULL) { 2577 - if (old->state != MODULE_STATE_LIVE) { 2575 + if (old->state == MODULE_STATE_COMING 2576 + || old->state == MODULE_STATE_UNFORMED) { 2578 2577 /* Wait in case it fails to load. */ 2579 2578 mutex_unlock(&module_mutex); 2580 2579 err = wait_event_interruptible(module_wq, 2581 2580 finished_loading(mod->name)); 2582 2581 if (err) 2583 2582 goto out_unlocked; 2584 - goto again; 2583 + 2584 + /* The module might have gone in the meantime. */ 2585 + mutex_lock(&module_mutex); 2586 + old = find_module_all(mod->name, strlen(mod->name), 2587 + true); 2585 2588 } 2586 - err = -EEXIST; 2589 + 2590 + /* 2591 + * We are here only when the same module was being loaded. Do 2592 + * not try to load it again right now. It prevents long delays 2593 + * caused by serialized module load failures. It might happen 2594 + * when more devices of the same type trigger load of 2595 + * a particular module. 2596 + */ 2597 + if (old && old->state == MODULE_STATE_LIVE) 2598 + err = -EEXIST; 2599 + else 2600 + err = -EBUSY; 2587 2601 goto out; 2588 2602 } 2589 2603 mod_update_bounds(mod);
+8 -2
kernel/sched/core.c
··· 8290 8290 if (retval) 8291 8291 goto out_put_task; 8292 8292 8293 + /* 8294 + * With non-SMP configs, user_cpus_ptr/user_mask isn't used and 8295 + * alloc_user_cpus_ptr() returns NULL. 8296 + */ 8293 8297 user_mask = alloc_user_cpus_ptr(NUMA_NO_NODE); 8294 - if (IS_ENABLED(CONFIG_SMP) && !user_mask) { 8298 + if (user_mask) { 8299 + cpumask_copy(user_mask, in_mask); 8300 + } else if (IS_ENABLED(CONFIG_SMP)) { 8295 8301 retval = -ENOMEM; 8296 8302 goto out_put_task; 8297 8303 } 8298 - cpumask_copy(user_mask, in_mask); 8304 + 8299 8305 ac = (struct affinity_context){ 8300 8306 .new_mask = in_mask, 8301 8307 .user_mask = user_mask,
+26 -20
kernel/sched/fair.c
··· 7229 7229 eenv_task_busy_time(&eenv, p, prev_cpu); 7230 7230 7231 7231 for (; pd; pd = pd->next) { 7232 + unsigned long util_min = p_util_min, util_max = p_util_max; 7232 7233 unsigned long cpu_cap, cpu_thermal_cap, util; 7233 7234 unsigned long cur_delta, max_spare_cap = 0; 7234 7235 unsigned long rq_util_min, rq_util_max; 7235 - unsigned long util_min, util_max; 7236 7236 unsigned long prev_spare_cap = 0; 7237 7237 int max_spare_cap_cpu = -1; 7238 7238 unsigned long base_energy; ··· 7251 7251 eenv.pd_cap = 0; 7252 7252 7253 7253 for_each_cpu(cpu, cpus) { 7254 + struct rq *rq = cpu_rq(cpu); 7255 + 7254 7256 eenv.pd_cap += cpu_thermal_cap; 7255 7257 7256 7258 if (!cpumask_test_cpu(cpu, sched_domain_span(sd))) ··· 7271 7269 * much capacity we can get out of the CPU; this is 7272 7270 * aligned with sched_cpu_util(). 7273 7271 */ 7274 - if (uclamp_is_used()) { 7275 - if (uclamp_rq_is_idle(cpu_rq(cpu))) { 7276 - util_min = p_util_min; 7277 - util_max = p_util_max; 7278 - } else { 7279 - /* 7280 - * Open code uclamp_rq_util_with() except for 7281 - * the clamp() part. Ie: apply max aggregation 7282 - * only. util_fits_cpu() logic requires to 7283 - * operate on non clamped util but must use the 7284 - * max-aggregated uclamp_{min, max}. 7285 - */ 7286 - rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN); 7287 - rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX); 7272 + if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) { 7273 + /* 7274 + * Open code uclamp_rq_util_with() except for 7275 + * the clamp() part. Ie: apply max aggregation 7276 + * only. util_fits_cpu() logic requires to 7277 + * operate on non clamped util but must use the 7278 + * max-aggregated uclamp_{min, max}. 
7279 + */ 7280 + rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN); 7281 + rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX); 7288 7282 7289 - util_min = max(rq_util_min, p_util_min); 7290 - util_max = max(rq_util_max, p_util_max); 7291 - } 7283 + util_min = max(rq_util_min, p_util_min); 7284 + util_max = max(rq_util_max, p_util_max); 7292 7285 } 7293 7286 if (!util_fits_cpu(util, util_min, util_max, cpu)) 7294 7287 continue; ··· 8868 8871 * * Thermal pressure will impact all cpus in this perf domain 8869 8872 * equally. 8870 8873 */ 8871 - if (static_branch_unlikely(&sched_asym_cpucapacity)) { 8874 + if (sched_energy_enabled()) { 8872 8875 unsigned long inv_cap = capacity_orig - thermal_load_avg(rq); 8873 - struct perf_domain *pd = rcu_dereference(rq->rd->pd); 8876 + struct perf_domain *pd; 8874 8877 8878 + rcu_read_lock(); 8879 + 8880 + pd = rcu_dereference(rq->rd->pd); 8875 8881 rq->cpu_capacity_inverted = 0; 8876 8882 8877 8883 for (; pd; pd = pd->next) { 8878 8884 struct cpumask *pd_span = perf_domain_span(pd); 8879 8885 unsigned long pd_cap_orig, pd_cap; 8886 + 8887 + /* We can't be inverted against our own pd */ 8888 + if (cpumask_test_cpu(cpu_of(rq), pd_span)) 8889 + continue; 8880 8890 8881 8891 cpu = cpumask_any(pd_span); 8882 8892 pd_cap_orig = arch_scale_cpu_capacity(cpu); ··· 8909 8905 break; 8910 8906 } 8911 8907 } 8908 + 8909 + rcu_read_unlock(); 8912 8910 } 8913 8911 8914 8912 trace_sched_cpu_capacity_tp(rq);
+4 -4
kernel/trace/Kconfig
··· 933 933 default y 934 934 help 935 935 The ring buffer has its own internal recursion. Although when 936 - recursion happens it wont cause harm because of the protection, 937 - but it does cause an unwanted overhead. Enabling this option will 936 + recursion happens it won't cause harm because of the protection, 937 + but it does cause unwanted overhead. Enabling this option will 938 938 place where recursion was detected into the ftrace "recursed_functions" 939 939 file. 940 940 ··· 1017 1017 The test runs for 10 seconds. This will slow your boot time 1018 1018 by at least 10 more seconds. 1019 1019 1020 - At the end of the test, statics and more checks are done. 1021 - It will output the stats of each per cpu buffer. What 1020 + At the end of the test, statistics and more checks are done. 1021 + It will output the stats of each per cpu buffer: What 1022 1022 was written, the sizes, what was read, what was lost, and 1023 1023 other similar details. 1024 1024
+2 -1
kernel/trace/bpf_trace.c
··· 833 833 834 834 work = container_of(entry, struct send_signal_irq_work, irq_work); 835 835 group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, work->type); 836 + put_task_struct(work->task); 836 837 } 837 838 838 839 static int bpf_send_signal_common(u32 sig, enum pid_type type) ··· 868 867 * to the irq_work. The current task may change when queued 869 868 * irq works get executed. 870 869 */ 871 - work->task = current; 870 + work->task = get_task_struct(current); 872 871 work->sig = sig; 873 872 work->type = type; 874 873 irq_work_queue(&work->irq_work);
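The bpf_trace fix above pins `current` with `get_task_struct()` when the irq_work is queued and drops the reference in the handler, so the task cannot be freed before the deferred signal is sent. The sketch below models that take-ref-at-queue-time / put-at-run-time pattern in userspace; the struct and function names are illustrative stand-ins, not the kernel API.

```c
#include <assert.h>
#include <stdatomic.h>

/* Stand-in for a refcounted task_struct. */
struct task { atomic_int refs; };

static struct task *get_task(struct task *t)
{
	atomic_fetch_add(&t->refs, 1);   /* pin: object cannot go away */
	return t;
}

static int put_task(struct task *t)
{
	/* Returns 1 when this put dropped the last reference. */
	return atomic_fetch_sub(&t->refs, 1) == 1;
}

struct work { struct task *task; };

static void queue_work(struct work *w, struct task *t)
{
	w->task = get_task(t);           /* reference taken at queue time */
}

static void run_work(struct work *w)
{
	/* ... deliver the deferred signal to w->task ... */
	(void)put_task(w->task);         /* release the queue-time pin */
}
```

Without the `get_task()` at queue time, the window between queueing and running the work is exactly where a use-after-free could occur.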
+22 -1
kernel/trace/ftrace.c
··· 1248 1248 call_rcu(&hash->rcu, __free_ftrace_hash_rcu); 1249 1249 } 1250 1250 1251 + /** 1252 + * ftrace_free_filter - remove all filters for an ftrace_ops 1253 + * @ops - the ops to remove the filters from 1254 + */ 1251 1255 void ftrace_free_filter(struct ftrace_ops *ops) 1252 1256 { 1253 1257 ftrace_ops_init(ops); 1254 1258 free_ftrace_hash(ops->func_hash->filter_hash); 1255 1259 free_ftrace_hash(ops->func_hash->notrace_hash); 1256 1260 } 1261 + EXPORT_SYMBOL_GPL(ftrace_free_filter); 1257 1262 1258 1263 static struct ftrace_hash *alloc_ftrace_hash(int size_bits) 1259 1264 { ··· 5844 5839 * 5845 5840 * Filters denote which functions should be enabled when tracing is enabled 5846 5841 * If @ip is NULL, it fails to update filter. 5842 + * 5843 + * This can allocate memory which must be freed before @ops can be freed, 5844 + * either by removing each filtered addr or by using 5845 + * ftrace_free_filter(@ops). 5847 5846 */ 5848 5847 int ftrace_set_filter_ip(struct ftrace_ops *ops, unsigned long ip, 5849 5848 int remove, int reset) ··· 5867 5858 * 5868 5859 * Filters denote which functions should be enabled when tracing is enabled 5869 5860 * If @ips array or any ip specified within is NULL , it fails to update filter. 5870 - */ 5861 + * 5862 + * This can allocate memory which must be freed before @ops can be freed, 5863 + * either by removing each filtered addr or by using 5864 + * ftrace_free_filter(@ops). 5865 + */ 5871 5866 int ftrace_set_filter_ips(struct ftrace_ops *ops, unsigned long *ips, 5872 5867 unsigned int cnt, int remove, int reset) 5873 5868 { ··· 5913 5900 * 5914 5901 * Filters denote which functions should be enabled when tracing is enabled. 5915 5902 * If @buf is NULL and reset is set, all functions will be enabled for tracing. 5903 + * 5904 + * This can allocate memory which must be freed before @ops can be freed, 5905 + * either by removing each filtered addr or by using 5906 + * ftrace_free_filter(@ops). 
5916 5907 */ 5917 5908 int ftrace_set_filter(struct ftrace_ops *ops, unsigned char *buf, 5918 5909 int len, int reset) ··· 5936 5919 * Notrace Filters denote which functions should not be enabled when tracing 5937 5920 * is enabled. If @buf is NULL and reset is set, all functions will be enabled 5938 5921 * for tracing. 5922 + * 5923 + * This can allocate memory which must be freed before @ops can be freed, 5924 + * either by removing each filtered addr or by using 5925 + * ftrace_free_filter(@ops). 5939 5926 */ 5940 5927 int ftrace_set_notrace(struct ftrace_ops *ops, unsigned char *buf, 5941 5928 int len, int reset)
+1 -1
kernel/trace/rv/rv.c
··· 516 516 struct rv_monitor_def *mdef; 517 517 int retval = -EINVAL; 518 518 bool enable = true; 519 - char *ptr = buff; 519 + char *ptr; 520 520 int len; 521 521 522 522 if (count < 1 || count > MAX_RV_MONITOR_NAME_SIZE + 1)
+2
kernel/trace/trace.c
··· 10295 10295 static_key_enable(&tracepoint_printk_key.key); 10296 10296 } 10297 10297 tracer_alloc_buffers(); 10298 + 10299 + init_events(); 10298 10300 } 10299 10301 10300 10302 void __init trace_init(void)
+1
kernel/trace/trace.h
··· 1490 1490 extern void trace_event_enable_tgid_record(bool enable); 1491 1491 1492 1492 extern int event_trace_init(void); 1493 + extern int init_events(void); 1493 1494 extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr); 1494 1495 extern int event_trace_del_tracer(struct trace_array *tr); 1495 1496 extern void __trace_early_add_events(struct trace_array *tr);
+4 -4
kernel/trace/trace_events_filter.c
··· 128 128 } 129 129 130 130 /** 131 - * prog_entry - a singe entry in the filter program 131 + * struct prog_entry - a singe entry in the filter program 132 132 * @target: Index to jump to on a branch (actually one minus the index) 133 133 * @when_to_branch: The value of the result of the predicate to do a branch 134 134 * @pred: The predicate to execute. ··· 140 140 }; 141 141 142 142 /** 143 - * update_preds- assign a program entry a label target 143 + * update_preds - assign a program entry a label target 144 144 * @prog: The program array 145 145 * @N: The index of the current entry in @prog 146 - * @when_to_branch: What to assign a program entry for its branch condition 146 + * @invert: What to assign a program entry for its branch condition 147 147 * 148 148 * The program entry at @N has a target that points to the index of a program 149 149 * entry that can have its target and when_to_branch fields updated. 150 150 * Update the current program entry denoted by index @N target field to be 151 151 * that of the updated entry. This will denote the entry to update if 152 - * we are processing an "||" after an "&&" 152 + * we are processing an "||" after an "&&". 153 153 */ 154 154 static void update_preds(struct prog_entry *prog, int N, int invert) 155 155 {
+2
kernel/trace/trace_events_hist.c
··· 1988 1988 hist_field->fn_num = flags & HIST_FIELD_FL_LOG2 ? HIST_FIELD_FN_LOG2 : 1989 1989 HIST_FIELD_FN_BUCKET; 1990 1990 hist_field->operands[0] = create_hist_field(hist_data, field, fl, NULL); 1991 + if (!hist_field->operands[0]) 1992 + goto free; 1991 1993 hist_field->size = hist_field->operands[0]->size; 1992 1994 hist_field->type = kstrdup_const(hist_field->operands[0]->type, GFP_KERNEL); 1993 1995 if (!hist_field->type)
+2 -3
kernel/trace/trace_osnoise.c
··· 147 147 * register/unregister serialization is provided by trace's 148 148 * trace_types_lock. 149 149 */ 150 - lockdep_assert_held(&trace_types_lock); 151 - 152 - list_for_each_entry_rcu(inst, &osnoise_instances, list) { 150 + list_for_each_entry_rcu(inst, &osnoise_instances, list, 151 + lockdep_is_held(&trace_types_lock)) { 153 152 if (inst->tr == tr) { 154 153 list_del_rcu(&inst->list); 155 154 found = 1;
+1 -2
kernel/trace/trace_output.c
··· 1535 1535 NULL 1536 1536 }; 1537 1537 1538 - __init static int init_events(void) 1538 + __init int init_events(void) 1539 1539 { 1540 1540 struct trace_event *event; 1541 1541 int i, ret; ··· 1548 1548 1549 1549 return 0; 1550 1550 } 1551 - early_initcall(init_events);
+12 -2
lib/Kconfig.debug
··· 754 754 select KALLSYMS 755 755 select CRC32 756 756 select STACKDEPOT 757 + select STACKDEPOT_ALWAYS_INIT if !DEBUG_KMEMLEAK_DEFAULT_OFF 757 758 help 758 759 Say Y here if you want to enable the memory leak 759 760 detector. The memory allocation/freeing is traced in a way ··· 1208 1207 depends on DEBUG_KERNEL && PROC_FS 1209 1208 default y 1210 1209 help 1211 - If you say Y here, the /proc/sched_debug file will be provided 1210 + If you say Y here, the /sys/kernel/debug/sched file will be provided 1212 1211 that can help debug the scheduler. The runtime overhead of this 1213 1212 option is minimal. 1214 1213 ··· 1918 1917 help 1919 1918 Add fault injections into various functions that are annotated with 1920 1919 ALLOW_ERROR_INJECTION() in the kernel. BPF may also modify the return 1921 - value of theses functions. This is useful to test error paths of code. 1920 + value of these functions. This is useful to test error paths of code. 1922 1921 1923 1922 If unsure, say N 1924 1923 ··· 2566 2565 to the KUnit documentation in Documentation/dev-tools/kunit/. 2567 2566 2568 2567 If unsure, say N. 2568 + 2569 + config MEMCPY_SLOW_KUNIT_TEST 2570 + bool "Include exhaustive memcpy tests" 2571 + depends on MEMCPY_KUNIT_TEST 2572 + default y 2573 + help 2574 + Some memcpy tests are quite exhaustive in checking for overlaps 2575 + and bit ranges. These can be very slow, so they are split out 2576 + as a separate config, in case they need to be disabled. 2569 2577 2570 2578 config IS_SIGNED_TYPE_KUNIT_TEST 2571 2579 tristate "Test is_signed_type() macro" if !KUNIT_ALL_TESTS
+1 -1
lib/Kconfig.kcsan
··· 194 194 Enable support for modeling a subset of weak memory, which allows 195 195 detecting a subset of data races due to missing memory barriers. 196 196 197 - Depends on KCSAN_STRICT, because the options strenghtening certain 197 + Depends on KCSAN_STRICT, because the options strengthening certain 198 198 plain accesses by default (depending on !KCSAN_STRICT) reduce the 199 199 ability to detect any data races invoving reordered accesses, in 200 200 particular reordered writes.
+31
lib/dec_and_lock.c
··· 49 49 return 0; 50 50 } 51 51 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave); 52 + 53 + int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) 54 + { 55 + /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ 56 + if (atomic_add_unless(atomic, -1, 1)) 57 + return 0; 58 + 59 + /* Otherwise do it the slow way */ 60 + raw_spin_lock(lock); 61 + if (atomic_dec_and_test(atomic)) 62 + return 1; 63 + raw_spin_unlock(lock); 64 + return 0; 65 + } 66 + EXPORT_SYMBOL(_atomic_dec_and_raw_lock); 67 + 68 + int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, 69 + unsigned long *flags) 70 + { 71 + /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ 72 + if (atomic_add_unless(atomic, -1, 1)) 73 + return 0; 74 + 75 + /* Otherwise do it the slow way */ 76 + raw_spin_lock_irqsave(lock, *flags); 77 + if (atomic_dec_and_test(atomic)) 78 + return 1; 79 + raw_spin_unlock_irqrestore(lock, *flags); 80 + return 0; 81 + } 82 + EXPORT_SYMBOL(_atomic_dec_and_raw_lock_irqsave);
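The new `_atomic_dec_and_raw_lock*()` helpers above implement the classic "decrement, and only take the lock if the count hits zero" idiom (used by the perf `put_pmu_ctx()` change earlier in this merge). A single-threaded userspace sketch of the same control flow is below; the `fake_lock` flag stands in for a raw spinlock and all names are illustrative.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy lock: a flag recording whether the "spinlock" is held. */
struct fake_lock { bool held; };

static void lock_take(struct fake_lock *l)    { l->held = true; }
static void lock_release(struct fake_lock *l) { l->held = false; }

/* Returns 1 (with the lock held) iff this call dropped the count
 * to zero; otherwise just decrements and returns 0. */
static int dec_and_lock(atomic_int *refs, struct fake_lock *l)
{
	int old = atomic_load(refs);

	/* Fast path: subtract 1 unless that would drop the count to 0
	 * (the atomic_add_unless(atomic, -1, 1) in the hunk above). */
	while (old > 1) {
		if (atomic_compare_exchange_weak(refs, &old, old - 1))
			return 0;        /* not the last reference */
	}

	/* Slow path: possibly the last reference; decide under the lock
	 * so teardown is serialized against lookups. */
	lock_take(l);
	if (atomic_fetch_sub(refs, 1) == 1)
		return 1;                /* count hit 0, caller owns the lock */
	lock_release(l);
	return 0;
}
```

The point of the fast path is that the common case (refcount well above 1) never touches the lock at all; only the final put pays for serialization.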
+25 -15
lib/kunit/assert.c
··· 241 241 mem_assert = container_of(assert, struct kunit_mem_assert, 242 242 assert); 243 243 244 - string_stream_add(stream, 245 - KUNIT_SUBTEST_INDENT "Expected %s %s %s, but\n", 246 - mem_assert->text->left_text, 247 - mem_assert->text->operation, 248 - mem_assert->text->right_text); 244 + if (!mem_assert->left_value) { 245 + string_stream_add(stream, 246 + KUNIT_SUBTEST_INDENT "Expected %s is not null, but is\n", 247 + mem_assert->text->left_text); 248 + } else if (!mem_assert->right_value) { 249 + string_stream_add(stream, 250 + KUNIT_SUBTEST_INDENT "Expected %s is not null, but is\n", 251 + mem_assert->text->right_text); 252 + } else { 253 + string_stream_add(stream, 254 + KUNIT_SUBTEST_INDENT "Expected %s %s %s, but\n", 255 + mem_assert->text->left_text, 256 + mem_assert->text->operation, 257 + mem_assert->text->right_text); 249 258 250 - string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s ==\n", 251 - mem_assert->text->left_text); 252 - kunit_assert_hexdump(stream, mem_assert->left_value, 253 - mem_assert->right_value, mem_assert->size); 259 + string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s ==\n", 260 + mem_assert->text->left_text); 261 + kunit_assert_hexdump(stream, mem_assert->left_value, 262 + mem_assert->right_value, mem_assert->size); 254 263 255 - string_stream_add(stream, "\n"); 264 + string_stream_add(stream, "\n"); 256 265 257 - string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s ==\n", 258 - mem_assert->text->right_text); 259 - kunit_assert_hexdump(stream, mem_assert->right_value, 260 - mem_assert->left_value, mem_assert->size); 266 + string_stream_add(stream, KUNIT_SUBSUBTEST_INDENT "%s ==\n", 267 + mem_assert->text->right_text); 268 + kunit_assert_hexdump(stream, mem_assert->right_value, 269 + mem_assert->left_value, mem_assert->size); 261 270 262 - kunit_assert_print_msg(message, stream); 271 + kunit_assert_print_msg(message, stream); 272 + } 263 273 } 264 274 EXPORT_SYMBOL_GPL(kunit_mem_assert_format);
+1
lib/kunit/test.c
··· 21 21 #include "try-catch-impl.h" 22 22 23 23 DEFINE_STATIC_KEY_FALSE(kunit_running); 24 + EXPORT_SYMBOL_GPL(kunit_running); 24 25 25 26 #if IS_BUILTIN(CONFIG_KUNIT) 26 27 /*
+11 -11
lib/maple_tree.c
··· 670 670 unsigned char piv) 671 671 { 672 672 struct maple_node *node = mte_to_node(mn); 673 + enum maple_type type = mte_node_type(mn); 673 674 674 - if (piv >= mt_pivots[piv]) { 675 + if (piv >= mt_pivots[type]) { 675 676 WARN_ON(1); 676 677 return 0; 677 678 } 678 - switch (mte_node_type(mn)) { 679 + switch (type) { 679 680 case maple_arange_64: 680 681 return node->ma64.pivot[piv]; 681 682 case maple_range_64: ··· 4888 4887 unsigned long *pivots, *gaps; 4889 4888 void __rcu **slots; 4890 4889 unsigned long gap = 0; 4891 - unsigned long max, min, index; 4890 + unsigned long max, min; 4892 4891 unsigned char offset; 4893 4892 4894 4893 if (unlikely(mas_is_err(mas))) ··· 4910 4909 min = mas_safe_min(mas, pivots, --offset); 4911 4910 4912 4911 max = mas_safe_pivot(mas, pivots, offset, type); 4913 - index = mas->index; 4914 - while (index <= max) { 4912 + while (mas->index <= max) { 4915 4913 gap = 0; 4916 4914 if (gaps) 4917 4915 gap = gaps[offset]; ··· 4941 4941 min = mas_safe_min(mas, pivots, offset); 4942 4942 } 4943 4943 4944 - if (unlikely(index > max)) { 4945 - mas_set_err(mas, -EBUSY); 4946 - return false; 4947 - } 4944 + if (unlikely((mas->index > max) || (size - 1 > max - mas->index))) 4945 + goto no_space; 4948 4946 4949 4947 if (unlikely(ma_is_leaf(type))) { 4950 4948 mas->offset = offset; ··· 4959 4961 return false; 4960 4962 4961 4963 ascend: 4962 - if (mte_is_root(mas->node)) 4963 - mas_set_err(mas, -EBUSY); 4964 + if (!mte_is_root(mas->node)) 4965 + return false; 4964 4966 4967 + no_space: 4968 + mas_set_err(mas, -EBUSY); 4965 4969 return false; 4966 4970 } 4967 4971
+2
lib/memcpy_kunit.c
··· 309 309 310 310 static void init_large(struct kunit *test) 311 311 { 312 + if (!IS_ENABLED(CONFIG_MEMCPY_SLOW_KUNIT_TEST)) 313 + kunit_skip(test, "Slow test skipped. Enable with CONFIG_MEMCPY_SLOW_KUNIT_TEST=y"); 312 314 313 315 /* Get many bit patterns. */ 314 316 get_random_bytes(large_src, ARRAY_SIZE(large_src));
+3
lib/nlattr.c
··· 10 10 #include <linux/kernel.h> 11 11 #include <linux/errno.h> 12 12 #include <linux/jiffies.h> 13 + #include <linux/nospec.h> 13 14 #include <linux/skbuff.h> 14 15 #include <linux/string.h> 15 16 #include <linux/types.h> ··· 382 381 if (type <= 0 || type > maxtype) 383 382 return 0; 384 383 384 + type = array_index_nospec(type, maxtype + 1); 385 385 pt = &policy[type]; 386 386 387 387 BUG_ON(pt->type > NLA_TYPE_MAX); ··· 598 596 } 599 597 continue; 600 598 } 599 + type = array_index_nospec(type, maxtype + 1); 601 600 if (policy) { 602 601 int err = validate_nla(nla, maxtype, policy, 603 602 validate, extack, depth);
+89
lib/test_maple_tree.c
··· 2517 2517 mt_set_non_kernel(0); 2518 2518 } 2519 2519 2520 + static noinline void check_empty_area_window(struct maple_tree *mt) 2521 + { 2522 + unsigned long i, nr_entries = 20; 2523 + MA_STATE(mas, mt, 0, 0); 2524 + 2525 + for (i = 1; i <= nr_entries; i++) 2526 + mtree_store_range(mt, i*10, i*10 + 9, 2527 + xa_mk_value(i), GFP_KERNEL); 2528 + 2529 + /* Create another hole besides the one at 0 */ 2530 + mtree_store_range(mt, 160, 169, NULL, GFP_KERNEL); 2531 + 2532 + /* Check lower bounds that don't fit */ 2533 + rcu_read_lock(); 2534 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 10) != -EBUSY); 2535 + 2536 + mas_reset(&mas); 2537 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 6, 90, 5) != -EBUSY); 2538 + 2539 + /* Check lower bound that does fit */ 2540 + mas_reset(&mas); 2541 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 5) != 0); 2542 + MT_BUG_ON(mt, mas.index != 5); 2543 + MT_BUG_ON(mt, mas.last != 9); 2544 + rcu_read_unlock(); 2545 + 2546 + /* Check one gap that doesn't fit and one that does */ 2547 + rcu_read_lock(); 2548 + mas_reset(&mas); 2549 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 217, 9) != 0); 2550 + MT_BUG_ON(mt, mas.index != 161); 2551 + MT_BUG_ON(mt, mas.last != 169); 2552 + 2553 + /* Check one gap that does fit above the min */ 2554 + mas_reset(&mas); 2555 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 3) != 0); 2556 + MT_BUG_ON(mt, mas.index != 216); 2557 + MT_BUG_ON(mt, mas.last != 218); 2558 + 2559 + /* Check size that doesn't fit any gap */ 2560 + mas_reset(&mas); 2561 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 16) != -EBUSY); 2562 + 2563 + /* 2564 + * Check size that doesn't fit the lower end of the window but 2565 + * does fit the gap 2566 + */ 2567 + mas_reset(&mas); 2568 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 167, 200, 4) != -EBUSY); 2569 + 2570 + /* 2571 + * Check size that doesn't fit the upper end of the window but 2572 + * does fit the gap 2573 + */ 2574 + mas_reset(&mas); 2575 + MT_BUG_ON(mt, 
mas_empty_area_rev(&mas, 100, 162, 4) != -EBUSY); 2576 + 2577 + /* Check mas_empty_area forward */ 2578 + mas_reset(&mas); 2579 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 9) != 0); 2580 + MT_BUG_ON(mt, mas.index != 0); 2581 + MT_BUG_ON(mt, mas.last != 8); 2582 + 2583 + mas_reset(&mas); 2584 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 4) != 0); 2585 + MT_BUG_ON(mt, mas.index != 0); 2586 + MT_BUG_ON(mt, mas.last != 3); 2587 + 2588 + mas_reset(&mas); 2589 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 11) != -EBUSY); 2590 + 2591 + mas_reset(&mas); 2592 + MT_BUG_ON(mt, mas_empty_area(&mas, 5, 100, 6) != -EBUSY); 2593 + 2594 + mas_reset(&mas); 2595 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 8, 10) != -EBUSY); 2596 + 2597 + mas_reset(&mas); 2598 + mas_empty_area(&mas, 100, 165, 3); 2599 + 2600 + mas_reset(&mas); 2601 + MT_BUG_ON(mt, mas_empty_area(&mas, 100, 163, 6) != -EBUSY); 2602 + rcu_read_unlock(); 2603 + } 2604 + 2520 2605 static DEFINE_MTREE(tree); 2521 2606 static int maple_tree_seed(void) 2522 2607 { ··· 2848 2763 2849 2764 mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); 2850 2765 check_bnode_min_spanning(&tree); 2766 + mtree_destroy(&tree); 2767 + 2768 + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); 2769 + check_empty_area_window(&tree); 2851 2770 mtree_destroy(&tree); 2852 2771 2853 2772 #if defined(BENCH)
+1
mm/compaction.c
··· 1839 1839 pfn = cc->zone->zone_start_pfn; 1840 1840 cc->fast_search_fail = 0; 1841 1841 found_block = true; 1842 + set_pageblock_skip(freepage); 1842 1843 break; 1843 1844 } 1844 1845 }
+3
mm/hugetlb.c
··· 5051 5051 entry = huge_pte_clear_uffd_wp(entry); 5052 5052 set_huge_pte_at(dst, addr, dst_pte, entry); 5053 5053 } else if (unlikely(is_pte_marker(entry))) { 5054 + /* No swap on hugetlb */ 5055 + WARN_ON_ONCE( 5056 + is_swapin_error_entry(pte_to_swp_entry(entry))); 5054 5057 /* 5055 5058 * We copy the pte marker only if the dst vma has 5056 5059 * uffd-wp enabled.
+21 -1
mm/khugepaged.c
··· 847 847 return SCAN_SUCCEED; 848 848 } 849 849 850 + /* 851 + * See pmd_trans_unstable() for how the result may change out from 852 + * underneath us, even if we hold mmap_lock in read. 853 + */ 850 854 static int find_pmd_or_thp_or_none(struct mm_struct *mm, 851 855 unsigned long address, 852 856 pmd_t **pmd) ··· 869 865 #endif 870 866 if (pmd_none(pmde)) 871 867 return SCAN_PMD_NONE; 868 + if (!pmd_present(pmde)) 869 + return SCAN_PMD_NULL; 872 870 if (pmd_trans_huge(pmde)) 873 871 return SCAN_PMD_MAPPED; 872 + if (pmd_devmap(pmde)) 873 + return SCAN_PMD_NULL; 874 874 if (pmd_bad(pmde)) 875 875 return SCAN_PMD_NULL; 876 876 return SCAN_SUCCEED; ··· 1650 1642 * has higher cost too. It would also probably require locking 1651 1643 * the anon_vma. 1652 1644 */ 1653 - if (vma->anon_vma) { 1645 + if (READ_ONCE(vma->anon_vma)) { 1654 1646 result = SCAN_PAGE_ANON; 1655 1647 goto next; 1656 1648 } ··· 1678 1670 result = SCAN_PTE_MAPPED_HUGEPAGE; 1679 1671 if ((cc->is_khugepaged || is_target) && 1680 1672 mmap_write_trylock(mm)) { 1673 + /* 1674 + * Re-check whether we have an ->anon_vma, because 1675 + * collapse_and_free_pmd() requires that either no 1676 + * ->anon_vma exists or the anon_vma is locked. 1677 + * We already checked ->anon_vma above, but that check 1678 + * is racy because ->anon_vma can be populated under the 1679 + * mmap lock in read mode. 1680 + */ 1681 + if (vma->anon_vma) { 1682 + result = SCAN_PAGE_ANON; 1683 + goto unlock_next; 1684 + } 1681 1685 /* 1682 1686 * When a vma is registered with uffd-wp, we can't 1683 1687 * recycle the pmd pgtable because there can be pte
+3 -2
mm/kmemleak.c
··· 2070 2070 return -EINVAL; 2071 2071 if (strcmp(str, "off") == 0) 2072 2072 kmemleak_disable(); 2073 - else if (strcmp(str, "on") == 0) 2073 + else if (strcmp(str, "on") == 0) { 2074 2074 kmemleak_skip_disable = 1; 2075 + stack_depot_want_early_init(); 2076 + } 2075 2077 else 2076 2078 return -EINVAL; 2077 2079 return 0; ··· 2095 2093 if (kmemleak_error) 2096 2094 return; 2097 2095 2098 - stack_depot_init(); 2099 2096 jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE); 2100 2097 jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000); 2101 2098
+13 -54
mm/memcontrol.c
··· 63 63 #include <linux/resume_user_mode.h> 64 64 #include <linux/psi.h> 65 65 #include <linux/seq_buf.h> 66 - #include <linux/parser.h> 67 66 #include "internal.h" 68 67 #include <net/sock.h> 69 68 #include <net/ip.h> ··· 2392 2393 psi_memstall_enter(&pflags); 2393 2394 nr_reclaimed += try_to_free_mem_cgroup_pages(memcg, nr_pages, 2394 2395 gfp_mask, 2395 - MEMCG_RECLAIM_MAY_SWAP, 2396 - NULL); 2396 + MEMCG_RECLAIM_MAY_SWAP); 2397 2397 psi_memstall_leave(&pflags); 2398 2398 } while ((memcg = parent_mem_cgroup(memcg)) && 2399 2399 !mem_cgroup_is_root(memcg)); ··· 2683 2685 2684 2686 psi_memstall_enter(&pflags); 2685 2687 nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages, 2686 - gfp_mask, reclaim_options, 2687 - NULL); 2688 + gfp_mask, reclaim_options); 2688 2689 psi_memstall_leave(&pflags); 2689 2690 2690 2691 if (mem_cgroup_margin(mem_over_limit) >= nr_pages) ··· 3503 3506 } 3504 3507 3505 3508 if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, 3506 - memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP, 3507 - NULL)) { 3509 + memsw ? 
0 : MEMCG_RECLAIM_MAY_SWAP)) { 3508 3510 ret = -EBUSY; 3509 3511 break; 3510 3512 } ··· 3614 3618 return -EINTR; 3615 3619 3616 3620 if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, 3617 - MEMCG_RECLAIM_MAY_SWAP, 3618 - NULL)) 3621 + MEMCG_RECLAIM_MAY_SWAP)) 3619 3622 nr_retries--; 3620 3623 } 3621 3624 ··· 6424 6429 } 6425 6430 6426 6431 reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high, 6427 - GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, 6428 - NULL); 6432 + GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP); 6429 6433 6430 6434 if (!reclaimed && !nr_retries--) 6431 6435 break; ··· 6473 6479 6474 6480 if (nr_reclaims) { 6475 6481 if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max, 6476 - GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, 6477 - NULL)) 6482 + GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP)) 6478 6483 nr_reclaims--; 6479 6484 continue; 6480 6485 } ··· 6596 6603 return nbytes; 6597 6604 } 6598 6605 6599 - enum { 6600 - MEMORY_RECLAIM_NODES = 0, 6601 - MEMORY_RECLAIM_NULL, 6602 - }; 6603 - 6604 - static const match_table_t if_tokens = { 6605 - { MEMORY_RECLAIM_NODES, "nodes=%s" }, 6606 - { MEMORY_RECLAIM_NULL, NULL }, 6607 - }; 6608 - 6609 6606 static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf, 6610 6607 size_t nbytes, loff_t off) 6611 6608 { 6612 6609 struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); 6613 6610 unsigned int nr_retries = MAX_RECLAIM_RETRIES; 6614 6611 unsigned long nr_to_reclaim, nr_reclaimed = 0; 6615 - unsigned int reclaim_options = MEMCG_RECLAIM_MAY_SWAP | 6616 - MEMCG_RECLAIM_PROACTIVE; 6617 - char *old_buf, *start; 6618 - substring_t args[MAX_OPT_ARGS]; 6619 - int token; 6620 - char value[256]; 6621 - nodemask_t nodemask = NODE_MASK_ALL; 6612 + unsigned int reclaim_options; 6613 + int err; 6622 6614 6623 6615 buf = strstrip(buf); 6616 + err = page_counter_memparse(buf, "", &nr_to_reclaim); 6617 + if (err) 6618 + return err; 6624 6619 6625 - old_buf = buf; 6626 - nr_to_reclaim = memparse(buf, &buf) / PAGE_SIZE; 6627 - if (buf 
== old_buf) 6628 - return -EINVAL; 6629 - 6630 - buf = strstrip(buf); 6631 - 6632 - while ((start = strsep(&buf, " ")) != NULL) { 6633 - if (!strlen(start)) 6634 - continue; 6635 - token = match_token(start, if_tokens, args); 6636 - match_strlcpy(value, args, sizeof(value)); 6637 - switch (token) { 6638 - case MEMORY_RECLAIM_NODES: 6639 - if (nodelist_parse(value, nodemask) < 0) 6640 - return -EINVAL; 6641 - break; 6642 - default: 6643 - return -EINVAL; 6644 - } 6645 - } 6646 - 6620 + reclaim_options = MEMCG_RECLAIM_MAY_SWAP | MEMCG_RECLAIM_PROACTIVE; 6647 6621 while (nr_reclaimed < nr_to_reclaim) { 6648 6622 unsigned long reclaimed; 6649 6623 ··· 6627 6667 6628 6668 reclaimed = try_to_free_mem_cgroup_pages(memcg, 6629 6669 nr_to_reclaim - nr_reclaimed, 6630 - GFP_KERNEL, reclaim_options, 6631 - &nodemask); 6670 + GFP_KERNEL, reclaim_options); 6632 6671 6633 6672 if (!reclaimed && !nr_retries--) 6634 6673 return -EAGAIN;
+7 -7
mm/memory.c
··· 828 828 return -EBUSY; 829 829 return -ENOENT; 830 830 } else if (is_pte_marker_entry(entry)) { 831 - /* 832 - * We're copying the pgtable should only because dst_vma has 833 - * uffd-wp enabled, do sanity check. 834 - */ 835 - WARN_ON_ONCE(!userfaultfd_wp(dst_vma)); 836 - set_pte_at(dst_mm, addr, dst_pte, pte); 831 + if (is_swapin_error_entry(entry) || userfaultfd_wp(dst_vma)) 832 + set_pte_at(dst_mm, addr, dst_pte, pte); 837 833 return 0; 838 834 } 839 835 if (!userfaultfd_wp(dst_vma)) ··· 3625 3629 /* 3626 3630 * Be careful so that we will only recover a special uffd-wp pte into a 3627 3631 * none pte. Otherwise it means the pte could have changed, so retry. 3632 + * 3633 + * This should also cover the case where e.g. the pte changed 3634 + * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_SWAPIN_ERROR. 3635 + * So is_pte_marker() check is not enough to safely drop the pte. 3628 3636 */ 3629 - if (is_pte_marker(*vmf->pte)) 3637 + if (pte_same(vmf->orig_pte, *vmf->pte)) 3630 3638 pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte); 3631 3639 pte_unmap_unlock(vmf->pte, vmf->ptl); 3632 3640 return 0;
+2 -1
mm/mempolicy.c
··· 600 600 601 601 /* With MPOL_MF_MOVE, we migrate only unshared hugepage. */ 602 602 if (flags & (MPOL_MF_MOVE_ALL) || 603 - (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) { 603 + (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 && 604 + !hugetlb_pmd_shared(pte))) { 604 605 if (isolate_hugetlb(page, qp->pagelist) && 605 606 (flags & MPOL_MF_STRICT)) 606 607 /*
+7 -1
mm/mprotect.c
··· 245 245 newpte = pte_swp_mksoft_dirty(newpte); 246 246 if (pte_swp_uffd_wp(oldpte)) 247 247 newpte = pte_swp_mkuffd_wp(newpte); 248 - } else if (pte_marker_entry_uffd_wp(entry)) { 248 + } else if (is_pte_marker_entry(entry)) { 249 + /* 250 + * Ignore swapin errors unconditionally, 251 + * because any access should sigbus anyway. 252 + */ 253 + if (is_swapin_error_entry(entry)) 254 + continue; 249 255 /* 250 256 * If this is uffd-wp pte marker and we'd like 251 257 * to unprotect it, drop it; the next page
+19 -6
mm/mremap.c
··· 1027 1027 } 1028 1028 1029 1029 /* 1030 - * Function vma_merge() is called on the extension we are adding to 1031 - * the already existing vma, vma_merge() will merge this extension with 1032 - * the already existing vma (expand operation itself) and possibly also 1033 - * with the next vma if it becomes adjacent to the expanded vma and 1034 - * otherwise compatible. 1030 + * Function vma_merge() is called on the extension we 1031 + * are adding to the already existing vma, vma_merge() 1032 + * will merge this extension with the already existing 1033 + * vma (expand operation itself) and possibly also with 1034 + * the next vma if it becomes adjacent to the expanded 1035 + * vma and otherwise compatible. 1036 + * 1037 + * However, vma_merge() can currently fail due to 1038 + * is_mergeable_vma() check for vm_ops->close (see the 1039 + * comment there). Yet this should not prevent vma 1040 + * expanding, so perform a simple expand for such vma. 1041 + * Ideally the check for close op should be only done 1042 + * when a vma would be actually removed due to a merge. 1035 1043 */ 1036 - vma = vma_merge(mm, vma, extension_start, extension_end, 1044 + if (!vma->vm_ops || !vma->vm_ops->close) { 1045 + vma = vma_merge(mm, vma, extension_start, extension_end, 1037 1046 vma->vm_flags, vma->anon_vma, vma->vm_file, 1038 1047 extension_pgoff, vma_policy(vma), 1039 1048 vma->vm_userfaultfd_ctx, anon_vma_name(vma)); 1049 + } else if (vma_adjust(vma, vma->vm_start, addr + new_len, 1050 + vma->vm_pgoff, NULL)) { 1051 + vma = NULL; 1052 + } 1040 1053 if (!vma) { 1041 1054 vm_unacct_memory(pages); 1042 1055 ret = -ENOMEM;
+1
mm/swapfile.c
··· 1100 1100 goto check_out; 1101 1101 pr_debug("scan_swap_map of si %d failed to find offset\n", 1102 1102 si->type); 1103 + cond_resched(); 1103 1104 1104 1105 spin_lock(&swap_avail_lock); 1105 1106 nextsi:
+5 -4
mm/vmscan.c
··· 3323 3323 if (mem_cgroup_disabled()) 3324 3324 return; 3325 3325 3326 + /* migration can happen before addition */ 3327 + if (!mm->lru_gen.memcg) 3328 + return; 3329 + 3326 3330 rcu_read_lock(); 3327 3331 memcg = mem_cgroup_from_task(task); 3328 3332 rcu_read_unlock(); 3329 3333 if (memcg == mm->lru_gen.memcg) 3330 3334 return; 3331 3335 3332 - VM_WARN_ON_ONCE(!mm->lru_gen.memcg); 3333 3336 VM_WARN_ON_ONCE(list_empty(&mm->lru_gen.list)); 3334 3337 3335 3338 lru_gen_del_mm(mm); ··· 6757 6754 unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, 6758 6755 unsigned long nr_pages, 6759 6756 gfp_t gfp_mask, 6760 - unsigned int reclaim_options, 6761 - nodemask_t *nodemask) 6757 + unsigned int reclaim_options) 6762 6758 { 6763 6759 unsigned long nr_reclaimed; 6764 6760 unsigned int noreclaim_flag; ··· 6772 6770 .may_unmap = 1, 6773 6771 .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP), 6774 6772 .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE), 6775 - .nodemask = nodemask, 6776 6773 }; 6777 6774 /* 6778 6775 * Traverse the ZONELIST_FALLBACK zonelist of the current node to put
+205 -32
mm/zsmalloc.c
··· 113 113 * have room for two bit at least. 114 114 */ 115 115 #define OBJ_ALLOCATED_TAG 1 116 - #define OBJ_TAG_BITS 1 116 + 117 + #ifdef CONFIG_ZPOOL 118 + /* 119 + * The second least-significant bit in the object's header identifies if the 120 + * value stored at the header is a deferred handle from the last reclaim 121 + * attempt. 122 + * 123 + * As noted above, this is valid because we have room for two bits. 124 + */ 125 + #define OBJ_DEFERRED_HANDLE_TAG 2 126 + #define OBJ_TAG_BITS 2 127 + #define OBJ_TAG_MASK (OBJ_ALLOCATED_TAG | OBJ_DEFERRED_HANDLE_TAG) 128 + #else 129 + #define OBJ_TAG_BITS 1 130 + #define OBJ_TAG_MASK OBJ_ALLOCATED_TAG 131 + #endif /* CONFIG_ZPOOL */ 132 + 117 133 #define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS) 118 134 #define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1) 119 135 ··· 238 222 * Handle of allocated object. 239 223 */ 240 224 unsigned long handle; 225 + #ifdef CONFIG_ZPOOL 226 + /* 227 + * Deferred handle of a reclaimed object. 
228 + */ 229 + unsigned long deferred_handle; 230 + #endif 241 231 }; 242 232 }; 243 233 ··· 294 272 /* links the zspage to the lru list in the pool */ 295 273 struct list_head lru; 296 274 bool under_reclaim; 297 - /* list of unfreed handles whose objects have been reclaimed */ 298 - unsigned long *deferred_handles; 299 275 #endif 300 276 301 277 struct zs_pool *pool; ··· 917 897 return *(unsigned long *)handle; 918 898 } 919 899 920 - static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) 900 + static bool obj_tagged(struct page *page, void *obj, unsigned long *phandle, 901 + int tag) 921 902 { 922 903 unsigned long handle; 923 904 struct zspage *zspage = get_zspage(page); ··· 929 908 } else 930 909 handle = *(unsigned long *)obj; 931 910 932 - if (!(handle & OBJ_ALLOCATED_TAG)) 911 + if (!(handle & tag)) 933 912 return false; 934 913 935 - *phandle = handle & ~OBJ_ALLOCATED_TAG; 914 + /* Clear all tags before returning the handle */ 915 + *phandle = handle & ~OBJ_TAG_MASK; 936 916 return true; 937 917 } 918 + 919 + static inline bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) 920 + { 921 + return obj_tagged(page, obj, phandle, OBJ_ALLOCATED_TAG); 922 + } 923 + 924 + #ifdef CONFIG_ZPOOL 925 + static bool obj_stores_deferred_handle(struct page *page, void *obj, 926 + unsigned long *phandle) 927 + { 928 + return obj_tagged(page, obj, phandle, OBJ_DEFERRED_HANDLE_TAG); 929 + } 930 + #endif 938 931 939 932 static void reset_page(struct page *page) 940 933 { ··· 981 946 } 982 947 983 948 #ifdef CONFIG_ZPOOL 949 + static unsigned long find_deferred_handle_obj(struct size_class *class, 950 + struct page *page, int *obj_idx); 951 + 984 952 /* 985 953 * Free all the deferred handles whose objects are freed in zs_free. 
986 954 */ 987 - static void free_handles(struct zs_pool *pool, struct zspage *zspage) 955 + static void free_handles(struct zs_pool *pool, struct size_class *class, 956 + struct zspage *zspage) 988 957 { 989 - unsigned long handle = (unsigned long)zspage->deferred_handles; 958 + int obj_idx = 0; 959 + struct page *page = get_first_page(zspage); 960 + unsigned long handle; 990 961 991 - while (handle) { 992 - unsigned long nxt_handle = handle_to_obj(handle); 962 + while (1) { 963 + handle = find_deferred_handle_obj(class, page, &obj_idx); 964 + if (!handle) { 965 + page = get_next_page(page); 966 + if (!page) 967 + break; 968 + obj_idx = 0; 969 + continue; 970 + } 993 971 994 972 cache_free_handle(pool, handle); 995 - handle = nxt_handle; 973 + obj_idx++; 996 974 } 997 975 } 998 976 #else 999 - static inline void free_handles(struct zs_pool *pool, struct zspage *zspage) {} 977 + static inline void free_handles(struct zs_pool *pool, struct size_class *class, 978 + struct zspage *zspage) {} 1000 979 #endif 1001 980 1002 981 static void __free_zspage(struct zs_pool *pool, struct size_class *class, ··· 1028 979 VM_BUG_ON(fg != ZS_EMPTY); 1029 980 1030 981 /* Free all deferred handles from zs_free */ 1031 - free_handles(pool, zspage); 982 + free_handles(pool, class, zspage); 1032 983 1033 984 next = page = get_first_page(zspage); 1034 985 do { ··· 1116 1067 #ifdef CONFIG_ZPOOL 1117 1068 INIT_LIST_HEAD(&zspage->lru); 1118 1069 zspage->under_reclaim = false; 1119 - zspage->deferred_handles = NULL; 1120 1070 #endif 1121 1071 1122 1072 set_freeobj(zspage, 0); ··· 1616 1568 } 1617 1569 EXPORT_SYMBOL_GPL(zs_malloc); 1618 1570 1619 - static void obj_free(int class_size, unsigned long obj) 1571 + static void obj_free(int class_size, unsigned long obj, unsigned long *handle) 1620 1572 { 1621 1573 struct link_free *link; 1622 1574 struct zspage *zspage; ··· 1630 1582 zspage = get_zspage(f_page); 1631 1583 1632 1584 vaddr = kmap_atomic(f_page); 1633 - 1634 - /* Insert this object 
in containing zspage's freelist */ 1635 1585 link = (struct link_free *)(vaddr + f_offset); 1636 - if (likely(!ZsHugePage(zspage))) 1637 - link->next = get_freeobj(zspage) << OBJ_TAG_BITS; 1638 - else 1639 - f_page->index = 0; 1586 + 1587 + if (handle) { 1588 + #ifdef CONFIG_ZPOOL 1589 + /* Stores the (deferred) handle in the object's header */ 1590 + *handle |= OBJ_DEFERRED_HANDLE_TAG; 1591 + *handle &= ~OBJ_ALLOCATED_TAG; 1592 + 1593 + if (likely(!ZsHugePage(zspage))) 1594 + link->deferred_handle = *handle; 1595 + else 1596 + f_page->index = *handle; 1597 + #endif 1598 + } else { 1599 + /* Insert this object in containing zspage's freelist */ 1600 + if (likely(!ZsHugePage(zspage))) 1601 + link->next = get_freeobj(zspage) << OBJ_TAG_BITS; 1602 + else 1603 + f_page->index = 0; 1604 + set_freeobj(zspage, f_objidx); 1605 + } 1606 + 1640 1607 kunmap_atomic(vaddr); 1641 - set_freeobj(zspage, f_objidx); 1642 1608 mod_zspage_inuse(zspage, -1); 1643 1609 } 1644 1610 ··· 1677 1615 zspage = get_zspage(f_page); 1678 1616 class = zspage_class(pool, zspage); 1679 1617 1680 - obj_free(class->size, obj); 1681 1618 class_stat_dec(class, OBJ_USED, 1); 1682 1619 1683 1620 #ifdef CONFIG_ZPOOL ··· 1685 1624 * Reclaim needs the handles during writeback. It'll free 1686 1625 * them along with the zspage when it's done with them. 1687 1626 * 1688 - * Record current deferred handle at the memory location 1689 - * whose address is given by handle. 1627 + * Record current deferred handle in the object's header. 
1690 1628 */ 1691 - record_obj(handle, (unsigned long)zspage->deferred_handles); 1692 - zspage->deferred_handles = (unsigned long *)handle; 1629 + obj_free(class->size, obj, &handle); 1693 1630 spin_unlock(&pool->lock); 1694 1631 return; 1695 1632 } 1696 1633 #endif 1634 + obj_free(class->size, obj, NULL); 1635 + 1697 1636 fullness = fix_fullness_group(class, zspage); 1698 1637 if (fullness == ZS_EMPTY) 1699 1638 free_zspage(pool, class, zspage); ··· 1774 1713 } 1775 1714 1776 1715 /* 1777 - * Find alloced object in zspage from index object and 1716 + * Find object with a certain tag in zspage from index object and 1778 1717 * return handle. 1779 1718 */ 1780 - static unsigned long find_alloced_obj(struct size_class *class, 1781 - struct page *page, int *obj_idx) 1719 + static unsigned long find_tagged_obj(struct size_class *class, 1720 + struct page *page, int *obj_idx, int tag) 1782 1721 { 1783 1722 unsigned int offset; 1784 1723 int index = *obj_idx; ··· 1789 1728 offset += class->size * index; 1790 1729 1791 1730 while (offset < PAGE_SIZE) { 1792 - if (obj_allocated(page, addr + offset, &handle)) 1731 + if (obj_tagged(page, addr + offset, &handle, tag)) 1793 1732 break; 1794 1733 1795 1734 offset += class->size; ··· 1802 1741 1803 1742 return handle; 1804 1743 } 1744 + 1745 + /* 1746 + * Find alloced object in zspage from index object and 1747 + * return handle. 1748 + */ 1749 + static unsigned long find_alloced_obj(struct size_class *class, 1750 + struct page *page, int *obj_idx) 1751 + { 1752 + return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG); 1753 + } 1754 + 1755 + #ifdef CONFIG_ZPOOL 1756 + /* 1757 + * Find object storing a deferred handle in header in zspage from index object 1758 + * and return handle. 
1759 + */ 1760 + static unsigned long find_deferred_handle_obj(struct size_class *class, 1761 + struct page *page, int *obj_idx) 1762 + { 1763 + return find_tagged_obj(class, page, obj_idx, OBJ_DEFERRED_HANDLE_TAG); 1764 + } 1765 + #endif 1805 1766 1806 1767 struct zs_compact_control { 1807 1768 /* Source spage for migration which could be a subpage of zspage */ ··· 1867 1784 zs_object_copy(class, free_obj, used_obj); 1868 1785 obj_idx++; 1869 1786 record_obj(handle, free_obj); 1870 - obj_free(class->size, used_obj); 1787 + obj_free(class->size, used_obj, NULL); 1871 1788 } 1872 1789 1873 1790 /* Remember last position in this iteration */ ··· 2561 2478 EXPORT_SYMBOL_GPL(zs_destroy_pool); 2562 2479 2563 2480 #ifdef CONFIG_ZPOOL 2481 + static void restore_freelist(struct zs_pool *pool, struct size_class *class, 2482 + struct zspage *zspage) 2483 + { 2484 + unsigned int obj_idx = 0; 2485 + unsigned long handle, off = 0; /* off is within-page offset */ 2486 + struct page *page = get_first_page(zspage); 2487 + struct link_free *prev_free = NULL; 2488 + void *prev_page_vaddr = NULL; 2489 + 2490 + /* in case no free object found */ 2491 + set_freeobj(zspage, (unsigned int)(-1UL)); 2492 + 2493 + while (page) { 2494 + void *vaddr = kmap_atomic(page); 2495 + struct page *next_page; 2496 + 2497 + while (off < PAGE_SIZE) { 2498 + void *obj_addr = vaddr + off; 2499 + 2500 + /* skip allocated object */ 2501 + if (obj_allocated(page, obj_addr, &handle)) { 2502 + obj_idx++; 2503 + off += class->size; 2504 + continue; 2505 + } 2506 + 2507 + /* free deferred handle from reclaim attempt */ 2508 + if (obj_stores_deferred_handle(page, obj_addr, &handle)) 2509 + cache_free_handle(pool, handle); 2510 + 2511 + if (prev_free) 2512 + prev_free->next = obj_idx << OBJ_TAG_BITS; 2513 + else /* first free object found */ 2514 + set_freeobj(zspage, obj_idx); 2515 + 2516 + prev_free = (struct link_free *)vaddr + off / sizeof(*prev_free); 2517 + /* if last free object in a previous page, need to 
unmap */ 2518 + if (prev_page_vaddr) { 2519 + kunmap_atomic(prev_page_vaddr); 2520 + prev_page_vaddr = NULL; 2521 + } 2522 + 2523 + obj_idx++; 2524 + off += class->size; 2525 + } 2526 + 2527 + /* 2528 + * Handle the last (full or partial) object on this page. 2529 + */ 2530 + next_page = get_next_page(page); 2531 + if (next_page) { 2532 + if (!prev_free || prev_page_vaddr) { 2533 + /* 2534 + * There is no free object in this page, so we can safely 2535 + * unmap it. 2536 + */ 2537 + kunmap_atomic(vaddr); 2538 + } else { 2539 + /* update prev_page_vaddr since prev_free is on this page */ 2540 + prev_page_vaddr = vaddr; 2541 + } 2542 + } else { /* this is the last page */ 2543 + if (prev_free) { 2544 + /* 2545 + * Reset OBJ_TAG_BITS bit to last link to tell 2546 + * whether it's allocated object or not. 2547 + */ 2548 + prev_free->next = -1UL << OBJ_TAG_BITS; 2549 + } 2550 + 2551 + /* unmap previous page (if not done yet) */ 2552 + if (prev_page_vaddr) { 2553 + kunmap_atomic(prev_page_vaddr); 2554 + prev_page_vaddr = NULL; 2555 + } 2556 + 2557 + kunmap_atomic(vaddr); 2558 + } 2559 + 2560 + page = next_page; 2561 + off %= PAGE_SIZE; 2562 + } 2563 + } 2564 + 2564 2565 static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries) 2565 2566 { 2566 2567 int i, obj_idx, ret = 0; ··· 2728 2561 return 0; 2729 2562 } 2730 2563 2564 + /* 2565 + * Eviction fails on one of the handles, so we need to restore zspage. 2566 + * We need to rebuild its freelist (and free stored deferred handles), 2567 + * put it back to the correct size class, and add it to the LRU list. 2568 + */ 2569 + restore_freelist(pool, class, zspage); 2731 2570 putback_zspage(class, zspage); 2732 2571 list_add(&zspage->lru, &pool->lru); 2733 2572 unlock_zspage(zspage);
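The zsmalloc rework above widens the object-header tag field from one bit to two under CONFIG_ZPOOL, so a header word can be marked either as an allocated handle or as a deferred handle from reclaim, with obj_tagged() stripping all tag bits before handing the value back. The underlying trick, stashing flags in the low bits of an aligned value and masking them off before use, can be sketched in plain C (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Two low bits are free whenever stored values are >= 4-byte aligned. */
#define TAG_ALLOCATED	0x1UL
#define TAG_DEFERRED	0x2UL
#define TAG_MASK	(TAG_ALLOCATED | TAG_DEFERRED)

static inline uintptr_t tag_set(uintptr_t v, uintptr_t tag)
{
	return v | tag;
}

static inline int tag_test(uintptr_t v, uintptr_t tag)
{
	return (v & tag) != 0;
}

/* Strip all tag bits before using the stored value, as obj_tagged() does. */
static inline uintptr_t tag_clear(uintptr_t v)
{
	return v & ~TAG_MASK;
}
```

This is also why the comment in the diff stresses "we have room for two bits": the scheme only works while the alignment of the stored value guarantees the tagged bits are otherwise zero.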
+1
net/bridge/br_netfilter_hooks.c
··· 871 871 if (nf_bridge && !nf_bridge->in_prerouting && 872 872 !netif_is_l3_master(skb->dev) && 873 873 !netif_is_l3_slave(skb->dev)) { 874 + nf_bridge_info_free(skb); 874 875 state->okfn(state->net, state->sk, skb); 875 876 return NF_STOLEN; 876 877 }
+33 -36
net/can/isotp.c
··· 140 140 canid_t rxid; 141 141 ktime_t tx_gap; 142 142 ktime_t lastrxcf_tstamp; 143 - struct hrtimer rxtimer, txtimer; 143 + struct hrtimer rxtimer, txtimer, txfrtimer; 144 144 struct can_isotp_options opt; 145 145 struct can_isotp_fc_options rxfc, txfc; 146 146 struct can_isotp_ll_options ll; ··· 871 871 } 872 872 873 873 /* start timer to send next consecutive frame with correct delay */ 874 - hrtimer_start(&so->txtimer, so->tx_gap, HRTIMER_MODE_REL_SOFT); 874 + hrtimer_start(&so->txfrtimer, so->tx_gap, HRTIMER_MODE_REL_SOFT); 875 875 } 876 876 877 877 static enum hrtimer_restart isotp_tx_timer_handler(struct hrtimer *hrtimer) ··· 879 879 struct isotp_sock *so = container_of(hrtimer, struct isotp_sock, 880 880 txtimer); 881 881 struct sock *sk = &so->sk; 882 - enum hrtimer_restart restart = HRTIMER_NORESTART; 883 882 884 - switch (so->tx.state) { 885 - case ISOTP_SENDING: 883 + /* don't handle timeouts in IDLE state */ 884 + if (so->tx.state == ISOTP_IDLE) 885 + return HRTIMER_NORESTART; 886 886 887 - /* cfecho should be consumed by isotp_rcv_echo() here */ 888 - if (!so->cfecho) { 889 - /* start timeout for unlikely lost echo skb */ 890 - hrtimer_set_expires(&so->txtimer, 891 - ktime_add(ktime_get(), 892 - ktime_set(ISOTP_ECHO_TIMEOUT, 0))); 893 - restart = HRTIMER_RESTART; 887 + /* we did not get any flow control or echo frame in time */ 894 888 895 - /* push out the next consecutive frame */ 896 - isotp_send_cframe(so); 897 - break; 898 - } 889 + /* report 'communication error on send' */ 890 + sk->sk_err = ECOMM; 891 + if (!sock_flag(sk, SOCK_DEAD)) 892 + sk_error_report(sk); 899 893 900 - /* cfecho has not been cleared in isotp_rcv_echo() */ 901 - pr_notice_once("can-isotp: cfecho %08X timeout\n", so->cfecho); 902 - fallthrough; 894 + /* reset tx state */ 895 + so->tx.state = ISOTP_IDLE; 896 + wake_up_interruptible(&so->wait); 903 897 904 - case ISOTP_WAIT_FC: 905 - case ISOTP_WAIT_FIRST_FC: 898 + return HRTIMER_NORESTART; 899 + } 906 900 907 - /* we did 
not get any flow control frame in time */ 901 + static enum hrtimer_restart isotp_txfr_timer_handler(struct hrtimer *hrtimer) 902 + { 903 + struct isotp_sock *so = container_of(hrtimer, struct isotp_sock, 904 + txfrtimer); 908 905 909 - /* report 'communication error on send' */ 910 - sk->sk_err = ECOMM; 911 - if (!sock_flag(sk, SOCK_DEAD)) 912 - sk_error_report(sk); 906 + /* start echo timeout handling and cover below protocol error */ 907 + hrtimer_start(&so->txtimer, ktime_set(ISOTP_ECHO_TIMEOUT, 0), 908 + HRTIMER_MODE_REL_SOFT); 913 909 914 - /* reset tx state */ 915 - so->tx.state = ISOTP_IDLE; 916 - wake_up_interruptible(&so->wait); 917 - break; 910 + /* cfecho should be consumed by isotp_rcv_echo() here */ 911 + if (so->tx.state == ISOTP_SENDING && !so->cfecho) 912 + isotp_send_cframe(so); 918 913 919 - default: 920 - WARN_ONCE(1, "can-isotp: tx timer state %08X cfecho %08X\n", 921 - so->tx.state, so->cfecho); 922 - } 923 - 924 - return restart; 914 + return HRTIMER_NORESTART; 925 915 } 926 916 927 917 static int isotp_sendmsg(struct socket *sock, struct msghdr *msg, size_t size) ··· 1152 1162 /* wait for complete transmission of current pdu */ 1153 1163 wait_event_interruptible(so->wait, so->tx.state == ISOTP_IDLE); 1154 1164 1165 + /* force state machines to be idle also when a signal occurred */ 1166 + so->tx.state = ISOTP_IDLE; 1167 + so->rx.state = ISOTP_IDLE; 1168 + 1155 1169 spin_lock(&isotp_notifier_lock); 1156 1170 while (isotp_busy_notifier == so) { 1157 1171 spin_unlock(&isotp_notifier_lock); ··· 1188 1194 } 1189 1195 } 1190 1196 1197 + hrtimer_cancel(&so->txfrtimer); 1191 1198 hrtimer_cancel(&so->txtimer); 1192 1199 hrtimer_cancel(&so->rxtimer); 1193 1200 ··· 1592 1597 so->rxtimer.function = isotp_rx_timer_handler; 1593 1598 hrtimer_init(&so->txtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT); 1594 1599 so->txtimer.function = isotp_tx_timer_handler; 1600 + hrtimer_init(&so->txfrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT); 1601 + 
so->txfrtimer.function = isotp_txfr_timer_handler; 1595 1602 1596 1603 init_waitqueue_head(&so->wait); 1597 1604 spin_lock_init(&so->rx_lock);
-4
net/can/j1939/transport.c
··· 1092 1092 bool active; 1093 1093 1094 1094 j1939_session_list_lock(priv); 1095 - /* This function should be called with a session ref-count of at 1096 - * least 2. 1097 - */ 1098 - WARN_ON_ONCE(kref_read(&session->kref) < 2); 1099 1095 active = j1939_session_deactivate_locked(session); 1100 1096 j1939_session_list_unlock(priv); 1101 1097
+31 -16
net/can/raw.c
··· 132 132 return; 133 133 134 134 /* make sure to not pass oversized frames to the socket */ 135 - if ((can_is_canfd_skb(oskb) && !ro->fd_frames && !ro->xl_frames) || 136 - (can_is_canxl_skb(oskb) && !ro->xl_frames)) 135 + if ((!ro->fd_frames && can_is_canfd_skb(oskb)) || 136 + (!ro->xl_frames && can_is_canxl_skb(oskb))) 137 137 return; 138 138 139 139 /* eliminate multiple filter matches for the same skb */ ··· 670 670 if (copy_from_sockptr(&ro->fd_frames, optval, optlen)) 671 671 return -EFAULT; 672 672 673 + /* Enabling CAN XL includes CAN FD */ 674 + if (ro->xl_frames && !ro->fd_frames) { 675 + ro->fd_frames = ro->xl_frames; 676 + return -EINVAL; 677 + } 673 678 break; 674 679 675 680 case CAN_RAW_XL_FRAMES: ··· 684 679 if (copy_from_sockptr(&ro->xl_frames, optval, optlen)) 685 680 return -EFAULT; 686 681 682 + /* Enabling CAN XL includes CAN FD */ 683 + if (ro->xl_frames) 684 + ro->fd_frames = ro->xl_frames; 687 685 break; 688 686 689 687 case CAN_RAW_JOIN_FILTERS: ··· 794 786 return 0; 795 787 } 796 788 789 + static bool raw_bad_txframe(struct raw_sock *ro, struct sk_buff *skb, int mtu) 790 + { 791 + /* Classical CAN -> no checks for flags and device capabilities */ 792 + if (can_is_can_skb(skb)) 793 + return false; 794 + 795 + /* CAN FD -> needs to be enabled and a CAN FD or CAN XL device */ 796 + if (ro->fd_frames && can_is_canfd_skb(skb) && 797 + (mtu == CANFD_MTU || can_is_canxl_dev_mtu(mtu))) 798 + return false; 799 + 800 + /* CAN XL -> needs to be enabled and a CAN XL device */ 801 + if (ro->xl_frames && can_is_canxl_skb(skb) && 802 + can_is_canxl_dev_mtu(mtu)) 803 + return false; 804 + 805 + return true; 806 + } 807 + 797 808 static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size) 798 809 { 799 810 struct sock *sk = sock->sk; ··· 860 833 goto free_skb; 861 834 862 835 err = -EINVAL; 863 - if (ro->xl_frames && can_is_canxl_dev_mtu(dev->mtu)) { 864 - /* CAN XL, CAN FD and Classical CAN */ 865 - if (!can_is_canxl_skb(skb) && 
!can_is_canfd_skb(skb) && 866 - !can_is_can_skb(skb)) 867 - goto free_skb; 868 - } else if (ro->fd_frames && dev->mtu == CANFD_MTU) { 869 - /* CAN FD and Classical CAN */ 870 - if (!can_is_canfd_skb(skb) && !can_is_can_skb(skb)) 871 - goto free_skb; 872 - } else { 873 - /* Classical CAN */ 874 - if (!can_is_can_skb(skb)) 875 - goto free_skb; 876 - } 836 + if (raw_bad_txframe(ro, skb, dev->mtu)) 837 + goto free_skb; 877 838 878 839 sockcm_init(&sockc, sk); 879 840 if (msg->msg_controllen) {
+9
net/core/gro.c
··· 162 162 struct sk_buff *lp; 163 163 int segs; 164 164 165 + /* Do not splice page pool based packets w/ non-page pool 166 + * packets. This can result in reference count issues as page 167 + * pool pages will not decrement the reference count and will 168 + * instead be immediately returned to the pool or have frag 169 + * count decremented. 170 + */ 171 + if (p->pp_recycle != skb->pp_recycle) 172 + return -ETOOMANYREFS; 173 + 165 174 /* pairs with WRITE_ONCE() in netif_set_gro_max_size() */ 166 175 gro_max_size = READ_ONCE(p->dev->gro_max_size); 167 176
+1 -1
net/core/net_namespace.c
··· 137 137 return 0; 138 138 139 139 if (ops->id && ops->size) { 140 - cleanup: 141 140 ng = rcu_dereference_protected(net->gen, 142 141 lockdep_is_held(&pernet_ops_rwsem)); 143 142 ng->ptr[*ops->id] = NULL; 144 143 } 145 144 145 + cleanup: 146 146 kfree(data); 147 147 148 148 out:
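The one-line net_namespace.c fix moves the `cleanup:` label out of the `if (ops->id && ops->size)` block, so an error goto frees `data` without also running teardown that is only valid when the condition held. The general rule, an error-path label belongs after conditional teardown, not inside it, can be shown with a minimal sketch (hypothetical names):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of the corrected label placement: the error goto reaches
 * only the unconditional cleanup (freeing the allocation), never
 * the state that is set up conditionally. Illustrative only.
 */
int setup(int has_id, int fail)
{
	int registered = 0;
	char *data = malloc(16);

	if (!data)
		return -1;

	if (fail)
		goto cleanup;	/* must not touch conditional state */

	if (has_id)
		registered = 1;	/* conditional state, torn down elsewhere */

cleanup:
	free(data);
	return fail ? -1 : registered;
}
```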
+2 -3
net/core/skbuff.c
··· 4100 4100 4101 4101 skb_shinfo(skb)->frag_list = NULL; 4102 4102 4103 - do { 4103 + while (list_skb) { 4104 4104 nskb = list_skb; 4105 4105 list_skb = list_skb->next; 4106 4106 ··· 4146 4146 if (skb_needs_linearize(nskb, features) && 4147 4147 __skb_linearize(nskb)) 4148 4148 goto err_linearize; 4149 - 4150 - } while (list_skb); 4149 + } 4151 4150 4152 4151 skb->truesize = skb->truesize - delta_truesize; 4153 4152 skb->data_len = skb->data_len - delta_len;
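The skbuff.c change converts a `do { } while (list_skb)` into `while (list_skb)`: the frag_list may legitimately be empty, and a do/while would dereference a NULL `nskb` on the first pass. A tiny list-walk sketch (generic types, not the skb structures) shows why the condition must be checked before the first iteration:

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int val;
	struct node *next;
};

/* Guard the first iteration too: an empty list must yield 0. */
int sum_list(struct node *head)
{
	int sum = 0;

	while (head) {	/* a do/while here would deref a NULL head */
		sum += head->val;
		head = head->next;
	}
	return sum;
}
```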
+34 -27
net/core/sock_map.c
··· 1569 1569 psock = sk_psock(sk); 1570 1570 if (unlikely(!psock)) { 1571 1571 rcu_read_unlock(); 1572 - if (sk->sk_prot->unhash) 1573 - sk->sk_prot->unhash(sk); 1574 - return; 1572 + saved_unhash = READ_ONCE(sk->sk_prot)->unhash; 1573 + } else { 1574 + saved_unhash = psock->saved_unhash; 1575 + sock_map_remove_links(sk, psock); 1576 + rcu_read_unlock(); 1575 1577 } 1576 - 1577 - saved_unhash = psock->saved_unhash; 1578 - sock_map_remove_links(sk, psock); 1579 - rcu_read_unlock(); 1580 - saved_unhash(sk); 1578 + if (WARN_ON_ONCE(saved_unhash == sock_map_unhash)) 1579 + return; 1580 + if (saved_unhash) 1581 + saved_unhash(sk); 1581 1582 } 1582 1583 EXPORT_SYMBOL_GPL(sock_map_unhash); 1583 1584 ··· 1591 1590 psock = sk_psock_get(sk); 1592 1591 if (unlikely(!psock)) { 1593 1592 rcu_read_unlock(); 1594 - if (sk->sk_prot->destroy) 1595 - sk->sk_prot->destroy(sk); 1596 - return; 1593 + saved_destroy = READ_ONCE(sk->sk_prot)->destroy; 1594 + } else { 1595 + saved_destroy = psock->saved_destroy; 1596 + sock_map_remove_links(sk, psock); 1597 + rcu_read_unlock(); 1598 + sk_psock_stop(psock); 1599 + sk_psock_put(sk, psock); 1597 1600 } 1598 - 1599 - saved_destroy = psock->saved_destroy; 1600 - sock_map_remove_links(sk, psock); 1601 - rcu_read_unlock(); 1602 - sk_psock_stop(psock); 1603 - sk_psock_put(sk, psock); 1604 - saved_destroy(sk); 1601 + if (WARN_ON_ONCE(saved_destroy == sock_map_destroy)) 1602 + return; 1603 + if (saved_destroy) 1604 + saved_destroy(sk); 1605 1605 } 1606 1606 EXPORT_SYMBOL_GPL(sock_map_destroy); 1607 1607 ··· 1617 1615 if (unlikely(!psock)) { 1618 1616 rcu_read_unlock(); 1619 1617 release_sock(sk); 1620 - return sk->sk_prot->close(sk, timeout); 1618 + saved_close = READ_ONCE(sk->sk_prot)->close; 1619 + } else { 1620 + saved_close = psock->saved_close; 1621 + sock_map_remove_links(sk, psock); 1622 + rcu_read_unlock(); 1623 + sk_psock_stop(psock); 1624 + release_sock(sk); 1625 + cancel_work_sync(&psock->work); 1626 + sk_psock_put(sk, psock); 1621 1627 }
1622 - 1623 - saved_close = psock->saved_close; 1624 - sock_map_remove_links(sk, psock); 1625 - rcu_read_unlock(); 1626 - sk_psock_stop(psock); 1627 - release_sock(sk); 1628 - cancel_work_sync(&psock->work); 1629 - sk_psock_put(sk, psock); 1628 + /* Make sure we do not recurse. This is a bug. 1629 + * Leak the socket instead of crashing on a stack overflow. 1630 + */ 1631 + if (WARN_ON_ONCE(saved_close == sock_map_close)) 1632 + return; 1630 1633 saved_close(sk, timeout); 1631 1634 } 1632 1635 EXPORT_SYMBOL_GPL(sock_map_close);
+2
net/ipv4/fib_semantics.c
··· 30 30 #include <linux/slab.h> 31 31 #include <linux/netlink.h> 32 32 #include <linux/hash.h> 33 + #include <linux/nospec.h> 33 34 34 35 #include <net/arp.h> 35 36 #include <net/inet_dscp.h> ··· 1023 1022 if (type > RTAX_MAX) 1024 1023 return false; 1025 1024 1025 + type = array_index_nospec(type, RTAX_MAX + 1); 1026 1026 if (type == RTAX_CC_ALGO) { 1027 1027 char tmp[TCP_CA_NAME_MAX]; 1028 1028 bool ecn_ca = false;
+2
net/ipv4/metrics.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 #include <linux/netlink.h> 3 + #include <linux/nospec.h> 3 4 #include <linux/rtnetlink.h> 4 5 #include <linux/types.h> 5 6 #include <net/ip.h> ··· 26 25 return -EINVAL; 27 26 } 28 27 28 + type = array_index_nospec(type, RTAX_MAX + 1); 29 29 if (type == RTAX_CC_ALGO) { 30 30 char tmp[TCP_CA_NAME_MAX]; 31 31
+2 -2
net/ipv4/tcp_bpf.c
··· 6 6 #include <linux/bpf.h> 7 7 #include <linux/init.h> 8 8 #include <linux/wait.h> 9 + #include <linux/util_macros.h> 9 10 10 11 #include <net/inet_common.h> 11 12 #include <net/tls.h> ··· 640 639 */ 641 640 void tcp_bpf_clone(const struct sock *sk, struct sock *newsk) 642 641 { 643 - int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4; 644 642 struct proto *prot = newsk->sk_prot; 645 643 646 - if (prot == &tcp_bpf_prots[family][TCP_BPF_BASE]) 644 + if (is_insidevar(prot, tcp_bpf_prots)) 647 645 newsk->sk_prot = sk->sk_prot_creator; 648 646 } 649 647 #endif /* CONFIG_BPF_SYSCALL */
+32 -27
net/ipv6/addrconf.c
··· 3127 3127 offset = sizeof(struct in6_addr) - 4; 3128 3128 memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4); 3129 3129 3130 - if (idev->dev->flags&IFF_POINTOPOINT) { 3130 + if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) { 3131 + scope = IPV6_ADDR_COMPATv4; 3132 + plen = 96; 3133 + pflags |= RTF_NONEXTHOP; 3134 + } else { 3131 3135 if (idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_NONE) 3132 3136 return; 3133 3137 3134 3138 addr.s6_addr32[0] = htonl(0xfe800000); 3135 3139 scope = IFA_LINK; 3136 3140 plen = 64; 3137 - } else { 3138 - scope = IPV6_ADDR_COMPATv4; 3139 - plen = 96; 3140 - pflags |= RTF_NONEXTHOP; 3141 3141 } 3142 3142 3143 3143 if (addr.s6_addr32[3]) { ··· 3447 3447 } 3448 3448 #endif 3449 3449 3450 + static void addrconf_init_auto_addrs(struct net_device *dev) 3451 + { 3452 + switch (dev->type) { 3453 + #if IS_ENABLED(CONFIG_IPV6_SIT) 3454 + case ARPHRD_SIT: 3455 + addrconf_sit_config(dev); 3456 + break; 3457 + #endif 3458 + #if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE) 3459 + case ARPHRD_IP6GRE: 3460 + case ARPHRD_IPGRE: 3461 + addrconf_gre_config(dev); 3462 + break; 3463 + #endif 3464 + case ARPHRD_LOOPBACK: 3465 + init_loopback(dev); 3466 + break; 3467 + 3468 + default: 3469 + addrconf_dev_config(dev); 3470 + break; 3471 + } 3472 + } 3473 + 3450 3474 static int fixup_permanent_addr(struct net *net, 3451 3475 struct inet6_dev *idev, 3452 3476 struct inet6_ifaddr *ifp) ··· 3639 3615 run_pending = 1; 3640 3616 } 3641 3617 3642 - switch (dev->type) { 3643 - #if IS_ENABLED(CONFIG_IPV6_SIT) 3644 - case ARPHRD_SIT: 3645 - addrconf_sit_config(dev); 3646 - break; 3647 - #endif 3648 - #if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE) 3649 - case ARPHRD_IP6GRE: 3650 - case ARPHRD_IPGRE: 3651 - addrconf_gre_config(dev); 3652 - break; 3653 - #endif 3654 - case ARPHRD_LOOPBACK: 3655 - init_loopback(dev); 3656 - break; 3657 - 3658 - default: 3659 - addrconf_dev_config(dev); 3660 - break; 
3661 - } 3618 + addrconf_init_auto_addrs(dev); 3662 3619 3663 3620 if (!IS_ERR_OR_NULL(idev)) { 3664 3621 if (run_pending) ··· 6402 6397 6403 6398 if (idev->cnf.addr_gen_mode != new_val) { 6404 6399 idev->cnf.addr_gen_mode = new_val; 6405 - addrconf_dev_config(idev->dev); 6400 + addrconf_init_auto_addrs(idev->dev); 6406 6401 } 6407 6402 } else if (&net->ipv6.devconf_all->addr_gen_mode == ctl->data) { 6408 6403 struct net_device *dev; ··· 6413 6408 if (idev && 6414 6409 idev->cnf.addr_gen_mode != new_val) { 6415 6410 idev->cnf.addr_gen_mode = new_val; 6416 - addrconf_dev_config(idev->dev); 6411 + addrconf_init_auto_addrs(idev->dev); 6417 6412 } 6418 6413 } 6419 6414 }
+14 -1
net/ipv6/ip6_output.c
··· 547 547 pneigh_lookup(&nd_tbl, net, &hdr->daddr, skb->dev, 0)) { 548 548 int proxied = ip6_forward_proxy_check(skb); 549 549 if (proxied > 0) { 550 - hdr->hop_limit--; 550 + /* It's tempting to decrease the hop limit 551 + * here by 1, as we do at the end of the 552 + * function too. 553 + * 554 + * But that would be incorrect, as proxying is 555 + * not forwarding. The ip6_input function 556 + * will handle this packet locally, and it 557 + * depends on the hop limit being unchanged. 558 + * 559 + * One example is the NDP hop limit, that 560 + * always has to stay 255, but other would be 561 + * similar checks around RA packets, where the 562 + * user can even change the desired limit. 563 + */ 551 564 return ip6_input(skb); 552 565 } else if (proxied < 0) { 553 566 __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
-1
net/mac802154/rx.c
··· 213 213 ret = ieee802154_parse_frame_start(skb, &hdr); 214 214 if (ret) { 215 215 pr_debug("got invalid frame\n"); 216 - kfree_skb(skb); 217 216 return; 218 217 } 219 218
+13 -3
net/mctp/af_mctp.c
··· 544 544 545 545 static void mctp_sk_close(struct sock *sk, long timeout) 546 546 { 547 - struct mctp_sock *msk = container_of(sk, struct mctp_sock, sk); 548 - 549 - del_timer_sync(&msk->key_expiry); 550 547 sk_common_release(sk); 551 548 } 552 549 ··· 577 580 spin_lock_irqsave(&key->lock, fl2); 578 581 __mctp_key_remove(key, net, fl2, MCTP_TRACE_KEY_CLOSED); 579 582 } 583 + sock_set_flag(sk, SOCK_DEAD); 580 584 spin_unlock_irqrestore(&net->mctp.keys_lock, flags); 585 + 586 + /* Since there are no more tag allocations (we have removed all of the 587 + * keys), stop any pending expiry events. the timer cannot be re-queued 588 + * as the sk is no longer observable 589 + */ 590 + del_timer_sync(&msk->key_expiry); 591 + } 592 + 593 + static void mctp_sk_destruct(struct sock *sk) 594 + { 595 + skb_queue_purge(&sk->sk_receive_queue); 581 596 } 582 597 583 598 static struct proto mctp_proto = { ··· 628 619 return -ENOMEM; 629 620 630 621 sock_init_data(sock, sk); 622 + sk->sk_destruct = mctp_sk_destruct; 631 623 632 624 rc = 0; 633 625 if (sk->sk_prot->init)
+21 -13
net/mctp/route.c
··· 147 147 key->valid = true; 148 148 spin_lock_init(&key->lock); 149 149 refcount_set(&key->refs, 1); 150 + sock_hold(key->sk); 150 151 151 152 return key; 152 153 } ··· 166 165 mctp_dev_release_key(key->dev, key); 167 166 spin_unlock_irqrestore(&key->lock, flags); 168 167 168 + sock_put(key->sk); 169 169 kfree(key); 170 170 } 171 171 ··· 178 176 int rc = 0; 179 177 180 178 spin_lock_irqsave(&net->mctp.keys_lock, flags); 179 + 180 + if (sock_flag(&msk->sk, SOCK_DEAD)) { 181 + rc = -EINVAL; 182 + goto out_unlock; 183 + } 181 184 182 185 hlist_for_each_entry(tmp, &net->mctp.keys, hlist) { 183 186 if (mctp_key_match(tmp, key->local_addr, key->peer_addr, ··· 205 198 hlist_add_head(&key->sklist, &msk->keys); 206 199 } 207 200 201 + out_unlock: 208 202 spin_unlock_irqrestore(&net->mctp.keys_lock, flags); 209 203 210 204 return rc; ··· 323 315 324 316 static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb) 325 317 { 318 + struct mctp_sk_key *key, *any_key = NULL; 326 319 struct net *net = dev_net(skb->dev); 327 - struct mctp_sk_key *key; 328 320 struct mctp_sock *msk; 329 321 struct mctp_hdr *mh; 330 322 unsigned long f; ··· 369 361 * key for reassembly - we'll create a more specific 370 362 * one for future packets if required (ie, !EOM). 371 363 */ 372 - key = mctp_lookup_key(net, skb, MCTP_ADDR_ANY, &f); 373 - if (key) { 374 - msk = container_of(key->sk, 364 + any_key = mctp_lookup_key(net, skb, MCTP_ADDR_ANY, &f); 365 + if (any_key) { 366 + msk = container_of(any_key->sk, 375 367 struct mctp_sock, sk); 376 - spin_unlock_irqrestore(&key->lock, f); 377 - mctp_key_unref(key); 378 - key = NULL; 368 + spin_unlock_irqrestore(&any_key->lock, f); 379 369 } 380 370 } 381 371 ··· 425 419 * this function. 
426 420 */ 427 421 rc = mctp_key_add(key, msk); 428 - if (rc) { 429 - kfree(key); 430 - } else { 422 + if (!rc) 431 423 trace_mctp_key_acquire(key); 432 424 433 - /* we don't need to release key->lock on exit */ 434 - mctp_key_unref(key); 435 - } 425 + /* we don't need to release key->lock on exit, so 426 + * clean up here and suppress the unlock via 427 + * setting to NULL 428 + */ 429 + mctp_key_unref(key); 436 430 key = NULL; 437 431 438 432 } else { ··· 479 473 spin_unlock_irqrestore(&key->lock, f); 480 474 mctp_key_unref(key); 481 475 } 476 + if (any_key) 477 + mctp_key_unref(any_key); 482 478 out: 483 479 if (rc) 484 480 kfree_skb(skb);
+71 -96
net/netfilter/nf_conntrack_proto_sctp.c
··· 27 27 #include <net/netfilter/nf_conntrack_ecache.h> 28 28 #include <net/netfilter/nf_conntrack_timeout.h> 29 29 30 - /* FIXME: Examine ipfilter's timeouts and conntrack transitions more 31 - closely. They're more complex. --RR 32 - 33 - And so for me for SCTP :D -Kiran */ 34 - 35 30 static const char *const sctp_conntrack_names[] = { 36 - "NONE", 37 - "CLOSED", 38 - "COOKIE_WAIT", 39 - "COOKIE_ECHOED", 40 - "ESTABLISHED", 41 - "SHUTDOWN_SENT", 42 - "SHUTDOWN_RECD", 43 - "SHUTDOWN_ACK_SENT", 44 - "HEARTBEAT_SENT", 45 - "HEARTBEAT_ACKED", 31 + [SCTP_CONNTRACK_NONE] = "NONE", 32 + [SCTP_CONNTRACK_CLOSED] = "CLOSED", 33 + [SCTP_CONNTRACK_COOKIE_WAIT] = "COOKIE_WAIT", 34 + [SCTP_CONNTRACK_COOKIE_ECHOED] = "COOKIE_ECHOED", 35 + [SCTP_CONNTRACK_ESTABLISHED] = "ESTABLISHED", 36 + [SCTP_CONNTRACK_SHUTDOWN_SENT] = "SHUTDOWN_SENT", 37 + [SCTP_CONNTRACK_SHUTDOWN_RECD] = "SHUTDOWN_RECD", 38 + [SCTP_CONNTRACK_SHUTDOWN_ACK_SENT] = "SHUTDOWN_ACK_SENT", 39 + [SCTP_CONNTRACK_HEARTBEAT_SENT] = "HEARTBEAT_SENT", 46 40 }; 47 41 48 42 #define SECS * HZ ··· 48 54 [SCTP_CONNTRACK_CLOSED] = 10 SECS, 49 55 [SCTP_CONNTRACK_COOKIE_WAIT] = 3 SECS, 50 56 [SCTP_CONNTRACK_COOKIE_ECHOED] = 3 SECS, 51 - [SCTP_CONNTRACK_ESTABLISHED] = 5 DAYS, 57 + [SCTP_CONNTRACK_ESTABLISHED] = 210 SECS, 52 58 [SCTP_CONNTRACK_SHUTDOWN_SENT] = 300 SECS / 1000, 53 59 [SCTP_CONNTRACK_SHUTDOWN_RECD] = 300 SECS / 1000, 54 60 [SCTP_CONNTRACK_SHUTDOWN_ACK_SENT] = 3 SECS, 55 61 [SCTP_CONNTRACK_HEARTBEAT_SENT] = 30 SECS, 56 - [SCTP_CONNTRACK_HEARTBEAT_ACKED] = 210 SECS, 57 - [SCTP_CONNTRACK_DATA_SENT] = 30 SECS, 58 62 }; 59 63 60 64 #define SCTP_FLAG_HEARTBEAT_VTAG_FAILED 1 ··· 66 74 #define sSR SCTP_CONNTRACK_SHUTDOWN_RECD 67 75 #define sSA SCTP_CONNTRACK_SHUTDOWN_ACK_SENT 68 76 #define sHS SCTP_CONNTRACK_HEARTBEAT_SENT 69 - #define sHA SCTP_CONNTRACK_HEARTBEAT_ACKED 70 - #define sDS SCTP_CONNTRACK_DATA_SENT 71 77 #define sIV SCTP_CONNTRACK_MAX 72 78 73 79 /* ··· 88 98 CLOSED - We have seen a SHUTDOWN_COMPLETE chunk in the direction of
89 99 the SHUTDOWN chunk. Connection is closed. 90 100 HEARTBEAT_SENT - We have seen a HEARTBEAT in a new flow. 91 - HEARTBEAT_ACKED - We have seen a HEARTBEAT-ACK/DATA/SACK in the direction 92 - opposite to that of the HEARTBEAT/DATA chunk. Secondary connection 93 - is established. 94 - DATA_SENT - We have seen a DATA/SACK in a new flow. 95 101 */ 96 102 97 103 /* TODO ··· 101 115 */ 102 116 103 117 /* SCTP conntrack state transitions */ 104 - static const u8 sctp_conntracks[2][12][SCTP_CONNTRACK_MAX] = { 118 + static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = { 105 119 { 106 120 /* ORIGINAL */ 107 - /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS */ 108 - /* init */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA, sCW}, 109 - /* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL}, 110 - /* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL}, 111 - /* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS, sCL}, 112 - /* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA, sHA, sSA}, 113 - /* error */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},/* Can't have Stale cookie*/ 114 - /* cookie_echo */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},/* 5.2.4 - Big TODO */ 115 - /* cookie_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA, sCL},/* Can't come in orig dir */ 116 - /* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL, sHA, sCL}, 117 - /* heartbeat */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS}, 118 - /* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS}, 119 - /* data/sack */ {sDS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS} 121 + /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */ 122 + /* init */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW}, 123 + /* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL}, 124 + /* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL}, 125 + /* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL},
126 + /* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA}, 127 + /* error */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/ 128 + /* cookie_echo */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */ 129 + /* cookie_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */ 130 + /* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL}, 131 + /* heartbeat */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS}, 132 + /* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS}, 120 133 }, 121 134 { 122 135 /* REPLY */ 123 - /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sDS */ 124 - /* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA, sIV},/* INIT in sCL Big TODO */ 125 - /* init_ack */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA, sIV}, 126 - /* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL, sIV}, 127 - /* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR, sIV}, 128 - /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA, sIV}, 129 - /* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV, sHA, sIV}, 130 - /* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA, sIV},/* Can't come in reply dir */ 131 - /* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV, sHA, sIV}, 132 - /* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV, sHA, sIV}, 133 - /* heartbeat */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA, sHA}, 134 - /* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHA, sHA, sHA}, 135 - /* data/sack */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHA, sHA, sHA}, 136 + /* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */ 137 + /* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* INIT in sCL Big TODO */ 138 + /* init_ack */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV}, 139 + /* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV},
140 + /* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV}, 141 + /* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV}, 142 + /* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV}, 143 + /* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */ 144 + /* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV}, 145 + /* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV}, 146 + /* heartbeat */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS}, 147 + /* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sES}, 136 148 } 137 149 }; 138 150 ··· 142 158 } 143 159 #endif 144 160 161 + /* do_basic_checks ensures sch->length > 0, do not use before */ 145 162 #define for_each_sctp_chunk(skb, sch, _sch, offset, dataoff, count) \ 146 163 for ((offset) = (dataoff) + sizeof(struct sctphdr), (count) = 0; \ 147 164 (offset) < (skb)->len && \ ··· 243 258 pr_debug("SCTP_CID_HEARTBEAT_ACK"); 244 259 i = 10; 245 260 break; 246 - case SCTP_CID_DATA: 247 - case SCTP_CID_SACK: 248 - pr_debug("SCTP_CID_DATA/SACK"); 249 - i = 11; 250 - break; 251 261 default: 252 262 /* Other chunks like DATA or SACK do not change the state */ 253 263 pr_debug("Unknown chunk type, Will stay in %s\n", ··· 296 316 ih->init_tag); 297 317 298 318 ct->proto.sctp.vtag[IP_CT_DIR_REPLY] = ih->init_tag; 299 - } else if (sch->type == SCTP_CID_HEARTBEAT || 300 - sch->type == SCTP_CID_DATA || 301 - sch->type == SCTP_CID_SACK) { 319 + } else if (sch->type == SCTP_CID_HEARTBEAT) { 302 320 pr_debug("Setting vtag %x for secondary conntrack\n", 303 321 sh->vtag); 304 322 ct->proto.sctp.vtag[IP_CT_DIR_ORIGINAL] = sh->vtag; ··· 382 404 383 405 if (!sctp_new(ct, skb, sh, dataoff)) 384 406 return -NF_ACCEPT; 385 - } else { 386 - /* Check the verification tag (Sec 8.5) */ 387 - if (!test_bit(SCTP_CID_INIT, map) && 388 - !test_bit(SCTP_CID_SHUTDOWN_COMPLETE, map) && 389 - !test_bit(SCTP_CID_COOKIE_ECHO, map) && 390 - !test_bit(SCTP_CID_ABORT, map) && 391 - !test_bit(SCTP_CID_SHUTDOWN_ACK, map) &&
392 - !test_bit(SCTP_CID_HEARTBEAT, map) && 393 - !test_bit(SCTP_CID_HEARTBEAT_ACK, map) && 394 - sh->vtag != ct->proto.sctp.vtag[dir]) { 395 - pr_debug("Verification tag check failed\n"); 396 - goto out; 397 - } 407 + } 408 + 409 + /* Check the verification tag (Sec 8.5) */ 410 + if (!test_bit(SCTP_CID_INIT, map) && 411 + !test_bit(SCTP_CID_SHUTDOWN_COMPLETE, map) && 412 + !test_bit(SCTP_CID_COOKIE_ECHO, map) && 413 + !test_bit(SCTP_CID_ABORT, map) && 414 + !test_bit(SCTP_CID_SHUTDOWN_ACK, map) && 415 + !test_bit(SCTP_CID_HEARTBEAT, map) && 416 + !test_bit(SCTP_CID_HEARTBEAT_ACK, map) && 417 + sh->vtag != ct->proto.sctp.vtag[dir]) { 418 + pr_debug("Verification tag check failed\n"); 419 + goto out; 420 + } 398 420 } 399 421 400 422 old_state = new_state = SCTP_CONNTRACK_NONE; ··· 402 424 for_each_sctp_chunk (skb, sch, _sch, offset, dataoff, count) { 403 425 /* Special cases of Verification tag check (Sec 8.5.1) */ 404 426 if (sch->type == SCTP_CID_INIT) { 405 - /* Sec 8.5.1 (A) */ 427 + /* (A) vtag MUST be zero */ 406 428 if (sh->vtag != 0) 407 429 goto out_unlock; 408 430 } else if (sch->type == SCTP_CID_ABORT) { 409 - /* Sec 8.5.1 (B) */ 410 - if (sh->vtag != ct->proto.sctp.vtag[dir] && 411 - sh->vtag != ct->proto.sctp.vtag[!dir]) 431 + /* (B) vtag MUST match own vtag if T flag is unset OR 432 + * MUST match peer's vtag if T flag is set 433 + */ 434 + if ((!(sch->flags & SCTP_CHUNK_FLAG_T) && 435 + sh->vtag != ct->proto.sctp.vtag[dir]) || 436 + ((sch->flags & SCTP_CHUNK_FLAG_T) && 437 + sh->vtag != ct->proto.sctp.vtag[!dir])) 412 438 goto out_unlock; 413 439 } else if (sch->type == SCTP_CID_SHUTDOWN_COMPLETE) { 414 - /* Sec 8.5.1 (C) */ 415 - if (sh->vtag != ct->proto.sctp.vtag[dir] && 416 - sh->vtag != ct->proto.sctp.vtag[!dir] && 417 - sch->flags & SCTP_CHUNK_FLAG_T) 440 + /* (C) vtag MUST match own vtag if T flag is unset OR 441 + * MUST match peer's vtag if T flag is set 442 + */ 443 + if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&
444 + sh->vtag != ct->proto.sctp.vtag[dir]) || 445 + ((sch->flags & SCTP_CHUNK_FLAG_T) && 446 + sh->vtag != ct->proto.sctp.vtag[!dir])) 418 447 goto out_unlock; 419 448 } else if (sch->type == SCTP_CID_COOKIE_ECHO) { 420 - /* Sec 8.5.1 (D) */ 449 + /* (D) vtag must be same as init_vtag as found in INIT_ACK */ 421 450 if (sh->vtag != ct->proto.sctp.vtag[dir]) 422 451 goto out_unlock; 423 452 } else if (sch->type == SCTP_CID_HEARTBEAT) { ··· 460 475 ct->proto.sctp.vtag[!dir] = 0; 461 476 } else if (ct->proto.sctp.flags & SCTP_FLAG_HEARTBEAT_VTAG_FAILED) { 462 477 ct->proto.sctp.flags &= ~SCTP_FLAG_HEARTBEAT_VTAG_FAILED; 463 - } 464 - } else if (sch->type == SCTP_CID_DATA || sch->type == SCTP_CID_SACK) { 465 - if (ct->proto.sctp.vtag[dir] == 0) { 466 - pr_debug("Setting vtag %x for dir %d\n", sh->vtag, dir); 467 - ct->proto.sctp.vtag[dir] = sh->vtag; 468 478 } 469 479 } 470 480 ··· 498 518 } 499 519 500 520 ct->proto.sctp.state = new_state; 501 - if (old_state != new_state) 521 + if (old_state != new_state) { 502 522 nf_conntrack_event_cache(IPCT_PROTOINFO, ct); 523 + if (new_state == SCTP_CONNTRACK_ESTABLISHED && 524 + !test_and_set_bit(IPS_ASSURED_BIT, &ct->status)) 525 + nf_conntrack_event_cache(IPCT_ASSURED, ct); 526 + } 503 527 } 504 528 spin_unlock_bh(&ct->lock); 505 529 ··· 516 532 timeouts = nf_sctp_pernet(nf_ct_net(ct))->timeouts; 517 533 518 534 nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]); 519 - 520 - if (old_state == SCTP_CONNTRACK_COOKIE_ECHOED && 521 - dir == IP_CT_DIR_REPLY && 522 - new_state == SCTP_CONNTRACK_ESTABLISHED) { 523 - pr_debug("Setting assured bit\n"); 524 - set_bit(IPS_ASSURED_BIT, &ct->status); 525 - nf_conntrack_event_cache(IPCT_ASSURED, ct); 526 - } 527 535 528 536 return NF_ACCEPT; 529 537 ··· 677 701 [CTA_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT] = { .type = NLA_U32 }, 678 702 [CTA_TIMEOUT_SCTP_HEARTBEAT_SENT] = { .type = NLA_U32 }, 679 703 [CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED] = { .type = NLA_U32 }, 680 - [CTA_TIMEOUT_SCTP_DATA_SENT] = { .type = NLA_U32 },
681 704 }; 682 705 #endif /* CONFIG_NF_CONNTRACK_TIMEOUT */ 683 706
-16
net/netfilter/nf_conntrack_standalone.c
··· 601 601 NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_SHUTDOWN_RECD, 602 602 NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT, 603 603 NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_SENT, 604 - NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_ACKED, 605 - NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_DATA_SENT, 606 604 #endif 607 605 #ifdef CONFIG_NF_CT_PROTO_DCCP 608 606 NF_SYSCTL_CT_PROTO_TIMEOUT_DCCP_REQUEST, ··· 885 887 .mode = 0644, 886 888 .proc_handler = proc_dointvec_jiffies, 887 889 }, 888 - [NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_ACKED] = { 889 - .procname = "nf_conntrack_sctp_timeout_heartbeat_acked", 890 - .maxlen = sizeof(unsigned int), 891 - .mode = 0644, 892 - .proc_handler = proc_dointvec_jiffies, 893 - }, 894 - [NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_DATA_SENT] = { 895 - .procname = "nf_conntrack_sctp_timeout_data_sent", 896 - .maxlen = sizeof(unsigned int), 897 - .mode = 0644, 898 - .proc_handler = proc_dointvec_jiffies, 899 - }, 900 890 #endif 901 891 #ifdef CONFIG_NF_CT_PROTO_DCCP 902 892 [NF_SYSCTL_CT_PROTO_TIMEOUT_DCCP_REQUEST] = { ··· 1028 1042 XASSIGN(SHUTDOWN_RECD, sn); 1029 1043 XASSIGN(SHUTDOWN_ACK_SENT, sn); 1030 1044 XASSIGN(HEARTBEAT_SENT, sn); 1031 - XASSIGN(HEARTBEAT_ACKED, sn); 1032 - XASSIGN(DATA_SENT, sn); 1033 1045 #undef XASSIGN 1034 1046 #endif 1035 1047 }
+207 -131
net/netfilter/nft_set_rbtree.c
··· 38 38 return !nft_rbtree_interval_end(rbe); 39 39 } 40 40 41 - static bool nft_rbtree_equal(const struct nft_set *set, const void *this, 42 - const struct nft_rbtree_elem *interval) 41 + static int nft_rbtree_cmp(const struct nft_set *set, 42 + const struct nft_rbtree_elem *e1, 43 + const struct nft_rbtree_elem *e2) 43 44 { 44 - return memcmp(this, nft_set_ext_key(&interval->ext), set->klen) == 0; 45 + return memcmp(nft_set_ext_key(&e1->ext), nft_set_ext_key(&e2->ext), 46 + set->klen); 45 47 } 46 48 47 49 static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set, ··· 54 52 const struct nft_rbtree_elem *rbe, *interval = NULL; 55 53 u8 genmask = nft_genmask_cur(net); 56 54 const struct rb_node *parent; 57 - const void *this; 58 55 int d; 59 56 60 57 parent = rcu_dereference_raw(priv->root.rb_node); ··· 63 62 64 63 rbe = rb_entry(parent, struct nft_rbtree_elem, node); 65 64 66 - this = nft_set_ext_key(&rbe->ext); 67 - d = memcmp(this, key, set->klen); 65 + d = memcmp(nft_set_ext_key(&rbe->ext), key, set->klen); 68 66 if (d < 0) { 69 67 parent = rcu_dereference_raw(parent->rb_left); 70 68 if (interval && 71 - nft_rbtree_equal(set, this, interval) && 69 + !nft_rbtree_cmp(set, rbe, interval) && 72 70 nft_rbtree_interval_end(rbe) && 73 71 nft_rbtree_interval_start(interval)) 74 72 continue; ··· 215 215 return rbe; 216 216 } 217 217 218 + static int nft_rbtree_gc_elem(const struct nft_set *__set, 219 + struct nft_rbtree *priv, 220 + struct nft_rbtree_elem *rbe) 221 + { 222 + struct nft_set *set = (struct nft_set *)__set; 223 + struct rb_node *prev = rb_prev(&rbe->node); 224 + struct nft_rbtree_elem *rbe_prev; 225 + struct nft_set_gc_batch *gcb; 226 + 227 + gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC); 228 + if (!gcb) 229 + return -ENOMEM; 230 + 231 + /* search for expired end interval coming before this element. */
232 + do { 233 + rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node); 234 + if (nft_rbtree_interval_end(rbe_prev)) 235 + break; 236 + 237 + prev = rb_prev(prev); 238 + } while (prev != NULL); 239 + 240 + rb_erase(&rbe_prev->node, &priv->root); 241 + rb_erase(&rbe->node, &priv->root); 242 + atomic_sub(2, &set->nelems); 243 + 244 + nft_set_gc_batch_add(gcb, rbe); 245 + nft_set_gc_batch_complete(gcb); 246 + 247 + return 0; 248 + } 249 + 250 + static bool nft_rbtree_update_first(const struct nft_set *set, 251 + struct nft_rbtree_elem *rbe, 252 + struct rb_node *first) 253 + { 254 + struct nft_rbtree_elem *first_elem; 255 + 256 + first_elem = rb_entry(first, struct nft_rbtree_elem, node); 257 + /* this element is closest to where the new element is to be inserted: 258 + * update the first element for the node list path. 259 + */ 260 + if (nft_rbtree_cmp(set, rbe, first_elem) < 0) 261 + return true; 262 + 263 + return false; 264 + } 265 + 218 266 static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, 219 267 struct nft_rbtree_elem *new, 220 268 struct nft_set_ext **ext) 221 269 { 222 - bool overlap = false, dup_end_left = false, dup_end_right = false; 270 + struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL; 271 + struct rb_node *node, *parent, **p, *first = NULL; 223 272 struct nft_rbtree *priv = nft_set_priv(set); 224 273 u8 genmask = nft_genmask_next(net); 225 - struct nft_rbtree_elem *rbe; 226 - struct rb_node *parent, **p; 227 - int d; 274 + int d, err; 228 275 229 - /* Detect overlaps as we descend the tree. Set the flag in these cases: 230 - * 231 - * a1. _ _ __>| ?_ _ __| (insert end before existing end) 232 - * a2. _ _ ___| ?_ _ _>| (insert end after existing end) 233 - * a3. _ _ ___? >|_ _ __| (insert start before existing end) 234 - * 235 - * and clear it later on, as we eventually reach the points indicated by 236 - * '?' above, in the cases described below. We'll always meet these
237 - * later, locally, due to tree ordering, and overlaps for the intervals 238 - * that are the closest together are always evaluated last. 239 - * 240 - * b1. _ _ __>| !_ _ __| (insert end before existing start) 241 - * b2. _ _ ___| !_ _ _>| (insert end after existing start) 242 - * b3. _ _ ___! >|_ _ __| (insert start after existing end, as a leaf) 243 - * '--' no nodes falling in this range 244 - * b4. >|_ _ ! (insert start before existing start) 245 - * 246 - * Case a3. resolves to b3.: 247 - * - if the inserted start element is the leftmost, because the '0' 248 - * element in the tree serves as end element 249 - * - otherwise, if an existing end is found immediately to the left. If 250 - * there are existing nodes in between, we need to further descend the 251 - * tree before we can conclude the new start isn't causing an overlap 252 - * 253 - * or to b4., which, preceded by a3., means we already traversed one or 254 - * more existing intervals entirely, from the right. 255 - * 256 - * For a new, rightmost pair of elements, we'll hit cases b3. and b2., 257 - * in that order. 258 - * 259 - * The flag is also cleared in two special cases: 260 - * 261 - * b5. |__ _ _!|<_ _ _ (insert start right before existing end) 262 - * b6. |__ _ >|!__ _ _ (insert end right after existing start) 263 - * 264 - * which always happen as last step and imply that no further 265 - * overlapping is possible. 266 - * 267 - * Another special case comes from the fact that start elements matching 268 - * an already existing start element are allowed: insertion is not 269 - * performed but we return -EEXIST in that case, and the error will be 270 - * cleared by the caller if NLM_F_EXCL is not present in the request. 271 - * This way, request for insertion of an exact overlap isn't reported as 272 - * error to userspace if not desired.
273 - * 274 - * However, if the existing start matches a pre-existing start, but the 275 - * end element doesn't match the corresponding pre-existing end element, 276 - * we need to report a partial overlap. This is a local condition that 277 - * can be noticed without need for a tracking flag, by checking for a 278 - * local duplicated end for a corresponding start, from left and right, 279 - * separately. 276 + /* Descend the tree to search for an existing element greater than the 277 + * key value to insert that is greater than the new element. This is the 278 + * first element to walk the ordered elements to find possible overlap. 280 279 */ 281 - 282 280 parent = NULL; 283 281 p = &priv->root.rb_node; 284 282 while (*p != NULL) { 285 283 parent = *p; 286 284 rbe = rb_entry(parent, struct nft_rbtree_elem, node); 287 - d = memcmp(nft_set_ext_key(&rbe->ext), 288 - nft_set_ext_key(&new->ext), 289 - set->klen); 285 + d = nft_rbtree_cmp(set, rbe, new); 286 + 290 287 if (d < 0) { 291 288 p = &parent->rb_left; 292 - 293 - if (nft_rbtree_interval_start(new)) { 294 - if (nft_rbtree_interval_end(rbe) && 295 - nft_set_elem_active(&rbe->ext, genmask) && 296 - !nft_set_elem_expired(&rbe->ext) && !*p) 297 - overlap = false; 298 - } else { 299 - if (dup_end_left && !*p) 300 - return -ENOTEMPTY; 301 - 302 - overlap = nft_rbtree_interval_end(rbe) && 303 - nft_set_elem_active(&rbe->ext, 304 - genmask) && 305 - !nft_set_elem_expired(&rbe->ext); 306 - 307 - if (overlap) { 308 - dup_end_right = true; 309 - continue; 310 - } 311 - } 312 289 } else if (d > 0) { 290 + if (!first || 291 + nft_rbtree_update_first(set, rbe, first)) 292 + first = &rbe->node; 293 + 313 294 p = &parent->rb_right; 314 - 315 - if (nft_rbtree_interval_end(new)) { 316 - if (dup_end_right && !*p) 317 - return -ENOTEMPTY; 318 - 319 - overlap = nft_rbtree_interval_end(rbe) && 320 - nft_set_elem_active(&rbe->ext, 321 - genmask) && 322 - !nft_set_elem_expired(&rbe->ext); 323 - 324 - if (overlap) { 325 - dup_end_left = true;
326 - continue; 327 - } 328 - } else if (nft_set_elem_active(&rbe->ext, genmask) && 329 - !nft_set_elem_expired(&rbe->ext)) { 330 - overlap = nft_rbtree_interval_end(rbe); 331 - } 332 295 } else { 333 - if (nft_rbtree_interval_end(rbe) && 334 - nft_rbtree_interval_start(new)) { 296 + if (nft_rbtree_interval_end(rbe)) 335 297 p = &parent->rb_left; 336 - 337 - if (nft_set_elem_active(&rbe->ext, genmask) && 338 - !nft_set_elem_expired(&rbe->ext)) 339 - overlap = false; 340 - } else if (nft_rbtree_interval_start(rbe) && 341 - nft_rbtree_interval_end(new)) { 298 + else 342 299 p = &parent->rb_right; 343 - 344 - if (nft_set_elem_active(&rbe->ext, genmask) && 345 - !nft_set_elem_expired(&rbe->ext)) 346 - overlap = false; 347 - } else if (nft_set_elem_active(&rbe->ext, genmask) && 348 - !nft_set_elem_expired(&rbe->ext)) { 349 - *ext = &rbe->ext; 350 - return -EEXIST; 351 - } else { 352 - overlap = false; 353 - if (nft_rbtree_interval_end(rbe)) 354 - p = &parent->rb_left; 355 - else 356 - p = &parent->rb_right; 357 - } 358 300 } 359 - 360 - dup_end_left = dup_end_right = false; 361 301 } 362 302 363 - if (overlap) 303 + if (!first) 304 + first = rb_first(&priv->root); 305 + 306 + /* Detect overlap by going through the list of valid tree nodes. 307 + * Values stored in the tree are in reversed order, starting from 308 + * highest to lowest value. 309 + */ 310 + for (node = first; node != NULL; node = rb_next(node)) { 311 + rbe = rb_entry(node, struct nft_rbtree_elem, node); 312 + 313 + if (!nft_set_elem_active(&rbe->ext, genmask)) 314 + continue; 315 + 316 + /* perform garbage collection to avoid bogus overlap reports. */ 317 + if (nft_set_elem_expired(&rbe->ext)) { 318 + err = nft_rbtree_gc_elem(set, priv, rbe); 319 + if (err < 0) 320 + return err; 321 + 322 + continue; 323 + } 324 + 325 + d = nft_rbtree_cmp(set, rbe, new); 326 + if (d == 0) { 327 + /* Matching end element: no need to look for an 328 + * overlapping greater or equal element. 
329 + */ 330 + if (nft_rbtree_interval_end(rbe)) { 331 + rbe_le = rbe; 332 + break; 333 + } 334 + 335 + /* first element that is greater or equal to key value. */ 336 + if (!rbe_ge) { 337 + rbe_ge = rbe; 338 + continue; 339 + } 340 + 341 + /* this is a closer more or equal element, update it. */ 342 + if (nft_rbtree_cmp(set, rbe_ge, new) != 0) { 343 + rbe_ge = rbe; 344 + continue; 345 + } 346 + 347 + /* element is equal to key value, make sure flags are 348 + * the same, an existing more or equal start element 349 + * must not be replaced by more or equal end element. 350 + */ 351 + if ((nft_rbtree_interval_start(new) && 352 + nft_rbtree_interval_start(rbe_ge)) || 353 + (nft_rbtree_interval_end(new) && 354 + nft_rbtree_interval_end(rbe_ge))) { 355 + rbe_ge = rbe; 356 + continue; 357 + } 358 + } else if (d > 0) { 359 + /* annotate element greater than the new element. */ 360 + rbe_ge = rbe; 361 + continue; 362 + } else if (d < 0) { 363 + /* annotate element less than the new element. */ 364 + rbe_le = rbe; 365 + break; 366 + } 367 + } 368 + 369 + /* - new start element matching existing start element: full overlap 370 + * reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given. 371 + */ 372 + if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) && 373 + nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) { 374 + *ext = &rbe_ge->ext; 375 + return -EEXIST; 376 + } 377 + 378 + /* - new end element matching existing end element: full overlap 379 + * reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given. 380 + */ 381 + if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) && 382 + nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) { 383 + *ext = &rbe_le->ext; 384 + return -EEXIST; 385 + } 386 + 387 + /* - new start element with existing closest, less or equal key value 388 + * being a start element: partial overlap, reported as -ENOTEMPTY. 
389 + * Anonymous sets allow for two consecutive start element since they 390 + * are constant, skip them to avoid bogus overlap reports. 391 + */ 392 + if (!nft_set_is_anonymous(set) && rbe_le && 393 + nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new)) 364 394 return -ENOTEMPTY; 395 + 396 + /* - new end element with existing closest, less or equal key value 397 + * being a end element: partial overlap, reported as -ENOTEMPTY. 398 + */ 399 + if (rbe_le && 400 + nft_rbtree_interval_end(rbe_le) && nft_rbtree_interval_end(new)) 401 + return -ENOTEMPTY; 402 + 403 + /* - new end element with existing closest, greater or equal key value 404 + * being an end element: partial overlap, reported as -ENOTEMPTY 405 + */ 406 + if (rbe_ge && 407 + nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new)) 408 + return -ENOTEMPTY; 409 + 410 + /* Accepted element: pick insertion point depending on key value */ 411 + parent = NULL; 412 + p = &priv->root.rb_node; 413 + while (*p != NULL) { 414 + parent = *p; 415 + rbe = rb_entry(parent, struct nft_rbtree_elem, node); 416 + d = nft_rbtree_cmp(set, rbe, new); 417 + 418 + if (d < 0) 419 + p = &parent->rb_left; 420 + else if (d > 0) 421 + p = &parent->rb_right; 422 + else if (nft_rbtree_interval_end(rbe)) 423 + p = &parent->rb_left; 424 + else 425 + p = &parent->rb_right; 426 + } 365 427 366 428 rb_link_node_rcu(&new->node, parent, p); 367 429 rb_insert_color(&new->node, &priv->root); ··· 563 501 struct nft_rbtree *priv; 564 502 struct rb_node *node; 565 503 struct nft_set *set; 504 + struct net *net; 505 + u8 genmask; 566 506 567 507 priv = container_of(work, struct nft_rbtree, gc_work.work); 568 508 set = nft_set_container_of(priv); 509 + net = read_pnet(&set->net); 510 + genmask = nft_genmask_cur(net); 569 511 570 512 write_lock_bh(&priv->lock); 571 513 write_seqcount_begin(&priv->count); 572 514 for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) { 573 515 rbe = rb_entry(node, struct 
nft_rbtree_elem, node); 574 516 517 + if (!nft_set_elem_active(&rbe->ext, genmask)) 518 + continue; 519 + 520 + /* elements are reversed in the rbtree for historical reasons, 521 + * from highest to lowest value, that is why end element is 522 + * always visited before the start element. 523 + */ 575 524 if (nft_rbtree_interval_end(rbe)) { 576 525 rbe_end = rbe; 577 526 continue; 578 527 } 579 528 if (!nft_set_elem_expired(&rbe->ext)) 580 529 continue; 581 - if (nft_set_elem_mark_busy(&rbe->ext)) 530 + 531 + if (nft_set_elem_mark_busy(&rbe->ext)) { 532 + rbe_end = NULL; 582 533 continue; 534 + } 583 535 584 536 if (rbe_prev) { 585 537 rb_erase(&rbe_prev->node, &priv->root);
+24 -14
net/netlink/af_netlink.c
··· 580 580 if (nlk_sk(sk)->bound) 581 581 goto err; 582 582 583 - nlk_sk(sk)->portid = portid; 583 + /* portid can be read locklessly from netlink_getname(). */ 584 + WRITE_ONCE(nlk_sk(sk)->portid, portid); 585 + 584 586 sock_hold(sk); 585 587 586 588 err = __netlink_insert(table, sk); ··· 1098 1096 return -EINVAL; 1099 1097 1100 1098 if (addr->sa_family == AF_UNSPEC) { 1101 - sk->sk_state = NETLINK_UNCONNECTED; 1102 - nlk->dst_portid = 0; 1103 - nlk->dst_group = 0; 1099 + /* paired with READ_ONCE() in netlink_getsockbyportid() */ 1100 + WRITE_ONCE(sk->sk_state, NETLINK_UNCONNECTED); 1101 + /* dst_portid and dst_group can be read locklessly */ 1102 + WRITE_ONCE(nlk->dst_portid, 0); 1103 + WRITE_ONCE(nlk->dst_group, 0); 1104 1104 return 0; 1105 1105 } 1106 1106 if (addr->sa_family != AF_NETLINK) ··· 1123 1119 err = netlink_autobind(sock); 1124 1120 1125 1121 if (err == 0) { 1126 - sk->sk_state = NETLINK_CONNECTED; 1127 - nlk->dst_portid = nladdr->nl_pid; 1128 - nlk->dst_group = ffs(nladdr->nl_groups); 1122 + /* paired with READ_ONCE() in netlink_getsockbyportid() */ 1123 + WRITE_ONCE(sk->sk_state, NETLINK_CONNECTED); 1124 + /* dst_portid and dst_group can be read locklessly */ 1125 + WRITE_ONCE(nlk->dst_portid, nladdr->nl_pid); 1126 + WRITE_ONCE(nlk->dst_group, ffs(nladdr->nl_groups)); 1129 1127 } 1130 1128 1131 1129 return err; ··· 1144 1138 nladdr->nl_pad = 0; 1145 1139 1146 1140 if (peer) { 1147 - nladdr->nl_pid = nlk->dst_portid; 1148 - nladdr->nl_groups = netlink_group_mask(nlk->dst_group); 1141 + /* Paired with WRITE_ONCE() in netlink_connect() */ 1142 + nladdr->nl_pid = READ_ONCE(nlk->dst_portid); 1143 + nladdr->nl_groups = netlink_group_mask(READ_ONCE(nlk->dst_group)); 1149 1144 } else { 1150 - nladdr->nl_pid = nlk->portid; 1145 + /* Paired with WRITE_ONCE() in netlink_insert() */ 1146 + nladdr->nl_pid = READ_ONCE(nlk->portid); 1151 1147 netlink_lock_table(); 1152 1148 nladdr->nl_groups = nlk->groups ? 
nlk->groups[0] : 0; 1153 1149 netlink_unlock_table(); ··· 1176 1168 1177 1169 /* Don't bother queuing skb if kernel socket has no input function */ 1178 1170 nlk = nlk_sk(sock); 1179 - if (sock->sk_state == NETLINK_CONNECTED && 1180 - nlk->dst_portid != nlk_sk(ssk)->portid) { 1171 + /* dst_portid and sk_state can be changed in netlink_connect() */ 1172 + if (READ_ONCE(sock->sk_state) == NETLINK_CONNECTED && 1173 + READ_ONCE(nlk->dst_portid) != nlk_sk(ssk)->portid) { 1181 1174 sock_put(sock); 1182 1175 return ERR_PTR(-ECONNREFUSED); 1183 1176 } ··· 1895 1886 goto out; 1896 1887 netlink_skb_flags |= NETLINK_SKB_DST; 1897 1888 } else { 1898 - dst_portid = nlk->dst_portid; 1899 - dst_group = nlk->dst_group; 1889 + /* Paired with WRITE_ONCE() in netlink_connect() */ 1890 + dst_portid = READ_ONCE(nlk->dst_portid); 1891 + dst_group = READ_ONCE(nlk->dst_group); 1900 1892 } 1901 1893 1902 1894 /* Paired with WRITE_ONCE() in netlink_insert() */
+5
net/netrom/af_netrom.c
··· 400 400 struct sock *sk = sock->sk; 401 401 402 402 lock_sock(sk); 403 + if (sock->state != SS_UNCONNECTED) { 404 + release_sock(sk); 405 + return -EINVAL; 406 + } 407 + 403 408 if (sk->sk_state != TCP_LISTEN) { 404 409 memset(&nr_sk(sk)->user_addr, 0, AX25_ADDR_LEN); 405 410 sk->sk_max_ack_backlog = backlog;
+1
net/netrom/nr_timer.c
··· 121 121 is accepted() it isn't 'dead' so doesn't get removed. */ 122 122 if (sock_flag(sk, SOCK_DESTROY) || 123 123 (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) { 124 + sock_hold(sk); 124 125 bh_unlock_sock(sk); 125 126 nr_destroy_socket(sk); 126 127 goto out;
+6 -6
net/openvswitch/datapath.c
··· 1004 1004 key = kzalloc(sizeof(*key), GFP_KERNEL); 1005 1005 if (!key) { 1006 1006 error = -ENOMEM; 1007 - goto err_kfree_key; 1007 + goto err_kfree_flow; 1008 1008 } 1009 1009 1010 1010 ovs_match_init(&match, key, false, &mask); 1011 1011 error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY], 1012 1012 a[OVS_FLOW_ATTR_MASK], log); 1013 1013 if (error) 1014 - goto err_kfree_flow; 1014 + goto err_kfree_key; 1015 1015 1016 1016 ovs_flow_mask_key(&new_flow->key, key, true, &mask); 1017 1017 ··· 1019 1019 error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID], 1020 1020 key, log); 1021 1021 if (error) 1022 - goto err_kfree_flow; 1022 + goto err_kfree_key; 1023 1023 1024 1024 /* Validate actions. */ 1025 1025 error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS], 1026 1026 &new_flow->key, &acts, log); 1027 1027 if (error) { 1028 1028 OVS_NLERR(log, "Flow actions may not be safe on all matching packets."); 1029 - goto err_kfree_flow; 1029 + goto err_kfree_key; 1030 1030 } 1031 1031 1032 1032 reply = ovs_flow_cmd_alloc_info(acts, &new_flow->id, info, false, ··· 1126 1126 kfree_skb(reply); 1127 1127 err_kfree_acts: 1128 1128 ovs_nla_free_flow_actions(acts); 1129 - err_kfree_flow: 1130 - ovs_flow_free(new_flow, false); 1131 1129 err_kfree_key: 1132 1130 kfree(key); 1131 + err_kfree_flow: 1132 + ovs_flow_free(new_flow, false); 1133 1133 error: 1134 1134 return error; 1135 1135 }
+4 -1
net/qrtr/ns.c
··· 83 83 84 84 node->id = node_id; 85 85 86 - radix_tree_insert(&nodes, node_id, node); 86 + if (radix_tree_insert(&nodes, node_id, node)) { 87 + kfree(node); 88 + return NULL; 89 + } 87 90 88 91 return node; 89 92 }
+8
net/rose/af_rose.c
··· 488 488 { 489 489 struct sock *sk = sock->sk; 490 490 491 + lock_sock(sk); 492 + if (sock->state != SS_UNCONNECTED) { 493 + release_sock(sk); 494 + return -EINVAL; 495 + } 496 + 491 497 if (sk->sk_state != TCP_LISTEN) { 492 498 struct rose_sock *rose = rose_sk(sk); 493 499 ··· 503 497 memset(rose->dest_digis, 0, AX25_ADDR_LEN * ROSE_MAX_DIGIS); 504 498 sk->sk_max_ack_backlog = backlog; 505 499 sk->sk_state = TCP_LISTEN; 500 + release_sock(sk); 506 501 return 0; 507 502 } 503 + release_sock(sk); 508 504 509 505 return -EOPNOTSUPP; 510 506 }
+4 -1
net/sched/sch_htb.c
··· 431 431 while (cl->cmode == HTB_MAY_BORROW && p && mask) { 432 432 m = mask; 433 433 while (m) { 434 - int prio = ffz(~m); 434 + unsigned int prio = ffz(~m); 435 + 436 + if (WARN_ON_ONCE(prio > ARRAY_SIZE(p->inner.clprio))) 437 + break; 435 438 m &= ~(1 << prio); 436 439 437 440 if (p->inner.clprio[prio].feed.rb_node)
-1
net/sched/sch_taprio.c
··· 1700 1700 int i; 1701 1701 1702 1702 hrtimer_cancel(&q->advance_timer); 1703 - qdisc_synchronize(sch); 1704 1703 1705 1704 if (q->qdiscs) { 1706 1705 for (i = 0; i < dev->num_tx_queues; i++)
+6
net/sctp/bind_addr.c
··· 73 73 } 74 74 } 75 75 76 + /* If somehow no addresses were found that can be used with this 77 + * scope, it's an error. 78 + */ 79 + if (list_empty(&dest->address_list)) 80 + error = -ENETUNREACH; 81 + 76 82 out: 77 83 if (error) 78 84 sctp_bind_addr_clean(dest);
+1 -3
net/sctp/transport.c
··· 196 196 197 197 /* When a data chunk is sent, reset the heartbeat interval. */ 198 198 expires = jiffies + sctp_transport_timeout(transport); 199 - if ((time_before(transport->hb_timer.expires, expires) || 200 - !timer_pending(&transport->hb_timer)) && 201 - !mod_timer(&transport->hb_timer, 199 + if (!mod_timer(&transport->hb_timer, 202 200 expires + get_random_u32_below(transport->rto))) 203 201 sctp_transport_hold(transport); 204 202 }
+1 -1
net/tls/tls_sw.c
··· 2427 2427 { 2428 2428 struct tls_rec *rec; 2429 2429 2430 - rec = list_first_entry(&ctx->tx_list, struct tls_rec, list); 2430 + rec = list_first_entry_or_null(&ctx->tx_list, struct tls_rec, list); 2431 2431 if (!rec) 2432 2432 return false; 2433 2433
+6
net/x25/af_x25.c
··· 482 482 int rc = -EOPNOTSUPP; 483 483 484 484 lock_sock(sk); 485 + if (sock->state != SS_UNCONNECTED) { 486 + rc = -EINVAL; 487 + release_sock(sk); 488 + return rc; 489 + } 490 + 485 491 if (sk->sk_state != TCP_LISTEN) { 486 492 memset(&x25_sk(sk)->dest_addr, 0, X25_ADDR_LEN); 487 493 sk->sk_max_ack_backlog = backlog;
+18 -11
rust/kernel/print.rs
··· 142 142 macro_rules! print_macro ( 143 143 // The non-continuation cases (most of them, e.g. `INFO`). 144 144 ($format_string:path, false, $($arg:tt)+) => ( 145 - // SAFETY: This hidden macro should only be called by the documented 146 - // printing macros which ensure the format string is one of the fixed 147 - // ones. All `__LOG_PREFIX`s are null-terminated as they are generated 148 - // by the `module!` proc macro or fixed values defined in a kernel 149 - // crate. 150 - unsafe { 151 - $crate::print::call_printk( 152 - &$format_string, 153 - crate::__LOG_PREFIX, 154 - format_args!($($arg)+), 155 - ); 145 + // To remain sound, `arg`s must be expanded outside the `unsafe` block. 146 + // Typically one would use a `let` binding for that; however, `format_args!` 147 + // takes borrows on the arguments, but does not extend the scope of temporaries. 148 + // Therefore, a `match` expression is used to keep them around, since 149 + // the scrutinee is kept until the end of the `match`. 150 + match format_args!($($arg)+) { 151 + // SAFETY: This hidden macro should only be called by the documented 152 + // printing macros which ensure the format string is one of the fixed 153 + // ones. All `__LOG_PREFIX`s are null-terminated as they are generated 154 + // by the `module!` proc macro or fixed values defined in a kernel 155 + // crate. 156 + args => unsafe { 157 + $crate::print::call_printk( 158 + &$format_string, 159 + crate::__LOG_PREFIX, 160 + args, 161 + ); 162 + } 156 163 } 157 164 ); 158 165
+1
samples/ftrace/ftrace-direct-multi-modify.c
··· 152 152 { 153 153 kthread_stop(simple_tsk); 154 154 unregister_ftrace_direct_multi(&direct, my_tramp); 155 + ftrace_free_filter(&direct); 155 156 } 156 157 157 158 module_init(ftrace_direct_multi_init);
+1
samples/ftrace/ftrace-direct-multi.c
··· 79 79 static void __exit ftrace_direct_multi_exit(void) 80 80 { 81 81 unregister_ftrace_direct_multi(&direct, (unsigned long) my_tramp); 82 + ftrace_free_filter(&direct); 82 83 } 83 84 84 85 module_init(ftrace_direct_multi_init);
+5 -1
scripts/Makefile.modinst
··· 66 66 # Don't stop modules_install even if we can't sign external modules. 67 67 # 68 68 ifeq ($(CONFIG_MODULE_SIG_ALL),y) 69 + ifeq ($(filter pkcs11:%, $(CONFIG_MODULE_SIG_KEY)),) 69 70 sig-key := $(if $(wildcard $(CONFIG_MODULE_SIG_KEY)),,$(srctree)/)$(CONFIG_MODULE_SIG_KEY) 71 + else 72 + sig-key := $(CONFIG_MODULE_SIG_KEY) 73 + endif 70 74 quiet_cmd_sign = SIGN $@ 71 - cmd_sign = scripts/sign-file $(CONFIG_MODULE_SIG_HASH) $(sig-key) certs/signing_key.x509 $@ \ 75 + cmd_sign = scripts/sign-file $(CONFIG_MODULE_SIG_HASH) "$(sig-key)" certs/signing_key.x509 $@ \ 72 76 $(if $(KBUILD_EXTMOD),|| true) 73 77 else 74 78 quiet_cmd_sign :=
scripts/atomic/atomics.tbl
+2 -2
scripts/gcc-plugins/gcc-common.h
··· 71 71 #include "varasm.h" 72 72 #include "stor-layout.h" 73 73 #include "internal-fn.h" 74 + #include "gimple.h" 74 75 #include "gimple-expr.h" 76 + #include "gimple-iterator.h" 75 77 #include "gimple-fold.h" 76 78 #include "context.h" 77 79 #include "tree-ssa-alias.h" ··· 87 85 #include "tree-eh.h" 88 86 #include "stmt.h" 89 87 #include "gimplify.h" 90 - #include "gimple.h" 91 88 #include "tree-phinodes.h" 92 89 #include "tree-cfg.h" 93 - #include "gimple-iterator.h" 94 90 #include "gimple-ssa.h" 95 91 #include "ssa-iterators.h" 96 92
+26 -8
scripts/tracing/ftrace-bisect.sh
··· 12 12 # (note, if this is a problem with function_graph tracing, then simply 13 13 # replace "function" with "function_graph" in the following steps). 14 14 # 15 - # # cd /sys/kernel/debug/tracing 15 + # # cd /sys/kernel/tracing 16 16 # # echo schedule > set_ftrace_filter 17 17 # # echo function > current_tracer 18 18 # ··· 20 20 # 21 21 # # echo nop > current_tracer 22 22 # 23 - # # cat available_filter_functions > ~/full-file 23 + # Starting with v5.1 this can be done with numbers, making it much faster: 24 + # 25 + # The old (slow) way, for kernels before v5.1. 26 + # 27 + # [old-way] # cat available_filter_functions > ~/full-file 28 + # 29 + # [old-way] *** Note *** this process will take several minutes to update the 30 + # [old-way] filters. Setting multiple functions is an O(n^2) operation, and we 31 + # [old-way] are dealing with thousands of functions. So go have coffee, talk 32 + # [old-way] with your coworkers, read facebook. And eventually, this operation 33 + # [old-way] will end. 34 + # 35 + # The new way (using numbers) is an O(n) operation, and usually takes less than a second. 36 + # 37 + # seq `wc -l available_filter_functions | cut -d' ' -f1` > ~/full-file 38 + # 39 + # This will create a sequence of numbers that match the functions in 40 + # available_filter_functions, and when echoing in a number into the 41 + # set_ftrace_filter file, it will enable the corresponding function in 42 + # O(1) time. Making enabling all functions O(n) where n is the number of 43 + # functions to enable. 44 + # 45 + # For either the new or old way, the rest of the operations remain the same. 46 + # 24 47 # # ftrace-bisect ~/full-file ~/test-file ~/non-test-file 25 48 # # cat ~/test-file > set_ftrace_filter 26 - # 27 - # *** Note *** this will take several minutes. Setting multiple functions is 28 - # an O(n^2) operation, and we are dealing with thousands of functions. So go 29 - # have coffee, talk with your coworkers, read facebook. 
And eventually, this 30 - # operation will end. 31 49 # 32 50 # # echo function > current_tracer 33 51 # ··· 53 35 # 54 36 # Reboot back to test kernel. 55 37 # 56 - # # cd /sys/kernel/debug/tracing 38 + # # cd /sys/kernel/tracing 57 39 # # mv ~/test-file ~/full-file 58 40 # 59 41 # If it didn't crash.
+68 -17
sound/core/memalloc.c
··· 541 541 struct sg_table *sgt; 542 542 void *p; 543 543 544 + #ifdef CONFIG_SND_DMA_SGBUF 545 + if (cpu_feature_enabled(X86_FEATURE_XENPV)) 546 + return snd_dma_sg_fallback_alloc(dmab, size); 547 + #endif 544 548 sgt = dma_alloc_noncontiguous(dmab->dev.dev, size, dmab->dev.dir, 545 549 DEFAULT_GFP, 0); 546 550 #ifdef CONFIG_SND_DMA_SGBUF 547 - if (!sgt && !get_dma_ops(dmab->dev.dev)) { 548 - if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG) 549 - dmab->dev.type = SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK; 550 - else 551 - dmab->dev.type = SNDRV_DMA_TYPE_DEV_SG_FALLBACK; 551 + if (!sgt && !get_dma_ops(dmab->dev.dev)) 552 552 return snd_dma_sg_fallback_alloc(dmab, size); 553 - } 554 553 #endif 555 554 if (!sgt) 556 555 return NULL; ··· 716 717 717 718 /* Fallback SG-buffer allocations for x86 */ 718 719 struct snd_dma_sg_fallback { 720 + bool use_dma_alloc_coherent; 719 721 size_t count; 720 722 struct page **pages; 723 + /* DMA address array; the first page contains #pages in ~PAGE_MASK */ 724 + dma_addr_t *addrs; 721 725 }; 722 726 723 727 static void __snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab, 724 728 struct snd_dma_sg_fallback *sgbuf) 725 729 { 726 - bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK; 727 - size_t i; 730 + size_t i, size; 728 731 729 - for (i = 0; i < sgbuf->count && sgbuf->pages[i]; i++) 730 - do_free_pages(page_address(sgbuf->pages[i]), PAGE_SIZE, wc); 732 + if (sgbuf->pages && sgbuf->addrs) { 733 + i = 0; 734 + while (i < sgbuf->count) { 735 + if (!sgbuf->pages[i] || !sgbuf->addrs[i]) 736 + break; 737 + size = sgbuf->addrs[i] & ~PAGE_MASK; 738 + if (WARN_ON(!size)) 739 + break; 740 + if (sgbuf->use_dma_alloc_coherent) 741 + dma_free_coherent(dmab->dev.dev, size << PAGE_SHIFT, 742 + page_address(sgbuf->pages[i]), 743 + sgbuf->addrs[i] & PAGE_MASK); 744 + else 745 + do_free_pages(page_address(sgbuf->pages[i]), 746 + size << PAGE_SHIFT, false); 747 + i += size; 748 + } 749 + } 731 750 kvfree(sgbuf->pages); 751 + 
kvfree(sgbuf->addrs); 732 752 kfree(sgbuf); 733 753 } 734 754 ··· 756 738 struct snd_dma_sg_fallback *sgbuf; 757 739 struct page **pagep, *curp; 758 740 size_t chunk, npages; 741 + dma_addr_t *addrp; 759 742 dma_addr_t addr; 760 743 void *p; 761 - bool wc = dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK; 744 + 745 + /* correct the type */ 746 + if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) 747 + dmab->dev.type = SNDRV_DMA_TYPE_DEV_SG_FALLBACK; 748 + else if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG) 749 + dmab->dev.type = SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK; 762 750 763 751 sgbuf = kzalloc(sizeof(*sgbuf), GFP_KERNEL); 764 752 if (!sgbuf) 765 753 return NULL; 754 + sgbuf->use_dma_alloc_coherent = cpu_feature_enabled(X86_FEATURE_XENPV); 766 755 size = PAGE_ALIGN(size); 767 756 sgbuf->count = size >> PAGE_SHIFT; 768 757 sgbuf->pages = kvcalloc(sgbuf->count, sizeof(*sgbuf->pages), GFP_KERNEL); 769 - if (!sgbuf->pages) 758 + sgbuf->addrs = kvcalloc(sgbuf->count, sizeof(*sgbuf->addrs), GFP_KERNEL); 759 + if (!sgbuf->pages || !sgbuf->addrs) 770 760 goto error; 771 761 772 762 pagep = sgbuf->pages; 773 - chunk = size; 763 + addrp = sgbuf->addrs; 764 + chunk = (PAGE_SIZE - 1) << PAGE_SHIFT; /* to fit in low bits in addrs */ 774 765 while (size > 0) { 775 766 chunk = min(size, chunk); 776 - p = do_alloc_pages(dmab->dev.dev, chunk, &addr, wc); 767 + if (sgbuf->use_dma_alloc_coherent) 768 + p = dma_alloc_coherent(dmab->dev.dev, chunk, &addr, DEFAULT_GFP); 769 + else 770 + p = do_alloc_pages(dmab->dev.dev, chunk, &addr, false); 777 771 if (!p) { 778 772 if (chunk <= PAGE_SIZE) 779 773 goto error; ··· 797 767 size -= chunk; 798 768 /* fill pages */ 799 769 npages = chunk >> PAGE_SHIFT; 770 + *addrp = npages; /* store in lower bits */ 800 771 curp = virt_to_page(p); 801 - while (npages--) 772 + while (npages--) { 802 773 *pagep++ = curp++; 774 + *addrp++ |= addr; 775 + addr += PAGE_SIZE; 776 + } 803 777 } 804 778 805 779 p = vmap(sgbuf->pages, sgbuf->count, VM_MAP, 
PAGE_KERNEL); 806 780 if (!p) 807 781 goto error; 782 + 783 + if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK) 784 + set_pages_array_wc(sgbuf->pages, sgbuf->count); 785 + 808 786 dmab->private_data = sgbuf; 809 787 /* store the first page address for convenience */ 810 - dmab->addr = snd_sgbuf_get_addr(dmab, 0); 788 + dmab->addr = sgbuf->addrs[0] & PAGE_MASK; 811 789 return p; 812 790 813 791 error: ··· 825 787 826 788 static void snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab) 827 789 { 790 + struct snd_dma_sg_fallback *sgbuf = dmab->private_data; 791 + 792 + if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC_SG_FALLBACK) 793 + set_pages_array_wb(sgbuf->pages, sgbuf->count); 828 794 vunmap(dmab->area); 829 795 __snd_dma_sg_fallback_free(dmab, dmab->private_data); 796 + } 797 + 798 + static dma_addr_t snd_dma_sg_fallback_get_addr(struct snd_dma_buffer *dmab, 799 + size_t offset) 800 + { 801 + struct snd_dma_sg_fallback *sgbuf = dmab->private_data; 802 + size_t index = offset >> PAGE_SHIFT; 803 + 804 + return (sgbuf->addrs[index] & PAGE_MASK) | (offset & ~PAGE_MASK); 830 805 } 831 806 832 807 static int snd_dma_sg_fallback_mmap(struct snd_dma_buffer *dmab, ··· 856 805 .alloc = snd_dma_sg_fallback_alloc, 857 806 .free = snd_dma_sg_fallback_free, 858 807 .mmap = snd_dma_sg_fallback_mmap, 808 + .get_addr = snd_dma_sg_fallback_get_addr, 859 809 /* reuse vmalloc helpers */ 860 - .get_addr = snd_dma_vmalloc_get_addr, 861 810 .get_page = snd_dma_vmalloc_get_page, 862 811 .get_chunk_size = snd_dma_vmalloc_get_chunk_size, 863 812 };
+4
sound/firewire/motu/motu-hwdep.c
··· 87 87 return -EFAULT; 88 88 89 89 count = consumed; 90 + } else { 91 + spin_unlock_irq(&motu->lock); 92 + 93 + count = 0; 90 94 } 91 95 92 96 return count;
+2
sound/pci/hda/hda_bind.c
··· 144 144 145 145 error: 146 146 snd_hda_codec_cleanup_for_unbind(codec); 147 + codec->preset = NULL; 147 148 return err; 148 149 } 149 150 ··· 167 166 if (codec->patch_ops.free) 168 167 codec->patch_ops.free(codec); 169 168 snd_hda_codec_cleanup_for_unbind(codec); 169 + codec->preset = NULL; 170 170 module_put(dev->driver->owner); 171 171 return 0; 172 172 }
-1
sound/pci/hda/hda_codec.c
··· 795 795 snd_array_free(&codec->cvt_setups); 796 796 snd_array_free(&codec->spdif_out); 797 797 snd_array_free(&codec->verbs); 798 - codec->preset = NULL; 799 798 codec->follower_dig_outs = NULL; 800 799 codec->spdif_status_reset = 0; 801 800 snd_array_free(&codec->mixers);
+2
sound/pci/hda/patch_realtek.c
··· 9202 9202 SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 9203 9203 SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE), 9204 9204 SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC), 9205 + SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 9205 9206 SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), 9206 9207 SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X), 9207 9208 SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS), ··· 9433 9432 SND_PCI_QUIRK(0x103c, 0x8ad2, "HP EliteBook 860 16 inch G9 Notebook PC", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9434 9433 SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9435 9434 SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 9435 + SND_PCI_QUIRK(0x103c, 0x8b92, "HP", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 9436 9436 SND_PCI_QUIRK(0x103c, 0x8bf0, "HP", ALC236_FIXUP_HP_GPIO_LED), 9437 9437 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 9438 9438 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+3
sound/pci/hda/patch_via.c
··· 819 819 return 0; 820 820 nums = snd_hda_get_connections(codec, spec->gen.mixer_nid, conn, 821 821 ARRAY_SIZE(conn) - 1); 822 + if (nums < 0) 823 + return nums; 824 + 822 825 for (i = 0; i < nums; i++) { 823 826 if (get_wcaps_type(get_wcaps(codec, conn[i])) == AC_WID_AUD_OUT) 824 827 return 0;
+4 -2
sound/soc/amd/acp-es8336.c
··· 198 198 int ret; 199 199 200 200 adev = acpi_dev_get_first_match_dev("ESSX8336", NULL, -1); 201 - if (adev) 202 - put_device(&adev->dev); 201 + if (!adev) 202 + return -ENODEV; 203 + 203 204 codec_dev = acpi_get_first_physical_node(adev); 205 + acpi_dev_put(adev); 204 206 if (!codec_dev) 205 207 dev_err(card->dev, "can not find codec dev\n"); 206 208
+21
sound/soc/amd/yc/acp6x-mach.c
··· 230 230 { 231 231 .driver_data = &acp6x_card, 232 232 .matches = { 233 + DMI_MATCH(DMI_BOARD_VENDOR, "TIMI"), 234 + DMI_MATCH(DMI_PRODUCT_NAME, "Redmi Book Pro 15 2022"), 235 + } 236 + }, 237 + { 238 + .driver_data = &acp6x_card, 239 + .matches = { 233 240 DMI_MATCH(DMI_BOARD_VENDOR, "Razer"), 234 241 DMI_MATCH(DMI_PRODUCT_NAME, "Blade 14 (2022) - RZ09-0427"), 242 + } 243 + }, 244 + { 245 + .driver_data = &acp6x_card, 246 + .matches = { 247 + DMI_MATCH(DMI_BOARD_VENDOR, "RB"), 248 + DMI_MATCH(DMI_PRODUCT_NAME, "Swift SFA16-41"), 249 + } 250 + }, 251 + { 252 + .driver_data = &acp6x_card, 253 + .matches = { 254 + DMI_MATCH(DMI_BOARD_VENDOR, "IRBIS"), 255 + DMI_MATCH(DMI_PRODUCT_NAME, "15NBC1011"), 235 256 } 236 257 }, 237 258 {}
-6
sound/soc/codecs/cs42l56.c
··· 1191 1191 if (pdata) { 1192 1192 cs42l56->pdata = *pdata; 1193 1193 } else { 1194 - pdata = devm_kzalloc(&i2c_client->dev, sizeof(*pdata), 1195 - GFP_KERNEL); 1196 - if (!pdata) 1197 - return -ENOMEM; 1198 - 1199 1194 if (i2c_client->dev.of_node) { 1200 1195 ret = cs42l56_handle_of_data(i2c_client, 1201 1196 &cs42l56->pdata); 1202 1197 if (ret != 0) 1203 1198 return ret; 1204 1199 } 1205 - cs42l56->pdata = *pdata; 1206 1200 } 1207 1201 1208 1202 if (cs42l56->pdata.gpio_nreset) {
sound/soc/codecs/es8326.c
sound/soc/codecs/es8326.h
+2 -2
sound/soc/codecs/wsa883x.c
··· 1359 1359 .stream_name = "SPKR Playback", 1360 1360 .rates = WSA883X_RATES | WSA883X_FRAC_RATES, 1361 1361 .formats = WSA883X_FORMATS, 1362 - .rate_max = 8000, 1363 - .rate_min = 352800, 1362 + .rate_min = 8000, 1363 + .rate_max = 352800, 1364 1364 .channels_min = 1, 1365 1365 .channels_max = 1, 1366 1366 },
+24
sound/soc/intel/avs/core.c
··· 481 481 return ret; 482 482 } 483 483 484 + static void avs_pci_shutdown(struct pci_dev *pci) 485 + { 486 + struct hdac_bus *bus = pci_get_drvdata(pci); 487 + struct avs_dev *adev = hdac_to_avs(bus); 488 + 489 + cancel_work_sync(&adev->probe_work); 490 + avs_ipc_block(adev->ipc); 491 + 492 + snd_hdac_stop_streams(bus); 493 + avs_dsp_op(adev, int_control, false); 494 + snd_hdac_ext_bus_ppcap_int_enable(bus, false); 495 + snd_hdac_ext_bus_link_power_down_all(bus); 496 + 497 + snd_hdac_bus_stop_chip(bus); 498 + snd_hdac_display_power(bus, HDA_CODEC_IDX_CONTROLLER, false); 499 + 500 + if (avs_platattr_test(adev, CLDMA)) 501 + pci_free_irq(pci, 0, &code_loader); 502 + pci_free_irq(pci, 0, adev); 503 + pci_free_irq(pci, 0, bus); 504 + pci_free_irq_vectors(pci); 505 + } 506 + 484 507 static void avs_pci_remove(struct pci_dev *pci) 485 508 { 486 509 struct hdac_device *hdev, *save; ··· 762 739 .id_table = avs_ids, 763 740 .probe = avs_pci_probe, 764 741 .remove = avs_pci_remove, 742 + .shutdown = avs_pci_shutdown, 765 743 .driver = { 766 744 .pm = &avs_dev_pm, 767 745 },
+12 -8
sound/soc/intel/boards/bytcht_es8316.c
··· 497 497 if (adev) { 498 498 snprintf(codec_name, sizeof(codec_name), 499 499 "i2c-%s", acpi_dev_name(adev)); 500 - put_device(&adev->dev); 501 500 byt_cht_es8316_dais[dai_index].codecs->name = codec_name; 502 501 } else { 503 502 dev_err(dev, "Error cannot find '%s' dev\n", mach->id); 504 503 return -ENXIO; 505 504 } 505 + 506 + codec_dev = acpi_get_first_physical_node(adev); 507 + acpi_dev_put(adev); 508 + if (!codec_dev) 509 + return -EPROBE_DEFER; 510 + priv->codec_dev = get_device(codec_dev); 506 511 507 512 /* override platform name, if required */ 508 513 byt_cht_es8316_card.dev = dev; ··· 515 510 516 511 ret = snd_soc_fixup_dai_links_platform_name(&byt_cht_es8316_card, 517 512 platform_name); 518 - if (ret) 513 + if (ret) { 514 + put_device(codec_dev); 519 515 return ret; 516 + } 520 517 521 518 /* Check for BYTCR or other platform and setup quirks */ 522 519 dmi_id = dmi_first_match(byt_cht_es8316_quirk_table); ··· 546 539 547 540 /* get the clock */ 548 541 priv->mclk = devm_clk_get(dev, "pmc_plt_clk_3"); 549 - if (IS_ERR(priv->mclk)) 542 + if (IS_ERR(priv->mclk)) { 543 + put_device(codec_dev); 550 544 return dev_err_probe(dev, PTR_ERR(priv->mclk), "clk_get pmc_plt_clk_3 failed\n"); 551 - 552 - codec_dev = acpi_get_first_physical_node(adev); 553 - if (!codec_dev) 554 - return -EPROBE_DEFER; 555 - priv->codec_dev = get_device(codec_dev); 545 + } 556 546 557 547 if (quirk & BYT_CHT_ES8316_JD_INVERTED) 558 548 props[cnt++] = PROPERTY_ENTRY_BOOL("everest,jack-detect-inverted");
+6 -6
sound/soc/intel/boards/bytcr_rt5640.c
··· 1636 1636 if (adev) { 1637 1637 snprintf(byt_rt5640_codec_name, sizeof(byt_rt5640_codec_name), 1638 1638 "i2c-%s", acpi_dev_name(adev)); 1639 - put_device(&adev->dev); 1640 1639 byt_rt5640_dais[dai_index].codecs->name = byt_rt5640_codec_name; 1641 1640 } else { 1642 1641 dev_err(dev, "Error cannot find '%s' dev\n", mach->id); 1643 1642 return -ENXIO; 1644 1643 } 1644 + 1645 + codec_dev = acpi_get_first_physical_node(adev); 1646 + acpi_dev_put(adev); 1647 + if (!codec_dev) 1648 + return -EPROBE_DEFER; 1649 + priv->codec_dev = get_device(codec_dev); 1645 1650 1646 1651 /* 1647 1652 * swap SSP0 if bytcr is detected ··· 1721 1716 byt_rt5640_quirk, quirk_override); 1722 1717 byt_rt5640_quirk = quirk_override; 1723 1718 } 1724 - 1725 - codec_dev = acpi_get_first_physical_node(adev); 1726 - if (!codec_dev) 1727 - return -EPROBE_DEFER; 1728 - priv->codec_dev = get_device(codec_dev); 1729 1719 1730 1720 if (byt_rt5640_quirk & BYT_RT5640_JD_HP_ELITEP_1000G2) { 1731 1721 acpi_dev_add_driver_gpios(ACPI_COMPANION(priv->codec_dev),
+1 -1
sound/soc/intel/boards/bytcr_rt5651.c
··· 922 922 if (adev) { 923 923 snprintf(byt_rt5651_codec_name, sizeof(byt_rt5651_codec_name), 924 924 "i2c-%s", acpi_dev_name(adev)); 925 - put_device(&adev->dev); 926 925 byt_rt5651_dais[dai_index].codecs->name = byt_rt5651_codec_name; 927 926 } else { 928 927 dev_err(dev, "Error cannot find '%s' dev\n", mach->id); ··· 929 930 } 930 931 931 932 codec_dev = acpi_get_first_physical_node(adev); 933 + acpi_dev_put(adev); 932 934 if (!codec_dev) 933 935 return -EPROBE_DEFER; 934 936 priv->codec_dev = get_device(codec_dev);
+1 -1
sound/soc/intel/boards/bytcr_wm5102.c
··· 411 411 return -ENOENT; 412 412 } 413 413 snprintf(codec_name, sizeof(codec_name), "spi-%s", acpi_dev_name(adev)); 414 - put_device(&adev->dev); 415 414 416 415 codec_dev = bus_find_device_by_name(&spi_bus_type, NULL, codec_name); 416 + acpi_dev_put(adev); 417 417 if (!codec_dev) 418 418 return -EPROBE_DEFER; 419 419
+3
sound/soc/intel/boards/sof_cs42l42.c
··· 336 336 links[*id].platforms = platform_component; 337 337 links[*id].num_platforms = ARRAY_SIZE(platform_component); 338 338 links[*id].dpcm_playback = 1; 339 + /* firmware-generated echo reference */ 340 + links[*id].dpcm_capture = 1; 341 + 339 342 links[*id].no_pcm = 1; 340 343 links[*id].cpus = &cpus[*id]; 341 344 links[*id].num_cpus = 1;
+8 -6
sound/soc/intel/boards/sof_es8336.c
··· 681 681 if (adev) { 682 682 snprintf(codec_name, sizeof(codec_name), 683 683 "i2c-%s", acpi_dev_name(adev)); 684 - put_device(&adev->dev); 685 684 dai_links[0].codecs->name = codec_name; 686 685 687 686 /* also fixup codec dai name if relevant */ ··· 691 692 return -ENXIO; 692 693 } 693 694 694 - ret = snd_soc_fixup_dai_links_platform_name(&sof_es8336_card, 695 - mach->mach_params.platform); 696 - if (ret) 697 - return ret; 698 - 699 695 codec_dev = acpi_get_first_physical_node(adev); 696 + acpi_dev_put(adev); 700 697 if (!codec_dev) 701 698 return -EPROBE_DEFER; 702 699 priv->codec_dev = get_device(codec_dev); 700 + 701 + ret = snd_soc_fixup_dai_links_platform_name(&sof_es8336_card, 702 + mach->mach_params.platform); 703 + if (ret) { 704 + put_device(codec_dev); 705 + return ret; 706 + } 703 707 704 708 if (quirk & SOF_ES8336_JD_INVERTED) 705 709 props[cnt++] = PROPERTY_ENTRY_BOOL("everest,jack-detect-inverted");
+3 -2
sound/soc/intel/boards/sof_nau8825.c
··· 487 487 links[id].num_codecs = ARRAY_SIZE(max_98373_components); 488 488 links[id].init = max_98373_spk_codec_init; 489 489 links[id].ops = &max_98373_ops; 490 - /* feedback stream */ 491 - links[id].dpcm_capture = 1; 492 490 } else if (sof_nau8825_quirk & 493 491 SOF_MAX98360A_SPEAKER_AMP_PRESENT) { 494 492 max_98360a_dai_link(&links[id]); ··· 504 506 links[id].platforms = platform_component; 505 507 links[id].num_platforms = ARRAY_SIZE(platform_component); 506 508 links[id].dpcm_playback = 1; 509 + /* feedback stream or firmware-generated echo reference */ 510 + links[id].dpcm_capture = 1; 511 + 507 512 links[id].no_pcm = 1; 508 513 links[id].cpus = &cpus[id]; 509 514 links[id].num_cpus = 1;
+3 -2
sound/soc/intel/boards/sof_rt5682.c
··· 761 761 links[id].num_codecs = ARRAY_SIZE(max_98373_components); 762 762 links[id].init = max_98373_spk_codec_init; 763 763 links[id].ops = &max_98373_ops; 764 - /* feedback stream */ 765 - links[id].dpcm_capture = 1; 766 764 } else if (sof_rt5682_quirk & 767 765 SOF_MAX98360A_SPEAKER_AMP_PRESENT) { 768 766 max_98360a_dai_link(&links[id]); ··· 787 789 links[id].platforms = platform_component; 788 790 links[id].num_platforms = ARRAY_SIZE(platform_component); 789 791 links[id].dpcm_playback = 1; 792 + /* feedback stream or firmware-generated echo reference */ 793 + links[id].dpcm_capture = 1; 794 + 790 795 links[id].no_pcm = 1; 791 796 links[id].cpus = &cpus[id]; 792 797 links[id].num_cpus = 1;
+2 -3
sound/soc/intel/boards/sof_ssp_amp.c
··· 258 258 sof_rt1308_dai_link(&links[id]); 259 259 } else if (sof_ssp_amp_quirk & SOF_CS35L41_SPEAKER_AMP_PRESENT) { 260 260 cs35l41_set_dai_link(&links[id]); 261 - 262 - /* feedback from amplifier */ 263 - links[id].dpcm_capture = 1; 264 261 } 265 262 links[id].platforms = platform_component; 266 263 links[id].num_platforms = ARRAY_SIZE(platform_component); 267 264 links[id].dpcm_playback = 1; 265 + /* feedback from amplifier or firmware-generated echo reference */ 266 + links[id].dpcm_capture = 1; 268 267 links[id].no_pcm = 1; 269 268 links[id].cpus = &cpus[id]; 270 269 links[id].num_cpus = 1;
+4 -3
sound/soc/sof/ipc4-mtrace.c
··· 344 344 size_t count, loff_t *ppos) 345 345 { 346 346 struct sof_mtrace_priv *priv = file->private_data; 347 - int id, ret; 347 + unsigned int id; 348 348 char *buf; 349 349 u32 mask; 350 + int ret; 350 351 351 352 /* 352 353 * To update Nth mask entry, write: ··· 358 357 if (IS_ERR(buf)) 359 358 return PTR_ERR(buf); 360 359 361 - ret = sscanf(buf, "%d,0x%x", &id, &mask); 360 + ret = sscanf(buf, "%u,0x%x", &id, &mask); 362 361 if (ret != 2) { 363 - ret = sscanf(buf, "%d,%x", &id, &mask); 362 + ret = sscanf(buf, "%u,%x", &id, &mask); 364 363 if (ret != 2) { 365 364 ret = -EINVAL; 366 365 goto out;
+9 -7
sound/soc/sof/sof-audio.c
··· 271 271 struct snd_sof_widget *swidget = widget->dobj.private; 272 272 struct snd_soc_dapm_path *p; 273 273 274 - /* return if the widget is in use or if it is already unprepared */ 275 - if (!swidget->prepared || swidget->use_count > 1) 276 - return; 274 + /* skip if the widget is in use or if it is already unprepared */ 275 + if (!swidget || !swidget->prepared || swidget->use_count > 0) 276 + goto sink_unprepare; 277 277 278 278 if (widget_ops[widget->id].ipc_unprepare) 279 279 /* unprepare the source widget */ ··· 281 281 282 282 swidget->prepared = false; 283 283 284 + sink_unprepare: 284 285 /* unprepare all widgets in the sink paths */ 285 286 snd_soc_dapm_widget_for_each_sink_path(widget, p) { 286 287 if (!p->walking && p->sink->dobj.private) { ··· 304 303 struct snd_soc_dapm_path *p; 305 304 int ret; 306 305 307 - if (!widget_ops[widget->id].ipc_prepare || swidget->prepared) 306 + if (!swidget || !widget_ops[widget->id].ipc_prepare || swidget->prepared) 308 307 goto sink_prepare; 309 308 310 309 /* prepare the source widget */ ··· 327 326 p->walking = false; 328 327 if (ret < 0) { 329 328 /* unprepare the source widget */ 330 - if (widget_ops[widget->id].ipc_unprepare && swidget->prepared) { 329 + if (widget_ops[widget->id].ipc_unprepare && 330 + swidget && swidget->prepared) { 331 331 widget_ops[widget->id].ipc_unprepare(swidget); 332 332 swidget->prepared = false; 333 333 } ··· 431 429 432 430 for_each_dapm_widgets(list, i, widget) { 433 431 /* starting widget for playback is AIF type */ 434 - if (dir == SNDRV_PCM_STREAM_PLAYBACK && !WIDGET_IS_AIF(widget->id)) 432 + if (dir == SNDRV_PCM_STREAM_PLAYBACK && widget->id != snd_soc_dapm_aif_in) 435 433 continue; 436 434 437 435 /* starting widget for capture is DAI type */ 438 - if (dir == SNDRV_PCM_STREAM_CAPTURE && !WIDGET_IS_DAI(widget->id)) 436 + if (dir == SNDRV_PCM_STREAM_CAPTURE && widget->id != snd_soc_dapm_dai_out) 439 437 continue; 440 438 441 439 switch (op) {
+2
sound/usb/quirks.c
··· 2152 2152 QUIRK_FLAG_GENERIC_IMPLICIT_FB), 2153 2153 DEVICE_FLG(0x0525, 0xa4ad, /* Hamedal C20 usb camero */ 2154 2154 QUIRK_FLAG_IFACE_SKIP_CLOSE), 2155 + DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */ 2156 + QUIRK_FLAG_FIXED_RATE), 2155 2157 DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */ 2156 2158 QUIRK_FLAG_FIXED_RATE), 2157 2159
+1
tools/gpio/gpio-event-mon.c
··· 86 86 gpiotools_test_bit(values.bits, i)); 87 87 } 88 88 89 + i = 0; 89 90 while (1) { 90 91 struct gpio_v2_line_event event; 91 92
-5
tools/testing/selftests/amd-pstate/Makefile
··· 7 7 uname_M := $(shell uname -m 2>/dev/null || echo not) 8 8 ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/) 9 9 10 - ifeq (x86,$(ARCH)) 11 - TEST_GEN_FILES += ../../../power/x86/amd_pstate_tracer/amd_pstate_trace.py 12 - TEST_GEN_FILES += ../../../power/x86/intel_pstate_tracer/intel_pstate_tracer.py 13 - endif 14 - 15 10 TEST_PROGS := run.sh 16 11 TEST_FILES := basic.sh tbench.sh gitsource.sh 17 12
+63 -18
tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
··· 30 30 #define MAX_STRERR_LEN 256 31 31 #define MAX_TEST_NAME 80 32 32 33 + #define __always_unused __attribute__((__unused__)) 34 + 33 35 #define _FAIL(errnum, fmt...) \ 34 36 ({ \ 35 37 error_at_line(0, (errnum), __func__, __LINE__, fmt); \ ··· 323 321 return socket_loopback_reuseport(family, sotype, -1); 324 322 } 325 323 326 - static void test_insert_invalid(int family, int sotype, int mapfd) 324 + static void test_insert_invalid(struct test_sockmap_listen *skel __always_unused, 325 + int family, int sotype, int mapfd) 327 326 { 328 327 u32 key = 0; 329 328 u64 value; ··· 341 338 FAIL_ERRNO("map_update: expected EBADF"); 342 339 } 343 340 344 - static void test_insert_opened(int family, int sotype, int mapfd) 341 + static void test_insert_opened(struct test_sockmap_listen *skel __always_unused, 342 + int family, int sotype, int mapfd) 345 343 { 346 344 u32 key = 0; 347 345 u64 value; ··· 363 359 xclose(s); 364 360 } 365 361 366 - static void test_insert_bound(int family, int sotype, int mapfd) 362 + static void test_insert_bound(struct test_sockmap_listen *skel __always_unused, 363 + int family, int sotype, int mapfd) 367 364 { 368 365 struct sockaddr_storage addr; 369 366 socklen_t len; ··· 391 386 xclose(s); 392 387 } 393 388 394 - static void test_insert(int family, int sotype, int mapfd) 389 + static void test_insert(struct test_sockmap_listen *skel __always_unused, 390 + int family, int sotype, int mapfd) 395 391 { 396 392 u64 value; 397 393 u32 key; ··· 408 402 xclose(s); 409 403 } 410 404 411 - static void test_delete_after_insert(int family, int sotype, int mapfd) 405 + static void test_delete_after_insert(struct test_sockmap_listen *skel __always_unused, 406 + int family, int sotype, int mapfd) 412 407 { 413 408 u64 value; 414 409 u32 key; ··· 426 419 xclose(s); 427 420 } 428 421 429 - static void test_delete_after_close(int family, int sotype, int mapfd) 422 + static void test_delete_after_close(struct test_sockmap_listen *skel __always_unused, 423 + int family, int sotype, int mapfd) 430 424 { 431 425 int err, s; 432 426 u64 value; ··· 450 442 FAIL_ERRNO("map_delete: expected EINVAL/EINVAL"); 451 443 } 452 444 453 - static void test_lookup_after_insert(int family, int sotype, int mapfd) 445 + static void test_lookup_after_insert(struct test_sockmap_listen *skel __always_unused, 446 + int family, int sotype, int mapfd) 454 447 { 455 448 u64 cookie, value; 456 449 socklen_t len; ··· 479 470 xclose(s); 480 471 } 481 472 482 - static void test_lookup_after_delete(int family, int sotype, int mapfd) 473 + static void test_lookup_after_delete(struct test_sockmap_listen *skel __always_unused, 474 + int family, int sotype, int mapfd) 483 475 { 484 476 int err, s; 485 477 u64 value; ··· 503 493 xclose(s); 504 494 } 505 495 506 - static void test_lookup_32_bit_value(int family, int sotype, int mapfd) 496 + static void test_lookup_32_bit_value(struct test_sockmap_listen *skel __always_unused, 497 + int family, int sotype, int mapfd) 507 498 { 508 499 u32 key, value32; 509 500 int err, s; ··· 534 523 xclose(s); 535 524 } 536 525 537 - static void test_update_existing(int family, int sotype, int mapfd) 526 + static void test_update_existing(struct test_sockmap_listen *skel __always_unused, 527 + int family, int sotype, int mapfd) 538 528 { 539 529 int s1, s2; 540 530 u64 value; ··· 563 551 /* Exercise the code path where we destroy child sockets that never 564 552 * got accept()'ed, aka orphans, when parent socket gets closed. 565 553 */ 566 - static void test_destroy_orphan_child(int family, int sotype, int mapfd) 554 + static void do_destroy_orphan_child(int family, int sotype, int mapfd) 567 555 { 568 556 struct sockaddr_storage addr; 569 557 socklen_t len; ··· 594 582 xclose(s); 595 583 } 596 584 585 + static void test_destroy_orphan_child(struct test_sockmap_listen *skel, 586 + int family, int sotype, int mapfd) 587 + { 588 + int msg_verdict = bpf_program__fd(skel->progs.prog_msg_verdict); 589 + int skb_verdict = bpf_program__fd(skel->progs.prog_skb_verdict); 590 + const struct test { 591 + int progfd; 592 + enum bpf_attach_type atype; 593 + } tests[] = { 594 + { -1, -1 }, 595 + { msg_verdict, BPF_SK_MSG_VERDICT }, 596 + { skb_verdict, BPF_SK_SKB_VERDICT }, 597 + }; 598 + const struct test *t; 599 + 600 + for (t = tests; t < tests + ARRAY_SIZE(tests); t++) { 601 + if (t->progfd != -1 && 602 + xbpf_prog_attach(t->progfd, mapfd, t->atype, 0) != 0) 603 + return; 604 + 605 + do_destroy_orphan_child(family, sotype, mapfd); 606 + 607 + if (t->progfd != -1) 608 + xbpf_prog_detach2(t->progfd, mapfd, t->atype); 609 + } 610 + } 611 + 597 612 /* Perform a passive open after removing listening socket from SOCKMAP 598 613 * to ensure that callbacks get restored properly. 599 614 */ 600 - static void test_clone_after_delete(int family, int sotype, int mapfd) 615 + static void test_clone_after_delete(struct test_sockmap_listen *skel __always_unused, 616 + int family, int sotype, int mapfd) 601 617 { 602 618 struct sockaddr_storage addr; 603 619 socklen_t len; ··· 661 621 * SOCKMAP, but got accept()'ed only after the parent has been removed 662 622 * from SOCKMAP, gets cloned without parent psock state or callbacks. 663 623 */ 664 - static void test_accept_after_delete(int family, int sotype, int mapfd) 624 + static void test_accept_after_delete(struct test_sockmap_listen *skel __always_unused, 625 + int family, int sotype, int mapfd) 665 626 { 666 627 struct sockaddr_storage addr; 667 628 const u32 zero = 0; ··· 716 675 /* Check that child socket that got created and accepted while parent 717 676 * was in a SOCKMAP is cloned without parent psock state or callbacks. 718 677 */ 719 - static void test_accept_before_delete(int family, int sotype, int mapfd) 678 + static void test_accept_before_delete(struct test_sockmap_listen *skel __always_unused, 679 + int family, int sotype, int mapfd) 720 680 { 721 681 struct sockaddr_storage addr; 722 682 const u32 zero = 0, one = 1; ··· 826 784 return NULL; 827 785 } 828 786 829 - static void test_syn_recv_insert_delete(int family, int sotype, int mapfd) 787 + static void test_syn_recv_insert_delete(struct test_sockmap_listen *skel __always_unused, 788 + int family, int sotype, int mapfd) 830 789 { 831 790 struct connect_accept_ctx ctx = { 0 }; 832 791 struct sockaddr_storage addr; ··· 890 847 return NULL; 891 848 } 892 849 893 - static void test_race_insert_listen(int family, int socktype, int mapfd) 850 + static void test_race_insert_listen(struct test_sockmap_listen *skel __always_unused, 851 + int family, int socktype, int mapfd) 894 852 { 895 853 struct connect_accept_ctx ctx = { 0 }; 896 854 const u32 zero = 0; ··· 1517 1473 int family, int sotype) 1518 1474 { 1519 1475 const struct op_test { 1520 - void (*fn)(int family, int sotype, int mapfd); 1476 + void (*fn)(struct test_sockmap_listen *skel, 1477 + int family, int sotype, int mapfd); 1521 1478 const char *name; 1522 1479 int sotype; 1523 1480 } tests[] = { ··· 1565 1520 if (!test__start_subtest(s)) 1566 1521 continue; 1567 1522 1568 - t->fn(family, sotype, map_fd); 1523 + t->fn(skel, family, sotype, map_fd); 1569 1524 test_ops_cleanup(map); 1570 1525 }
+36
tools/testing/selftests/bpf/verifier/search_pruning.c
··· 225 225 .result_unpriv = ACCEPT, 226 226 .insn_processed = 15, 227 227 }, 228 + /* The test performs a conditional 64-bit write to a stack location 229 + * fp[-8], this is followed by an unconditional 8-bit write to fp[-8], 230 + * then data is read from fp[-8]. This sequence is unsafe. 231 + * 232 + * The test would be mistakenly marked as safe w/o dst register parent 233 + * preservation in verifier.c:copy_register_state() function. 234 + * 235 + * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the 236 + * checkpoint state after conditional 64-bit assignment. 237 + */ 238 + { 239 + "write tracking and register parent chain bug", 240 + .insns = { 241 + /* r6 = ktime_get_ns() */ 242 + BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), 243 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_0), 244 + /* r0 = ktime_get_ns() */ 245 + BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns), 246 + /* if r0 > r6 goto +1 */ 247 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_6, 1), 248 + /* *(u64 *)(r10 - 8) = 0xdeadbeef */ 249 + BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0xdeadbeef), 250 + /* r1 = 42 */ 251 + BPF_MOV64_IMM(BPF_REG_1, 42), 252 + /* *(u8 *)(r10 - 8) = r1 */ 253 + BPF_STX_MEM(BPF_B, BPF_REG_FP, BPF_REG_1, -8), 254 + /* r2 = *(u64 *)(r10 - 8) */ 255 + BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_FP, -8), 256 + /* exit(0) */ 257 + BPF_MOV64_IMM(BPF_REG_0, 0), 258 + BPF_EXIT_INSN(), 259 + }, 260 + .flags = BPF_F_TEST_STATE_FREQ, 261 + .errstr = "invalid read from stack off -8+1 size 8", 262 + .result = REJECT, 263 + },
+1
tools/testing/selftests/cgroup/test_cpuset_prs.sh
··· 268 268 # Taking away all CPUs from parent or itself if there are tasks 269 269 # will make the partition invalid. 270 270 " S+ C2-3:P1:S+ C3:P1 . . T C2-3 . . 0 A1:2-3,A2:2-3 A1:P1,A2:P-1" 271 + " S+ C3:P1:S+ C3 . . T P1 . . 0 A1:3,A2:3 A1:P1,A2:P-1" 271 272 " S+ $SETUP_A123_PARTITIONS . T:C2-3 . . . 0 A1:2-3,A2:2-3,A3:3 A1:P1,A2:P-1,A3:P-1" 272 273 " S+ $SETUP_A123_PARTITIONS . T:C2-3:C1-3 . . . 0 A1:1,A2:2,A3:3 A1:P1,A2:P1,A3:P1" 273 274
tools/testing/selftests/filesystems/fat/run_fat_tests.sh
+103 -84
tools/testing/selftests/kvm/aarch64/page_fault_test.c
··· 237 237 GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG); 238 238 } 239 239 240 + static void guest_check_no_s1ptw_wr_in_dirty_log(void) 241 + { 242 + GUEST_SYNC(CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG); 243 + } 244 + 240 245 static void guest_exec(void) 241 246 { 242 247 int (*code)(void) = (int (*)(void))TEST_EXEC_GVA; ··· 309 304 310 305 /* Returns true to continue the test, and false if it should be skipped. */ 311 306 static int uffd_generic_handler(int uffd_mode, int uffd, struct uffd_msg *msg, 312 - struct uffd_args *args, bool expect_write) 307 + struct uffd_args *args) 313 308 { 314 309 uint64_t addr = msg->arg.pagefault.address; 315 310 uint64_t flags = msg->arg.pagefault.flags; ··· 318 313 319 314 TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING, 320 315 "The only expected UFFD mode is MISSING"); 321 - ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write); 322 316 ASSERT_EQ(addr, (uint64_t)args->hva); 323 317 324 318 pr_debug("uffd fault: addr=%p write=%d\n", ··· 341 337 return 0; 342 338 } 343 339 344 - static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg) 340 + static int uffd_pt_handler(int mode, int uffd, struct uffd_msg *msg) 345 341 { 346 - return uffd_generic_handler(mode, uffd, msg, &pt_args, true); 342 + return uffd_generic_handler(mode, uffd, msg, &pt_args); 347 343 } 348 344 349 - static int uffd_data_write_handler(int mode, int uffd, struct uffd_msg *msg) 345 + static int uffd_data_handler(int mode, int uffd, struct uffd_msg *msg) 350 346 { 351 - return uffd_generic_handler(mode, uffd, msg, &data_args, true); 352 - } 353 - 354 - static int uffd_data_read_handler(int mode, int uffd, struct uffd_msg *msg) 355 - { 356 - return uffd_generic_handler(mode, uffd, msg, &data_args, false); 347 + return uffd_generic_handler(mode, uffd, msg, &data_args); 357 348 } 358 349 359 350 static void setup_uffd_args(struct userspace_mem_region *region, ··· 470 471 { 471 472 struct userspace_mem_region *data_region, *pt_region; 472 473 bool continue_test = true; 474 + uint64_t pte_gpa, pte_pg; 473 475 474 476 data_region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA); 475 477 pt_region = vm_get_mem_region(vm, MEM_REGION_PT); 478 + pte_gpa = addr_hva2gpa(vm, virt_get_pte_hva(vm, TEST_GVA)); 479 + pte_pg = (pte_gpa - pt_region->region.guest_phys_addr) / getpagesize(); 476 480 477 481 if (cmd == CMD_SKIP_TEST) 478 482 continue_test = false; ··· 488 486 TEST_ASSERT(check_write_in_dirty_log(vm, data_region, 0), 489 487 "Missing write in dirty log"); 490 488 if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG) 491 - TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, 0), 489 + TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, pte_pg), 492 490 "Missing s1ptw write in dirty log"); 493 491 if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG) 494 492 TEST_ASSERT(!check_write_in_dirty_log(vm, data_region, 0), 495 493 "Unexpected write in dirty log"); 496 494 if (cmd & CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG) 497 - TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, 0), 495 + TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, pte_pg), 498 496 "Unexpected s1ptw write in dirty log"); 499 497 500 498 return continue_test; ··· 799 797 .expected_events = { .uffd_faults = _uffd_faults, }, \ 800 798 } 801 799 802 - #define TEST_DIRTY_LOG(_access, _with_af, _test_check) \ 800 + #define TEST_DIRTY_LOG(_access, _with_af, _test_check, _pt_check) \ 803 801 { \ 804 802 .name = SCAT3(dirty_log, _access, _with_af), \ 805 803 .data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ ··· 807 805 .guest_prepare = { _PREPARE(_with_af), \ 808 806 _PREPARE(_access) }, \ 809 807 .guest_test = _access, \ 810 - .guest_test_check = { _CHECK(_with_af), _test_check, \ 811 - guest_check_s1ptw_wr_in_dirty_log}, \ 808 + .guest_test_check = { _CHECK(_with_af), _test_check, _pt_check }, \ 812 809 .expected_events = { 0 }, \ 813 810 } 814 811 815 812 #define TEST_UFFD_AND_DIRTY_LOG(_access, _with_af, _uffd_data_handler, \ 816 - _uffd_faults, _test_check) \ 813 + _uffd_faults, _test_check, _pt_check) \ 817 814 { \ 818 815 .name = SCAT3(uffd_and_dirty_log, _access, _with_af), \ 819 816 .data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ ··· 821 820 _PREPARE(_access) }, \ 822 821 .guest_test = _access, \ 823 822 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 824 - .guest_test_check = { _CHECK(_with_af), _test_check }, \ 823 + .guest_test_check = { _CHECK(_with_af), _test_check, _pt_check }, \ 825 824 .uffd_data_handler = _uffd_data_handler, \ 826 - .uffd_pt_handler = uffd_pt_write_handler, \ 825 + .uffd_pt_handler = uffd_pt_handler, \ 827 826 .expected_events = { .uffd_faults = _uffd_faults, }, \ 828 827 } 829 828 830 829 #define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits) \ 831 830 { \ 832 - .name = SCAT3(ro_memslot, _access, _with_af), \ 831 + .name = SCAT2(ro_memslot, _access), \ 833 832 .data_memslot_flags = KVM_MEM_READONLY, \ 833 + .pt_memslot_flags = KVM_MEM_READONLY, \ 834 834 .guest_prepare = { _PREPARE(_access) }, \ 835 835 .guest_test = _access, \ 836 836 .mmio_handler = _mmio_handler, \ ··· 842 840 { \ 843 841 .name = SCAT2(ro_memslot_no_syndrome, _access), \ 844 842 .data_memslot_flags = KVM_MEM_READONLY, \ 843 + .pt_memslot_flags = KVM_MEM_READONLY, \ 845 844 .guest_test = _access, \ 846 845 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ 847 846 .expected_events = { .fail_vcpu_runs = 1 }, \ ··· 851 848 #define TEST_RO_MEMSLOT_AND_DIRTY_LOG(_access, _mmio_handler, _mmio_exits, \ 852 849 _test_check) \ 853 850 { \ 854 - .name = SCAT3(ro_memslot, _access, _with_af), \ 851 + .name = SCAT2(ro_memslot, _access), \ 855 852 .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 856 - .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ 853 + .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 857 854 .guest_prepare = { _PREPARE(_access) }, \ 858 855 .guest_test = _access, \ 859 856 .guest_test_check = { _test_check }, \ ··· 865 862 { \ 866 863 .name = SCAT2(ro_memslot_no_syn_and_dlog, _access), \ 867 864 .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 868 - .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ 865 + .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 869 866 .guest_test = _access, \ 870 867 .guest_test_check = { _test_check }, \ 871 868 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ ··· 877 874 { \ 878 875 .name = SCAT2(ro_memslot_uffd, _access), \ 879 876 .data_memslot_flags = KVM_MEM_READONLY, \ 877 + .pt_memslot_flags = KVM_MEM_READONLY, \ 880 878 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 881 879 .guest_prepare = { _PREPARE(_access) }, \ 882 880 .guest_test = _access, \ 883 881 .uffd_data_handler = _uffd_data_handler, \ 884 - .uffd_pt_handler = uffd_pt_write_handler, \ 882 + .uffd_pt_handler = uffd_pt_handler, \ 885 883 .mmio_handler = _mmio_handler, \ 886 884 .expected_events = { .mmio_exits = _mmio_exits, \ 887 885 .uffd_faults = _uffd_faults }, \ ··· 893 889 { \ 894 890 .name = SCAT2(ro_memslot_no_syndrome, _access), \ 895 891 .data_memslot_flags = KVM_MEM_READONLY, \ 892 + .pt_memslot_flags = KVM_MEM_READONLY, \ 896 893 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 897 894 .guest_test = _access, \ 898 895 .uffd_data_handler = _uffd_data_handler, \ 899 - .uffd_pt_handler = uffd_pt_write_handler, \ 896 + .uffd_pt_handler = uffd_pt_handler, \ 900 897 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ 901 898 .expected_events = { .fail_vcpu_runs = 1, \ 902 899 .uffd_faults = _uffd_faults }, \ ··· 938 933 * (S1PTW). 939 934 */ 940 935 TEST_UFFD(guest_read64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 941 - uffd_data_read_handler, uffd_pt_write_handler, 2), 942 - /* no_af should also lead to a PT write. */ 936 + uffd_data_handler, uffd_pt_handler, 2), 943 937 TEST_UFFD(guest_read64, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, 944 - uffd_data_read_handler, uffd_pt_write_handler, 2), 945 - /* Note how that cas invokes the read handler. */ 938 + uffd_data_handler, uffd_pt_handler, 2), 946 939 TEST_UFFD(guest_cas, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 947 - uffd_data_read_handler, uffd_pt_write_handler, 2), 940 + uffd_data_handler, uffd_pt_handler, 2), 948 941 /* 949 942 * Can't test guest_at with_af as it's IMPDEF whether the AF is set. 950 943 * The S1PTW fault should still be marked as a write. 951 944 */ 952 945 TEST_UFFD(guest_at, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, 953 - uffd_data_read_handler, uffd_pt_write_handler, 1), 946 + uffd_no_handler, uffd_pt_handler, 1), 954 947 TEST_UFFD(guest_ld_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 955 - uffd_data_read_handler, uffd_pt_write_handler, 2), 948 + uffd_data_handler, uffd_pt_handler, 2), 956 949 TEST_UFFD(guest_write64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 957 - uffd_data_write_handler, uffd_pt_write_handler, 2), 950 + uffd_data_handler, uffd_pt_handler, 2), 958 951 TEST_UFFD(guest_dc_zva, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 959 - uffd_data_write_handler, uffd_pt_write_handler, 2), 952 + uffd_data_handler, uffd_pt_handler, 2), 960 953 TEST_UFFD(guest_st_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 961 - uffd_data_write_handler, uffd_pt_write_handler, 2), 954 + uffd_data_handler, uffd_pt_handler, 2), 962 955 TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 963 - uffd_data_read_handler, uffd_pt_write_handler, 2), 956 + uffd_data_handler, uffd_pt_handler, 2), 964 957 965 958 /* 966 959 * Try accesses when the data and PT memory regions are both 967 960 * tracked for dirty logging. 968 961 */ 969 - TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log), 970 - /* no_af should also lead to a PT write. */ 971 - TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log), 972 - TEST_DIRTY_LOG(guest_ld_preidx, with_af, guest_check_no_write_in_dirty_log), 973 - TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log), 974 - TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log), 975 - TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log), 976 - TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log), 977 - TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log), 978 - TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log), 962 + TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log, 963 + guest_check_s1ptw_wr_in_dirty_log), 964 + TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log, 965 + guest_check_no_s1ptw_wr_in_dirty_log), 966 + TEST_DIRTY_LOG(guest_ld_preidx, with_af, 967 + guest_check_no_write_in_dirty_log, 968 + guest_check_s1ptw_wr_in_dirty_log), 969 + TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log, 970 + guest_check_no_s1ptw_wr_in_dirty_log), 971 + TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log, 972 + guest_check_s1ptw_wr_in_dirty_log), 973 + TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log, 974 + guest_check_s1ptw_wr_in_dirty_log), 975 + TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log, 976 + guest_check_s1ptw_wr_in_dirty_log), 977 + TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log, 978 + guest_check_s1ptw_wr_in_dirty_log), 979 + TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log, 980 + guest_check_s1ptw_wr_in_dirty_log), 979 981 980 982 /* 981 983 * Access when the data and PT memory regions are both marked for ··· 992 980 * fault, and nothing in the dirty log. Any S1PTW should result in 993 981 * a write in the dirty log and a userfaultfd write. 994 982 */ 995 - TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, uffd_data_read_handler, 2, 996 - guest_check_no_write_in_dirty_log), 997 - /* no_af should also lead to a PT write. */ 998 - TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, uffd_data_read_handler, 2, 999 - guest_check_no_write_in_dirty_log), 1000 - TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, uffd_data_read_handler, 1001 - 2, guest_check_no_write_in_dirty_log), 1002 - TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, 0, 1, 1003 - guest_check_no_write_in_dirty_log), 1004 - TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, uffd_data_read_handler, 2, 1005 - guest_check_no_write_in_dirty_log), 1006 - TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, uffd_data_write_handler, 1007 - 2, guest_check_write_in_dirty_log), 1008 - TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, uffd_data_read_handler, 2, 1009 - guest_check_write_in_dirty_log), 1010 - TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, uffd_data_write_handler, 1011 - 2, guest_check_write_in_dirty_log), 983 + TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, 984 + uffd_data_handler, 2, 985 + guest_check_no_write_in_dirty_log, 986 + guest_check_s1ptw_wr_in_dirty_log), 987 + TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, 988 + uffd_data_handler, 2, 989 + guest_check_no_write_in_dirty_log, 990 + guest_check_no_s1ptw_wr_in_dirty_log), 991 + TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, 992 + uffd_data_handler, 993 + 2, guest_check_no_write_in_dirty_log, 994 + guest_check_s1ptw_wr_in_dirty_log), 995 + TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, uffd_no_handler, 1, 996 + guest_check_no_write_in_dirty_log, 997 + guest_check_s1ptw_wr_in_dirty_log), 998 + TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, 999 + uffd_data_handler, 2, 1000 + guest_check_no_write_in_dirty_log, 1001 + guest_check_s1ptw_wr_in_dirty_log), 1002 + TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, 1003 + uffd_data_handler, 1004 + 2, guest_check_write_in_dirty_log, 1005 + guest_check_s1ptw_wr_in_dirty_log), 1006 + TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, 1007 + uffd_data_handler, 2, 1008 + guest_check_write_in_dirty_log, 1009 + guest_check_s1ptw_wr_in_dirty_log), 1010 + TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, 1011 + uffd_data_handler, 1012 + 2, guest_check_write_in_dirty_log, 1013 + guest_check_s1ptw_wr_in_dirty_log), 1012 1014 TEST_UFFD_AND_DIRTY_LOG(guest_st_preidx, with_af, 1013 - uffd_data_write_handler, 2, 1014 - guest_check_write_in_dirty_log), 1015 - 1015 + uffd_data_handler, 2, 1016 + guest_check_write_in_dirty_log, 1017 + guest_check_s1ptw_wr_in_dirty_log), 1016 1018 /* 1017 - * Try accesses when the data memory region is marked read-only 1019 + * Access when both the PT and data regions are marked read-only 1018 1020 * (with KVM_MEM_READONLY). Writes with a syndrome result in an 1019 1021 * MMIO exit, writes with no syndrome (e.g., CAS) result in a 1020 1022 * failed vcpu run, and reads/execs with and without syndroms do ··· 1044 1018 TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx), 1045 1019 1046 1020 /* 1047 - * Access when both the data region is both read-only and marked 1021 + * The PT and data regions are both read-only and marked 1048 1022 * for dirty logging at the same time. The expected result is that 1049 1023 * for writes there should be no write in the dirty log. The 1050 1024 * readonly handling is the same as if the memslot was not marked ··· 1069 1043 guest_check_no_write_in_dirty_log), 1070 1044 1071 1045 /* 1072 - * Access when the data region is both read-only and punched with 1046 + * The PT and data regions are both read-only and punched with 1073 1047 * holes tracked with userfaultfd. The expected result is the 1074 1048 * union of both userfaultfd and read-only behaviors. For example, 1075 1049 * write accesses result in a userfaultfd write fault and an MMIO ··· 1077 1051 * no userfaultfd write fault. Reads result in userfaultfd getting 1078 1052 * triggered. 1079 1053 */ 1080 - TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, 1081 - uffd_data_read_handler, 2), 1082 - TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, 1083 - uffd_data_read_handler, 2), 1084 - TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, 1085 - uffd_no_handler, 1), 1086 - TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0, 1087 - uffd_data_read_handler, 2), 1054 + TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, uffd_data_handler, 2), 1055 + TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, uffd_data_handler, 2), 1056 + TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, uffd_no_handler, 1), 1057 + TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0, uffd_data_handler, 2), 1088 1058 TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1, 1089 - uffd_data_write_handler, 2), 1090 - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas, 1091 - uffd_data_read_handler, 2), 1092 - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva, 1093 - uffd_no_handler, 1), 1094 - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx, 1095 - uffd_no_handler, 1), 1059 + uffd_data_handler, 2), 1060 + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas, uffd_data_handler, 2), 1061 + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva, uffd_no_handler, 1), 1062 + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx, uffd_no_handler, 1), 1096 1063 1097 1064 { 0 } 1098 1065 };
+1 -1
tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
··· 241 241 while ((opt = getopt(argc, argv, "hp:t:r")) != -1) { 242 242 switch (opt) { 243 243 case 'p': 244 - reclaim_period_ms = atoi_non_negative("Reclaim period", optarg); 244 + reclaim_period_ms = atoi_positive("Reclaim period", optarg); 245 245 break; 246 246 case 't': 247 247 token = atoi_paranoid(optarg);
+3 -4
tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
··· 434 434 int main(int argc, char *argv[]) 435 435 { 436 436 struct timespec min_ts, max_ts, vm_ts; 437 + struct kvm_xen_hvm_attr evt_reset; 437 438 struct kvm_vm *vm; 438 439 pthread_t thread; 439 440 bool verbose; ··· 963 962 } 964 963 965 964 done: 966 - struct kvm_xen_hvm_attr evt_reset = { 967 - .type = KVM_XEN_ATTR_TYPE_EVTCHN, 968 - .u.evtchn.flags = KVM_XEN_EVTCHN_RESET, 969 - }; 965 + evt_reset.type = KVM_XEN_ATTR_TYPE_EVTCHN; 966 + evt_reset.u.evtchn.flags = KVM_XEN_EVTCHN_RESET; 970 967 vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &evt_reset); 971 968 972 969 alarm(0);
+1 -1
tools/testing/selftests/net/cmsg_ipv6.sh
··· 6 6 NS=ns 7 7 IP6=2001:db8:1::1/64 8 8 TGT6=2001:db8:1::2 9 - TMPF=`mktemp` 9 + TMPF=$(mktemp --suffix ".pcap") 10 10 11 11 cleanup() 12 12 {
+19 -3
tools/testing/selftests/net/udpgso_bench.sh
··· 7 7 readonly YELLOW='\033[0;33m' 8 8 readonly RED='\033[0;31m' 9 9 readonly NC='\033[0m' # No Color 10 + readonly TESTPORT=8000 10 11 11 12 readonly KSFT_PASS=0 12 13 readonly KSFT_FAIL=1 ··· 57 56 58 57 run_one() { 59 58 local -r args=$@ 59 + local nr_socks=0 60 + local i=0 61 + local -r timeout=10 60 62 61 - ./udpgso_bench_rx & 62 - ./udpgso_bench_rx -t & 63 + ./udpgso_bench_rx -p "$TESTPORT" & 64 + ./udpgso_bench_rx -p "$TESTPORT" -t & 63 65 64 - ./udpgso_bench_tx ${args} 66 + # Wait for the above test program to get ready to receive connections. 67 + while [ "$i" -lt "$timeout" ]; do 68 + nr_socks="$(ss -lnHi | grep -c "\*:${TESTPORT}")" 69 + [ "$nr_socks" -eq 2 ] && break 70 + i=$((i + 1)) 71 + sleep 1 72 + done 73 + if [ "$nr_socks" -ne 2 ]; then 74 + echo "timed out while waiting for udpgso_bench_rx" 75 + exit 1 76 + fi 77 + 78 + ./udpgso_bench_tx -p "$TESTPORT" ${args} 65 79 } 66 80 67 81 run_in_netns() {
+3 -1
tools/testing/selftests/net/udpgso_bench_rx.c
··· 250 250 static void do_flush_udp(int fd) 251 251 { 252 252 static char rbuf[ETH_MAX_MTU]; 253 - int ret, len, gso_size, budget = 256; 253 + int ret, len, gso_size = 0, budget = 256; 254 254 255 255 len = cfg_read_all ? sizeof(rbuf) : 0; 256 256 while (budget--) { ··· 336 336 cfg_verify = true; 337 337 cfg_read_all = true; 338 338 break; 339 + default: 340 + exit(1); 339 341 } 340 342 } 341 343
+29 -7
tools/testing/selftests/net/udpgso_bench_tx.c
··· 62 62 static int cfg_port = 8000; 63 63 static int cfg_runtime_ms = -1; 64 64 static bool cfg_poll; 65 + static int cfg_poll_loop_timeout_ms = 2000; 65 66 static bool cfg_segment; 66 67 static bool cfg_sendmmsg; 67 68 static bool cfg_tcp; ··· 236 235 } 237 236 } 238 237 239 - static void flush_errqueue(int fd, const bool do_poll) 238 + static void flush_errqueue(int fd, const bool do_poll, 239 + unsigned long poll_timeout, const bool poll_err) 240 240 { 241 241 if (do_poll) { 242 242 struct pollfd fds = {0}; 243 243 int ret; 244 244 245 245 fds.fd = fd; 246 - ret = poll(&fds, 1, 500); 246 + ret = poll(&fds, 1, poll_timeout); 247 247 if (ret == 0) { 248 - if (cfg_verbose) 248 + if ((cfg_verbose) && (poll_err)) 249 249 fprintf(stderr, "poll timeout\n"); 250 250 } else if (ret < 0) { 251 251 error(1, errno, "poll"); ··· 254 252 } 255 253 256 254 flush_errqueue_recv(fd); 255 + } 256 + 257 + static void flush_errqueue_retry(int fd, unsigned long num_sends) 258 + { 259 + unsigned long tnow, tstop; 260 + bool first_try = true; 261 + 262 + tnow = gettimeofday_ms(); 263 + tstop = tnow + cfg_poll_loop_timeout_ms; 264 + do { 265 + flush_errqueue(fd, true, tstop - tnow, first_try); 266 + first_try = false; 267 + tnow = gettimeofday_ms(); 268 + } while ((stat_zcopies != num_sends) && (tnow < tstop)); 257 269 } 258 270 259 271 static int send_tcp(int fd, char *data) ··· 429 413 430 414 static void usage(const char *filepath) 431 415 { 432 - error(1, 0, "Usage: %s [-46acmHPtTuvz] [-C cpu] [-D dst ip] [-l secs] [-M messagenr] [-p port] [-s sendsize] [-S gsosize]", 416 + error(1, 0, "Usage: %s [-46acmHPtTuvz] [-C cpu] [-D dst ip] [-l secs] " 417 + "[-L secs] [-M messagenr] [-p port] [-s sendsize] [-S gsosize]", 433 418 filepath); 434 419 } 435 420 ··· 440 423 int max_len, hdrlen; 441 424 int c; 442 425 443 - while ((c = getopt(argc, argv, "46acC:D:Hl:mM:p:s:PS:tTuvz")) != -1) { 426 + while ((c = getopt(argc, argv, "46acC:D:Hl:L:mM:p:s:PS:tTuvz")) != -1) { 444 427 switch (c) { 
445 428 case '4': 446 429 if (cfg_family != PF_UNSPEC) ··· 468 451 break; 469 452 case 'l': 470 453 cfg_runtime_ms = strtoul(optarg, NULL, 10) * 1000; 454 + break; 455 + case 'L': 456 + cfg_poll_loop_timeout_ms = strtoul(optarg, NULL, 10) * 1000; 471 457 break; 472 458 case 'm': 473 459 cfg_sendmmsg = true; ··· 510 490 case 'z': 511 491 cfg_zerocopy = true; 512 492 break; 493 + default: 494 + exit(1); 513 495 } 514 496 } 515 497 ··· 699 677 num_sends += send_udp(fd, buf[i]); 700 678 num_msgs++; 701 679 if ((cfg_zerocopy && ((num_msgs & 0xF) == 0)) || cfg_tx_tstamp) 702 - flush_errqueue(fd, cfg_poll); 680 + flush_errqueue(fd, cfg_poll, 500, true); 703 681 704 682 if (cfg_msg_nr && num_msgs >= cfg_msg_nr) 705 683 break; ··· 718 696 } while (!interrupted && (cfg_runtime_ms == -1 || tnow < tstop)); 719 697 720 698 if (cfg_zerocopy || cfg_tx_tstamp) 721 - flush_errqueue(fd, true); 699 + flush_errqueue_retry(fd, num_sends); 722 700 723 701 if (close(fd)) 724 702 error(1, errno, "close");
-1
tools/testing/selftests/vm/hugetlb-madvise.c
··· 17 17 #include <stdio.h> 18 18 #include <unistd.h> 19 19 #include <sys/mman.h> 20 - #define __USE_GNU 21 20 #include <fcntl.h> 22 21 23 22 #define MIN_FREE_PAGES 20
+3 -5
tools/virtio/linux/bug.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef BUG_H 3 - #define BUG_H 2 + #ifndef _LINUX_BUG_H 3 + #define _LINUX_BUG_H 4 4 5 5 #include <asm/bug.h> 6 6 7 7 #define BUG_ON(__BUG_ON_cond) assert(!(__BUG_ON_cond)) 8 8 9 - #define BUILD_BUG_ON(x) 10 - 11 9 #define BUG() abort() 12 10 13 - #endif /* BUG_H */ 11 + #endif /* _LINUX_BUG_H */
+7
tools/virtio/linux/build_bug.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_BUILD_BUG_H 3 + #define _LINUX_BUILD_BUG_H 4 + 5 + #define BUILD_BUG_ON(x) 6 + 7 + #endif /* _LINUX_BUILD_BUG_H */
+7
tools/virtio/linux/cpumask.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_CPUMASK_H 3 + #define _LINUX_CPUMASK_H 4 + 5 + #include <linux/kernel.h> 6 + 7 + #endif /* _LINUX_CPUMASK_H */
+7
tools/virtio/linux/gfp.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __LINUX_GFP_H 3 + #define __LINUX_GFP_H 4 + 5 + #include <linux/topology.h> 6 + 7 + #endif
+1
tools/virtio/linux/kernel.h
··· 10 10 #include <stdarg.h> 11 11 12 12 #include <linux/compiler.h> 13 + #include <linux/log2.h> 13 14 #include <linux/types.h> 14 15 #include <linux/overflow.h> 15 16 #include <linux/list.h>
+12
tools/virtio/linux/kmsan.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_KMSAN_H 3 + #define _LINUX_KMSAN_H 4 + 5 + #include <linux/gfp.h> 6 + 7 + inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size, 8 + enum dma_data_direction dir) 9 + { 10 + } 11 + 12 + #endif /* _LINUX_KMSAN_H */
+1
tools/virtio/linux/scatterlist.h
··· 2 2 #ifndef SCATTERLIST_H 3 3 #define SCATTERLIST_H 4 4 #include <linux/kernel.h> 5 + #include <linux/bug.h> 5 6 6 7 struct scatterlist { 7 8 unsigned long page_link;
+7
tools/virtio/linux/topology.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_TOPOLOGY_H 3 + #define _LINUX_TOPOLOGY_H 4 + 5 + #include <linux/cpumask.h> 6 + 7 + #endif /* _LINUX_TOPOLOGY_H */
+3 -3
virt/kvm/vfio.c
··· 336 336 return -ENXIO; 337 337 } 338 338 339 - static void kvm_vfio_destroy(struct kvm_device *dev) 339 + static void kvm_vfio_release(struct kvm_device *dev) 340 340 { 341 341 struct kvm_vfio *kv = dev->private; 342 342 struct kvm_vfio_group *kvg, *tmp; ··· 355 355 kvm_vfio_update_coherency(dev); 356 356 357 357 kfree(kv); 358 - kfree(dev); /* alloc by kvm_ioctl_create_device, free by .destroy */ 358 + kfree(dev); /* alloc by kvm_ioctl_create_device, free by .release */ 359 359 } 360 360 361 361 static int kvm_vfio_create(struct kvm_device *dev, u32 type); ··· 363 363 static struct kvm_device_ops kvm_vfio_ops = { 364 364 .name = "kvm-vfio", 365 365 .create = kvm_vfio_create, 366 - .destroy = kvm_vfio_destroy, 366 + .release = kvm_vfio_release, 367 367 .set_attr = kvm_vfio_set_attr, 368 368 .has_attr = kvm_vfio_has_attr, 369 369 };