Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sound: codecs: tlv320adcx140: assorted patches

Merge series from Sascha Hauer <s.hauer@pengutronix.de>:

These are some patches for the tlv320adcx140 codec that we have been
carrying around for a while; time to upstream them.

+2952 -2062
+8
.mailmap
··· 416 416 Juha Yrjola <juha.yrjola@nokia.com> 417 417 Juha Yrjola <juha.yrjola@solidboot.com> 418 418 Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com> 419 + Justin Iurman <justin.iurman@gmail.com> <justin.iurman@uliege.be> 419 420 Iskren Chernev <me@iskren.info> <iskren.chernev@gmail.com> 420 421 Kalle Valo <kvalo@kernel.org> <kvalo@codeaurora.org> 421 422 Kalle Valo <kvalo@kernel.org> <quic_kvalo@quicinc.com> ··· 473 472 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch> 474 473 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de> 475 474 Linus Lüssing <linus.luessing@c0d3.blue> <ll@simonwunderlich.de> 475 + Linus Walleij <linusw@kernel.org> <linus.walleij@ericsson.com> 476 + Linus Walleij <linusw@kernel.org> <linus.walleij@stericsson.com> 477 + Linus Walleij <linusw@kernel.org> <linus.walleij@linaro.org> 478 + Linus Walleij <linusw@kernel.org> <triad@df.lth.se> 476 479 <linux-hardening@vger.kernel.org> <kernel-hardening@lists.openwall.com> 477 480 Li Yang <leoyang.li@nxp.com> <leoli@freescale.com> 478 481 Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org> ··· 710 705 Santosh Shilimkar <santosh.shilimkar@oracle.org> 711 706 Santosh Shilimkar <ssantosh@kernel.org> 712 707 Sarangdhar Joshi <spjoshi@codeaurora.org> 708 + Saravana Kannan <saravanak@kernel.org> <skannan@codeaurora.org> 709 + Saravana Kannan <saravanak@kernel.org> <saravanak@google.com> 713 710 Sascha Hauer <s.hauer@pengutronix.de> 714 711 Sahitya Tummala <quic_stummala@quicinc.com> <stummala@codeaurora.org> 715 712 Sathishkumar Muruganandam <quic_murugana@quicinc.com> <murugana@codeaurora.org> ··· 801 794 Tejun Heo <htejun@gmail.com> 802 795 Tomeu Vizoso <tomeu@tomeuvizoso.net> <tomeu.vizoso@collabora.com> 803 796 Thomas Graf <tgraf@suug.ch> 797 + Thomas Gleixner <tglx@kernel.org> <tglx@linutronix.de> 804 798 Thomas Körper <socketcan@esd.eu> <thomas.koerper@esd.eu> 805 799 Thomas Pedersen <twp@codeaurora.org> 806 800 Thorsten Blum <thorsten.blum@linux.dev> <thorsten.blum@toblux.com>
+1 -1
CREDITS
··· 1398 1398 P: 1024D/8399E1BB 250D 3BCF 7127 0D8C A444 A961 1DBD 5E75 8399 E1BB 1399 1399 1400 1400 N: Thomas Gleixner 1401 - E: tglx@linutronix.de 1401 + E: tglx@kernel.org 1402 1402 D: NAND flash hardware support, JFFS2 on NAND flash 1403 1403 1404 1404 N: Jérôme Glisse
+1 -1
Documentation/ABI/stable/sysfs-kernel-time-aux-clocks
··· 1 1 What: /sys/kernel/time/aux_clocks/<ID>/enable 2 2 Date: May 2025 3 - Contact: Thomas Gleixner <tglx@linutronix.de> 3 + Contact: Thomas Gleixner <tglx@kernel.org> 4 4 Description: 5 5 Controls the enablement of auxiliary clock timekeepers.
+2 -2
Documentation/ABI/testing/sysfs-devices-soc
··· 17 17 contact: Lee Jones <lee@kernel.org> 18 18 Description: 19 19 Read-only attribute common to all SoCs. Contains the SoC machine 20 - name (e.g. Ux500). 20 + name (e.g. DB8500). 21 21 22 22 What: /sys/devices/socX/family 23 23 Date: January 2012 24 24 contact: Lee Jones <lee@kernel.org> 25 25 Description: 26 26 Read-only attribute common to all SoCs. Contains SoC family name 27 - (e.g. DB8500). 27 + (e.g. ux500). 28 28 29 29 On many of ARM based silicon with SMCCC v1.2+ compliant firmware 30 30 this will contain the JEDEC JEP106 manufacturer’s identification
+1 -1
Documentation/arch/x86/topology.rst
··· 17 17 Needless to say, code should use the generic functions - this file is *only* 18 18 here to *document* the inner workings of x86 topology. 19 19 20 - Started by Thomas Gleixner <tglx@linutronix.de> and Borislav Petkov <bp@alien8.de>. 20 + Started by Thomas Gleixner <tglx@kernel.org> and Borislav Petkov <bp@alien8.de>. 21 21 22 22 The main aim of the topology facilities is to present adequate interfaces to 23 23 code which needs to know/query/use the structure of the running system wrt
+1 -1
Documentation/core-api/cpu_hotplug.rst
··· 8 8 Srivatsa Vaddagiri <vatsa@in.ibm.com>, 9 9 Ashok Raj <ashok.raj@intel.com>, 10 10 Joel Schopp <jschopp@austin.ibm.com>, 11 - Thomas Gleixner <tglx@linutronix.de> 11 + Thomas Gleixner <tglx@kernel.org> 12 12 13 13 Introduction 14 14 ============
+1 -1
Documentation/core-api/genericirq.rst
··· 439 439 440 440 The following people have contributed to this document: 441 441 442 - 1. Thomas Gleixner tglx@linutronix.de 442 + 1. Thomas Gleixner tglx@kernel.org 443 443 444 444 2. Ingo Molnar mingo@elte.hu
+1 -1
Documentation/core-api/librs.rst
··· 209 209 210 210 The following people have contributed to this document: 211 211 212 - Thomas Gleixner\ tglx@linutronix.de 212 + Thomas Gleixner\ tglx@kernel.org
+8 -1
Documentation/devicetree/bindings/arm/fsl.yaml
··· 1105 1105 - gateworks,imx8mp-gw74xx # i.MX8MP Gateworks Board 1106 1106 - gateworks,imx8mp-gw75xx-2x # i.MX8MP Gateworks Board 1107 1107 - gateworks,imx8mp-gw82xx-2x # i.MX8MP Gateworks Board 1108 - - gocontroll,moduline-display # GOcontroll Moduline Display controller 1109 1108 - prt,prt8ml # Protonic PRT8ML 1110 1109 - skov,imx8mp-skov-basic # SKOV i.MX8MP baseboard without frontplate 1111 1110 - skov,imx8mp-skov-revb-hdmi # SKOV i.MX8MP climate control without panel ··· 1161 1162 - enum: 1162 1163 - engicam,icore-mx8mp-edimm2.2 # i.MX8MP Engicam i.Core MX8M Plus EDIMM2.2 Starter Kit 1163 1164 - const: engicam,icore-mx8mp # i.MX8MP Engicam i.Core MX8M Plus SoM 1165 + - const: fsl,imx8mp 1166 + 1167 + - description: Ka-Ro TX8P-ML81 SoM based boards 1168 + items: 1169 + - enum: 1170 + - gocontroll,moduline-display 1171 + - gocontroll,moduline-display-106 1172 + - const: karo,tx8p-ml81 1164 1173 - const: fsl,imx8mp 1165 1174 1166 1175 - description: Kontron i.MX8MP OSM-S SoM based Boards
+7 -1
Documentation/devicetree/bindings/misc/pci1de4,1.yaml
··· 25 25 items: 26 26 - const: pci1de4,1 27 27 28 + reg: 29 + maxItems: 1 30 + description: The PCI Bus-Device-Function address. 31 + 28 32 '#interrupt-cells': 29 33 const: 2 30 34 description: | ··· 105 101 106 102 required: 107 103 - compatible 104 + - reg 108 105 - '#interrupt-cells' 109 106 - interrupt-controller 110 107 - pci-ep-bus@1 ··· 116 111 #address-cells = <3>; 117 112 #size-cells = <2>; 118 113 119 - rp1@0,0 { 114 + dev@0,0 { 120 115 compatible = "pci1de4,1"; 116 + reg = <0x10000 0x0 0x0 0x0 0x0>; 121 117 ranges = <0x01 0x00 0x00000000 0x82010000 0x00 0x00 0x00 0x400000>; 122 118 #address-cells = <3>; 123 119 #size-cells = <2>;
+5 -2
Documentation/devicetree/bindings/sound/ti,tlv320adcx140.yaml
··· 41 41 42 42 areg-supply: 43 43 description: | 44 - Regulator with AVDD at 3.3V. If not defined then the internal regulator 45 - is enabled. 44 + External supply of 1.8V. If not defined then the internal regulator is 45 + enabled instead. 46 + 47 + avdd-supply: true 48 + iovdd-supply: true 46 49 47 50 ti,mic-bias-source: 48 51 description: |
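The binding now describes areg-supply as an optional external 1.8V supply with an internal-regulator fallback. A consumer driver would typically model that with devm_regulator_get_optional(). A minimal sketch, assuming the standard regulator consumer API; the adcx140_request_areg() helper is hypothetical, not the actual driver code:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/regulator/consumer.h>

  /* Hypothetical helper: honor an optional "areg" supply, falling back
   * to the internal regulator when the DT does not describe one. */
  static int adcx140_request_areg(struct device *dev, struct regulator **areg)
  {
          *areg = devm_regulator_get_optional(dev, "areg");
          if (IS_ERR(*areg)) {
                  if (PTR_ERR(*areg) != -ENODEV)
                          return PTR_ERR(*areg);  /* e.g. -EPROBE_DEFER */
                  *areg = NULL;                   /* use the internal regulator */
          }
          return 0;
  }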
+1 -1
Documentation/devicetree/bindings/timer/mrvl,mmp-timer.yaml
··· 8 8 9 9 maintainers: 10 10 - Daniel Lezcano <daniel.lezcano@linaro.org> 11 - - Thomas Gleixner <tglx@linutronix.de> 11 + - Thomas Gleixner <tglx@kernel.org> 12 12 - Rob Herring <robh@kernel.org> 13 13 14 14 properties:
+2 -2
Documentation/driver-api/mtdnand.rst
··· 996 996 997 997 2. David Woodhouse\ dwmw2@infradead.org 998 998 999 - 3. Thomas Gleixner\ tglx@linutronix.de 999 + 3. Thomas Gleixner\ tglx@kernel.org 1000 1000 1001 1001 A lot of users have provided bugfixes, improvements and helping hands 1002 1002 for testing. Thanks a lot. 1003 1003 1004 1004 The following people have contributed to this document: 1005 1005 1006 - 1. Thomas Gleixner\ tglx@linutronix.de 1006 + 1. Thomas Gleixner\ tglx@kernel.org
+1
Documentation/filesystems/locking.rst
··· 416 416 lm_breaker_owns_lease: yes no no 417 417 lm_lock_expirable yes no no 418 418 lm_expire_lock no no yes 419 + lm_open_conflict yes no no 419 420 ====================== ============= ================= ========= 420 421 421 422 buffer_head
+4 -2
Documentation/netlink/specs/netdev.yaml
··· 142 142 name: ifindex 143 143 doc: | 144 144 ifindex of the netdev to which the pool belongs. 145 - May be reported as 0 if the page pool was allocated for a netdev 145 + May not be reported if the page pool was allocated for a netdev 146 146 which got destroyed already (page pools may outlast their netdevs 147 147 because they wait for all memory to be returned). 148 148 type: u32 ··· 601 601 name: page-pool-get 602 602 doc: | 603 603 Get / dump information about Page Pools. 604 - (Only Page Pools associated with a net_device can be listed.) 604 + Only Page Pools associated by the driver with a net_device 605 + can be listed. ifindex will not be reported if the net_device 606 + no longer exists. 605 607 attribute-set: page-pool 606 608 do: 607 609 request:
+6 -4
Documentation/process/maintainer-soc.rst
··· 57 57 58 58 All typical platform related patches should be sent via SoC submaintainers 59 59 (platform-specific maintainers). This includes also changes to per-platform or 60 - shared defconfigs (scripts/get_maintainer.pl might not provide correct 61 - addresses in such case). 60 shared defconfigs. Note that scripts/get_maintainer.pl might not provide 61 correct addresses for the shared defconfig, so ignore its output and manually 62 create CC-list based on MAINTAINERS file or use something like 63 ``scripts/get_maintainer.pl -f drivers/soc/FOO/``. 62 64 63 65 Submitting Patches to the Main SoC Maintainers 64 66 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ··· 116 114 Usually the branch that includes a driver change will also include the 117 115 corresponding change to the devicetree binding description, to ensure they are 118 116 in fact compatible. This means that the devicetree branch can end up causing 119 - warnings in the "make dtbs_check" step. If a devicetree change depends on 117 + warnings in the ``make dtbs_check`` step. If a devicetree change depends on 120 118 missing additions to a header file in include/dt-bindings/, it will fail the 121 - "make dtbs" step and not get merged. 119 + ``make dtbs`` step and not get merged. 122 120 123 121 There are multiple ways to deal with this: 124 122
+1 -1
Documentation/translations/zh_CN/core-api/cpu_hotplug.rst
··· 22 22 Srivatsa Vaddagiri <vatsa@in.ibm.com>, 23 23 Ashok Raj <ashok.raj@intel.com>, 24 24 Joel Schopp <jschopp@austin.ibm.com>, 25 - Thomas Gleixner <tglx@linutronix.de> 25 + Thomas Gleixner <tglx@kernel.org> 26 26 27 27 简介 28 28 ====
+1 -1
Documentation/translations/zh_CN/core-api/genericirq.rst
··· 404 404 405 405 感谢以下人士对本文档作出的贡献: 406 406 407 - 1. Thomas Gleixner tglx@linutronix.de 407 + 1. Thomas Gleixner tglx@kernel.org 408 408 409 409 2. Ingo Molnar mingo@elte.hu
+33 -28
MAINTAINERS
··· 1283 1283 1284 1284 AMD XGBE DRIVER 1285 1285 M: "Shyam Sundar S K" <Shyam-sundar.S-k@amd.com> 1286 + M: Raju Rangoju <Raju.Rangoju@amd.com> 1286 1287 L: netdev@vger.kernel.org 1287 1288 S: Maintained 1288 1289 F: arch/arm64/boot/dts/amd/amd-seattle-xgbe*.dtsi ··· 2012 2011 M: Arnd Bergmann <arnd@arndb.de> 2013 2012 M: Krzysztof Kozlowski <krzk@kernel.org> 2014 2013 M: Alexandre Belloni <alexandre.belloni@bootlin.com> 2015 - M: Linus Walleij <linus.walleij@linaro.org> 2014 + M: Linus Walleij <linusw@kernel.org> 2016 2015 R: Drew Fustini <fustini@kernel.org> 2017 2016 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2018 2017 L: soc@lists.linux.dev ··· 2159 2158 L: dri-devel@lists.freedesktop.org 2160 2159 S: Supported 2161 2160 W: https://rust-for-linux.com/tyr-gpu-driver 2162 - W https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html 2161 + W: https://drm.pages.freedesktop.org/maintainer-tools/drm-rust.html 2163 2162 B: https://gitlab.freedesktop.org/panfrost/linux/-/issues 2164 2163 T: git https://gitlab.freedesktop.org/drm/rust/kernel.git 2165 2164 F: Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml ··· 5802 5801 5803 5802 CEPH COMMON CODE (LIBCEPH) 5804 5803 M: Ilya Dryomov <idryomov@gmail.com> 5805 - M: Xiubo Li <xiubli@redhat.com> 5804 + M: Alex Markuze <amarkuze@redhat.com> 5805 + M: Viacheslav Dubeyko <slava@dubeyko.com> 5806 5806 L: ceph-devel@vger.kernel.org 5807 5807 S: Supported 5808 5808 W: http://ceph.com/ ··· 5814 5812 F: net/ceph/ 5815 5813 5816 5814 CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH) 5817 - M: Xiubo Li <xiubli@redhat.com> 5818 5815 M: Ilya Dryomov <idryomov@gmail.com> 5816 + M: Alex Markuze <amarkuze@redhat.com> 5817 + M: Viacheslav Dubeyko <slava@dubeyko.com> 5819 5818 L: ceph-devel@vger.kernel.org 5820 5819 S: Supported 5821 5820 W: http://ceph.com/ ··· 6175 6172 6176 6173 CLOCKSOURCE, CLOCKEVENT DRIVERS 6177 6174 M: Daniel Lezcano <daniel.lezcano@linaro.org> 6178 - M: Thomas Gleixner <tglx@linutronix.de> 6175 + M: Thomas Gleixner <tglx@kernel.org> 6179 6176 L: linux-kernel@vger.kernel.org 6180 6177 S: Supported 6181 6178 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core ··· 6535 6532 F: tools/testing/selftests/cpufreq/ 6536 6533 6537 6534 CPU FREQUENCY DRIVERS - VIRTUAL MACHINE CPUFREQ 6538 - M: Saravana Kannan <saravanak@google.com> 6535 + M: Saravana Kannan <saravanak@kernel.org> 6539 6536 L: linux-pm@vger.kernel.org 6540 6537 S: Maintained 6541 6538 F: drivers/cpufreq/virtual-cpufreq.c 6542 6539 6543 6540 CPU HOTPLUG 6544 - M: Thomas Gleixner <tglx@linutronix.de> 6541 + M: Thomas Gleixner <tglx@kernel.org> 6545 6542 M: Peter Zijlstra <peterz@infradead.org> 6546 6543 L: linux-kernel@vger.kernel.org 6547 6544 S: Maintained ··· 6708 6705 T: git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-next 6709 6706 T: git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-fixes 6710 6707 F: lib/crypto/ 6708 + F: scripts/crypto/ 6711 6709 CRYPTO SPEED TEST COMPARE 6712 6711 M: Wang Jinchao <wangjinchao@xfusion.com> ··· 6969 6965 F: drivers/scsi/dc395x.* 6970 6966 DEBUGOBJECTS: 6972 6968 M: Thomas Gleixner <tglx@kernel.org> 6973 6969 L: linux-kernel@vger.kernel.org 6974 6970 S: Maintained 6975 6971 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects ··· 7174 7170 F: include/linux/devcoredump.h 7175 7171 7176 7172 DEVICE DEPENDENCY HELPER SCRIPT 7177 - M: Saravana Kannan <saravanak@google.com> 7173 + M: Saravana Kannan <saravanak@kernel.org> 7178 7174 L: linux-kernel@vger.kernel.org 7179 7175 S: Maintained 7180 7176 F: scripts/dev-needs.sh ··· 8071 8067 Q: https://patchwork.freedesktop.org/project/nouveau/ 8072 8068 B: https://gitlab.freedesktop.org/drm/nova/-/issues 8073 8069 C: irc://irc.oftc.net/nouveau 8074 - T: git https://gitlab.freedesktop.org/drm/nova.git nova-next 8070 + T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next 8075 8071 F: Documentation/gpu/nova/ 8076 8072 F: drivers/gpu/nova-core/ ··· 8083 8079 Q: https://patchwork.freedesktop.org/project/nouveau/ 8084 8080 B: https://gitlab.freedesktop.org/drm/nova/-/issues 8085 8081 C: irc://irc.oftc.net/nouveau 8086 - T: git https://gitlab.freedesktop.org/drm/nova.git nova-next 8082 + T: git https://gitlab.freedesktop.org/drm/rust/kernel.git drm-rust-next 8087 8083 F: Documentation/gpu/nova/ 8088 8084 F: drivers/gpu/drm/nova/ 8089 8085 F: include/uapi/drm/nova_drm.h ··· 8361 8357 X: drivers/gpu/drm/nova/ 8362 8358 X: drivers/gpu/drm/radeon/ 8363 8359 X: drivers/gpu/drm/tegra/ 8360 + X: drivers/gpu/drm/tyr/ 8364 8361 X: drivers/gpu/drm/xe/ 8365 8362 8366 8363 DRM DRIVERS AND COMMON INFRASTRUCTURE [RUST] ··· 10372 10367 F: tools/testing/selftests/filesystems/fuse/ 10373 10368 10374 10369 FUTEX SUBSYSTEM 10375 - M: Thomas Gleixner <tglx@linutronix.de> 10370 + M: Thomas Gleixner <tglx@kernel.org> 10376 10371 M: Ingo Molnar <mingo@redhat.com> 10377 10372 R: Peter Zijlstra <peterz@infradead.org> 10378 10373 R: Darren Hart <dvhart@infradead.org> ··· 10516 10511 F: include/linux/arch_topology.h 10517 10512 10518 10513 GENERIC ENTRY CODE 10519 - M: Thomas Gleixner <tglx@linutronix.de> 10514 + M: Thomas Gleixner <tglx@kernel.org> 10520 10515 M: Peter Zijlstra <peterz@infradead.org> 10521 10516 M: Andy Lutomirski <luto@kernel.org> 10522 10517 L: linux-kernel@vger.kernel.org ··· 10629 10624 10630 10625 GENERIC VDSO LIBRARY 10631 10626 M: Andy Lutomirski <luto@kernel.org> 10632 - M: Thomas Gleixner <tglx@linutronix.de> 10627 + M: Thomas Gleixner <tglx@kernel.org> 10633 10628 M: Vincenzo Frascino <vincenzo.frascino@arm.com> 10634 10629 L: linux-kernel@vger.kernel.org 10635 10630 S: Maintained ··· 11242 11237 HIGH-RESOLUTION TIMERS, TIMER WHEEL, CLOCKEVENTS 11243 11238 M: Anna-Maria Behnsen <anna-maria@linutronix.de> 11244 11239 M: Frederic Weisbecker <frederic@kernel.org> 11245 - M: Thomas Gleixner <tglx@linutronix.de> 11240 + M: Thomas Gleixner <tglx@kernel.org> 11246 11241 L: linux-kernel@vger.kernel.org 11247 11242 S: Maintained 11248 11243 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core ··· 11265 11260 R: FUJITA Tomonori <fujita.tomonori@gmail.com> 11266 11261 R: Frederic Weisbecker <frederic@kernel.org> 11267 11262 R: Lyude Paul <lyude@redhat.com> 11268 - R: Thomas Gleixner <tglx@linutronix.de> 11263 + R: Thomas Gleixner <tglx@kernel.org> 11269 11264 R: Anna-Maria Behnsen <anna-maria@linutronix.de> 11270 11265 R: John Stultz <jstultz@google.com> 11271 11266 R: Stephen Boyd <sboyd@kernel.org> ··· 13335 13330 F: sound/soc/codecs/sma* 13336 13331 13337 13332 IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY) 13338 - M: Thomas Gleixner <tglx@linutronix.de> 13333 + M: Thomas Gleixner <tglx@kernel.org> 13339 13334 S: Maintained 13340 13335 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core 13341 13336 F: Documentation/core-api/irq/irq-domain.rst ··· 13345 13340 F: kernel/irq/msi.c 13346 13341 13347 13342 IRQ SUBSYSTEM 13348 - M: Thomas Gleixner <tglx@linutronix.de> 13343 + M: Thomas Gleixner <tglx@kernel.org> 13349 13344 L: linux-kernel@vger.kernel.org 13350 13345 S: Maintained 13351 13346 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core ··· 13358 13353 F: lib/group_cpus.c 13359 13354 13360 13355 IRQCHIP DRIVERS 13361 - M: Thomas Gleixner <tglx@linutronix.de> 13356 + M: Thomas Gleixner <tglx@kernel.org> 13362 13357 L: linux-kernel@vger.kernel.org 13363 13358 S: Maintained 13364 13359 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core ··· 14452 14447 F: lib/* 14453 14448 14454 14449 LICENSES and SPDX stuff 14455 - M: Thomas Gleixner <tglx@linutronix.de> 14450 + M: Thomas Gleixner <tglx@kernel.org> 14456 14451 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 14457 14452 L: linux-spdx@vger.kernel.org 14458 14453 S: Maintained ··· 18288 18283 X: tools/testing/selftests/net/can/ 18289 18284 18290 18285 NETWORKING [IOAM] 18291 - M: Justin Iurman <justin.iurman@uliege.be> 18286 + M: Justin Iurman <justin.iurman@gmail.com> 18292 18287 S: Maintained 18293 18288 F: Documentation/networking/ioam6* 18294 18289 F: include/linux/ioam6* ··· 18577 18572 M: Anna-Maria Behnsen <anna-maria@linutronix.de> 18578 18573 M: Frederic Weisbecker <frederic@kernel.org> 18579 18574 M: Ingo Molnar <mingo@kernel.org> 18580 - M: Thomas Gleixner <tglx@linutronix.de> 18575 + M: Thomas Gleixner <tglx@kernel.org> 18581 18576 L: linux-kernel@vger.kernel.org 18582 18577 S: Maintained 18583 18578 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/nohz ··· 19552 19547 19553 19548 OPEN FIRMWARE AND FLATTENED DEVICE TREE 19554 19549 M: Rob Herring <robh@kernel.org> 19555 - M: Saravana Kannan <saravanak@google.com> 19550 + M: Saravana Kannan <saravanak@kernel.org> 19556 19551 L: devicetree@vger.kernel.org 19557 19552 S: Maintained 19558 19553 Q: http://patchwork.kernel.org/project/devicetree/list/ ··· 20762 20757 POSIX CLOCKS and TIMERS 20763 20758 M: Anna-Maria Behnsen <anna-maria@linutronix.de> 20764 20759 M: Frederic Weisbecker <frederic@kernel.org> 20765 - M: Thomas Gleixner <tglx@linutronix.de> 20760 + M: Thomas Gleixner <tglx@kernel.org> 20766 20761 L: linux-kernel@vger.kernel.org 20767 20762 S: Maintained 20768 20763 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git timers/core ··· 26273 26268 26274 26269 TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER 26275 26270 M: John Stultz <jstultz@google.com> 26276 - M: Thomas Gleixner <tglx@linutronix.de> 26271 + M: Thomas Gleixner <tglx@kernel.org> 26277 26272 R: Stephen Boyd <sboyd@kernel.org> 26278 26273 L: linux-kernel@vger.kernel.org 26279 26274 S: Supported ··· 28204 28199 F: net/x25/ 28205 28200 28206 28201 X86 ARCHITECTURE (32-BIT AND 64-BIT) 28207 - M: Thomas Gleixner <tglx@linutronix.de> 28202 + M: Thomas Gleixner <tglx@kernel.org> 28208 28203 M: Ingo Molnar <mingo@redhat.com> 28209 28204 M: Borislav Petkov <bp@alien8.de> 28210 28205 M: Dave Hansen <dave.hansen@linux.intel.com> ··· 28220 28215 28221 28216 X86 CPUID DATABASE 28222 28217 M: Borislav Petkov <bp@alien8.de> 28223 - M: Thomas Gleixner <tglx@linutronix.de> 28218 + M: Thomas Gleixner <tglx@kernel.org> 28224 28219 M: x86@kernel.org 28225 28220 R: Ahmed S. Darwish <darwi@linutronix.de> 28226 28221 L: x86-cpuid@lists.linux.dev ··· 28236 28231 F: arch/x86/entry/ 28237 28232 28238 28233 X86 HARDWARE VULNERABILITIES 28239 - M: Thomas Gleixner <tglx@linutronix.de> 28234 + M: Thomas Gleixner <tglx@kernel.org> 28240 28235 M: Borislav Petkov <bp@alien8.de> 28241 28236 M: Peter Zijlstra <peterz@infradead.org> 28242 28237 M: Josh Poimboeuf <jpoimboe@kernel.org>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 19 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+11
arch/arm/boot/dts/intel/ixp/intel-ixp42x-actiontec-mi424wr-ac.dts
··· 12 12 model = "Actiontec MI424WR rev A/C"; 13 13 compatible = "actiontec,mi424wr-ac", "intel,ixp42x"; 14 14 15 + /* Connect the switch to EthC */ 16 + spi { 17 + ethernet-switch@0 { 18 + ethernet-ports { 19 + ethernet-port@4 { 20 + ethernet = <&ethc>; 21 + }; 22 + }; 23 + }; 24 + }; 25 + 15 26 soc { 16 27 /* EthB used for WAN */ 17 28 ethernet@c8009000 {
+11
arch/arm/boot/dts/intel/ixp/intel-ixp42x-actiontec-mi424wr-d.dts
··· 12 12 model = "Actiontec MI424WR rev D"; 13 13 compatible = "actiontec,mi424wr-d", "intel,ixp42x"; 14 14 15 + /* Connect the switch to EthB */ 16 + spi { 17 + ethernet-switch@0 { 18 + ethernet-ports { 19 + ethernet-port@4 { 20 + ethernet = <&ethb>; 21 + }; 22 + }; 23 + }; 24 + }; 25 + 15 26 soc { 16 27 /* EthB used for LAN */ 17 28 ethernet@c8009000 {
-1
arch/arm/boot/dts/intel/ixp/intel-ixp42x-actiontec-mi424wr.dtsi
··· 152 152 }; 153 153 ethernet-port@4 { 154 154 reg = <4>; 155 - ethernet = <&ethc>; 156 155 phy-mode = "mii"; 157 156 fixed-link { 158 157 speed = <100>;
+4 -4
arch/arm/boot/dts/nxp/imx/imx27-phytec-phycore-rdk.dts
··· 248 248 linux,default-trigger = "nand-disk"; 249 249 }; 250 250 251 - ledg3: led@10 { 252 - reg = <10>; 251 + ledg3: led@a { 252 + reg = <0xa>; 253 253 label = "system:green3:live"; 254 254 linux,default-trigger = "heartbeat"; 255 255 }; 256 256 257 - ledb3: led@11 { 258 - reg = <11>; 257 + ledb3: led@b { 258 + reg = <0xb>; 259 259 label = "system:blue3:cpu"; 260 260 linux,default-trigger = "cpu0"; 261 261 };
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-rdu1.dts
··· 398 398 #size-cells = <0>; 399 399 led-control = <0x0 0x0 0x3f83f8 0x0>; 400 400 401 - sysled0@3 { 401 + led@3 { 402 402 reg = <3>; 403 403 label = "system:green:status"; 404 404 linux,default-trigger = "default-on"; 405 405 }; 406 406 407 - sysled1@4 { 407 + led@4 { 408 408 reg = <4>; 409 409 label = "system:green:act"; 410 410 linux,default-trigger = "heartbeat";
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-scu2-mezz.dts
··· 225 225 #size-cells = <0>; 226 226 led-control = <0x0 0x0 0x3f83f8 0x0>; 227 227 228 - sysled3: led3@3 { 228 + sysled3: led@3 { 229 229 reg = <3>; 230 230 label = "system:red:power"; 231 231 linux,default-trigger = "default-on"; 232 232 }; 233 233 234 - sysled4: led4@4 { 234 + sysled4: led@4 { 235 235 reg = <4>; 236 236 label = "system:green:act"; 237 237 linux,default-trigger = "heartbeat";
+2 -2
arch/arm/boot/dts/nxp/imx/imx51-zii-scu3-esb.dts
··· 153 153 #size-cells = <0>; 154 154 led-control = <0x0 0x0 0x3f83f8 0x0>; 155 155 156 - sysled3: led3@3 { 156 + sysled3: led@3 { 157 157 reg = <3>; 158 158 label = "system:red:power"; 159 159 linux,default-trigger = "default-on"; 160 160 }; 161 161 162 - sysled4: led4@4 { 162 + sysled4: led@4 { 163 163 reg = <4>; 164 164 label = "system:green:act"; 165 165 linux,default-trigger = "heartbeat";
+1 -1
arch/arm/boot/dts/nxp/imx/imx6q-ba16.dtsi
··· 337 337 pinctrl-0 = <&pinctrl_rtc>; 338 338 reg = <0x32>; 339 339 interrupt-parent = <&gpio4>; 340 - interrupts = <10 IRQ_TYPE_LEVEL_HIGH>; 340 + interrupts = <10 IRQ_TYPE_LEVEL_LOW>; 341 341 }; 342 342 }; 343 343
+1 -3
arch/arm64/boot/dts/broadcom/Makefile
··· 7 7 bcm2711-rpi-4-b.dtb \ 8 8 bcm2711-rpi-cm4-io.dtb \ 9 9 bcm2712-rpi-5-b.dtb \ 10 - bcm2712-rpi-5-b-ovl-rp1.dtb \ 11 10 bcm2712-d-rpi-5-b.dtb \ 12 11 bcm2837-rpi-2-b.dtb \ 13 12 bcm2837-rpi-3-a-plus.dtb \ 14 13 bcm2837-rpi-3-b.dtb \ 15 14 bcm2837-rpi-3-b-plus.dtb \ 16 15 bcm2837-rpi-cm3-io3.dtb \ 17 - bcm2837-rpi-zero-2-w.dtb \ 18 - rp1.dtbo 16 + bcm2837-rpi-zero-2-w.dtb 19 17 20 18 subdir-y += bcmbca 21 19 subdir-y += northstar2
arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b-ovl-rp1.dts → arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b-base.dtsi
+26 -13
arch/arm64/boot/dts/broadcom/bcm2712-rpi-5-b.dts
··· 1 1 // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 2 /* 3 - * bcm2712-rpi-5-b-ovl-rp1.dts is the overlay-ready DT which will make 4 - * the RP1 driver to load the RP1 dtb overlay at runtime, while 5 - * bcm2712-rpi-5-b.dts (this file) is the fully defined one (i.e. it 6 - * already contains RP1 node, so no overlay is loaded nor needed). 7 - * This file is intended to host the override nodes for the RP1 peripherals, 8 - * e.g. to declare the phy of the ethernet interface or the custom pin setup 9 - * for several RP1 peripherals. 10 - * This in turn is due to the fact that there's no current generic 11 - * infrastructure to reference nodes (i.e. the nodes in rp1-common.dtsi) that 12 - * are not yet defined in the DT since they are loaded at runtime via overlay. 3 + * As a loose attempt to separate RP1 customizations from SoC peripherals 4 + * definition, this file is intended to host the override nodes for the RP1 5 + * peripherals, e.g. to declare the phy of the ethernet interface or custom 6 + * pin setup. 13 7 * All other nodes that do not have anything to do with RP1 should be added 14 8 * to the included bcm2712-rpi-5-b-base.dtsi instead. 15 9 */ 16 10 17 11 /dts-v1/; 18 12 19 13 #include "bcm2712-rpi-5-b-base.dtsi" 20 14 21 15 / { 22 16 aliases { ··· 19 25 }; 20 26 21 27 &pcie2 { 22 - #include "rp1-nexus.dtsi" 28 + pci@0,0 { 29 + reg = <0x0 0x0 0x0 0x0 0x0>; 30 + ranges; 31 + bus-range = <0 1>; 32 + device_type = "pci"; 33 + #address-cells = <3>; 34 + #size-cells = <2>; 35 + 36 + dev@0,0 { 37 + compatible = "pci1de4,1"; 38 + reg = <0x10000 0x0 0x0 0x0 0x0>; 39 + ranges = <0x1 0x0 0x0 0x82010000 0x0 0x0 0x0 0x400000>; 40 + interrupt-controller; 41 + #interrupt-cells = <2>; 42 + #address-cells = <3>; 43 + #size-cells = <2>; 44 + 45 + #include "rp1-common.dtsi" 46 + }; 47 + }; 23 48 }; 24 49 25 50 &rp1_eth {
-14
arch/arm64/boot/dts/broadcom/rp1-nexus.dtsi
··· 1 - // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 - 3 - rp1_nexus { 4 - compatible = "pci1de4,1"; 5 - #address-cells = <3>; 6 - #size-cells = <2>; 7 - ranges = <0x01 0x00 0x00000000 8 - 0x02000000 0x00 0x00000000 9 - 0x0 0x400000>; 10 - interrupt-controller; 11 - #interrupt-cells = <2>; 12 - 13 - #include "rp1-common.dtsi" 14 - };
-11
arch/arm64/boot/dts/broadcom/rp1.dtso
··· 1 - // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 - 3 - /dts-v1/; 4 - /plugin/; 5 - 6 - &pcie2 { 7 - #address-cells = <3>; 8 - #size-cells = <2>; 9 - 10 - #include "rp1-nexus.dtsi" 11 - };
+1
arch/arm64/boot/dts/freescale/imx8mp-dhcom-som.dtsi
··· 113 113 ethphy0f: ethernet-phy@1 { /* SMSC LAN8740Ai */ 114 114 compatible = "ethernet-phy-id0007.c110", 115 115 "ethernet-phy-ieee802.3-c22"; 116 + clocks = <&clk IMX8MP_CLK_ENET_QOS>; 116 117 interrupt-parent = <&gpio3>; 117 118 interrupts = <19 IRQ_TYPE_LEVEL_LOW>; 118 119 pinctrl-0 = <&pinctrl_ethphy0>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-tx8p-ml81-moduline-display-106.dts
··· 9 9 #include "imx8mp-tx8p-ml81.dtsi" 10 10 11 11 / { 12 - compatible = "gocontroll,moduline-display", "fsl,imx8mp"; 12 + compatible = "gocontroll,moduline-display-106", "karo,tx8p-ml81", "fsl,imx8mp"; 13 13 chassis-type = "embedded"; 14 14 hardware = "Moduline Display V1.06"; 15 15 model = "GOcontroll Moduline Display baseboard";
+5
arch/arm64/boot/dts/freescale/imx8mp-tx8p-ml81.dtsi
··· 47 47 <&clk IMX8MP_SYS_PLL2_100M>, 48 48 <&clk IMX8MP_SYS_PLL2_50M>; 49 49 assigned-clock-rates = <266000000>, <100000000>, <50000000>; 50 + nvmem-cells = <&eth_mac1>; 50 51 phy-handle = <&ethphy0>; 51 52 phy-mode = "rmii"; 52 53 pinctrl-0 = <&pinctrl_eqos>; ··· 74 73 smsc,disable-energy-detect; 75 74 }; 76 75 }; 76 + }; 77 + 78 + &fec { 79 + nvmem-cells = <&eth_mac2>; 77 80 }; 78 81 79 82 &gpio1 {
+2 -1
arch/arm64/boot/dts/freescale/imx8qm-mek.dts
··· 263 263 regulator-max-microvolt = <3000000>; 264 264 gpio = <&lsio_gpio4 7 GPIO_ACTIVE_HIGH>; 265 265 enable-active-high; 266 + off-on-delay-us = <4800>; 266 267 }; 267 268 268 269 reg_audio: regulator-audio { ··· 577 576 compatible = "isil,isl29023"; 578 577 reg = <0x44>; 579 578 interrupt-parent = <&lsio_gpio4>; 580 - interrupts = <11 IRQ_TYPE_EDGE_FALLING>; 579 + interrupts = <11 IRQ_TYPE_LEVEL_LOW>; 581 580 }; 582 581 583 582 pressure-sensor@60 {
+4 -4
arch/arm64/boot/dts/freescale/imx8qm-ss-dma.dtsi
··· 172 172 173 173 &lpuart0 { 174 174 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 175 - dmas = <&edma2 13 0 0>, <&edma2 12 0 1>; 175 + dmas = <&edma2 12 0 FSL_EDMA_RX>, <&edma2 13 0 0>; 176 176 dma-names = "rx","tx"; 177 177 }; 178 178 179 179 &lpuart1 { 180 180 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 181 - dmas = <&edma2 15 0 0>, <&edma2 14 0 1>; 181 + dmas = <&edma2 14 0 FSL_EDMA_RX>, <&edma2 15 0 0>; 182 182 dma-names = "rx","tx"; 183 183 }; 184 184 185 185 &lpuart2 { 186 186 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 187 - dmas = <&edma2 17 0 0>, <&edma2 16 0 1>; 187 + dmas = <&edma2 16 0 FSL_EDMA_RX>, <&edma2 17 0 0>; 188 188 dma-names = "rx","tx"; 189 189 }; 190 190 191 191 &lpuart3 { 192 192 compatible = "fsl,imx8qm-lpuart", "fsl,imx8qxp-lpuart"; 193 - dmas = <&edma2 19 0 0>, <&edma2 18 0 1>; 193 + dmas = <&edma2 18 0 FSL_EDMA_RX>, <&edma2 19 0 0>; 194 194 dma-names = "rx","tx"; 195 195 }; 196 196
+1 -3
arch/arm64/boot/dts/freescale/imx95-toradex-smarc.dtsi
··· 406 406 "", 407 407 "", 408 408 "", 409 - "", 410 - "", 411 409 "SMARC_SDIO_WP"; 412 410 }; 413 411 ··· 580 582 ethphy1: ethernet-phy@1 { 581 583 reg = <1>; 582 584 interrupt-parent = <&som_gpio_expander_1>; 583 - interrupts = <6 IRQ_TYPE_LEVEL_LOW>; 585 + interrupts = <6 IRQ_TYPE_EDGE_FALLING>; 584 586 ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>; 585 587 ti,tx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>; 586 588 };
+1 -1
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 828 828 interrupts = <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>; 829 829 #address-cells = <3>; 830 830 #size-cells = <0>; 831 - clocks = <&scmi_clk IMX95_CLK_BUSAON>, 831 + clocks = <&scmi_clk IMX95_CLK_BUSWAKEUP>, 832 832 <&scmi_clk IMX95_CLK_I3C2SLOW>; 833 833 clock-names = "pclk", "fast_clk"; 834 834 status = "disabled";
+1 -1
arch/arm64/boot/dts/freescale/mba8mx.dtsi
··· 192 192 reset-assert-us = <500000>; 193 193 reset-deassert-us = <500>; 194 194 interrupt-parent = <&expander2>; 195 - interrupts = <6 IRQ_TYPE_EDGE_FALLING>; 195 + interrupts = <6 IRQ_TYPE_LEVEL_LOW>; 196 196 }; 197 197 }; 198 198 };
-3
arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
··· 675 675 snps,lfps_filter_quirk; 676 676 snps,dis_u2_susphy_quirk; 677 677 snps,dis_u3_susphy_quirk; 678 - snps,tx_de_emphasis_quirk; 679 - snps,tx_de_emphasis = <1>; 680 678 snps,dis_enblslpm_quirk; 681 - snps,gctl-reset-quirk; 682 679 usb-role-switch; 683 680 role-switch-default-mode = "host"; 684 681 port {
+1 -1
arch/arm64/boot/dts/ti/k3-am62-lp-sk-nand.dtso
··· 14 14 }; 15 15 16 16 &main_pmx0 { 17 - gpmc0_pins_default: gpmc0-pins-default { 17 + gpmc0_pins_default: gpmc0-default-pins { 18 18 pinctrl-single,pins = < 19 19 AM62X_IOPAD(0x003c, PIN_INPUT, 0) /* (K19) GPMC0_AD0 */ 20 20 AM62X_IOPAD(0x0040, PIN_INPUT, 0) /* (L19) GPMC0_AD1 */
+2 -5
arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-peb-c-010.dtso
··· 30 30 <&main_pktdma 0xc206 15>, /* egress slice 1 */ 31 31 <&main_pktdma 0xc207 15>, /* egress slice 1 */ 32 32 <&main_pktdma 0x4200 15>, /* ingress slice 0 */ 33 - <&main_pktdma 0x4201 15>, /* ingress slice 1 */ 34 - <&main_pktdma 0x4202 0>, /* mgmnt rsp slice 0 */ 35 - <&main_pktdma 0x4203 0>; /* mgmnt rsp slice 1 */ 33 + <&main_pktdma 0x4201 15>; /* ingress slice 1 */ 36 34 dma-names = "tx0-0", "tx0-1", "tx0-2", "tx0-3", 37 35 "tx1-0", "tx1-1", "tx1-2", "tx1-3", 38 - "rx0", "rx1", 39 - "rxmgm0", "rxmgm1"; 36 + "rx0", "rx1"; 40 37 41 38 firmware-name = "ti-pruss/am65x-sr2-pru0-prueth-fw.elf", 42 39 "ti-pruss/am65x-sr2-rtu0-prueth-fw.elf",
+4 -4
arch/arm64/boot/dts/ti/k3-am642-phyboard-electra-x27-gpio1-spi1-uart3.dtso
··· 20 20 }; 21 21 22 22 &main_pmx0 { 23 - main_gpio1_exp_header_gpio_pins_default: main-gpio1-exp-header-gpio-pins-default { 23 + main_gpio1_exp_header_gpio_pins_default: main-gpio1-exp-header-gpio-default-pins { 24 24 pinctrl-single,pins = < 25 25 AM64X_IOPAD(0x0220, PIN_INPUT, 7) /* (D14) SPI1_CS1.GPIO1_48 */ 26 26 >; 27 27 }; 28 28 29 - main_spi1_pins_default: main-spi1-pins-default { 29 + main_spi1_pins_default: main-spi1-default-pins { 30 30 pinctrl-single,pins = < 31 31 AM64X_IOPAD(0x0224, PIN_INPUT, 0) /* (C14) SPI1_CLK */ 32 32 AM64X_IOPAD(0x021C, PIN_OUTPUT, 0) /* (B14) SPI1_CS0 */ ··· 35 35 >; 36 36 }; 37 37 38 - main_uart3_pins_default: main-uart3-pins-default { 38 + main_uart3_pins_default: main-uart3-default-pins { 39 39 pinctrl-single,pins = < 40 40 AM64X_IOPAD(0x0048, PIN_INPUT, 2) /* (U20) GPMC0_AD3.UART3_RXD */ 41 41 AM64X_IOPAD(0x004c, PIN_OUTPUT, 2) /* (U18) GPMC0_AD4.UART3_TXD */ ··· 52 52 &main_spi1 { 53 53 pinctrl-names = "default"; 54 54 pinctrl-0 = <&main_spi1_pins_default>; 55 - ti,pindir-d0-out-d1-in = <1>; 55 + ti,pindir-d0-out-d1-in; 56 56 status = "okay"; 57 57 }; 58 58
+1 -1
arch/arm64/include/asm/efi.h
··· 45 45 * switching to the EFI runtime stack. 46 46 */ 47 47 #define current_in_efi() \ 48 - (!preemptible() && efi_rt_stack_top != NULL && \ 48 + (efi_rt_stack_top != NULL && \ 49 49 on_task_stack(current, READ_ONCE(efi_rt_stack_top[-1]), 1)) 50 50 51 51 #define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+1 -1
arch/arm64/include/asm/suspend.h
··· 2 2 #ifndef __ASM_SUSPEND_H 3 3 #define __ASM_SUSPEND_H 4 4 5 - #define NR_CTX_REGS 13 5 + #define NR_CTX_REGS 14 6 6 #define NR_CALLEE_SAVED_REGS 12 7 7 8 8 /*
+4 -2
arch/arm64/mm/pageattr.c
··· 171 171 */ 172 172 area = find_vm_area((void *)addr); 173 173 if (!area || 174 - end > (unsigned long)kasan_reset_tag(area->addr) + area->size || 174 + ((unsigned long)kasan_reset_tag((void *)end) > 175 + (unsigned long)kasan_reset_tag(area->addr) + area->size) || 175 176 ((area->flags & (VM_ALLOC | VM_ALLOW_HUGE_VMAP)) != VM_ALLOC)) 176 177 return -EINVAL; 177 178 ··· 185 184 */ 186 185 if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY || 187 186 pgprot_val(clear_mask) == PTE_RDONLY)) { 188 - unsigned long idx = (start - (unsigned long)kasan_reset_tag(area->addr)) 187 + unsigned long idx = ((unsigned long)kasan_reset_tag((void *)start) - 188 + (unsigned long)kasan_reset_tag(area->addr)) 189 189 >> PAGE_SHIFT; 190 190 for (; numpages; idx++, numpages--) { 191 191 ret = __change_memory_common((u64)page_address(area->pages[idx]),
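The pageattr fix untags both ends of the comparison: with KASAN software or hardware tags, the top byte of a pointer carries a tag, so comparing a tagged address against an untagged bound is meaningless. A standalone sketch of the rule (the range_within() helper is hypothetical):

  #include <linux/kasan.h>
  #include <linux/types.h>

  /* Hypothetical helper: strip pointer tags from *both* sides before an
   * address-range comparison, otherwise tag bits corrupt the compare. */
  static bool range_within(void *addr, size_t len, void *base, size_t size)
  {
          unsigned long a = (unsigned long)kasan_reset_tag(addr);
          unsigned long b = (unsigned long)kasan_reset_tag(base);

          return a >= b && a + len <= b + size;
  }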
+8
arch/arm64/mm/proc.S
··· 110 110 * call stack. 111 111 */ 112 112 str x18, [x0, #96] 113 + alternative_if ARM64_HAS_TCR2 114 + mrs x2, REG_TCR2_EL1 115 + str x2, [x0, #104] 116 + alternative_else_nop_endif 113 117 ret 114 118 SYM_FUNC_END(cpu_do_suspend) 115 119 ··· 148 144 msr tcr_el1, x8 149 145 msr vbar_el1, x9 150 146 msr mdscr_el1, x10 147 + alternative_if ARM64_HAS_TCR2 148 + ldr x2, [x0, #104] 149 + msr REG_TCR2_EL1, x2 150 + alternative_else_nop_endif 151 151 152 152 msr sctlr_el1, x12 153 153 set_this_cpu_offset x13
-4
arch/riscv/boot/Makefile
··· 31 31 32 32 endif 33 33 34 - ifdef CONFIG_RELOCATABLE 35 - $(obj)/Image: vmlinux.unstripped FORCE 36 - else 37 34 $(obj)/Image: vmlinux FORCE 38 - endif 39 35 $(call if_changed,objcopy) 40 36 41 37 $(obj)/Image.gz: $(obj)/Image FORCE
-2
arch/riscv/configs/nommu_k210_defconfig
··· 55 55 # CONFIG_HW_RANDOM is not set 56 56 # CONFIG_DEVMEM is not set 57 57 CONFIG_I2C=y 58 - # CONFIG_I2C_COMPAT is not set 59 58 CONFIG_I2C_CHARDEV=y 60 59 # CONFIG_I2C_HELPER_AUTO is not set 61 60 CONFIG_I2C_DESIGNWARE_CORE=y ··· 88 89 # CONFIG_FRAME_POINTER is not set 89 90 # CONFIG_DEBUG_MISC is not set 90 91 CONFIG_PANIC_ON_OOPS=y 91 - # CONFIG_SCHED_DEBUG is not set 92 92 # CONFIG_RCU_TRACE is not set 93 93 # CONFIG_FTRACE is not set 94 94 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/configs/nommu_k210_sdcard_defconfig
··· 86 86 # CONFIG_FRAME_POINTER is not set 87 87 # CONFIG_DEBUG_MISC is not set 88 88 CONFIG_PANIC_ON_OOPS=y 89 - # CONFIG_SCHED_DEBUG is not set 90 89 # CONFIG_RCU_TRACE is not set 91 90 # CONFIG_FTRACE is not set 92 91 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/configs/nommu_virt_defconfig
··· 66 66 # CONFIG_MISC_FILESYSTEMS is not set 67 67 CONFIG_LSM="[]" 68 68 CONFIG_PRINTK_TIME=y 69 - # CONFIG_SCHED_DEBUG is not set 70 69 # CONFIG_RCU_TRACE is not set 71 70 # CONFIG_FTRACE is not set 72 71 # CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/riscv/include/asm/bitops.h
··· 11 11 #endif /* _LINUX_BITOPS_H */ 12 12 13 13 #include <linux/compiler.h> 14 - #include <linux/irqflags.h> 15 14 #include <asm/barrier.h> 16 15 #include <asm/bitsperlong.h> 17 16
-4
arch/riscv/include/asm/pgtable.h
··· 124 124 #ifdef CONFIG_64BIT 125 125 #include <asm/pgtable-64.h> 126 126 127 - #define VA_USER_SV39 (UL(1) << (VA_BITS_SV39 - 1)) 128 - #define VA_USER_SV48 (UL(1) << (VA_BITS_SV48 - 1)) 129 - #define VA_USER_SV57 (UL(1) << (VA_BITS_SV57 - 1)) 130 - 131 127 #define MMAP_VA_BITS_64 ((VA_BITS >= VA_BITS_SV48) ? VA_BITS_SV48 : VA_BITS) 132 128 #define MMAP_MIN_VA_BITS_64 (VA_BITS_SV39) 133 129 #define MMAP_VA_BITS (is_compat_task() ? VA_BITS_SV32 : MMAP_VA_BITS_64)
+8 -7
arch/riscv/kernel/Makefile
··· 3 3 # Makefile for the RISC-V Linux kernel 4 4 # 5 5 6 - ifdef CONFIG_FTRACE 7 - CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) 8 - CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) 9 - CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 10 - CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 11 - endif 12 6 CFLAGS_syscall_table.o += $(call cc-disable-warning, override-init) 13 7 CFLAGS_compat_syscall_table.o += $(call cc-disable-warning, override-init) 14 8 ··· 18 24 ifdef CONFIG_FTRACE 19 25 CFLAGS_REMOVE_alternative.o = $(CC_FLAGS_FTRACE) 20 26 CFLAGS_REMOVE_cpufeature.o = $(CC_FLAGS_FTRACE) 21 - CFLAGS_REMOVE_sbi_ecall.o = $(CC_FLAGS_FTRACE) 22 27 endif 23 28 ifdef CONFIG_RELOCATABLE 24 29 CFLAGS_alternative.o += -fno-pie ··· 34 41 CFLAGS_cpufeature.o += -D__NO_FORTIFY 35 42 CFLAGS_sbi_ecall.o += -D__NO_FORTIFY 36 43 endif 44 + endif 45 + 46 + ifdef CONFIG_FTRACE 47 + CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) 48 + CFLAGS_REMOVE_patch.o = $(CC_FLAGS_FTRACE) 49 + CFLAGS_REMOVE_sbi.o = $(CC_FLAGS_FTRACE) 50 + CFLAGS_REMOVE_return_address.o = $(CC_FLAGS_FTRACE) 51 + CFLAGS_REMOVE_sbi_ecall.o = $(CC_FLAGS_FTRACE) 37 52 endif 38 53 39 54 always-$(KBUILD_BUILTIN) += vmlinux.lds
+1 -1
arch/riscv/kernel/cpu_ops_sbi.c
··· 85 85 int ret; 86 86 87 87 ret = sbi_hsm_hart_stop(); 88 - pr_crit("Unable to stop the cpu %u (%d)\n", smp_processor_id(), ret); 88 + pr_crit("Unable to stop the cpu %d (%d)\n", smp_processor_id(), ret); 89 89 } 90 90 91 91 static int sbi_cpu_is_stopped(unsigned int cpuid)
+11 -12
arch/riscv/kernel/cpufeature.c
··· 301 301 RISCV_ISA_EXT_ZALRSC, 302 302 }; 303 303 304 + #define RISCV_ISA_EXT_ZKN \ 305 + RISCV_ISA_EXT_ZBKB, \ 306 + RISCV_ISA_EXT_ZBKC, \ 307 + RISCV_ISA_EXT_ZBKX, \ 308 + RISCV_ISA_EXT_ZKND, \ 309 + RISCV_ISA_EXT_ZKNE, \ 310 + RISCV_ISA_EXT_ZKNH 311 + 304 312 static const unsigned int riscv_zk_bundled_exts[] = { 305 - RISCV_ISA_EXT_ZBKB, 306 - RISCV_ISA_EXT_ZBKC, 307 - RISCV_ISA_EXT_ZBKX, 308 - RISCV_ISA_EXT_ZKND, 309 - RISCV_ISA_EXT_ZKNE, 313 + RISCV_ISA_EXT_ZKN, 310 314 RISCV_ISA_EXT_ZKR, 311 - RISCV_ISA_EXT_ZKT, 315 + RISCV_ISA_EXT_ZKT 312 316 }; 313 317 314 318 static const unsigned int riscv_zkn_bundled_exts[] = { 315 - RISCV_ISA_EXT_ZBKB, 316 - RISCV_ISA_EXT_ZBKC, 317 - RISCV_ISA_EXT_ZBKX, 318 - RISCV_ISA_EXT_ZKND, 319 - RISCV_ISA_EXT_ZKNE, 320 - RISCV_ISA_EXT_ZKNH, 319 + RISCV_ISA_EXT_ZKN 321 320 }; 322 321 323 322 static const unsigned int riscv_zks_bundled_exts[] = {
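The restructuring relies on a macro that expands to a comma-separated initializer list, so the Zkn members are written once and reused by both bundles. A tiny standalone illustration of the technique (names and values below are placeholders, not the real extension IDs):

  /* One macro expands to a comma-separated list, letting several arrays
   * share the same members without duplicating them. */
  #define ZKN_MEMBERS 1, 2, 3     /* stand-ins for ZBKB, ZBKC, ..., ZKNH */

  static const unsigned int zkn_bundle[] = { ZKN_MEMBERS };
  static const unsigned int zk_bundle[] = { ZKN_MEMBERS, 4, 5 }; /* + ZKR, ZKT */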
+1 -1
arch/riscv/kernel/kexec_image.c
··· 22 22 if (!h || kernel_len < sizeof(*h)) 23 23 return -EINVAL; 24 24 25 - /* According to Documentation/riscv/boot-image-header.rst, 25 + /* According to Documentation/arch/riscv/boot-image-header.rst, 26 26 * use "magic2" field to check when version >= 0.2. 27 27 */ 28 28
+2
arch/riscv/kernel/tests/kprobes/test-kprobes-asm.S
··· 181 181 182 182 #endif /* CONFIG_RISCV_ISA_C */ 183 183 184 + .section .rodata 184 185 SYM_DATA_START(test_kprobes_addresses) 185 186 RISCV_PTR test_kprobes_add_addr1 186 187 RISCV_PTR test_kprobes_add_addr2 ··· 213 212 RISCV_PTR 0 214 213 SYM_DATA_END(test_kprobes_addresses) 215 214 215 + .section .rodata 216 216 SYM_DATA_START(test_kprobes_functions) 217 217 RISCV_PTR test_kprobes_add 218 218 RISCV_PTR test_kprobes_jal
+3 -1
arch/riscv/kernel/traps.c
··· 339 339 340 340 add_random_kstack_offset(); 341 341 342 - if (syscall >= 0 && syscall < NR_syscalls) 342 + if (syscall >= 0 && syscall < NR_syscalls) { 343 + syscall = array_index_nospec(syscall, NR_syscalls); 343 344 syscall_handler(regs, syscall); 345 + } 344 346 345 347 /* 346 348 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
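The traps.c change applies the standard Spectre-v1 hardening pattern: after the bounds check succeeds, the attacker-controllable syscall number is clamped under speculation before it indexes the table. A minimal sketch of the pattern (the dispatch() helper is hypothetical):

  #include <linux/errno.h>
  #include <linux/nospec.h>

  /* Hypothetical dispatcher: bounds-check first, then sanitize the index
   * with array_index_nospec() so even a mispredicted branch cannot use
   * an out-of-bounds value to index the table. */
  static long dispatch(unsigned long nr, long (*const table[])(void), size_t n)
  {
          if (nr >= n)
                  return -ENOSYS;
          nr = array_index_nospec(nr, n);
          return table[nr]();
  }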
+1 -1
arch/sh/kernel/perf_event.c
··· 7 7 * Heavily based on the x86 and PowerPC implementations. 8 8 * 9 9 * x86: 10 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 10 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 11 11 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 12 12 * Copyright (C) 2009 Jaswinder Singh Rajput 13 13 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+23
arch/sparc/kernel/pci.c
··· 181 181 182 182 __setup("ofpci_debug=", ofpci_debug); 183 183 184 + static void of_fixup_pci_pref(struct pci_dev *dev, int index, 185 + struct resource *res) 186 + { 187 + struct pci_bus_region region; 188 + 189 + if (!(res->flags & IORESOURCE_MEM_64)) 190 + return; 191 + 192 + if (!resource_size(res)) 193 + return; 194 + 195 + pcibios_resource_to_bus(dev->bus, &region, res); 196 + if (region.end <= ~((u32)0)) 197 + return; 198 + 199 + if (!(res->flags & IORESOURCE_PREFETCH)) { 200 + res->flags |= IORESOURCE_PREFETCH; 201 + pci_info(dev, "reg 0x%x: fixup: pref added to 64-bit resource\n", 202 + index); 203 + } 204 + } 205 + 184 206 static unsigned long pci_parse_of_flags(u32 addr0) 185 207 { 186 208 unsigned long flags = 0; ··· 266 244 res->end = op_res->end; 267 245 res->flags = flags; 268 246 res->name = pci_name(dev); 247 + of_fixup_pci_pref(dev, i, res); 269 248 270 249 pci_info(dev, "reg 0x%x: %pR\n", i, res); 271 250 }
+1 -1
arch/sparc/kernel/perf_event.c
··· 6 6 * This code is based almost entirely upon the x86 perf event 7 7 * code, which is: 8 8 * 9 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 9 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 10 10 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 11 11 * Copyright (C) 2009 Jaswinder Singh Rajput 12 12 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+2
arch/x86/coco/sev/Makefile
··· 8 8 # GCC may fail to respect __no_sanitize_address or __no_kcsan when inlining 9 9 KASAN_SANITIZE_noinstr.o := n 10 10 KCSAN_SANITIZE_noinstr.o := n 11 + 12 + GCOV_PROFILE_noinstr.o := n
+1 -1
arch/x86/events/core.c
··· 1 1 /* 2 2 * Performance events x86 architecture code 3 3 * 4 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 4 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 5 5 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 6 6 * Copyright (C) 2009 Jaswinder Singh Rajput 7 7 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+1 -1
arch/x86/events/perf_event.h
··· 1 1 /* 2 2 * Performance events x86 architecture header 3 3 * 4 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 4 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 5 5 * Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar 6 6 * Copyright (C) 2009 Jaswinder Singh Rajput 7 7 * Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
+1 -1
arch/x86/kernel/x86_init.c
··· 1 1 /* 2 - * Copyright (C) 2009 Thomas Gleixner <tglx@linutronix.de> 2 + * Copyright (C) 2009 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 3 3 * 4 4 * For licencing details see kernel-base/COPYING 5 5 */
+1 -1
arch/x86/mm/pti.c
··· 15 15 * Signed-off-by: Michael Schwarz <michael.schwarz@iaik.tugraz.at> 16 16 * 17 17 * Major changes to the original code by: Dave Hansen <dave.hansen@intel.com> 18 - * Mostly rewritten by Thomas Gleixner <tglx@linutronix.de> and 18 + * Mostly rewritten by Thomas Gleixner <tglx@kernel.org> and 19 19 * Andy Lutomirsky <luto@amacapital.net> 20 20 */ 21 21 #include <linux/kernel.h>
+18 -5
block/blk-integrity.c
··· 140 140 bool blk_integrity_merge_rq(struct request_queue *q, struct request *req, 141 141 struct request *next) 142 142 { 143 + struct bio_integrity_payload *bip, *bip_next; 144 + 143 145 if (blk_integrity_rq(req) == 0 && blk_integrity_rq(next) == 0) 144 146 return true; 145 147 146 148 if (blk_integrity_rq(req) == 0 || blk_integrity_rq(next) == 0) 147 149 return false; 148 150 149 - if (bio_integrity(req->bio)->bip_flags != 150 - bio_integrity(next->bio)->bip_flags) 151 + bip = bio_integrity(req->bio); 152 + bip_next = bio_integrity(next->bio); 153 + if (bip->bip_flags != bip_next->bip_flags) 154 + return false; 155 + 156 + if (bip->bip_flags & BIP_CHECK_APPTAG && 157 + bip->app_tag != bip_next->app_tag) 151 158 return false; 152 159 153 160 if (req->nr_integrity_segments + next->nr_integrity_segments > ··· 170 163 bool blk_integrity_merge_bio(struct request_queue *q, struct request *req, 171 164 struct bio *bio) 172 165 { 166 + struct bio_integrity_payload *bip, *bip_bio = bio_integrity(bio); 173 167 int nr_integrity_segs; 174 168 175 - if (blk_integrity_rq(req) == 0 && bio_integrity(bio) == NULL) 169 + if (blk_integrity_rq(req) == 0 && bip_bio == NULL) 176 170 return true; 177 171 178 - if (blk_integrity_rq(req) == 0 || bio_integrity(bio) == NULL) 172 + if (blk_integrity_rq(req) == 0 || bip_bio == NULL) 179 173 return false; 180 174 181 - if (bio_integrity(req->bio)->bip_flags != bio_integrity(bio)->bip_flags) 175 + bip = bio_integrity(req->bio); 176 + if (bip->bip_flags != bip_bio->bip_flags) 177 + return false; 178 + 179 + if (bip->bip_flags & BIP_CHECK_APPTAG && 180 + bip->app_tag != bip_bio->app_tag) 182 181 return false; 183 182 184 183 nr_integrity_segs = blk_rq_count_integrity_sg(q, bio);
+1 -2
block/blk-mq.c
··· 4553 4553 * Make sure reading the old queue_hw_ctx from other 4554 4554 * context concurrently won't trigger uaf. 4555 4555 */ 4556 - synchronize_rcu_expedited(); 4557 - kfree(hctxs); 4556 + kfree_rcu_mightsleep(hctxs); 4558 4557 hctxs = new_hctxs; 4559 4558 } 4560 4559
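The blk-mq change swaps a synchronous grace-period wait plus kfree() for kfree_rcu_mightsleep(), which hands the old hctx table to RCU and frees it after a grace period without the expedited stall. A minimal sketch of the idiom, assuming a hypothetical structure with an RCU-protected table pointer:

  #include <linux/rcupdate.h>
  #include <linux/slab.h>

  struct demo {
          int __rcu *tbl;         /* hypothetical RCU-protected table */
  };

  static void swap_table(struct demo *d, int *new_tbl)
  {
          /* Publish the new table, then let RCU free the old one after a
           * grace period; the single-argument form may sleep. */
          int *old = rcu_replace_pointer(d->tbl, new_tbl, true);

          kfree_rcu_mightsleep(old);
  }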
+9 -16
block/blk-rq-qos.h
··· 112 112 113 113 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio) 114 114 { 115 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 116 - q->rq_qos) 115 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 117 116 __rq_qos_cleanup(q->rq_qos, bio); 118 117 } 119 118 120 119 static inline void rq_qos_done(struct request_queue *q, struct request *rq) 121 120 { 122 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 123 - q->rq_qos && !blk_rq_is_passthrough(rq)) 121 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && 122 + q->rq_qos && !blk_rq_is_passthrough(rq)) 124 123 __rq_qos_done(q->rq_qos, rq); 125 124 } 126 125 127 126 static inline void rq_qos_issue(struct request_queue *q, struct request *rq) 128 127 { 129 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 130 - q->rq_qos) 128 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 131 129 __rq_qos_issue(q->rq_qos, rq); 132 130 } 133 131 134 132 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq) 135 133 { 136 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 137 - q->rq_qos) 134 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 138 135 __rq_qos_requeue(q->rq_qos, rq); 139 136 } 140 137 ··· 159 162 160 163 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio) 161 164 { 162 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 163 - q->rq_qos) { 165 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) { 164 166 bio_set_flag(bio, BIO_QOS_THROTTLED); 165 167 __rq_qos_throttle(q->rq_qos, bio); 166 168 } ··· 168 172 static inline void rq_qos_track(struct request_queue *q, struct request *rq, 169 173 struct bio *bio) 170 174 { 171 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 172 - q->rq_qos) 175 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 173 176 __rq_qos_track(q->rq_qos, rq, bio); 174 177 } 175 178 176 179 static inline void rq_qos_merge(struct request_queue *q, struct request *rq, 177 180 struct bio *bio) 178 181 { 179 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 180 - q->rq_qos) { 182 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) { 181 183 bio_set_flag(bio, BIO_QOS_MERGED); 182 184 __rq_qos_merge(q->rq_qos, rq, bio); 183 185 } ··· 183 189 184 190 static inline void rq_qos_queue_depth_changed(struct request_queue *q) 185 191 { 186 - if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) && 187 - q->rq_qos) 192 + if (test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags) && q->rq_qos) 188 193 __rq_qos_queue_depth_changed(q->rq_qos); 189 194 } 190 195
+11 -8
drivers/acpi/pci_irq.c
··· 188 188 * the IRQ value, which is hardwired to specific interrupt inputs on 189 189 * the interrupt controller. 190 190 */ 191 - pr_debug("%04x:%02x:%02x[%c] -> %s[%d]\n", 191 + pr_debug("%04x:%02x:%02x[%c] -> %s[%u]\n", 192 192 entry->id.segment, entry->id.bus, entry->id.device, 193 193 pin_name(entry->pin), prt->source, entry->index); 194 194 ··· 384 384 int acpi_pci_irq_enable(struct pci_dev *dev) 385 385 { 386 386 struct acpi_prt_entry *entry; 387 - int gsi; 387 + u32 gsi; 388 388 u8 pin; 389 389 int triggering = ACPI_LEVEL_SENSITIVE; 390 390 /* ··· 422 422 return 0; 423 423 } 424 424 425 + rc = -ENODEV; 426 + 425 427 if (entry) { 426 428 if (entry->link) 427 - gsi = acpi_pci_link_allocate_irq(entry->link, 429 + rc = acpi_pci_link_allocate_irq(entry->link, 428 430 entry->index, 429 431 &triggering, &polarity, 430 - &link); 431 - else 432 + &link, &gsi); 433 + else { 432 434 gsi = entry->index; 433 - } else 434 - gsi = -1; 435 + rc = 0; 436 + } 437 + } 435 438 436 - if (gsi < 0) { 439 + if (rc < 0) { 437 440 /* 438 441 * No IRQ known to the ACPI subsystem - maybe the BIOS / 439 442 * driver reported one, then use it. Exit in any case.
+25 -14
drivers/acpi/pci_link.c
··· 448 448 /* >IRQ15 */ 449 449 }; 450 450 451 - static int acpi_irq_pci_sharing_penalty(int irq) 451 + static int acpi_irq_pci_sharing_penalty(u32 irq) 452 452 { 453 453 struct acpi_pci_link *link; 454 454 int penalty = 0; ··· 474 474 return penalty; 475 475 } 476 476 477 - static int acpi_irq_get_penalty(int irq) 477 + static int acpi_irq_get_penalty(u32 irq) 478 478 { 479 479 int penalty = 0; 480 480 ··· 528 528 static int acpi_pci_link_allocate(struct acpi_pci_link *link) 529 529 { 530 530 acpi_handle handle = link->device->handle; 531 - int irq; 531 + u32 irq; 532 532 int i; 533 533 534 534 if (link->irq.initialized) { ··· 598 598 return 0; 599 599 } 600 600 601 - /* 602 - * acpi_pci_link_allocate_irq 603 - * success: return IRQ >= 0 604 - * failure: return -1 601 + /** 602 + * acpi_pci_link_allocate_irq(): Retrieve a link device GSI 603 + * 604 + * @handle: Handle for the link device 605 + * @index: GSI index 606 + * @triggering: pointer to store the GSI trigger 607 + * @polarity: pointer to store GSI polarity 608 + * @name: pointer to store link device name 609 + * @gsi: pointer to store GSI number 610 + * 611 + * Returns: 612 + * 0 on success with @triggering, @polarity, @name, @gsi initialized. 613 + * -ENODEV on failure 605 614 */ 606 615 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering, 607 - int *polarity, char **name) 616 + int *polarity, char **name, u32 *gsi) 608 617 { 609 618 struct acpi_device *device = acpi_fetch_acpi_dev(handle); 610 619 struct acpi_pci_link *link; 611 620 612 621 if (!device) { 613 622 acpi_handle_err(handle, "Invalid link device\n"); 614 - return -1; 623 + return -ENODEV; 615 624 } 616 625 617 626 link = acpi_driver_data(device); 618 627 if (!link) { 619 628 acpi_handle_err(handle, "Invalid link context\n"); 620 - return -1; 629 + return -ENODEV; 621 630 } 622 631 623 632 /* TBD: Support multiple index (IRQ) entries per Link Device */ 624 633 if (index) { 625 634 acpi_handle_err(handle, "Invalid index %d\n", index); 626 - return -1; 635 + return -ENODEV; 627 636 } 628 637 629 638 mutex_lock(&acpi_link_lock); 630 639 if (acpi_pci_link_allocate(link)) { 631 640 mutex_unlock(&acpi_link_lock); 632 - return -1; 641 + return -ENODEV; 633 642 } 634 643 635 644 if (!link->irq.active) { 636 645 mutex_unlock(&acpi_link_lock); 637 646 acpi_handle_err(handle, "Link active IRQ is 0!\n"); 638 - return -1; 647 + return -ENODEV; 639 648 } 640 649 link->refcnt++; 641 650 mutex_unlock(&acpi_link_lock); ··· 656 647 if (name) 657 648 *name = acpi_device_bid(link->device); 658 649 acpi_handle_debug(handle, "Link is referenced\n"); 659 - return link->irq.active; 650 + *gsi = link->irq.active; 651 + 652 + return 0; 660 653 } 661 654 662 655 /*
-3
drivers/android/binder/page_range.rs
··· 727 727 drop(mm); 728 728 drop(page); 729 729 730 - // SAFETY: We just unlocked the lru lock, but it should be locked when we return. 731 - unsafe { bindings::spin_lock(&raw mut (*lru).lock) }; 732 - 733 730 LRU_REMOVED_ENTRY 734 731 }
+2 -1
drivers/atm/he.c
··· 1587 1587 he_dev->tbrq_base, he_dev->tbrq_phys); 1588 1588 1589 1589 if (he_dev->tpdrq_base) 1590 - dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq), 1590 + dma_free_coherent(&he_dev->pci_dev->dev, 1591 + CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq), 1591 1592 he_dev->tpdrq_base, he_dev->tpdrq_phys); 1592 1593 1593 1594 dma_pool_destroy(he_dev->tpd_pool);
+33 -12
drivers/block/loop.c
··· 1225 1225 } 1226 1226 1227 1227 static int 1228 - loop_set_status(struct loop_device *lo, const struct loop_info64 *info) 1228 + loop_set_status(struct loop_device *lo, blk_mode_t mode, 1229 + struct block_device *bdev, const struct loop_info64 *info) 1229 1230 { 1230 1231 int err; 1231 1232 bool partscan = false; 1232 1233 bool size_changed = false; 1233 1234 unsigned int memflags; 1234 1235 1236 + /* 1237 + * If we don't hold exclusive handle for the device, upgrade to it 1238 + * here to avoid changing device under exclusive owner. 1239 + */ 1240 + if (!(mode & BLK_OPEN_EXCL)) { 1241 + err = bd_prepare_to_claim(bdev, loop_set_status, NULL); 1242 + if (err) 1243 + goto out_reread_partitions; 1244 + } 1245 + 1235 1246 err = mutex_lock_killable(&lo->lo_mutex); 1236 1247 if (err) 1237 - return err; 1248 + goto out_abort_claiming; 1249 + 1238 1250 if (lo->lo_state != Lo_bound) { 1239 1251 err = -ENXIO; 1240 1252 goto out_unlock; ··· 1285 1273 } 1286 1274 out_unlock: 1287 1275 mutex_unlock(&lo->lo_mutex); 1276 + out_abort_claiming: 1277 + if (!(mode & BLK_OPEN_EXCL)) 1278 + bd_abort_claiming(bdev, loop_set_status); 1279 + out_reread_partitions: 1288 1280 if (partscan) 1289 1281 loop_reread_partitions(lo); 1290 1282 ··· 1368 1352 } 1369 1353 1370 1354 static int 1371 - loop_set_status_old(struct loop_device *lo, const struct loop_info __user *arg) 1355 + loop_set_status_old(struct loop_device *lo, blk_mode_t mode, 1356 + struct block_device *bdev, 1357 + const struct loop_info __user *arg) 1372 1358 { 1373 1359 struct loop_info info; 1374 1360 struct loop_info64 info64; ··· 1378 1360 if (copy_from_user(&info, arg, sizeof (struct loop_info))) 1379 1361 return -EFAULT; 1380 1362 loop_info64_from_old(&info, &info64); 1381 - return loop_set_status(lo, &info64); 1363 + return loop_set_status(lo, mode, bdev, &info64); 1382 1364 } 1383 1365 1384 1366 static int 1385 - loop_set_status64(struct loop_device *lo, const struct loop_info64 __user *arg) 1367 + loop_set_status64(struct loop_device *lo, blk_mode_t mode, 1368 + struct block_device *bdev, 1369 + const struct loop_info64 __user *arg) 1386 1370 { 1387 1371 struct loop_info64 info64; 1388 1372 1389 1373 if (copy_from_user(&info64, arg, sizeof (struct loop_info64))) 1390 1374 return -EFAULT; 1391 - return loop_set_status(lo, &info64); 1375 + return loop_set_status(lo, mode, bdev, &info64); 1392 1376 } 1393 1377 1394 1378 static int ··· 1569 1549 case LOOP_SET_STATUS: 1570 1550 err = -EPERM; 1571 1551 if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN)) 1572 - err = loop_set_status_old(lo, argp); 1552 + err = loop_set_status_old(lo, mode, bdev, argp); 1573 1553 break; 1574 1554 case LOOP_GET_STATUS: 1575 1555 return loop_get_status_old(lo, argp); 1576 1556 case LOOP_SET_STATUS64: 1577 1557 err = -EPERM; 1578 1558 if ((mode & BLK_OPEN_WRITE) || capable(CAP_SYS_ADMIN)) 1579 - err = loop_set_status64(lo, argp); 1559 + err = loop_set_status64(lo, mode, bdev, argp); 1580 1560 break; 1581 1561 case LOOP_GET_STATUS64: 1582 1562 return loop_get_status64(lo, argp); ··· 1670 1650 } 1671 1651 1672 1652 static int 1673 - loop_set_status_compat(struct loop_device *lo, 1674 - const struct compat_loop_info __user *arg) 1653 + loop_set_status_compat(struct loop_device *lo, blk_mode_t mode, 1654 + struct block_device *bdev, 1655 + const struct compat_loop_info __user *arg) 1675 1656 { 1676 1657 struct loop_info64 info64; 1677 1658 int ret; ··· 1680 1659 ret = loop_info64_from_compat(arg, &info64); 1681 1660 if (ret < 0) 1682 1661 return ret; 1683 - return 
loop_set_status(lo, &info64); 1662 + return loop_set_status(lo, mode, bdev, &info64); 1684 1663 } 1685 1664 1686 1665 static int ··· 1706 1685 1707 1686 switch(cmd) { 1708 1687 case LOOP_SET_STATUS: 1709 - err = loop_set_status_compat(lo, 1688 + err = loop_set_status_compat(lo, mode, bdev, 1710 1689 (const struct compat_loop_info __user *)arg); 1711 1690 break; 1712 1691 case LOOP_GET_STATUS:
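[Note: the reworked loop_set_status() brackets the state change with a temporary exclusive claim whenever the caller did not open the device exclusively. A minimal sketch of that pattern, with my_change_state() as a hypothetical stand-in for the real ioctl handler:

#include <linux/blkdev.h>

/* Sketch only; my_change_state() is a hypothetical stand-in. */
static int my_change_state(struct block_device *bdev, blk_mode_t mode)
{
        int err = 0;

        if (!(mode & BLK_OPEN_EXCL)) {
                /* The function itself serves as the claim-holder token. */
                err = bd_prepare_to_claim(bdev, my_change_state, NULL);
                if (err)
                        return err;
        }

        /* ... mutate device state under the temporary claim ... */

        if (!(mode & BLK_OPEN_EXCL))
                bd_abort_claiming(bdev, my_change_state);
        return err;
}

The claim blocks concurrent exclusive opens for the duration of the update without requiring every LOOP_SET_STATUS caller to hold BLK_OPEN_EXCL itself.]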
+22 -15
drivers/block/ublk_drv.c
··· 255 255 u16 q_id, u16 tag, struct ublk_io *io, size_t offset); 256 256 static inline unsigned int ublk_req_build_flags(struct request *req); 257 257 258 - static void ublk_partition_scan_work(struct work_struct *work) 259 - { 260 - struct ublk_device *ub = 261 - container_of(work, struct ublk_device, partition_scan_work); 262 - 263 - if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN, 264 - &ub->ub_disk->state))) 265 - return; 266 - 267 - mutex_lock(&ub->ub_disk->open_mutex); 268 - bdev_disk_changed(ub->ub_disk, false); 269 - mutex_unlock(&ub->ub_disk->open_mutex); 270 - } 271 - 272 258 static inline struct ublksrv_io_desc * 273 259 ublk_get_iod(const struct ublk_queue *ubq, unsigned tag) 274 260 { ··· 1583 1597 put_device(disk_to_dev(disk)); 1584 1598 } 1585 1599 1600 + static void ublk_partition_scan_work(struct work_struct *work) 1601 + { 1602 + struct ublk_device *ub = 1603 + container_of(work, struct ublk_device, partition_scan_work); 1604 + /* Hold disk reference to prevent UAF during concurrent teardown */ 1605 + struct gendisk *disk = ublk_get_disk(ub); 1606 + 1607 + if (!disk) 1608 + return; 1609 + 1610 + if (WARN_ON_ONCE(!test_and_clear_bit(GD_SUPPRESS_PART_SCAN, 1611 + &disk->state))) 1612 + goto out; 1613 + 1614 + mutex_lock(&disk->open_mutex); 1615 + bdev_disk_changed(disk, false); 1616 + mutex_unlock(&disk->open_mutex); 1617 + out: 1618 + ublk_put_disk(disk); 1619 + } 1620 + 1586 1621 /* 1587 1622 * Use this function to ensure that ->canceling is consistently set for 1588 1623 * the device and all queues. Do not set these flags directly. ··· 2048 2041 mutex_lock(&ub->mutex); 2049 2042 ublk_stop_dev_unlocked(ub); 2050 2043 mutex_unlock(&ub->mutex); 2051 - flush_work(&ub->partition_scan_work); 2044 + cancel_work_sync(&ub->partition_scan_work); 2052 2045 ublk_cancel_dev(ub); 2053 2046 } 2054 2047
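[Note: moving ublk_partition_scan_work() below the disk helpers lets it pin the gendisk for its whole runtime, and switching teardown to cancel_work_sync() guarantees no handler is still running afterwards. The get/put discipline, condensed; my_get_live_disk()/my_put_live_disk() are hypothetical stand-ins for the driver's ublk_get_disk()/ublk_put_disk():

#include <linux/blkdev.h>
#include <linux/workqueue.h>

struct my_dev {
        struct work_struct scan_work;
        /* ... */
};

static void my_scan_work(struct work_struct *work)
{
        struct my_dev *d = container_of(work, struct my_dev, scan_work);
        struct gendisk *disk = my_get_live_disk(d); /* NULL once teardown starts */

        if (!disk)
                return;

        mutex_lock(&disk->open_mutex);
        bdev_disk_changed(disk, false);
        mutex_unlock(&disk->open_mutex);

        my_put_live_disk(disk); /* drop the pin taken above */
}]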
+14 -6
drivers/counter/104-quad-8.c
··· 1192 1192 { 1193 1193 struct counter_device *counter = private; 1194 1194 struct quad8 *const priv = counter_priv(counter); 1195 + struct device *dev = counter->parent; 1195 1196 unsigned int status; 1196 1197 unsigned long irq_status; 1197 1198 unsigned long channel; ··· 1201 1200 int ret; 1202 1201 1203 1202 ret = regmap_read(priv->map, QUAD8_INTERRUPT_STATUS, &status); 1204 - if (ret) 1205 - return ret; 1203 + if (ret) { 1204 + dev_WARN_ONCE(dev, true, 1205 + "Attempt to read Interrupt Status Register failed: %d\n", ret); 1206 + return IRQ_NONE; 1207 + } 1206 1208 if (!status) 1207 1209 return IRQ_NONE; 1208 1210 ··· 1227 1223 break; 1228 1224 default: 1229 1225 /* should never reach this path */ 1230 - WARN_ONCE(true, "invalid interrupt trigger function %u configured for channel %lu\n", 1231 - flg_pins, channel); 1226 + dev_WARN_ONCE(dev, true, 1227 + "invalid interrupt trigger function %u configured for channel %lu\n", 1228 + flg_pins, channel); 1232 1229 continue; 1233 1230 } 1234 1231 ··· 1237 1232 } 1238 1233 1239 1234 ret = regmap_write(priv->map, QUAD8_CHANNEL_OPERATION, CLEAR_PENDING_INTERRUPTS); 1240 - if (ret) 1241 - return ret; 1235 + if (ret) { 1236 + dev_WARN_ONCE(dev, true, 1237 + "Attempt to clear pending interrupts by writing to Channel Operation Register failed: %d\n", ret); 1238 + return IRQ_HANDLED; 1239 + } 1242 1240 1243 1241 return IRQ_HANDLED; 1244 1242 }
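[Note: this fix is not cosmetic: an interrupt handler returns irqreturn_t, so returning a negative errno on a failed regmap access handed the IRQ core a bogus value. The corrected shape, reduced to its essentials (struct fields and the register address are illustrative):

#include <linux/interrupt.h>
#include <linux/regmap.h>

struct my_priv {
        struct regmap *map;
        struct device *dev;
};

static irqreturn_t my_irq_handler(int irq, void *private)
{
        struct my_priv *priv = private;
        unsigned int status;
        int ret;

        ret = regmap_read(priv->map, 0x2f /* illustrative */, &status);
        if (ret) {
                dev_WARN_ONCE(priv->dev, true, "status read failed: %d\n", ret);
                return IRQ_NONE; /* cannot tell whether the IRQ was ours */
        }
        if (!status)
                return IRQ_NONE;

        /* ... service and clear each flagged channel ... */

        return IRQ_HANDLED;
}]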
+1 -2
drivers/counter/interrupt-cnt.c
··· 229 229 230 230 irq_set_status_flags(priv->irq, IRQ_NOAUTOEN); 231 231 ret = devm_request_irq(dev, priv->irq, interrupt_cnt_isr, 232 - IRQF_TRIGGER_RISING | IRQF_NO_THREAD, 233 - dev_name(dev), counter); 232 + IRQF_TRIGGER_RISING, dev_name(dev), counter); 234 233 if (ret) 235 234 return ret; 236 235
-2
drivers/crypto/intel/qat/qat_common/adf_aer.c
··· 41 41 adf_error_notifier(accel_dev); 42 42 adf_pf2vf_notify_fatal_error(accel_dev); 43 43 adf_dev_restarting_notify(accel_dev); 44 - adf_pf2vf_notify_restarting(accel_dev); 45 - adf_pf2vf_wait_for_restarting_complete(accel_dev); 46 44 pci_clear_master(pdev); 47 45 adf_dev_down(accel_dev); 48 46
+3 -8
drivers/gpio/gpio-it87.c
··· 12 12 13 13 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 14 14 15 + #include <linux/cleanup.h> 15 16 #include <linux/init.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/module.h> ··· 242 241 mask = 1 << (gpio_num % 8); 243 242 group = (gpio_num / 8); 244 243 245 - spin_lock(&it87_gpio->lock); 244 + guard(spinlock)(&it87_gpio->lock); 246 245 247 246 rc = superio_enter(); 248 247 if (rc) 249 - goto exit; 248 + return rc; 250 249 251 250 /* set the output enable bit */ 252 251 superio_set_mask(mask, group + it87_gpio->output_base); 253 252 254 253 rc = it87_gpio_set(chip, gpio_num, val); 255 - if (rc) 256 - goto exit; 257 - 258 254 superio_exit(); 259 - 260 - exit: 261 - spin_unlock(&it87_gpio->lock); 262 255 return rc; 263 256 } 264 257
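[Note: the gpio-it87 conversion relies on guard(spinlock) from <linux/cleanup.h>, which releases the lock automatically on every return path and makes the exit label unnecessary. The shape of the pattern, with the superio helpers reduced to hypothetical names:

#include <linux/cleanup.h>
#include <linux/spinlock.h>

struct my_gpio {
        spinlock_t lock;
};

static int my_direction_output(struct my_gpio *g, unsigned int off, int val)
{
        int rc;

        guard(spinlock)(&g->lock); /* dropped on every return below */

        rc = my_superio_enter(); /* hypothetical */
        if (rc)
                return rc;

        my_set_output_enable(g, off); /* hypothetical */
        rc = my_gpio_set(g, off, val); /* hypothetical */
        my_superio_exit();             /* hypothetical */
        return rc;
}]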
+11 -1
drivers/gpio/gpio-mpsse.c
··· 548 548 ida_free(&gpio_mpsse_ida, priv->id); 549 549 } 550 550 551 + static void gpio_mpsse_usb_put_dev(void *data) 552 + { 553 + struct mpsse_priv *priv = data; 554 + 555 + usb_put_dev(priv->udev); 556 + } 557 + 551 558 static int mpsse_init_valid_mask(struct gpio_chip *chip, 552 559 unsigned long *valid_mask, 553 560 unsigned int ngpios) ··· 599 592 INIT_LIST_HEAD(&priv->workers); 600 593 601 594 priv->udev = usb_get_dev(interface_to_usbdev(interface)); 595 + err = devm_add_action_or_reset(dev, gpio_mpsse_usb_put_dev, priv); 596 + if (err) 597 + return err; 598 + 602 599 priv->intf = interface; 603 600 priv->intf_id = interface->cur_altsetting->desc.bInterfaceNumber; 604 601 ··· 724 713 725 714 priv->intf = NULL; 726 715 usb_set_intfdata(intf, NULL); 727 - usb_put_dev(priv->udev); 728 716 } 729 717 730 718 static struct usb_driver gpio_mpsse_driver = {
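[Note: registering usb_put_dev() as a devm action ties the reference drop to device unbind, which also covers every probe error path after usb_get_dev(); the manual put in disconnect() can then go. A sketch of the pairing, with probe reduced to the relevant lines:

#include <linux/device.h>
#include <linux/usb.h>

static void my_usb_put_dev(void *data)
{
        usb_put_dev(data);
}

static int my_probe(struct usb_interface *intf)
{
        struct usb_device *udev = usb_get_dev(interface_to_usbdev(intf));

        /*
         * On failure this drops the reference immediately; otherwise it
         * is dropped automatically when the interface unbinds.
         */
        return devm_add_action_or_reset(&intf->dev, my_usb_put_dev, udev);
}]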
+24 -1
drivers/gpio/gpio-pca953x.c
··· 943 943 DECLARE_BITMAP(old_stat, MAX_LINE); 944 944 DECLARE_BITMAP(cur_stat, MAX_LINE); 945 945 DECLARE_BITMAP(new_stat, MAX_LINE); 946 + DECLARE_BITMAP(int_stat, MAX_LINE); 946 947 DECLARE_BITMAP(trigger, MAX_LINE); 947 948 DECLARE_BITMAP(edges, MAX_LINE); 948 949 int ret; 949 950 951 + if (chip->driver_data & PCA_PCAL) { 952 + /* Read INT_STAT before it is cleared by the input-port read. */ 953 + ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, int_stat); 954 + if (ret) 955 + return false; 956 + } 957 + 950 958 ret = pca953x_read_regs(chip, chip->regs->input, cur_stat); 951 959 if (ret) 952 960 return false; 961 + 962 + if (chip->driver_data & PCA_PCAL) { 963 + /* Detect short pulses via INT_STAT. */ 964 + bitmap_and(trigger, int_stat, chip->irq_mask, gc->ngpio); 965 + 966 + /* Apply filter for rising/falling edge selection. */ 967 + bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise, 968 + cur_stat, gc->ngpio); 969 + 970 + bitmap_and(int_stat, new_stat, trigger, gc->ngpio); 971 + } else { 972 + bitmap_zero(int_stat, gc->ngpio); 973 + } 953 974 954 975 /* Remove output pins from the equation */ 955 976 pca953x_read_regs(chip, chip->regs->direction, reg_direction); ··· 985 964 986 965 if (bitmap_empty(chip->irq_trig_level_high, gc->ngpio) && 987 966 bitmap_empty(chip->irq_trig_level_low, gc->ngpio)) { 988 - if (bitmap_empty(trigger, gc->ngpio)) 967 + if (bitmap_empty(trigger, gc->ngpio) && 968 + bitmap_empty(int_stat, gc->ngpio)) 989 969 return false; 990 970 } 991 971 ··· 994 972 bitmap_and(old_stat, chip->irq_trig_raise, new_stat, gc->ngpio); 995 973 bitmap_or(edges, old_stat, cur_stat, gc->ngpio); 996 974 bitmap_and(pending, edges, trigger, gc->ngpio); 975 + bitmap_or(pending, pending, int_stat, gc->ngpio); 997 976 998 977 bitmap_and(cur_stat, new_stat, chip->irq_trig_level_high, gc->ngpio); 999 978 bitmap_and(cur_stat, cur_stat, chip->irq_mask, gc->ngpio);
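[Note: on PCAL parts the latched INT_STAT register catches pulses too short to still be visible on the input port; bitmap_replace() then selects, per line, which edge-enable mask applies based on the current level. The core of the filter as a standalone sketch (assumes ngpio <= 64; names are illustrative):

#include <linux/bitmap.h>

static void filter_latched_irqs(unsigned long *pending,
                                const unsigned long *int_stat,  /* latched by hw */
                                const unsigned long *cur_level, /* input port now */
                                const unsigned long *trig_fall,
                                const unsigned long *trig_raise,
                                unsigned int ngpio)
{
        DECLARE_BITMAP(sel, 64);

        /*
         * Per line: a high level selects the rising-edge enable bit,
         * a low level the falling one.
         */
        bitmap_replace(sel, trig_fall, trig_raise, cur_level, ngpio);
        bitmap_and(pending, int_stat, sel, ngpio);
}]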
+1
drivers/gpio/gpio-rockchip.c
··· 593 593 gc->ngpio = bank->nr_pins; 594 594 gc->label = bank->name; 595 595 gc->parent = bank->dev; 596 + gc->can_sleep = true; 596 597 597 598 ret = gpiochip_add_data(gc, bank); 598 599 if (ret) {
+180 -71
drivers/gpio/gpiolib-shared.c
··· 38 38 int dev_id; 39 39 /* Protects the auxiliary device struct and the lookup table. */ 40 40 struct mutex lock; 41 + struct lock_class_key lock_key; 41 42 struct auxiliary_device adev; 42 43 struct gpiod_lookup_table *lookup; 44 + bool is_reset_gpio; 43 45 }; 44 46 45 47 /* Represents a single GPIO pin. */ ··· 78 76 return NULL; 79 77 } 80 78 79 + static struct gpio_shared_ref *gpio_shared_make_ref(struct fwnode_handle *fwnode, 80 + const char *con_id, 81 + enum gpiod_flags flags) 82 + { 83 + char *con_id_cpy __free(kfree) = NULL; 84 + 85 + struct gpio_shared_ref *ref __free(kfree) = kzalloc(sizeof(*ref), GFP_KERNEL); 86 + if (!ref) 87 + return NULL; 88 + 89 + if (con_id) { 90 + con_id_cpy = kstrdup(con_id, GFP_KERNEL); 91 + if (!con_id_cpy) 92 + return NULL; 93 + } 94 + 95 + ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL); 96 + if (ref->dev_id < 0) 97 + return NULL; 98 + 99 + ref->flags = flags; 100 + ref->con_id = no_free_ptr(con_id_cpy); 101 + ref->fwnode = fwnode; 102 + lockdep_register_key(&ref->lock_key); 103 + mutex_init_with_key(&ref->lock, &ref->lock_key); 104 + 105 + return no_free_ptr(ref); 106 + } 107 + 108 + static int gpio_shared_setup_reset_proxy(struct gpio_shared_entry *entry, 109 + enum gpiod_flags flags) 110 + { 111 + struct gpio_shared_ref *ref; 112 + 113 + list_for_each_entry(ref, &entry->refs, list) { 114 + if (ref->is_reset_gpio) 115 + /* Already set-up. */ 116 + return 0; 117 + } 118 + 119 + ref = gpio_shared_make_ref(NULL, "reset", flags); 120 + if (!ref) 121 + return -ENOMEM; 122 + 123 + ref->is_reset_gpio = true; 124 + 125 + list_add_tail(&ref->list, &entry->refs); 126 + 127 + pr_debug("Created a secondary shared GPIO reference for potential reset-gpio device for GPIO %u at %s\n", 128 + entry->offset, fwnode_get_name(entry->fwnode)); 129 + 130 + return 0; 131 + } 132 + 81 133 /* Handle all special nodes that we should ignore. */ 82 134 static bool gpio_shared_of_node_ignore(struct device_node *node) 83 135 { ··· 162 106 size_t con_id_len, suffix_len; 163 107 struct fwnode_handle *fwnode; 164 108 struct of_phandle_args args; 109 + struct gpio_shared_ref *ref; 165 110 struct property *prop; 166 111 unsigned int offset; 167 112 const char *suffix; ··· 195 138 196 139 for (i = 0; i < count; i++) { 197 140 struct device_node *np __free(device_node) = NULL; 141 + char *con_id __free(kfree) = NULL; 198 142 199 143 ret = of_parse_phandle_with_args(curr, prop->name, 200 144 "#gpio-cells", i, ··· 240 182 list_add_tail(&entry->list, &gpio_shared_list); 241 183 } 242 184 243 - struct gpio_shared_ref *ref __free(kfree) = 244 - kzalloc(sizeof(*ref), GFP_KERNEL); 245 - if (!ref) 246 - return -ENOMEM; 247 - 248 - ref->fwnode = fwnode_handle_get(of_fwnode_handle(curr)); 249 - ref->flags = args.args[1]; 250 - mutex_init(&ref->lock); 251 - 252 185 if (strends(prop->name, "gpios")) 253 186 suffix = "-gpios"; 254 187 else if (strends(prop->name, "gpio")) ··· 251 202 252 203 /* We only set con_id if there's actually one. 
*/ 253 204 if (strcmp(prop->name, "gpios") && strcmp(prop->name, "gpio")) { 254 - ref->con_id = kstrdup(prop->name, GFP_KERNEL); 255 - if (!ref->con_id) 205 + con_id = kstrdup(prop->name, GFP_KERNEL); 206 + if (!con_id) 256 207 return -ENOMEM; 257 208 258 - con_id_len = strlen(ref->con_id); 209 + con_id_len = strlen(con_id); 259 210 suffix_len = strlen(suffix); 260 211 261 - ref->con_id[con_id_len - suffix_len] = '\0'; 212 + con_id[con_id_len - suffix_len] = '\0'; 262 213 } 263 214 264 - ref->dev_id = ida_alloc(&gpio_shared_ida, GFP_KERNEL); 265 - if (ref->dev_id < 0) { 266 - kfree(ref->con_id); 215 + ref = gpio_shared_make_ref(fwnode_handle_get(of_fwnode_handle(curr)), 216 + con_id, args.args[1]); 217 + if (!ref) 267 218 return -ENOMEM; 268 - } 269 219 270 220 if (!list_empty(&entry->refs)) 271 221 pr_debug("GPIO %u at %s is shared by multiple firmware nodes\n", 272 222 entry->offset, fwnode_get_name(entry->fwnode)); 273 223 274 - list_add_tail(&no_free_ptr(ref)->list, &entry->refs); 224 + list_add_tail(&ref->list, &entry->refs); 225 + 226 + if (strcmp(prop->name, "reset-gpios") == 0) { 227 + ret = gpio_shared_setup_reset_proxy(entry, args.args[1]); 228 + if (ret) 229 + return ret; 230 + } 275 231 } 276 232 } 277 233 ··· 360 306 struct fwnode_handle *reset_fwnode = dev_fwnode(consumer); 361 307 struct fwnode_reference_args ref_args, aux_args; 362 308 struct device *parent = consumer->parent; 309 + struct gpio_shared_ref *real_ref; 363 310 bool match; 364 311 int ret; 365 312 313 + lockdep_assert_held(&ref->lock); 314 + 366 315 /* The reset-gpio device must have a parent AND a firmware node. */ 367 316 if (!parent || !reset_fwnode) 368 - return false; 369 - 370 - /* 371 - * FIXME: use device_is_compatible() once the reset-gpio drivers gains 372 - * a compatible string which it currently does not have. 373 - */ 374 - if (!strstarts(dev_name(consumer), "reset.gpio.")) 375 317 return false; 376 318 377 319 /* ··· 378 328 return false; 379 329 380 330 /* 381 - * The device associated with the shared reference's firmware node is 382 - * the consumer of the reset control exposed by the reset-gpio device. 383 - * It must have a "reset-gpios" property that's referencing the entry's 384 - * firmware node. 385 - * 386 - * The reference args must agree between the real consumer and the 387 - * auxiliary reset-gpio device. 331 + * Now we need to find the actual pin we want to assign to this 332 + * reset-gpio device. To that end: iterate over the list of references 333 + * of this entry and see if there's one, whose reset-gpios property's 334 + * arguments match the ones from this consumer's node. 388 335 */ 389 - ret = fwnode_property_get_reference_args(ref->fwnode, "reset-gpios", 390 - NULL, 2, 0, &ref_args); 391 - if (ret) 392 - return false; 336 + list_for_each_entry(real_ref, &entry->refs, list) { 337 + if (real_ref == ref) 338 + continue; 393 339 394 - ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios", 395 - NULL, 2, 0, &aux_args); 396 - if (ret) { 340 + guard(mutex)(&real_ref->lock); 341 + 342 + if (!real_ref->fwnode) 343 + continue; 344 + 345 + /* 346 + * The device associated with the shared reference's firmware 347 + * node is the consumer of the reset control exposed by the 348 + * reset-gpio device. It must have a "reset-gpios" property 349 + * that's referencing the entry's firmware node. 350 + * 351 + * The reference args must agree between the real consumer and 352 + * the auxiliary reset-gpio device. 
353 + */ 354 + ret = fwnode_property_get_reference_args(real_ref->fwnode, 355 + "reset-gpios", 356 + NULL, 2, 0, &ref_args); 357 + if (ret) 358 + continue; 359 + 360 + ret = fwnode_property_get_reference_args(reset_fwnode, "reset-gpios", 361 + NULL, 2, 0, &aux_args); 362 + if (ret) { 363 + fwnode_handle_put(ref_args.fwnode); 364 + continue; 365 + } 366 + 367 + match = ((ref_args.fwnode == entry->fwnode) && 368 + (aux_args.fwnode == entry->fwnode) && 369 + (ref_args.args[0] == aux_args.args[0])); 370 + 397 371 fwnode_handle_put(ref_args.fwnode); 398 - return false; 372 + fwnode_handle_put(aux_args.fwnode); 373 + 374 + if (!match) 375 + continue; 376 + 377 + /* 378 + * Reuse the fwnode of the real device, next time we'll use it 379 + * in the normal path. 380 + */ 381 + ref->fwnode = fwnode_handle_get(reset_fwnode); 382 + return true; 399 383 } 400 384 401 - match = ((ref_args.fwnode == entry->fwnode) && 402 - (aux_args.fwnode == entry->fwnode) && 403 - (ref_args.args[0] == aux_args.args[0])); 404 - 405 - fwnode_handle_put(ref_args.fwnode); 406 - fwnode_handle_put(aux_args.fwnode); 407 - return match; 385 + return false; 408 386 } 409 387 #else 410 388 static bool gpio_shared_dev_is_reset_gpio(struct device *consumer, ··· 443 365 } 444 366 #endif /* CONFIG_RESET_GPIO */ 445 367 446 - int gpio_shared_add_proxy_lookup(struct device *consumer, unsigned long lflags) 368 + int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id, 369 + unsigned long lflags) 447 370 { 448 371 const char *dev_id = dev_name(consumer); 372 + struct gpiod_lookup_table *lookup; 449 373 struct gpio_shared_entry *entry; 450 374 struct gpio_shared_ref *ref; 451 375 452 - struct gpiod_lookup_table *lookup __free(kfree) = 453 - kzalloc(struct_size(lookup, table, 2), GFP_KERNEL); 454 - if (!lookup) 455 - return -ENOMEM; 456 - 457 376 list_for_each_entry(entry, &gpio_shared_list, list) { 458 377 list_for_each_entry(ref, &entry->refs, list) { 459 - if (!device_match_fwnode(consumer, ref->fwnode) && 460 - !gpio_shared_dev_is_reset_gpio(consumer, entry, ref)) 461 - continue; 462 - 463 378 guard(mutex)(&ref->lock); 379 + 380 + /* 381 + * FIXME: use device_is_compatible() once the reset-gpio 382 + * drivers gains a compatible string which it currently 383 + * does not have. 384 + */ 385 + if (!ref->fwnode && strstarts(dev_name(consumer), "reset.gpio.")) { 386 + if (!gpio_shared_dev_is_reset_gpio(consumer, entry, ref)) 387 + continue; 388 + } else if (!device_match_fwnode(consumer, ref->fwnode)) { 389 + continue; 390 + } 391 + 392 + if ((!con_id && ref->con_id) || (con_id && !ref->con_id) || 393 + (con_id && ref->con_id && strcmp(con_id, ref->con_id) != 0)) 394 + continue; 464 395 465 396 /* We've already done that on a previous request. 
*/ 466 397 if (ref->lookup) ··· 482 395 if (!key) 483 396 return -ENOMEM; 484 397 398 + lookup = kzalloc(struct_size(lookup, table, 2), GFP_KERNEL); 399 + if (!lookup) 400 + return -ENOMEM; 401 + 485 402 pr_debug("Adding machine lookup entry for a shared GPIO for consumer %s, with key '%s' and con_id '%s'\n", 486 403 dev_id, key, ref->con_id ?: "none"); 487 404 ··· 493 402 lookup->table[0] = GPIO_LOOKUP(no_free_ptr(key), 0, 494 403 ref->con_id, lflags); 495 404 496 - ref->lookup = no_free_ptr(lookup); 405 + ref->lookup = lookup; 497 406 gpiod_add_lookup_table(ref->lookup); 498 407 499 408 return 0; ··· 557 466 entry->offset, gpio_device_get_label(gdev)); 558 467 559 468 list_for_each_entry(ref, &entry->refs, list) { 560 - pr_debug("Setting up a shared GPIO entry for %s\n", 561 - fwnode_get_name(ref->fwnode)); 469 + pr_debug("Setting up a shared GPIO entry for %s (con_id: '%s')\n", 470 + fwnode_get_name(ref->fwnode) ?: "(no fwnode)", 471 + ref->con_id ?: "(none)"); 562 472 563 473 ret = gpio_shared_make_adev(gdev, entry, ref); 564 474 if (ret) ··· 579 487 if (!device_match_fwnode(&gdev->dev, entry->fwnode)) 580 488 continue; 581 489 582 - /* 583 - * For some reason if we call synchronize_srcu() in GPIO core, 584 - * descent here and take this mutex and then recursively call 585 - * synchronize_srcu() again from gpiochip_remove() (which is 586 - * totally fine) called after gpio_shared_remove_adev(), 587 - * lockdep prints a false positive deadlock splat. Disable 588 - * lockdep here. 589 - */ 590 - lockdep_off(); 591 490 list_for_each_entry(ref, &entry->refs, list) { 592 491 guard(mutex)(&ref->lock); 593 492 ··· 591 508 592 509 gpio_shared_remove_adev(&ref->adev); 593 510 } 594 - lockdep_on(); 595 511 } 596 512 } 597 513 ··· 686 604 { 687 605 list_del(&ref->list); 688 606 mutex_destroy(&ref->lock); 607 + lockdep_unregister_key(&ref->lock_key); 689 608 kfree(ref->con_id); 690 609 ida_free(&gpio_shared_ida, ref->dev_id); 691 610 fwnode_handle_put(ref->fwnode); ··· 718 635 } 719 636 } 720 637 638 + static bool gpio_shared_entry_is_really_shared(struct gpio_shared_entry *entry) 639 + { 640 + size_t num_nodes = list_count_nodes(&entry->refs); 641 + struct gpio_shared_ref *ref; 642 + 643 + if (num_nodes <= 1) 644 + return false; 645 + 646 + if (num_nodes > 2) 647 + return true; 648 + 649 + /* Exactly two references: */ 650 + list_for_each_entry(ref, &entry->refs, list) { 651 + /* 652 + * Corner-case: the second reference comes from the potential 653 + * reset-gpio instance. However, this pin is not really shared 654 + * as it would have three references in this case. Avoid 655 + * creating unnecessary proxies. 656 + */ 657 + if (ref->is_reset_gpio) 658 + return false; 659 + } 660 + 661 + return true; 662 + } 663 + 721 664 static void gpio_shared_free_exclusive(void) 722 665 { 723 666 struct gpio_shared_entry *entry, *epos; 724 667 725 668 list_for_each_entry_safe(entry, epos, &gpio_shared_list, list) { 726 - if (list_count_nodes(&entry->refs) > 1) 669 + if (gpio_shared_entry_is_really_shared(entry)) 727 670 continue; 728 671 729 672 gpio_shared_drop_ref(list_first_entry(&entry->refs,
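[Note: registering a separate lock_class_key per reference is what lets the earlier lockdep_off()/lockdep_on() workaround be dropped: lockdep can now tell the dynamically created mutexes apart instead of lumping them into one class and reporting false positives. The lifecycle pairing in isolation:

#include <linux/lockdep.h>
#include <linux/mutex.h>

struct my_ref {
        struct mutex lock;
        struct lock_class_key lock_key;
};

static void my_ref_init(struct my_ref *ref)
{
        lockdep_register_key(&ref->lock_key);
        mutex_init_with_key(&ref->lock, &ref->lock_key);
}

static void my_ref_free(struct my_ref *ref)
{
        mutex_destroy(&ref->lock);
        lockdep_unregister_key(&ref->lock_key); /* only after the last lock use */
}]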
+3 -1
drivers/gpio/gpiolib-shared.h
··· 16 16 17 17 int gpio_device_setup_shared(struct gpio_device *gdev); 18 18 void gpio_device_teardown_shared(struct gpio_device *gdev); 19 - int gpio_shared_add_proxy_lookup(struct device *consumer, unsigned long lflags); 19 + int gpio_shared_add_proxy_lookup(struct device *consumer, const char *con_id, 20 + unsigned long lflags); 20 21 21 22 #else 22 23 ··· 29 28 static inline void gpio_device_teardown_shared(struct gpio_device *gdev) { } 30 29 31 30 static inline int gpio_shared_add_proxy_lookup(struct device *consumer, 31 + const char *con_id, 32 32 unsigned long lflags) 33 33 { 34 34 return 0;
+79 -57
drivers/gpio/gpiolib.c
··· 1105 1105 gdev->ngpio = gc->ngpio; 1106 1106 gdev->can_sleep = gc->can_sleep; 1107 1107 1108 + rwlock_init(&gdev->line_state_lock); 1109 + RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1110 + BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1111 + 1112 + ret = init_srcu_struct(&gdev->srcu); 1113 + if (ret) 1114 + goto err_free_label; 1115 + 1116 + ret = init_srcu_struct(&gdev->desc_srcu); 1117 + if (ret) 1118 + goto err_cleanup_gdev_srcu; 1119 + 1108 1120 scoped_guard(mutex, &gpio_devices_lock) { 1109 1121 /* 1110 1122 * TODO: this allocates a Linux GPIO number base in the global ··· 1131 1119 if (base < 0) { 1132 1120 ret = base; 1133 1121 base = 0; 1134 - goto err_free_label; 1122 + goto err_cleanup_desc_srcu; 1135 1123 } 1136 1124 1137 1125 /* ··· 1151 1139 ret = gpiodev_add_to_list_unlocked(gdev); 1152 1140 if (ret) { 1153 1141 gpiochip_err(gc, "GPIO integer space overlap, cannot add chip\n"); 1154 - goto err_free_label; 1142 + goto err_cleanup_desc_srcu; 1155 1143 } 1156 1144 } 1157 - 1158 - rwlock_init(&gdev->line_state_lock); 1159 - RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1160 - BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1161 - 1162 - ret = init_srcu_struct(&gdev->srcu); 1163 - if (ret) 1164 - goto err_remove_from_list; 1165 - 1166 - ret = init_srcu_struct(&gdev->desc_srcu); 1167 - if (ret) 1168 - goto err_cleanup_gdev_srcu; 1169 1145 1170 1146 #ifdef CONFIG_PINCTRL 1171 1147 INIT_LIST_HEAD(&gdev->pin_ranges); ··· 1164 1164 1165 1165 ret = gpiochip_set_names(gc); 1166 1166 if (ret) 1167 - goto err_cleanup_desc_srcu; 1167 + goto err_remove_from_list; 1168 1168 1169 1169 ret = gpiochip_init_valid_mask(gc); 1170 1170 if (ret) 1171 - goto err_cleanup_desc_srcu; 1171 + goto err_remove_from_list; 1172 1172 1173 1173 for (desc_index = 0; desc_index < gc->ngpio; desc_index++) { 1174 1174 struct gpio_desc *desc = &gdev->descs[desc_index]; ··· 1248 1248 of_gpiochip_remove(gc); 1249 1249 err_free_valid_mask: 1250 1250 gpiochip_free_valid_mask(gc); 1251 - err_cleanup_desc_srcu: 1252 - cleanup_srcu_struct(&gdev->desc_srcu); 1253 - err_cleanup_gdev_srcu: 1254 - cleanup_srcu_struct(&gdev->srcu); 1255 1251 err_remove_from_list: 1256 1252 scoped_guard(mutex, &gpio_devices_lock) 1257 1253 list_del_rcu(&gdev->list); ··· 1257 1261 gpio_device_put(gdev); 1258 1262 goto err_print_message; 1259 1263 } 1264 + err_cleanup_desc_srcu: 1265 + cleanup_srcu_struct(&gdev->desc_srcu); 1266 + err_cleanup_gdev_srcu: 1267 + cleanup_srcu_struct(&gdev->srcu); 1260 1268 err_free_label: 1261 1269 kfree_const(gdev->label); 1262 1270 err_free_descs: ··· 4508 4508 } 4509 4509 EXPORT_SYMBOL_GPL(gpiod_remove_hogs); 4510 4510 4511 - static struct gpiod_lookup_table *gpiod_find_lookup_table(struct device *dev) 4511 + static bool gpiod_match_lookup_table(struct device *dev, 4512 + const struct gpiod_lookup_table *table) 4512 4513 { 4513 4514 const char *dev_id = dev ? 
dev_name(dev) : NULL; 4514 - struct gpiod_lookup_table *table; 4515 4515 4516 - list_for_each_entry(table, &gpio_lookup_list, list) { 4517 - if (table->dev_id && dev_id) { 4518 - /* 4519 - * Valid strings on both ends, must be identical to have 4520 - * a match 4521 - */ 4522 - if (!strcmp(table->dev_id, dev_id)) 4523 - return table; 4524 - } else { 4525 - /* 4526 - * One of the pointers is NULL, so both must be to have 4527 - * a match 4528 - */ 4529 - if (dev_id == table->dev_id) 4530 - return table; 4531 - } 4516 + lockdep_assert_held(&gpio_lookup_lock); 4517 + 4518 + if (table->dev_id && dev_id) { 4519 + /* 4520 + * Valid strings on both ends, must be identical to have 4521 + * a match 4522 + */ 4523 + if (!strcmp(table->dev_id, dev_id)) 4524 + return true; 4525 + } else { 4526 + /* 4527 + * One of the pointers is NULL, so both must be to have 4528 + * a match 4529 + */ 4530 + if (dev_id == table->dev_id) 4531 + return true; 4532 4532 } 4533 4533 4534 - return NULL; 4534 + return false; 4535 4535 } 4536 4536 4537 - static struct gpio_desc *gpiod_find(struct device *dev, const char *con_id, 4538 - unsigned int idx, unsigned long *flags) 4537 + static struct gpio_desc *gpio_desc_table_match(struct device *dev, const char *con_id, 4538 + unsigned int idx, unsigned long *flags, 4539 + struct gpiod_lookup_table *table) 4539 4540 { 4540 - struct gpio_desc *desc = ERR_PTR(-ENOENT); 4541 - struct gpiod_lookup_table *table; 4541 + struct gpio_desc *desc; 4542 4542 struct gpiod_lookup *p; 4543 4543 struct gpio_chip *gc; 4544 4544 4545 - guard(mutex)(&gpio_lookup_lock); 4546 - 4547 - table = gpiod_find_lookup_table(dev); 4548 - if (!table) 4549 - return desc; 4545 + lockdep_assert_held(&gpio_lookup_lock); 4550 4546 4551 4547 for (p = &table->table[0]; p->key; p++) { 4552 4548 /* idx must always match exactly */ ··· 4596 4600 return desc; 4597 4601 } 4598 4602 4599 - return desc; 4603 + return NULL; 4604 + } 4605 + 4606 + static struct gpio_desc *gpiod_find(struct device *dev, const char *con_id, 4607 + unsigned int idx, unsigned long *flags) 4608 + { 4609 + struct gpiod_lookup_table *table; 4610 + struct gpio_desc *desc; 4611 + 4612 + guard(mutex)(&gpio_lookup_lock); 4613 + 4614 + list_for_each_entry(table, &gpio_lookup_list, list) { 4615 + if (!gpiod_match_lookup_table(dev, table)) 4616 + continue; 4617 + 4618 + desc = gpio_desc_table_match(dev, con_id, idx, flags, table); 4619 + if (!desc) 4620 + continue; 4621 + 4622 + /* On IS_ERR() or match. */ 4623 + return desc; 4624 + } 4625 + 4626 + return ERR_PTR(-ENOENT); 4600 4627 } 4601 4628 4602 4629 static int platform_gpio_count(struct device *dev, const char *con_id) ··· 4629 4610 unsigned int count = 0; 4630 4611 4631 4612 scoped_guard(mutex, &gpio_lookup_lock) { 4632 - table = gpiod_find_lookup_table(dev); 4633 - if (!table) 4634 - return -ENOENT; 4613 + list_for_each_entry(table, &gpio_lookup_list, list) { 4614 + if (!gpiod_match_lookup_table(dev, table)) 4615 + continue; 4635 4616 4636 - for (p = &table->table[0]; p->key; p++) { 4637 - if ((con_id && p->con_id && !strcmp(con_id, p->con_id)) || 4638 - (!con_id && !p->con_id)) 4639 - count++; 4617 + for (p = &table->table[0]; p->key; p++) { 4618 + if ((con_id && p->con_id && 4619 + !strcmp(con_id, p->con_id)) || 4620 + (!con_id && !p->con_id)) 4621 + count++; 4622 + } 4640 4623 } 4641 4624 } 4642 4625 ··· 4717 4696 * lookup table for the proxy device as previously 4718 4697 * we only knew the consumer's fwnode. 
4719 4698 */ 4720 - ret = gpio_shared_add_proxy_lookup(consumer, lookupflags); 4699 + ret = gpio_shared_add_proxy_lookup(consumer, con_id, 4700 + lookupflags); 4721 4701 if (ret) 4722 4702 return ERR_PTR(ret); 4723 4703
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3445 3445 (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX || 3446 3446 adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA)) 3447 3447 continue; 3448 - /* skip CG for VCE/UVD/VPE, it's handled specially */ 3448 + /* skip CG for VCE/UVD, it's handled specially */ 3449 3449 if (adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_UVD && 3450 3450 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCE && 3451 3451 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VCN && 3452 - adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_VPE && 3453 3452 adev->ip_blocks[i].version->type != AMD_IP_BLOCK_TYPE_JPEG && 3454 3453 adev->ip_blocks[i].version->funcs->set_powergating_state) { 3455 3454 /* enable powergating to save power */ ··· 5865 5866 5866 5867 if (ret) 5867 5868 goto mode1_reset_failed; 5869 + 5870 + /* enable mmio access after mode 1 reset completed */ 5871 + adev->no_hw_access = false; 5868 5872 5869 5873 amdgpu_device_load_pci_state(adev->pdev); 5870 5874 ret = amdgpu_psp_wait_for_bootloader(adev);
+31 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 89 89 return seq; 90 90 } 91 91 92 + static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af) 93 + { 94 + af->fence_wptr_start = af->ring->wptr; 95 + } 96 + 97 + static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af) 98 + { 99 + af->fence_wptr_end = af->ring->wptr; 100 + } 101 + 92 102 /** 93 103 * amdgpu_fence_emit - emit a fence on the requested ring 94 104 * ··· 126 116 &ring->fence_drv.lock, 127 117 adev->fence_context + ring->idx, seq); 128 118 119 + amdgpu_fence_save_fence_wptr_start(af); 129 120 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, 130 121 seq, flags | AMDGPU_FENCE_FLAG_INT); 122 + amdgpu_fence_save_fence_wptr_end(af); 131 123 amdgpu_fence_save_wptr(af); 132 124 pm_runtime_get_noresume(adev_to_drm(adev)->dev); 133 125 ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask]; ··· 721 709 struct amdgpu_ring *ring = af->ring; 722 710 unsigned long flags; 723 711 u32 seq, last_seq; 712 + bool reemitted = false; 724 713 725 714 last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; 726 715 seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; ··· 739 726 if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) { 740 727 fence = container_of(unprocessed, struct amdgpu_fence, base); 741 728 742 - if (fence == af) 729 + if (fence->reemitted > 1) 730 + reemitted = true; 731 + else if (fence == af) 743 732 dma_fence_set_error(&fence->base, -ETIME); 744 733 else if (fence->context == af->context) 745 734 dma_fence_set_error(&fence->base, -ECANCELED); ··· 749 734 rcu_read_unlock(); 750 735 } while (last_seq != seq); 751 736 spin_unlock_irqrestore(&ring->fence_drv.lock, flags); 752 - /* signal the guilty fence */ 753 - amdgpu_fence_write(ring, (u32)af->base.seqno); 754 - amdgpu_fence_process(ring); 737 + 738 + if (reemitted) { 739 + /* if we've already reemitted once then just cancel everything */ 740 + amdgpu_fence_driver_force_completion(af->ring); 741 + af->ring->ring_backup_entries_to_copy = 0; 742 + } 755 743 } 756 744 757 745 void amdgpu_fence_save_wptr(struct amdgpu_fence *af) ··· 802 784 /* save everything if the ring is not guilty, otherwise 803 785 * just save the content from other contexts. 804 786 */ 805 - if (!guilty_fence || (fence->context != guilty_fence->context)) 787 + if (!fence->reemitted && 788 + (!guilty_fence || (fence->context != guilty_fence->context))) { 806 789 amdgpu_ring_backup_unprocessed_command(ring, wptr, 807 790 fence->wptr); 791 + } else if (!fence->reemitted) { 792 + /* always save the fence */ 793 + amdgpu_ring_backup_unprocessed_command(ring, 794 + fence->fence_wptr_start, 795 + fence->fence_wptr_end); 796 + } 808 797 wptr = fence->wptr; 798 + fence->reemitted++; 809 799 } 810 800 rcu_read_unlock(); 811 801 } while (last_seq != seq);
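[Note: the fence_wptr_start/fence_wptr_end window recorded around amdgpu_ring_emit_fence() lets the reset path re-emit just the fence packet of a guilty submission so it still signals, while the reemitted counter caps retries. The backup-side policy, condensed; backup_range() and same_context_as_guilty() are hypothetical condensations of the amdgpu_ring_backup_unprocessed_command() calls and the context comparison in the diff:

/*
 * Per unprocessed fence, oldest first; prev_wptr tracks the start of
 * the current submission.
 */
if (!fence->reemitted && !same_context_as_guilty(fence))
        backup_range(ring, prev_wptr, fence->wptr);  /* whole submission */
else if (!fence->reemitted)
        backup_range(ring, fence->fence_wptr_start,
                     fence->fence_wptr_end);         /* fence packet only */
/*
 * Already reemitted once: save nothing; a further hang on this fence
 * force-completes the whole ring instead of retrying again.
 */
fence->reemitted++;]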
+24
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.c
··· 318 318 } 319 319 EXPORT_SYMBOL(isp_kernel_buffer_free); 320 320 321 + static int isp_resume(struct amdgpu_ip_block *ip_block) 322 + { 323 + struct amdgpu_device *adev = ip_block->adev; 324 + struct amdgpu_isp *isp = &adev->isp; 325 + 326 + if (isp->funcs->hw_resume) 327 + return isp->funcs->hw_resume(isp); 328 + 329 + return -ENODEV; 330 + } 331 + 332 + static int isp_suspend(struct amdgpu_ip_block *ip_block) 333 + { 334 + struct amdgpu_device *adev = ip_block->adev; 335 + struct amdgpu_isp *isp = &adev->isp; 336 + 337 + if (isp->funcs->hw_suspend) 338 + return isp->funcs->hw_suspend(isp); 339 + 340 + return -ENODEV; 341 + } 342 + 321 343 static const struct amd_ip_funcs isp_ip_funcs = { 322 344 .name = "isp_ip", 323 345 .early_init = isp_early_init, 324 346 .hw_init = isp_hw_init, 325 347 .hw_fini = isp_hw_fini, 326 348 .is_idle = isp_is_idle, 349 + .suspend = isp_suspend, 350 + .resume = isp_resume, 327 351 .set_clockgating_state = isp_set_clockgating_state, 328 352 .set_powergating_state = isp_set_powergating_state, 329 353 };
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_isp.h
··· 38 38 struct isp_funcs { 39 39 int (*hw_init)(struct amdgpu_isp *isp); 40 40 int (*hw_fini)(struct amdgpu_isp *isp); 41 + int (*hw_suspend)(struct amdgpu_isp *isp); 42 + int (*hw_resume)(struct amdgpu_isp *isp); 41 43 }; 42 44 43 45 struct amdgpu_isp {
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 201 201 type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ? 202 202 AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN; 203 203 break; 204 + case AMDGPU_HW_IP_VPE: 205 + type = AMD_IP_BLOCK_TYPE_VPE; 206 + break; 204 207 default: 205 208 type = AMD_IP_BLOCK_TYPE_NUM; 206 209 break; ··· 723 720 break; 724 721 case AMD_IP_BLOCK_TYPE_UVD: 725 722 count = adev->uvd.num_uvd_inst; 723 + break; 724 + case AMD_IP_BLOCK_TYPE_VPE: 725 + count = adev->vpe.num_instances; 726 726 break; 727 727 /* For all other IP block types not listed in the switch statement 728 728 * the ip status is valid here and the instance count is one.
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 144 144 struct amdgpu_ring *ring; 145 145 ktime_t start_timestamp; 146 146 147 - /* wptr for the fence for resets */ 147 + /* wptr for the total submission for resets */ 148 148 u64 wptr; 149 149 /* fence context for resets */ 150 150 u64 context; 151 + /* has this fence been reemitted */ 152 + unsigned int reemitted; 153 + /* wptr for the fence for the submission */ 154 + u64 fence_wptr_start; 155 + u64 fence_wptr_end; 151 156 }; 152 157 153 158 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
+41
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
··· 26 26 */ 27 27 28 28 #include <linux/gpio/machine.h> 29 + #include <linux/pm_runtime.h> 29 30 #include "amdgpu.h" 30 31 #include "isp_v4_1_1.h" 31 32 ··· 146 145 return -ENODEV; 147 146 } 148 147 148 + /* The devices will be managed by the pm ops from the parent */ 149 + dev_pm_syscore_device(dev, true); 150 + 149 151 exit: 150 152 /* Continue to add */ 151 153 return 0; ··· 181 177 drm_err(&adev->ddev, "Failed to remove dev from genpd %d\n", ret); 182 178 return -ENODEV; 183 179 } 180 + dev_pm_syscore_device(dev, false); 184 181 185 182 exit: 186 183 /* Continue to remove */ 187 184 return 0; 185 + } 186 + 187 + static int isp_suspend_device(struct device *dev, void *data) 188 + { 189 + return pm_runtime_force_suspend(dev); 190 + } 191 + 192 + static int isp_resume_device(struct device *dev, void *data) 193 + { 194 + return pm_runtime_force_resume(dev); 195 + } 196 + 197 + static int isp_v4_1_1_hw_suspend(struct amdgpu_isp *isp) 198 + { 199 + int r; 200 + 201 + r = device_for_each_child(isp->parent, NULL, 202 + isp_suspend_device); 203 + if (r) 204 + dev_err(isp->parent, "failed to suspend hw devices (%d)\n", r); 205 + 206 + return r; 207 + } 208 + 209 + static int isp_v4_1_1_hw_resume(struct amdgpu_isp *isp) 210 + { 211 + int r; 212 + 213 + r = device_for_each_child(isp->parent, NULL, 214 + isp_resume_device); 215 + if (r) 216 + dev_err(isp->parent, "failed to resume hw device (%d)\n", r); 217 + 218 + return r; 188 219 } 189 220 190 221 static int isp_v4_1_1_hw_init(struct amdgpu_isp *isp) ··· 408 369 static const struct isp_funcs isp_v4_1_1_funcs = { 409 370 .hw_init = isp_v4_1_1_hw_init, 410 371 .hw_fini = isp_v4_1_1_hw_fini, 372 + .hw_suspend = isp_v4_1_1_hw_suspend, 373 + .hw_resume = isp_v4_1_1_hw_resume, 411 374 }; 412 375 413 376 void isp_v4_1_1_set_isp_funcs(struct amdgpu_isp *isp)
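[Note: since the ISP child devices are flagged with dev_pm_syscore_device(), the PM core skips their own suspend/resume callbacks and the parent ISP block forwards the transitions explicitly through the runtime-PM force helpers. The forwarding pattern on its own:

#include <linux/device.h>
#include <linux/pm_runtime.h>

static int suspend_one_child(struct device *dev, void *data)
{
        return pm_runtime_force_suspend(dev);
}

static int my_parent_suspend(struct device *parent)
{
        /* Stops at the first child that fails and returns its error. */
        return device_for_each_child(parent, NULL, suspend_one_child);
}]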
+2 -2
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
··· 763 763 return BP_RESULT_FAILURE; 764 764 765 765 return bp->cmd_tbl.dac1_encoder_control( 766 - bp, cntl->action == ENCODER_CONTROL_ENABLE, 766 + bp, cntl->action, 767 767 cntl->pixel_clock, ATOM_DAC1_PS2); 768 768 } else if (cntl->engine_id == ENGINE_ID_DACB) { 769 769 if (!bp->cmd_tbl.dac2_encoder_control) 770 770 return BP_RESULT_FAILURE; 771 771 772 772 return bp->cmd_tbl.dac2_encoder_control( 773 - bp, cntl->action == ENCODER_CONTROL_ENABLE, 773 + bp, cntl->action, 774 774 cntl->pixel_clock, ATOM_DAC1_PS2); 775 775 } 776 776
+35 -9
drivers/gpu/drm/amd/display/dc/bios/command_table.c
··· 1797 1797 &params.ucEncodeMode)) 1798 1798 return BP_RESULT_BADINPUT; 1799 1799 1800 - params.ucDstBpc = bp_params->bit_depth; 1800 + switch (bp_params->color_depth) { 1801 + case COLOR_DEPTH_UNDEFINED: 1802 + params.ucDstBpc = PANEL_BPC_UNDEFINE; 1803 + break; 1804 + case COLOR_DEPTH_666: 1805 + params.ucDstBpc = PANEL_6BIT_PER_COLOR; 1806 + break; 1807 + default: 1808 + case COLOR_DEPTH_888: 1809 + params.ucDstBpc = PANEL_8BIT_PER_COLOR; 1810 + break; 1811 + case COLOR_DEPTH_101010: 1812 + params.ucDstBpc = PANEL_10BIT_PER_COLOR; 1813 + break; 1814 + case COLOR_DEPTH_121212: 1815 + params.ucDstBpc = PANEL_12BIT_PER_COLOR; 1816 + break; 1817 + case COLOR_DEPTH_141414: 1818 + dm_error("14-bit color not supported by SelectCRTC_Source v3\n"); 1819 + break; 1820 + case COLOR_DEPTH_161616: 1821 + params.ucDstBpc = PANEL_16BIT_PER_COLOR; 1822 + break; 1823 + } 1801 1824 1802 1825 if (EXEC_BIOS_CMD_TABLE(SelectCRTC_Source, params)) 1803 1826 result = BP_RESULT_OK; ··· 1838 1815 1839 1816 static enum bp_result dac1_encoder_control_v1( 1840 1817 struct bios_parser *bp, 1841 - bool enable, 1818 + enum bp_encoder_control_action action, 1842 1819 uint32_t pixel_clock, 1843 1820 uint8_t dac_standard); 1844 1821 static enum bp_result dac2_encoder_control_v1( 1845 1822 struct bios_parser *bp, 1846 - bool enable, 1823 + enum bp_encoder_control_action action, 1847 1824 uint32_t pixel_clock, 1848 1825 uint8_t dac_standard); 1849 1826 ··· 1869 1846 1870 1847 static void dac_encoder_control_prepare_params( 1871 1848 DAC_ENCODER_CONTROL_PS_ALLOCATION *params, 1872 - bool enable, 1849 + enum bp_encoder_control_action action, 1873 1850 uint32_t pixel_clock, 1874 1851 uint8_t dac_standard) 1875 1852 { 1876 1853 params->ucDacStandard = dac_standard; 1877 - if (enable) 1854 + if (action == ENCODER_CONTROL_SETUP || 1855 + action == ENCODER_CONTROL_INIT) 1856 + params->ucAction = ATOM_ENCODER_INIT; 1857 + else if (action == ENCODER_CONTROL_ENABLE) 1878 1858 params->ucAction = ATOM_ENABLE; 1879 1859 else 1880 1860 params->ucAction = ATOM_DISABLE; ··· 1890 1864 1891 1865 static enum bp_result dac1_encoder_control_v1( 1892 1866 struct bios_parser *bp, 1893 - bool enable, 1867 + enum bp_encoder_control_action action, 1894 1868 uint32_t pixel_clock, 1895 1869 uint8_t dac_standard) 1896 1870 { ··· 1899 1873 1900 1874 dac_encoder_control_prepare_params( 1901 1875 &params, 1902 - enable, 1876 + action, 1903 1877 pixel_clock, 1904 1878 dac_standard); 1905 1879 ··· 1911 1885 1912 1886 static enum bp_result dac2_encoder_control_v1( 1913 1887 struct bios_parser *bp, 1914 - bool enable, 1888 + enum bp_encoder_control_action action, 1915 1889 uint32_t pixel_clock, 1916 1890 uint8_t dac_standard) 1917 1891 { ··· 1920 1894 1921 1895 dac_encoder_control_prepare_params( 1922 1896 &params, 1923 - enable, 1897 + action, 1924 1898 pixel_clock, 1925 1899 dac_standard); 1926 1900
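[Note: threading the full bp_encoder_control_action through, instead of collapsing it to a bool, is what lets SETUP/INIT reach the ATOM table as ATOM_ENCODER_INIT rather than a plain enable. The mapping at the heart of dac_encoder_control_prepare_params() reduces to the following (condensed from the diff above):

if (action == ENCODER_CONTROL_SETUP || action == ENCODER_CONTROL_INIT)
        params->ucAction = ATOM_ENCODER_INIT;
else if (action == ENCODER_CONTROL_ENABLE)
        params->ucAction = ATOM_ENABLE;
else
        params->ucAction = ATOM_DISABLE;]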
+2 -2
drivers/gpu/drm/amd/display/dc/bios/command_table.h
··· 57 57 struct bp_crtc_source_select *bp_params); 58 58 enum bp_result (*dac1_encoder_control)( 59 59 struct bios_parser *bp, 60 - bool enable, 60 + enum bp_encoder_control_action action, 61 61 uint32_t pixel_clock, 62 62 uint8_t dac_standard); 63 63 enum bp_result (*dac2_encoder_control)( 64 64 struct bios_parser *bp, 65 - bool enable, 65 + enum bp_encoder_control_action action, 66 66 uint32_t pixel_clock, 67 67 uint8_t dac_standard); 68 68 enum bp_result (*dac1_output_control)(
+5 -1
drivers/gpu/drm/amd/display/dc/dml/Makefile
··· 30 30 31 31 ifneq ($(CONFIG_FRAME_WARN),0) 32 32 ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y) 33 - frame_warn_limit := 3072 33 + ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy) 34 + frame_warn_limit := 4096 35 + else 36 + frame_warn_limit := 3072 37 + endif 34 38 else 35 39 frame_warn_limit := 2048 36 40 endif
+139 -406
drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
··· 77 77 static unsigned int dscComputeDelay( 78 78 enum output_format_class pixelFormat, 79 79 enum output_encoder_class Output); 80 - // Super monster function with some 45 argument 81 80 static bool CalculatePrefetchSchedule( 82 81 struct display_mode_lib *mode_lib, 83 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 84 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 82 + unsigned int k, 85 83 Pipe *myPipe, 86 84 unsigned int DSCDelay, 87 - double DPPCLKDelaySubtotalPlusCNVCFormater, 88 - double DPPCLKDelaySCL, 89 - double DPPCLKDelaySCLLBOnly, 90 - double DPPCLKDelayCNVCCursor, 91 - double DISPCLKDelaySubtotal, 92 85 unsigned int DPP_RECOUT_WIDTH, 93 - enum output_format_class OutputFormat, 94 - unsigned int MaxInterDCNTileRepeaters, 95 86 unsigned int VStartup, 96 87 unsigned int MaxVStartup, 97 - unsigned int GPUVMPageTableLevels, 98 - bool GPUVMEnable, 99 - bool HostVMEnable, 100 - unsigned int HostVMMaxNonCachedPageTableLevels, 101 - double HostVMMinPageSize, 102 - bool DynamicMetadataEnable, 103 - bool DynamicMetadataVMEnabled, 104 - int DynamicMetadataLinesBeforeActiveRequired, 105 - unsigned int DynamicMetadataTransmittedBytes, 106 88 double UrgentLatency, 107 89 double UrgentExtraLatency, 108 90 double TCalc, ··· 98 116 unsigned int MaxNumSwathY, 99 117 double PrefetchSourceLinesC, 100 118 unsigned int SwathWidthC, 101 - int BytePerPixelC, 102 119 double VInitPreFillC, 103 120 unsigned int MaxNumSwathC, 104 121 long swath_width_luma_ub, ··· 105 124 unsigned int SwathHeightY, 106 125 unsigned int SwathHeightC, 107 126 double TWait, 108 - bool ProgressiveToInterlaceUnitInOPP, 109 - double *DSTXAfterScaler, 110 - double *DSTYAfterScaler, 111 127 double *DestinationLinesForPrefetch, 112 128 double *PrefetchBandwidth, 113 129 double *DestinationLinesToRequestVMInVBlank, ··· 113 135 double *VRatioPrefetchC, 114 136 double *RequiredPrefetchPixDataBWLuma, 115 137 double *RequiredPrefetchPixDataBWChroma, 116 - bool *NotEnoughTimeForDynamicMetadata, 117 - double *Tno_bw, 118 - double *prefetch_vmrow_bw, 119 - double *Tdmdl_vm, 120 - double *Tdmdl, 121 - unsigned int *VUpdateOffsetPix, 122 - double *VUpdateWidthPix, 123 - double *VReadyOffsetPix); 138 + bool *NotEnoughTimeForDynamicMetadata); 124 139 static double RoundToDFSGranularityUp(double Clock, double VCOSpeed); 125 140 static double RoundToDFSGranularityDown(double Clock, double VCOSpeed); 126 141 static void CalculateDCCConfiguration( ··· 265 294 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 266 295 struct display_mode_lib *mode_lib, 267 296 unsigned int PrefetchMode, 268 - unsigned int NumberOfActivePlanes, 269 - unsigned int MaxLineBufferLines, 270 - unsigned int LineBufferSize, 271 - unsigned int DPPOutputBufferPixels, 272 - unsigned int DETBufferSizeInKByte, 273 - unsigned int WritebackInterfaceBufferSize, 274 297 double DCFCLK, 275 298 double ReturnBW, 276 - bool GPUVMEnable, 277 - unsigned int dpte_group_bytes[], 278 - unsigned int MetaChunkSize, 279 299 double UrgentLatency, 280 300 double ExtraLatency, 281 - double WritebackLatency, 282 - double WritebackChunkSize, 283 301 double SOCCLK, 284 - double DRAMClockChangeLatency, 285 - double SRExitTime, 286 - double SREnterPlusExitTime, 287 302 double DCFCLKDeepSleep, 288 303 unsigned int DPPPerPlane[], 289 - bool DCCEnable[], 290 304 double DPPCLK[], 291 305 unsigned int DETBufferSizeY[], 292 306 unsigned int DETBufferSizeC[], 293 307 unsigned int SwathHeightY[], 294 308 unsigned int 
SwathHeightC[], 295 - unsigned int LBBitPerPixel[], 296 309 double SwathWidthY[], 297 310 double SwathWidthC[], 298 - double HRatio[], 299 - double HRatioChroma[], 300 - unsigned int vtaps[], 301 - unsigned int VTAPsChroma[], 302 - double VRatio[], 303 - double VRatioChroma[], 304 - unsigned int HTotal[], 305 - double PixelClock[], 306 - unsigned int BlendingAndTiming[], 307 311 double BytePerPixelDETY[], 308 312 double BytePerPixelDETC[], 309 - double DSTXAfterScaler[], 310 - double DSTYAfterScaler[], 311 - bool WritebackEnable[], 312 - enum source_format_class WritebackPixelFormat[], 313 - double WritebackDestinationWidth[], 314 - double WritebackDestinationHeight[], 315 - double WritebackSourceHeight[], 316 - enum clock_change_support *DRAMClockChangeSupport, 317 - double *UrgentWatermark, 318 - double *WritebackUrgentWatermark, 319 - double *DRAMClockChangeWatermark, 320 - double *WritebackDRAMClockChangeWatermark, 321 - double *StutterExitWatermark, 322 - double *StutterEnterPlusExitWatermark, 323 - double *MinActiveDRAMClockChangeLatencySupported); 313 + enum clock_change_support *DRAMClockChangeSupport); 324 314 static void CalculateDCFCLKDeepSleep( 325 315 struct display_mode_lib *mode_lib, 326 316 unsigned int NumberOfActivePlanes, ··· 742 810 743 811 static bool CalculatePrefetchSchedule( 744 812 struct display_mode_lib *mode_lib, 745 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 746 - double PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 813 + unsigned int k, 747 814 Pipe *myPipe, 748 815 unsigned int DSCDelay, 749 - double DPPCLKDelaySubtotalPlusCNVCFormater, 750 - double DPPCLKDelaySCL, 751 - double DPPCLKDelaySCLLBOnly, 752 - double DPPCLKDelayCNVCCursor, 753 - double DISPCLKDelaySubtotal, 754 816 unsigned int DPP_RECOUT_WIDTH, 755 - enum output_format_class OutputFormat, 756 - unsigned int MaxInterDCNTileRepeaters, 757 817 unsigned int VStartup, 758 818 unsigned int MaxVStartup, 759 - unsigned int GPUVMPageTableLevels, 760 - bool GPUVMEnable, 761 - bool HostVMEnable, 762 - unsigned int HostVMMaxNonCachedPageTableLevels, 763 - double HostVMMinPageSize, 764 - bool DynamicMetadataEnable, 765 - bool DynamicMetadataVMEnabled, 766 - int DynamicMetadataLinesBeforeActiveRequired, 767 - unsigned int DynamicMetadataTransmittedBytes, 768 819 double UrgentLatency, 769 820 double UrgentExtraLatency, 770 821 double TCalc, ··· 761 846 unsigned int MaxNumSwathY, 762 847 double PrefetchSourceLinesC, 763 848 unsigned int SwathWidthC, 764 - int BytePerPixelC, 765 849 double VInitPreFillC, 766 850 unsigned int MaxNumSwathC, 767 851 long swath_width_luma_ub, ··· 768 854 unsigned int SwathHeightY, 769 855 unsigned int SwathHeightC, 770 856 double TWait, 771 - bool ProgressiveToInterlaceUnitInOPP, 772 - double *DSTXAfterScaler, 773 - double *DSTYAfterScaler, 774 857 double *DestinationLinesForPrefetch, 775 858 double *PrefetchBandwidth, 776 859 double *DestinationLinesToRequestVMInVBlank, ··· 776 865 double *VRatioPrefetchC, 777 866 double *RequiredPrefetchPixDataBWLuma, 778 867 double *RequiredPrefetchPixDataBWChroma, 779 - bool *NotEnoughTimeForDynamicMetadata, 780 - double *Tno_bw, 781 - double *prefetch_vmrow_bw, 782 - double *Tdmdl_vm, 783 - double *Tdmdl, 784 - unsigned int *VUpdateOffsetPix, 785 - double *VUpdateWidthPix, 786 - double *VReadyOffsetPix) 868 + bool *NotEnoughTimeForDynamicMetadata) 787 869 { 870 + struct vba_vars_st *v = &mode_lib->vba; 871 + double DPPCLKDelaySubtotalPlusCNVCFormater = v->DPPCLKDelaySubtotal 
+ v->DPPCLKDelayCNVCFormater; 788 872 bool MyError = false; 789 873 unsigned int DPPCycles = 0, DISPCLKCycles = 0; 790 874 double DSTTotalPixelsAfterScaler = 0; ··· 811 905 double Tdmec = 0; 812 906 double Tdmsks = 0; 813 907 814 - if (GPUVMEnable == true && HostVMEnable == true) { 815 - HostVMInefficiencyFactor = PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly; 816 - HostVMDynamicLevelsTrips = HostVMMaxNonCachedPageTableLevels; 908 + if (v->GPUVMEnable == true && v->HostVMEnable == true) { 909 + HostVMInefficiencyFactor = v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData / v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly; 910 + HostVMDynamicLevelsTrips = v->HostVMMaxNonCachedPageTableLevels; 817 911 } else { 818 912 HostVMInefficiencyFactor = 1; 819 913 HostVMDynamicLevelsTrips = 0; 820 914 } 821 915 822 916 CalculateDynamicMetadataParameters( 823 - MaxInterDCNTileRepeaters, 917 + v->MaxInterDCNTileRepeaters, 824 918 myPipe->DPPCLK, 825 919 myPipe->DISPCLK, 826 920 myPipe->DCFCLKDeepSleep, 827 921 myPipe->PixelClock, 828 922 myPipe->HTotal, 829 923 myPipe->VBlank, 830 - DynamicMetadataTransmittedBytes, 831 - DynamicMetadataLinesBeforeActiveRequired, 924 + v->DynamicMetadataTransmittedBytes[k], 925 + v->DynamicMetadataLinesBeforeActiveRequired[k], 832 926 myPipe->InterlaceEnable, 833 - ProgressiveToInterlaceUnitInOPP, 927 + v->ProgressiveToInterlaceUnitInOPP, 834 928 &Tsetup, 835 929 &Tdmbf, 836 930 &Tdmec, ··· 838 932 839 933 LineTime = myPipe->HTotal / myPipe->PixelClock; 840 934 trip_to_mem = UrgentLatency; 841 - Tvm_trips = UrgentExtraLatency + trip_to_mem * (GPUVMPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1); 935 + Tvm_trips = UrgentExtraLatency + trip_to_mem * (v->GPUVMMaxPageTableLevels * (HostVMDynamicLevelsTrips + 1) - 1); 842 936 843 - if (DynamicMetadataVMEnabled == true && GPUVMEnable == true) { 844 - *Tdmdl = TWait + Tvm_trips + trip_to_mem; 937 + if (v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true) { 938 + v->Tdmdl[k] = TWait + Tvm_trips + trip_to_mem; 845 939 } else { 846 - *Tdmdl = TWait + UrgentExtraLatency; 940 + v->Tdmdl[k] = TWait + UrgentExtraLatency; 847 941 } 848 942 849 - if (DynamicMetadataEnable == true) { 850 - if (VStartup * LineTime < Tsetup + *Tdmdl + Tdmbf + Tdmec + Tdmsks) { 943 + if (v->DynamicMetadataEnable[k] == true) { 944 + if (VStartup * LineTime < Tsetup + v->Tdmdl[k] + Tdmbf + Tdmec + Tdmsks) { 851 945 *NotEnoughTimeForDynamicMetadata = true; 852 946 } else { 853 947 *NotEnoughTimeForDynamicMetadata = false; ··· 855 949 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 856 950 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 857 951 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 858 - dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl); 952 + dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]); 859 953 } 860 954 } else { 861 955 *NotEnoughTimeForDynamicMetadata = false; 862 956 } 863 957 864 - *Tdmdl_vm = (DynamicMetadataEnable == true && DynamicMetadataVMEnabled == true && GPUVMEnable == true ? TWait + Tvm_trips : 0); 958 + v->Tdmdl_vm[k] = (v->DynamicMetadataEnable[k] == true && v->DynamicMetadataVMEnabled == true && v->GPUVMEnable == true ? 
TWait + Tvm_trips : 0); 865 959 866 960 if (myPipe->ScalerEnabled) 867 - DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCL; 961 + DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCL; 868 962 else 869 - DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + DPPCLKDelaySCLLBOnly; 963 + DPPCycles = DPPCLKDelaySubtotalPlusCNVCFormater + v->DPPCLKDelaySCLLBOnly; 870 964 871 - DPPCycles = DPPCycles + myPipe->NumberOfCursors * DPPCLKDelayCNVCCursor; 965 + DPPCycles = DPPCycles + myPipe->NumberOfCursors * v->DPPCLKDelayCNVCCursor; 872 966 873 - DISPCLKCycles = DISPCLKDelaySubtotal; 967 + DISPCLKCycles = v->DISPCLKDelaySubtotal; 874 968 875 969 if (myPipe->DPPCLK == 0.0 || myPipe->DISPCLK == 0.0) 876 970 return true; 877 971 878 - *DSTXAfterScaler = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK 972 + v->DSTXAfterScaler[k] = DPPCycles * myPipe->PixelClock / myPipe->DPPCLK + DISPCLKCycles * myPipe->PixelClock / myPipe->DISPCLK 879 973 + DSCDelay; 880 974 881 - *DSTXAfterScaler = *DSTXAfterScaler + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH; 975 + v->DSTXAfterScaler[k] = v->DSTXAfterScaler[k] + ((myPipe->ODMCombineEnabled)?18:0) + (myPipe->DPPPerPlane - 1) * DPP_RECOUT_WIDTH; 882 976 883 - if (OutputFormat == dm_420 || (myPipe->InterlaceEnable && ProgressiveToInterlaceUnitInOPP)) 884 - *DSTYAfterScaler = 1; 977 + if (v->OutputFormat[k] == dm_420 || (myPipe->InterlaceEnable && v->ProgressiveToInterlaceUnitInOPP)) 978 + v->DSTYAfterScaler[k] = 1; 885 979 else 886 - *DSTYAfterScaler = 0; 980 + v->DSTYAfterScaler[k] = 0; 887 981 888 - DSTTotalPixelsAfterScaler = *DSTYAfterScaler * myPipe->HTotal + *DSTXAfterScaler; 889 - *DSTYAfterScaler = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1); 890 - *DSTXAfterScaler = DSTTotalPixelsAfterScaler - ((double) (*DSTYAfterScaler * myPipe->HTotal)); 982 + DSTTotalPixelsAfterScaler = v->DSTYAfterScaler[k] * myPipe->HTotal + v->DSTXAfterScaler[k]; 983 + v->DSTYAfterScaler[k] = dml_floor(DSTTotalPixelsAfterScaler / myPipe->HTotal, 1); 984 + v->DSTXAfterScaler[k] = DSTTotalPixelsAfterScaler - ((double) (v->DSTYAfterScaler[k] * myPipe->HTotal)); 891 985 892 986 MyError = false; 893 987 ··· 896 990 Tvm_trips_rounded = dml_ceil(4.0 * Tvm_trips / LineTime, 1) / 4 * LineTime; 897 991 Tr0_trips_rounded = dml_ceil(4.0 * Tr0_trips / LineTime, 1) / 4 * LineTime; 898 992 899 - if (GPUVMEnable) { 900 - if (GPUVMPageTableLevels >= 3) { 901 - *Tno_bw = UrgentExtraLatency + trip_to_mem * ((GPUVMPageTableLevels - 2) - 1); 993 + if (v->GPUVMEnable) { 994 + if (v->GPUVMMaxPageTableLevels >= 3) { 995 + v->Tno_bw[k] = UrgentExtraLatency + trip_to_mem * ((v->GPUVMMaxPageTableLevels - 2) - 1); 902 996 } else 903 - *Tno_bw = 0; 997 + v->Tno_bw[k] = 0; 904 998 } else if (!myPipe->DCCEnable) 905 - *Tno_bw = LineTime; 999 + v->Tno_bw[k] = LineTime; 906 1000 else 907 - *Tno_bw = LineTime / 4; 1001 + v->Tno_bw[k] = LineTime / 4; 908 1002 909 - dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, *Tdmdl)) / LineTime 910 - - (*DSTYAfterScaler + *DSTXAfterScaler / myPipe->HTotal); 1003 + dst_y_prefetch_equ = VStartup - (Tsetup + dml_max(TWait + TCalc, v->Tdmdl[k])) / LineTime 1004 + - (v->DSTYAfterScaler[k] + v->DSTXAfterScaler[k] / myPipe->HTotal); 911 1005 dst_y_prefetch_equ = dml_min(dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH 912 1006 913 1007 Lsw_oto = dml_max(PrefetchSourceLinesY, PrefetchSourceLinesC); 914 1008 
Tsw_oto = Lsw_oto * LineTime; 915 1009 916 - prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) / Tsw_oto; 1010 + prefetch_bw_oto = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) / Tsw_oto; 917 1011 918 - if (GPUVMEnable == true) { 919 - Tvm_oto = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto, 1012 + if (v->GPUVMEnable == true) { 1013 + Tvm_oto = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_oto, 920 1014 Tvm_trips, 921 1015 LineTime / 4.0); 922 1016 } else 923 1017 Tvm_oto = LineTime / 4.0; 924 1018 925 - if ((GPUVMEnable == true || myPipe->DCCEnable == true)) { 1019 + if ((v->GPUVMEnable == true || myPipe->DCCEnable == true)) { 926 1020 Tr0_oto = dml_max3( 927 1021 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_oto, 928 1022 LineTime - Tvm_oto, LineTime / 4); ··· 948 1042 dml_print("DML: Tdmbf: %fus - time for dmd transfer from dchub to dio output buffer\n", Tdmbf); 949 1043 dml_print("DML: Tdmec: %fus - time dio takes to transfer dmd\n", Tdmec); 950 1044 dml_print("DML: Tdmsks: %fus - time before active dmd must complete transmission at dio\n", Tdmsks); 951 - dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", *Tdmdl_vm); 952 - dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", *Tdmdl); 953 - dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", *DSTXAfterScaler); 954 - dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)*DSTYAfterScaler); 1045 + dml_print("DML: Tdmdl_vm: %fus - time for vm stages of dmd \n", v->Tdmdl_vm[k]); 1046 + dml_print("DML: Tdmdl: %fus - time for fabric to become ready and fetch dmd \n", v->Tdmdl[k]); 1047 + dml_print("DML: dst_x_after_scl: %f pixels - number of pixel clocks pipeline and buffer delay after scaler \n", v->DSTXAfterScaler[k]); 1048 + dml_print("DML: dst_y_after_scl: %d lines - number of lines of pipeline and buffer delay after scaler \n", (int)v->DSTYAfterScaler[k]); 955 1049 956 1050 *PrefetchBandwidth = 0; 957 1051 *DestinationLinesToRequestVMInVBlank = 0; ··· 965 1059 double PrefetchBandwidth3 = 0; 966 1060 double PrefetchBandwidth4 = 0; 967 1061 968 - if (Tpre_rounded - *Tno_bw > 0) 1062 + if (Tpre_rounded - v->Tno_bw[k] > 0) 969 1063 PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte 970 1064 + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor 971 1065 + PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY 972 - + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) 973 - / (Tpre_rounded - *Tno_bw); 1066 + + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) 1067 + / (Tpre_rounded - v->Tno_bw[k]); 974 1068 else 975 1069 PrefetchBandwidth1 = 0; 976 1070 977 - if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw) > 0) { 978 - PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - *Tno_bw); 1071 + if (VStartup == MaxVStartup && (PrefetchBandwidth1 > 4 * prefetch_bw_oto) && (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime 
- v->Tno_bw[k]) > 0) { 1072 + PrefetchBandwidth1 = (PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor + 2 * MetaRowByte + 2 * PixelPTEBytesPerRow * HostVMInefficiencyFactor) / (Tpre_rounded - Tsw_oto / 4 - 0.75 * LineTime - v->Tno_bw[k]); 979 1073 } 980 1074 981 - if (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded > 0) 1075 + if (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded > 0) 982 1076 PrefetchBandwidth2 = (PDEAndMetaPTEBytesFrame * 983 1077 HostVMInefficiencyFactor + PrefetchSourceLinesY * 984 1078 swath_width_luma_ub * BytePerPixelY + 985 1079 PrefetchSourceLinesC * swath_width_chroma_ub * 986 - BytePerPixelC) / 987 - (Tpre_rounded - *Tno_bw - 2 * Tr0_trips_rounded); 1080 + v->BytePerPixelC[k]) / 1081 + (Tpre_rounded - v->Tno_bw[k] - 2 * Tr0_trips_rounded); 988 1082 else 989 1083 PrefetchBandwidth2 = 0; 990 1084 ··· 992 1086 PrefetchBandwidth3 = (2 * MetaRowByte + 2 * PixelPTEBytesPerRow * 993 1087 HostVMInefficiencyFactor + PrefetchSourceLinesY * 994 1088 swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * 995 - swath_width_chroma_ub * BytePerPixelC) / (Tpre_rounded - 1089 + swath_width_chroma_ub * v->BytePerPixelC[k]) / (Tpre_rounded - 996 1090 Tvm_trips_rounded); 997 1091 else 998 1092 PrefetchBandwidth3 = 0; ··· 1002 1096 } 1003 1097 1004 1098 if (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded > 0) 1005 - PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * BytePerPixelC) 1099 + PrefetchBandwidth4 = (PrefetchSourceLinesY * swath_width_luma_ub * BytePerPixelY + PrefetchSourceLinesC * swath_width_chroma_ub * v->BytePerPixelC[k]) 1006 1100 / (Tpre_rounded - Tvm_trips_rounded - 2 * Tr0_trips_rounded); 1007 1101 else 1008 1102 PrefetchBandwidth4 = 0; ··· 1013 1107 bool Case3OK; 1014 1108 1015 1109 if (PrefetchBandwidth1 > 0) { 1016 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1 1110 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth1 1017 1111 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth1 >= Tr0_trips_rounded) { 1018 1112 Case1OK = true; 1019 1113 } else { ··· 1024 1118 } 1025 1119 1026 1120 if (PrefetchBandwidth2 > 0) { 1027 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2 1121 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth2 1028 1122 >= Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth2 < Tr0_trips_rounded) { 1029 1123 Case2OK = true; 1030 1124 } else { ··· 1035 1129 } 1036 1130 1037 1131 if (PrefetchBandwidth3 > 0) { 1038 - if (*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 1132 + if (v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / PrefetchBandwidth3 1039 1133 < Tvm_trips_rounded && (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / PrefetchBandwidth3 >= Tr0_trips_rounded) { 1040 1134 Case3OK = true; 1041 1135 } else { ··· 1058 1152 dml_print("DML: prefetch_bw_equ: %f\n", prefetch_bw_equ); 1059 1153 1060 1154 if (prefetch_bw_equ > 0) { 1061 - if (GPUVMEnable) { 1062 - Tvm_equ = dml_max3(*Tno_bw + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, Tvm_trips, LineTime / 4); 1155 + if (v->GPUVMEnable) { 1156 + Tvm_equ = dml_max3(v->Tno_bw[k] + PDEAndMetaPTEBytesFrame * HostVMInefficiencyFactor / prefetch_bw_equ, 
Tvm_trips, LineTime / 4); 1063 1157 } else { 1064 1158 Tvm_equ = LineTime / 4; 1065 1159 } 1066 1160 1067 - if ((GPUVMEnable || myPipe->DCCEnable)) { 1161 + if ((v->GPUVMEnable || myPipe->DCCEnable)) { 1068 1162 Tr0_equ = dml_max4( 1069 1163 (MetaRowByte + PixelPTEBytesPerRow * HostVMInefficiencyFactor) / prefetch_bw_equ, 1070 1164 Tr0_trips, ··· 1133 1227 } 1134 1228 1135 1229 *RequiredPrefetchPixDataBWLuma = (double) PrefetchSourceLinesY / LinesToRequestPrefetchPixelData * BytePerPixelY * swath_width_luma_ub / LineTime; 1136 - *RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * BytePerPixelC * swath_width_chroma_ub / LineTime; 1230 + *RequiredPrefetchPixDataBWChroma = (double) PrefetchSourceLinesC / LinesToRequestPrefetchPixelData * v->BytePerPixelC[k] * swath_width_chroma_ub / LineTime; 1137 1231 } else { 1138 1232 MyError = true; 1139 1233 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); ··· 1149 1243 dml_print("DML: Tr0: %fus - time to fetch first row of data pagetables and first row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1150 1244 dml_print("DML: Tr1: %fus - time to fetch second row of data pagetables and second row of meta data (done in parallel)\n", TimeForFetchingRowInVBlank); 1151 1245 dml_print("DML: Tsw: %fus = time to fetch enough pixel data and cursor data to feed the scalers init position and detile\n", (double)LinesToRequestPrefetchPixelData * LineTime); 1152 - dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime); 1246 + dml_print("DML: To: %fus - time for propagation from scaler to optc\n", (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime); 1153 1247 dml_print("DML: Tvstartup - Tsetup - Tcalc - Twait - Tpre - To > 0\n"); 1154 - dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (*DSTYAfterScaler + ((*DSTXAfterScaler) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup); 1248 + dml_print("DML: Tslack(pre): %fus - time left over in schedule\n", VStartup * LineTime - TimeForFetchingMetaPTE - 2 * TimeForFetchingRowInVBlank - (v->DSTYAfterScaler[k] + ((v->DSTXAfterScaler[k]) / (double) myPipe->HTotal)) * LineTime - TWait - TCalc - Tsetup); 1155 1249 dml_print("DML: row_bytes = dpte_row_bytes (per_pipe) = PixelPTEBytesPerRow = : %d\n", PixelPTEBytesPerRow); 1156 1250 1157 1251 } else { ··· 1182 1276 dml_print("DML: MyErr set %s:%d\n", __FILE__, __LINE__); 1183 1277 } 1184 1278 1185 - *prefetch_vmrow_bw = dml_max(prefetch_vm_bw, prefetch_row_bw); 1279 + v->prefetch_vmrow_bw[k] = dml_max(prefetch_vm_bw, prefetch_row_bw); 1186 1280 } 1187 1281 1188 1282 if (MyError) { ··· 2343 2437 2344 2438 v->ErrorResult[k] = CalculatePrefetchSchedule( 2345 2439 mode_lib, 2346 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 2347 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 2440 + k, 2348 2441 &myPipe, 2349 2442 v->DSCDelay[k], 2350 - v->DPPCLKDelaySubtotal 2351 - + v->DPPCLKDelayCNVCFormater, 2352 - v->DPPCLKDelaySCL, 2353 - v->DPPCLKDelaySCLLBOnly, 2354 - v->DPPCLKDelayCNVCCursor, 2355 - v->DISPCLKDelaySubtotal, 2356 2443 (unsigned int) (v->SwathWidthY[k] / v->HRatio[k]), 2357 - v->OutputFormat[k], 2358 - v->MaxInterDCNTileRepeaters, 2359 2444 dml_min(v->VStartupLines, v->MaxVStartupLines[k]), 2360 2445 
v->MaxVStartupLines[k], 2361 - v->GPUVMMaxPageTableLevels, 2362 - v->GPUVMEnable, 2363 - v->HostVMEnable, 2364 - v->HostVMMaxNonCachedPageTableLevels, 2365 - v->HostVMMinPageSize, 2366 - v->DynamicMetadataEnable[k], 2367 - v->DynamicMetadataVMEnabled, 2368 - v->DynamicMetadataLinesBeforeActiveRequired[k], 2369 - v->DynamicMetadataTransmittedBytes[k], 2370 2446 v->UrgentLatency, 2371 2447 v->UrgentExtraLatency, 2372 2448 v->TCalc, ··· 2362 2474 v->MaxNumSwathY[k], 2363 2475 v->PrefetchSourceLinesC[k], 2364 2476 v->SwathWidthC[k], 2365 - v->BytePerPixelC[k], 2366 2477 v->VInitPreFillC[k], 2367 2478 v->MaxNumSwathC[k], 2368 2479 v->swath_width_luma_ub[k], ··· 2369 2482 v->SwathHeightY[k], 2370 2483 v->SwathHeightC[k], 2371 2484 TWait, 2372 - v->ProgressiveToInterlaceUnitInOPP, 2373 - &v->DSTXAfterScaler[k], 2374 - &v->DSTYAfterScaler[k], 2375 2485 &v->DestinationLinesForPrefetch[k], 2376 2486 &v->PrefetchBandwidth[k], 2377 2487 &v->DestinationLinesToRequestVMInVBlank[k], ··· 2377 2493 &v->VRatioPrefetchC[k], 2378 2494 &v->RequiredPrefetchPixDataBWLuma[k], 2379 2495 &v->RequiredPrefetchPixDataBWChroma[k], 2380 - &v->NotEnoughTimeForDynamicMetadata[k], 2381 - &v->Tno_bw[k], 2382 - &v->prefetch_vmrow_bw[k], 2383 - &v->Tdmdl_vm[k], 2384 - &v->Tdmdl[k], 2385 - &v->VUpdateOffsetPix[k], 2386 - &v->VUpdateWidthPix[k], 2387 - &v->VReadyOffsetPix[k]); 2496 + &v->NotEnoughTimeForDynamicMetadata[k]); 2388 2497 if (v->BlendingAndTiming[k] == k) { 2389 2498 double TotalRepeaterDelayTime = v->MaxInterDCNTileRepeaters * (2 / v->DPPCLK[k] + 3 / v->DISPCLK); 2390 2499 v->VUpdateWidthPix[k] = (14 / v->DCFCLKDeepSleep + 12 / v->DPPCLK[k] + TotalRepeaterDelayTime) * v->PixelClock[k]; ··· 2607 2730 CalculateWatermarksAndDRAMSpeedChangeSupport( 2608 2731 mode_lib, 2609 2732 PrefetchMode, 2610 - v->NumberOfActivePlanes, 2611 - v->MaxLineBufferLines, 2612 - v->LineBufferSize, 2613 - v->DPPOutputBufferPixels, 2614 - v->DETBufferSizeInKByte[0], 2615 - v->WritebackInterfaceBufferSize, 2616 2733 v->DCFCLK, 2617 2734 v->ReturnBW, 2618 - v->GPUVMEnable, 2619 - v->dpte_group_bytes, 2620 - v->MetaChunkSize, 2621 2735 v->UrgentLatency, 2622 2736 v->UrgentExtraLatency, 2623 - v->WritebackLatency, 2624 - v->WritebackChunkSize, 2625 2737 v->SOCCLK, 2626 - v->FinalDRAMClockChangeLatency, 2627 - v->SRExitTime, 2628 - v->SREnterPlusExitTime, 2629 2738 v->DCFCLKDeepSleep, 2630 2739 v->DPPPerPlane, 2631 - v->DCCEnable, 2632 2740 v->DPPCLK, 2633 2741 v->DETBufferSizeY, 2634 2742 v->DETBufferSizeC, 2635 2743 v->SwathHeightY, 2636 2744 v->SwathHeightC, 2637 - v->LBBitPerPixel, 2638 2745 v->SwathWidthY, 2639 2746 v->SwathWidthC, 2640 - v->HRatio, 2641 - v->HRatioChroma, 2642 - v->vtaps, 2643 - v->VTAPsChroma, 2644 - v->VRatio, 2645 - v->VRatioChroma, 2646 - v->HTotal, 2647 - v->PixelClock, 2648 - v->BlendingAndTiming, 2649 2747 v->BytePerPixelDETY, 2650 2748 v->BytePerPixelDETC, 2651 - v->DSTXAfterScaler, 2652 - v->DSTYAfterScaler, 2653 - v->WritebackEnable, 2654 - v->WritebackPixelFormat, 2655 - v->WritebackDestinationWidth, 2656 - v->WritebackDestinationHeight, 2657 - v->WritebackSourceHeight, 2658 - &DRAMClockChangeSupport, 2659 - &v->UrgentWatermark, 2660 - &v->WritebackUrgentWatermark, 2661 - &v->DRAMClockChangeWatermark, 2662 - &v->WritebackDRAMClockChangeWatermark, 2663 - &v->StutterExitWatermark, 2664 - &v->StutterEnterPlusExitWatermark, 2665 - &v->MinActiveDRAMClockChangeLatencySupported); 2749 + &DRAMClockChangeSupport); 2666 2750 2667 2751 for (k = 0; k < v->NumberOfActivePlanes; ++k) { 2668 2752 if (v->WritebackEnable[k] == 
true) { ··· 4608 4770 4609 4771 v->NoTimeForPrefetch[i][j][k] = CalculatePrefetchSchedule( 4610 4772 mode_lib, 4611 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyPixelMixedWithVMData, 4612 - v->PercentOfIdealDRAMFabricAndSDPPortBWReceivedAfterUrgLatencyVMDataOnly, 4773 + k, 4613 4774 &myPipe, 4614 4775 v->DSCDelayPerState[i][k], 4615 - v->DPPCLKDelaySubtotal + v->DPPCLKDelayCNVCFormater, 4616 - v->DPPCLKDelaySCL, 4617 - v->DPPCLKDelaySCLLBOnly, 4618 - v->DPPCLKDelayCNVCCursor, 4619 - v->DISPCLKDelaySubtotal, 4620 4776 v->SwathWidthYThisState[k] / v->HRatio[k], 4621 - v->OutputFormat[k], 4622 - v->MaxInterDCNTileRepeaters, 4623 4777 dml_min(v->MaxVStartup, v->MaximumVStartup[i][j][k]), 4624 4778 v->MaximumVStartup[i][j][k], 4625 - v->GPUVMMaxPageTableLevels, 4626 - v->GPUVMEnable, 4627 - v->HostVMEnable, 4628 - v->HostVMMaxNonCachedPageTableLevels, 4629 - v->HostVMMinPageSize, 4630 - v->DynamicMetadataEnable[k], 4631 - v->DynamicMetadataVMEnabled, 4632 - v->DynamicMetadataLinesBeforeActiveRequired[k], 4633 - v->DynamicMetadataTransmittedBytes[k], 4634 4779 v->UrgLatency[i], 4635 4780 v->ExtraLatency, 4636 4781 v->TimeCalc, ··· 4627 4806 v->MaxNumSwY[k], 4628 4807 v->PrefetchLinesC[i][j][k], 4629 4808 v->SwathWidthCThisState[k], 4630 - v->BytePerPixelC[k], 4631 4809 v->PrefillC[k], 4632 4810 v->MaxNumSwC[k], 4633 4811 v->swath_width_luma_ub_this_state[k], ··· 4634 4814 v->SwathHeightYThisState[k], 4635 4815 v->SwathHeightCThisState[k], 4636 4816 v->TWait, 4637 - v->ProgressiveToInterlaceUnitInOPP, 4638 - &v->DSTXAfterScaler[k], 4639 - &v->DSTYAfterScaler[k], 4640 4817 &v->LineTimesForPrefetch[k], 4641 4818 &v->PrefetchBW[k], 4642 4819 &v->LinesForMetaPTE[k], ··· 4642 4825 &v->VRatioPreC[i][j][k], 4643 4826 &v->RequiredPrefetchPixelDataBWLuma[i][j][k], 4644 4827 &v->RequiredPrefetchPixelDataBWChroma[i][j][k], 4645 - &v->NoTimeForDynamicMetadata[i][j][k], 4646 - &v->Tno_bw[k], 4647 - &v->prefetch_vmrow_bw[k], 4648 - &v->Tdmdl_vm[k], 4649 - &v->Tdmdl[k], 4650 - &v->VUpdateOffsetPix[k], 4651 - &v->VUpdateWidthPix[k], 4652 - &v->VReadyOffsetPix[k]); 4828 + &v->NoTimeForDynamicMetadata[i][j][k]); 4653 4829 } 4654 4830 4655 4831 for (k = 0; k <= v->NumberOfActivePlanes - 1; k++) { ··· 4817 5007 CalculateWatermarksAndDRAMSpeedChangeSupport( 4818 5008 mode_lib, 4819 5009 v->PrefetchModePerState[i][j], 4820 - v->NumberOfActivePlanes, 4821 - v->MaxLineBufferLines, 4822 - v->LineBufferSize, 4823 - v->DPPOutputBufferPixels, 4824 - v->DETBufferSizeInKByte[0], 4825 - v->WritebackInterfaceBufferSize, 4826 5010 v->DCFCLKState[i][j], 4827 5011 v->ReturnBWPerState[i][j], 4828 - v->GPUVMEnable, 4829 - v->dpte_group_bytes, 4830 - v->MetaChunkSize, 4831 5012 v->UrgLatency[i], 4832 5013 v->ExtraLatency, 4833 - v->WritebackLatency, 4834 - v->WritebackChunkSize, 4835 5014 v->SOCCLKPerState[i], 4836 - v->FinalDRAMClockChangeLatency, 4837 - v->SRExitTime, 4838 - v->SREnterPlusExitTime, 4839 5015 v->ProjectedDCFCLKDeepSleep[i][j], 4840 5016 v->NoOfDPPThisState, 4841 - v->DCCEnable, 4842 5017 v->RequiredDPPCLKThisState, 4843 5018 v->DETBufferSizeYThisState, 4844 5019 v->DETBufferSizeCThisState, 4845 5020 v->SwathHeightYThisState, 4846 5021 v->SwathHeightCThisState, 4847 - v->LBBitPerPixel, 4848 5022 v->SwathWidthYThisState, 4849 5023 v->SwathWidthCThisState, 4850 - v->HRatio, 4851 - v->HRatioChroma, 4852 - v->vtaps, 4853 - v->VTAPsChroma, 4854 - v->VRatio, 4855 - v->VRatioChroma, 4856 - v->HTotal, 4857 - v->PixelClock, 4858 - v->BlendingAndTiming, 4859 5024 v->BytePerPixelInDETY, 4860 5025 
v->BytePerPixelInDETC, 4861 - v->DSTXAfterScaler, 4862 - v->DSTYAfterScaler, 4863 - v->WritebackEnable, 4864 - v->WritebackPixelFormat, 4865 - v->WritebackDestinationWidth, 4866 - v->WritebackDestinationHeight, 4867 - v->WritebackSourceHeight, 4868 - &v->DRAMClockChangeSupport[i][j], 4869 - &v->UrgentWatermark, 4870 - &v->WritebackUrgentWatermark, 4871 - &v->DRAMClockChangeWatermark, 4872 - &v->WritebackDRAMClockChangeWatermark, 4873 - &v->StutterExitWatermark, 4874 - &v->StutterEnterPlusExitWatermark, 4875 - &v->MinActiveDRAMClockChangeLatencySupported); 5026 + &v->DRAMClockChangeSupport[i][j]); 4876 5027 } 4877 5028 } 4878 5029 ··· 4950 5179 static void CalculateWatermarksAndDRAMSpeedChangeSupport( 4951 5180 struct display_mode_lib *mode_lib, 4952 5181 unsigned int PrefetchMode, 4953 - unsigned int NumberOfActivePlanes, 4954 - unsigned int MaxLineBufferLines, 4955 - unsigned int LineBufferSize, 4956 - unsigned int DPPOutputBufferPixels, 4957 - unsigned int DETBufferSizeInKByte, 4958 - unsigned int WritebackInterfaceBufferSize, 4959 5182 double DCFCLK, 4960 5183 double ReturnBW, 4961 - bool GPUVMEnable, 4962 - unsigned int dpte_group_bytes[], 4963 - unsigned int MetaChunkSize, 4964 5184 double UrgentLatency, 4965 5185 double ExtraLatency, 4966 - double WritebackLatency, 4967 - double WritebackChunkSize, 4968 5186 double SOCCLK, 4969 - double DRAMClockChangeLatency, 4970 - double SRExitTime, 4971 - double SREnterPlusExitTime, 4972 5187 double DCFCLKDeepSleep, 4973 5188 unsigned int DPPPerPlane[], 4974 - bool DCCEnable[], 4975 5189 double DPPCLK[], 4976 5190 unsigned int DETBufferSizeY[], 4977 5191 unsigned int DETBufferSizeC[], 4978 5192 unsigned int SwathHeightY[], 4979 5193 unsigned int SwathHeightC[], 4980 - unsigned int LBBitPerPixel[], 4981 5194 double SwathWidthY[], 4982 5195 double SwathWidthC[], 4983 - double HRatio[], 4984 - double HRatioChroma[], 4985 - unsigned int vtaps[], 4986 - unsigned int VTAPsChroma[], 4987 - double VRatio[], 4988 - double VRatioChroma[], 4989 - unsigned int HTotal[], 4990 - double PixelClock[], 4991 - unsigned int BlendingAndTiming[], 4992 5196 double BytePerPixelDETY[], 4993 5197 double BytePerPixelDETC[], 4994 - double DSTXAfterScaler[], 4995 - double DSTYAfterScaler[], 4996 - bool WritebackEnable[], 4997 - enum source_format_class WritebackPixelFormat[], 4998 - double WritebackDestinationWidth[], 4999 - double WritebackDestinationHeight[], 5000 - double WritebackSourceHeight[], 5001 - enum clock_change_support *DRAMClockChangeSupport, 5002 - double *UrgentWatermark, 5003 - double *WritebackUrgentWatermark, 5004 - double *DRAMClockChangeWatermark, 5005 - double *WritebackDRAMClockChangeWatermark, 5006 - double *StutterExitWatermark, 5007 - double *StutterEnterPlusExitWatermark, 5008 - double *MinActiveDRAMClockChangeLatencySupported) 5198 + enum clock_change_support *DRAMClockChangeSupport) 5009 5199 { 5200 + struct vba_vars_st *v = &mode_lib->vba; 5010 5201 double EffectiveLBLatencyHidingY = 0; 5011 5202 double EffectiveLBLatencyHidingC = 0; 5012 5203 double LinesInDETY[DC__NUM_DPP__MAX] = { 0 }; ··· 4987 5254 double WritebackDRAMClockChangeLatencyHiding = 0; 4988 5255 unsigned int k, j; 4989 5256 4990 - mode_lib->vba.TotalActiveDPP = 0; 4991 - mode_lib->vba.TotalDCCActiveDPP = 0; 4992 - for (k = 0; k < NumberOfActivePlanes; ++k) { 4993 - mode_lib->vba.TotalActiveDPP = mode_lib->vba.TotalActiveDPP + DPPPerPlane[k]; 4994 - if (DCCEnable[k] == true) { 4995 - mode_lib->vba.TotalDCCActiveDPP = mode_lib->vba.TotalDCCActiveDPP + DPPPerPlane[k]; 5257 + 
v->TotalActiveDPP = 0; 5258 + v->TotalDCCActiveDPP = 0; 5259 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5260 + v->TotalActiveDPP = v->TotalActiveDPP + DPPPerPlane[k]; 5261 + if (v->DCCEnable[k] == true) { 5262 + v->TotalDCCActiveDPP = v->TotalDCCActiveDPP + DPPPerPlane[k]; 4996 5263 } 4997 5264 } 4998 5265 4999 - *UrgentWatermark = UrgentLatency + ExtraLatency; 5266 + v->UrgentWatermark = UrgentLatency + ExtraLatency; 5000 5267 5001 - *DRAMClockChangeWatermark = DRAMClockChangeLatency + *UrgentWatermark; 5268 + v->DRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->UrgentWatermark; 5002 5269 5003 - mode_lib->vba.TotalActiveWriteback = 0; 5004 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5005 - if (WritebackEnable[k] == true) { 5006 - mode_lib->vba.TotalActiveWriteback = mode_lib->vba.TotalActiveWriteback + 1; 5270 + v->TotalActiveWriteback = 0; 5271 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5272 + if (v->WritebackEnable[k] == true) { 5273 + v->TotalActiveWriteback = v->TotalActiveWriteback + 1; 5007 5274 } 5008 5275 } 5009 5276 5010 - if (mode_lib->vba.TotalActiveWriteback <= 1) { 5011 - *WritebackUrgentWatermark = WritebackLatency; 5277 + if (v->TotalActiveWriteback <= 1) { 5278 + v->WritebackUrgentWatermark = v->WritebackLatency; 5012 5279 } else { 5013 - *WritebackUrgentWatermark = WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5280 + v->WritebackUrgentWatermark = v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5014 5281 } 5015 5282 5016 - if (mode_lib->vba.TotalActiveWriteback <= 1) { 5017 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency; 5283 + if (v->TotalActiveWriteback <= 1) { 5284 + v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency; 5018 5285 } else { 5019 - *WritebackDRAMClockChangeWatermark = DRAMClockChangeLatency + WritebackLatency + WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5286 + v->WritebackDRAMClockChangeWatermark = v->FinalDRAMClockChangeLatency + v->WritebackLatency + v->WritebackChunkSize * 1024.0 / 32.0 / SOCCLK; 5020 5287 } 5021 5288 5022 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5289 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5023 5290 5024 - mode_lib->vba.LBLatencyHidingSourceLinesY = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(HRatio[k], 1.0)), 1)) - (vtaps[k] - 1); 5291 + v->LBLatencyHidingSourceLinesY = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthY[k] / dml_max(v->HRatio[k], 1.0)), 1)) - (v->vtaps[k] - 1); 5025 5292 5026 - mode_lib->vba.LBLatencyHidingSourceLinesC = dml_min((double) MaxLineBufferLines, dml_floor(LineBufferSize / LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(HRatioChroma[k], 1.0)), 1)) - (VTAPsChroma[k] - 1); 5293 + v->LBLatencyHidingSourceLinesC = dml_min((double) v->MaxLineBufferLines, dml_floor(v->LineBufferSize / v->LBBitPerPixel[k] / (SwathWidthC[k] / dml_max(v->HRatioChroma[k], 1.0)), 1)) - (v->VTAPsChroma[k] - 1); 5027 5294 5028 - EffectiveLBLatencyHidingY = mode_lib->vba.LBLatencyHidingSourceLinesY / VRatio[k] * (HTotal[k] / PixelClock[k]); 5295 + EffectiveLBLatencyHidingY = v->LBLatencyHidingSourceLinesY / v->VRatio[k] * (v->HTotal[k] / v->PixelClock[k]); 5029 5296 5030 - EffectiveLBLatencyHidingC = mode_lib->vba.LBLatencyHidingSourceLinesC / VRatioChroma[k] * (HTotal[k] / PixelClock[k]); 5297 + EffectiveLBLatencyHidingC = v->LBLatencyHidingSourceLinesC / 
v->VRatioChroma[k] * (v->HTotal[k] / v->PixelClock[k]); 5031 5298 5032 5299 LinesInDETY[k] = (double) DETBufferSizeY[k] / BytePerPixelDETY[k] / SwathWidthY[k]; 5033 5300 LinesInDETYRoundedDownToSwath[k] = dml_floor(LinesInDETY[k], SwathHeightY[k]); 5034 - FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5301 + FullDETBufferingTimeY[k] = LinesInDETYRoundedDownToSwath[k] * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5035 5302 if (BytePerPixelDETC[k] > 0) { 5036 - LinesInDETC = mode_lib->vba.DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5303 + LinesInDETC = v->DETBufferSizeC[k] / BytePerPixelDETC[k] / SwathWidthC[k]; 5037 5304 LinesInDETCRoundedDownToSwath = dml_floor(LinesInDETC, SwathHeightC[k]); 5038 - FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (HTotal[k] / PixelClock[k]) / VRatioChroma[k]; 5305 + FullDETBufferingTimeC = LinesInDETCRoundedDownToSwath * (v->HTotal[k] / v->PixelClock[k]) / v->VRatioChroma[k]; 5039 5306 } else { 5040 5307 LinesInDETC = 0; 5041 5308 FullDETBufferingTimeC = 999999; 5042 5309 } 5043 5310 5044 - ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark; 5311 + ActiveDRAMClockChangeLatencyMarginY = EffectiveLBLatencyHidingY + FullDETBufferingTimeY[k] - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark; 5045 5312 5046 - if (NumberOfActivePlanes > 1) { 5047 - ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightY[k] * HTotal[k] / PixelClock[k] / VRatio[k]; 5313 + if (v->NumberOfActivePlanes > 1) { 5314 + ActiveDRAMClockChangeLatencyMarginY = ActiveDRAMClockChangeLatencyMarginY - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightY[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatio[k]; 5048 5315 } 5049 5316 5050 5317 if (BytePerPixelDETC[k] > 0) { 5051 - ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - *UrgentWatermark - (HTotal[k] / PixelClock[k]) * (DSTXAfterScaler[k] / HTotal[k] + DSTYAfterScaler[k]) - *DRAMClockChangeWatermark; 5318 + ActiveDRAMClockChangeLatencyMarginC = EffectiveLBLatencyHidingC + FullDETBufferingTimeC - v->UrgentWatermark - (v->HTotal[k] / v->PixelClock[k]) * (v->DSTXAfterScaler[k] / v->HTotal[k] + v->DSTYAfterScaler[k]) - v->DRAMClockChangeWatermark; 5052 5319 5053 - if (NumberOfActivePlanes > 1) { 5054 - ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / NumberOfActivePlanes) * SwathHeightC[k] * HTotal[k] / PixelClock[k] / VRatioChroma[k]; 5320 + if (v->NumberOfActivePlanes > 1) { 5321 + ActiveDRAMClockChangeLatencyMarginC = ActiveDRAMClockChangeLatencyMarginC - (1 - 1.0 / v->NumberOfActivePlanes) * SwathHeightC[k] * v->HTotal[k] / v->PixelClock[k] / v->VRatioChroma[k]; 5055 5322 } 5056 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5323 + v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(ActiveDRAMClockChangeLatencyMarginY, ActiveDRAMClockChangeLatencyMarginC); 5057 5324 } else { 5058 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5325 + v->ActiveDRAMClockChangeLatencyMargin[k] = ActiveDRAMClockChangeLatencyMarginY; 5059 
5326 } 5060 5327 5061 - if (WritebackEnable[k] == true) { 5328 + if (v->WritebackEnable[k] == true) { 5062 5329 5063 - WritebackDRAMClockChangeLatencyHiding = WritebackInterfaceBufferSize * 1024 / (WritebackDestinationWidth[k] * WritebackDestinationHeight[k] / (WritebackSourceHeight[k] * HTotal[k] / PixelClock[k]) * 4); 5064 - if (WritebackPixelFormat[k] == dm_444_64) { 5330 + WritebackDRAMClockChangeLatencyHiding = v->WritebackInterfaceBufferSize * 1024 / (v->WritebackDestinationWidth[k] * v->WritebackDestinationHeight[k] / (v->WritebackSourceHeight[k] * v->HTotal[k] / v->PixelClock[k]) * 4); 5331 + if (v->WritebackPixelFormat[k] == dm_444_64) { 5065 5332 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding / 2; 5066 5333 } 5067 - if (mode_lib->vba.WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) { 5334 + if (v->WritebackConfiguration == dm_whole_buffer_for_single_stream_interleave) { 5068 5335 WritebackDRAMClockChangeLatencyHiding = WritebackDRAMClockChangeLatencyHiding * 2; 5069 5336 } 5070 - WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - mode_lib->vba.WritebackDRAMClockChangeWatermark; 5071 - mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] = dml_min(mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin); 5337 + WritebackDRAMClockChangeLatencyMargin = WritebackDRAMClockChangeLatencyHiding - v->WritebackDRAMClockChangeWatermark; 5338 + v->ActiveDRAMClockChangeLatencyMargin[k] = dml_min(v->ActiveDRAMClockChangeLatencyMargin[k], WritebackDRAMClockChangeLatencyMargin); 5072 5339 } 5073 5340 } 5074 5341 5075 - mode_lib->vba.MinActiveDRAMClockChangeMargin = 999999; 5342 + v->MinActiveDRAMClockChangeMargin = 999999; 5076 5343 PlaneWithMinActiveDRAMClockChangeMargin = 0; 5077 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5078 - if (mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < mode_lib->vba.MinActiveDRAMClockChangeMargin) { 5079 - mode_lib->vba.MinActiveDRAMClockChangeMargin = mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5080 - if (BlendingAndTiming[k] == k) { 5344 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5345 + if (v->ActiveDRAMClockChangeLatencyMargin[k] < v->MinActiveDRAMClockChangeMargin) { 5346 + v->MinActiveDRAMClockChangeMargin = v->ActiveDRAMClockChangeLatencyMargin[k]; 5347 + if (v->BlendingAndTiming[k] == k) { 5081 5348 PlaneWithMinActiveDRAMClockChangeMargin = k; 5082 5349 } else { 5083 - for (j = 0; j < NumberOfActivePlanes; ++j) { 5084 - if (BlendingAndTiming[k] == j) { 5350 + for (j = 0; j < v->NumberOfActivePlanes; ++j) { 5351 + if (v->BlendingAndTiming[k] == j) { 5085 5352 PlaneWithMinActiveDRAMClockChangeMargin = j; 5086 5353 } 5087 5354 } ··· 5089 5356 } 5090 5357 } 5091 5358 5092 - *MinActiveDRAMClockChangeLatencySupported = mode_lib->vba.MinActiveDRAMClockChangeMargin + DRAMClockChangeLatency; 5359 + v->MinActiveDRAMClockChangeLatencySupported = v->MinActiveDRAMClockChangeMargin + v->FinalDRAMClockChangeLatency; 5093 5360 5094 5361 SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 999999; 5095 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5096 - if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (BlendingAndTiming[k] == k)) && !(BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5097 - SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = 
mode_lib->vba.ActiveDRAMClockChangeLatencyMargin[k]; 5362 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5363 + if (!((k == PlaneWithMinActiveDRAMClockChangeMargin) && (v->BlendingAndTiming[k] == k)) && !(v->BlendingAndTiming[k] == PlaneWithMinActiveDRAMClockChangeMargin) && v->ActiveDRAMClockChangeLatencyMargin[k] < SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank) { 5364 + SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank = v->ActiveDRAMClockChangeLatencyMargin[k]; 5098 5365 } 5099 5366 } 5100 5367 5101 - mode_lib->vba.TotalNumberOfActiveOTG = 0; 5102 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5103 - if (BlendingAndTiming[k] == k) { 5104 - mode_lib->vba.TotalNumberOfActiveOTG = mode_lib->vba.TotalNumberOfActiveOTG + 1; 5368 + v->TotalNumberOfActiveOTG = 0; 5369 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5370 + if (v->BlendingAndTiming[k] == k) { 5371 + v->TotalNumberOfActiveOTG = v->TotalNumberOfActiveOTG + 1; 5105 5372 } 5106 5373 } 5107 5374 5108 - if (mode_lib->vba.MinActiveDRAMClockChangeMargin > 0) { 5375 + if (v->MinActiveDRAMClockChangeMargin > 0) { 5109 5376 *DRAMClockChangeSupport = dm_dram_clock_change_vactive; 5110 - } else if (((mode_lib->vba.SynchronizedVBlank == true || mode_lib->vba.TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5377 + } else if (((v->SynchronizedVBlank == true || v->TotalNumberOfActiveOTG == 1 || SecondMinActiveDRAMClockChangeMarginOneDisplayInVBLank > 0) && PrefetchMode == 0)) { 5111 5378 *DRAMClockChangeSupport = dm_dram_clock_change_vblank; 5112 5379 } else { 5113 5380 *DRAMClockChangeSupport = dm_dram_clock_change_unsupported; 5114 5381 } 5115 5382 5116 5383 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[0]; 5117 - for (k = 0; k < NumberOfActivePlanes; ++k) { 5384 + for (k = 0; k < v->NumberOfActivePlanes; ++k) { 5118 5385 if (FullDETBufferingTimeY[k] <= FullDETBufferingTimeYStutterCriticalPlane) { 5119 5386 FullDETBufferingTimeYStutterCriticalPlane = FullDETBufferingTimeY[k]; 5120 - TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (HTotal[k] / PixelClock[k]) / VRatio[k]; 5387 + TimeToFinishSwathTransferStutterCriticalPlane = (SwathHeightY[k] - (LinesInDETY[k] - LinesInDETYRoundedDownToSwath[k])) * (v->HTotal[k] / v->PixelClock[k]) / v->VRatio[k]; 5121 5388 } 5122 5389 } 5123 5390 5124 - *StutterExitWatermark = SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5125 - *StutterEnterPlusExitWatermark = dml_max(SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5391 + v->StutterExitWatermark = v->SRExitTime + ExtraLatency + 10 / DCFCLKDeepSleep; 5392 + v->StutterEnterPlusExitWatermark = dml_max(v->SREnterPlusExitTime + ExtraLatency + 10 / DCFCLKDeepSleep, TimeToFinishSwathTransferStutterCriticalPlane); 5126 5393 5127 5394 } 5128 5395
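For readers following the arithmetic through this refactor, the one-to-one ("oto") prefetch quantities the hunks touch are unchanged in substance; only their inputs now come from the shared vba struct (v->Tno_bw[k], v->BytePerPixelC[k], and so on). In equation form, with L = PrefetchSourceLines, w = swath_width_ub and B = BytePerPixel for the luma (Y) and chroma (C) planes:

    T_{sw,oto} = Lsw_{oto} \cdot LineTime

    prefetch\_bw_{oto} = \frac{L_Y w_Y B_Y + L_C w_C B_C}{T_{sw,oto}}

    T_{vm,oto} = \max\!\left(T_{no\_bw} + \frac{PDEAndMetaPTEBytesFrame \cdot HostVMInefficiencyFactor}{prefetch\_bw_{oto}},\; T_{vm,trips},\; \frac{LineTime}{4}\right)

with the GPUVM-disabled branch collapsing to T_{vm,oto} = LineTime / 4, exactly as in the code above.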
+1 -27
drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
··· 1610 1610 struct dc_bios *bios = link->ctx->dc_bios; 1611 1611 struct bp_crtc_source_select crtc_source_select = {0}; 1612 1612 enum engine_id engine_id = link->link_enc->preferred_engine; 1613 - uint8_t bit_depth; 1614 1613 1615 1614 if (dc_is_rgb_signal(pipe_ctx->stream->signal)) 1616 1615 engine_id = link->link_enc->analog_engine; 1617 1616 1618 - switch (pipe_ctx->stream->timing.display_color_depth) { 1619 - case COLOR_DEPTH_UNDEFINED: 1620 - bit_depth = 0; 1621 - break; 1622 - case COLOR_DEPTH_666: 1623 - bit_depth = 6; 1624 - break; 1625 - default: 1626 - case COLOR_DEPTH_888: 1627 - bit_depth = 8; 1628 - break; 1629 - case COLOR_DEPTH_101010: 1630 - bit_depth = 10; 1631 - break; 1632 - case COLOR_DEPTH_121212: 1633 - bit_depth = 12; 1634 - break; 1635 - case COLOR_DEPTH_141414: 1636 - bit_depth = 14; 1637 - break; 1638 - case COLOR_DEPTH_161616: 1639 - bit_depth = 16; 1640 - break; 1641 - } 1642 - 1643 1617 crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst; 1644 - crtc_source_select.bit_depth = bit_depth; 1618 + crtc_source_select.color_depth = pipe_ctx->stream->timing.display_color_depth; 1645 1619 crtc_source_select.engine_id = engine_id; 1646 1620 crtc_source_select.sink_signal = pipe_ctx->stream->signal; 1647 1621
+1 -1
drivers/gpu/drm/amd/display/include/bios_parser_types.h
··· 136 136 enum engine_id engine_id; 137 137 enum controller_id controller_id; 138 138 enum signal_type sink_signal; 139 - uint8_t bit_depth; 139 + enum dc_color_depth color_depth; 140 140 }; 141 141 142 142 struct bp_transmitter_control {
+15 -18
drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
··· 2455 2455 } 2456 2456 2457 2457 for (i = 0; i < NUM_LINK_LEVELS; i++) { 2458 - if (pptable->PcieGenSpeed[i] > pcie_gen_cap || 2459 - pptable->PcieLaneCount[i] > pcie_width_cap) { 2460 - dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2461 - pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2462 - pcie_gen_cap : pptable->PcieGenSpeed[i]; 2463 - dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2464 - pptable->PcieLaneCount[i] > pcie_width_cap ? 2465 - pcie_width_cap : pptable->PcieLaneCount[i]; 2466 - smu_pcie_arg = i << 16; 2467 - smu_pcie_arg |= pcie_gen_cap << 8; 2468 - smu_pcie_arg |= pcie_width_cap; 2469 - ret = smu_cmn_send_smc_msg_with_param(smu, 2470 - SMU_MSG_OverridePcieParameters, 2471 - smu_pcie_arg, 2472 - NULL); 2473 - if (ret) 2474 - break; 2475 - } 2458 + dpm_context->dpm_tables.pcie_table.pcie_gen[i] = 2459 + pptable->PcieGenSpeed[i] > pcie_gen_cap ? 2460 + pcie_gen_cap : pptable->PcieGenSpeed[i]; 2461 + dpm_context->dpm_tables.pcie_table.pcie_lane[i] = 2462 + pptable->PcieLaneCount[i] > pcie_width_cap ? 2463 + pcie_width_cap : pptable->PcieLaneCount[i]; 2464 + smu_pcie_arg = i << 16; 2465 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_gen[i] << 8; 2466 + smu_pcie_arg |= dpm_context->dpm_tables.pcie_table.pcie_lane[i]; 2467 + ret = smu_cmn_send_smc_msg_with_param(smu, 2468 + SMU_MSG_OverridePcieParameters, 2469 + smu_pcie_arg, 2470 + NULL); 2471 + if (ret) 2472 + return ret; 2476 2473 } 2477 2474 2478 2475 return ret;
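The rework above drops the per-level guard so every link level is clamped and programmed, and it now echoes the clamped table values (rather than the raw caps) back to the firmware. Below is a minimal standalone sketch, not driver code, of the argument packing implied by the shifts in the hunk; the exact field widths beyond what the shifts show are an assumption here.

#include <stdint.h>
#include <stdio.h>

/* Pack an SMU_MSG_OverridePcieParameters-style argument: link level in
 * bits 23:16, PCIe gen speed in bits 15:8, lane-count code in bits 7:0. */
static uint32_t pack_pcie_override(uint32_t level, uint32_t gen, uint32_t lanes)
{
	return (level << 16) | (gen << 8) | lanes;
}

int main(void)
{
	/* Hypothetical values: level 1, gen code 3, lane code 6. */
	printf("0x%08x\n", pack_pcie_override(1, 3, 6)); /* prints 0x00010306 */
	return 0;
}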
+6 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2923 2923 break;
2924 2924 }
2925 2925 
2926 - if (!ret)
2926 + if (!ret) {
2927 + /* disable mmio access while doing mode 1 reset */
2928 + smu->adev->no_hw_access = true;
2929 + /* ensure no_hw_access is globally visible before any MMIO */
2930 + smp_mb();
2927 2931 msleep(SMU13_MODE1_RESET_WAIT_TIME_IN_MS);
2932 + }
2928 2933 
2929 2934 return ret;
2930 2935 }
+7 -2
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2143 2143 
2144 2144 ret = smu_cmn_send_debug_smc_msg(smu, DEBUGSMC_MSG_Mode1Reset);
2145 2145 if (!ret) {
2146 - if (amdgpu_emu_mode == 1)
2146 + if (amdgpu_emu_mode == 1) {
2147 2147 msleep(50000);
2148 - else
2148 + } else {
2149 + /* disable mmio access while doing mode 1 reset */
2150 + smu->adev->no_hw_access = true;
2151 + /* ensure no_hw_access is globally visible before any MMIO */
2152 + smp_mb();
2149 2153 msleep(1000);
2154 + }
2150 2155 }
2151 2156 
2152 2157 return ret;
2153 2158 }
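Both SMU hunks pair the no_hw_access store with a full barrier before sleeping through the reset, so that CPUs entering the register helpers observe the flag first. Here is a standalone model of the consumer side, assuming (as the comments above imply) that amdgpu's MMIO accessors check adev->no_hw_access and skip the access while it is set; the struct and function names are illustrative, not the driver's.

#include <stdatomic.h>
#include <stdint.h>

struct model_adev {
	atomic_bool no_hw_access;
	volatile uint32_t *mmio;
};

/* Reader half of the smp_mb() pairing: bail out before touching MMIO. */
static uint32_t model_rreg(struct model_adev *adev, uint32_t offset)
{
	if (atomic_load_explicit(&adev->no_hw_access, memory_order_acquire))
		return 0; /* access skipped while a mode-1 reset is in flight */
	return adev->mmio[offset / 4];
}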
+99 -23
drivers/gpu/drm/drm_atomic_helper.c
··· 1162 1162 new_state->self_refresh_active;
1163 1163 }
1164 1164 
1165 - static void
1166 - encoder_bridge_disable(struct drm_device *dev, struct drm_atomic_state *state)
1165 + /**
1166 + * drm_atomic_helper_commit_encoder_bridge_disable - disable bridges and encoder
1167 + * @dev: DRM device
1168 + * @state: the driver state object
1169 + *
1170 + * Loops over all connectors in the current state and if the CRTC needs
1171 + * it, disables the bridge chain all the way, then disables the encoder
1172 + * afterwards.
1173 + */
1174 + void
1175 + drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev,
1176 + struct drm_atomic_state *state)
1167 1177 {
1168 1178 struct drm_connector *connector;
1169 1179 struct drm_connector_state *old_conn_state, *new_conn_state;
··· 1239 1229 }
1240 1230 }
1241 1231 }
1232 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_disable);
1242 1233 
1243 - static void
1244 - crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)
1234 + /**
1235 + * drm_atomic_helper_commit_crtc_disable - disable CRTCs
1236 + * @dev: DRM device
1237 + * @state: the driver state object
1238 + *
1239 + * Loops over all CRTCs in the current state and if the CRTC needs
1240 + * it, disables it.
1241 + */
1242 + void
1243 + drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, struct drm_atomic_state *state)
1245 1244 {
1246 1245 struct drm_crtc *crtc;
1247 1246 struct drm_crtc_state *old_crtc_state, *new_crtc_state;
··· 1301 1282 drm_crtc_vblank_put(crtc);
1302 1283 }
1303 1284 }
1285 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_disable);
1304 1286 
1305 - static void
1306 - encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)
1287 + /**
1288 + * drm_atomic_helper_commit_encoder_bridge_post_disable - post-disable encoder bridges
1289 + * @dev: DRM device
1290 + * @state: the driver state object
1291 + *
1292 + * Loops over all connectors in the current state and if the CRTC needs
1293 + * it, post-disables all encoder bridges.
1294 + */
1295 + void
1296 + drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, struct drm_atomic_state *state)
1307 1297 {
1308 1298 struct drm_connector *connector;
1309 1299 struct drm_connector_state *old_conn_state, *new_conn_state;
··· 1363 1335 drm_bridge_put(bridge);
1364 1336 }
1365 1337 }
1338 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_post_disable);
1366 1339 
1367 1340 static void
1368 1341 disable_outputs(struct drm_device *dev, struct drm_atomic_state *state)
1369 1342 {
1370 - encoder_bridge_disable(dev, state);
1343 + drm_atomic_helper_commit_encoder_bridge_disable(dev, state);
1371 1344 
1372 - crtc_disable(dev, state);
1345 + drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state);
1373 1346 
1374 - encoder_bridge_post_disable(dev, state);
1347 + drm_atomic_helper_commit_crtc_disable(dev, state);
1375 1348 }
1376 1349 
1377 1350 /**
··· 1475 1446 }
1476 1447 EXPORT_SYMBOL(drm_atomic_helper_calc_timestamping_constants);
1477 1448 
1478 - static void
1479 - crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)
1449 + /**
1450 + * drm_atomic_helper_commit_crtc_set_mode - set the new mode
1451 + * @dev: DRM device
1452 + * @state: the driver state object
1453 + *
1454 + * Loops over all CRTCs and connectors in the current state and if the mode
1455 + * has changed, changes the mode of the CRTC, then calls down the bridge
1456 + * chain and changes the mode in all bridges as well.
1457 + */
1458 + void
1459 + drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *state)
1480 1460 {
1481 1461 struct drm_crtc *crtc;
1482 1462 struct drm_crtc_state *new_crtc_state;
··· 1546 1508 drm_bridge_put(bridge);
1547 1509 }
1548 1510 }
1511 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_set_mode);
1549 1512 
1550 1513 /**
1551 1514 * drm_atomic_helper_commit_modeset_disables - modeset commit to disable outputs
··· 1570 1531 drm_atomic_helper_update_legacy_modeset_state(dev, state);
1571 1532 drm_atomic_helper_calc_timestamping_constants(state);
1572 1533 
1573 - crtc_set_mode(dev, state);
1534 + drm_atomic_helper_commit_crtc_set_mode(dev, state);
1574 1535 }
1575 1536 EXPORT_SYMBOL(drm_atomic_helper_commit_modeset_disables);
1576 1537 
1577 - static void drm_atomic_helper_commit_writebacks(struct drm_device *dev,
1578 - struct drm_atomic_state *state)
1538 + /**
1539 + * drm_atomic_helper_commit_writebacks - issue writebacks
1540 + * @dev: DRM device
1541 + * @state: atomic state object being committed
1542 + *
1543 + * This loops over the connectors, checks if the new state requires
1544 + * a writeback job to be issued and in that case issues an atomic
1545 + * commit on each connector.
1546 + */
1547 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev,
1548 + struct drm_atomic_state *state)
1579 1549 {
1580 1550 struct drm_connector *connector;
1581 1551 struct drm_connector_state *new_conn_state;
··· 1603 1555 }
1604 1556 }
1605 1557 }
1558 + EXPORT_SYMBOL(drm_atomic_helper_commit_writebacks);
1606 1559 
1607 - static void
1608 - encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)
1560 + /**
1561 + * drm_atomic_helper_commit_encoder_bridge_pre_enable - pre-enable bridges
1562 + * @dev: DRM device
1563 + * @state: atomic state object being committed
1564 + *
1565 + * This loops over the connectors and if the CRTC needs it, pre-enables
1566 + * the entire bridge chain.
1567 + */
1568 + void
1569 + drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, struct drm_atomic_state *state)
1609 1570 {
1610 1571 struct drm_connector *connector;
1611 1572 struct drm_connector_state *new_conn_state;
··· 1645 1588 drm_bridge_put(bridge);
1646 1589 }
1647 1590 }
1591 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_pre_enable);
1648 1592 
1649 - static void
1650 - crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)
1593 + /**
1594 + * drm_atomic_helper_commit_crtc_enable - enables the CRTCs
1595 + * @dev: DRM device
1596 + * @state: atomic state object being committed
1597 + *
1598 + * This loops over CRTCs in the new state, and if the CRTC needs
1599 + * it, enables it.
1600 + */
1601 + void
1602 + drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, struct drm_atomic_state *state)
1651 1603 {
1652 1604 struct drm_crtc *crtc;
1653 1605 struct drm_crtc_state *old_crtc_state;
··· 1685 1619 }
1686 1620 }
1687 1621 }
1622 + EXPORT_SYMBOL(drm_atomic_helper_commit_crtc_enable);
1688 1623 
1689 - static void
1690 - encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state)
1624 + /**
1625 + * drm_atomic_helper_commit_encoder_bridge_enable - enables the bridges
1626 + * @dev: DRM device
1627 + * @state: atomic state object being committed
1628 + *
1629 + * This loops over all connectors in the new state, and if the CRTC needs
1630 + * it, enables the entire bridge chain.
1631 + */ 1632 + void 1633 + drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, struct drm_atomic_state *state) 1691 1634 { 1692 1635 struct drm_connector *connector; 1693 1636 struct drm_connector_state *new_conn_state; ··· 1739 1664 drm_bridge_put(bridge); 1740 1665 } 1741 1666 } 1667 + EXPORT_SYMBOL(drm_atomic_helper_commit_encoder_bridge_enable); 1742 1668 1743 1669 /** 1744 1670 * drm_atomic_helper_commit_modeset_enables - modeset commit to enable outputs ··· 1758 1682 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 1759 1683 struct drm_atomic_state *state) 1760 1684 { 1761 - encoder_bridge_pre_enable(dev, state); 1685 + drm_atomic_helper_commit_crtc_enable(dev, state); 1762 1686 1763 - crtc_enable(dev, state); 1687 + drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state); 1764 1688 1765 - encoder_bridge_enable(dev, state); 1689 + drm_atomic_helper_commit_encoder_bridge_enable(dev, state); 1766 1690 1767 1691 drm_atomic_helper_commit_writebacks(dev, state); 1768 1692 }
+10
drivers/gpu/drm/drm_fb_helper.c
··· 366 366 { 367 367 struct drm_fb_helper *helper = container_of(work, struct drm_fb_helper, damage_work); 368 368 369 + if (helper->info->state != FBINFO_STATE_RUNNING) 370 + return; 371 + 369 372 drm_fb_helper_fb_dirty(helper); 370 373 } 371 374 ··· 734 731 if (suspend) { 735 732 if (fb_helper->info->state != FBINFO_STATE_RUNNING) 736 733 return; 734 + 735 + /* 736 + * Cancel pending damage work. During GPU reset, VBlank 737 + * interrupts are disabled and drm_fb_helper_fb_dirty() 738 + * would wait for VBlank timeout otherwise. 739 + */ 740 + cancel_work_sync(&fb_helper->damage_work); 737 741 738 742 console_lock(); 739 743
+1 -1
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 1692 1692 { 1693 1693 struct hdmi_context *hdata = arg; 1694 1694 1695 - mod_delayed_work(system_wq, &hdata->hotplug_work, 1695 + mod_delayed_work(system_percpu_wq, &hdata->hotplug_work, 1696 1696 msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); 1697 1697 1698 1698 return IRQ_HANDLED;
-6
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 1002 1002 return PTR_ERR(dsi->next_bridge); 1003 1003 } 1004 1004 1005 - /* 1006 - * set flag to request the DSI host bridge be pre-enabled before device bridge 1007 - * in the chain, so the DSI host is ready when the device bridge is pre-enabled 1008 - */ 1009 - dsi->next_bridge->pre_enable_prev_first = true; 1010 - 1011 1005 drm_bridge_add(&dsi->bridge); 1012 1006 1013 1007 ret = component_add(host->dev, &mtk_dsi_component_ops);
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ad102.c
··· 30 30 31 31 .booter.ctor = ga102_gsp_booter_ctor, 32 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 33 36 .dtor = r535_gsp_dtor, 34 37 .oneinit = tu102_gsp_oneinit, 35 38 .init = tu102_gsp_init,
+1 -7
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
··· 337 337 } 338 338 339 339 int 340 - nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 340 + nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp) 341 341 { 342 342 return nvkm_gsp_fwsec_init(gsp, &gsp->fws.falcon.sb, "fwsec-sb", 343 343 NVFW_FALCON_APPIF_DMEMMAPPER_CMD_SB); 344 - } 345 - 346 - void 347 - nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 348 - { 349 - nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 350 344 } 351 345 352 346 int
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga100.c
··· 47 47 48 48 .booter.ctor = tu102_gsp_booter_ctor, 49 49 50 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 51 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 52 + 50 53 .dtor = r535_gsp_dtor, 51 54 .oneinit = tu102_gsp_oneinit, 52 55 .init = tu102_gsp_init,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/ga102.c
··· 158 158 159 159 .booter.ctor = ga102_gsp_booter_ctor, 160 160 161 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 162 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 163 + 161 164 .dtor = r535_gsp_dtor, 162 165 .oneinit = tu102_gsp_oneinit, 163 166 .init = tu102_gsp_init,
+21 -2
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
··· 7 7 8 8 int nvkm_gsp_fwsec_frts(struct nvkm_gsp *); 9 9 10 - int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 11 10 int nvkm_gsp_fwsec_sb(struct nvkm_gsp *); 12 - void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 11 + int nvkm_gsp_fwsec_sb_init(struct nvkm_gsp *gsp); 13 12 14 13 struct nvkm_gsp_fwif { 15 14 int version; ··· 51 52 struct nvkm_falcon *, struct nvkm_falcon_fw *); 52 53 } booter; 53 54 55 + struct { 56 + int (*ctor)(struct nvkm_gsp *); 57 + void (*dtor)(struct nvkm_gsp *); 58 + } fwsec_sb; 59 + 54 60 void (*dtor)(struct nvkm_gsp *); 55 61 int (*oneinit)(struct nvkm_gsp *); 56 62 int (*init)(struct nvkm_gsp *); ··· 71 67 extern const struct nvkm_falcon_fw_func tu102_gsp_fwsec; 72 68 int tu102_gsp_booter_ctor(struct nvkm_gsp *, const char *, const struct firmware *, 73 69 struct nvkm_falcon *, struct nvkm_falcon_fw *); 70 + int tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *); 71 + void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *); 74 72 int tu102_gsp_oneinit(struct nvkm_gsp *); 75 73 int tu102_gsp_init(struct nvkm_gsp *); 76 74 int tu102_gsp_fini(struct nvkm_gsp *, bool suspend); ··· 96 90 97 91 int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int, 98 92 struct nvkm_gsp **); 93 + 94 + static inline int nvkm_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 95 + { 96 + if (gsp->func->fwsec_sb.ctor) 97 + return gsp->func->fwsec_sb.ctor(gsp); 98 + return 0; 99 + } 100 + 101 + static inline void nvkm_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 102 + { 103 + if (gsp->func->fwsec_sb.dtor) 104 + gsp->func->fwsec_sb.dtor(gsp); 105 + } 99 106 100 107 extern const struct nvkm_gsp_func gv100_gsp; 101 108 #endif
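The static inline wrappers at the end of the priv.h hunk give the fwsec_sb hooks optional-callback semantics: chips that populate .fwsec_sb.{ctor,dtor} (tu102 through ad102 in this series) run the FWSEC-SB stage, while an implementation that leaves the hooks NULL gets a successful no-op. A self-contained sketch of the same pattern, with illustrative names:

struct obj;

struct sb_ops {
	int (*ctor)(struct obj *);
	void (*dtor)(struct obj *);
};

/* NULL hooks degrade to "return 0" / "do nothing", mirroring
 * nvkm_gsp_fwsec_sb_ctor()/_dtor() above. */
static inline int sb_ctor(const struct sb_ops *ops, struct obj *o)
{
	return ops->ctor ? ops->ctor(o) : 0;
}

static inline void sb_dtor(const struct sb_ops *ops, struct obj *o)
{
	if (ops->dtor)
		ops->dtor(o);
}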
+15
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
··· 30 30 #include <nvfw/fw.h> 31 31 #include <nvfw/hs.h> 32 32 33 + int 34 + tu102_gsp_fwsec_sb_ctor(struct nvkm_gsp *gsp) 35 + { 36 + return nvkm_gsp_fwsec_sb_init(gsp); 37 + } 38 + 39 + void 40 + tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *gsp) 41 + { 42 + nvkm_falcon_fw_dtor(&gsp->fws.falcon.sb); 43 + } 44 + 33 45 static int 34 46 tu102_gsp_booter_unload(struct nvkm_gsp *gsp, u32 mbox0, u32 mbox1) 35 47 { ··· 381 369 .sig_section = ".fwsignature_tu10x", 382 370 383 371 .booter.ctor = tu102_gsp_booter_ctor, 372 + 373 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 374 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 384 375 385 376 .dtor = r535_gsp_dtor, 386 377 .oneinit = tu102_gsp_oneinit,
+3
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu116.c
··· 30 30 31 31 .booter.ctor = tu102_gsp_booter_ctor, 32 32 33 + .fwsec_sb.ctor = tu102_gsp_fwsec_sb_ctor, 34 + .fwsec_sb.dtor = tu102_gsp_fwsec_sb_dtor, 35 + 33 36 .dtor = r535_gsp_dtor, 34 37 .oneinit = tu102_gsp_oneinit, 35 38 .init = tu102_gsp_init,
+1 -1
drivers/gpu/drm/pl111/pl111_drv.c
··· 295 295 variant->name, priv); 296 296 if (ret != 0) { 297 297 dev_err(dev, "%s failed irq %d\n", __func__, ret); 298 - return ret; 298 + goto dev_put; 299 299 } 300 300 301 301 ret = pl111_modeset_init(drm);
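The error-path change above routes the IRQ-request failure through what is presumably an existing dev_put unwind label that drops the drm_device reference taken earlier in probe, instead of returning with that reference still held; the label body itself sits outside the hunk.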
+1 -1
drivers/gpu/drm/radeon/pptable.h
··· 450 450 //sizeof(ATOM_PPLIB_CLOCK_INFO) 451 451 UCHAR ucEntrySize; 452 452 453 - UCHAR clockInfo[] __counted_by(ucNumEntries); 453 + UCHAR clockInfo[] /*__counted_by(ucNumEntries)*/; 454 454 }ClockInfoArray; 455 455 456 456 typedef struct _NonClockInfoArray{
+27 -3
drivers/gpu/drm/tidss/tidss_kms.c
··· 26 26 
27 27 tidss_runtime_get(tidss);
28 28 
29 - drm_atomic_helper_commit_modeset_disables(ddev, old_state);
30 - drm_atomic_helper_commit_planes(ddev, old_state, DRM_PLANE_COMMIT_ACTIVE_ONLY);
31 - drm_atomic_helper_commit_modeset_enables(ddev, old_state);
29 + /*
30 + * TI's OLDI and DSI encoders need to be set up before the crtc is
31 + * enabled. Thus drm_atomic_helper_commit_modeset_enables() and
32 + * drm_atomic_helper_commit_modeset_disables() cannot be used here, as
33 + * they enable the crtc before bridges' pre-enable, and disable the crtc
34 + * after bridges' post-disable.
35 + *
36 + * Open code the functions here and first call the bridges' pre-enables,
37 + * then crtc enable, then bridges' enable (and vice versa for
38 + * disable).
39 + */
40 + 
41 + drm_atomic_helper_commit_encoder_bridge_disable(ddev, old_state);
42 + drm_atomic_helper_commit_crtc_disable(ddev, old_state);
43 + drm_atomic_helper_commit_encoder_bridge_post_disable(ddev, old_state);
44 + 
45 + drm_atomic_helper_update_legacy_modeset_state(ddev, old_state);
46 + drm_atomic_helper_calc_timestamping_constants(old_state);
47 + drm_atomic_helper_commit_crtc_set_mode(ddev, old_state);
48 + 
49 + drm_atomic_helper_commit_planes(ddev, old_state,
50 + DRM_PLANE_COMMIT_ACTIVE_ONLY);
51 + 
52 + drm_atomic_helper_commit_encoder_bridge_pre_enable(ddev, old_state);
53 + drm_atomic_helper_commit_crtc_enable(ddev, old_state);
54 + drm_atomic_helper_commit_encoder_bridge_enable(ddev, old_state);
55 + drm_atomic_helper_commit_writebacks(ddev, old_state);
32 56 
33 57 drm_atomic_helper_commit_hw_done(old_state);
34 58 drm_atomic_helper_wait_for_flip_done(ddev, old_state);
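The open-coded sequence above is drm_atomic_helper_commit_tail()'s shape with the one change the comment calls out: crtc enable now sits between the bridges' pre-enable and enable, and crtc disable between their disable and post-disable, which is the ordering the SoC-internal OLDI/DSI encoders require.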
+1 -1
drivers/gpu/nova-core/Kconfig
··· 3 3 depends on 64BIT 4 4 depends on PCI 5 5 depends on RUST 6 - depends on RUST_FW_LOADER_ABSTRACTIONS 6 + select RUST_FW_LOADER_ABSTRACTIONS 7 7 select AUXILIARY_BUS 8 8 default n 9 9 help
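On the Kconfig change: with "depends on", NOVA_CORE stays invisible in configuration until RUST_FW_LOADER_ABSTRACTIONS has been enabled by hand elsewhere; with "select", enabling NOVA_CORE pulls the firmware-loader abstractions in automatically, matching how AUXILIARY_BUS is already handled on the next line.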
+8 -6
drivers/gpu/nova-core/gsp/cmdq.rs
··· 588 588 header.length(), 589 589 ); 590 590 591 + let payload_length = header.payload_length(); 592 + 591 593 // Check that the driver read area is large enough for the message. 592 - if slice_1.len() + slice_2.len() < header.length() { 594 + if slice_1.len() + slice_2.len() < payload_length { 593 595 return Err(EIO); 594 596 } 595 597 596 598 // Cut the message slices down to the actual length of the message. 597 - let (slice_1, slice_2) = if slice_1.len() > header.length() { 598 - // PANIC: we checked above that `slice_1` is at least as long as `msg_header.length()`. 599 - (slice_1.split_at(header.length()).0, &slice_2[0..0]) 599 + let (slice_1, slice_2) = if slice_1.len() > payload_length { 600 + // PANIC: we checked above that `slice_1` is at least as long as `payload_length`. 601 + (slice_1.split_at(payload_length).0, &slice_2[0..0]) 600 602 } else { 601 603 ( 602 604 slice_1, 603 605 // PANIC: we checked above that `slice_1.len() + slice_2.len()` is at least as 604 - // large as `msg_header.length()`. 605 - slice_2.split_at(header.length() - slice_1.len()).0, 606 + // large as `payload_length`. 607 + slice_2.split_at(payload_length - slice_1.len()).0, 606 608 ) 607 609 }; 608 610
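A worked example of why the bounds check and split point had to change (the sizes here are hypothetical): if rpc.length reads 0x180 and rpc_message_header_v occupies 0x20 bytes, then payload_length() — introduced in the fw.rs hunk below — is 0x180 - 0x20 = 0x160, which is exactly the number of payload bytes the two driver-read slices hold. header.length() additionally counts the message element header itself, so using it here both over-rejected messages near the end of the ring and split the slices one header's worth too far.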
+38 -40
drivers/gpu/nova-core/gsp/fw.rs
··· 141 141 // are valid. 142 142 unsafe impl FromBytes for GspFwWprMeta {} 143 143 144 - type GspFwWprMetaBootResumeInfo = r570_144::GspFwWprMeta__bindgen_ty_1; 145 - type GspFwWprMetaBootInfo = r570_144::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 144 + type GspFwWprMetaBootResumeInfo = bindings::GspFwWprMeta__bindgen_ty_1; 145 + type GspFwWprMetaBootInfo = bindings::GspFwWprMeta__bindgen_ty_1__bindgen_ty_1; 146 146 147 147 impl GspFwWprMeta { 148 148 /// Fill in and return a `GspFwWprMeta` suitable for booting `gsp_firmware` using the ··· 150 150 pub(crate) fn new(gsp_firmware: &GspFirmware, fb_layout: &FbLayout) -> Self { 151 151 Self(bindings::GspFwWprMeta { 152 152 // CAST: we want to store the bits of `GSP_FW_WPR_META_MAGIC` unmodified. 153 - magic: r570_144::GSP_FW_WPR_META_MAGIC as u64, 154 - revision: u64::from(r570_144::GSP_FW_WPR_META_REVISION), 153 + magic: bindings::GSP_FW_WPR_META_MAGIC as u64, 154 + revision: u64::from(bindings::GSP_FW_WPR_META_REVISION), 155 155 sysmemAddrOfRadix3Elf: gsp_firmware.radix3_dma_handle(), 156 156 sizeOfRadix3Elf: u64::from_safe_cast(gsp_firmware.size), 157 157 sysmemAddrOfBootloader: gsp_firmware.bootloader.ucode.dma_handle(), ··· 315 315 #[repr(u32)] 316 316 pub(crate) enum SeqBufOpcode { 317 317 // Core operation opcodes 318 - CoreReset = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 - CoreResume = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 - CoreStart = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 - CoreWaitForHalt = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 318 + CoreReset = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET, 319 + CoreResume = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME, 320 + CoreStart = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START, 321 + CoreWaitForHalt = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT, 322 322 323 323 // Delay opcode 324 - DelayUs = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 324 + DelayUs = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US, 325 325 326 326 // Register operation opcodes 327 - RegModify = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 - RegPoll = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 - RegStore = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 - RegWrite = r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 327 + RegModify = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY, 328 + RegPoll = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL, 329 + RegStore = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE, 330 + RegWrite = bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE, 331 331 } 332 332 333 333 impl fmt::Display for SeqBufOpcode { ··· 351 351 352 352 fn try_from(value: u32) -> Result<SeqBufOpcode> { 353 353 match value { 354 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 354 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESET => { 355 355 Ok(SeqBufOpcode::CoreReset) 356 356 } 357 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 357 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME => { 358 358 Ok(SeqBufOpcode::CoreResume) 359 359 } 360 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 360 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_START => { 361 361 Ok(SeqBufOpcode::CoreStart) 362 362 } 363 - 
r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 363 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_WAIT_FOR_HALT => { 364 364 Ok(SeqBufOpcode::CoreWaitForHalt) 365 365 } 366 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 366 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_DELAY_US => Ok(SeqBufOpcode::DelayUs), 367 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_MODIFY => { 368 368 Ok(SeqBufOpcode::RegModify) 369 369 } 370 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 - r570_144::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 370 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_POLL => Ok(SeqBufOpcode::RegPoll), 371 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_STORE => Ok(SeqBufOpcode::RegStore), 372 + bindings::GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_REG_WRITE => Ok(SeqBufOpcode::RegWrite), 373 373 _ => Err(EINVAL), 374 374 } 375 375 } ··· 385 385 /// Wrapper for GSP sequencer register write payload. 386 386 #[repr(transparent)] 387 387 #[derive(Copy, Clone)] 388 - pub(crate) struct RegWritePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 388 + pub(crate) struct RegWritePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_WRITE); 389 389 390 390 impl RegWritePayload { 391 391 /// Returns the register address. ··· 408 408 /// Wrapper for GSP sequencer register modify payload. 409 409 #[repr(transparent)] 410 410 #[derive(Copy, Clone)] 411 - pub(crate) struct RegModifyPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 411 + pub(crate) struct RegModifyPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_MODIFY); 412 412 413 413 impl RegModifyPayload { 414 414 /// Returns the register address. ··· 436 436 /// Wrapper for GSP sequencer register poll payload. 437 437 #[repr(transparent)] 438 438 #[derive(Copy, Clone)] 439 - pub(crate) struct RegPollPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 439 + pub(crate) struct RegPollPayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_POLL); 440 440 441 441 impl RegPollPayload { 442 442 /// Returns the register address. ··· 469 469 /// Wrapper for GSP sequencer delay payload. 470 470 #[repr(transparent)] 471 471 #[derive(Copy, Clone)] 472 - pub(crate) struct DelayUsPayload(r570_144::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 472 + pub(crate) struct DelayUsPayload(bindings::GSP_SEQ_BUF_PAYLOAD_DELAY_US); 473 473 474 474 impl DelayUsPayload { 475 475 /// Returns the delay value in microseconds. ··· 487 487 /// Wrapper for GSP sequencer register store payload. 488 488 #[repr(transparent)] 489 489 #[derive(Copy, Clone)] 490 - pub(crate) struct RegStorePayload(r570_144::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 490 + pub(crate) struct RegStorePayload(bindings::GSP_SEQ_BUF_PAYLOAD_REG_STORE); 491 491 492 492 impl RegStorePayload { 493 493 /// Returns the register address. ··· 510 510 511 511 /// Wrapper for GSP sequencer buffer command. 512 512 #[repr(transparent)] 513 - pub(crate) struct SequencerBufferCmd(r570_144::GSP_SEQUENCER_BUFFER_CMD); 513 + pub(crate) struct SequencerBufferCmd(bindings::GSP_SEQUENCER_BUFFER_CMD); 514 514 515 515 impl SequencerBufferCmd { 516 516 /// Returns the opcode as a `SeqBufOpcode` enum, or error if invalid. ··· 612 612 613 613 /// Wrapper for GSP run CPU sequencer RPC. 
614 614 #[repr(transparent)] 615 - pub(crate) struct RunCpuSequencer(r570_144::rpc_run_cpu_sequencer_v17_00); 615 + pub(crate) struct RunCpuSequencer(bindings::rpc_run_cpu_sequencer_v17_00); 616 616 617 617 impl RunCpuSequencer { 618 618 /// Returns the command index. ··· 797 797 } 798 798 } 799 799 800 - // SAFETY: We can't derive the Zeroable trait for this binding because the 801 - // procedural macro doesn't support the syntax used by bindgen to create the 802 - // __IncompleteArrayField types. So instead we implement it here, which is safe 803 - // because these are explicitly padded structures only containing types for 804 - // which any bit pattern, including all zeros, is valid. 805 - unsafe impl Zeroable for bindings::rpc_message_header_v {} 806 - 807 800 /// GSP Message Element. 808 801 /// 809 802 /// This is essentially a message header expected to be followed by the message data. ··· 846 853 self.inner.checkSum = checksum; 847 854 } 848 855 849 - /// Returns the total length of the message. 856 + /// Returns the length of the message's payload. 857 + pub(crate) fn payload_length(&self) -> usize { 858 + // `rpc.length` includes the length of the RPC message header. 859 + num::u32_as_usize(self.inner.rpc.length) 860 + .saturating_sub(size_of::<bindings::rpc_message_header_v>()) 861 + } 862 + 863 + /// Returns the total length of the message, message and RPC headers included. 850 864 pub(crate) fn length(&self) -> usize { 851 - // `rpc.length` includes the length of the GspRpcHeader but not the message header. 852 - size_of::<Self>() - size_of::<bindings::rpc_message_header_v>() 853 - + num::u32_as_usize(self.inner.rpc.length) 865 + size_of::<Self>() + self.payload_length() 854 866 } 855 867 856 868 // Returns the sequence number of the message.
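Editor's note: the new payload_length()/length() split makes the length convention explicit — the RPC header's length field counts the RPC header itself, so the payload size is that value minus the header size (saturating, to tolerate a corrupt length), and the total element size is the element size plus the payload. A minimal C sketch of the same arithmetic, with illustrative type names rather than the driver's:

#include <stddef.h>
#include <stdint.h>

struct rpc_hdr {
	uint32_t length;	/* includes sizeof(struct rpc_hdr) */
};

struct msg_element {
	uint8_t auth_tag[16];	/* outer element header */
	struct rpc_hdr rpc;
};

/* Payload bytes following the RPC header; guard against a corrupt
 * length smaller than the header itself (the Rust code uses
 * saturating_sub for the same reason). */
static size_t payload_length(const struct msg_element *e)
{
	return e->rpc.length > sizeof(struct rpc_hdr) ?
	       e->rpc.length - sizeof(struct rpc_hdr) : 0;
}

/* Total size on the queue: element header + RPC header + payload. */
static size_t total_length(const struct msg_element *e)
{
	return sizeof(*e) + payload_length(e);
}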
+7 -4
drivers/gpu/nova-core/gsp/fw/r570_144.rs
··· 24 24 unreachable_pub, 25 25 unsafe_op_in_unsafe_fn 26 26 )] 27 - use kernel::{ 28 - ffi, 29 - prelude::Zeroable, // 30 - }; 27 + use kernel::ffi; 28 + use pin_init::MaybeZeroable; 29 + 31 30 include!("r570_144/bindings.rs"); 31 + 32 + // SAFETY: This type has a size of zero, so its inclusion into another type should not affect their 33 + // ability to implement `Zeroable`. 34 + unsafe impl<T> kernel::prelude::Zeroable for __IncompleteArrayField<T> {}
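Editor's note: the manual Zeroable impl for __IncompleteArrayField<T> leans on the same property C relies on for flexible array members — the trailing incomplete array contributes zero bytes to the containing struct, so an all-zero bit pattern of the sized prefix is always valid regardless of the element type. A hedged C analogue, with a struct loosely modelled on PACKED_REGISTRY_TABLE:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct registry_table {
	uint32_t size;
	uint32_t num_entries;
	uint32_t entries[];	/* flexible array member: zero-sized, like
				 * bindgen's __IncompleteArrayField */
};

/* Allocate room for n entries and zero everything. sizeof(struct
 * registry_table) covers only the fixed prefix; the trailing array
 * adds no bytes of its own, which is exactly why zeroing the struct
 * is always well defined. */
static struct registry_table *table_alloc(size_t n)
{
	size_t bytes = sizeof(struct registry_table) + n * sizeof(uint32_t);
	struct registry_table *t = malloc(bytes);

	if (t)
		memset(t, 0, bytes);
	return t;
}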
+59 -46
drivers/gpu/nova-core/gsp/fw/r570_144/bindings.rs
··· 320 320 pub const NV_VGPU_MSG_EVENT_NUM_EVENTS: _bindgen_ty_3 = 4131; 321 321 pub type _bindgen_ty_3 = ffi::c_uint; 322 322 #[repr(C)] 323 - #[derive(Debug, Default, Copy, Clone)] 323 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 324 324 pub struct NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS { 325 325 pub totalVFs: u32_, 326 326 pub firstVfOffset: u32_, 327 327 pub vfFeatureMask: u32_, 328 + pub __bindgen_padding_0: [u8; 4usize], 328 329 pub FirstVFBar0Address: u64_, 329 330 pub FirstVFBar1Address: u64_, 330 331 pub FirstVFBar2Address: u64_, ··· 341 340 pub bClientRmAllocatedCtxBuffer: u8_, 342 341 pub bNonPowerOf2ChannelCountSupported: u8_, 343 342 pub bVfResizableBAR1Supported: u8_, 343 + pub __bindgen_padding_1: [u8; 7usize], 344 344 } 345 345 #[repr(C)] 346 - #[derive(Debug, Default, Copy, Clone)] 346 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 347 347 pub struct NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS { 348 348 pub BoardID: u32_, 349 349 pub chipSKU: [ffi::c_char; 9usize], 350 350 pub chipSKUMod: [ffi::c_char; 5usize], 351 + pub __bindgen_padding_0: [u8; 2usize], 351 352 pub skuConfigVersion: u32_, 352 353 pub project: [ffi::c_char; 5usize], 353 354 pub projectSKU: [ffi::c_char; 5usize], 354 355 pub CDP: [ffi::c_char; 6usize], 355 356 pub projectSKUMod: [ffi::c_char; 2usize], 357 + pub __bindgen_padding_1: [u8; 2usize], 356 358 pub businessCycle: u32_, 357 359 } 358 360 pub type NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG = [u8_; 17usize]; 359 361 #[repr(C)] 360 - #[derive(Debug, Default, Copy, Clone)] 362 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 361 363 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO { 362 364 pub base: u64_, 363 365 pub limit: u64_, ··· 372 368 pub blackList: NV2080_CTRL_CMD_FB_GET_FB_REGION_SURFACE_MEM_TYPE_FLAG, 373 369 } 374 370 #[repr(C)] 375 - #[derive(Debug, Default, Copy, Clone)] 371 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 376 372 pub struct NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS { 377 373 pub numFBRegions: u32_, 374 + pub __bindgen_padding_0: [u8; 4usize], 378 375 pub fbRegion: [NV2080_CTRL_CMD_FB_GET_FB_REGION_FB_REGION_INFO; 16usize], 379 376 } 380 377 #[repr(C)] 381 - #[derive(Debug, Copy, Clone)] 378 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 382 379 pub struct NV2080_CTRL_GPU_GET_GID_INFO_PARAMS { 383 380 pub index: u32_, 384 381 pub flags: u32_, ··· 396 391 } 397 392 } 398 393 #[repr(C)] 399 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 394 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 400 395 pub struct DOD_METHOD_DATA { 401 396 pub status: u32_, 402 397 pub acpiIdListLen: u32_, 403 398 pub acpiIdList: [u32_; 16usize], 404 399 } 405 400 #[repr(C)] 406 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 401 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 407 402 pub struct JT_METHOD_DATA { 408 403 pub status: u32_, 409 404 pub jtCaps: u32_, ··· 412 407 pub __bindgen_padding_0: u8, 413 408 } 414 409 #[repr(C)] 415 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 410 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 416 411 pub struct MUX_METHOD_DATA_ELEMENT { 417 412 pub acpiId: u32_, 418 413 pub mode: u32_, 419 414 pub status: u32_, 420 415 } 421 416 #[repr(C)] 422 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 417 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 423 418 pub struct MUX_METHOD_DATA { 424 419 pub tableLen: u32_, 425 420 pub acpiIdMuxModeTable: [MUX_METHOD_DATA_ELEMENT; 16usize], ··· 427 422 pub acpiIdMuxStateTable: 
[MUX_METHOD_DATA_ELEMENT; 16usize], 428 423 } 429 424 #[repr(C)] 430 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 425 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 431 426 pub struct CAPS_METHOD_DATA { 432 427 pub status: u32_, 433 428 pub optimusCaps: u32_, 434 429 } 435 430 #[repr(C)] 436 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 431 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 437 432 pub struct ACPI_METHOD_DATA { 438 433 pub bValid: u8_, 439 434 pub __bindgen_padding_0: [u8; 3usize], ··· 443 438 pub capsMethodData: CAPS_METHOD_DATA, 444 439 } 445 440 #[repr(C)] 446 - #[derive(Debug, Default, Copy, Clone)] 441 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 447 442 pub struct VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS { 448 443 pub headIndex: u32_, 449 444 pub maxHResolution: u32_, 450 445 pub maxVResolution: u32_, 451 446 } 452 447 #[repr(C)] 453 - #[derive(Debug, Default, Copy, Clone)] 448 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 454 449 pub struct VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS { 455 450 pub numHeads: u32_, 456 451 pub maxNumHeads: u32_, 457 452 } 458 453 #[repr(C)] 459 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 454 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 460 455 pub struct BUSINFO { 461 456 pub deviceID: u16_, 462 457 pub vendorID: u16_, ··· 466 461 pub __bindgen_padding_0: u8, 467 462 } 468 463 #[repr(C)] 469 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 464 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 470 465 pub struct GSP_VF_INFO { 471 466 pub totalVFs: u32_, 472 467 pub firstVFOffset: u32_, ··· 479 474 pub __bindgen_padding_0: [u8; 5usize], 480 475 } 481 476 #[repr(C)] 482 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 477 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 483 478 pub struct GSP_PCIE_CONFIG_REG { 484 479 pub linkCap: u32_, 485 480 } 486 481 #[repr(C)] 487 - #[derive(Debug, Default, Copy, Clone)] 482 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 488 483 pub struct EcidManufacturingInfo { 489 484 pub ecidLow: u32_, 490 485 pub ecidHigh: u32_, 491 486 pub ecidExtended: u32_, 492 487 } 493 488 #[repr(C)] 494 - #[derive(Debug, Default, Copy, Clone)] 489 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 495 490 pub struct FW_WPR_LAYOUT_OFFSET { 496 491 pub nonWprHeapOffset: u64_, 497 492 pub frtsOffset: u64_, 498 493 } 499 494 #[repr(C)] 500 - #[derive(Debug, Copy, Clone)] 495 + #[derive(Debug, Copy, Clone, MaybeZeroable)] 501 496 pub struct GspStaticConfigInfo_t { 502 497 pub grCapsBits: [u8_; 23usize], 498 + pub __bindgen_padding_0: u8, 503 499 pub gidInfo: NV2080_CTRL_GPU_GET_GID_INFO_PARAMS, 504 500 pub SKUInfo: NV2080_CTRL_BIOS_GET_SKU_INFO_PARAMS, 501 + pub __bindgen_padding_1: [u8; 4usize], 505 502 pub fbRegionInfoParams: NV2080_CTRL_CMD_FB_GET_FB_REGION_INFO_PARAMS, 506 503 pub sriovCaps: NV0080_CTRL_GPU_GET_SRIOV_CAPS_PARAMS, 507 504 pub sriovMaxGfid: u32_, 508 505 pub engineCaps: [u32_; 3usize], 509 506 pub poisonFuseEnabled: u8_, 507 + pub __bindgen_padding_2: [u8; 7usize], 510 508 pub fb_length: u64_, 511 509 pub fbio_mask: u64_, 512 510 pub fb_bus_width: u32_, ··· 535 527 pub bIsMigSupported: u8_, 536 528 pub RTD3GC6TotalBoardPower: u16_, 537 529 pub RTD3GC6PerstDelay: u16_, 530 + pub __bindgen_padding_3: [u8; 2usize], 538 531 pub bar1PdeBase: u64_, 539 532 pub bar2PdeBase: u64_, 540 533 pub bVbiosValid: u8_, 534 + pub __bindgen_padding_4: [u8; 3usize], 541 535 pub vbiosSubVendor: u32_, 542 536 pub vbiosSubDevice: u32_, 543 537 
pub bPageRetirementSupported: u8_, 544 538 pub bSplitVasBetweenServerClientRm: u8_, 545 539 pub bClRootportNeedsNosnoopWAR: u8_, 540 + pub __bindgen_padding_5: u8, 546 541 pub displaylessMaxHeads: VIRTUAL_DISPLAY_GET_NUM_HEADS_PARAMS, 547 542 pub displaylessMaxResolution: VIRTUAL_DISPLAY_GET_MAX_RESOLUTION_PARAMS, 543 + pub __bindgen_padding_6: [u8; 4usize], 548 544 pub displaylessMaxPixels: u64_, 549 545 pub hInternalClient: u32_, 550 546 pub hInternalDevice: u32_, ··· 570 558 } 571 559 } 572 560 #[repr(C)] 573 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 561 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 574 562 pub struct GspSystemInfo { 575 563 pub gpuPhysAddr: u64_, 576 564 pub gpuPhysFbAddr: u64_, ··· 627 615 pub hostPageSize: u64_, 628 616 } 629 617 #[repr(C)] 630 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 618 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 631 619 pub struct MESSAGE_QUEUE_INIT_ARGUMENTS { 632 620 pub sharedMemPhysAddr: u64_, 633 621 pub pageTableEntryCount: u32_, ··· 636 624 pub statQueueOffset: u64_, 637 625 } 638 626 #[repr(C)] 639 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 627 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 640 628 pub struct GSP_SR_INIT_ARGUMENTS { 641 629 pub oldLevel: u32_, 642 630 pub flags: u32_, ··· 644 632 pub __bindgen_padding_0: [u8; 3usize], 645 633 } 646 634 #[repr(C)] 647 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 635 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 648 636 pub struct GSP_ARGUMENTS_CACHED { 649 637 pub messageQueueInitArguments: MESSAGE_QUEUE_INIT_ARGUMENTS, 650 638 pub srInitArguments: GSP_SR_INIT_ARGUMENTS, ··· 654 642 pub profilerArgs: GSP_ARGUMENTS_CACHED__bindgen_ty_1, 655 643 } 656 644 #[repr(C)] 657 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 645 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 658 646 pub struct GSP_ARGUMENTS_CACHED__bindgen_ty_1 { 659 647 pub pa: u64_, 660 648 pub size: u64_, 661 649 } 662 650 #[repr(C)] 663 - #[derive(Copy, Clone, Zeroable)] 651 + #[derive(Copy, Clone, MaybeZeroable)] 664 652 pub union rpc_message_rpc_union_field_v03_00 { 665 653 pub spare: u32_, 666 654 pub cpuRmGfid: u32_, ··· 676 664 } 677 665 pub type rpc_message_rpc_union_field_v = rpc_message_rpc_union_field_v03_00; 678 666 #[repr(C)] 667 + #[derive(MaybeZeroable)] 679 668 pub struct rpc_message_header_v03_00 { 680 669 pub header_version: u32_, 681 670 pub signature: u32_, ··· 699 686 } 700 687 pub type rpc_message_header_v = rpc_message_header_v03_00; 701 688 #[repr(C)] 702 - #[derive(Copy, Clone, Zeroable)] 689 + #[derive(Copy, Clone, MaybeZeroable)] 703 690 pub struct GspFwWprMeta { 704 691 pub magic: u64_, 705 692 pub revision: u64_, ··· 734 721 pub verified: u64_, 735 722 } 736 723 #[repr(C)] 737 - #[derive(Copy, Clone, Zeroable)] 724 + #[derive(Copy, Clone, MaybeZeroable)] 738 725 pub union GspFwWprMeta__bindgen_ty_1 { 739 726 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_1__bindgen_ty_1, 740 727 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_1__bindgen_ty_2, 741 728 } 742 729 #[repr(C)] 743 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 730 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 744 731 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_1 { 745 732 pub sysmemAddrOfSignature: u64_, 746 733 pub sizeOfSignature: u64_, 747 734 } 748 735 #[repr(C)] 749 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 736 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 750 737 pub struct GspFwWprMeta__bindgen_ty_1__bindgen_ty_2 
{ 751 738 pub gspFwHeapFreeListWprOffset: u32_, 752 739 pub unused0: u32_, ··· 762 749 } 763 750 } 764 751 #[repr(C)] 765 - #[derive(Copy, Clone, Zeroable)] 752 + #[derive(Copy, Clone, MaybeZeroable)] 766 753 pub union GspFwWprMeta__bindgen_ty_2 { 767 754 pub __bindgen_anon_1: GspFwWprMeta__bindgen_ty_2__bindgen_ty_1, 768 755 pub __bindgen_anon_2: GspFwWprMeta__bindgen_ty_2__bindgen_ty_2, 769 756 } 770 757 #[repr(C)] 771 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 758 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 772 759 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_1 { 773 760 pub partitionRpcAddr: u64_, 774 761 pub partitionRpcRequestOffset: u16_, ··· 780 767 pub lsUcodeVersion: u32_, 781 768 } 782 769 #[repr(C)] 783 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 770 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 784 771 pub struct GspFwWprMeta__bindgen_ty_2__bindgen_ty_2 { 785 772 pub partitionRpcPadding: [u32_; 4usize], 786 773 pub sysmemAddrOfCrashReportQueue: u64_, ··· 815 802 pub const LibosMemoryRegionLoc_LIBOS_MEMORY_REGION_LOC_FB: LibosMemoryRegionLoc = 2; 816 803 pub type LibosMemoryRegionLoc = ffi::c_uint; 817 804 #[repr(C)] 818 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 805 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 819 806 pub struct LibosMemoryRegionInitArgument { 820 807 pub id8: LibosAddress, 821 808 pub pa: LibosAddress, ··· 825 812 pub __bindgen_padding_0: [u8; 6usize], 826 813 } 827 814 #[repr(C)] 828 - #[derive(Debug, Default, Copy, Clone)] 815 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 829 816 pub struct PACKED_REGISTRY_ENTRY { 830 817 pub nameOffset: u32_, 831 818 pub type_: u8_, ··· 834 821 pub length: u32_, 835 822 } 836 823 #[repr(C)] 837 - #[derive(Debug, Default)] 824 + #[derive(Debug, Default, MaybeZeroable)] 838 825 pub struct PACKED_REGISTRY_TABLE { 839 826 pub size: u32_, 840 827 pub numEntries: u32_, 841 828 pub entries: __IncompleteArrayField<PACKED_REGISTRY_ENTRY>, 842 829 } 843 830 #[repr(C)] 844 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 831 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 845 832 pub struct msgqTxHeader { 846 833 pub version: u32_, 847 834 pub size: u32_, ··· 853 840 pub entryOff: u32_, 854 841 } 855 842 #[repr(C)] 856 - #[derive(Debug, Default, Copy, Clone, Zeroable)] 843 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 857 844 pub struct msgqRxHeader { 858 845 pub readPtr: u32_, 859 846 } 860 847 #[repr(C)] 861 848 #[repr(align(8))] 862 - #[derive(Zeroable)] 849 + #[derive(MaybeZeroable)] 863 850 pub struct GSP_MSG_QUEUE_ELEMENT { 864 851 pub authTagBuffer: [u8_; 16usize], 865 852 pub aadBuffer: [u8_; 16usize], ··· 879 866 } 880 867 } 881 868 #[repr(C)] 882 - #[derive(Debug, Default)] 869 + #[derive(Debug, Default, MaybeZeroable)] 883 870 pub struct rpc_run_cpu_sequencer_v17_00 { 884 871 pub bufferSizeDWord: u32_, 885 872 pub cmdIndex: u32_, ··· 897 884 pub const GSP_SEQ_BUF_OPCODE_GSP_SEQ_BUF_OPCODE_CORE_RESUME: GSP_SEQ_BUF_OPCODE = 8; 898 885 pub type GSP_SEQ_BUF_OPCODE = ffi::c_uint; 899 886 #[repr(C)] 900 - #[derive(Debug, Default, Copy, Clone)] 887 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 901 888 pub struct GSP_SEQ_BUF_PAYLOAD_REG_WRITE { 902 889 pub addr: u32_, 903 890 pub val: u32_, 904 891 } 905 892 #[repr(C)] 906 - #[derive(Debug, Default, Copy, Clone)] 893 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 907 894 pub struct GSP_SEQ_BUF_PAYLOAD_REG_MODIFY { 908 895 pub addr: u32_, 909 896 pub mask: u32_, 910 897 pub 
val: u32_, 911 898 } 912 899 #[repr(C)] 913 - #[derive(Debug, Default, Copy, Clone)] 900 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 914 901 pub struct GSP_SEQ_BUF_PAYLOAD_REG_POLL { 915 902 pub addr: u32_, 916 903 pub mask: u32_, ··· 919 906 pub error: u32_, 920 907 } 921 908 #[repr(C)] 922 - #[derive(Debug, Default, Copy, Clone)] 909 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 923 910 pub struct GSP_SEQ_BUF_PAYLOAD_DELAY_US { 924 911 pub val: u32_, 925 912 } 926 913 #[repr(C)] 927 - #[derive(Debug, Default, Copy, Clone)] 914 + #[derive(Debug, Default, Copy, Clone, MaybeZeroable)] 928 915 pub struct GSP_SEQ_BUF_PAYLOAD_REG_STORE { 929 916 pub addr: u32_, 930 917 pub index: u32_, 931 918 } 932 919 #[repr(C)] 933 - #[derive(Copy, Clone)] 920 + #[derive(Copy, Clone, MaybeZeroable)] 934 921 pub struct GSP_SEQUENCER_BUFFER_CMD { 935 922 pub opCode: GSP_SEQ_BUF_OPCODE, 936 923 pub payload: GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1, 937 924 } 938 925 #[repr(C)] 939 - #[derive(Copy, Clone)] 926 + #[derive(Copy, Clone, MaybeZeroable)] 940 927 pub union GSP_SEQUENCER_BUFFER_CMD__bindgen_ty_1 { 941 928 pub regWrite: GSP_SEQ_BUF_PAYLOAD_REG_WRITE, 942 929 pub regModify: GSP_SEQ_BUF_PAYLOAD_REG_MODIFY,
+4 -2
drivers/hid/bpf/progs/Makefile
··· 56 56 57 57 %.bpf.o: %.bpf.c vmlinux.h $(BPFOBJ) | $(OUTPUT) 58 58 $(call msg,BPF,$@) 59 - $(Q)$(CLANG) -g -O2 --target=bpf -Wall -Werror $(INCLUDES) \ 60 - -c $(filter %.c,$^) -o $@ && \ 59 + $(Q)$(CLANG) -g -O2 --target=bpf -Wall -Werror $(INCLUDES) \ 60 + -Wno-microsoft-anon-tag \ 61 + -fms-extensions \ 62 + -c $(filter %.c,$^) -o $@ && \ 61 63 $(LLVM_STRIP) -g $@ 62 64 63 65 vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) | $(INCLUDE_DIR)
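Editor's note: the two new clang flags enable a Microsoft C extension the BPF programs appear to rely on — embedding a *named* struct type anonymously so its members are accessed directly on the outer type (ISO C only allows this for unnamed struct definitions); -Wno-microsoft-anon-tag merely silences clang's diagnostic about the tagged form. A minimal sketch of the construct the flags unlock:

/* build: clang -fms-extensions -Wno-microsoft-anon-tag -c anon_tag.c */
struct point {
	int x;
	int y;
};

struct labeled_point {
	struct point;		/* anonymous member *with a tag*: the MS extension */
	const char *label;
};

static int manhattan(const struct labeled_point *p)
{
	return p->x + p->y;	/* struct point's members are lifted into scope */
}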
+13 -2
drivers/hid/hid-elecom.c
··· 77 77 break; 78 78 case USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB: 79 79 case USB_DEVICE_ID_ELECOM_M_XT3URBK_018F: 80 - case USB_DEVICE_ID_ELECOM_M_XT3DRBK: 80 + case USB_DEVICE_ID_ELECOM_M_XT3DRBK_00FC: 81 81 case USB_DEVICE_ID_ELECOM_M_XT4DRBK: 82 82 /* 83 83 * Report descriptor format: ··· 102 102 */ 103 103 mouse_button_fixup(hdev, rdesc, *rsize, 12, 30, 14, 20, 8); 104 104 break; 105 + case USB_DEVICE_ID_ELECOM_M_XT3DRBK_018C: 106 + /* 107 + * Report descriptor format: 108 + * 22: button bit count 109 + * 30: padding bit count 110 + * 24: button report size 111 + * 16: button usage maximum 112 + */ 113 + mouse_button_fixup(hdev, rdesc, *rsize, 22, 30, 24, 16, 6); 114 + break; 105 115 case USB_DEVICE_ID_ELECOM_M_DT2DRBK: 106 116 case USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C: 107 117 /* ··· 132 122 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XGL20DLBK) }, 133 123 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB) }, 134 124 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_018F) }, 135 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK) }, 125 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK_00FC) }, 126 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK_018C) }, 136 127 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 137 128 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 138 129 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) },
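Editor's note: the new 018C entry feeds different byte offsets to mouse_button_fixup(); per the comment, each offset locates a one-byte HID item in the report descriptor (button report count, padding count, report size, usage maximum) whose value byte follows it. A simplified, illustrative sketch of how such an in-place descriptor fixup works — not the driver's exact implementation, which also validates the item tags before patching:

#include <stdint.h>

/* Re-declare the descriptor as having `nbuttons` buttons, moving the
 * freed bits into the padding item so the total report size stays
 * unchanged. Each *_off points at an item byte; its value byte sits
 * at *_off + 1. */
static void button_count_fixup(uint8_t *rdesc, unsigned int rsize,
			       unsigned int button_count_off,
			       unsigned int padding_count_off,
			       unsigned int usage_max_off,
			       int nbuttons)
{
	int freed;

	if (rsize <= button_count_off + 1 || rsize <= padding_count_off + 1 ||
	    rsize <= usage_max_off + 1)
		return;

	freed = rdesc[button_count_off + 1] - nbuttons;
	rdesc[button_count_off + 1] = nbuttons;
	rdesc[usage_max_off + 1] = nbuttons;
	rdesc[padding_count_off + 1] += freed;
}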
+6 -1
drivers/hid/hid-ids.h
··· 317 317 #define USB_DEVICE_ID_CHICONY_ACER_SWITCH12 0x1421 318 318 #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA 0xb824 319 319 #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2 0xb82c 320 + #define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA3 0xb882 320 321 321 322 #define USB_VENDOR_ID_CHUNGHWAT 0x2247 322 323 #define USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH 0x0001 ··· 439 438 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001 440 439 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002 0xc002 441 440 441 + #define USB_VENDOR_ID_EDIFIER 0x2d99 442 + #define USB_DEVICE_ID_EDIFIER_QR30 0xa101 /* EDIFIER Hal0 2.0 SE */ 443 + 442 444 #define USB_VENDOR_ID_ELAN 0x04f3 443 445 #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401 444 446 #define USB_DEVICE_ID_HP_X2 0x074d ··· 455 451 #define USB_DEVICE_ID_ELECOM_M_XGL20DLBK 0x00e6 456 452 #define USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB 0x00fb 457 453 #define USB_DEVICE_ID_ELECOM_M_XT3URBK_018F 0x018f 458 - #define USB_DEVICE_ID_ELECOM_M_XT3DRBK 0x00fc 454 + #define USB_DEVICE_ID_ELECOM_M_XT3DRBK_00FC 0x00fc 455 + #define USB_DEVICE_ID_ELECOM_M_XT3DRBK_018C 0x018c 459 456 #define USB_DEVICE_ID_ELECOM_M_XT4DRBK 0x00fd 460 457 #define USB_DEVICE_ID_ELECOM_M_DT1URBK 0x00fe 461 458 #define USB_DEVICE_ID_ELECOM_M_DT1DRBK 0x00ff
+2
drivers/hid/hid-logitech-hidpp.c
··· 4662 4662 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb025) }, 4663 4663 { /* MX Master 3S mouse over Bluetooth */ 4664 4664 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb034) }, 4665 + { /* MX Anywhere 3S mouse over Bluetooth */ 4666 + HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb037) }, 4665 4667 { /* MX Anywhere 3SB mouse over Bluetooth */ 4666 4668 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb038) }, 4667 4669 {}
+12 -1
drivers/hid/hid-multitouch.c
··· 81 81 #define MT_INPUTMODE_TOUCHPAD 0x03 82 82 83 83 #define MT_BUTTONTYPE_CLICKPAD 0 84 + #define MT_BUTTONTYPE_PRESSUREPAD 1 84 85 85 86 enum latency_mode { 86 87 HID_LATENCY_NORMAL = 0, ··· 180 179 __u8 inputmode_value; /* InputMode HID feature value */ 181 180 __u8 maxcontacts; 182 181 bool is_buttonpad; /* is this device a button pad? */ 182 + bool is_pressurepad; /* is this device a pressurepad? */ 183 183 bool is_haptic_touchpad; /* is this device a haptic touchpad? */ 184 184 bool serial_maybe; /* need to check for serial protocol */ 185 185 ··· 395 393 { .name = MT_CLS_VTL, 396 394 .quirks = MT_QUIRK_ALWAYS_VALID | 397 395 MT_QUIRK_CONTACT_CNT_ACCURATE | 396 + MT_QUIRK_STICKY_FINGERS | 398 397 MT_QUIRK_FORCE_GET_FEATURE, 399 398 }, 400 399 { .name = MT_CLS_GOOGLE, ··· 533 530 } 534 531 535 532 mt_get_feature(hdev, field->report); 536 - if (field->value[usage->usage_index] == MT_BUTTONTYPE_CLICKPAD) 533 + switch (field->value[usage->usage_index]) { 534 + case MT_BUTTONTYPE_CLICKPAD: 537 535 td->is_buttonpad = true; 536 + break; 537 + case MT_BUTTONTYPE_PRESSUREPAD: 538 + td->is_pressurepad = true; 539 + break; 540 + } 538 541 539 542 break; 540 543 case 0xff0000c5: ··· 1402 1393 1403 1394 if (td->is_buttonpad) 1404 1395 __set_bit(INPUT_PROP_BUTTONPAD, input->propbit); 1396 + if (td->is_pressurepad) 1397 + __set_bit(INPUT_PROP_PRESSUREPAD, input->propbit); 1405 1398 1406 1399 app->pending_palm_slots = devm_kcalloc(&hi->input->dev, 1407 1400 BITS_TO_LONGS(td->maxcontacts),
+5
drivers/hid/hid-playstation.c
··· 753 753 if (IS_ERR(gamepad)) 754 754 return ERR_CAST(gamepad); 755 755 756 + /* Set initial resting state for joysticks to 128 (center) */ 756 757 input_set_abs_params(gamepad, ABS_X, 0, 255, 0, 0); 758 + gamepad->absinfo[ABS_X].value = 128; 757 759 input_set_abs_params(gamepad, ABS_Y, 0, 255, 0, 0); 760 + gamepad->absinfo[ABS_Y].value = 128; 758 761 input_set_abs_params(gamepad, ABS_Z, 0, 255, 0, 0); 759 762 input_set_abs_params(gamepad, ABS_RX, 0, 255, 0, 0); 763 + gamepad->absinfo[ABS_RX].value = 128; 760 764 input_set_abs_params(gamepad, ABS_RY, 0, 255, 0, 0); 765 + gamepad->absinfo[ABS_RY].value = 128; 761 766 input_set_abs_params(gamepad, ABS_RZ, 0, 255, 0, 0); 762 767 763 768 input_set_abs_params(gamepad, ABS_HAT0X, -1, 1, 0, 0);
+13 -1
drivers/hid/hid-quirks.c
··· 81 81 { HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_PS3), HID_QUIRK_MULTI_INPUT }, 82 82 { HID_USB_DEVICE(USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_WIIU), HID_QUIRK_MULTI_INPUT }, 83 83 { HID_USB_DEVICE(USB_VENDOR_ID_DWAV, USB_DEVICE_ID_EGALAX_TOUCHCONTROLLER), HID_QUIRK_MULTI_INPUT | HID_QUIRK_NOGET }, 84 + { HID_USB_DEVICE(USB_VENDOR_ID_EDIFIER, USB_DEVICE_ID_EDIFIER_QR30), HID_QUIRK_ALWAYS_POLL }, 84 85 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, HID_ANY_ID), HID_QUIRK_ALWAYS_POLL }, 85 86 { HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700), HID_QUIRK_NOGET }, 86 87 { HID_USB_DEVICE(USB_VENDOR_ID_EMS, USB_DEVICE_ID_EMS_TRIO_LINKER_PLUS_II), HID_QUIRK_MULTI_INPUT }, ··· 233 232 * used as a driver. See hid_scan_report(). 234 233 */ 235 234 static const struct hid_device_id hid_have_special_driver[] = { 235 + #if IS_ENABLED(CONFIG_APPLEDISPLAY) 236 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x9218) }, 237 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x9219) }, 238 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x921c) }, 239 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x921d) }, 240 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x9222) }, 241 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x9226) }, 242 + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, 0x9236) }, 243 + #endif 236 244 #if IS_ENABLED(CONFIG_HID_A4TECH) 237 245 { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_WCP32PU) }, 238 246 { HID_USB_DEVICE(USB_VENDOR_ID_A4TECH, USB_DEVICE_ID_A4TECH_X5_005D) }, ··· 422 412 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XGL20DLBK) }, 423 413 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB) }, 424 414 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_018F) }, 425 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK) }, 415 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK_00FC) }, 416 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK_018C) }, 426 417 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT4DRBK) }, 427 418 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 428 419 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, ··· 780 769 { HID_USB_DEVICE(USB_VENDOR_ID_BERKSHIRE, USB_DEVICE_ID_BERKSHIRE_PCWD) }, 781 770 { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA) }, 782 771 { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2) }, 772 + { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA3) }, 783 773 { HID_USB_DEVICE(USB_VENDOR_ID_CIDC, 0x0103) }, 784 774 { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI470X) }, 785 775 { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI4713) },
+1
drivers/hid/i2c-hid/i2c-hid-core.c
··· 286 286 * In addition to report data device will supply data length
287 287 * in the first 2 bytes of the response, so adjust.
288 288 */
289 + recv_len = min(recv_len, ihid->bufsize - sizeof(__le16));
289 290 error = i2c_hid_xfer(ihid, ihid->cmdbuf, length,
290 291 ihid->rawbuf, recv_len + sizeof(__le16));
291 292 if (error) {
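Editor's note: the added min() bounds a device-controlled value — the device reports its own data length in the first two bytes of the response, and without the clamp a misbehaving device could claim more data than ihid->rawbuf was allocated for. The general pattern, sketched with illustrative names:

#include <stddef.h>
#include <stdint.h>

/* Never trust a length the peripheral supplies: cap it at what the
 * receive buffer can hold after the 2-byte length prefix. */
static size_t clamp_recv_len(size_t requested, size_t bufsize)
{
	size_t max_payload = bufsize - sizeof(uint16_t);

	return requested < max_payload ? requested : max_payload;
}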
+1
drivers/hid/intel-ish-hid/ishtp-hid-client.c
··· 495 495 int rv; 496 496 497 497 /* Send HOSTIF_DM_ENUM_DEVICES */ 498 + client_data->enum_devices_done = false; 498 499 memset(&msg, 0, sizeof(struct hostif_msg)); 499 500 msg.hdr.command = HOSTIF_DM_ENUM_DEVICES; 500 501 rv = ishtp_cl_send(hid_ishtp_cl, (unsigned char *)&msg,
+10 -2
drivers/hid/intel-ish-hid/ishtp/bus.c
··· 240 240 { 241 241 struct ishtp_cl_device *device = to_ishtp_cl_device(dev); 242 242 struct ishtp_cl_driver *driver = to_ishtp_cl_driver(drv); 243 + struct ishtp_fw_client *client = device->fw_client; 244 + const struct ishtp_device_id *id; 243 245 244 - return(device->fw_client ? guid_equal(&driver->id[0].guid, 245 - &device->fw_client->props.protocol_name) : 0); 246 + if (client) { 247 + for (id = driver->id; !guid_is_null(&id->guid); id++) { 248 + if (guid_equal(&id->guid, &client->props.protocol_name)) 249 + return 1; 250 + } 251 + } 252 + 253 + return 0; 246 254 } 247 255 248 256 /**
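Editor's note: the old match callback compared only driver->id[0], so a driver advertising several protocol GUIDs never matched on the later entries; the fix walks the table to its sentinel. A sketch of the convention, assuming (as ISHTP does here) that id tables are terminated by a zeroed GUID entry:

#include <linux/types.h>
#include <linux/uuid.h>

struct proto_id {
	guid_t guid;
};

/* Return true if `needle` appears anywhere in a table terminated by a
 * null GUID. */
static bool proto_id_match(const struct proto_id *table, const guid_t *needle)
{
	const struct proto_id *id;

	for (id = table; !guid_is_null(&id->guid); id++)
		if (guid_equal(&id->guid, needle))
			return true;

	return false;
}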
+1
drivers/hid/intel-thc-hid/Kconfig
··· 7 7 config INTEL_THC_HID 8 8 tristate "Intel Touch Host Controller" 9 9 depends on ACPI 10 + select SGL_ALLOC 10 11 help 11 12 THC (Touch Host Controller) is the name of the IP block in PCH that 12 13 interfaces with Touch Devices (ex: touchscreen, touchpad etc.). It
+2 -2
drivers/hid/intel-thc-hid/intel-thc/intel-thc-dev.c
··· 1593 1593 if (!max_rx_size) 1594 1594 return -EOPNOTSUPP; 1595 1595 1596 - ret = regmap_read(dev->thc_regmap, THC_M_PRT_SW_SEQ_STS_OFFSET, &val); 1596 + ret = regmap_read(dev->thc_regmap, THC_M_PRT_SPI_ICRRD_OPCODE_OFFSET, &val); 1597 1597 if (ret) 1598 1598 return ret; 1599 1599 ··· 1662 1662 if (!delay_us) 1663 1663 return -EOPNOTSUPP; 1664 1664 1665 - ret = regmap_read(dev->thc_regmap, THC_M_PRT_SW_SEQ_STS_OFFSET, &val); 1665 + ret = regmap_read(dev->thc_regmap, THC_M_PRT_SPI_ICRRD_OPCODE_OFFSET, &val); 1666 1666 if (ret) 1667 1667 return ret; 1668 1668
+8 -1
drivers/hid/intel-thc-hid/intel-thc/intel-thc-dma.c
··· 232 232 return 0; 233 233 234 234 memset(config->sgls, 0, sizeof(config->sgls)); 235 + memset(config->sgls_nent_pages, 0, sizeof(config->sgls_nent_pages)); 235 236 memset(config->sgls_nent, 0, sizeof(config->sgls_nent)); 236 237 237 238 cpu_addr = dma_alloc_coherent(dev->dev, prd_tbls_size, ··· 255 254 } 256 255 count = dma_map_sg(dev->dev, config->sgls[i], nent, dir); 257 256 257 + config->sgls_nent_pages[i] = nent; 258 258 config->sgls_nent[i] = count; 259 259 } 260 260 ··· 301 299 continue; 302 300 303 301 dma_unmap_sg(dev->dev, config->sgls[i], 304 - config->sgls_nent[i], 302 + config->sgls_nent_pages[i], 305 303 config->dir); 306 304 307 305 sgl_free(config->sgls[i]); ··· 572 570 573 571 if (prd_table_index >= read_config->prd_tbl_num) { 574 572 dev_err_once(dev->dev, "PRD table index %d too big\n", prd_table_index); 573 + return -EINVAL; 574 + } 575 + 576 + if (!read_config->prd_tbls || !read_config->sgls[prd_table_index]) { 577 + dev_err_once(dev->dev, "PRD tables are not ready yet\n"); 575 578 return -EINVAL; 576 579 } 577 580
+2
drivers/hid/intel-thc-hid/intel-thc/intel-thc-dma.h
··· 91 91 * @dir: Direction of DMA for this config 92 92 * @prd_tbls: PRD tables for current DMA 93 93 * @sgls: Array of pointers to scatter-gather lists 94 + * @sgls_nent_pages: Number of pages per scatter-gather list 94 95 * @sgls_nent: Actual number of entries per scatter-gather list 95 96 * @prd_tbl_num: Actual number of PRD tables 96 97 * @max_packet_size: Size of the buffer needed for 1 DMA message (1 PRD table) ··· 108 107 109 108 struct thc_prd_table *prd_tbls; 110 109 struct scatterlist *sgls[PRD_TABLES_NUM]; 110 + u8 sgls_nent_pages[PRD_TABLES_NUM]; 111 111 u8 sgls_nent[PRD_TABLES_NUM]; 112 112 u8 prd_tbl_num; 113 113
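Editor's note: the new sgls_nent_pages field exists because the DMA API is asymmetric — dma_map_sg() takes the number of scatterlist entries and may return a smaller mapped count after IOMMU coalescing, but dma_unmap_sg() must be called with the original entry count, never the returned one (the old code unmapped with the mapped count). The correct pairing, sketched:

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

struct sg_buf {
	struct scatterlist *sgl;
	int nents;	/* entries handed to dma_map_sg() */
	int mapped;	/* (possibly coalesced) count it returned */
};

static int sg_buf_map(struct device *dev, struct sg_buf *b,
		      enum dma_data_direction dir)
{
	b->mapped = dma_map_sg(dev, b->sgl, b->nents, dir);
	return b->mapped ? 0 : -ENOMEM;
}

static void sg_buf_unmap(struct device *dev, struct sg_buf *b,
			 enum dma_data_direction dir)
{
	/* Unmap with the original nents, not b->mapped. */
	dma_unmap_sg(dev, b->sgl, b->nents, dir);
}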
+16 -1
drivers/hid/usbhid/hid-core.c
··· 985 985 struct usb_device *dev = interface_to_usbdev (intf); 986 986 struct hid_descriptor *hdesc; 987 987 struct hid_class_descriptor *hcdesc; 988 + __u8 fixed_opt_descriptors_size; 988 989 u32 quirks = 0; 989 990 unsigned int rsize = 0; 990 991 char *rdesc; ··· 1016 1015 (hdesc->bNumDescriptors - 1) * sizeof(*hcdesc)) { 1017 1016 dbg_hid("hid descriptor invalid, bLen=%hhu bNum=%hhu\n", 1018 1017 hdesc->bLength, hdesc->bNumDescriptors); 1019 - return -EINVAL; 1018 + 1019 + /* 1020 + * Some devices may expose a wrong number of descriptors compared 1021 + * to the provided length. 1022 + * However, we ignore the optional hid class descriptors entirely 1023 + * so we can safely recompute the proper field. 1024 + */ 1025 + if (hdesc->bLength >= sizeof(*hdesc)) { 1026 + fixed_opt_descriptors_size = hdesc->bLength - sizeof(*hdesc); 1027 + 1028 + hid_warn(intf, "fixing wrong optional hid class descriptors count\n"); 1029 + hdesc->bNumDescriptors = fixed_opt_descriptors_size / sizeof(*hcdesc) + 1; 1030 + } else { 1031 + return -EINVAL; 1032 + } 1020 1033 } 1021 1034 1022 1035 hid->version = le16_to_cpu(hdesc->bcdHID);
+1 -1
drivers/iommu/generic_pt/.kunitconfig
··· 1 1 CONFIG_KUNIT=y 2 + CONFIG_COMPILE_TEST=y 2 3 CONFIG_GENERIC_PT=y 3 4 CONFIG_DEBUG_GENERIC_PT=y 4 5 CONFIG_IOMMU_PT=y ··· 12 11 CONFIG_DEBUG_KERNEL=y 13 12 CONFIG_FAULT_INJECTION=y 14 13 CONFIG_RUNTIME_TESTING_MENU=y 15 - CONFIG_IOMMUFD_TEST=y
+2 -2
drivers/iommu/generic_pt/pt_defs.h
··· 202 202 203 203 #define PT_SUPPORTED_FEATURE(feature_nr) (PT_SUPPORTED_FEATURES & BIT(feature_nr)) 204 204 205 - static inline bool pt_feature(const struct pt_common *common, 205 + static __always_inline bool pt_feature(const struct pt_common *common, 206 206 unsigned int feature_nr) 207 207 { 208 208 if (PT_FORCE_ENABLED_FEATURES & BIT(feature_nr)) ··· 212 212 return common->features & BIT(feature_nr); 213 213 } 214 214 215 - static inline bool pts_feature(const struct pt_state *pts, 215 + static __always_inline bool pts_feature(const struct pt_state *pts, 216 216 unsigned int feature_nr) 217 217 { 218 218 return pt_feature(pts->range->common, feature_nr);
+2 -1
drivers/iommu/iommufd/Kconfig
··· 41 41 depends on DEBUG_KERNEL 42 42 depends on FAULT_INJECTION 43 43 depends on RUNTIME_TESTING_MENU 44 - depends on IOMMU_PT_AMDV1 44 + depends on IOMMU_PT_AMDV1=y || IOMMUFD=IOMMU_PT_AMDV1 45 + select DMA_SHARED_BUFFER 45 46 select IOMMUFD_DRIVER 46 47 default n 47 48 help
+1 -1
drivers/irqchip/irq-gic-v5-its.c
··· 849 849 850 850 itte = gicv5_its_device_get_itte_ref(its_dev, event_id); 851 851 852 - if (FIELD_GET(GICV5_ITTL2E_VALID, *itte)) 852 + if (FIELD_GET(GICV5_ITTL2E_VALID, le64_to_cpu(*itte))) 853 853 return -EEXIST; 854 854 855 855 itt_entry = FIELD_PREP(GICV5_ITTL2E_LPI_ID, lpi) |
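Editor's note: this one-liner is an endianness fix — the ITT lives in memory shared with the interrupt controller in fixed little-endian layout, so the __le64 entry must be converted to CPU order before FIELD_GET() extracts the valid bit. Little-endian hosts happened to work either way; big-endian ones silently misread the table. The pattern:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define ENTRY_VALID	BIT_ULL(0)

/* Device-endian tables: always convert before extracting fields. */
static bool entry_is_valid(const __le64 *entry)
{
	return FIELD_GET(ENTRY_VALID, le64_to_cpu(*entry));
}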
+8 -2
drivers/irqchip/irq-riscv-imsic-state.c
··· 477 477 lpriv = per_cpu_ptr(imsic->lpriv, cpu); 478 478 479 479 bitmap_free(lpriv->dirty_bitmap); 480 + kfree(lpriv->vectors); 480 481 } 481 482 482 483 free_percpu(imsic->lpriv); ··· 491 490 int cpu, i; 492 491 493 492 /* Allocate per-CPU private state */ 494 - imsic->lpriv = __alloc_percpu(struct_size(imsic->lpriv, vectors, global->nr_ids + 1), 495 - __alignof__(*imsic->lpriv)); 493 + imsic->lpriv = alloc_percpu(typeof(*imsic->lpriv)); 496 494 if (!imsic->lpriv) 497 495 return -ENOMEM; 498 496 ··· 510 510 /* Setup lazy timer for synchronization */ 511 511 timer_setup(&lpriv->timer, imsic_local_timer_callback, TIMER_PINNED); 512 512 #endif 513 + 514 + /* Allocate vector array */ 515 + lpriv->vectors = kcalloc(global->nr_ids + 1, sizeof(*lpriv->vectors), 516 + GFP_KERNEL); 517 + if (!lpriv->vectors) 518 + goto fail_local_cleanup; 513 519 514 520 /* Setup vector array */ 515 521 for (i = 0; i <= global->nr_ids; i++) {
+1 -1
drivers/irqchip/irq-riscv-imsic-state.h
··· 40 40 #endif 41 41 42 42 /* Local vector table */ 43 - struct imsic_vector vectors[]; 43 + struct imsic_vector *vectors; 44 44 }; 45 45 46 46 struct imsic_priv {
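Editor's note: the allocation strategy changes from one percpu object with a flexible vectors[] tail — whose size scales with nr_ids and can strain the percpu allocator — to a fixed-size percpu struct plus an ordinary kcalloc() per CPU, freed symmetrically on teardown. A sketch of the pattern with illustrative struct names:

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/slab.h>

struct vec {
	unsigned int hwirq;
};

struct local_priv {
	/* fixed-size per-CPU state ... */
	struct vec *vectors;	/* a pointer now, not a flexible array */
};

static struct local_priv __percpu *lpriv_alloc(unsigned int nr_ids)
{
	struct local_priv __percpu *lpriv = alloc_percpu(struct local_priv);
	int cpu;

	if (!lpriv)
		return NULL;

	for_each_possible_cpu(cpu) {
		struct local_priv *p = per_cpu_ptr(lpriv, cpu);

		p->vectors = kcalloc(nr_ids + 1, sizeof(*p->vectors),
				     GFP_KERNEL);
		if (!p->vectors)
			goto fail;
	}

	return lpriv;

fail:
	for_each_possible_cpu(cpu)
		kfree(per_cpu_ptr(lpriv, cpu)->vectors);
	free_percpu(lpriv);
	return NULL;
}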
+3 -3
drivers/media/mc/mc-request.c
··· 315 315 316 316 fd_prepare_file(fdf)->private_data = req; 317 317 318 - *alloc_fd = fd_publish(fdf); 319 - 320 318 snprintf(req->debug_str, sizeof(req->debug_str), "%u:%d", 321 - atomic_inc_return(&mdev->request_id), *alloc_fd); 319 + atomic_inc_return(&mdev->request_id), fd_prepare_fd(fdf)); 322 320 dev_dbg(mdev->dev, "request: allocated %s\n", req->debug_str); 321 + 322 + *alloc_fd = fd_publish(fdf); 323 323 324 324 return 0; 325 325
+2
drivers/misc/mei/hw-me-regs.h
··· 122 122 123 123 #define MEI_DEV_ID_WCL_P 0x4D70 /* Wildcat Lake P */ 124 124 125 + #define MEI_DEV_ID_NVL_S 0x6E68 /* Nova Lake Point S */ 126 + 125 127 /* 126 128 * MEI HW Section 127 129 */
+2
drivers/misc/mei/pci-me.c
··· 129 129 130 130 {MEI_PCI_DEVICE(MEI_DEV_ID_WCL_P, MEI_ME_PCH15_CFG)}, 131 131 132 + {MEI_PCI_DEVICE(MEI_DEV_ID_NVL_S, MEI_ME_PCH15_CFG)}, 133 + 132 134 /* required last entry */ 133 135 {0, } 134 136 };
+1 -5
drivers/misc/rp1/Kconfig
··· 5 5 6 6 config MISC_RP1 7 7 tristate "RaspberryPi RP1 misc device" 8 - depends on OF_IRQ && OF_OVERLAY && PCI_MSI && PCI_QUIRKS 9 - select PCI_DYNAMIC_OF_NODES 8 + depends on OF_IRQ && PCI_MSI 10 9 help 11 10 Support the RP1 peripheral chip found on Raspberry Pi 5 board. 12 11 ··· 14 15 15 16 The driver is responsible for enabling the DT node once the PCIe 16 17 endpoint has been configured, and handling interrupts. 17 - 18 - This driver uses an overlay to load other drivers to support for 19 - RP1 internal sub-devices.
+1 -2
drivers/misc/rp1/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_MISC_RP1) += rp1-pci.o 3 - rp1-pci-objs := rp1_pci.o rp1-pci.dtbo.o 2 + obj-$(CONFIG_MISC_RP1) += rp1_pci.o
-25
drivers/misc/rp1/rp1-pci.dtso
··· 1 - // SPDX-License-Identifier: (GPL-2.0 OR MIT) 2 - 3 - /* 4 - * The dts overlay is included from the dts directory so 5 - * it can be possible to check it with CHECK_DTBS while 6 - * also compile it from the driver source directory. 7 - */ 8 - 9 - /dts-v1/; 10 - /plugin/; 11 - 12 - / { 13 - fragment@0 { 14 - target-path=""; 15 - __overlay__ { 16 - compatible = "pci1de4,1"; 17 - #address-cells = <3>; 18 - #size-cells = <2>; 19 - interrupt-controller; 20 - #interrupt-cells = <2>; 21 - 22 - #include "arm64/broadcom/rp1-common.dtsi" 23 - }; 24 - }; 25 - };
+4 -33
drivers/misc/rp1/rp1_pci.c
··· 34 34 /* Interrupts */ 35 35 #define RP1_INT_END 61 36 36 37 - /* Embedded dtbo symbols created by cmd_wrap_S_dtb in scripts/Makefile.lib */ 38 - extern char __dtbo_rp1_pci_begin[]; 39 - extern char __dtbo_rp1_pci_end[]; 40 - 41 37 struct rp1_dev { 42 38 struct pci_dev *pdev; 43 39 struct irq_domain *domain; 44 40 struct irq_data *pcie_irqds[64]; 45 41 void __iomem *bar1; 46 - int ovcs_id; /* overlay changeset id */ 47 42 bool level_triggered_irq[RP1_INT_END]; 48 43 }; 49 44 ··· 179 184 180 185 static int rp1_probe(struct pci_dev *pdev, const struct pci_device_id *id) 181 186 { 182 - u32 dtbo_size = __dtbo_rp1_pci_end - __dtbo_rp1_pci_begin; 183 - void *dtbo_start = __dtbo_rp1_pci_begin; 184 187 struct device *dev = &pdev->dev; 185 188 struct device_node *rp1_node; 186 - bool skip_ovl = true; 187 189 struct rp1_dev *rp1; 188 190 int err = 0; 189 191 int i; 190 192 191 - /* 192 - * Either use rp1_nexus node if already present in DT, or 193 - * set a flag to load it from overlay at runtime 194 - */ 195 - rp1_node = of_find_node_by_name(NULL, "rp1_nexus"); 196 - if (!rp1_node) { 197 - rp1_node = dev_of_node(dev); 198 - skip_ovl = false; 199 - } 193 + rp1_node = dev_of_node(dev); 200 194 201 195 if (!rp1_node) { 202 196 dev_err(dev, "Missing of_node for device\n"); ··· 260 276 rp1_chained_handle_irq, rp1); 261 277 } 262 278 263 - if (!skip_ovl) { 264 - err = of_overlay_fdt_apply(dtbo_start, dtbo_size, &rp1->ovcs_id, 265 - rp1_node); 266 - if (err) 267 - goto err_unregister_interrupts; 268 - } 269 - 270 279 err = of_platform_default_populate(rp1_node, NULL, dev); 271 280 if (err) { 272 281 dev_err_probe(&pdev->dev, err, "Error populating devicetree\n"); 273 - goto err_unload_overlay; 282 + goto err_unregister_interrupts; 274 283 } 275 284 276 - if (skip_ovl) 277 - of_node_put(rp1_node); 285 + of_node_put(rp1_node); 278 286 279 287 return 0; 280 288 281 - err_unload_overlay: 282 - of_overlay_remove(&rp1->ovcs_id); 283 289 err_unregister_interrupts: 284 290 rp1_unregister_interrupts(pdev); 285 291 err_put_node: 286 - if (skip_ovl) 287 - of_node_put(rp1_node); 292 + of_node_put(rp1_node); 288 293 289 294 return err; 290 295 } 291 296 292 297 static void rp1_remove(struct pci_dev *pdev) 293 298 { 294 - struct rp1_dev *rp1 = pci_get_drvdata(pdev); 295 299 struct device *dev = &pdev->dev; 296 300 297 301 of_platform_depopulate(dev); 298 - of_overlay_remove(&rp1->ovcs_id); 299 302 rp1_unregister_interrupts(pdev); 300 303 } 301 304
+1 -1
drivers/mtd/nand/ecc-sw-hamming.c
··· 8 8 * 9 9 * Completely replaces the previous ECC implementation which was written by: 10 10 * Steven J. Hill (sjhill@realitydiluted.com) 11 - * Thomas Gleixner (tglx@linutronix.de) 11 + * Thomas Gleixner (tglx@kernel.org) 12 12 * 13 13 * Information on how this algorithm works and how it was developed 14 14 * can be found in Documentation/driver-api/mtd/nand_ecc.rst
+1 -1
drivers/mtd/nand/raw/diskonchip.c
··· 11 11 * Error correction code lifted from the old docecc code 12 12 * Author: Fabrice Bellard (fabrice.bellard@netgem.com) 13 13 * Copyright (C) 2000 Netgem S.A. 14 - * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@linutronix.de> 14 + * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@kernel.org> 15 15 * 16 16 * Interface to generic NAND code for M-Systems DiskOnChip devices 17 17 */
+2 -2
drivers/mtd/nand/raw/nand_base.c
··· 8 8 * http://www.linux-mtd.infradead.org/doc/nand.html 9 9 * 10 10 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 11 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 11 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 12 12 * 13 13 * Credits: 14 14 * David Woodhouse for adding multichip support ··· 6594 6594 6595 6595 MODULE_LICENSE("GPL"); 6596 6596 MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>"); 6597 - MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>"); 6597 + MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>"); 6598 6598 MODULE_DESCRIPTION("Generic NAND flash driver code");
+1 -1
drivers/mtd/nand/raw/nand_bbt.c
··· 3 3 * Overview: 4 4 * Bad block table support for the NAND driver 5 5 * 6 - * Copyright © 2004 Thomas Gleixner (tglx@linutronix.de) 6 + * Copyright © 2004 Thomas Gleixner (tglx@kernel.org) 7 7 * 8 8 * Description: 9 9 *
+1 -1
drivers/mtd/nand/raw/nand_ids.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (C) 2002 Thomas Gleixner (tglx@linutronix.de) 3 + * Copyright (C) 2002 Thomas Gleixner (tglx@kernel.org) 4 4 */ 5 5 6 6 #include <linux/sizes.h>
+1 -1
drivers/mtd/nand/raw/nand_jedec.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 4 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 4 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 5 5 * 6 6 * Credits: 7 7 * David Woodhouse for adding multichip support
+1 -1
drivers/mtd/nand/raw/nand_legacy.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 4 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 4 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 5 5 * 6 6 * Credits: 7 7 * David Woodhouse for adding multichip support
+1 -1
drivers/mtd/nand/raw/nand_onfi.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 3 * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com) 4 - * 2002-2006 Thomas Gleixner (tglx@linutronix.de) 4 + * 2002-2006 Thomas Gleixner (tglx@kernel.org) 5 5 * 6 6 * Credits: 7 7 * David Woodhouse for adding multichip support
+1 -1
drivers/mtd/nand/raw/ndfc.c
··· 272 272 module_platform_driver(ndfc_driver); 273 273 274 274 MODULE_LICENSE("GPL"); 275 - MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>"); 275 + MODULE_AUTHOR("Thomas Gleixner <tglx@kernel.org>"); 276 276 MODULE_DESCRIPTION("OF Platform driver for NDFC");
-23
drivers/net/dsa/mv88e6xxx/chip.c
··· 3364 3364 3365 3365 static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port) 3366 3366 { 3367 - struct device_node *phy_handle = NULL; 3368 3367 struct fwnode_handle *ports_fwnode; 3369 3368 struct fwnode_handle *port_fwnode; 3370 3369 struct dsa_switch *ds = chip->ds; 3371 3370 struct mv88e6xxx_port *p; 3372 - struct dsa_port *dp; 3373 - int tx_amp; 3374 3371 int err; 3375 3372 u16 reg; 3376 3373 u32 val; ··· 3577 3580 err = chip->info->ops->port_setup_message_port(chip, port); 3578 3581 if (err) 3579 3582 return err; 3580 - } 3581 - 3582 - if (chip->info->ops->serdes_set_tx_amplitude) { 3583 - dp = dsa_to_port(ds, port); 3584 - if (dp) 3585 - phy_handle = of_parse_phandle(dp->dn, "phy-handle", 0); 3586 - 3587 - if (phy_handle && !of_property_read_u32(phy_handle, 3588 - "tx-p2p-microvolt", 3589 - &tx_amp)) 3590 - err = chip->info->ops->serdes_set_tx_amplitude(chip, 3591 - port, tx_amp); 3592 - if (phy_handle) { 3593 - of_node_put(phy_handle); 3594 - if (err) 3595 - return err; 3596 - } 3597 3583 } 3598 3584 3599 3585 /* Port based VLAN map: give each port the same default address ··· 4748 4768 .serdes_irq_mapping = mv88e6352_serdes_irq_mapping, 4749 4769 .serdes_get_regs_len = mv88e6352_serdes_get_regs_len, 4750 4770 .serdes_get_regs = mv88e6352_serdes_get_regs, 4751 - .serdes_set_tx_amplitude = mv88e6352_serdes_set_tx_amplitude, 4752 4771 .gpio_ops = &mv88e6352_gpio_ops, 4753 4772 .phylink_get_caps = mv88e6352_phylink_get_caps, 4754 4773 .pcs_ops = &mv88e6352_pcs_ops, ··· 5023 5044 .serdes_irq_mapping = mv88e6352_serdes_irq_mapping, 5024 5045 .serdes_get_regs_len = mv88e6352_serdes_get_regs_len, 5025 5046 .serdes_get_regs = mv88e6352_serdes_get_regs, 5026 - .serdes_set_tx_amplitude = mv88e6352_serdes_set_tx_amplitude, 5027 5047 .gpio_ops = &mv88e6352_gpio_ops, 5028 5048 .avb_ops = &mv88e6352_avb_ops, 5029 5049 .ptp_ops = &mv88e6352_ptp_ops, ··· 5459 5481 .serdes_get_stats = mv88e6352_serdes_get_stats, 5460 5482 .serdes_get_regs_len = mv88e6352_serdes_get_regs_len, 5461 5483 .serdes_get_regs = mv88e6352_serdes_get_regs, 5462 - .serdes_set_tx_amplitude = mv88e6352_serdes_set_tx_amplitude, 5463 5484 .phylink_get_caps = mv88e6352_phylink_get_caps, 5464 5485 .pcs_ops = &mv88e6352_pcs_ops, 5465 5486 };
-4
drivers/net/dsa/mv88e6xxx/chip.h
··· 642 642 void (*serdes_get_regs)(struct mv88e6xxx_chip *chip, int port, 643 643 void *_p); 644 644 645 - /* SERDES SGMII/Fiber Output Amplitude */ 646 - int (*serdes_set_tx_amplitude)(struct mv88e6xxx_chip *chip, int port, 647 - int val); 648 - 649 645 /* Address Translation Unit operations */ 650 646 int (*atu_get_hash)(struct mv88e6xxx_chip *chip, u8 *hash); 651 647 int (*atu_set_hash)(struct mv88e6xxx_chip *chip, u8 hash);
-46
drivers/net/dsa/mv88e6xxx/serdes.c
··· 25 25 reg, val); 26 26 } 27 27 28 - static int mv88e6352_serdes_write(struct mv88e6xxx_chip *chip, int reg, 29 - u16 val) 30 - { 31 - return mv88e6xxx_phy_page_write(chip, MV88E6352_ADDR_SERDES, 32 - MV88E6352_SERDES_PAGE_FIBER, 33 - reg, val); 34 - } 35 - 36 28 static int mv88e6390_serdes_read(struct mv88e6xxx_chip *chip, 37 29 int lane, int device, int reg, u16 *val) 38 30 { ··· 497 505 if (!err) 498 506 p[i] = reg; 499 507 } 500 - } 501 - 502 - static const int mv88e6352_serdes_p2p_to_reg[] = { 503 - /* Index of value in microvolts corresponds to the register value */ 504 - 14000, 112000, 210000, 308000, 406000, 504000, 602000, 700000, 505 - }; 506 - 507 - int mv88e6352_serdes_set_tx_amplitude(struct mv88e6xxx_chip *chip, int port, 508 - int val) 509 - { 510 - bool found = false; 511 - u16 ctrl, reg; 512 - int err; 513 - int i; 514 - 515 - err = mv88e6352_g2_scratch_port_has_serdes(chip, port); 516 - if (err <= 0) 517 - return err; 518 - 519 - for (i = 0; i < ARRAY_SIZE(mv88e6352_serdes_p2p_to_reg); ++i) { 520 - if (mv88e6352_serdes_p2p_to_reg[i] == val) { 521 - reg = i; 522 - found = true; 523 - break; 524 - } 525 - } 526 - 527 - if (!found) 528 - return -EINVAL; 529 - 530 - err = mv88e6352_serdes_read(chip, MV88E6352_SERDES_SPEC_CTRL2, &ctrl); 531 - if (err) 532 - return err; 533 - 534 - ctrl &= ~MV88E6352_SERDES_OUT_AMP_MASK; 535 - ctrl |= reg; 536 - 537 - return mv88e6352_serdes_write(chip, MV88E6352_SERDES_SPEC_CTRL2, ctrl); 538 508 }
-5
drivers/net/dsa/mv88e6xxx/serdes.h
··· 29 29 #define MV88E6352_SERDES_INT_FIBRE_ENERGY BIT(4) 30 30 #define MV88E6352_SERDES_INT_STATUS 0x13 31 31 32 - #define MV88E6352_SERDES_SPEC_CTRL2 0x1a 33 - #define MV88E6352_SERDES_OUT_AMP_MASK 0x0007 34 32 35 33 #define MV88E6341_PORT5_LANE 0x15 36 34 ··· 137 139 void mv88e6352_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p); 138 140 int mv88e6390_serdes_get_regs_len(struct mv88e6xxx_chip *chip, int port); 139 141 void mv88e6390_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p); 140 - 141 - int mv88e6352_serdes_set_tx_amplitude(struct mv88e6xxx_chip *chip, int port, 142 - int val); 143 142 144 143 /* Return the (first) SERDES lane address a port is using, -errno otherwise. */ 145 144 static inline int mv88e6xxx_serdes_get_lane(struct mv88e6xxx_chip *chip,
+1 -1
drivers/net/ethernet/3com/3c59x.c
··· 1473 1473 return 0; 1474 1474 1475 1475 free_ring: 1476 - dma_free_coherent(&pdev->dev, 1476 + dma_free_coherent(gendev, 1477 1477 sizeof(struct boom_rx_desc) * RX_RING_SIZE + 1478 1478 sizeof(struct boom_tx_desc) * TX_RING_SIZE, 1479 1479 vp->rx_ring, vp->rx_ring_dma);
+6 -3
drivers/net/ethernet/airoha/airoha_ppe.c
··· 1547 1547 { 1548 1548 struct airoha_npu *npu; 1549 1549 1550 - rcu_read_lock(); 1551 - npu = rcu_dereference(eth->npu); 1550 + mutex_lock(&flow_offload_mutex); 1551 + 1552 + npu = rcu_replace_pointer(eth->npu, NULL, 1553 + lockdep_is_held(&flow_offload_mutex)); 1552 1554 if (npu) { 1553 1555 npu->ops.ppe_deinit(npu); 1554 1556 airoha_npu_put(npu); 1555 1557 } 1556 - rcu_read_unlock(); 1558 + 1559 + mutex_unlock(&flow_offload_mutex); 1557 1560 1558 1561 rhashtable_destroy(&eth->ppe->l2_flows); 1559 1562 rhashtable_destroy(&eth->flow_table);
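Editor's note: the old code tore down the NPU from inside rcu_read_lock(), but a read-side critical section neither excludes concurrent updaters nor permits the sleeping calls a deinit path may need. The fix holds the update-side mutex and detaches the pointer with rcu_replace_pointer(), whose third argument documents the lock protecting the update. A generic sketch of the pattern (the synchronize_rcu() below stands in for whatever lifetime rule the real object uses, e.g. refcounting via airoha_npu_put()):

#include <linux/mutex.h>
#include <linux/rcupdate.h>

struct npu;

static DEFINE_MUTEX(offload_mutex);
static struct npu __rcu *active_npu;

static void npu_detach_and_free(void (*destroy)(struct npu *))
{
	struct npu *npu;

	mutex_lock(&offload_mutex);
	/* Swap in NULL under the updater lock; concurrent readers keep
	 * seeing the old pointer until they leave their RCU sections. */
	npu = rcu_replace_pointer(active_npu, NULL,
				  lockdep_is_held(&offload_mutex));
	mutex_unlock(&offload_mutex);

	if (npu) {
		synchronize_rcu();	/* wait out existing readers */
		destroy(npu);
	}
}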
+4
drivers/net/ethernet/amazon/ena/ena_devlink.c
··· 53 53 { 54 54 union devlink_param_value value; 55 55 56 + devl_lock(devlink); 56 57 value.vbool = false; 57 58 devl_param_driverinit_value_set(devlink, 58 59 DEVLINK_PARAM_GENERIC_ID_ENABLE_PHC, 59 60 value); 61 + devl_unlock(devlink); 60 62 } 61 63 62 64 static void ena_devlink_port_register(struct devlink *devlink) ··· 147 145 return rc; 148 146 } 149 147 148 + devl_lock(devlink); 150 149 value.vbool = ena_phc_is_enabled(adapter); 151 150 devl_param_driverinit_value_set(devlink, 152 151 DEVLINK_PARAM_GENERIC_ID_ENABLE_PHC, 153 152 value); 153 + devl_unlock(devlink); 154 154 155 155 return 0; 156 156 }
+1
drivers/net/ethernet/broadcom/Kconfig
··· 259 259 depends on PCI 260 260 select NET_DEVLINK 261 261 select PAGE_POOL 262 + select AUXILIARY_BUS 262 263 help 263 264 This driver supports Broadcom ThorUltra 50/100/200/400/800 gigabit 264 265 Ethernet cards. The module will be called bng_en. To compile this
+15 -6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1482 1482 struct bnxt_tpa_idx_map *map = rxr->rx_tpa_idx_map; 1483 1483 u16 idx = agg_id & MAX_TPA_P5_MASK; 1484 1484 1485 - if (test_bit(idx, map->agg_idx_bmap)) 1486 - idx = find_first_zero_bit(map->agg_idx_bmap, 1487 - BNXT_AGG_IDX_BMAP_SIZE); 1485 + if (test_bit(idx, map->agg_idx_bmap)) { 1486 + idx = find_first_zero_bit(map->agg_idx_bmap, MAX_TPA_P5); 1487 + if (idx >= MAX_TPA_P5) 1488 + return INVALID_HW_RING_ID; 1489 + } 1488 1490 __set_bit(idx, map->agg_idx_bmap); 1489 1491 map->agg_id_tbl[agg_id] = idx; 1490 1492 return idx; ··· 1550 1548 if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { 1551 1549 agg_id = TPA_START_AGG_ID_P5(tpa_start); 1552 1550 agg_id = bnxt_alloc_agg_idx(rxr, agg_id); 1551 + if (unlikely(agg_id == INVALID_HW_RING_ID)) { 1552 + netdev_warn(bp->dev, "Unable to allocate agg ID for ring %d, agg 0x%x\n", 1553 + rxr->bnapi->index, 1554 + TPA_START_AGG_ID_P5(tpa_start)); 1555 + bnxt_sched_reset_rxr(bp, rxr); 1556 + return; 1557 + } 1553 1558 } else { 1554 1559 agg_id = TPA_START_AGG_ID(tpa_start); 1555 1560 } ··· 16891 16882 16892 16883 init_err_pci_clean: 16893 16884 bnxt_hwrm_func_drv_unrgtr(bp); 16894 - bnxt_free_hwrm_resources(bp); 16895 - bnxt_hwmon_uninit(bp); 16896 - bnxt_ethtool_free(bp); 16897 16885 bnxt_ptp_clear(bp); 16898 16886 kfree(bp->ptp_cfg); 16899 16887 bp->ptp_cfg = NULL; 16888 + bnxt_free_hwrm_resources(bp); 16889 + bnxt_hwmon_uninit(bp); 16890 + bnxt_ethtool_free(bp); 16900 16891 kfree(bp->fw_health); 16901 16892 bp->fw_health = NULL; 16902 16893 bnxt_cleanup_pci(bp);
+1 -3
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1080 1080 struct rx_agg_cmp *agg_arr; 1081 1081 }; 1082 1082 1083 - #define BNXT_AGG_IDX_BMAP_SIZE (MAX_TPA_P5 / BITS_PER_LONG) 1084 - 1085 1083 struct bnxt_tpa_idx_map { 1086 1084 u16 agg_id_tbl[1024]; 1087 - unsigned long agg_idx_bmap[BNXT_AGG_IDX_BMAP_SIZE]; 1085 + DECLARE_BITMAP(agg_idx_bmap, MAX_TPA_P5); 1088 1086 }; 1089 1087 1090 1088 struct bnxt_rx_ring_info {
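Editor's note: the root cause fixed here is a units bug — find_first_zero_bit() takes its size in bits, but the old code passed BNXT_AGG_IDX_BMAP_SIZE, which was MAX_TPA_P5 / BITS_PER_LONG, the array length in longs, so only a fraction of the bitmap was ever searched and a full bitmap went undetected. DECLARE_BITMAP() plus the explicit overflow check makes both the units and the failure path visible. Minimal sketch:

#include <linux/bitmap.h>
#include <linux/errno.h>

#define MAX_IDS	256

static DECLARE_BITMAP(id_bmap, MAX_IDS);	/* sized in bits, not longs */

/* Allocate the lowest free id, or -ENOSPC when the map is full. The
 * size argument of find_first_zero_bit() is always a bit count. */
static int id_alloc(void)
{
	unsigned int id = find_first_zero_bit(id_bmap, MAX_IDS);

	if (id >= MAX_IDS)
		return -ENOSPC;

	__set_bit(id, id_bmap);
	return id;
}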
+2 -2
drivers/net/ethernet/freescale/enetc/enetc.h
··· 79 79 #define ENETC_RXB_TRUESIZE (PAGE_SIZE >> 1) 80 80 #define ENETC_RXB_PAD NET_SKB_PAD /* add extra space if needed */ 81 81 #define ENETC_RXB_DMA_SIZE \ 82 - (SKB_WITH_OVERHEAD(ENETC_RXB_TRUESIZE) - ENETC_RXB_PAD) 82 + min(SKB_WITH_OVERHEAD(ENETC_RXB_TRUESIZE) - ENETC_RXB_PAD, 0xffff) 83 83 #define ENETC_RXB_DMA_SIZE_XDP \ 84 - (SKB_WITH_OVERHEAD(ENETC_RXB_TRUESIZE) - XDP_PACKET_HEADROOM) 84 + min(SKB_WITH_OVERHEAD(ENETC_RXB_TRUESIZE) - XDP_PACKET_HEADROOM, 0xffff) 85 85 86 86 struct enetc_rx_swbd { 87 87 dma_addr_t dma;
+3 -4
drivers/net/ethernet/intel/idpf/idpf.h
··· 284 284 285 285 struct idpf_fsteer_fltr { 286 286 struct list_head list; 287 - u32 loc; 288 - u32 q_index; 287 + struct ethtool_rx_flow_spec fs; 289 288 }; 290 289 291 290 /** ··· 423 424 * @rss_key: RSS hash key 424 425 * @rss_lut_size: Size of RSS lookup table 425 426 * @rss_lut: RSS lookup table 426 - * @cached_lut: Used to restore previously init RSS lut 427 427 */ 428 428 struct idpf_rss_data { 429 429 u16 rss_key_size; 430 430 u8 *rss_key; 431 431 u16 rss_lut_size; 432 432 u32 *rss_lut; 433 - u32 *cached_lut; 434 433 }; 435 434 436 435 /** ··· 555 558 * @max_q: Maximum possible queues 556 559 * @req_qs_chunks: Queue chunk data for requested queues 557 560 * @mac_filter_list_lock: Lock to protect mac filters 561 + * @flow_steer_list_lock: Lock to protect fsteer filters 558 562 * @flags: See enum idpf_vport_config_flags 559 563 */ 560 564 struct idpf_vport_config { ··· 563 565 struct idpf_vport_max_q max_q; 564 566 struct virtchnl2_add_queues *req_qs_chunks; 565 567 spinlock_t mac_filter_list_lock; 568 + spinlock_t flow_steer_list_lock; 566 569 DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS); 567 570 }; 568 571
+63 -29
drivers/net/ethernet/intel/idpf/idpf_ethtool.c
··· 37 37 { 38 38 struct idpf_netdev_priv *np = netdev_priv(netdev); 39 39 struct idpf_vport_user_config_data *user_config; 40 + struct idpf_vport_config *vport_config; 40 41 struct idpf_fsteer_fltr *f; 41 42 struct idpf_vport *vport; 42 43 unsigned int cnt = 0; ··· 45 44 46 45 idpf_vport_ctrl_lock(netdev); 47 46 vport = idpf_netdev_to_vport(netdev); 48 - user_config = &np->adapter->vport_config[np->vport_idx]->user_config; 47 + vport_config = np->adapter->vport_config[np->vport_idx]; 48 + user_config = &vport_config->user_config; 49 49 50 50 switch (cmd->cmd) { 51 51 case ETHTOOL_GRXCLSRLCNT: ··· 54 52 cmd->data = idpf_fsteer_max_rules(vport); 55 53 break; 56 54 case ETHTOOL_GRXCLSRULE: 57 - err = -EINVAL; 55 + err = -ENOENT; 56 + spin_lock_bh(&vport_config->flow_steer_list_lock); 58 57 list_for_each_entry(f, &user_config->flow_steer_list, list) 59 - if (f->loc == cmd->fs.location) { 60 - cmd->fs.ring_cookie = f->q_index; 58 + if (f->fs.location == cmd->fs.location) { 59 + /* Avoid infoleak from padding: zero first, 60 + * then assign fields 61 + */ 62 + memset(&cmd->fs, 0, sizeof(cmd->fs)); 63 + cmd->fs = f->fs; 61 64 err = 0; 62 65 break; 63 66 } 67 + spin_unlock_bh(&vport_config->flow_steer_list_lock); 64 68 break; 65 69 case ETHTOOL_GRXCLSRLALL: 66 70 cmd->data = idpf_fsteer_max_rules(vport); 71 + spin_lock_bh(&vport_config->flow_steer_list_lock); 67 72 list_for_each_entry(f, &user_config->flow_steer_list, list) { 68 73 if (cnt == cmd->rule_cnt) { 69 74 err = -EMSGSIZE; 70 75 break; 71 76 } 72 - rule_locs[cnt] = f->loc; 77 + rule_locs[cnt] = f->fs.location; 73 78 cnt++; 74 79 } 75 80 if (!err) 76 81 cmd->rule_cnt = user_config->num_fsteer_fltrs; 82 + spin_unlock_bh(&vport_config->flow_steer_list_lock); 77 83 break; 78 84 default: 79 85 break; ··· 178 168 struct idpf_vport *vport; 179 169 u32 flow_type, q_index; 180 170 u16 num_rxq; 181 - int err; 171 + int err = 0; 182 172 183 173 vport = idpf_netdev_to_vport(netdev); 184 174 vport_config = vport->adapter->vport_config[np->vport_idx]; ··· 203 193 rule = kzalloc(struct_size(rule, rule_info, 1), GFP_KERNEL); 204 194 if (!rule) 205 195 return -ENOMEM; 196 + 197 + fltr = kzalloc(sizeof(*fltr), GFP_KERNEL); 198 + if (!fltr) { 199 + err = -ENOMEM; 200 + goto out_free_rule; 201 + } 202 + 203 + /* detect duplicate entry and reject before adding rules */ 204 + spin_lock_bh(&vport_config->flow_steer_list_lock); 205 + list_for_each_entry(f, &user_config->flow_steer_list, list) { 206 + if (f->fs.location == fsp->location) { 207 + err = -EEXIST; 208 + break; 209 + } 210 + 211 + if (f->fs.location > fsp->location) 212 + break; 213 + parent = f; 214 + } 215 + spin_unlock_bh(&vport_config->flow_steer_list_lock); 216 + 217 + if (err) 218 + goto out; 206 219 207 220 rule->vport_id = cpu_to_le32(vport->vport_id); 208 221 rule->count = cpu_to_le32(1); ··· 265 232 goto out; 266 233 } 267 234 268 - fltr = kzalloc(sizeof(*fltr), GFP_KERNEL); 269 - if (!fltr) { 270 - err = -ENOMEM; 271 - goto out; 272 - } 235 + /* Save a copy of the user's flow spec so ethtool can later retrieve it */ 236 + fltr->fs = *fsp; 273 237 274 - fltr->loc = fsp->location; 275 - fltr->q_index = q_index; 276 - list_for_each_entry(f, &user_config->flow_steer_list, list) { 277 - if (f->loc >= fltr->loc) 278 - break; 279 - parent = f; 280 - } 281 - 238 + spin_lock_bh(&vport_config->flow_steer_list_lock); 282 239 parent ? 
list_add(&fltr->list, &parent->list) : 283 240 list_add(&fltr->list, &user_config->flow_steer_list); 284 241 285 242 user_config->num_fsteer_fltrs++; 243 + spin_unlock_bh(&vport_config->flow_steer_list_lock); 244 + goto out_free_rule; 286 245 287 246 out: 247 + kfree(fltr); 248 + out_free_rule: 288 249 kfree(rule); 289 250 return err; 290 251 } ··· 329 302 goto out; 330 303 } 331 304 305 + spin_lock_bh(&vport_config->flow_steer_list_lock); 332 306 list_for_each_entry_safe(f, iter, 333 307 &user_config->flow_steer_list, list) { 334 - if (f->loc == fsp->location) { 308 + if (f->fs.location == fsp->location) { 335 309 list_del(&f->list); 336 310 kfree(f); 337 311 user_config->num_fsteer_fltrs--; 338 - goto out; 312 + goto out_unlock; 339 313 } 340 314 } 341 - err = -EINVAL; 315 + err = -ENOENT; 342 316 317 + out_unlock: 318 + spin_unlock_bh(&vport_config->flow_steer_list_lock); 343 319 out: 344 320 kfree(rule); 345 321 return err; ··· 411 381 * @netdev: network interface device structure 412 382 * @rxfh: pointer to param struct (indir, key, hfunc) 413 383 * 414 - * Reads the indirection table directly from the hardware. Always returns 0. 384 + * RSS LUT and Key information are read from driver's cached 385 + * copy. When rxhash is off, rss lut will be displayed as zeros. 386 + * 387 + * Return: 0 on success, -errno otherwise. 415 388 */ 416 389 static int idpf_get_rxfh(struct net_device *netdev, 417 390 struct ethtool_rxfh_param *rxfh) ··· 422 389 struct idpf_netdev_priv *np = netdev_priv(netdev); 423 390 struct idpf_rss_data *rss_data; 424 391 struct idpf_adapter *adapter; 392 + struct idpf_vport *vport; 393 + bool rxhash_ena; 425 394 int err = 0; 426 395 u16 i; 427 396 428 397 idpf_vport_ctrl_lock(netdev); 398 + vport = idpf_netdev_to_vport(netdev); 429 399 430 400 adapter = np->adapter; 431 401 ··· 438 402 } 439 403 440 404 rss_data = &adapter->vport_config[np->vport_idx]->user_config.rss_data; 441 - if (!test_bit(IDPF_VPORT_UP, np->state)) 442 - goto unlock_mutex; 443 405 406 + rxhash_ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH); 444 407 rxfh->hfunc = ETH_RSS_HASH_TOP; 445 408 446 409 if (rxfh->key) ··· 447 412 448 413 if (rxfh->indir) { 449 414 for (i = 0; i < rss_data->rss_lut_size; i++) 450 - rxfh->indir[i] = rss_data->rss_lut[i]; 415 + rxfh->indir[i] = rxhash_ena ? rss_data->rss_lut[i] : 0; 451 416 } 452 417 453 418 unlock_mutex: ··· 487 452 } 488 453 489 454 rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; 490 - if (!test_bit(IDPF_VPORT_UP, np->state)) 491 - goto unlock_mutex; 492 455 493 456 if (rxfh->hfunc != ETH_RSS_HASH_NO_CHANGE && 494 457 rxfh->hfunc != ETH_RSS_HASH_TOP) { ··· 502 469 rss_data->rss_lut[lut] = rxfh->indir[lut]; 503 470 } 504 471 505 - err = idpf_config_rss(vport); 472 + if (test_bit(IDPF_VPORT_UP, np->state)) 473 + err = idpf_config_rss(vport); 506 474 507 475 unlock_mutex: 508 476 idpf_vport_ctrl_unlock(netdev);
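[Editor's note] The idpf_ethtool.c hunk keeps the flow-steer list sorted by fs.location and rejects duplicates under the new spinlock before any hardware rule is programmed. A minimal userspace sketch of that ordered-insert-with-duplicate-check pattern, with a pthread mutex standing in for the kernel spinlock; struct fltr and its fields are illustrative:

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct fltr {
    	unsigned int location;		/* sort key, like fs.location */
    	struct fltr *next;
    };

    static struct fltr *head;
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Insert in ascending order; reject an existing location with -EEXIST. */
    static int fltr_insert(unsigned int location)
    {
    	struct fltr **pos, *f;

    	f = calloc(1, sizeof(*f));
    	if (!f)
    		return -ENOMEM;
    	f->location = location;

    	pthread_mutex_lock(&list_lock);
    	for (pos = &head; *pos; pos = &(*pos)->next) {
    		if ((*pos)->location == location) {
    			pthread_mutex_unlock(&list_lock);
    			free(f);
    			return -EEXIST;
    		}
    		if ((*pos)->location > location)
    			break;
    	}
    	f->next = *pos;
    	*pos = f;
    	pthread_mutex_unlock(&list_lock);
    	return 0;
    }

    int main(void)
    {
    	fltr_insert(4);
    	fltr_insert(1);
    	printf("duplicate: %d\n", fltr_insert(4));	/* prints -17 (-EEXIST) */
    	for (struct fltr *f = head; f; f = f->next)
    		printf("loc %u\n", f->location);
    	return 0;
    }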
+1 -1
drivers/net/ethernet/intel/idpf/idpf_idc.c
··· 322 322 for (i = 0; i < adapter->num_alloc_vports; i++) { 323 323 struct idpf_vport *vport = adapter->vports[i]; 324 324 325 - if (!vport) 325 + if (!vport || !vport->vdev_info) 326 326 continue; 327 327 328 328 idpf_unplug_aux_dev(vport->vdev_info->adev);
+154 -120
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 443 443 } 444 444 445 445 /** 446 + * idpf_del_all_flow_steer_filters - Delete all flow steer filters in list 447 + * @vport: main vport struct 448 + * 449 + * Takes flow_steer_list_lock spinlock. Deletes all filters 450 + */ 451 + static void idpf_del_all_flow_steer_filters(struct idpf_vport *vport) 452 + { 453 + struct idpf_vport_config *vport_config; 454 + struct idpf_fsteer_fltr *f, *ftmp; 455 + 456 + vport_config = vport->adapter->vport_config[vport->idx]; 457 + 458 + spin_lock_bh(&vport_config->flow_steer_list_lock); 459 + list_for_each_entry_safe(f, ftmp, &vport_config->user_config.flow_steer_list, 460 + list) { 461 + list_del(&f->list); 462 + kfree(f); 463 + } 464 + vport_config->user_config.num_fsteer_fltrs = 0; 465 + spin_unlock_bh(&vport_config->flow_steer_list_lock); 466 + } 467 + 468 + /** 446 469 * idpf_find_mac_filter - Search filter list for specific mac filter 447 470 * @vconfig: Vport config structure 448 471 * @macaddr: The MAC address ··· 752 729 return 0; 753 730 } 754 731 732 + static void idpf_detach_and_close(struct idpf_adapter *adapter) 733 + { 734 + int max_vports = adapter->max_vports; 735 + 736 + for (int i = 0; i < max_vports; i++) { 737 + struct net_device *netdev = adapter->netdevs[i]; 738 + 739 + /* If the interface is in detached state, that means the 740 + * previous reset was not handled successfully for this 741 + * vport. 742 + */ 743 + if (!netif_device_present(netdev)) 744 + continue; 745 + 746 + /* Hold RTNL to protect racing with callbacks */ 747 + rtnl_lock(); 748 + netif_device_detach(netdev); 749 + if (netif_running(netdev)) { 750 + set_bit(IDPF_VPORT_UP_REQUESTED, 751 + adapter->vport_config[i]->flags); 752 + dev_close(netdev); 753 + } 754 + rtnl_unlock(); 755 + } 756 + } 757 + 758 + static void idpf_attach_and_open(struct idpf_adapter *adapter) 759 + { 760 + int max_vports = adapter->max_vports; 761 + 762 + for (int i = 0; i < max_vports; i++) { 763 + struct idpf_vport *vport = adapter->vports[i]; 764 + struct idpf_vport_config *vport_config; 765 + struct net_device *netdev; 766 + 767 + /* In case of a critical error in the init task, the vport 768 + * will be freed. Only continue to restore the netdevs 769 + * if the vport is allocated. 770 + */ 771 + if (!vport) 772 + continue; 773 + 774 + /* No need for RTNL on attach as this function is called 775 + * following detach and dev_close(). We do take RTNL for 776 + * dev_open() below as it can race with external callbacks 777 + * following the call to netif_device_attach(). 
778 + */ 779 + netdev = adapter->netdevs[i]; 780 + netif_device_attach(netdev); 781 + vport_config = adapter->vport_config[vport->idx]; 782 + if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, 783 + vport_config->flags)) { 784 + rtnl_lock(); 785 + dev_open(netdev, NULL); 786 + rtnl_unlock(); 787 + } 788 + } 789 + } 790 + 755 791 /** 756 792 * idpf_cfg_netdev - Allocate, configure and register a netdev 757 793 * @vport: main vport structure ··· 1073 991 u16 idx = vport->idx; 1074 992 1075 993 vport_config = adapter->vport_config[vport->idx]; 1076 - idpf_deinit_rss(vport); 994 + idpf_deinit_rss_lut(vport); 1077 995 rss_data = &vport_config->user_config.rss_data; 1078 996 kfree(rss_data->rss_key); 1079 997 rss_data->rss_key = NULL; ··· 1105 1023 kfree(adapter->vport_config[idx]->req_qs_chunks); 1106 1024 adapter->vport_config[idx]->req_qs_chunks = NULL; 1107 1025 } 1026 + kfree(vport->rx_ptype_lkup); 1027 + vport->rx_ptype_lkup = NULL; 1108 1028 kfree(vport); 1109 1029 adapter->num_alloc_vports--; 1110 1030 } ··· 1125 1041 idpf_idc_deinit_vport_aux_device(vport->vdev_info); 1126 1042 1127 1043 idpf_deinit_mac_addr(vport); 1128 - idpf_vport_stop(vport, true); 1129 1044 1130 - if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags)) 1045 + if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags)) { 1046 + idpf_vport_stop(vport, true); 1131 1047 idpf_decfg_netdev(vport); 1132 - if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) 1048 + } 1049 + if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) { 1133 1050 idpf_del_all_mac_filters(vport); 1051 + idpf_del_all_flow_steer_filters(vport); 1052 + } 1134 1053 1135 1054 if (adapter->netdevs[i]) { 1136 1055 struct idpf_netdev_priv *np = netdev_priv(adapter->netdevs[i]); ··· 1226 1139 u16 idx = adapter->next_vport; 1227 1140 struct idpf_vport *vport; 1228 1141 u16 num_max_q; 1142 + int err; 1229 1143 1230 1144 if (idx == IDPF_NO_FREE_SLOT) 1231 1145 return NULL; ··· 1277 1189 1278 1190 idpf_vport_init(vport, max_q); 1279 1191 1280 - /* This alloc is done separate from the LUT because it's not strictly 1281 - * dependent on how many queues we have. If we change number of queues 1282 - * and soft reset we'll need a new LUT but the key can remain the same 1283 - * for as long as the vport exists. 1192 + /* LUT and key are both initialized here. Key is not strictly dependent 1193 + * on how many queues we have. If we change number of queues and soft 1194 + * reset is initiated, LUT will be freed and a new LUT will be allocated 1195 + * as per the updated number of queues during vport bringup. However, 1196 + * the key remains the same for as long as the vport exists. 
1284 1197 */ 1285 1198 rss_data = &adapter->vport_config[idx]->user_config.rss_data; 1286 1199 rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL); ··· 1290 1201 1291 1202 /* Initialize default rss key */ 1292 1203 netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size); 1204 + 1205 + /* Initialize default rss LUT */ 1206 + err = idpf_init_rss_lut(vport); 1207 + if (err) 1208 + goto free_rss_key; 1293 1209 1294 1210 /* fill vport slot in the adapter struct */ 1295 1211 adapter->vports[idx] = vport; ··· 1306 1212 1307 1213 return vport; 1308 1214 1215 + free_rss_key: 1216 + kfree(rss_data->rss_key); 1309 1217 free_vector_idxs: 1310 1218 kfree(vport->q_vector_idxs); 1311 1219 free_vport: ··· 1484 1388 { 1485 1389 struct idpf_netdev_priv *np = netdev_priv(vport->netdev); 1486 1390 struct idpf_adapter *adapter = vport->adapter; 1487 - struct idpf_vport_config *vport_config; 1488 1391 int err; 1489 1392 1490 1393 if (test_bit(IDPF_VPORT_UP, np->state)) ··· 1524 1429 if (err) { 1525 1430 dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n", 1526 1431 vport->vport_id, err); 1527 - goto queues_rel; 1432 + goto intr_deinit; 1528 1433 } 1529 1434 1530 1435 err = idpf_rx_bufs_init_all(vport); 1531 1436 if (err) { 1532 1437 dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n", 1533 1438 vport->vport_id, err); 1534 - goto queues_rel; 1439 + goto intr_deinit; 1535 1440 } 1536 1441 1537 1442 idpf_rx_init_buf_tail(vport); ··· 1577 1482 1578 1483 idpf_restore_features(vport); 1579 1484 1580 - vport_config = adapter->vport_config[vport->idx]; 1581 - if (vport_config->user_config.rss_data.rss_lut) 1582 - err = idpf_config_rss(vport); 1583 - else 1584 - err = idpf_init_rss(vport); 1485 + err = idpf_config_rss(vport); 1585 1486 if (err) { 1586 - dev_err(&adapter->pdev->dev, "Failed to initialize RSS for vport %u: %d\n", 1487 + dev_err(&adapter->pdev->dev, "Failed to configure RSS for vport %u: %d\n", 1587 1488 vport->vport_id, err); 1588 1489 goto disable_vport; 1589 1490 } ··· 1588 1497 if (err) { 1589 1498 dev_err(&adapter->pdev->dev, "Failed to complete interface up for vport %u: %d\n", 1590 1499 vport->vport_id, err); 1591 - goto deinit_rss; 1500 + goto disable_vport; 1592 1501 } 1593 1502 1594 1503 if (rtnl) ··· 1596 1505 1597 1506 return 0; 1598 1507 1599 - deinit_rss: 1600 - idpf_deinit_rss(vport); 1601 1508 disable_vport: 1602 1509 idpf_send_disable_vport_msg(vport); 1603 1510 disable_queues: ··· 1633 1544 struct idpf_vport_config *vport_config; 1634 1545 struct idpf_vport_max_q max_q; 1635 1546 struct idpf_adapter *adapter; 1636 - struct idpf_netdev_priv *np; 1637 1547 struct idpf_vport *vport; 1638 1548 u16 num_default_vports; 1639 1549 struct pci_dev *pdev; ··· 1667 1579 goto unwind_vports; 1668 1580 } 1669 1581 1582 + err = idpf_send_get_rx_ptype_msg(vport); 1583 + if (err) 1584 + goto unwind_vports; 1585 + 1670 1586 index = vport->idx; 1671 1587 vport_config = adapter->vport_config[index]; 1672 1588 1673 1589 spin_lock_init(&vport_config->mac_filter_list_lock); 1590 + spin_lock_init(&vport_config->flow_steer_list_lock); 1674 1591 1675 1592 INIT_LIST_HEAD(&vport_config->user_config.mac_filter_list); 1676 1593 INIT_LIST_HEAD(&vport_config->user_config.flow_steer_list); ··· 1683 1590 err = idpf_check_supported_desc_ids(vport); 1684 1591 if (err) { 1685 1592 dev_err(&pdev->dev, "failed to get required descriptor ids\n"); 1686 - goto cfg_netdev_err; 1593 + goto unwind_vports; 1687 1594 } 1688 1595 1689 1596 if 
(idpf_cfg_netdev(vport)) 1690 - goto cfg_netdev_err; 1691 - 1692 - err = idpf_send_get_rx_ptype_msg(vport); 1693 - if (err) 1694 - goto handle_err; 1695 - 1696 - /* Once state is put into DOWN, driver is ready for dev_open */ 1697 - np = netdev_priv(vport->netdev); 1698 - clear_bit(IDPF_VPORT_UP, np->state); 1699 - if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags)) 1700 - idpf_vport_open(vport, true); 1597 + goto unwind_vports; 1701 1598 1702 1599 /* Spawn and return 'idpf_init_task' work queue until all the 1703 1600 * default vports are created ··· 1718 1635 set_bit(IDPF_VPORT_REG_NETDEV, vport_config->flags); 1719 1636 } 1720 1637 1721 - /* As all the required vports are created, clear the reset flag 1722 - * unconditionally here in case we were in reset and the link was down. 1723 - */ 1638 + /* Clear the reset and load bits as all vports are created */ 1724 1639 clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags); 1640 + clear_bit(IDPF_HR_DRV_LOAD, adapter->flags); 1725 1641 /* Start the statistics task now */ 1726 1642 queue_delayed_work(adapter->stats_wq, &adapter->stats_task, 1727 1643 msecs_to_jiffies(10 * (pdev->devfn & 0x07))); 1728 1644 1729 1645 return; 1730 1646 1731 - handle_err: 1732 - idpf_decfg_netdev(vport); 1733 - cfg_netdev_err: 1734 - idpf_vport_rel(vport); 1735 - adapter->vports[index] = NULL; 1736 1647 unwind_vports: 1737 1648 if (default_vport) { 1738 1649 for (index = 0; index < adapter->max_vports; index++) { ··· 1734 1657 idpf_vport_dealloc(adapter->vports[index]); 1735 1658 } 1736 1659 } 1660 + /* Cleanup after vc_core_init, which has no way of knowing the 1661 + * init task failed on driver load. 1662 + */ 1663 + if (test_and_clear_bit(IDPF_HR_DRV_LOAD, adapter->flags)) { 1664 + cancel_delayed_work_sync(&adapter->serv_task); 1665 + cancel_delayed_work_sync(&adapter->mbx_task); 1666 + } 1667 + idpf_ptp_release(adapter); 1668 + 1737 1669 clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags); 1738 1670 } 1739 1671 ··· 1873 1787 } 1874 1788 1875 1789 /** 1876 - * idpf_set_vport_state - Set the vport state to be after the reset 1877 - * @adapter: Driver specific private structure 1878 - */ 1879 - static void idpf_set_vport_state(struct idpf_adapter *adapter) 1880 - { 1881 - u16 i; 1882 - 1883 - for (i = 0; i < adapter->max_vports; i++) { 1884 - struct idpf_netdev_priv *np; 1885 - 1886 - if (!adapter->netdevs[i]) 1887 - continue; 1888 - 1889 - np = netdev_priv(adapter->netdevs[i]); 1890 - if (test_bit(IDPF_VPORT_UP, np->state)) 1891 - set_bit(IDPF_VPORT_UP_REQUESTED, 1892 - adapter->vport_config[i]->flags); 1893 - } 1894 - } 1895 - 1896 - /** 1897 1790 * idpf_init_hard_reset - Initiate a hardware reset 1898 1791 * @adapter: Driver specific private structure 1899 1792 * ··· 1880 1815 * reallocate. Also reinitialize the mailbox. Return 0 on success, 1881 1816 * negative on failure. 
1882 1817 */ 1883 - static int idpf_init_hard_reset(struct idpf_adapter *adapter) 1818 + static void idpf_init_hard_reset(struct idpf_adapter *adapter) 1884 1819 { 1885 1820 struct idpf_reg_ops *reg_ops = &adapter->dev_ops.reg_ops; 1886 1821 struct device *dev = &adapter->pdev->dev; 1887 - struct net_device *netdev; 1888 1822 int err; 1889 - u16 i; 1890 1823 1824 + idpf_detach_and_close(adapter); 1891 1825 mutex_lock(&adapter->vport_ctrl_lock); 1892 1826 1893 1827 dev_info(dev, "Device HW Reset initiated\n"); 1894 1828 1895 - /* Avoid TX hangs on reset */ 1896 - for (i = 0; i < adapter->max_vports; i++) { 1897 - netdev = adapter->netdevs[i]; 1898 - if (!netdev) 1899 - continue; 1900 - 1901 - netif_carrier_off(netdev); 1902 - netif_tx_disable(netdev); 1903 - } 1904 - 1905 1829 /* Prepare for reset */ 1906 - if (test_and_clear_bit(IDPF_HR_DRV_LOAD, adapter->flags)) { 1830 + if (test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) { 1907 1831 reg_ops->trigger_reset(adapter, IDPF_HR_DRV_LOAD); 1908 1832 } else if (test_and_clear_bit(IDPF_HR_FUNC_RESET, adapter->flags)) { 1909 1833 bool is_reset = idpf_is_reset_detected(adapter); 1910 1834 1911 1835 idpf_idc_issue_reset_event(adapter->cdev_info); 1912 1836 1913 - idpf_set_vport_state(adapter); 1914 1837 idpf_vc_core_deinit(adapter); 1915 1838 if (!is_reset) 1916 1839 reg_ops->trigger_reset(adapter, IDPF_HR_FUNC_RESET); ··· 1945 1892 unlock_mutex: 1946 1893 mutex_unlock(&adapter->vport_ctrl_lock); 1947 1894 1948 - /* Wait until all vports are created to init RDMA CORE AUX */ 1949 - if (!err) 1950 - err = idpf_idc_init(adapter); 1951 - 1952 - return err; 1895 + /* Attempt to restore netdevs and initialize RDMA CORE AUX device, 1896 + * provided vc_core_init succeeded. It is still possible that 1897 + * vports are not allocated at this point if the init task failed. 1898 + */ 1899 + if (!err) { 1900 + idpf_attach_and_open(adapter); 1901 + idpf_idc_init(adapter); 1902 + } 1953 1903 } 1954 1904 1955 1905 /** ··· 2053 1997 idpf_vport_stop(vport, false); 2054 1998 } 2055 1999 2056 - idpf_deinit_rss(vport); 2057 2000 /* We're passing in vport here because we need its wait_queue 2058 2001 * to send a message and it should be getting all the vport 2059 2002 * config data out of the adapter but we need to be careful not ··· 2077 2022 err = idpf_set_real_num_queues(vport); 2078 2023 if (err) 2079 2024 goto err_open; 2025 + 2026 + if (reset_cause == IDPF_SR_Q_CHANGE && 2027 + !netif_is_rxfh_configured(vport->netdev)) 2028 + idpf_fill_dflt_rss_lut(vport); 2080 2029 2081 2030 if (vport_is_up) 2082 2031 err = idpf_vport_open(vport, false); ··· 2225 2166 } 2226 2167 2227 2168 /** 2228 - * idpf_vport_manage_rss_lut - disable/enable RSS 2229 - * @vport: the vport being changed 2230 - * 2231 - * In the event of disable request for RSS, this function will zero out RSS 2232 - * LUT, while in the event of enable request for RSS, it will reconfigure RSS 2233 - * LUT with the default LUT configuration. 
2234 - */ 2235 - static int idpf_vport_manage_rss_lut(struct idpf_vport *vport) 2236 - { 2237 - bool ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH); 2238 - struct idpf_rss_data *rss_data; 2239 - u16 idx = vport->idx; 2240 - int lut_size; 2241 - 2242 - rss_data = &vport->adapter->vport_config[idx]->user_config.rss_data; 2243 - lut_size = rss_data->rss_lut_size * sizeof(u32); 2244 - 2245 - if (ena) { 2246 - /* This will contain the default or user configured LUT */ 2247 - memcpy(rss_data->rss_lut, rss_data->cached_lut, lut_size); 2248 - } else { 2249 - /* Save a copy of the current LUT to be restored later if 2250 - * requested. 2251 - */ 2252 - memcpy(rss_data->cached_lut, rss_data->rss_lut, lut_size); 2253 - 2254 - /* Zero out the current LUT to disable */ 2255 - memset(rss_data->rss_lut, 0, lut_size); 2256 - } 2257 - 2258 - return idpf_config_rss(vport); 2259 - } 2260 - 2261 - /** 2262 2169 * idpf_set_features - set the netdev feature flags 2263 2170 * @netdev: ptr to the netdev being adjusted 2264 2171 * @features: the feature set that the stack is suggesting ··· 2249 2224 } 2250 2225 2251 2226 if (changed & NETIF_F_RXHASH) { 2227 + struct idpf_netdev_priv *np = netdev_priv(netdev); 2228 + 2252 2229 netdev->features ^= NETIF_F_RXHASH; 2253 - err = idpf_vport_manage_rss_lut(vport); 2254 - if (err) 2255 - goto unlock_mutex; 2230 + 2231 + /* If the interface is not up when changing the rxhash, update 2232 + * to the HW is skipped. The updated LUT will be committed to 2233 + * the HW when the interface is brought up. 2234 + */ 2235 + if (test_bit(IDPF_VPORT_UP, np->state)) { 2236 + err = idpf_config_rss(vport); 2237 + if (err) 2238 + goto unlock_mutex; 2239 + } 2256 2240 } 2257 2241 2258 2242 if (changed & NETIF_F_GRO_HW) {
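[Editor's note] Among the idpf_lib.c changes, idpf_vport_alloc() gains a free_rss_key label so the RSS key is released when LUT init fails, extending the usual reverse-order "goto unwind" idiom. A userspace sketch of that idiom under made-up sizes and names:

    #include <stdlib.h>

    struct vport { int *vec_idxs; int *rss_key; int *rss_lut; };

    static struct vport *vport_alloc(void)
    {
    	struct vport *v = calloc(1, sizeof(*v));

    	if (!v)
    		return NULL;
    	v->vec_idxs = calloc(16, sizeof(int));
    	if (!v->vec_idxs)
    		goto free_vport;
    	v->rss_key = calloc(52, sizeof(int));
    	if (!v->rss_key)
    		goto free_vector_idxs;
    	v->rss_lut = calloc(128, sizeof(int));
    	if (!v->rss_lut)
    		goto free_rss_key;	/* the label the patch adds */
    	return v;

    free_rss_key:
    	free(v->rss_key);
    free_vector_idxs:
    	free(v->vec_idxs);
    free_vport:
    	free(v);
    	return NULL;
    }

    int main(void)
    {
    	struct vport *v = vport_alloc();

    	if (v) {
    		free(v->rss_lut);
    		free(v->rss_key);
    		free(v->vec_idxs);
    		free(v);
    	}
    	return 0;
    }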
+19 -27
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 695 695 static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq) 696 696 { 697 697 struct libeth_fq fq = { 698 - .count = rxq->desc_count, 699 - .type = LIBETH_FQE_MTU, 700 - .nid = idpf_q_vector_to_mem(rxq->q_vector), 698 + .count = rxq->desc_count, 699 + .type = LIBETH_FQE_MTU, 700 + .buf_len = IDPF_RX_MAX_BUF_SZ, 701 + .nid = idpf_q_vector_to_mem(rxq->q_vector), 701 702 }; 702 703 int ret; 703 704 ··· 755 754 .truesize = bufq->truesize, 756 755 .count = bufq->desc_count, 757 756 .type = type, 757 + .buf_len = IDPF_RX_MAX_BUF_SZ, 758 758 .hsplit = idpf_queue_has(HSPLIT_EN, bufq), 759 759 .xdp = idpf_xdp_enabled(bufq->q_vector->vport), 760 760 .nid = idpf_q_vector_to_mem(bufq->q_vector), ··· 4643 4641 * idpf_fill_dflt_rss_lut - Fill the indirection table with the default values 4644 4642 * @vport: virtual port structure 4645 4643 */ 4646 - static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport) 4644 + void idpf_fill_dflt_rss_lut(struct idpf_vport *vport) 4647 4645 { 4648 4646 struct idpf_adapter *adapter = vport->adapter; 4649 4647 u16 num_active_rxq = vport->num_rxq; ··· 4652 4650 4653 4651 rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; 4654 4652 4655 - for (i = 0; i < rss_data->rss_lut_size; i++) { 4653 + for (i = 0; i < rss_data->rss_lut_size; i++) 4656 4654 rss_data->rss_lut[i] = i % num_active_rxq; 4657 - rss_data->cached_lut[i] = rss_data->rss_lut[i]; 4658 - } 4659 4655 } 4660 4656 4661 4657 /** 4662 - * idpf_init_rss - Allocate and initialize RSS resources 4658 + * idpf_init_rss_lut - Allocate and initialize RSS LUT 4663 4659 * @vport: virtual port 4664 4660 * 4665 - * Return 0 on success, negative on failure 4661 + * Return: 0 on success, negative on failure 4666 4662 */ 4667 - int idpf_init_rss(struct idpf_vport *vport) 4663 + int idpf_init_rss_lut(struct idpf_vport *vport) 4668 4664 { 4669 4665 struct idpf_adapter *adapter = vport->adapter; 4670 4666 struct idpf_rss_data *rss_data; 4671 - u32 lut_size; 4672 4667 4673 4668 rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; 4669 + if (!rss_data->rss_lut) { 4670 + u32 lut_size; 4674 4671 4675 - lut_size = rss_data->rss_lut_size * sizeof(u32); 4676 - rss_data->rss_lut = kzalloc(lut_size, GFP_KERNEL); 4677 - if (!rss_data->rss_lut) 4678 - return -ENOMEM; 4679 - 4680 - rss_data->cached_lut = kzalloc(lut_size, GFP_KERNEL); 4681 - if (!rss_data->cached_lut) { 4682 - kfree(rss_data->rss_lut); 4683 - rss_data->rss_lut = NULL; 4684 - 4685 - return -ENOMEM; 4672 + lut_size = rss_data->rss_lut_size * sizeof(u32); 4673 + rss_data->rss_lut = kzalloc(lut_size, GFP_KERNEL); 4674 + if (!rss_data->rss_lut) 4675 + return -ENOMEM; 4686 4676 } 4687 4677 4688 4678 /* Fill the default RSS lut values */ 4689 4679 idpf_fill_dflt_rss_lut(vport); 4690 4680 4691 - return idpf_config_rss(vport); 4681 + return 0; 4692 4682 } 4693 4683 4694 4684 /** 4695 - * idpf_deinit_rss - Release RSS resources 4685 + * idpf_deinit_rss_lut - Release RSS LUT 4696 4686 * @vport: virtual port 4697 4687 */ 4698 - void idpf_deinit_rss(struct idpf_vport *vport) 4688 + void idpf_deinit_rss_lut(struct idpf_vport *vport) 4699 4689 { 4700 4690 struct idpf_adapter *adapter = vport->adapter; 4701 4691 struct idpf_rss_data *rss_data; 4702 4692 4703 4693 rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; 4704 - kfree(rss_data->cached_lut); 4705 - rss_data->cached_lut = NULL; 4706 4694 kfree(rss_data->rss_lut); 4707 4695 rss_data->rss_lut = NULL; 4708 4696 }
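[Editor's note] The idpf_txrx.c hunk keeps the default LUT fill trivially simple: lut[i] = i % num_active_rxq spreads hash buckets round-robin across the active RX queues. A tiny sketch with made-up sizes showing the resulting distribution:

    #include <stdio.h>

    int main(void)
    {
    	unsigned int lut[16], num_active_rxq = 6;

    	for (unsigned int i = 0; i < 16; i++)
    		lut[i] = i % num_active_rxq;	/* round-robin default */

    	for (unsigned int i = 0; i < 16; i++)
    		printf("bucket %2u -> rxq %u\n", i, lut[i]);
    	return 0;
    }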
+4 -2
drivers/net/ethernet/intel/idpf/idpf_txrx.h
··· 101 101 idx = 0; \ 102 102 } while (0) 103 103 104 + #define IDPF_RX_MAX_BUF_SZ (16384 - 128) 104 105 #define IDPF_RX_BUF_STRIDE 32 105 106 #define IDPF_RX_BUF_POST_STRIDE 16 106 107 #define IDPF_LOW_WATERMARK 64 ··· 1086 1085 void idpf_vport_intr_deinit(struct idpf_vport *vport); 1087 1086 int idpf_vport_intr_init(struct idpf_vport *vport); 1088 1087 void idpf_vport_intr_ena(struct idpf_vport *vport); 1088 + void idpf_fill_dflt_rss_lut(struct idpf_vport *vport); 1089 1089 int idpf_config_rss(struct idpf_vport *vport); 1090 - int idpf_init_rss(struct idpf_vport *vport); 1091 - void idpf_deinit_rss(struct idpf_vport *vport); 1090 + int idpf_init_rss_lut(struct idpf_vport *vport); 1091 + void idpf_deinit_rss_lut(struct idpf_vport *vport); 1092 1092 int idpf_rx_bufs_init_all(struct idpf_vport *vport); 1093 1093 1094 1094 struct idpf_q_vector *idpf_find_rxq_vec(const struct idpf_vport *vport,
+12 -1
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
··· 2804 2804 * @vport: virtual port data structure 2805 2805 * @get: flag to set or get rss look up table 2806 2806 * 2807 + * When rxhash is disabled, RSS LUT will be configured with zeros. If rxhash 2808 + * is enabled, the LUT values stored in driver's soft copy will be used to setup 2809 + * the HW. 2810 + * 2807 2811 * Returns 0 on success, negative on failure. 2808 2812 */ 2809 2813 int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get) ··· 2818 2814 struct idpf_rss_data *rss_data; 2819 2815 int buf_size, lut_buf_size; 2820 2816 ssize_t reply_sz; 2817 + bool rxhash_ena; 2821 2818 int i; 2822 2819 2823 2820 rss_data = 2824 2821 &vport->adapter->vport_config[vport->idx]->user_config.rss_data; 2822 + rxhash_ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH); 2825 2823 buf_size = struct_size(rl, lut, rss_data->rss_lut_size); 2826 2824 rl = kzalloc(buf_size, GFP_KERNEL); 2827 2825 if (!rl) ··· 2845 2839 } else { 2846 2840 rl->lut_entries = cpu_to_le16(rss_data->rss_lut_size); 2847 2841 for (i = 0; i < rss_data->rss_lut_size; i++) 2848 - rl->lut[i] = cpu_to_le32(rss_data->rss_lut[i]); 2842 + rl->lut[i] = rxhash_ena ? 2843 + cpu_to_le32(rss_data->rss_lut[i]) : 0; 2849 2844 2850 2845 xn_params.vc_op = VIRTCHNL2_OP_SET_RSS_LUT; 2851 2846 } ··· 3577 3570 */ 3578 3571 void idpf_vc_core_deinit(struct idpf_adapter *adapter) 3579 3572 { 3573 + struct idpf_hw *hw = &adapter->hw; 3580 3574 bool remove_in_prog; 3581 3575 3582 3576 if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags)) ··· 3600 3592 cancel_delayed_work_sync(&adapter->mbx_task); 3601 3593 3602 3594 idpf_vport_params_buf_rel(adapter); 3595 + 3596 + kfree(hw->lan_regs); 3597 + hw->lan_regs = NULL; 3603 3598 3604 3599 kfree(adapter->vports); 3605 3600 adapter->vports = NULL;
+2
drivers/net/ethernet/marvell/prestera/prestera_devlink.c
··· 387 387 388 388 dl = devlink_alloc(&prestera_dl_ops, sizeof(struct prestera_switch), 389 389 dev->dev); 390 + if (!dl) 391 + return NULL; 390 392 391 393 return devlink_priv(dl); 392 394 }
+11 -3
drivers/net/ethernet/mellanox/mlx5/core/en_accel/psp.c
··· 44 44 struct mlx5_flow_table *ft; 45 45 struct mlx5_flow_group *miss_group; 46 46 struct mlx5_flow_handle *miss_rule; 47 + struct mlx5_modify_hdr *rx_modify_hdr; 47 48 struct mlx5_flow_destination default_dest; 48 49 struct mlx5e_psp_rx_err rx_err; 49 50 u32 refcnt; ··· 287 286 return err; 288 287 } 289 288 290 - static void accel_psp_fs_rx_fs_destroy(struct mlx5e_accel_fs_psp_prot *fs_prot) 289 + static void accel_psp_fs_rx_fs_destroy(struct mlx5e_psp_fs *fs, 290 + struct mlx5e_accel_fs_psp_prot *fs_prot) 291 291 { 292 292 if (fs_prot->def_rule) { 293 293 mlx5_del_flow_rules(fs_prot->def_rule); 294 294 fs_prot->def_rule = NULL; 295 + } 296 + 297 + if (fs_prot->rx_modify_hdr) { 298 + mlx5_modify_header_dealloc(fs->mdev, fs_prot->rx_modify_hdr); 299 + fs_prot->rx_modify_hdr = NULL; 295 300 } 296 301 297 302 if (fs_prot->miss_rule) { ··· 403 396 modify_hdr = NULL; 404 397 goto out_err; 405 398 } 399 + fs_prot->rx_modify_hdr = modify_hdr; 406 400 407 401 flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | 408 402 MLX5_FLOW_CONTEXT_ACTION_CRYPTO_DECRYPT | ··· 424 416 goto out; 425 417 426 418 out_err: 427 - accel_psp_fs_rx_fs_destroy(fs_prot); 419 + accel_psp_fs_rx_fs_destroy(fs, fs_prot); 428 420 out: 429 421 kvfree(flow_group_in); 430 422 kvfree(spec); ··· 441 433 /* The netdev unreg already happened, so all offloaded rule are already removed */ 442 434 fs_prot = &accel_psp->fs_prot[type]; 443 435 444 - accel_psp_fs_rx_fs_destroy(fs_prot); 436 + accel_psp_fs_rx_fs_destroy(fs, fs_prot); 445 437 446 438 accel_psp_fs_rx_err_destroy_ft(fs, &fs_prot->rx_err); 447 439
+5 -4
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
··· 1608 1608 { 1609 1609 int mode = fec_active_mode(priv->mdev); 1610 1610 1611 - if (mode == MLX5E_FEC_NOFEC || 1612 - !MLX5_CAP_PCAM_FEATURE(priv->mdev, ppcnt_statistical_group)) 1611 + if (mode == MLX5E_FEC_NOFEC) 1613 1612 return; 1614 1613 1615 - fec_set_corrected_bits_total(priv, fec_stats); 1616 - fec_set_block_stats(priv, mode, fec_stats); 1614 + if (MLX5_CAP_PCAM_FEATURE(priv->mdev, ppcnt_statistical_group)) { 1615 + fec_set_corrected_bits_total(priv, fec_stats); 1616 + fec_set_block_stats(priv, mode, fec_stats); 1617 + } 1617 1618 1618 1619 if (MLX5_CAP_PCAM_REG(priv->mdev, pphcr)) 1619 1620 fec_set_histograms_stats(priv, mode, hist);
+7 -2
drivers/net/ethernet/mellanox/mlx5/core/lag/mp.c
··· 173 173 } 174 174 175 175 /* Handle multipath entry with lower priority value */ 176 - if (mp->fib.mfi && mp->fib.mfi != fi && 176 + if (mp->fib.mfi && 177 177 (mp->fib.dst != fen_info->dst || mp->fib.dst_len != fen_info->dst_len) && 178 - fi->fib_priority >= mp->fib.priority) 178 + mp->fib.dst_len <= fen_info->dst_len && 179 + !(mp->fib.dst_len == fen_info->dst_len && 180 + fi->fib_priority < mp->fib.priority)) { 181 + mlx5_core_dbg(ldev->pf[idx].dev, 182 + "Multipath entry with lower priority was rejected\n"); 179 183 return; 184 + } 180 185 181 186 nh_dev0 = mlx5_lag_get_next_fib_dev(ldev, fi, NULL); 182 187 nh_dev1 = mlx5_lag_get_next_fib_dev(ldev, fi, nh_dev0);
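[Editor's note] The reworked mlx5 lag/mp.c condition rejects a second multipath entry unless it is more specific (longer dst_len) or equally specific with a strictly lower (better) priority value. A sketch of that acceptance test as a standalone predicate; field and function names are illustrative, not the driver's:

    #include <stdbool.h>
    #include <stdio.h>

    struct fib { unsigned int dst_len; unsigned int priority; };

    static bool accept_new(const struct fib *cur, const struct fib *cand)
    {
    	if (cand->dst_len > cur->dst_len)	/* more specific wins */
    		return true;
    	/* equal specificity: lower priority value is better */
    	return cand->dst_len == cur->dst_len && cand->priority < cur->priority;
    }

    int main(void)
    {
    	struct fib cur = { .dst_len = 24, .priority = 10 };
    	struct fib more_specific = { .dst_len = 28, .priority = 50 };
    	struct fib worse = { .dst_len = 16, .priority = 1 };

    	printf("%d %d\n", accept_new(&cur, &more_specific),	/* 1 */
    	       accept_new(&cur, &worse));			/* 0 */
    	return 0;
    }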
+6 -3
drivers/net/ethernet/mellanox/mlx5/core/port.c
··· 393 393 if (err) 394 394 return err; 395 395 396 - *status = MLX5_GET(mcia_reg, out, status); 397 - if (*status) 396 + if (MLX5_GET(mcia_reg, out, status)) { 397 + if (status) 398 + *status = MLX5_GET(mcia_reg, out, status); 398 399 return -EIO; 400 + } 399 401 400 402 ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0); 401 403 memcpy(data, ptr, size); ··· 431 429 mlx5_qsfp_eeprom_params_set(&query.i2c_address, &query.page, &offset); 432 430 break; 433 431 default: 434 - mlx5_core_err(dev, "Module ID not recognized: 0x%x\n", module_id); 432 + mlx5_core_dbg(dev, "Module ID not recognized: 0x%x\n", 433 + module_id); 435 434 return -EINVAL; 436 435 } 437 436
+4 -2
drivers/net/ethernet/mscc/ocelot.c
··· 2307 2307 2308 2308 /* Now, set PGIDs for each active LAG */ 2309 2309 for (lag = 0; lag < ocelot->num_phys_ports; lag++) { 2310 - struct net_device *bond = ocelot->ports[lag]->bond; 2310 + struct ocelot_port *ocelot_port = ocelot->ports[lag]; 2311 2311 int num_active_ports = 0; 2312 + struct net_device *bond; 2312 2313 unsigned long bond_mask; 2313 2314 u8 aggr_idx[16]; 2314 2315 2315 - if (!bond || (visited & BIT(lag))) 2316 + if (!ocelot_port || !ocelot_port->bond || (visited & BIT(lag))) 2316 2317 continue; 2317 2318 2319 + bond = ocelot_port->bond; 2318 2320 bond_mask = ocelot_get_bond_mask(ocelot, bond); 2319 2321 2320 2322 for_each_set_bit(port, &bond_mask, ocelot->num_phys_ports) {
+8
drivers/net/netdevsim/bus.c
··· 332 332 rcu_assign_pointer(nsim_a->peer, nsim_b); 333 333 rcu_assign_pointer(nsim_b->peer, nsim_a); 334 334 335 + if (netif_running(dev_a) && netif_running(dev_b)) { 336 + netif_carrier_on(dev_a); 337 + netif_carrier_on(dev_b); 338 + } 339 + 335 340 out_err: 336 341 put_net(ns_b); 337 342 put_net(ns_a); ··· 385 380 peer = rtnl_dereference(nsim->peer); 386 381 if (!peer) 387 382 goto out_put_netns; 383 + 384 + netif_carrier_off(dev); 385 + netif_carrier_off(peer->netdev); 388 386 389 387 err = 0; 390 388 RCU_INIT_POINTER(nsim->peer, NULL);
+3
drivers/net/phy/mxl-86110.c
··· 938 938 PHY_ID_MATCH_EXACT(PHY_ID_MXL86110), 939 939 .name = "MXL86110 Gigabit Ethernet", 940 940 .config_init = mxl86110_config_init, 941 + .suspend = genphy_suspend, 942 + .resume = genphy_resume, 943 + .soft_reset = genphy_soft_reset, 941 944 .get_wol = mxl86110_get_wol, 942 945 .set_wol = mxl86110_set_wol, 943 946 .led_brightness_set = mxl86110_led_brightness_set,
+1 -1
drivers/net/phy/sfp.c
··· 765 765 dev_addr++; 766 766 } 767 767 768 - return 0; 768 + return data - (u8 *)buf; 769 769 } 770 770 771 771 static int sfp_i2c_configure(struct sfp *sfp, struct i2c_adapter *i2c)
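[Editor's note] The one-line sfp.c fix makes the read helper report how many bytes it actually copied rather than a hard-coded 0, by subtracting the start of the buffer from the advanced cursor. A sketch of that pattern; read_chunk() is a made-up stand-in for the underlying I2C transfer:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    static size_t read_chunk(unsigned char *dst, size_t want)
    {
    	size_t got = want > 5 ? 5 : want;	/* pretend short transfers */

    	memset(dst, 0xab, got);
    	return got;
    }

    static ptrdiff_t read_all(void *buf, size_t len)
    {
    	unsigned char *data = buf;

    	while (len) {
    		size_t got = read_chunk(data, len);

    		if (!got)
    			break;
    		data += got;
    		len -= got;
    	}
    	return data - (unsigned char *)buf;	/* bytes copied, not 0 */
    }

    int main(void)
    {
    	unsigned char buf[12];

    	printf("read %td bytes\n", read_all(buf, sizeof(buf)));
    	return 0;
    }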
+2
drivers/net/usb/pegasus.c
··· 168 168 netif_device_detach(pegasus->net); 169 169 netif_err(pegasus, drv, pegasus->net, 170 170 "%s returned %d\n", __func__, ret); 171 + usb_free_urb(async_urb); 172 + kfree(req); 171 173 } 172 174 return ret; 173 175 }
+3 -3
drivers/net/virtio_net.c
··· 3791 3791 if (vi->has_rss && !netif_is_rxfh_configured(dev)) { 3792 3792 old_rss_hdr = vi->rss_hdr; 3793 3793 old_rss_trailer = vi->rss_trailer; 3794 - vi->rss_hdr = devm_kzalloc(&dev->dev, virtnet_rss_hdr_size(vi), GFP_KERNEL); 3794 + vi->rss_hdr = devm_kzalloc(&vi->vdev->dev, virtnet_rss_hdr_size(vi), GFP_KERNEL); 3795 3795 if (!vi->rss_hdr) { 3796 3796 vi->rss_hdr = old_rss_hdr; 3797 3797 return -ENOMEM; ··· 3802 3802 3803 3803 if (!virtnet_commit_rss_command(vi)) { 3804 3804 /* restore ctrl_rss if commit_rss_command failed */ 3805 - devm_kfree(&dev->dev, vi->rss_hdr); 3805 + devm_kfree(&vi->vdev->dev, vi->rss_hdr); 3806 3806 vi->rss_hdr = old_rss_hdr; 3807 3807 vi->rss_trailer = old_rss_trailer; 3808 3808 ··· 3810 3810 queue_pairs); 3811 3811 return -EINVAL; 3812 3812 } 3813 - devm_kfree(&dev->dev, old_rss_hdr); 3813 + devm_kfree(&vi->vdev->dev, old_rss_hdr); 3814 3814 goto succ; 3815 3815 } 3816 3816
+3 -3
drivers/net/wireless/virtual/mac80211_hwsim.c
··· 4040 4040 ieee80211_vif_to_wdev(data->nan_device_vif); 4041 4041 4042 4042 if (data->nan_curr_dw_band == NL80211_BAND_5GHZ) 4043 - ch = ieee80211_get_channel(hw->wiphy, 5475); 4043 + ch = ieee80211_get_channel(hw->wiphy, 5745); 4044 4044 else 4045 4045 ch = ieee80211_get_channel(hw->wiphy, 2437); 4046 4046 ··· 4112 4112 hrtimer_cancel(&data->nan_timer); 4113 4113 data->nan_device_vif = NULL; 4114 4114 4115 - spin_lock(&hwsim_radio_lock); 4115 + spin_lock_bh(&hwsim_radio_lock); 4116 4116 list_for_each_entry(data2, &hwsim_radios, list) { 4117 4117 if (data2->nan_device_vif) { 4118 4118 nan_cluster_running = true; 4119 4119 break; 4120 4120 } 4121 4121 } 4122 - spin_unlock(&hwsim_radio_lock); 4122 + spin_unlock_bh(&hwsim_radio_lock); 4123 4123 4124 4124 if (!nan_cluster_running) 4125 4125 memset(hwsim_nan_cluster_id, 0, ETH_ALEN);
+6
drivers/net/wwan/iosm/iosm_ipc_mux.c
··· 456 456 struct sk_buff_head *free_list; 457 457 union mux_msg mux_msg; 458 458 struct sk_buff *skb; 459 + int i; 459 460 460 461 if (!ipc_mux->initialized) 461 462 return; ··· 478 477 if (ipc_mux->channel) { 479 478 ipc_mux->channel->ul_pipe.is_open = false; 480 479 ipc_mux->channel->dl_pipe.is_open = false; 480 + } 481 + 482 + if (ipc_mux->protocol != MUX_LITE) { 483 + for (i = 0; i < IPC_MEM_MUX_IP_SESSION_ENTRIES; i++) 484 + kfree(ipc_mux->ul_adb.pp_qlt[i]); 481 485 } 482 486 483 487 kfree(ipc_mux);
+3 -5
drivers/of/unittest.c
··· 1985 1985 */ 1986 1986 static int __init unittest_data_add(void) 1987 1987 { 1988 - void *unittest_data; 1989 1988 void *unittest_data_align; 1990 1989 struct device_node *unittest_data_node = NULL, *np; 1991 1990 /* ··· 2003 2004 } 2004 2005 2005 2006 /* creating copy */ 2006 - unittest_data = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL); 2007 + void *unittest_data __free(kfree) = kmalloc(size + FDT_ALIGN_SIZE, GFP_KERNEL); 2007 2008 if (!unittest_data) 2008 2009 return -ENOMEM; 2009 2010 ··· 2013 2014 ret = of_fdt_unflatten_tree(unittest_data_align, NULL, &unittest_data_node); 2014 2015 if (!ret) { 2015 2016 pr_warn("%s: unflatten testcases tree failed\n", __func__); 2016 - kfree(unittest_data); 2017 2017 return -ENODATA; 2018 2018 } 2019 2019 if (!unittest_data_node) { 2020 2020 pr_warn("%s: testcases tree is empty\n", __func__); 2021 - kfree(unittest_data); 2022 2021 return -ENODATA; 2023 2022 } 2024 2023 ··· 2035 2038 /* attach the sub-tree to live tree */ 2036 2039 if (!of_root) { 2037 2040 pr_warn("%s: no live tree to attach sub-tree\n", __func__); 2038 - kfree(unittest_data); 2039 2041 rc = -ENODEV; 2040 2042 goto unlock; 2041 2043 } ··· 2054 2058 2055 2059 EXPECT_END(KERN_INFO, 2056 2060 "Duplicate name in testcase-data, renamed to \"duplicate-name#1\""); 2061 + 2062 + retain_and_null_ptr(unittest_data); 2057 2063 2058 2064 unlock: 2059 2065 of_overlay_mutex_unlock();
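[Editor's note] The of/unittest.c rework replaces manual kfree calls on every error path with the kernel's scope-based __free(kfree) helper, then keeps the buffer alive on success via retain_and_null_ptr(). A userspace analogue using the GCC/Clang cleanup attribute; the helper names and failure path below are contrived for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void autofree(void *p)
    {
    	free(*(void **)p);	/* runs when the variable goes out of scope */
    }

    static char *live_tree;		/* pretend long-lived owner */

    static int build(void)
    {
    	char *data __attribute__((cleanup(autofree))) = malloc(64);

    	if (!data)
    		return -1;
    	strcpy(data, "unflattened tree");
    	if (data[0] != 'u')	/* any failure path frees data automatically */
    		return -1;

    	live_tree = data;	/* success: ownership moves to the tree */
    	data = NULL;		/* poor man's retain_and_null_ptr() */
    	return 0;
    }

    int main(void)
    {
    	if (build())
    		return 1;
    	puts(live_tree);
    	free(live_tree);
    	return 0;
    }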
+3 -34
drivers/pci/controller/dwc/pci-meson.c
··· 37 37 #define PCIE_CFG_STATUS17 0x44 38 38 #define PM_CURRENT_STATE(x) (((x) >> 7) & 0x1) 39 39 40 - #define WAIT_LINKUP_TIMEOUT 4000 41 40 #define PORT_CLK_RATE 100000000UL 42 41 #define MAX_PAYLOAD_SIZE 256 43 42 #define MAX_READ_REQ_SIZE 256 ··· 349 350 static bool meson_pcie_link_up(struct dw_pcie *pci) 350 351 { 351 352 struct meson_pcie *mp = to_meson_pcie(pci); 352 - struct device *dev = pci->dev; 353 - u32 speed_okay = 0; 354 - u32 cnt = 0; 355 - u32 state12, state17, smlh_up, ltssm_up, rdlh_up; 353 + u32 state12; 356 354 357 - do { 358 - state12 = meson_cfg_readl(mp, PCIE_CFG_STATUS12); 359 - state17 = meson_cfg_readl(mp, PCIE_CFG_STATUS17); 360 - smlh_up = IS_SMLH_LINK_UP(state12); 361 - rdlh_up = IS_RDLH_LINK_UP(state12); 362 - ltssm_up = IS_LTSSM_UP(state12); 363 - 364 - if (PM_CURRENT_STATE(state17) < PCIE_GEN3) 365 - speed_okay = 1; 366 - 367 - if (smlh_up) 368 - dev_dbg(dev, "smlh_link_up is on\n"); 369 - if (rdlh_up) 370 - dev_dbg(dev, "rdlh_link_up is on\n"); 371 - if (ltssm_up) 372 - dev_dbg(dev, "ltssm_up is on\n"); 373 - if (speed_okay) 374 - dev_dbg(dev, "speed_okay\n"); 375 - 376 - if (smlh_up && rdlh_up && ltssm_up && speed_okay) 377 - return true; 378 - 379 - cnt++; 380 - 381 - udelay(10); 382 - } while (cnt < WAIT_LINKUP_TIMEOUT); 383 - 384 - dev_err(dev, "error: wait linkup timeout\n"); 385 - return false; 355 + state12 = meson_cfg_readl(mp, PCIE_CFG_STATUS12); 356 + return IS_SMLH_LINK_UP(state12) && IS_RDLH_LINK_UP(state12); 386 357 } 387 358 388 359 static int meson_pcie_host_init(struct dw_pcie_rp *pp)
+3 -1
drivers/pci/controller/dwc/pcie-qcom.c
··· 1047 1047 writel(WR_NO_SNOOP_OVERRIDE_EN | RD_NO_SNOOP_OVERRIDE_EN, 1048 1048 pcie->parf + PARF_NO_SNOOP_OVERRIDE); 1049 1049 1050 - qcom_pcie_clear_aspm_l0s(pcie->pci); 1051 1050 qcom_pcie_clear_hpc(pcie->pci); 1052 1051 1053 1052 return 0; ··· 1315 1316 goto err_disable_phy; 1316 1317 } 1317 1318 1319 + qcom_pcie_clear_aspm_l0s(pcie->pci); 1320 + 1318 1321 qcom_ep_reset_deassert(pcie); 1319 1322 1320 1323 if (pcie->cfg->ops->config_sid) { ··· 1465 1464 1466 1465 static const struct qcom_pcie_cfg cfg_2_3_2 = { 1467 1466 .ops = &ops_2_3_2, 1467 + .no_l0s = true, 1468 1468 }; 1469 1469 1470 1470 static const struct qcom_pcie_cfg cfg_2_3_3 = {
-1
drivers/pci/quirks.c
··· 6308 6308 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node); 6309 6309 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node); 6310 6310 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_EFAR, 0x9660, of_pci_make_dev_node); 6311 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RPI, PCI_DEVICE_ID_RPI_RP1_C0, of_pci_make_dev_node); 6312 6311 6313 6312 /* 6314 6313 * Devices known to require a longer delay before first config space access
-7
drivers/pci/vgaarb.c
··· 652 652 return true; 653 653 } 654 654 655 - /* 656 - * Vgadev has neither IO nor MEM enabled. If we haven't found any 657 - * other VGA devices, it is the best candidate so far. 658 - */ 659 - if (!boot_vga) 660 - return true; 661 - 662 655 return false; 663 656 } 664 657
+1
drivers/pinctrl/Kconfig
··· 491 491 depends on ARCH_MICROCHIP || COMPILE_TEST 492 492 depends on OF 493 493 select GENERIC_PINCONF 494 + select REGMAP_MMIO 494 495 default y 495 496 help 496 497 This selects the pinctrl driver for gpio2 on pic64gx.
+1 -1
drivers/pinctrl/mediatek/pinctrl-mt8189.c
··· 1642 1642 }; 1643 1643 1644 1644 static const char * const mt8189_pinctrl_register_base_names[] = { 1645 - "base", "lm", "rb0", "rb1", "bm0", "bm1", "bm2", "lt0", "lt1", "rt", 1645 + "base", "bm0", "bm1", "bm2", "lm", "lt0", "lt1", "rb0", "rb1", "rt", 1646 1646 }; 1647 1647 1648 1648 static const struct mtk_eint_hw mt8189_eint_hw = {
+1 -1
drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
··· 498 498 pctrl->chip.base = -1; 499 499 pctrl->chip.ngpio = data->npins; 500 500 pctrl->chip.label = dev_name(dev); 501 - pctrl->chip.can_sleep = false; 501 + pctrl->chip.can_sleep = true; 502 502 503 503 mutex_init(&pctrl->lock); 504 504
+4 -3
drivers/resctrl/mpam_devices.c
··· 1072 1072 u64 now; 1073 1073 bool nrdy = false; 1074 1074 bool config_mismatch; 1075 - bool overflow; 1075 + bool overflow = false; 1076 1076 struct mon_read *m = arg; 1077 1077 struct mon_cfg *ctx = m->ctx; 1078 1078 bool reset_on_next_read = false; ··· 1176 1176 } 1177 1177 mpam_mon_sel_unlock(msc); 1178 1178 1179 - if (nrdy) { 1179 + if (nrdy) 1180 1180 m->err = -EBUSY; 1181 + 1182 + if (m->err) 1181 1183 return; 1182 - } 1183 1184 1184 1185 *m->val += now; 1185 1186 }
+1 -1
drivers/uio/uio.c
··· 3 3 * drivers/uio/uio.c 4 4 * 5 5 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de> 6 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de> 8 8 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com> 9 9 *
+7 -6
drivers/xen/acpi.c
··· 89 89 int *trigger_out, 90 90 int *polarity_out) 91 91 { 92 - int gsi; 92 + u32 gsi; 93 93 u8 pin; 94 94 struct acpi_prt_entry *entry; 95 95 int trigger = ACPI_LEVEL_SENSITIVE; 96 - int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ? 96 + int ret, polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ? 97 97 ACPI_ACTIVE_HIGH : ACPI_ACTIVE_LOW; 98 98 99 99 if (!dev || !gsi_out || !trigger_out || !polarity_out) ··· 105 105 106 106 entry = acpi_pci_irq_lookup(dev, pin); 107 107 if (entry) { 108 + ret = 0; 108 109 if (entry->link) 109 - gsi = acpi_pci_link_allocate_irq(entry->link, 110 + ret = acpi_pci_link_allocate_irq(entry->link, 110 111 entry->index, 111 112 &trigger, &polarity, 112 - NULL); 113 + NULL, &gsi); 113 114 else 114 115 gsi = entry->index; 115 116 } else 116 - gsi = -1; 117 + ret = -ENODEV; 117 118 118 - if (gsi < 0) 119 + if (ret < 0) 119 120 return -EINVAL; 120 121 121 122 *gsi_out = gsi;
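[Editor's note] The xen/acpi.c fix stops smuggling status through a signed return that also carried an unsigned GSI: the error code now travels in an int while the value lands in a u32 out parameter, so large GSIs no longer look like negative errors. A sketch of the split; lookup() is a made-up stand-in for acpi_pci_link_allocate_irq():

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    static int lookup(int index, uint32_t *gsi_out)
    {
    	if (index < 0)
    		return -ENODEV;
    	/* a value this large would read as negative if returned as int */
    	*gsi_out = 0x80000000u + (uint32_t)index;
    	return 0;
    }

    int main(void)
    {
    	uint32_t gsi;
    	int ret = lookup(3, &gsi);

    	if (ret < 0)
    		return 1;
    	printf("gsi = %u\n", gsi);
    	return 0;
    }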
+17 -15
fs/btrfs/delayed-inode.c
··· 152 152 return ERR_PTR(-ENOMEM); 153 153 btrfs_init_delayed_node(node, root, ino); 154 154 155 + /* Cached in the inode and can be accessed. */ 156 + refcount_set(&node->refs, 2); 157 + btrfs_delayed_node_ref_tracker_alloc(node, tracker, GFP_NOFS); 158 + btrfs_delayed_node_ref_tracker_alloc(node, &node->inode_cache_tracker, GFP_NOFS); 159 + 155 160 /* Allocate and reserve the slot, from now it can return a NULL from xa_load(). */ 156 161 ret = xa_reserve(&root->delayed_nodes, ino, GFP_NOFS); 157 - if (ret == -ENOMEM) { 158 - btrfs_delayed_node_ref_tracker_dir_exit(node); 159 - kmem_cache_free(delayed_node_cache, node); 160 - return ERR_PTR(-ENOMEM); 161 - } 162 + if (ret == -ENOMEM) 163 + goto cleanup; 164 + 162 165 xa_lock(&root->delayed_nodes); 163 166 ptr = xa_load(&root->delayed_nodes, ino); 164 167 if (ptr) { 165 168 /* Somebody inserted it, go back and read it. */ 166 169 xa_unlock(&root->delayed_nodes); 167 - btrfs_delayed_node_ref_tracker_dir_exit(node); 168 - kmem_cache_free(delayed_node_cache, node); 169 - node = NULL; 170 - goto again; 170 + goto cleanup; 171 171 } 172 172 ptr = __xa_store(&root->delayed_nodes, ino, node, GFP_ATOMIC); 173 173 ASSERT(xa_err(ptr) != -EINVAL); 174 174 ASSERT(xa_err(ptr) != -ENOMEM); 175 175 ASSERT(ptr == NULL); 176 - 177 - /* Cached in the inode and can be accessed. */ 178 - refcount_set(&node->refs, 2); 179 - btrfs_delayed_node_ref_tracker_alloc(node, tracker, GFP_ATOMIC); 180 - btrfs_delayed_node_ref_tracker_alloc(node, &node->inode_cache_tracker, GFP_ATOMIC); 181 - 182 176 btrfs_inode->delayed_node = node; 183 177 xa_unlock(&root->delayed_nodes); 184 178 185 179 return node; 180 + cleanup: 181 + btrfs_delayed_node_ref_tracker_free(node, tracker); 182 + btrfs_delayed_node_ref_tracker_free(node, &node->inode_cache_tracker); 183 + btrfs_delayed_node_ref_tracker_dir_exit(node); 184 + kmem_cache_free(delayed_node_cache, node); 185 + if (ret) 186 + return ERR_PTR(ret); 187 + goto again; 186 188 } 187 189 188 190 /*
+1
fs/btrfs/disk-io.c
··· 2255 2255 BTRFS_DATA_RELOC_TREE_OBJECTID, true); 2256 2256 if (IS_ERR(root)) { 2257 2257 if (!btrfs_test_opt(fs_info, IGNOREBADROOTS)) { 2258 + location.objectid = BTRFS_DATA_RELOC_TREE_OBJECTID; 2258 2259 ret = PTR_ERR(root); 2259 2260 goto out; 2260 2261 }
+4 -4
fs/btrfs/extent_io.c
··· 1728 1728 struct btrfs_ordered_extent *ordered; 1729 1729 1730 1730 ordered = btrfs_lookup_first_ordered_range(inode, cur, 1731 - folio_end - cur); 1731 + fs_info->sectorsize); 1732 1732 /* 1733 1733 * We have just run delalloc before getting here, so 1734 1734 * there must be an ordered extent. ··· 1742 1742 btrfs_put_ordered_extent(ordered); 1743 1743 1744 1744 btrfs_mark_ordered_io_finished(inode, folio, cur, 1745 - end - cur, true); 1745 + fs_info->sectorsize, true); 1746 1746 /* 1747 1747 * This range is beyond i_size, thus we don't need to 1748 1748 * bother writing back. ··· 1751 1751 * writeback the sectors with subpage dirty bits, 1752 1752 * causing writeback without ordered extent. 1753 1753 */ 1754 - btrfs_folio_clear_dirty(fs_info, folio, cur, end - cur); 1755 - break; 1754 + btrfs_folio_clear_dirty(fs_info, folio, cur, fs_info->sectorsize); 1755 + continue; 1756 1756 } 1757 1757 ret = submit_one_sector(inode, folio, cur, bio_ctrl, i_size); 1758 1758 if (unlikely(ret < 0)) {
+47 -16
fs/btrfs/inode.c
··· 481 481 ASSERT(size <= sectorsize); 482 482 483 483 /* 484 - * The compressed size also needs to be no larger than a sector. 485 - * That's also why we only need one page as the parameter. 484 + * The compressed size also needs to be no larger than a page. 485 + * That's also why we only need one folio as the parameter. 486 486 */ 487 - if (compressed_folio) 487 + if (compressed_folio) { 488 488 ASSERT(compressed_size <= sectorsize); 489 - else 489 + ASSERT(compressed_size <= PAGE_SIZE); 490 + } else { 490 491 ASSERT(compressed_size == 0); 492 + } 491 493 492 494 if (compressed_size && compressed_folio) 493 495 cur_size = compressed_size; ··· 576 574 if (offset != 0) 577 575 return false; 578 576 577 + /* 578 + * Even for bs > ps cases, cow_file_range_inline() can only accept a 579 + * single folio. 580 + * 581 + * This can be problematic and cause access beyond page boundary if a 582 + * page sized folio is passed into that function. 583 + * And encoded write is doing exactly that. 584 + * So here limits the inlined extent size to PAGE_SIZE. 585 + */ 586 + if (size > PAGE_SIZE || compressed_size > PAGE_SIZE) 587 + return false; 588 + 579 589 /* Inline extents are limited to sectorsize. */ 580 590 if (size > fs_info->sectorsize) 581 591 return false; ··· 632 618 struct btrfs_drop_extents_args drop_args = { 0 }; 633 619 struct btrfs_root *root = inode->root; 634 620 struct btrfs_fs_info *fs_info = root->fs_info; 635 - struct btrfs_trans_handle *trans; 621 + struct btrfs_trans_handle *trans = NULL; 636 622 u64 data_len = (compressed_size ?: size); 637 623 int ret; 638 624 struct btrfs_path *path; 639 625 640 626 path = btrfs_alloc_path(); 641 - if (!path) 642 - return -ENOMEM; 627 + if (!path) { 628 + ret = -ENOMEM; 629 + goto out; 630 + } 643 631 644 632 trans = btrfs_join_transaction(root); 645 633 if (IS_ERR(trans)) { 646 - btrfs_free_path(path); 647 - return PTR_ERR(trans); 634 + ret = PTR_ERR(trans); 635 + trans = NULL; 636 + goto out; 648 637 } 649 638 trans->block_rsv = &inode->block_rsv; 650 639 ··· 691 674 * it won't count as data extent, free them directly here. 692 675 * And at reserve time, it's always aligned to page size, so 693 676 * just free one page here. 677 + * 678 + * If we fallback to non-inline (ret == 1) due to -ENOSPC, then we need 679 + * to keep the data reservation. 694 680 */ 695 - btrfs_qgroup_free_data(inode, NULL, 0, fs_info->sectorsize, NULL); 681 + if (ret <= 0) 682 + btrfs_qgroup_free_data(inode, NULL, 0, fs_info->sectorsize, NULL); 696 683 btrfs_free_path(path); 697 - btrfs_end_transaction(trans); 684 + if (trans) 685 + btrfs_end_transaction(trans); 698 686 return ret; 699 687 } 700 688 ··· 4048 4026 btrfs_set_inode_mapping_order(inode); 4049 4027 4050 4028 cache_index: 4051 - ret = btrfs_init_file_extent_tree(inode); 4052 - if (ret) 4053 - goto out; 4054 - btrfs_inode_set_file_extent_range(inode, 0, 4055 - round_up(i_size_read(vfs_inode), fs_info->sectorsize)); 4056 4029 /* 4057 4030 * If we were modified in the current generation and evicted from memory 4058 4031 * and then re-read we need to do a full sync since we don't have any ··· 4133 4116 "error loading props for ino %llu (root %llu): %d", 4134 4117 btrfs_ino(inode), btrfs_root_id(root), ret); 4135 4118 } 4119 + 4120 + /* 4121 + * We don't need the path anymore, so release it to avoid holding a read 4122 + * lock on a leaf while calling btrfs_init_file_extent_tree(), which can 4123 + * allocate memory that triggers reclaim (GFP_KERNEL) and cause a locking 4124 + * dependency. 
4125 + */ 4126 + btrfs_release_path(path); 4127 + 4128 + ret = btrfs_init_file_extent_tree(inode); 4129 + if (ret) 4130 + goto out; 4131 + btrfs_inode_set_file_extent_range(inode, 0, 4132 + round_up(i_size_read(vfs_inode), fs_info->sectorsize)); 4136 4133 4137 4134 if (!maybe_acls) 4138 4135 cache_no_acl(vfs_inode);
+19 -2
fs/btrfs/qgroup.c
··· 3208 3208 { 3209 3209 struct btrfs_qgroup *src; 3210 3210 struct btrfs_qgroup *parent; 3211 + struct btrfs_qgroup *qgroup; 3211 3212 struct btrfs_qgroup_list *list; 3213 + LIST_HEAD(qgroup_list); 3214 + const u32 nodesize = fs_info->nodesize; 3212 3215 int nr_parents = 0; 3216 + 3217 + if (btrfs_qgroup_mode(fs_info) != BTRFS_QGROUP_MODE_FULL) 3218 + return 0; 3213 3219 3214 3220 src = find_qgroup_rb(fs_info, srcid); 3215 3221 if (!src) ··· 3251 3245 if (parent->excl != parent->rfer) 3252 3246 return 1; 3253 3247 3254 - parent->excl += fs_info->nodesize; 3255 - parent->rfer += fs_info->nodesize; 3248 + qgroup_iterator_add(&qgroup_list, parent); 3249 + list_for_each_entry(qgroup, &qgroup_list, iterator) { 3250 + qgroup->rfer += nodesize; 3251 + qgroup->rfer_cmpr += nodesize; 3252 + qgroup->excl += nodesize; 3253 + qgroup->excl_cmpr += nodesize; 3254 + qgroup_dirty(fs_info, qgroup); 3255 + 3256 + /* Append parent qgroups to @qgroup_list. */ 3257 + list_for_each_entry(list, &qgroup->groups, next_group) 3258 + qgroup_iterator_add(&qgroup_list, list->group); 3259 + } 3260 + qgroup_iterator_clean(&qgroup_list); 3256 3261 return 0; 3257 3262 } 3258 3263
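[Editor's note] The btrfs qgroup hunk replaces a single-parent update with the qgroup iterator list: each visited group's parents are appended to the worklist, so the nodesize accounting propagates through the whole ancestry exactly once. A userspace sketch of that worklist traversal over a toy graph; names mirror the idea, not btrfs types:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX 8

    struct qgroup {
    	int id;
    	long rfer, excl;
    	struct qgroup *parents[2];
    	bool queued;	/* stands in for the list_empty(&iterator) check */
    };

    static void propagate(struct qgroup *start, long nodesize)
    {
    	struct qgroup *list[MAX];
    	int head = 0, tail = 0;

    	list[tail++] = start;
    	start->queued = true;

    	while (head < tail) {
    		struct qgroup *qg = list[head++];

    		qg->rfer += nodesize;
    		qg->excl += nodesize;
    		/* append unvisited parents so ancestors get updated too */
    		for (int i = 0; i < 2; i++)
    			if (qg->parents[i] && !qg->parents[i]->queued) {
    				qg->parents[i]->queued = true;
    				list[tail++] = qg->parents[i];
    			}
    	}
    }

    int main(void)
    {
    	struct qgroup gp = { .id = 3 };
    	struct qgroup p = { .id = 2, .parents = { &gp } };
    	struct qgroup leaf = { .id = 1, .parents = { &p } };

    	propagate(&leaf, 16384);
    	printf("%ld %ld %ld\n", leaf.rfer, p.rfer, gp.rfer);
    	return 0;
    }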
+5 -7
fs/btrfs/super.c
··· 736 736 */ 737 737 void btrfs_set_free_space_cache_settings(struct btrfs_fs_info *fs_info) 738 738 { 739 - if (fs_info->sectorsize < PAGE_SIZE) { 739 + if (fs_info->sectorsize != PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) { 740 + btrfs_info(fs_info, 741 + "forcing free space tree for sector size %u with page size %lu", 742 + fs_info->sectorsize, PAGE_SIZE); 740 743 btrfs_clear_opt(fs_info->mount_opt, SPACE_CACHE); 741 - if (!btrfs_test_opt(fs_info, FREE_SPACE_TREE)) { 742 - btrfs_info(fs_info, 743 - "forcing free space tree for sector size %u with page size %lu", 744 - fs_info->sectorsize, PAGE_SIZE); 745 - btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE); 746 - } 744 + btrfs_set_opt(fs_info->mount_opt, FREE_SPACE_TREE); 747 745 } 748 746 749 747 /*
+6 -5
fs/btrfs/transaction.c
··· 520 520 * when this is done, it is safe to start a new transaction, but the current 521 521 * transaction might not be fully on disk. 522 522 */ 523 - static void wait_current_trans(struct btrfs_fs_info *fs_info) 523 + static void wait_current_trans(struct btrfs_fs_info *fs_info, unsigned int type) 524 524 { 525 525 struct btrfs_transaction *cur_trans; 526 526 527 527 spin_lock(&fs_info->trans_lock); 528 528 cur_trans = fs_info->running_transaction; 529 - if (cur_trans && is_transaction_blocked(cur_trans)) { 529 + if (cur_trans && is_transaction_blocked(cur_trans) && 530 + (btrfs_blocked_trans_types[cur_trans->state] & type)) { 530 531 refcount_inc(&cur_trans->use_count); 531 532 spin_unlock(&fs_info->trans_lock); 532 533 ··· 702 701 sb_start_intwrite(fs_info->sb); 703 702 704 703 if (may_wait_transaction(fs_info, type)) 705 - wait_current_trans(fs_info); 704 + wait_current_trans(fs_info, type); 706 705 707 706 do { 708 707 ret = join_transaction(fs_info, type); 709 708 if (ret == -EBUSY) { 710 - wait_current_trans(fs_info); 709 + wait_current_trans(fs_info, type); 711 710 if (unlikely(type == TRANS_ATTACH || 712 711 type == TRANS_JOIN_NOSTART)) 713 712 ret = -ENOENT; ··· 1004 1003 1005 1004 void btrfs_throttle(struct btrfs_fs_info *fs_info) 1006 1005 { 1007 - wait_current_trans(fs_info); 1006 + wait_current_trans(fs_info, TRANS_START); 1008 1007 } 1009 1008 1010 1009 bool btrfs_should_end_transaction(struct btrfs_trans_handle *trans)
+3 -5
fs/btrfs/tree-log.c
··· 190 190 191 191 btrfs_abort_transaction(wc->trans, error); 192 192 193 - if (wc->subvol_path->nodes[0]) { 193 + if (wc->subvol_path && wc->subvol_path->nodes[0]) { 194 194 btrfs_crit(fs_info, 195 195 "subvolume (root %llu) leaf currently being processed:", 196 196 btrfs_root_id(wc->root)); ··· 6341 6341 * and no keys greater than that, so bail out. 6342 6342 */ 6343 6343 break; 6344 - } else if ((min_key->type == BTRFS_INODE_REF_KEY || 6345 - min_key->type == BTRFS_INODE_EXTREF_KEY) && 6346 - (inode->generation == trans->transid || 6347 - ctx->logging_conflict_inodes)) { 6344 + } else if (min_key->type == BTRFS_INODE_REF_KEY || 6345 + min_key->type == BTRFS_INODE_EXTREF_KEY) { 6348 6346 u64 other_ino = 0; 6349 6347 u64 other_parent = 0; 6350 6348
+2 -1
fs/ecryptfs/inode.c
··· 533 533 fsstack_copy_inode_size(dir, lower_dir); 534 534 set_nlink(dir, lower_dir->i_nlink); 535 535 out: 536 + dput(lower_dir_dentry); 536 537 end_creating(lower_dentry); 537 538 if (d_really_is_negative(dentry)) 538 539 d_drop(dentry); ··· 585 584 fsstack_copy_attr_times(dir, lower_dir); 586 585 fsstack_copy_inode_size(dir, lower_dir); 587 586 out: 588 - end_removing(lower_dentry); 587 + end_creating(lower_dentry); 589 588 if (d_really_is_negative(dentry)) 590 589 d_drop(dentry); 591 590 return rc;
+13 -6
fs/erofs/super.c
··· 644 644 * fs contexts (including its own) due to self-controlled RO 645 645 * accesses/contexts and no side-effect changes that need to 646 646 * context save & restore so it can reuse the current thread 647 - * context. However, it still needs to bump `s_stack_depth` to 648 - * avoid kernel stack overflow from nested filesystems. 647 + * context. 648 + * However, we still need to prevent kernel stack overflow due 649 + * to filesystem nesting: just ensure that s_stack_depth is 0 650 + * to disallow mounting EROFS on stacked filesystems. 651 + * Note: s_stack_depth is not incremented here for now, since 652 + * EROFS is the only fs supporting file-backed mounts for now. 653 + * It MUST change if another fs plans to support them, which 654 + * may also require adjusting FILESYSTEM_MAX_STACK_DEPTH. 649 655 */ 650 656 if (erofs_is_fileio_mode(sbi)) { 651 - sb->s_stack_depth = 652 - file_inode(sbi->dif0.file)->i_sb->s_stack_depth + 1; 653 - if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH) { 654 - erofs_err(sb, "maximum fs stacking depth exceeded"); 657 + inode = file_inode(sbi->dif0.file); 658 + if ((inode->i_sb->s_op == &erofs_sops && 659 + !inode->i_sb->s_bdev) || 660 + inode->i_sb->s_stack_depth) { 661 + erofs_err(sb, "file-backed mounts cannot be applied to stacked fses"); 655 662 return -ENOTBLK; 656 663 } 657 664 }
+3
fs/inode.c
··· 1593 1593 * @hashval: hash value (usually inode number) to search for 1594 1594 * @test: callback used for comparisons between inodes 1595 1595 * @data: opaque data pointer to pass to @test 1596 + * @isnew: return argument telling whether I_NEW was set when 1597 + * the inode was found in hash (the caller needs to 1598 + * wait for I_NEW to clear) 1596 1599 * 1597 1600 * Search for the inode specified by @hashval and @data in the inode cache. 1598 1601 * If the inode is in the cache, the inode is returned with an incremented
+35 -15
fs/iomap/buffered-io.c
··· 832 832 if (!mapping_large_folio_support(iter->inode->i_mapping)) 833 833 len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos)); 834 834 835 - if (iter->fbatch) { 835 + if (iter->iomap.flags & IOMAP_F_FOLIO_BATCH) { 836 836 struct folio *folio = folio_batch_next(iter->fbatch); 837 837 838 838 if (!folio) ··· 929 929 * process so return and let the caller iterate and refill the batch. 930 930 */ 931 931 if (!folio) { 932 - WARN_ON_ONCE(!iter->fbatch); 932 + WARN_ON_ONCE(!(iter->iomap.flags & IOMAP_F_FOLIO_BATCH)); 933 933 return 0; 934 934 } 935 935 ··· 1544 1544 return status; 1545 1545 } 1546 1546 1547 - loff_t 1547 + /** 1548 + * iomap_fill_dirty_folios - fill a folio batch with dirty folios 1549 + * @iter: Iteration structure 1550 + * @start: Start offset of range. Updated based on lookup progress. 1551 + * @end: End offset of range 1552 + * @iomap_flags: Flags to set on the associated iomap to track the batch. 1553 + * 1554 + * Returns the folio count directly. Also returns the associated control flag if 1555 + * the the batch lookup is performed and the expected offset of a subsequent 1556 + * lookup via out params. The caller is responsible to set the flag on the 1557 + * associated iomap. 1558 + */ 1559 + unsigned int 1548 1560 iomap_fill_dirty_folios( 1549 1561 struct iomap_iter *iter, 1550 - loff_t offset, 1551 - loff_t length) 1562 + loff_t *start, 1563 + loff_t end, 1564 + unsigned int *iomap_flags) 1552 1565 { 1553 1566 struct address_space *mapping = iter->inode->i_mapping; 1554 - pgoff_t start = offset >> PAGE_SHIFT; 1555 - pgoff_t end = (offset + length - 1) >> PAGE_SHIFT; 1567 + pgoff_t pstart = *start >> PAGE_SHIFT; 1568 + pgoff_t pend = (end - 1) >> PAGE_SHIFT; 1569 + unsigned int count; 1556 1570 1557 - iter->fbatch = kmalloc(sizeof(struct folio_batch), GFP_KERNEL); 1558 - if (!iter->fbatch) 1559 - return offset + length; 1560 - folio_batch_init(iter->fbatch); 1571 + if (!iter->fbatch) { 1572 + *start = end; 1573 + return 0; 1574 + } 1561 1575 1562 - filemap_get_folios_dirty(mapping, &start, end, iter->fbatch); 1563 - return (start << PAGE_SHIFT); 1576 + count = filemap_get_folios_dirty(mapping, &pstart, pend, iter->fbatch); 1577 + *start = (pstart << PAGE_SHIFT); 1578 + *iomap_flags |= IOMAP_F_FOLIO_BATCH; 1579 + return count; 1564 1580 } 1565 1581 EXPORT_SYMBOL_GPL(iomap_fill_dirty_folios); 1566 1582 ··· 1585 1569 const struct iomap_ops *ops, 1586 1570 const struct iomap_write_ops *write_ops, void *private) 1587 1571 { 1572 + struct folio_batch fbatch; 1588 1573 struct iomap_iter iter = { 1589 1574 .inode = inode, 1590 1575 .pos = pos, 1591 1576 .len = len, 1592 1577 .flags = IOMAP_ZERO, 1593 1578 .private = private, 1579 + .fbatch = &fbatch, 1594 1580 }; 1595 1581 struct address_space *mapping = inode->i_mapping; 1596 1582 int ret; 1597 1583 bool range_dirty; 1584 + 1585 + folio_batch_init(&fbatch); 1598 1586 1599 1587 /* 1600 1588 * To avoid an unconditional flush, check pagecache state and only flush ··· 1610 1590 while ((ret = iomap_iter(&iter, ops)) > 0) { 1611 1591 const struct iomap *srcmap = iomap_iter_srcmap(&iter); 1612 1592 1613 - if (WARN_ON_ONCE(iter.fbatch && 1593 + if (WARN_ON_ONCE((iter.iomap.flags & IOMAP_F_FOLIO_BATCH) && 1614 1594 srcmap->type != IOMAP_UNWRITTEN)) 1615 1595 return -EIO; 1616 1596 1617 - if (!iter.fbatch && 1597 + if (!(iter.iomap.flags & IOMAP_F_FOLIO_BATCH) && 1618 1598 (srcmap->type == IOMAP_HOLE || 1619 1599 srcmap->type == IOMAP_UNWRITTEN)) { 1620 1600 s64 status;
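[Editor's note] The iomap buffered-io changes move from an iterator-allocated folio batch to a caller-owned one on the stack, with the new IOMAP_F_FOLIO_BATCH flag recording whether it was filled; reset releases and reinitializes the batch instead of freeing it. A generic userspace sketch of that caller-owned-batch-plus-flag shape, with all types and names illustrative rather than the iomap API:

    #include <stdio.h>

    #define BATCH_MAX	15
    #define F_BATCH		0x1u	/* like IOMAP_F_FOLIO_BATCH */

    struct batch { int items[BATCH_MAX]; unsigned int nr; };

    static unsigned int fill_batch(struct batch *b, long *start, long end,
    			       unsigned int *flags)
    {
    	b->nr = 0;
    	while (*start < end && b->nr < BATCH_MAX) {
    		b->items[b->nr++] = (int)*start;
    		(*start)++;	/* report where the next lookup resumes */
    	}
    	*flags |= F_BATCH;
    	return b->nr;
    }

    static void reset_batch(struct batch *b, unsigned int *flags)
    {
    	if (*flags & F_BATCH) {
    		b->nr = 0;	/* release + reinit, no free */
    		*flags &= ~F_BATCH;
    	}
    }

    int main(void)
    {
    	struct batch b;		/* caller-owned, stack-allocated */
    	unsigned int flags = 0;
    	long start = 0;

    	while (start < 40) {
    		unsigned int n = fill_batch(&b, &start, 40, &flags);

    		printf("batch of %u, next start %ld\n", n, start);
    		reset_batch(&b, &flags);
    	}
    	return 0;
    }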
+3 -3
fs/iomap/iter.c
··· 8 8 9 9 static inline void iomap_iter_reset_iomap(struct iomap_iter *iter) 10 10 { 11 - if (iter->fbatch) { 11 + if (iter->iomap.flags & IOMAP_F_FOLIO_BATCH) { 12 12 folio_batch_release(iter->fbatch); 13 - kfree(iter->fbatch); 14 - iter->fbatch = NULL; 13 + folio_batch_reinit(iter->fbatch); 14 + iter->iomap.flags &= ~IOMAP_F_FOLIO_BATCH; 15 15 } 16 16 17 17 iter->status = 0;
+2 -2
fs/jffs2/wbuf.c
··· 2 2 * JFFS2 -- Journalling Flash File System, Version 2. 3 3 * 4 4 * Copyright © 2001-2007 Red Hat, Inc. 5 - * Copyright © 2004 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright © 2004 Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Created by David Woodhouse <dwmw2@infradead.org> 8 - * Modified debugged and enhanced by Thomas Gleixner <tglx@linutronix.de> 8 + * Modified debugged and enhanced by Thomas Gleixner <tglx@kernel.org> 9 9 * 10 10 * For licensing information, see the file 'LICENCE' in this directory. 11 11 *
+61 -58
fs/locks.c
··· 369 369 while (!list_empty(dispose)) { 370 370 flc = list_first_entry(dispose, struct file_lock_core, flc_list); 371 371 list_del_init(&flc->flc_list); 372 - if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) 373 - locks_free_lease(file_lease(flc)); 374 - else 375 - locks_free_lock(file_lock(flc)); 372 + locks_free_lock(file_lock(flc)); 373 + } 374 + } 375 + 376 + static void 377 + lease_dispose_list(struct list_head *dispose) 378 + { 379 + struct file_lock_core *flc; 380 + 381 + while (!list_empty(dispose)) { 382 + flc = list_first_entry(dispose, struct file_lock_core, flc_list); 383 + list_del_init(&flc->flc_list); 384 + locks_free_lease(file_lease(flc)); 376 385 } 377 386 } 378 387 ··· 585 576 __f_setown(filp, task_pid(current), PIDTYPE_TGID, 0); 586 577 } 587 578 579 + /** 580 + * lease_open_conflict - see if the given file points to an inode that has 581 + * an existing open that would conflict with the 582 + * desired lease. 583 + * @filp: file to check 584 + * @arg: type of lease that we're trying to acquire 585 + * 586 + * Check to see if there's an existing open fd on this file that would 587 + * conflict with the lease we're trying to set. 588 + */ 589 + static int 590 + lease_open_conflict(struct file *filp, const int arg) 591 + { 592 + struct inode *inode = file_inode(filp); 593 + int self_wcount = 0, self_rcount = 0; 594 + 595 + if (arg == F_RDLCK) 596 + return inode_is_open_for_write(inode) ? -EAGAIN : 0; 597 + else if (arg != F_WRLCK) 598 + return 0; 599 + 600 + /* 601 + * Make sure that only read/write count is from lease requestor. 602 + * Note that this will result in denying write leases when i_writecount 603 + * is negative, which is what we want. (We shouldn't grant write leases 604 + * on files open for execution.) 605 + */ 606 + if (filp->f_mode & FMODE_WRITE) 607 + self_wcount = 1; 608 + else if (filp->f_mode & FMODE_READ) 609 + self_rcount = 1; 610 + 611 + if (atomic_read(&inode->i_writecount) != self_wcount || 612 + atomic_read(&inode->i_readcount) != self_rcount) 613 + return -EAGAIN; 614 + 615 + return 0; 616 + } 617 + 588 618 static const struct lease_manager_operations lease_manager_ops = { 589 619 .lm_break = lease_break_callback, 590 620 .lm_change = lease_modify, 591 621 .lm_setup = lease_setup, 622 + .lm_open_conflict = lease_open_conflict, 592 623 }; 593 624 594 625 /* ··· 1669 1620 spin_unlock(&ctx->flc_lock); 1670 1621 percpu_up_read(&file_rwsem); 1671 1622 1672 - locks_dispose_list(&dispose); 1623 + lease_dispose_list(&dispose); 1673 1624 error = wait_event_interruptible_timeout(new_fl->c.flc_wait, 1674 1625 list_empty(&new_fl->c.flc_blocked_member), 1675 1626 break_time); ··· 1692 1643 out: 1693 1644 spin_unlock(&ctx->flc_lock); 1694 1645 percpu_up_read(&file_rwsem); 1695 - locks_dispose_list(&dispose); 1646 + lease_dispose_list(&dispose); 1696 1647 free_lock: 1697 1648 locks_free_lease(new_fl); 1698 1649 return error; ··· 1776 1727 spin_unlock(&ctx->flc_lock); 1777 1728 percpu_up_read(&file_rwsem); 1778 1729 1779 - locks_dispose_list(&dispose); 1730 + lease_dispose_list(&dispose); 1780 1731 } 1781 1732 return type; 1782 1733 } ··· 1791 1742 if (deleg->d_flags != 0 || deleg->__pad != 0) 1792 1743 return -EINVAL; 1793 1744 deleg->d_type = __fcntl_getlease(filp, FL_DELEG); 1794 - return 0; 1795 - } 1796 - 1797 - /** 1798 - * check_conflicting_open - see if the given file points to an inode that has 1799 - * an existing open that would conflict with the 1800 - * desired lease. 
1801 - * @filp: file to check 1802 - * @arg: type of lease that we're trying to acquire 1803 - * @flags: current lock flags 1804 - * 1805 - * Check to see if there's an existing open fd on this file that would 1806 - * conflict with the lease we're trying to set. 1807 - */ 1808 - static int 1809 - check_conflicting_open(struct file *filp, const int arg, int flags) 1810 - { 1811 - struct inode *inode = file_inode(filp); 1812 - int self_wcount = 0, self_rcount = 0; 1813 - 1814 - if (flags & FL_LAYOUT) 1815 - return 0; 1816 - if (flags & FL_DELEG) 1817 - /* We leave these checks to the caller */ 1818 - return 0; 1819 - 1820 - if (arg == F_RDLCK) 1821 - return inode_is_open_for_write(inode) ? -EAGAIN : 0; 1822 - else if (arg != F_WRLCK) 1823 - return 0; 1824 - 1825 - /* 1826 - * Make sure that only read/write count is from lease requestor. 1827 - * Note that this will result in denying write leases when i_writecount 1828 - * is negative, which is what we want. (We shouldn't grant write leases 1829 - * on files open for execution.) 1830 - */ 1831 - if (filp->f_mode & FMODE_WRITE) 1832 - self_wcount = 1; 1833 - else if (filp->f_mode & FMODE_READ) 1834 - self_rcount = 1; 1835 - 1836 - if (atomic_read(&inode->i_writecount) != self_wcount || 1837 - atomic_read(&inode->i_readcount) != self_rcount) 1838 - return -EAGAIN; 1839 - 1840 1745 return 0; 1841 1746 } 1842 1747 ··· 1830 1827 percpu_down_read(&file_rwsem); 1831 1828 spin_lock(&ctx->flc_lock); 1832 1829 time_out_leases(inode, &dispose); 1833 - error = check_conflicting_open(filp, arg, lease->c.flc_flags); 1830 + error = lease->fl_lmops->lm_open_conflict(filp, arg); 1834 1831 if (error) 1835 1832 goto out; 1836 1833 ··· 1887 1884 * precedes these checks. 1888 1885 */ 1889 1886 smp_mb(); 1890 - error = check_conflicting_open(filp, arg, lease->c.flc_flags); 1887 + error = lease->fl_lmops->lm_open_conflict(filp, arg); 1891 1888 if (error) { 1892 1889 locks_unlink_lock_ctx(&lease->c); 1893 1890 goto out; ··· 1899 1896 out: 1900 1897 spin_unlock(&ctx->flc_lock); 1901 1898 percpu_up_read(&file_rwsem); 1902 - locks_dispose_list(&dispose); 1899 + lease_dispose_list(&dispose); 1903 1900 if (is_deleg) 1904 1901 inode_unlock(inode); 1905 1902 if (!error && !my_fl) ··· 1935 1932 error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose); 1936 1933 spin_unlock(&ctx->flc_lock); 1937 1934 percpu_up_read(&file_rwsem); 1938 - locks_dispose_list(&dispose); 1935 + lease_dispose_list(&dispose); 1939 1936 return error; 1940 1937 } 1941 1938 ··· 2738 2735 spin_unlock(&ctx->flc_lock); 2739 2736 percpu_up_read(&file_rwsem); 2740 2737 2741 - locks_dispose_list(&dispose); 2738 + lease_dispose_list(&dispose); 2742 2739 } 2743 2740 2744 2741 /*
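The conflict check that used to live in check_conflicting_open() is now a per-manager operation: the generic lease_open_conflict() above keeps the old read/write-count logic for userspace leases, while managers that validate opens at a higher level stub the hook out (as the nfsd hunks below do). A minimal sketch of such a manager; every example_* name here is hypothetical:

	/* No extra open-vs-lease checking: conflicts are handled elsewhere. */
	static int example_lm_open_conflict(struct file *filp, int arg)
	{
		return 0;
	}

	static const struct lease_manager_operations example_lease_ops = {
		.lm_break	  = example_lm_break,	/* assumed defined elsewhere */
		.lm_change	  = example_lm_change,	/* assumed defined elsewhere */
		.lm_open_conflict = example_lm_open_conflict,
	};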
+15 -6
fs/namei.c
··· 830 830 static bool legitimize_links(struct nameidata *nd) 831 831 { 832 832 int i; 833 - if (unlikely(nd->flags & LOOKUP_CACHED)) { 834 - drop_links(nd); 835 - nd->depth = 0; 836 - return false; 837 - } 833 + 834 + VFS_BUG_ON(nd->flags & LOOKUP_CACHED); 835 + 838 836 for (i = 0; i < nd->depth; i++) { 839 837 struct saved *last = nd->stack + i; 840 838 if (unlikely(!legitimize_path(nd, &last->link, last->seq))) { ··· 881 883 882 884 BUG_ON(!(nd->flags & LOOKUP_RCU)); 883 885 886 + if (unlikely(nd->flags & LOOKUP_CACHED)) { 887 + drop_links(nd); 888 + nd->depth = 0; 889 + goto out1; 890 + } 884 891 if (unlikely(nd->depth && !legitimize_links(nd))) 885 892 goto out1; 886 893 if (unlikely(!legitimize_path(nd, &nd->path, nd->seq))) ··· 921 918 int res; 922 919 BUG_ON(!(nd->flags & LOOKUP_RCU)); 923 920 921 + if (unlikely(nd->flags & LOOKUP_CACHED)) { 922 + drop_links(nd); 923 + nd->depth = 0; 924 + goto out2; 925 + } 924 926 if (unlikely(nd->depth && !legitimize_links(nd))) 925 927 goto out2; 926 928 res = __legitimize_mnt(nd->path.mnt, nd->m_seq); ··· 2844 2836 } 2845 2837 2846 2838 /** 2847 - * start_dirop - begin a create or remove dirop, performing locking and lookup 2839 + * __start_dirop - begin a create or remove dirop, performing locking and lookup 2848 2840 * @parent: the dentry of the parent in which the operation will occur 2849 2841 * @name: a qstr holding the name within that parent 2850 2842 * @lookup_flags: intent and other lookup flags. 2843 + * @state: task state bitmask 2851 2844 * 2852 2845 * The lookup is performed and necessary locks are taken so that, on success, 2853 2846 * the returned dentry can be operated on safely.
+1 -1
fs/netfs/read_collect.c
··· 137 137 rreq->front_folio_order = order; 138 138 fsize = PAGE_SIZE << order; 139 139 fpos = folio_pos(folio); 140 - fend = umin(fpos + fsize, rreq->i_size); 140 + fend = fpos + fsize; 141 141 142 142 trace_netfs_collect_folio(rreq, folio, fend, collected_to); 143 143
-1
fs/nfs_common/common.c
··· 17 17 { NFSERR_NOENT, -ENOENT }, 18 18 { NFSERR_IO, -EIO }, 19 19 { NFSERR_NXIO, -ENXIO }, 20 - /* { NFSERR_EAGAIN, -EAGAIN }, */ 21 20 { NFSERR_ACCES, -EACCES }, 22 21 { NFSERR_EXIST, -EEXIST }, 23 22 { NFSERR_XDEV, -EXDEV },
+2
fs/nfsd/netns.h
··· 66 66 67 67 struct lock_manager nfsd4_manager; 68 68 bool grace_ended; 69 + bool grace_end_forced; 70 + bool client_tracking_active; 69 71 time64_t boot_time; 70 72 71 73 struct dentry *nfsd_client_dir;
+21 -2
fs/nfsd/nfs4layouts.c
··· 764 764 return lease_modify(onlist, arg, dispose); 765 765 } 766 766 767 + /** 768 + * nfsd4_layout_lm_open_conflict - see if the given file points to an inode that has 769 + * an existing open that would conflict with the 770 + * desired lease. 771 + * @filp: file to check 772 + * @arg: type of lease that we're trying to acquire 773 + * 774 + * The kernel will call into this operation to determine whether there 775 + * are conflicting opens that may prevent the layout from being granted. 776 + * For nfsd, that check is done at a higher level, so this trivially 777 + * returns 0. 778 + */ 779 + static int 780 + nfsd4_layout_lm_open_conflict(struct file *filp, int arg) 781 + { 782 + return 0; 783 + } 784 + 767 785 static const struct lease_manager_operations nfsd4_layouts_lm_ops = { 768 - .lm_break = nfsd4_layout_lm_break, 769 - .lm_change = nfsd4_layout_lm_change, 786 + .lm_break = nfsd4_layout_lm_break, 787 + .lm_change = nfsd4_layout_lm_change, 788 + .lm_open_conflict = nfsd4_layout_lm_open_conflict, 770 789 }; 771 790 772 791 int
+1 -1
fs/nfsd/nfs4proc.c
··· 1502 1502 (schedule_timeout(20*HZ) == 0)) { 1503 1503 finish_wait(&nn->nfsd_ssc_waitq, &wait); 1504 1504 kfree(work); 1505 - return nfserr_eagain; 1505 + return nfserr_jukebox; 1506 1506 } 1507 1507 finish_wait(&nn->nfsd_ssc_waitq, &wait); 1508 1508 goto try_again;
+62 -6
fs/nfsd/nfs4state.c
··· 84 84 /* forward declarations */ 85 85 static bool check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner); 86 86 static void nfs4_free_ol_stateid(struct nfs4_stid *stid); 87 - void nfsd4_end_grace(struct nfsd_net *nn); 87 + static void nfsd4_end_grace(struct nfsd_net *nn); 88 88 static void _free_cpntf_state_locked(struct nfsd_net *nn, struct nfs4_cpntf_state *cps); 89 89 static void nfsd4_file_hash_remove(struct nfs4_file *fi); 90 90 static void deleg_reaper(struct nfsd_net *nn); ··· 1759 1759 1760 1760 /** 1761 1761 * nfsd4_revoke_states - revoke all nfsv4 states associated with given filesystem 1762 - * @net: used to identify instance of nfsd (there is one per net namespace) 1762 + * @nn: used to identify instance of nfsd (there is one per net namespace) 1763 1763 * @sb: super_block used to identify target filesystem 1764 1764 * 1765 1765 * All nfs4 states (open, lock, delegation, layout) held by the server instance ··· 1771 1771 * The clients which own the states will subsequently be notified that the 1772 1772 * states have been "admin-revoked". 1773 1773 */ 1774 - void nfsd4_revoke_states(struct net *net, struct super_block *sb) 1774 + void nfsd4_revoke_states(struct nfsd_net *nn, struct super_block *sb) 1775 1775 { 1776 - struct nfsd_net *nn = net_generic(net, nfsd_net_id); 1777 1776 unsigned int idhashval; 1778 1777 unsigned int sc_types; 1779 1778 1780 1779 sc_types = SC_TYPE_OPEN | SC_TYPE_LOCK | SC_TYPE_DELEG | SC_TYPE_LAYOUT; 1781 1780 1782 1781 spin_lock(&nn->client_lock); 1783 - for (idhashval = 0; idhashval < CLIENT_HASH_MASK; idhashval++) { 1782 + for (idhashval = 0; idhashval < CLIENT_HASH_SIZE; idhashval++) { 1784 1783 struct list_head *head = &nn->conf_id_hashtbl[idhashval]; 1785 1784 struct nfs4_client *clp; 1786 1785 retry: ··· 5555 5556 return -EAGAIN; 5556 5557 } 5557 5558 5559 + /** 5560 + * nfsd4_deleg_lm_open_conflict - see if the given file points to an inode that has 5561 + * an existing open that would conflict with the 5562 + * desired lease. 5563 + * @filp: file to check 5564 + * @arg: type of lease that we're trying to acquire 5565 + * 5566 + * The kernel will call into this operation to determine whether there 5567 + * are conflicting opens that may prevent the deleg from being granted. 5568 + * For nfsd, that check is done at a higher level, so this trivially 5569 + * returns 0. 5570 + */ 5571 + static int 5572 + nfsd4_deleg_lm_open_conflict(struct file *filp, int arg) 5573 + { 5574 + return 0; 5575 + } 5576 + 5558 5577 static const struct lease_manager_operations nfsd_lease_mng_ops = { 5559 5578 .lm_breaker_owns_lease = nfsd_breaker_owns_lease, 5560 5579 .lm_break = nfsd_break_deleg_cb, 5561 5580 .lm_change = nfsd_change_deleg_cb, 5581 + .lm_open_conflict = nfsd4_deleg_lm_open_conflict, 5562 5582 }; 5563 5583 5564 5584 static __be32 nfsd4_check_seqid(struct nfsd4_compound_state *cstate, struct nfs4_stateowner *so, u32 seqid) ··· 6588 6570 return nfs_ok; 6589 6571 } 6590 6572 6591 - void 6573 + static void 6592 6574 nfsd4_end_grace(struct nfsd_net *nn) 6593 6575 { 6594 6576 /* do nothing if grace period already ended */ ··· 6621 6603 */ 6622 6604 } 6623 6605 6606 + /** 6607 + * nfsd4_force_end_grace - forcibly end the NFSv4 grace period 6608 + * @nn: network namespace for the server instance to be updated 6609 + * 6610 + * Forces bypass of normal grace period completion, then schedules 6611 + * the laundromat to end the grace period immediately. Does not wait 6612 + * for the grace period to fully terminate before returning.
6613 + * 6614 + * Return values: 6615 + * %true: Grace termination scheduled 6616 + * %false: No action was taken 6617 + */ 6618 + bool nfsd4_force_end_grace(struct nfsd_net *nn) 6619 + { 6620 + if (!nn->client_tracking_ops) 6621 + return false; 6622 + spin_lock(&nn->client_lock); 6623 + if (nn->grace_ended || !nn->client_tracking_active) { 6624 + spin_unlock(&nn->client_lock); 6625 + return false; 6626 + } 6627 + WRITE_ONCE(nn->grace_end_forced, true); 6628 + mod_delayed_work(laundry_wq, &nn->laundromat_work, 0); 6629 + spin_unlock(&nn->client_lock); 6630 + return true; 6631 + } 6632 + 6624 6633 /* 6625 6634 * If we've waited a lease period but there are still clients trying to 6626 6635 * reclaim, wait a little longer to give them a chance to finish. ··· 6657 6612 time64_t double_grace_period_end = nn->boot_time + 6658 6613 2 * nn->nfsd4_lease; 6659 6614 6615 + if (READ_ONCE(nn->grace_end_forced)) 6616 + return false; 6660 6617 if (nn->track_reclaim_completes && 6661 6618 atomic_read(&nn->nr_reclaim_complete) == 6662 6619 nn->reclaim_str_hashtbl_size) ··· 8979 8932 nn->unconf_name_tree = RB_ROOT; 8980 8933 nn->boot_time = ktime_get_real_seconds(); 8981 8934 nn->grace_ended = false; 8935 + nn->grace_end_forced = false; 8936 + nn->client_tracking_active = false; 8982 8937 nn->nfsd4_manager.block_opens = true; 8983 8938 INIT_LIST_HEAD(&nn->nfsd4_manager.list); 8984 8939 INIT_LIST_HEAD(&nn->client_lru); ··· 9061 9012 return ret; 9062 9013 locks_start_grace(net, &nn->nfsd4_manager); 9063 9014 nfsd4_client_tracking_init(net); 9015 + /* safe for laundromat to run now */ 9016 + spin_lock(&nn->client_lock); 9017 + nn->client_tracking_active = true; 9018 + spin_unlock(&nn->client_lock); 9064 9019 if (nn->track_reclaim_completes && nn->reclaim_str_hashtbl_size == 0) 9065 9020 goto skip_grace; 9066 9021 printk(KERN_INFO "NFSD: starting %lld-second grace period (net %x)\n", ··· 9113 9060 9114 9061 shrinker_free(nn->nfsd_client_shrinker); 9115 9062 cancel_work_sync(&nn->nfsd_shrinker_work); 9063 + spin_lock(&nn->client_lock); 9064 + nn->client_tracking_active = false; 9065 + spin_unlock(&nn->client_lock); 9116 9066 cancel_delayed_work_sync(&nn->laundromat_work); 9117 9067 locks_end_grace(&nn->nfsd4_manager); 9118 9068
+9 -3
fs/nfsd/nfsctl.c
··· 259 259 struct path path; 260 260 char *fo_path; 261 261 int error; 262 + struct nfsd_net *nn; 262 263 263 264 /* sanity check */ 264 265 if (size == 0) ··· 286 285 * 3. Is that directory the root of an exported file system? 287 286 */ 288 287 error = nlmsvc_unlock_all_by_sb(path.dentry->d_sb); 289 - nfsd4_revoke_states(netns(file), path.dentry->d_sb); 288 + mutex_lock(&nfsd_mutex); 289 + nn = net_generic(netns(file), nfsd_net_id); 290 + if (nn->nfsd_serv) 291 + nfsd4_revoke_states(nn, path.dentry->d_sb); 292 + else 293 + error = -EINVAL; 294 + mutex_unlock(&nfsd_mutex); 290 295 291 296 path_put(&path); 292 297 return error; ··· 1089 1082 case 'Y': 1090 1083 case 'y': 1091 1084 case '1': 1092 - if (!nn->nfsd_serv) 1085 + if (!nfsd4_force_end_grace(nn)) 1093 1086 return -EBUSY; 1094 1087 trace_nfsd_end_grace(netns(file)); 1095 - nfsd4_end_grace(nn); 1096 1088 break; 1097 1089 default: 1098 1090 return -EINVAL;
-1
fs/nfsd/nfsd.h
··· 233 233 #define nfserr_noent cpu_to_be32(NFSERR_NOENT) 234 234 #define nfserr_io cpu_to_be32(NFSERR_IO) 235 235 #define nfserr_nxio cpu_to_be32(NFSERR_NXIO) 236 - #define nfserr_eagain cpu_to_be32(NFSERR_EAGAIN) 237 236 #define nfserr_acces cpu_to_be32(NFSERR_ACCES) 238 237 #define nfserr_exist cpu_to_be32(NFSERR_EXIST) 239 238 #define nfserr_xdev cpu_to_be32(NFSERR_XDEV)
+14 -14
fs/nfsd/nfssvc.c
··· 406 406 { 407 407 struct nfsd_net *nn = net_generic(net, nfsd_net_id); 408 408 409 - if (!nn->nfsd_net_up) 410 - return; 409 + if (nn->nfsd_net_up) { 410 + percpu_ref_kill_and_confirm(&nn->nfsd_net_ref, nfsd_net_done); 411 + wait_for_completion(&nn->nfsd_net_confirm_done); 411 412 412 - percpu_ref_kill_and_confirm(&nn->nfsd_net_ref, nfsd_net_done); 413 - wait_for_completion(&nn->nfsd_net_confirm_done); 414 - 415 - nfsd_export_flush(net); 416 - nfs4_state_shutdown_net(net); 417 - nfsd_reply_cache_shutdown(nn); 418 - nfsd_file_cache_shutdown_net(net); 419 - if (nn->lockd_up) { 420 - lockd_down(net); 421 - nn->lockd_up = false; 413 + nfsd_export_flush(net); 414 + nfs4_state_shutdown_net(net); 415 + nfsd_reply_cache_shutdown(nn); 416 + nfsd_file_cache_shutdown_net(net); 417 + if (nn->lockd_up) { 418 + lockd_down(net); 419 + nn->lockd_up = false; 420 + } 421 + wait_for_completion(&nn->nfsd_net_free_done); 422 422 } 423 423 424 - wait_for_completion(&nn->nfsd_net_free_done); 425 424 percpu_ref_exit(&nn->nfsd_net_ref); 426 425 426 + if (nn->nfsd_net_up) 427 + nfsd_shutdown_generic(); 427 428 nn->nfsd_net_up = false; 428 - nfsd_shutdown_generic(); 429 429 } 430 430 431 431 static DEFINE_SPINLOCK(nfsd_notifier_lock);
+3 -3
fs/nfsd/state.h
··· 841 841 struct nfsd_file *find_any_file(struct nfs4_file *f); 842 842 843 843 #ifdef CONFIG_NFSD_V4 844 - void nfsd4_revoke_states(struct net *net, struct super_block *sb); 844 + void nfsd4_revoke_states(struct nfsd_net *nn, struct super_block *sb); 845 845 #else 846 - static inline void nfsd4_revoke_states(struct net *net, struct super_block *sb) 846 + static inline void nfsd4_revoke_states(struct nfsd_net *nn, struct super_block *sb) 847 847 { 848 848 } 849 849 #endif 850 850 851 851 /* grace period management */ 852 - void nfsd4_end_grace(struct nfsd_net *nn); 852 + bool nfsd4_force_end_grace(struct nfsd_net *nn); 853 853 854 854 /* nfs4recover operations */ 855 855 extern int nfsd4_client_tracking_init(struct net *net);
+2 -2
fs/nfsd/vfs.c
··· 2865 2865 2866 2866 /* Allow read access to binaries even when mode 111 */ 2867 2867 if (err == -EACCES && S_ISREG(inode->i_mode) && 2868 - (acc == (NFSD_MAY_READ | NFSD_MAY_OWNER_OVERRIDE) || 2869 - acc == (NFSD_MAY_READ | NFSD_MAY_READ_IF_EXEC))) 2868 + (((acc & NFSD_MAY_MASK) == NFSD_MAY_READ) && 2869 + (acc & (NFSD_MAY_OWNER_OVERRIDE | NFSD_MAY_READ_IF_EXEC)))) 2870 2870 err = inode_permission(&nop_mnt_idmap, inode, MAY_EXEC); 2871 2871 2872 2872 return err? nfserrno(err) : 0;
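The equality tests were too strict once a request carried more than the single expected modifier bit. A worked illustration, assuming NFSD_MAY_MASK isolates the basic access-type bits as the hunk implies:

	/*
	 * acc = NFSD_MAY_READ | NFSD_MAY_OWNER_OVERRIDE | NFSD_MAY_READ_IF_EXEC;
	 *
	 * Old: acc == (NFSD_MAY_READ | NFSD_MAY_OWNER_OVERRIDE) -> false
	 *      acc == (NFSD_MAY_READ | NFSD_MAY_READ_IF_EXEC)   -> false
	 *      so the mode-0111 read fallback was skipped.
	 *
	 * New: (acc & NFSD_MAY_MASK) == NFSD_MAY_READ            -> true
	 *      acc & (NFSD_MAY_OWNER_OVERRIDE |
	 *             NFSD_MAY_READ_IF_EXEC)                     -> true
	 *      so the fallback applies.
	 */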
+18
fs/pidfs.c
··· 517 517 switch (cmd) { 518 518 /* Namespaces that hang of nsproxy. */ 519 519 case PIDFD_GET_CGROUP_NAMESPACE: 520 + #ifdef CONFIG_CGROUPS 520 521 if (!ns_ref_get(nsp->cgroup_ns)) 521 522 break; 522 523 ns_common = to_ns_common(nsp->cgroup_ns); 524 + #endif 523 525 break; 524 526 case PIDFD_GET_IPC_NAMESPACE: 527 + #ifdef CONFIG_IPC_NS 525 528 if (!ns_ref_get(nsp->ipc_ns)) 526 529 break; 527 530 ns_common = to_ns_common(nsp->ipc_ns); 531 + #endif 528 532 break; 529 533 case PIDFD_GET_MNT_NAMESPACE: 530 534 if (!ns_ref_get(nsp->mnt_ns)) ··· 536 532 ns_common = to_ns_common(nsp->mnt_ns); 537 533 break; 538 534 case PIDFD_GET_NET_NAMESPACE: 535 + #ifdef CONFIG_NET_NS 539 536 if (!ns_ref_get(nsp->net_ns)) 540 537 break; 541 538 ns_common = to_ns_common(nsp->net_ns); 539 + #endif 542 540 break; 543 541 case PIDFD_GET_PID_FOR_CHILDREN_NAMESPACE: 542 + #ifdef CONFIG_PID_NS 544 543 if (!ns_ref_get(nsp->pid_ns_for_children)) 545 544 break; 546 545 ns_common = to_ns_common(nsp->pid_ns_for_children); 546 + #endif 547 547 break; 548 548 case PIDFD_GET_TIME_NAMESPACE: 549 + #ifdef CONFIG_TIME_NS 549 550 if (!ns_ref_get(nsp->time_ns)) 550 551 break; 551 552 ns_common = to_ns_common(nsp->time_ns); 553 + #endif 552 554 break; 553 555 case PIDFD_GET_TIME_FOR_CHILDREN_NAMESPACE: 556 + #ifdef CONFIG_TIME_NS 554 557 if (!ns_ref_get(nsp->time_ns_for_children)) 555 558 break; 556 559 ns_common = to_ns_common(nsp->time_ns_for_children); 560 + #endif 557 561 break; 558 562 case PIDFD_GET_UTS_NAMESPACE: 563 + #ifdef CONFIG_UTS_NS 559 564 if (!ns_ref_get(nsp->uts_ns)) 560 565 break; 561 566 ns_common = to_ns_common(nsp->uts_ns); 567 + #endif 562 568 break; 563 569 /* Namespaces that don't hang of nsproxy. */ 564 570 case PIDFD_GET_USER_NAMESPACE: 571 + #ifdef CONFIG_USER_NS 565 572 scoped_guard(rcu) { 566 573 struct user_namespace *user_ns; 567 574 ··· 581 566 break; 582 567 ns_common = to_ns_common(user_ns); 583 568 } 569 + #endif 584 570 break; 585 571 case PIDFD_GET_PID_NAMESPACE: 572 + #ifdef CONFIG_PID_NS 586 573 scoped_guard(rcu) { 587 574 struct pid_namespace *pid_ns; 588 575 ··· 593 576 break; 594 577 ns_common = to_ns_common(pid_ns); 595 578 } 579 + #endif 596 580 break; 597 581 default: 598 582 return -ENOIOCTLCMD;
+6 -5
fs/xfs/xfs_iomap.c
··· 1831 1831 */ 1832 1832 if (flags & IOMAP_ZERO) { 1833 1833 xfs_fileoff_t eof_fsb = XFS_B_TO_FSB(mp, XFS_ISIZE(ip)); 1834 - u64 end; 1835 1834 1836 1835 if (isnullstartblock(imap.br_startblock) && 1837 1836 offset_fsb >= eof_fsb) ··· 1850 1851 */ 1851 1852 if (imap.br_state == XFS_EXT_UNWRITTEN && 1852 1853 offset_fsb < eof_fsb) { 1853 - loff_t len = min(count, 1854 - XFS_FSB_TO_B(mp, imap.br_blockcount)); 1854 + loff_t foffset = offset, fend; 1855 1855 1856 - end = iomap_fill_dirty_folios(iter, offset, len); 1856 + fend = offset + 1857 + min(count, XFS_FSB_TO_B(mp, imap.br_blockcount)); 1858 + iomap_fill_dirty_folios(iter, &foffset, fend, 1859 + &iomap_flags); 1857 1860 end_fsb = min_t(xfs_fileoff_t, end_fsb, 1858 - XFS_B_TO_FSB(mp, end)); 1861 + XFS_B_TO_FSB(mp, foffset)); 1859 1862 } 1860 1863 1861 1864 xfs_trim_extent(&imap, offset_fsb, end_fsb - offset_fsb);
+1 -1
include/acpi/acpi_drivers.h
··· 51 51 52 52 int acpi_irq_penalty_init(void); 53 53 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering, 54 - int *polarity, char **name); 54 + int *polarity, char **name, u32 *gsi); 55 55 int acpi_pci_link_free_irq(acpi_handle handle); 56 56 57 57 /* ACPI PCI Device Binding */
+22
include/drm/drm_atomic_helper.h
··· 60 60 int drm_atomic_helper_check_planes(struct drm_device *dev, 61 61 struct drm_atomic_state *state); 62 62 int drm_atomic_helper_check_crtc_primary_plane(struct drm_crtc_state *crtc_state); 63 + void drm_atomic_helper_commit_encoder_bridge_disable(struct drm_device *dev, 64 + struct drm_atomic_state *state); 65 + void drm_atomic_helper_commit_crtc_disable(struct drm_device *dev, 66 + struct drm_atomic_state *state); 67 + void drm_atomic_helper_commit_encoder_bridge_post_disable(struct drm_device *dev, 68 + struct drm_atomic_state *state); 63 69 int drm_atomic_helper_check(struct drm_device *dev, 64 70 struct drm_atomic_state *state); 65 71 void drm_atomic_helper_commit_tail(struct drm_atomic_state *state); ··· 95 89 void 96 90 drm_atomic_helper_calc_timestamping_constants(struct drm_atomic_state *state); 97 91 92 + void drm_atomic_helper_commit_crtc_set_mode(struct drm_device *dev, 93 + struct drm_atomic_state *state); 94 + 98 95 void drm_atomic_helper_commit_modeset_disables(struct drm_device *dev, 99 96 struct drm_atomic_state *state); 97 + 98 + void drm_atomic_helper_commit_writebacks(struct drm_device *dev, 99 + struct drm_atomic_state *state); 100 + 101 + void drm_atomic_helper_commit_encoder_bridge_pre_enable(struct drm_device *dev, 102 + struct drm_atomic_state *state); 103 + 104 + void drm_atomic_helper_commit_crtc_enable(struct drm_device *dev, 105 + struct drm_atomic_state *state); 106 + 107 + void drm_atomic_helper_commit_encoder_bridge_enable(struct drm_device *dev, 108 + struct drm_atomic_state *state); 109 + 100 110 void drm_atomic_helper_commit_modeset_enables(struct drm_device *dev, 101 111 struct drm_atomic_state *old_state); 102 112
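These newly exported phase helpers let a driver open-code its commit tail and slot driver-private work between the phases. A hedged sketch; the ordering below is an assumption that the split helpers are meant to be called in the same sequence that drm_atomic_helper_commit_modeset_disables() and drm_atomic_helper_commit_modeset_enables() perform those phases internally, which the header alone does not pin down:

	static void example_commit_tail(struct drm_atomic_state *state)
	{
		struct drm_device *dev = state->dev;

		/* Disable phase, decomposed. */
		drm_atomic_helper_commit_encoder_bridge_disable(dev, state);
		drm_atomic_helper_commit_crtc_disable(dev, state);
		drm_atomic_helper_commit_encoder_bridge_post_disable(dev, state);

		/* Driver-private work (reclocking, bus reconfiguration) fits here. */

		drm_atomic_helper_commit_crtc_set_mode(dev, state);
		drm_atomic_helper_commit_planes(dev, state, 0);

		/* Enable phase, decomposed (assumed order, see note above). */
		drm_atomic_helper_commit_crtc_enable(dev, state);
		drm_atomic_helper_commit_encoder_bridge_pre_enable(dev, state);
		drm_atomic_helper_commit_encoder_bridge_enable(dev, state);
		drm_atomic_helper_commit_writebacks(dev, state);

		drm_atomic_helper_commit_hw_done(state);
		drm_atomic_helper_wait_for_vblanks(dev, state);
		drm_atomic_helper_cleanup_planes(dev, state);
	}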
+66 -183
include/drm/drm_bridge.h
··· 176 176 /** 177 177 * @disable: 178 178 * 179 - * The @disable callback should disable the bridge. 179 + * This callback should disable the bridge. It is called right before 180 + * the preceding element in the display pipe is disabled. If the 181 + * preceding element is a bridge this means it's called before that 182 + * bridge's @disable vfunc. If the preceding element is a &drm_encoder 183 + * it's called right before the &drm_encoder_helper_funcs.disable, 184 + * &drm_encoder_helper_funcs.prepare or &drm_encoder_helper_funcs.dpms 185 + * hook. 180 186 * 181 187 * The bridge can assume that the display pipe (i.e. clocks and timing 182 188 * signals) feeding it is still running when this callback is called. 183 - * 184 - * 185 - * If the preceding element is a &drm_bridge, then this is called before 186 - * that bridge is disabled via one of: 187 - * 188 - * - &drm_bridge_funcs.disable 189 - * - &drm_bridge_funcs.atomic_disable 190 - * 191 - * If the preceding element of the bridge is a display controller, then 192 - * this callback is called before the encoder is disabled via one of: 193 - * 194 - * - &drm_encoder_helper_funcs.atomic_disable 195 - * - &drm_encoder_helper_funcs.prepare 196 - * - &drm_encoder_helper_funcs.disable 197 - * - &drm_encoder_helper_funcs.dpms 198 - * 199 - * and the CRTC is disabled via one of: 200 - * 201 - * - &drm_crtc_helper_funcs.prepare 202 - * - &drm_crtc_helper_funcs.atomic_disable 203 - * - &drm_crtc_helper_funcs.disable 204 - * - &drm_crtc_helper_funcs.dpms. 205 189 * 206 190 * The @disable callback is optional. 207 191 * ··· 199 215 /** 200 216 * @post_disable: 201 217 * 218 + * This callback should disable the bridge. It is called right after the 219 + * preceding element in the display pipe is disabled. If the preceding 220 + * element is a bridge this means it's called after that bridge's 221 + * @post_disable function. If the preceding element is a &drm_encoder 222 + * it's called right after the encoder's 223 + * &drm_encoder_helper_funcs.disable, &drm_encoder_helper_funcs.prepare 224 + * or &drm_encoder_helper_funcs.dpms hook. 225 + * 202 226 * The bridge must assume that the display pipe (i.e. clocks and timing 203 - * signals) feeding this bridge is no longer running when the 204 - * @post_disable is called. 205 - * 206 - * This callback should perform all the actions required by the hardware 207 - * after it has stopped receiving signals from the preceding element. 208 - * 209 - * If the preceding element is a &drm_bridge, then this is called after 210 - * that bridge is post-disabled (unless marked otherwise by the 211 - * @pre_enable_prev_first flag) via one of: 212 - * 213 - * - &drm_bridge_funcs.post_disable 214 - * - &drm_bridge_funcs.atomic_post_disable 215 - * 216 - * If the preceding element of the bridge is a display controller, then 217 - * this callback is called after the encoder is disabled via one of: 218 - * 219 - * - &drm_encoder_helper_funcs.atomic_disable 220 - * - &drm_encoder_helper_funcs.prepare 221 - * - &drm_encoder_helper_funcs.disable 222 - * - &drm_encoder_helper_funcs.dpms 223 - * 224 - * and the CRTC is disabled via one of: 225 - * 226 - * - &drm_crtc_helper_funcs.prepare 227 - * - &drm_crtc_helper_funcs.atomic_disable 228 - * - &drm_crtc_helper_funcs.disable 229 - * - &drm_crtc_helper_funcs.dpms 227 + * signals) feeding it is no longer running when this callback is 228 + * called. 230 229 * 231 230 * The @post_disable callback is optional. 
232 231 * ··· 252 285 /** 253 286 * @pre_enable: 254 287 * 288 + * This callback should enable the bridge. It is called right before 289 + * the preceding element in the display pipe is enabled. If the 290 + * preceding element is a bridge this means it's called before that 291 + * bridge's @pre_enable function. If the preceding element is a 292 + * &drm_encoder it's called right before the encoder's 293 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 294 + * &drm_encoder_helper_funcs.dpms hook. 295 + * 255 296 * The display pipe (i.e. clocks and timing signals) feeding this bridge 256 - * will not yet be running when the @pre_enable is called. 257 - * 258 - * This callback should perform all the necessary actions to prepare the 259 - * bridge to accept signals from the preceding element. 260 - * 261 - * If the preceding element is a &drm_bridge, then this is called before 262 - * that bridge is pre-enabled (unless marked otherwise by 263 - * @pre_enable_prev_first flag) via one of: 264 - * 265 - * - &drm_bridge_funcs.pre_enable 266 - * - &drm_bridge_funcs.atomic_pre_enable 267 - * 268 - * If the preceding element of the bridge is a display controller, then 269 - * this callback is called before the CRTC is enabled via one of: 270 - * 271 - * - &drm_crtc_helper_funcs.atomic_enable 272 - * - &drm_crtc_helper_funcs.commit 273 - * 274 - * and the encoder is enabled via one of: 275 - * 276 - * - &drm_encoder_helper_funcs.atomic_enable 277 - * - &drm_encoder_helper_funcs.enable 278 - * - &drm_encoder_helper_funcs.commit 297 + * will not yet be running when this callback is called. The bridge must 298 + * not enable the display link feeding the next bridge in the chain (if 299 + * there is one) when this callback is called. 279 300 * 280 301 * The @pre_enable callback is optional. 281 302 * ··· 277 322 /** 278 323 * @enable: 279 324 * 280 - * The @enable callback should enable the bridge. 325 + * This callback should enable the bridge. It is called right after 326 + * the preceding element in the display pipe is enabled. If the 327 + * preceding element is a bridge this means it's called after that 328 + * bridge's @enable function. If the preceding element is a 329 + * &drm_encoder it's called right after the encoder's 330 + * &drm_encoder_helper_funcs.enable, &drm_encoder_helper_funcs.commit or 331 + * &drm_encoder_helper_funcs.dpms hook. 281 332 * 282 333 * The bridge can assume that the display pipe (i.e. clocks and timing 283 334 * signals) feeding it is running when this callback is called. This 284 335 * callback must enable the display link feeding the next bridge in the 285 336 * chain if there is one. 286 - * 287 - * If the preceding element is a &drm_bridge, then this is called after 288 - * that bridge is enabled via one of: 289 - * 290 - * - &drm_bridge_funcs.enable 291 - * - &drm_bridge_funcs.atomic_enable 292 - * 293 - * If the preceding element of the bridge is a display controller, then 294 - * this callback is called after the CRTC is enabled via one of: 295 - * 296 - * - &drm_crtc_helper_funcs.atomic_enable 297 - * - &drm_crtc_helper_funcs.commit 298 - * 299 - * and the encoder is enabled via one of: 300 - * 301 - * - &drm_encoder_helper_funcs.atomic_enable 302 - * - &drm_encoder_helper_funcs.enable 303 - * - drm_encoder_helper_funcs.commit 304 337 * 305 338 * The @enable callback is optional. 306 339 * ··· 302 359 /** 303 360 * @atomic_pre_enable: 304 361 * 362 + * This callback should enable the bridge. 
It is called right before 363 + * the preceding element in the display pipe is enabled. If the 364 + * preceding element is a bridge this means it's called before that 365 + * bridge's @atomic_pre_enable or @pre_enable function. If the preceding 366 + * element is a &drm_encoder it's called right before the encoder's 367 + * &drm_encoder_helper_funcs.atomic_enable hook. 368 + * 305 369 * The display pipe (i.e. clocks and timing signals) feeding this bridge 306 - * will not yet be running when the @atomic_pre_enable is called. 307 - * 308 - * This callback should perform all the necessary actions to prepare the 309 - * bridge to accept signals from the preceding element. 310 - * 311 - * If the preceding element is a &drm_bridge, then this is called before 312 - * that bridge is pre-enabled (unless marked otherwise by 313 - * @pre_enable_prev_first flag) via one of: 314 - * 315 - * - &drm_bridge_funcs.pre_enable 316 - * - &drm_bridge_funcs.atomic_pre_enable 317 - * 318 - * If the preceding element of the bridge is a display controller, then 319 - * this callback is called before the CRTC is enabled via one of: 320 - * 321 - * - &drm_crtc_helper_funcs.atomic_enable 322 - * - &drm_crtc_helper_funcs.commit 323 - * 324 - * and the encoder is enabled via one of: 325 - * 326 - * - &drm_encoder_helper_funcs.atomic_enable 327 - * - &drm_encoder_helper_funcs.enable 328 - * - &drm_encoder_helper_funcs.commit 370 + * will not yet be running when this callback is called. The bridge must 371 + * not enable the display link feeding the next bridge in the chain (if 372 + * there is one) when this callback is called. 329 373 * 330 374 * The @atomic_pre_enable callback is optional. 331 375 */ ··· 322 392 /** 323 393 * @atomic_enable: 324 394 * 325 - * The @atomic_enable callback should enable the bridge. 395 + * This callback should enable the bridge. It is called right after 396 + * the preceding element in the display pipe is enabled. If the 397 + * preceding element is a bridge this means it's called after that 398 + * bridge's @atomic_enable or @enable function. If the preceding element 399 + * is a &drm_encoder it's called right after the encoder's 400 + * &drm_encoder_helper_funcs.atomic_enable hook. 326 401 * 327 402 * The bridge can assume that the display pipe (i.e. clocks and timing 328 403 * signals) feeding it is running when this callback is called. This 329 404 * callback must enable the display link feeding the next bridge in the 330 405 * chain if there is one. 331 - * 332 - * If the preceding element is a &drm_bridge, then this is called after 333 - * that bridge is enabled via one of: 334 - * 335 - * - &drm_bridge_funcs.enable 336 - * - &drm_bridge_funcs.atomic_enable 337 - * 338 - * If the preceding element of the bridge is a display controller, then 339 - * this callback is called after the CRTC is enabled via one of: 340 - * 341 - * - &drm_crtc_helper_funcs.atomic_enable 342 - * - &drm_crtc_helper_funcs.commit 343 - * 344 - * and the encoder is enabled via one of: 345 - * 346 - * - &drm_encoder_helper_funcs.atomic_enable 347 - * - &drm_encoder_helper_funcs.enable 348 - * - drm_encoder_helper_funcs.commit 349 406 * 350 407 * The @atomic_enable callback is optional. 351 408 */ ··· 341 424 /** 342 425 * @atomic_disable: 343 426 * 344 - * The @atomic_disable callback should disable the bridge. 427 + * This callback should disable the bridge. It is called right before 428 + * the preceding element in the display pipe is disabled. 
If the 429 + * preceding element is a bridge this means it's called before that 430 + * bridge's @atomic_disable or @disable vfunc. If the preceding element 431 + * is a &drm_encoder it's called right before the 432 + * &drm_encoder_helper_funcs.atomic_disable hook. 345 433 * 346 434 * The bridge can assume that the display pipe (i.e. clocks and timing 347 435 * signals) feeding it is still running when this callback is called. 348 - * 349 - * If the preceding element is a &drm_bridge, then this is called before 350 - * that bridge is disabled via one of: 351 - * 352 - * - &drm_bridge_funcs.disable 353 - * - &drm_bridge_funcs.atomic_disable 354 - * 355 - * If the preceding element of the bridge is a display controller, then 356 - * this callback is called before the encoder is disabled via one of: 357 - * 358 - * - &drm_encoder_helper_funcs.atomic_disable 359 - * - &drm_encoder_helper_funcs.prepare 360 - * - &drm_encoder_helper_funcs.disable 361 - * - &drm_encoder_helper_funcs.dpms 362 - * 363 - * and the CRTC is disabled via one of: 364 - * 365 - * - &drm_crtc_helper_funcs.prepare 366 - * - &drm_crtc_helper_funcs.atomic_disable 367 - * - &drm_crtc_helper_funcs.disable 368 - * - &drm_crtc_helper_funcs.dpms. 369 436 * 370 437 * The @atomic_disable callback is optional. 371 438 */ ··· 359 458 /** 360 459 * @atomic_post_disable: 361 460 * 461 + * This callback should disable the bridge. It is called right after the 462 + * preceding element in the display pipe is disabled. If the preceding 463 + * element is a bridge this means it's called after that bridge's 464 + * @atomic_post_disable or @post_disable function. If the preceding 465 + * element is a &drm_encoder it's called right after the encoder's 466 + * &drm_encoder_helper_funcs.atomic_disable hook. 467 + * 362 468 * The bridge must assume that the display pipe (i.e. clocks and timing 363 - * signals) feeding this bridge is no longer running when the 364 - * @atomic_post_disable is called. 365 - * 366 - * This callback should perform all the actions required by the hardware 367 - * after it has stopped receiving signals from the preceding element. 368 - * 369 - * If the preceding element is a &drm_bridge, then this is called after 370 - * that bridge is post-disabled (unless marked otherwise by the 371 - * @pre_enable_prev_first flag) via one of: 372 - * 373 - * - &drm_bridge_funcs.post_disable 374 - * - &drm_bridge_funcs.atomic_post_disable 375 - * 376 - * If the preceding element of the bridge is a display controller, then 377 - * this callback is called after the encoder is disabled via one of: 378 - * 379 - * - &drm_encoder_helper_funcs.atomic_disable 380 - * - &drm_encoder_helper_funcs.prepare 381 - * - &drm_encoder_helper_funcs.disable 382 - * - &drm_encoder_helper_funcs.dpms 383 - * 384 - * and the CRTC is disabled via one of: 385 - * 386 - * - &drm_crtc_helper_funcs.prepare 387 - * - &drm_crtc_helper_funcs.atomic_disable 388 - * - &drm_crtc_helper_funcs.disable 389 - * - &drm_crtc_helper_funcs.dpms 469 + * signals) feeding it is no longer running when this callback is 470 + * called. 390 471 * 391 472 * The @atomic_post_disable callback is optional. 392 473 */
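Read together, the reworded hooks pair up symmetrically around the preceding element's transition: @atomic_pre_enable and @atomic_enable bracket its enable, @atomic_disable and @atomic_post_disable bracket its disable. A minimal, purely illustrative bridge honouring that contract; the example_* driver is hypothetical, the signatures follow the atomic variants that receive the full &drm_atomic_state, and the mandatory state-management hooks are omitted:

	static void example_atomic_pre_enable(struct drm_bridge *bridge,
					      struct drm_atomic_state *state)
	{
		/* Pipe not running yet: power up, keep the output link off. */
	}

	static void example_atomic_enable(struct drm_bridge *bridge,
					  struct drm_atomic_state *state)
	{
		/* Pipe is running: start driving the link to the next element. */
	}

	static void example_atomic_disable(struct drm_bridge *bridge,
					   struct drm_atomic_state *state)
	{
		/* Pipe still running: stop driving the output link. */
	}

	static void example_atomic_post_disable(struct drm_bridge *bridge,
						struct drm_atomic_state *state)
	{
		/* Pipe has stopped: safe to power down. */
	}

	static const struct drm_bridge_funcs example_bridge_funcs = {
		.atomic_pre_enable   = example_atomic_pre_enable,
		.atomic_enable	     = example_atomic_enable,
		.atomic_disable	     = example_atomic_disable,
		.atomic_post_disable = example_atomic_post_disable,
		/* .atomic_duplicate_state etc. omitted for brevity */
	};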
+1
include/linux/filelock.h
··· 49 49 int (*lm_change)(struct file_lease *, int, struct list_head *); 50 50 void (*lm_setup)(struct file_lease *, void **); 51 51 bool (*lm_breaker_owns_lease)(struct file_lease *); 52 + int (*lm_open_conflict)(struct file *, int); 52 53 }; 53 54 54 55 struct lock_manager {
+1 -1
include/linux/ftrace.h
··· 1167 1167 */ 1168 1168 struct ftrace_graph_ent { 1169 1169 unsigned long func; /* Current function */ 1170 - unsigned long depth; 1170 + long depth; /* signed to check for less than zero */ 1171 1171 } __packed; 1172 1172 1173 1173 /*
+1 -1
include/linux/hrtimer.h
··· 2 2 /* 3 3 * hrtimers - High-resolution kernel timers 4 4 * 5 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar 7 7 * 8 8 * data type definitions, declarations, prototypes
+6 -2
include/linux/iomap.h
··· 88 88 /* 89 89 * Flags set by the core iomap code during operations: 90 90 * 91 + * IOMAP_F_FOLIO_BATCH indicates that the folio batch mechanism is active 92 + * for this operation, set by iomap_fill_dirty_folios(). 93 + * 91 94 * IOMAP_F_SIZE_CHANGED indicates to the iomap_end method that the file size 92 95 * has changed as the result of this write operation. 93 96 * ··· 98 95 * range it covers needs to be remapped by the high level before the operation 99 96 * can proceed. 100 97 */ 98 + #define IOMAP_F_FOLIO_BATCH (1U << 13) 101 99 #define IOMAP_F_SIZE_CHANGED (1U << 14) 102 100 #define IOMAP_F_STALE (1U << 15) 103 101 ··· 356 352 int iomap_file_unshare(struct inode *inode, loff_t pos, loff_t len, 357 353 const struct iomap_ops *ops, 358 354 const struct iomap_write_ops *write_ops); 359 - loff_t iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t offset, 360 - loff_t length); 355 + unsigned int iomap_fill_dirty_folios(struct iomap_iter *iter, loff_t *start, 356 + loff_t end, unsigned int *iomap_flags); 361 357 int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, 362 358 bool *did_zero, const struct iomap_ops *ops, 363 359 const struct iomap_write_ops *write_ops, void *private);
+1 -1
include/linux/ktime.h
··· 3 3 * 4 4 * ktime_t - nanosecond-resolution time format. 5 5 * 6 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005, Red Hat, Inc., Ingo Molnar 8 8 * 9 9 * data type definitions, declarations, prototypes and macros.
+1 -1
include/linux/mtd/jedec.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Contains all JEDEC related definitions 8 8 */
+1 -1
include/linux/mtd/nand-ecc-sw-hamming.h
··· 2 2 /* 3 3 * Copyright (C) 2000-2010 Steven J. Hill <sjhill@realitydiluted.com> 4 4 * David Woodhouse <dwmw2@infradead.org> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * This file is the header for the NAND Hamming ECC implementation. 8 8 */
+1 -1
include/linux/mtd/ndfc.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * Copyright (c) 2006 Thomas Gleixner <tglx@linutronix.de> 3 + * Copyright (c) 2006 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 * 5 5 * Info: 6 6 * Contains defines, datastructures for ndfc nand controller
+1 -1
include/linux/mtd/onfi.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Contains all ONFI related definitions 8 8 */
+1 -1
include/linux/mtd/platnand.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Contains all platform NAND related definitions. 8 8 */
+1 -1
include/linux/mtd/rawnand.h
··· 2 2 /* 3 3 * Copyright © 2000-2010 David Woodhouse <dwmw2@infradead.org> 4 4 * Steven J. Hill <sjhill@realitydiluted.com> 5 - * Thomas Gleixner <tglx@linutronix.de> 5 + * Thomas Gleixner <tglx@kernel.org> 6 6 * 7 7 * Info: 8 8 * Contains standard defines and IDs for NAND flash devices
+2 -1
include/linux/netdevice.h
··· 5323 5323 static inline netdev_features_t netdev_add_tso_features(netdev_features_t features, 5324 5324 netdev_features_t mask) 5325 5325 { 5326 - return netdev_increment_features(features, NETIF_F_ALL_TSO, mask); 5326 + return netdev_increment_features(features, NETIF_F_ALL_TSO | 5327 + NETIF_F_ALL_FOR_ALL, mask); 5327 5328 } 5328 5329 5329 5330 int __netdev_update_features(struct net_device *dev);
+1 -1
include/linux/perf_event.h
··· 1 1 /* 2 2 * Performance events: 3 3 * 4 - * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de> 4 + * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 5 5 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar 6 6 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra 7 7 *
+1 -1
include/linux/plist.h
··· 8 8 * 2001-2005 (c) MontaVista Software, Inc. 9 9 * Daniel Walker <dwalker@mvista.com> 10 10 * 11 - * (C) 2005 Thomas Gleixner <tglx@linutronix.de> 11 + * (C) 2005 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 12 12 * 13 13 * Simplifications of the original code by 14 14 * Oleg Nesterov <oleg@tv-sign.ru>
+1 -1
include/linux/rslib.h
··· 2 2 /* 3 3 * Generic Reed Solomon encoder / decoder library 4 4 * 5 - * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de) 5 + * Copyright (C) 2004 Thomas Gleixner (tglx@kernel.org) 6 6 * 7 7 * RS code lifted from reed solomon library written by Phil Karn 8 8 * Copyright 2002 Phil Karn, KA9Q
+4 -4
include/linux/soc/airoha/airoha_offload.h
··· 71 71 #define NPU_RX1_DESC_NUM 512 72 72 73 73 /* CTRL */ 74 - #define NPU_RX_DMA_DESC_LAST_MASK BIT(29) 75 - #define NPU_RX_DMA_DESC_LEN_MASK GENMASK(28, 15) 76 - #define NPU_RX_DMA_DESC_CUR_LEN_MASK GENMASK(14, 1) 74 + #define NPU_RX_DMA_DESC_LAST_MASK BIT(27) 75 + #define NPU_RX_DMA_DESC_LEN_MASK GENMASK(26, 14) 76 + #define NPU_RX_DMA_DESC_CUR_LEN_MASK GENMASK(13, 1) 77 77 #define NPU_RX_DMA_DESC_DONE_MASK BIT(0) 78 78 /* INFO */ 79 - #define NPU_RX_DMA_PKT_COUNT_MASK GENMASK(31, 28) 79 + #define NPU_RX_DMA_PKT_COUNT_MASK GENMASK(31, 29) 80 80 #define NPU_RX_DMA_PKT_ID_MASK GENMASK(28, 26) 81 81 #define NPU_RX_DMA_SRC_PORT_MASK GENMASK(25, 21) 82 82 #define NPU_RX_DMA_CRSN_MASK GENMASK(20, 16)
+9
include/linux/trace_recursion.h
··· 34 34 TRACE_INTERNAL_SIRQ_BIT, 35 35 TRACE_INTERNAL_TRANSITION_BIT, 36 36 37 + /* Internal event use recursion bits */ 38 + TRACE_INTERNAL_EVENT_BIT, 39 + TRACE_INTERNAL_EVENT_NMI_BIT, 40 + TRACE_INTERNAL_EVENT_IRQ_BIT, 41 + TRACE_INTERNAL_EVENT_SIRQ_BIT, 42 + TRACE_INTERNAL_EVENT_TRANSITION_BIT, 43 + 37 44 TRACE_BRANCH_BIT, 38 45 /* 39 46 * Abuse of the trace_recursion. ··· 64 57 #define TRACE_FTRACE_START TRACE_FTRACE_BIT 65 58 66 59 #define TRACE_LIST_START TRACE_INTERNAL_BIT 60 + 61 + #define TRACE_EVENT_START TRACE_INTERNAL_EVENT_BIT 67 62 68 63 #define TRACE_CONTEXT_MASK ((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1) 69 64
+1 -1
include/linux/uio_driver.h
··· 3 3 * include/linux/uio_driver.h 4 4 * 5 5 * Copyright(C) 2005, Benedikt Spranger <b.spranger@linutronix.de> 6 - * Copyright(C) 2005, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2006, Hans J. Koch <hjk@hansjkoch.de> 8 8 * Copyright(C) 2006, Greg Kroah-Hartman <greg@kroah.com> 9 9 *
+2 -1
include/trace/events/btrfs.h
··· 224 224 __entry->generation = BTRFS_I(inode)->generation; 225 225 __entry->last_trans = BTRFS_I(inode)->last_trans; 226 226 __entry->logged_trans = BTRFS_I(inode)->logged_trans; 227 - __entry->root_objectid = btrfs_root_id(BTRFS_I(inode)->root); 227 + __entry->root_objectid = BTRFS_I(inode)->root ? 228 + btrfs_root_id(BTRFS_I(inode)->root) : 0; 228 229 ), 229 230 230 231 TP_printk_btrfs("root=%llu(%s) gen=%llu ino=%llu blocks=%llu "
-2
include/trace/misc/nfs.h
··· 16 16 TRACE_DEFINE_ENUM(NFSERR_NOENT); 17 17 TRACE_DEFINE_ENUM(NFSERR_IO); 18 18 TRACE_DEFINE_ENUM(NFSERR_NXIO); 19 - TRACE_DEFINE_ENUM(NFSERR_EAGAIN); 20 19 TRACE_DEFINE_ENUM(NFSERR_ACCES); 21 20 TRACE_DEFINE_ENUM(NFSERR_EXIST); 22 21 TRACE_DEFINE_ENUM(NFSERR_XDEV); ··· 51 52 { NFSERR_NXIO, "NXIO" }, \ 52 53 { ECHILD, "CHILD" }, \ 53 54 { ETIMEDOUT, "TIMEDOUT" }, \ 54 - { NFSERR_EAGAIN, "AGAIN" }, \ 55 55 { NFSERR_ACCES, "ACCES" }, \ 56 56 { NFSERR_EXIST, "EXIST" }, \ 57 57 { NFSERR_XDEV, "XDEV" }, \
-1
include/uapi/linux/nfs.h
··· 49 49 NFSERR_NOENT = 2, /* v2 v3 v4 */ 50 50 NFSERR_IO = 5, /* v2 v3 v4 */ 51 51 NFSERR_NXIO = 6, /* v2 v3 v4 */ 52 - NFSERR_EAGAIN = 11, /* v2 v3 */ 53 52 NFSERR_ACCES = 13, /* v2 v3 v4 */ 54 53 NFSERR_EXIST = 17, /* v2 v3 v4 */ 55 54 NFSERR_XDEV = 18, /* v3 v4 */
+1 -1
include/uapi/linux/perf_event.h
··· 2 2 /* 3 3 * Performance events: 4 4 * 5 - * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra 8 8 *
+1 -1
include/uapi/linux/xattr.h
··· 23 23 #define XATTR_REPLACE 0x2 /* set value, fail if attr does not exist */ 24 24 25 25 struct xattr_args { 26 - __aligned_u64 __user value; 26 + __aligned_u64 value; 27 27 __u32 size; 28 28 __u32 flags; 29 29 };
+4 -7
io_uring/io-wq.c
··· 947 947 return ret; 948 948 } 949 949 950 - static bool io_wq_for_each_worker(struct io_wq *wq, 950 + static void io_wq_for_each_worker(struct io_wq *wq, 951 951 bool (*func)(struct io_worker *, void *), 952 952 void *data) 953 953 { 954 - for (int i = 0; i < IO_WQ_ACCT_NR; i++) { 955 - if (!io_acct_for_each_worker(&wq->acct[i], func, data)) 956 - return false; 957 - } 958 - 959 - return true; 954 + for (int i = 0; i < IO_WQ_ACCT_NR; i++) 955 + if (io_acct_for_each_worker(&wq->acct[i], func, data)) 956 + break; 960 957 } 961 958 962 959 static bool io_wq_worker_wake(struct io_worker *worker, void *data)
+1 -1
kernel/events/callchain.c
··· 2 2 /* 3 3 * Performance events callchain code, extracted from core.c: 4 4 * 5 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra 8 8 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+7 -1
kernel/events/core.c
··· 2 2 /* 3 3 * Performance events core code: 4 4 * 5 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra 8 8 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com> ··· 11906 11906 } 11907 11907 } 11908 11908 11909 + static void perf_swevent_destroy_hrtimer(struct perf_event *event) 11910 + { 11911 + hrtimer_cancel(&event->hw.hrtimer); 11912 + } 11913 + 11909 11914 static void perf_swevent_init_hrtimer(struct perf_event *event) 11910 11915 { 11911 11916 struct hw_perf_event *hwc = &event->hw; ··· 11919 11914 return; 11920 11915 11921 11916 hrtimer_setup(&hwc->hrtimer, perf_swevent_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); 11917 + event->destroy = perf_swevent_destroy_hrtimer; 11922 11918 11923 11919 /* 11924 11920 * Since hrtimers have a fixed rate, we can do a static freq->period
+1 -1
kernel/events/ring_buffer.c
··· 2 2 /* 3 3 * Performance events ring-buffer code: 4 4 * 5 - * Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011 Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011 Red Hat, Inc., Peter Zijlstra 8 8 * Copyright © 2009 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com>
+1 -1
kernel/irq/debugfs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - // Copyright 2017 Thomas Gleixner <tglx@linutronix.de> 2 + // Copyright 2017 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 3 3 4 4 #include <linux/irqdomain.h> 5 5 #include <linux/irq.h>
+1 -1
kernel/irq/matrix.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2017 Thomas Gleixner <tglx@linutronix.de> 2 + // Copyright (C) 2017 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 3 3 4 4 #include <linux/spinlock.h> 5 5 #include <linux/seq_file.h>
+10 -4
kernel/power/swap.c
··· 902 902 for (thr = 0; thr < nr_threads; thr++) { 903 903 if (data[thr].thr) 904 904 kthread_stop(data[thr].thr); 905 - acomp_request_free(data[thr].cr); 906 - crypto_free_acomp(data[thr].cc); 905 + if (data[thr].cr) 906 + acomp_request_free(data[thr].cr); 907 + 908 + if (!IS_ERR_OR_NULL(data[thr].cc)) 909 + crypto_free_acomp(data[thr].cc); 907 910 } 908 911 vfree(data); 909 912 } ··· 1502 1499 for (thr = 0; thr < nr_threads; thr++) { 1503 1500 if (data[thr].thr) 1504 1501 kthread_stop(data[thr].thr); 1505 - acomp_request_free(data[thr].cr); 1506 - crypto_free_acomp(data[thr].cc); 1502 + if (data[thr].cr) 1503 + acomp_request_free(data[thr].cr); 1504 + 1505 + if (!IS_ERR_OR_NULL(data[thr].cc)) 1506 + crypto_free_acomp(data[thr].cc); 1507 1507 } 1508 1508 vfree(data); 1509 1509 }
+3 -2
kernel/sched/core.c
··· 10694 10694 sched_mm_cid_exit(t); 10695 10695 } 10696 10696 10697 - /* Reactivate MM CID after successful execve() */ 10697 + /* Reactivate MM CID after execve() */ 10698 10698 void sched_mm_cid_after_execve(struct task_struct *t) 10699 10699 { 10700 - sched_mm_cid_fork(t); 10700 + if (t->mm) 10701 + sched_mm_cid_fork(t); 10701 10702 } 10702 10703 10703 10704 static void mm_cid_work_fn(struct work_struct *work)
+1 -1
kernel/sched/fair.c
··· 15 15 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> 16 16 * 17 17 * Scaled math optimizations by Thomas Gleixner 18 - * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de> 18 + * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 19 19 * 20 20 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra 21 21 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1 -1
kernel/sched/pelt.c
··· 15 15 * Author: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> 16 16 * 17 17 * Scaled math optimizations by Thomas Gleixner 18 - * Copyright (C) 2007, Thomas Gleixner <tglx@linutronix.de> 18 + * Copyright (C) 2007, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 19 19 * 20 20 * Adaptive scheduling granularity, math enhancements by Peter Zijlstra 21 21 * Copyright (C) 2007 Red Hat, Inc., Peter Zijlstra
+1 -1
kernel/time/clockevents.c
··· 2 2 /* 3 3 * This file contains functions which manage clock event devices. 4 4 * 5 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 7 7 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 8 8 */
+1 -1
kernel/time/hrtimer.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 3 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 5 5 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner 6 6 *
+1 -1
kernel/time/tick-broadcast.c
··· 3 3 * This file contains functions which emulate a local clock-event 4 4 * device via a broadcast event source. 5 5 * 6 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 8 8 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 9 9 */
+1 -1
kernel/time/tick-common.c
··· 3 3 * This file contains the base functions to manage periodic tick 4 4 * related events. 5 5 * 6 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 8 8 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 9 9 */
+1 -1
kernel/time/tick-oneshot.c
··· 3 3 * This file contains functions which manage high resolution tick 4 4 * related events. 5 5 * 6 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 6 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 7 7 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 8 8 * Copyright(C) 2006-2007, Timesys Corp., Thomas Gleixner 9 9 */
+1 -1
kernel/time/tick-sched.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright(C) 2005-2006, Thomas Gleixner <tglx@linutronix.de> 3 + * Copyright(C) 2005-2006, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 * Copyright(C) 2005-2007, Red Hat, Inc., Ingo Molnar 5 5 * Copyright(C) 2006-2007 Timesys Corp., Thomas Gleixner 6 6 *
+2
kernel/trace/ring_buffer.c
··· 3137 3137 list) { 3138 3138 list_del_init(&bpage->list); 3139 3139 free_buffer_page(bpage); 3140 + 3141 + cond_resched(); 3140 3142 } 3141 3143 } 3142 3144 out_err_unlock:
+7 -1
kernel/trace/trace.c
··· 138 138 * by commas. 139 139 */ 140 140 /* Set to string format zero to disable by default */ 141 - char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0"; 141 + static char ftrace_dump_on_oops[MAX_TRACER_SIZE] = "0"; 142 142 143 143 /* When set, tracing will stop when a WARN*() is hit */ 144 144 static int __disable_trace_on_warning; ··· 3012 3012 struct ftrace_stack *fstack; 3013 3013 struct stack_entry *entry; 3014 3014 int stackidx; 3015 + int bit; 3016 + 3017 + bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START); 3018 + if (bit < 0) 3019 + return; 3015 3020 3016 3021 /* 3017 3022 * Add one, for this function and the call to save_stack_trace() ··· 3085 3080 /* Again, don't let gcc optimize things here */ 3086 3081 barrier(); 3087 3082 __this_cpu_dec(ftrace_stack_reserve); 3083 + trace_clear_recursion(bit); 3088 3084 } 3089 3085 3090 3086 static inline void ftrace_trace_stack(struct trace_array *tr,
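The recursion bit taken at function entry stops __ftrace_trace_stack() from re-entering itself when an interrupt arriving mid-capture triggers another stack trace; the kernel tracks one bit per context so legitimate hardirq-in-task nesting still works. A runnable userspace analogue of the same latch, with invented names and a plain per-thread flag instead of per-context bits:

#include <stdbool.h>
#include <stdio.h>

static __thread bool in_trace;		/* per-thread recursion latch */

static void emit(const char *msg)
{
	/* Imagine this helper could itself trigger tracing, the way
	 * save_stack_trace() can re-enter the stack tracer. */
	printf("trace: %s\n", msg);
}

static void do_trace(const char *what)
{
	if (in_trace)		/* already tracing on this thread: */
		return;		/* drop the nested event */
	in_trace = true;

	emit(what);

	in_trace = false;	/* mirrors trace_clear_recursion() */
}

int main(void)
{
	do_trace("outer");
	return 0;
}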
+3 -4
kernel/trace/trace_events.c
··· 826 826 * When soft_disable is set and enable is set, we want to 827 827 * register the tracepoint for the event, but leave the event 828 828 * as is. That means, if the event was already enabled, we do 829 - * nothing (but set soft_mode). If the event is disabled, we 830 - * set SOFT_DISABLED before enabling the event tracepoint, so 831 - * it still seems to be disabled. 829 + * nothing. If the event is disabled, we set SOFT_DISABLED 830 + * before enabling the event tracepoint, so it still seems 831 + * to be disabled. 832 832 */ 833 833 if (!soft_disable) 834 834 clear_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags); 835 835 else { 836 836 if (atomic_inc_return(&file->sm_ref) > 1) 837 837 break; 838 - soft_mode = true; 839 838 /* Enable use of trace_buffered_event */ 840 839 trace_buffered_event_enable(); 841 840 }
+2 -2
lib/crypto/aes.c
··· 13 13 * Emit the sbox as volatile const to prevent the compiler from doing 14 14 * constant folding on sbox references involving fixed indexes. 15 15 */ 16 - static volatile const u8 __cacheline_aligned aes_sbox[] = { 16 + static volatile const u8 ____cacheline_aligned aes_sbox[] = { 17 17 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 18 18 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, 19 19 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, ··· 48 48 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16, 49 49 }; 50 50 51 - static volatile const u8 __cacheline_aligned aes_inv_sbox[] = { 51 + static volatile const u8 ____cacheline_aligned aes_inv_sbox[] = { 52 52 0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 53 53 0xbf, 0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb, 54 54 0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87,
+1 -1
lib/crypto/tests/polyval_kunit.c
··· 183 183 184 184 rand_bytes(state.raw_key, sizeof(state.raw_key)); 185 185 polyval_preparekey(&state.expected_key, state.raw_key); 186 - kunit_run_irq_test(test, polyval_irq_test_func, 20000, &state); 186 + kunit_run_irq_test(test, polyval_irq_test_func, 200000, &state); 187 187 } 188 188 189 189 static int polyval_suite_init(struct kunit_suite *suite)
+1 -1
lib/debugobjects.c
··· 2 2 /* 3 3 * Generic infrastructure for lifetime debugging of objects. 4 4 * 5 - * Copyright (C) 2008, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "ODEBUG: " fmt
+1 -1
lib/plist.c
··· 10 10 * 2001-2005 (c) MontaVista Software, Inc. 11 11 * Daniel Walker <dwalker@mvista.com> 12 12 * 13 - * (C) 2005 Thomas Gleixner <tglx@linutronix.de> 13 + * (C) 2005 Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 14 14 * 15 15 * Simplifications of the original code by 16 16 * Oleg Nesterov <oleg@tv-sign.ru>
+1 -1
lib/reed_solomon/decode_rs.c
··· 5 5 * Copyright 2002, Phil Karn, KA9Q 6 6 * May be used under the terms of the GNU General Public License (GPL) 7 7 * 8 - * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de) 8 + * Adaptation to the kernel by Thomas Gleixner (tglx@kernel.org) 9 9 * 10 10 * Generic data width independent code which is included by the wrappers. 11 11 */
+1 -1
lib/reed_solomon/encode_rs.c
··· 5 5 * Copyright 2002, Phil Karn, KA9Q 6 6 * May be used under the terms of the GNU General Public License (GPL) 7 7 * 8 - * Adaption to the kernel by Thomas Gleixner (tglx@linutronix.de) 8 + * Adaption to the kernel by Thomas Gleixner (tglx@kernel.org) 9 9 * 10 10 * Generic data width independent code which is included by the wrappers. 11 11 */
+1 -1
lib/reed_solomon/reed_solomon.c
··· 2 2 /* 3 3 * Generic Reed Solomon encoder / decoder library 4 4 * 5 - * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de) 5 + * Copyright (C) 2004 Thomas Gleixner (tglx@kernel.org) 6 6 * 7 7 * Reed Solomon code lifted from reed solomon library written by Phil Karn 8 8 * Copyright 2002 Phil Karn, KA9Q
+7 -4
net/bridge/br_vlan_tunnel.c
··· 189 189 IP_TUNNEL_DECLARE_FLAGS(flags) = { }; 190 190 struct metadata_dst *tunnel_dst; 191 191 __be64 tunnel_id; 192 - int err; 193 192 194 193 if (!vlan) 195 194 return 0; ··· 198 199 return 0; 199 200 200 201 skb_dst_drop(skb); 201 - err = skb_vlan_pop(skb); 202 - if (err) 203 - return err; 202 + /* For 802.1ad (QinQ), skb_vlan_pop() incorrectly moves the C-VLAN 203 + * from payload to hwaccel after clearing S-VLAN. We only need to 204 + * clear the hwaccel S-VLAN; the C-VLAN must stay in payload for 205 + * correct VXLAN encapsulation. This is also correct for 802.1Q 206 + * where no C-VLAN exists in payload. 207 + */ 208 + __vlan_hwaccel_clear_tag(skb); 204 209 205 210 if (BR_INPUT_SKB_CB(skb)->backup_nhid) { 206 211 __set_bit(IP_TUNNEL_KEY_BIT, flags);
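The fix turns on the distinction between an skb's hardware-accelerated VLAN tag (the skb->vlan_proto/skb->vlan_tci metadata) and a tag carried as real 802.1Q bytes in the payload. A kernel-style sketch of the operation being relied on; the wrapper name is invented, the two helpers it mentions are the real ones:

#include <linux/if_vlan.h>

/* For 802.1ad the outer S-VLAN typically sits in the skb's hwaccel
 * metadata while the inner C-VLAN stays as bytes in the payload.
 * skb_vlan_pop() clears the metadata tag and then promotes the
 * in-payload C-VLAN into it; clearing only the metadata leaves the
 * C-VLAN in place for the tunnel encapsulation. */
static void drop_outer_vlan_only(struct sk_buff *skb)
{
	if (skb_vlan_tag_present(skb))
		__vlan_hwaccel_clear_tag(skb);
}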
+1 -1
net/bridge/netfilter/ebtables.c
··· 1299 1299 list_for_each_entry(tmpl, &template_tables, list) { 1300 1300 if (WARN_ON_ONCE(strcmp(t->name, tmpl->name) == 0)) { 1301 1301 mutex_unlock(&ebt_mutex); 1302 - return -EEXIST; 1302 + return -EBUSY; 1303 1303 } 1304 1304 } 1305 1305
+2
net/ceph/messenger_v2.c
··· 2376 2376 2377 2377 ceph_decode_64_safe(&p, end, global_id, bad); 2378 2378 ceph_decode_32_safe(&p, end, con->v2.con_mode, bad); 2379 + 2379 2380 ceph_decode_32_safe(&p, end, payload_len, bad); 2381 + ceph_decode_need(&p, end, payload_len, bad); 2380 2382 2381 2383 dout("%s con %p global_id %llu con_mode %d payload_len %d\n", 2382 2384 __func__, con, global_id, con->v2.con_mode, payload_len);
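The added ceph_decode_need() rejects a message whose declared payload_len exceeds the bytes actually received, before later decode steps walk off the end of the buffer. The same check-the-claimed-length pattern in self-contained C (the wire format here is hypothetical):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static bool decode_blob(const uint8_t **p, const uint8_t *end,
			const uint8_t **blob, uint32_t *blob_len)
{
	uint32_t len;

	if (end - *p < (ptrdiff_t)sizeof(len))
		return false;		/* truncated length field */
	memcpy(&len, *p, sizeof(len));
	*p += sizeof(len);

	/* The step the fix adds: claimed length vs. bytes present. */
	if ((size_t)(end - *p) < len)
		return false;

	*blob = *p;
	*blob_len = len;
	*p += len;
	return true;
}

int main(void)
{
	/* length claims 8, but only 3 payload bytes follow */
	const uint8_t msg[] = { 8, 0, 0, 0, 'a', 'b', 'c' };
	const uint8_t *p = msg, *blob;
	uint32_t blob_len;

	printf("%s\n", decode_blob(&p, msg + sizeof(msg), &blob, &blob_len) ?
	       "decoded" : "rejected");	/* prints "rejected" */
	return 0;
}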
+1 -1
net/ceph/mon_client.c
··· 1417 1417 if (!ret) 1418 1418 finish_hunting(monc); 1419 1419 mutex_unlock(&monc->mutex); 1420 - return 0; 1420 + return ret; 1421 1421 } 1422 1422 1423 1423 static int mon_handle_auth_bad_method(struct ceph_connection *con,
+12 -2
net/ceph/osd_client.c
··· 1586 1586 struct ceph_pg_pool_info *pi; 1587 1587 struct ceph_pg pgid, last_pgid; 1588 1588 struct ceph_osds up, acting; 1589 + bool should_be_paused; 1589 1590 bool is_read = t->flags & CEPH_OSD_FLAG_READ; 1590 1591 bool is_write = t->flags & CEPH_OSD_FLAG_WRITE; 1591 1592 bool force_resend = false; ··· 1655 1654 &last_pgid)) 1656 1655 force_resend = true; 1657 1656 1658 - if (t->paused && !target_should_be_paused(osdc, t, pi)) { 1659 - t->paused = false; 1657 + should_be_paused = target_should_be_paused(osdc, t, pi); 1658 + if (t->paused && !should_be_paused) { 1660 1659 unpaused = true; 1661 1660 } 1661 + if (t->paused != should_be_paused) { 1662 + dout("%s t %p paused %d -> %d\n", __func__, t, t->paused, 1663 + should_be_paused); 1664 + t->paused = should_be_paused; 1665 + } 1666 + 1662 1667 legacy_change = ceph_pg_compare(&t->pgid, &pgid) || 1663 1668 ceph_osds_changed(&t->acting, &acting, 1664 1669 t->used_replica || any_change); ··· 4287 4280 dout("%s osd%d unknown\n", __func__, osd->o_osd); 4288 4281 goto out_unlock; 4289 4282 } 4283 + 4284 + osd->o_sparse_op_idx = -1; 4285 + ceph_init_sparse_read(&osd->o_sparse_read); 4290 4286 4291 4287 if (!reopen_osd(osd)) 4292 4288 kick_osd_requests(osd);
+15 -9
net/ceph/osdmap.c
··· 241 241 242 242 static void free_choose_arg_map(struct crush_choose_arg_map *arg_map) 243 243 { 244 - if (arg_map) { 245 - int i, j; 244 + int i, j; 246 245 247 - WARN_ON(!RB_EMPTY_NODE(&arg_map->node)); 246 + if (!arg_map) 247 + return; 248 248 249 + WARN_ON(!RB_EMPTY_NODE(&arg_map->node)); 250 + 251 + if (arg_map->args) { 249 252 for (i = 0; i < arg_map->size; i++) { 250 253 struct crush_choose_arg *arg = &arg_map->args[i]; 251 - 252 - for (j = 0; j < arg->weight_set_size; j++) 253 - kfree(arg->weight_set[j].weights); 254 - kfree(arg->weight_set); 254 + if (arg->weight_set) { 255 + for (j = 0; j < arg->weight_set_size; j++) 256 + kfree(arg->weight_set[j].weights); 257 + kfree(arg->weight_set); 258 + } 255 259 kfree(arg->ids); 256 260 } 257 261 kfree(arg_map->args); 258 - kfree(arg_map); 259 262 } 263 + kfree(arg_map); 260 264 } 261 265 262 266 DEFINE_RB_FUNCS(choose_arg_map, struct crush_choose_arg_map, choose_args_index, ··· 1983 1979 sizeof(u64) + sizeof(u32), e_inval); 1984 1980 ceph_decode_copy(p, &fsid, sizeof(fsid)); 1985 1981 epoch = ceph_decode_32(p); 1986 - BUG_ON(epoch != map->epoch+1); 1987 1982 ceph_decode_copy(p, &modified, sizeof(modified)); 1988 1983 new_pool_max = ceph_decode_64(p); 1989 1984 new_flags = ceph_decode_32(p); 1985 + 1986 + if (epoch != map->epoch + 1) 1987 + goto e_inval; 1990 1988 1991 1989 /* full map? */ 1992 1990 ceph_decode_32_safe(p, end, len, e_inval);
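Replacing the BUG_ON() with a jump to e_inval reflects a general rule: a value decoded off the wire is untrusted input, and malformed input should fail the operation, not crash the kernel. A condensed kernel-style sketch of the shape of the fix (the struct and helper are illustrative, not the real decoder):

#include <linux/errno.h>
#include <linux/types.h>

struct osdmap_state {		/* illustrative stand-in */
	u32 epoch;
};

static int apply_incremental(struct osdmap_state *map, u32 new_epoch)
{
	if (new_epoch != map->epoch + 1)
		return -EINVAL;	/* reject the update, don't BUG_ON() */

	map->epoch = new_epoch;
	return 0;
}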
+5 -3
net/core/skbuff.c
··· 4636 4636 { 4637 4637 struct sk_buff *list_skb = skb_shinfo(skb)->frag_list; 4638 4638 unsigned int tnl_hlen = skb_tnl_header_len(skb); 4639 - unsigned int delta_truesize = 0; 4640 4639 unsigned int delta_len = 0; 4641 4640 struct sk_buff *tail = NULL; 4642 4641 struct sk_buff *nskb, *tmp; 4643 4642 int len_diff, err; 4643 + 4644 + /* Only skb_gro_receive_list generated skbs arrive here */ 4645 + DEBUG_NET_WARN_ON_ONCE(!(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)); 4644 4646 4645 4647 skb_push(skb, -skb_network_offset(skb) + offset); 4646 4648 ··· 4657 4655 nskb = list_skb; 4658 4656 list_skb = list_skb->next; 4659 4657 4658 + DEBUG_NET_WARN_ON_ONCE(nskb->sk); 4659 + 4660 4660 err = 0; 4661 - delta_truesize += nskb->truesize; 4662 4661 if (skb_shared(nskb)) { 4663 4662 tmp = skb_clone(nskb, GFP_ATOMIC); 4664 4663 if (tmp) { ··· 4702 4699 goto err_linearize; 4703 4700 } 4704 4701 4705 - skb->truesize = skb->truesize - delta_truesize; 4706 4702 skb->data_len = skb->data_len - delta_len; 4707 4703 skb->len = skb->len - delta_len; 4708 4704
+4 -3
net/core/sock.c
··· 3896 3896 int sock_recv_errqueue(struct sock *sk, struct msghdr *msg, int len, 3897 3897 int level, int type) 3898 3898 { 3899 - struct sock_exterr_skb *serr; 3899 + struct sock_extended_err ee; 3900 3900 struct sk_buff *skb; 3901 3901 int copied, err; 3902 3902 ··· 3916 3916 3917 3917 sock_recv_timestamp(msg, sk, skb); 3918 3918 3919 - serr = SKB_EXT_ERR(skb); 3920 - put_cmsg(msg, level, type, sizeof(serr->ee), &serr->ee); 3919 + /* We must use a bounce buffer for CONFIG_HARDENED_USERCOPY=y */ 3920 + ee = SKB_EXT_ERR(skb)->ee; 3921 + put_cmsg(msg, level, type, sizeof(ee), &ee); 3921 3922 3922 3923 msg->msg_flags |= MSG_ERRQUEUE; 3923 3924 err = copied;
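put_cmsg() ultimately copies to userspace, and with CONFIG_HARDENED_USERCOPY=y such copies from slab-backed memory are restricted to whitelisted object regions; a copy from the stack is always permitted. A kernel-style sketch of the bounce-buffer idea (report_error() is invented, and the level/type values depend on the caller):

#include <linux/errqueue.h>
#include <linux/in.h>
#include <linux/skbuff.h>
#include <linux/socket.h>

static int report_error(struct msghdr *msg, struct sk_buff *skb)
{
	/* Snapshot the slab-resident struct into a stack variable
	 * before exposing it to the usercopy path. */
	struct sock_extended_err ee = SKB_EXT_ERR(skb)->ee;

	return put_cmsg(msg, SOL_IP, IP_RECVERR, sizeof(ee), &ee);
}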
+4 -3
net/ipv4/arp.c
··· 564 564 565 565 skb_reserve(skb, hlen); 566 566 skb_reset_network_header(skb); 567 - arp = skb_put(skb, arp_hdr_len(dev)); 567 + skb_put(skb, arp_hdr_len(dev)); 568 568 skb->dev = dev; 569 569 skb->protocol = htons(ETH_P_ARP); 570 570 if (!src_hw) ··· 572 572 if (!dest_hw) 573 573 dest_hw = dev->broadcast; 574 574 575 - /* 576 - * Fill the device header for the ARP frame 575 + /* Fill the device header for the ARP frame. 576 + * Note: skb->head can be changed. 577 577 */ 578 578 if (dev_hard_header(skb, dev, ptype, dest_hw, src_hw, skb->len) < 0) 579 579 goto out; 580 580 581 + arp = arp_hdr(skb); 581 582 /* 582 583 * Fill out the arp protocol part. 583 584 *
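The reordering matters because dev_hard_header() may reallocate skb->head, leaving any earlier pointer into the buffer dangling; arp_hdr() is therefore called only afterwards. The same hazard in runnable C, with realloc() playing the role of the reallocating call:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *buf = malloc(16);
	char *header, *tmp;

	if (!buf)
		return 1;
	strcpy(buf, "header");
	header = buf;			/* points into the old block */

	tmp = realloc(buf, 1 << 20);	/* may move the whole buffer */
	if (!tmp) {
		free(buf);
		return 1;
	}
	buf = tmp;
	/* 'header' may now dangle; re-derive pointers into a buffer
	 * after any call that can move it, just as arp_create() now
	 * takes arp_hdr() after dev_hard_header(). */
	header = buf;

	printf("%s\n", header);
	free(buf);
	return 0;
}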
+2
net/ipv4/inet_fragment.c
··· 488 488 } 489 489 490 490 FRAG_CB(skb)->ip_defrag_offset = offset; 491 + if (offset) 492 + nf_reset_ct(skb); 491 493 492 494 return IPFRAG_OK; 493 495 }
+1 -3
net/ipv4/ping.c
··· 828 828 out_free: 829 829 if (free) 830 830 kfree(ipc.opt); 831 - if (!err) { 832 - icmp_out_count(sock_net(sk), user_icmph.type); 831 + if (!err) 833 832 return len; 834 - } 835 833 return err; 836 834 837 835 do_confirm:
+3 -5
net/ipv4/tcp.c
··· 2652 2652 if (sk->sk_state == TCP_LISTEN) 2653 2653 goto out; 2654 2654 2655 - if (tp->recvmsg_inq) { 2655 + if (tp->recvmsg_inq) 2656 2656 *cmsg_flags = TCP_CMSG_INQ; 2657 - msg->msg_get_inq = 1; 2658 - } 2659 2657 timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); 2660 2658 2661 2659 /* Urgent data needs to be handled specially. */ ··· 2927 2929 ret = tcp_recvmsg_locked(sk, msg, len, flags, &tss, &cmsg_flags); 2928 2930 release_sock(sk); 2929 2931 2930 - if ((cmsg_flags || msg->msg_get_inq) && ret >= 0) { 2932 + if ((cmsg_flags | msg->msg_get_inq) && ret >= 0) { 2931 2933 if (cmsg_flags & TCP_CMSG_TS) 2932 2934 tcp_recv_timestamp(msg, sk, &tss); 2933 - if (msg->msg_get_inq) { 2935 + if ((cmsg_flags & TCP_CMSG_INQ) | msg->msg_get_inq) { 2934 2936 msg->msg_inq = tcp_inq_hint(sk); 2935 2937 if (cmsg_flags & TCP_CMSG_INQ) 2936 2938 put_cmsg(msg, SOL_TCP, TCP_CM_INQ,
+1
net/ipv4/udp.c
··· 1851 1851 sk_peek_offset_bwd(sk, len); 1852 1852 1853 1853 if (!skb_shared(skb)) { 1854 + skb_orphan(skb); 1854 1855 skb_attempt_defer_free(skb); 1855 1856 return; 1856 1857 }
+3
net/mac80211/chan.c
··· 90 90 /* next (or first) interface */ 91 91 iter->sdata = list_prepare_entry(iter->sdata, &local->interfaces, list); 92 92 list_for_each_entry_continue(iter->sdata, &local->interfaces, list) { 93 + if (!ieee80211_sdata_running(iter->sdata)) 94 + continue; 95 + 93 96 /* AP_VLAN has a chanctx pointer but follows AP */ 94 97 if (iter->sdata->vif.type == NL80211_IFTYPE_AP_VLAN) 95 98 continue;
+4 -3
net/mac80211/sta_info.c
··· 1533 1533 } 1534 1534 } 1535 1535 1536 + sinfo = kzalloc(sizeof(*sinfo), GFP_KERNEL); 1537 + if (sinfo) 1538 + sta_set_sinfo(sta, sinfo, true); 1539 + 1536 1540 if (sta->uploaded) { 1537 1541 ret = drv_sta_state(local, sdata, sta, IEEE80211_STA_NONE, 1538 1542 IEEE80211_STA_NOTEXIST); ··· 1545 1541 1546 1542 sta_dbg(sdata, "Removed STA %pM\n", sta->sta.addr); 1547 1543 1548 - sinfo = kzalloc(sizeof(*sinfo), GFP_KERNEL); 1549 - if (sinfo) 1550 - sta_set_sinfo(sta, sinfo, true); 1551 1544 cfg80211_del_sta_sinfo(sdata->dev, sta->sta.addr, sinfo, GFP_KERNEL); 1552 1545 kfree(sinfo); 1553 1546
+2
net/mac80211/tx.c
··· 2397 2397 2398 2398 if (chanctx_conf) 2399 2399 chandef = &chanctx_conf->def; 2400 + else if (local->emulate_chanctx) 2401 + chandef = &local->hw.conf.chandef; 2400 2402 else 2401 2403 goto fail_rcu; 2402 2404
+1 -1
net/netfilter/nf_conncount.c
··· 229 229 230 230 nf_ct_put(found_ct); 231 231 } 232 + list->last_gc = (u32)jiffies; 232 233 233 234 add_new_node: 234 235 if (WARN_ON_ONCE(list->count > INT_MAX)) { ··· 249 248 conn->jiffies32 = (u32)jiffies; 250 249 list_add_tail(&conn->node, &list->head); 251 250 list->count++; 252 - list->last_gc = (u32)jiffies; 253 251 254 252 out_put: 255 253 if (refcounted)
+2 -2
net/netfilter/nf_log.c
··· 89 89 if (pf == NFPROTO_UNSPEC) { 90 90 for (i = NFPROTO_UNSPEC; i < NFPROTO_NUMPROTO; i++) { 91 91 if (rcu_access_pointer(loggers[i][logger->type])) { 92 - ret = -EEXIST; 92 + ret = -EBUSY; 93 93 goto unlock; 94 94 } 95 95 } ··· 97 97 rcu_assign_pointer(loggers[i][logger->type], logger); 98 98 } else { 99 99 if (rcu_access_pointer(loggers[pf][logger->type])) { 100 - ret = -EEXIST; 100 + ret = -EBUSY; 101 101 goto unlock; 102 102 } 103 103 rcu_assign_pointer(loggers[pf][logger->type], logger);
+2 -1
net/netfilter/nf_tables_api.c
··· 4439 4439 4440 4440 if (!nft_use_inc(&chain->use)) { 4441 4441 err = -EMFILE; 4442 - goto err_release_rule; 4442 + goto err_destroy_flow; 4443 4443 } 4444 4444 4445 4445 if (info->nlh->nlmsg_flags & NLM_F_REPLACE) { ··· 4489 4489 4490 4490 err_destroy_flow_rule: 4491 4491 nft_use_dec_restore(&chain->use); 4492 + err_destroy_flow: 4492 4493 if (flow) 4493 4494 nft_flow_rule_destroy(flow); 4494 4495 err_release_rule:
+2 -2
net/netfilter/nft_set_pipapo.c
··· 1317 1317 else 1318 1318 dup_end = dup_key; 1319 1319 1320 - if (!memcmp(start, dup_key->data, sizeof(*dup_key->data)) && 1321 - !memcmp(end, dup_end->data, sizeof(*dup_end->data))) { 1320 + if (!memcmp(start, dup_key->data, set->klen) && 1321 + !memcmp(end, dup_end->data, set->klen)) { 1322 1322 *elem_priv = &dup->priv; 1323 1323 return -EEXIST; 1324 1324 }
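The bug class fixed here: sizeof applied to one array element yields the element size, not the configured key length, so only a short prefix of the key was compared and distinct elements could be misreported as duplicates. A runnable illustration with invented types:

#include <stdio.h>
#include <string.h>

struct key {
	unsigned char data[16];	/* storage sized for the largest key */
};

int main(void)
{
	struct key a = { .data = "abcdefgh" };
	struct key b = { .data = "abcdXXXX" };
	size_t klen = 8;	/* the set's configured key length */

	/* sizeof(*a.data) is 1: only the first byte is compared, and
	 * the two different keys wrongly count as duplicates. */
	printf("buggy match: %d\n",
	       !memcmp(a.data, b.data, sizeof(*a.data)));

	/* compare the full configured key length, as the fix does */
	printf("fixed match: %d\n", !memcmp(a.data, b.data, klen));
	return 0;
}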
+3 -3
net/netfilter/nft_synproxy.c
··· 48 48 struct tcphdr *_tcph, 49 49 struct synproxy_options *opts) 50 50 { 51 - struct nf_synproxy_info info = priv->info; 51 + struct nf_synproxy_info info = READ_ONCE(priv->info); 52 52 struct net *net = nft_net(pkt); 53 53 struct synproxy_net *snet = synproxy_pernet(net); 54 54 struct sk_buff *skb = pkt->skb; ··· 79 79 struct tcphdr *_tcph, 80 80 struct synproxy_options *opts) 81 81 { 82 - struct nf_synproxy_info info = priv->info; 82 + struct nf_synproxy_info info = READ_ONCE(priv->info); 83 83 struct net *net = nft_net(pkt); 84 84 struct synproxy_net *snet = synproxy_pernet(net); 85 85 struct sk_buff *skb = pkt->skb; ··· 340 340 struct nft_synproxy *newpriv = nft_obj_data(newobj); 341 341 struct nft_synproxy *priv = nft_obj_data(obj); 342 342 343 - priv->info = newpriv->info; 343 + WRITE_ONCE(priv->info, newpriv->info); 344 344 } 345 345 346 346 static struct nft_object_type nft_synproxy_obj_type;
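The READ_ONCE()/WRITE_ONCE() pairing lets nft_synproxy_obj_update() swap the whole info struct while the packet path reads it locklessly: each access becomes a single load or store, so a reader cannot observe half of the old value and half of the new one. A kernel-style sketch, assuming the shared struct fits in a native word (as the 4-byte nf_synproxy_info does; the names here are invented):

#include <linux/compiler.h>
#include <linux/types.h>

struct cfg {
	u16 mss;
	u8 wscale;
	u8 options;
};				/* 4 bytes: one aligned load/store */

static struct cfg shared_cfg;

static void cfg_update(struct cfg next)	/* control-plane writer */
{
	WRITE_ONCE(shared_cfg, next);	/* single store, never torn */
}

static u16 cfg_read_mss(void)		/* packet-path reader */
{
	struct cfg snap = READ_ONCE(shared_cfg);	/* single load */

	return snap.mss;
}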
+1 -1
net/netfilter/x_tables.c
··· 1764 1764 int xt_register_template(const struct xt_table *table, 1765 1765 int (*table_init)(struct net *net)) 1766 1766 { 1767 - int ret = -EEXIST, af = table->af; 1767 + int ret = -EBUSY, af = table->af; 1768 1768 struct xt_template *t; 1769 1769 1770 1770 mutex_lock(&xt[af].mutex);
+2
net/sched/act_api.c
··· 940 940 int ret; 941 941 942 942 idr_for_each_entry_ul(idr, p, tmp, id) { 943 + if (IS_ERR(p)) 944 + continue; 943 945 if (tc_act_in_hw(p) && !mutex_taken) { 944 946 rtnl_lock(); 945 947 mutex_taken = true;
+13 -13
net/sched/act_mirred.c
··· 266 266 goto err_cant_do; 267 267 } 268 268 269 - /* we could easily avoid the clone only if called by ingress and clsact; 270 - * since we can't easily detect the clsact caller, skip clone only for 271 - * ingress - that covers the TC S/W datapath. 272 - */ 273 - at_ingress = skb_at_tc_ingress(skb); 274 - dont_clone = skb_at_tc_ingress(skb) && is_redirect && 275 - tcf_mirred_can_reinsert(retval); 276 - if (!dont_clone) { 277 - skb_to_send = skb_clone(skb, GFP_ATOMIC); 278 - if (!skb_to_send) 279 - goto err_cant_do; 280 - } 281 - 282 269 want_ingress = tcf_mirred_act_wants_ingress(m_eaction); 283 270 271 + at_ingress = skb_at_tc_ingress(skb); 284 272 if (dev == skb->dev && want_ingress == at_ingress) { 285 273 pr_notice_once("tc mirred: Loop (%s:%s --> %s:%s)\n", 286 274 netdev_name(skb->dev), ··· 276 288 netdev_name(dev), 277 289 want_ingress ? "ingress" : "egress"); 278 290 goto err_cant_do; 291 + } 292 + 293 + /* we could easily avoid the clone only if called by ingress and clsact; 294 + * since we can't easily detect the clsact caller, skip clone only for 295 + * ingress - that covers the TC S/W datapath. 296 + */ 297 + dont_clone = skb_at_tc_ingress(skb) && is_redirect && 298 + tcf_mirred_can_reinsert(retval); 299 + if (!dont_clone) { 300 + skb_to_send = skb_clone(skb, GFP_ATOMIC); 301 + if (!skb_to_send) 302 + goto err_cant_do; 279 303 } 280 304 281 305 /* All mirred/redirected skbs should clear previous ct info */
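Moving skb_clone() below the loop and direction checks means nothing is allocated for a packet that is about to be dropped, and the error path above it has nothing to unwind. The validate-first, allocate-last shape in runnable C (all names invented):

#include <stdlib.h>
#include <string.h>

static int mirror_packet(const char *pkt, size_t len, int loop_detected)
{
	char *copy;

	if (loop_detected)	/* cheap validation first */
		return -1;	/* no allocation to clean up */

	copy = malloc(len);	/* allocate only once committed */
	if (!copy)
		return -1;
	memcpy(copy, pkt, len);

	/* ... hand 'copy' off for transmission ... */
	free(copy);
	return 0;
}

int main(void)
{
	return mirror_packet("ping", 4, 0) ? 1 : 0;
}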
+1 -1
net/sched/sch_qfq.c
··· 1481 1481 1482 1482 for (i = 0; i < q->clhash.hashsize; i++) { 1483 1483 hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) { 1484 - if (cl->qdisc->q.qlen > 0) 1484 + if (cl_is_active(cl)) 1485 1485 qfq_deactivate_class(q, cl); 1486 1486 1487 1487 qdisc_reset(cl->qdisc);
+3 -5
net/unix/af_unix.c
··· 2904 2904 unsigned int last_len; 2905 2905 struct unix_sock *u; 2906 2906 int copied = 0; 2907 - bool do_cmsg; 2908 2907 int err = 0; 2909 2908 long timeo; 2910 2909 int target; ··· 2929 2930 2930 2931 u = unix_sk(sk); 2931 2932 2932 - do_cmsg = READ_ONCE(u->recvmsg_inq); 2933 - if (do_cmsg) 2934 - msg->msg_get_inq = 1; 2935 2933 redo: 2936 2934 /* Lock the socket to prevent queue disordering 2937 2935 * while sleeps in memcpy_tomsg ··· 3086 3090 3087 3091 mutex_unlock(&u->iolock); 3088 3092 if (msg) { 3093 + bool do_cmsg = READ_ONCE(u->recvmsg_inq); 3094 + 3089 3095 scm_recv_unix(sock, msg, &scm, flags); 3090 3096 3091 - if (msg->msg_get_inq && (copied ?: err) >= 0) { 3097 + if ((do_cmsg | msg->msg_get_inq) && (copied ?: err) >= 0) { 3092 3098 msg->msg_inq = READ_ONCE(u->inq_len); 3093 3099 if (do_cmsg) 3094 3100 put_cmsg(msg, SOL_SOCKET, SCM_INQ,
+4
net/vmw_vsock/af_vsock.c
··· 1787 1787 } else { 1788 1788 newsock->state = SS_CONNECTED; 1789 1789 sock_graft(connected, newsock); 1790 + 1791 + set_bit(SOCK_CUSTOM_SOCKOPT, 1792 + &connected->sk_socket->flags); 1793 + 1790 1794 if (vsock_msgzerocopy_allow(vconnected->transport)) 1791 1795 set_bit(SOCK_SUPPORT_ZC, 1792 1796 &connected->sk_socket->flags);
+4
net/wireless/wext-core.c
··· 1101 1101 return ioctl_standard_call(dev, iwr, cmd, info, handler); 1102 1102 1103 1103 iwp_compat = (struct compat_iw_point *) &iwr->u.data; 1104 + 1105 + /* struct iw_point has a 32bit hole on 64bit arches. */ 1106 + memset(&iwp, 0, sizeof(iwp)); 1107 + 1104 1108 iwp.pointer = compat_ptr(iwp_compat->pointer); 1105 1109 iwp.length = iwp_compat->length; 1106 1110 iwp.flags = iwp_compat->flags;
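On 64-bit kernels struct iw_point is a pointer plus two 16-bit fields, so the compiler pads the struct out with four unused bytes; copying it to userspace field-by-field without zeroing it first leaks whatever stack bytes sit in that hole. A runnable demonstration of the hole (LP64 layout assumed, stand-in struct):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct iw_point_like {		/* stand-in for struct iw_point on LP64 */
	void *pointer;		/* 8 bytes */
	uint16_t length;	/* 2 bytes */
	uint16_t flags;		/* 2 bytes, then 4 bytes of padding */
};

int main(void)
{
	struct iw_point_like iwp;

	printf("fields = %zu bytes, sizeof = %zu bytes\n",
	       sizeof(void *) + 2 * sizeof(uint16_t),
	       sizeof(struct iw_point_like));	/* 12 vs 16 on LP64 */

	/* Assigning the three fields leaves the padding holding stale
	 * stack bytes; zero the whole object before filling it, as the
	 * wext fix does. */
	memset(&iwp, 0, sizeof(iwp));
	iwp.pointer = NULL;
	iwp.length = 0;
	iwp.flags = 0;
	return 0;
}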
+4
net/wireless/wext-priv.c
··· 228 228 struct iw_point iwp; 229 229 230 230 iwp_compat = (struct compat_iw_point *) &iwr->u.data; 231 + 232 + /* struct iw_point has a 32bit hole on 64bit arches. */ 233 + memset(&iwp, 0, sizeof(iwp)); 234 + 231 235 iwp.pointer = compat_ptr(iwp_compat->pointer); 232 236 iwp.length = iwp_compat->length; 233 237 iwp.flags = iwp_compat->flags;
+3 -4
rust/kernel/device.rs
··· 14 14 15 15 #[cfg(CONFIG_PRINTK)] 16 16 use crate::c_str; 17 - use crate::str::CStrExt as _; 18 17 19 18 pub mod property; 20 19 ··· 66 67 /// 67 68 /// # Implementing Bus Devices 68 69 /// 69 - /// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or 70 - /// [`platform::Device`]. 70 + /// This section provides a guideline to implement bus specific devices, such as: 71 + #[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")] 72 + /// * [`platform::Device`] 71 73 /// 72 74 /// A bus specific device should be defined as follows. 73 75 /// ··· 160 160 /// 161 161 /// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted 162 162 /// [`impl_device_context_deref`]: kernel::impl_device_context_deref 163 - /// [`pci::Device`]: kernel::pci::Device 164 163 /// [`platform::Device`]: kernel::platform::Device 165 164 #[repr(transparent)] 166 165 pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>);
+1 -1
rust/kernel/device_id.rs
··· 15 15 /// # Safety 16 16 /// 17 17 /// Implementers must ensure that `Self` is layout-compatible with [`RawDeviceId::RawType`]; 18 - /// i.e. it's safe to transmute to `RawDeviceId`. 18 + /// i.e. it's safe to transmute to `RawType`. 19 19 /// 20 20 /// This requirement is needed so `IdArray::new` can convert `Self` to `RawType` when building 21 21 /// the ID table.
+3 -4
rust/kernel/dma.rs
··· 27 27 /// Trait to be implemented by DMA capable bus devices. 28 28 /// 29 29 /// The [`dma::Device`](Device) trait should be implemented by bus specific device representations, 30 - /// where the underlying bus is DMA capable, such as [`pci::Device`](::kernel::pci::Device) or 31 - /// [`platform::Device`](::kernel::platform::Device). 30 + /// where the underlying bus is DMA capable, such as: 31 + #[cfg_attr(CONFIG_PCI, doc = "* [`pci::Device`](kernel::pci::Device)")] 32 + /// * [`platform::Device`](::kernel::platform::Device) 32 33 pub trait Device: AsRef<device::Device<Core>> { 33 34 /// Set up the device's DMA streaming addressing capabilities. 34 35 /// ··· 533 532 /// 534 533 /// # Safety 535 534 /// 536 - /// * Callers must ensure that the device does not read/write to/from memory while the returned 537 - /// slice is live. 538 535 /// * Callers must ensure that this call does not race with a read or write to the same region 539 536 /// that overlaps with this write. 540 537 ///
+8 -4
rust/kernel/driver.rs
··· 33 33 //! } 34 34 //! ``` 35 35 //! 36 - //! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`]. 36 + //! For specific examples see: 37 + //! 38 + //! * [`platform::Driver`](kernel::platform::Driver) 39 + #![cfg_attr( 40 + CONFIG_AUXILIARY_BUS, 41 + doc = "* [`auxiliary::Driver`](kernel::auxiliary::Driver)" 42 + )] 43 + #![cfg_attr(CONFIG_PCI, doc = "* [`pci::Driver`](kernel::pci::Driver)")] 37 44 //! 38 45 //! The `probe()` callback should return a `impl PinInit<Self, Error>`, i.e. the driver's private 39 46 //! data. The bus abstraction should store the pointer in the corresponding bus device. The generic ··· 86 79 //! 87 80 //! For this purpose the generic infrastructure in [`device_id`] should be used. 88 81 //! 89 - //! [`auxiliary::Driver`]: kernel::auxiliary::Driver 90 82 //! [`Core`]: device::Core 91 83 //! [`Device`]: device::Device 92 84 //! [`Device<Core>`]: device::Device<device::Core> ··· 93 87 //! [`DeviceContext`]: device::DeviceContext 94 88 //! [`device_id`]: kernel::device_id 95 89 //! [`module_driver`]: kernel::module_driver 96 - //! [`pci::Driver`]: kernel::pci::Driver 97 - //! [`platform::Driver`]: kernel::platform::Driver 98 90 99 91 use crate::error::{Error, Result}; 100 92 use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
+2 -2
rust/kernel/pci/io.rs
··· 20 20 /// 21 21 /// # Invariants 22 22 /// 23 - /// `Bar` always holds an `IoRaw` inststance that holds a valid pointer to the start of the I/O 23 + /// `Bar` always holds an `IoRaw` instance that holds a valid pointer to the start of the I/O 24 24 /// memory mapped PCI BAR and its size. 25 25 pub struct Bar<const SIZE: usize = 0> { 26 26 pdev: ARef<Device>, ··· 54 54 let ioptr: usize = unsafe { bindings::pci_iomap(pdev.as_raw(), num, 0) } as usize; 55 55 if ioptr == 0 { 56 56 // SAFETY: 57 - // `pdev` valid by the invariants of `Device`. 57 + // `pdev` is valid by the invariants of `Device`. 58 58 // `num` is checked for validity by a previous call to `Device::resource_len`. 59 59 unsafe { bindings::pci_release_region(pdev.as_raw(), num) }; 60 60 return Err(ENOMEM);
+1 -1
scripts/crypto/gen-hash-testvecs.py
··· 118 118 def alg_digest_size_const(alg): 119 119 if alg.startswith('blake2'): 120 120 return f'{alg.upper()}_HASH_SIZE' 121 - return f'{alg.upper().replace('-', '_')}_DIGEST_SIZE' 121 + return f"{alg.upper().replace('-', '_')}_DIGEST_SIZE" 122 122 123 123 def gen_unkeyed_testvecs(alg): 124 124 print('')
+1 -1
scripts/spdxcheck.py
··· 1 1 #!/usr/bin/env python3 2 2 # SPDX-License-Identifier: GPL-2.0 3 - # Copyright Thomas Gleixner <tglx@linutronix.de> 3 + # Copyright Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 4 4 5 5 from argparse import ArgumentParser 6 6 from ply import lex, yacc
+5 -5
sound/ac97/bus.c
··· 298 298 idr_remove(&ac97_adapter_idr, ac97_ctrl->nr); 299 299 dev_dbg(&ac97_ctrl->adap, "adapter unregistered by %s\n", 300 300 dev_name(ac97_ctrl->parent)); 301 + kfree(ac97_ctrl); 301 302 } 302 303 303 304 static const struct device_type ac97_adapter_type = { ··· 320 319 ret = device_register(&ac97_ctrl->adap); 321 320 if (ret) 322 321 put_device(&ac97_ctrl->adap); 323 - } 322 + } else 323 + kfree(ac97_ctrl); 324 + 324 325 if (!ret) { 325 326 list_add(&ac97_ctrl->controllers, &ac97_controllers); 326 327 dev_dbg(&ac97_ctrl->adap, "adapter registered by %s\n", ··· 364 361 ret = ac97_add_adapter(ac97_ctrl); 365 362 366 363 if (ret) 367 - goto err; 364 + return ERR_PTR(ret); 368 365 ac97_bus_reset(ac97_ctrl); 369 366 ac97_bus_scan(ac97_ctrl); 370 367 371 368 return ac97_ctrl; 372 - err: 373 - kfree(ac97_ctrl); 374 - return ERR_PTR(ret); 375 369 } 376 370 EXPORT_SYMBOL_GPL(snd_ac97_controller_register); 377 371
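The leak fixes follow the driver-core ownership rule: before the object is handed to the device core, the caller frees it with kfree(); once device_initialize()/device_register() has run, only put_device() may release it, via the release callback. A condensed sketch (struct my_adapter and all names are hypothetical; the device-core calls are real):

#include <linux/device.h>
#include <linux/slab.h>

struct my_adapter {			/* hypothetical wrapper object */
	struct device dev;
};

static void my_adapter_release(struct device *dev)
{
	kfree(container_of(dev, struct my_adapter, dev));
}

static struct my_adapter *my_adapter_create(void)
{
	struct my_adapter *adap = kzalloc(sizeof(*adap), GFP_KERNEL);

	if (!adap)
		return NULL;

	/* Up to here a plain kfree() would be the right cleanup. */
	device_initialize(&adap->dev);
	adap->dev.release = my_adapter_release;
	dev_set_name(&adap->dev, "my_adapter");

	if (device_add(&adap->dev)) {
		/* The refcount owns the object now: put_device() ends in
		 * ->release(); kfree() here would double-free later. */
		put_device(&adap->dev);
		return NULL;
	}
	return adap;
}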
+2
sound/hda/codecs/realtek/alc269.c
··· 6321 6321 SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC), 6322 6322 SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE), 6323 6323 SND_PCI_QUIRK(0x1025, 0x159c, "Acer Nitro 5 AN515-58", ALC2XX_FIXUP_HEADSET_MIC), 6324 + SND_PCI_QUIRK(0x1025, 0x1597, "Acer Nitro 5 AN517-55", ALC2XX_FIXUP_HEADSET_MIC), 6324 6325 SND_PCI_QUIRK(0x1025, 0x169a, "Acer Swift SFG16", ALC256_FIXUP_ACER_SFG16_MICMUTE_LED), 6325 6326 SND_PCI_QUIRK(0x1025, 0x1826, "Acer Helios ZPC", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2), 6326 6327 SND_PCI_QUIRK(0x1025, 0x182c, "Acer Helios ZPD", ALC287_FIXUP_PREDATOR_SPK_CS35L41_I2C_2), ··· 6509 6508 SND_PCI_QUIRK(0x103c, 0x863e, "HP Spectre x360 15-df1xxx", ALC285_FIXUP_HP_SPECTRE_X360_DF1), 6510 6509 SND_PCI_QUIRK(0x103c, 0x86e8, "HP Spectre x360 15-eb0xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1), 6511 6510 SND_PCI_QUIRK(0x103c, 0x86f9, "HP Spectre x360 13-aw0xxx", ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED), 6511 + SND_PCI_QUIRK(0x103c, 0x8706, "HP Laptop 15s-eq1xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 6512 6512 SND_PCI_QUIRK(0x103c, 0x8716, "HP Elite Dragonfly G2 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 6513 6513 SND_PCI_QUIRK(0x103c, 0x8720, "HP EliteBook x360 1040 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_AMP_INIT), 6514 6514 SND_PCI_QUIRK(0x103c, 0x8724, "HP EliteBook 850 G7", ALC285_FIXUP_HP_GPIO_LED),
+3 -1
sound/hda/codecs/side-codecs/tas2781_hda_i2c.c
··· 111 111 sub = acpi_get_subsystem_id(ACPI_HANDLE(physdev)); 112 112 if (IS_ERR(sub)) { 113 113 /* No subsys id in older tas2563 projects. */ 114 - if (!strncmp(hid, "INT8866", sizeof("INT8866"))) 114 + if (!strncmp(hid, "INT8866", sizeof("INT8866"))) { 115 + p->speaker_id = -1; 115 116 goto end_2563; 117 + } 116 118 dev_err(p->dev, "Failed to get SUBSYS ID.\n"); 117 119 ret = PTR_ERR(sub); 118 120 goto err;
+2 -15
sound/soc/codecs/pm4125.c
··· 1505 1505 struct device_link *devlink; 1506 1506 int ret; 1507 1507 1508 - /* Initialize device pointers to NULL for safe cleanup */ 1509 - pm4125->rxdev = NULL; 1510 - pm4125->txdev = NULL; 1511 - 1512 1508 /* Give the soundwire subdevices some more time to settle */ 1513 1509 usleep_range(15000, 15010); 1514 1510 ··· 1533 1537 1534 1538 pm4125->sdw_priv[AIF1_CAP] = dev_get_drvdata(pm4125->txdev); 1535 1539 pm4125->sdw_priv[AIF1_CAP]->pm4125 = pm4125; 1536 - 1537 1540 pm4125->tx_sdw_dev = dev_to_sdw_dev(pm4125->txdev); 1538 - if (!pm4125->tx_sdw_dev) { 1539 - dev_err(dev, "could not get txslave with matching of dev\n"); 1540 - ret = -EINVAL; 1541 - goto error_put_tx; 1542 - } 1543 1541 1544 1542 /* 1545 1543 * As TX is the main CSR reg interface, which should not be suspended first. ··· 1614 1624 device_link_remove(dev, pm4125->rxdev); 1615 1625 device_link_remove(pm4125->rxdev, pm4125->txdev); 1616 1626 1617 - /* Release device references acquired in bind */ 1618 - if (pm4125->txdev) 1619 - put_device(pm4125->txdev); 1620 - if (pm4125->rxdev) 1621 - put_device(pm4125->rxdev); 1627 + put_device(pm4125->txdev); 1628 + put_device(pm4125->rxdev); 1622 1629 1623 1630 component_unbind_all(dev, pm4125); 1624 1631 }
+141 -8
sound/soc/codecs/tlv320adcx140.c
··· 22 22 23 23 #include "tlv320adcx140.h" 24 24 25 + static const char *const adcx140_supply_names[] = { 26 + "avdd", 27 + "iovdd", 28 + }; 29 + 30 + #define ADCX140_NUM_SUPPLIES ARRAY_SIZE(adcx140_supply_names) 31 + 25 32 struct adcx140_priv { 26 - struct snd_soc_component *component; 27 33 struct regulator *supply_areg; 34 + struct regulator_bulk_data supplies[ADCX140_NUM_SUPPLIES]; 28 35 struct gpio_desc *gpio_reset; 29 36 struct regmap *regmap; 30 37 struct device *dev; ··· 129 122 { ADCX140_DEV_STS1, 0x80 }, 130 123 }; 131 124 125 + static const struct regmap_range adcx140_wr_ranges[] = { 126 + regmap_reg_range(ADCX140_PAGE_SELECT, ADCX140_SLEEP_CFG), 127 + regmap_reg_range(ADCX140_SHDN_CFG, ADCX140_SHDN_CFG), 128 + regmap_reg_range(ADCX140_ASI_CFG0, ADCX140_ASI_CFG2), 129 + regmap_reg_range(ADCX140_ASI_CH1, ADCX140_MST_CFG1), 130 + regmap_reg_range(ADCX140_CLK_SRC, ADCX140_CLK_SRC), 131 + regmap_reg_range(ADCX140_PDMCLK_CFG, ADCX140_GPO_CFG3), 132 + regmap_reg_range(ADCX140_GPO_VAL, ADCX140_GPO_VAL), 133 + regmap_reg_range(ADCX140_GPI_CFG0, ADCX140_GPI_CFG1), 134 + regmap_reg_range(ADCX140_GPI_MON, ADCX140_GPI_MON), 135 + regmap_reg_range(ADCX140_INT_CFG, ADCX140_INT_MASK0), 136 + regmap_reg_range(ADCX140_BIAS_CFG, ADCX140_CH4_CFG4), 137 + regmap_reg_range(ADCX140_CH5_CFG2, ADCX140_CH5_CFG4), 138 + regmap_reg_range(ADCX140_CH6_CFG2, ADCX140_CH6_CFG4), 139 + regmap_reg_range(ADCX140_CH7_CFG2, ADCX140_CH7_CFG4), 140 + regmap_reg_range(ADCX140_CH8_CFG2, ADCX140_CH8_CFG4), 141 + regmap_reg_range(ADCX140_DSP_CFG0, ADCX140_DRE_CFG0), 142 + regmap_reg_range(ADCX140_AGC_CFG0, ADCX140_AGC_CFG0), 143 + regmap_reg_range(ADCX140_IN_CH_EN, ADCX140_PWR_CFG), 144 + regmap_reg_range(ADCX140_PHASE_CALIB, ADCX140_PHASE_CALIB), 145 + regmap_reg_range(0x7e, 0x7e), 146 + }; 147 + 148 + static const struct regmap_access_table adcx140_wr_table = { 149 + .yes_ranges = adcx140_wr_ranges, 150 + .n_yes_ranges = ARRAY_SIZE(adcx140_wr_ranges), 151 + }; 152 + 132 153 static const struct regmap_range_cfg adcx140_ranges[] = { 133 154 { 134 155 .range_min = 0, ··· 192 157 .num_ranges = ARRAY_SIZE(adcx140_ranges), 193 158 .max_register = 12 * 128, 194 159 .volatile_reg = adcx140_volatile, 160 + .wr_table = &adcx140_wr_table, 195 161 }; 196 162 197 163 /* Digital Volume control. 
From -100 to 27 dB in 0.5 dB steps */ ··· 221 185 static const struct snd_kcontrol_new decimation_filter_controls[] = { 222 186 SOC_DAPM_ENUM("Decimation Filter", decimation_filter_enum), 223 187 }; 188 + 189 + static const char * const channel_summation_text[] = { 190 + "Disabled", "2 Channel", "4 Channel" 191 + }; 192 + 193 + static SOC_ENUM_SINGLE_DECL(channel_summation_enum, ADCX140_DSP_CFG0, 2, 194 + channel_summation_text); 224 195 225 196 static const char * const pdmclk_text[] = { 226 197 "2.8224 MHz", "1.4112 MHz", "705.6 kHz", "5.6448 MHz" ··· 381 338 SOC_DAPM_SINGLE("Switch", ADCX140_CH4_CFG0, 0, 1, 0); 382 339 383 340 static const struct snd_kcontrol_new adcx140_dapm_dre_en_switch = 384 - SOC_DAPM_SINGLE("Switch", ADCX140_DSP_CFG1, 3, 1, 0); 341 + SOC_DAPM_SINGLE("Switch", ADCX140_DSP_CFG1, 3, 1, 1); 385 342 386 343 /* Output Mixer */ 387 344 static const struct snd_kcontrol_new adcx140_output_mixer_controls[] = { ··· 716 673 SOC_SINGLE_TLV("Digital CH8 Out Volume", ADCX140_CH8_CFG2, 717 674 0, 0xff, 0, dig_vol_tlv), 718 675 ADCX140_PHASE_CALIB_SWITCH("Phase Calibration Switch"), 676 + 677 + SOC_SINGLE("Biquads Per Channel", ADCX140_DSP_CFG1, 5, 3, 0), 678 + 679 + SOC_ENUM("Channel Summation", channel_summation_enum), 719 680 }; 720 681 721 682 static int adcx140_reset(struct adcx140_priv *adcx140) ··· 746 699 { 747 700 int pwr_ctrl = 0; 748 701 int ret = 0; 749 - struct snd_soc_component *component = adcx140->component; 750 702 751 703 if (power_state) 752 704 pwr_ctrl = ADCX140_PWR_CFG_ADC_PDZ | ADCX140_PWR_CFG_PLL_PDZ; ··· 757 711 ret = regmap_write(adcx140->regmap, ADCX140_PHASE_CALIB, 758 712 adcx140->phase_calib_on ? 0x00 : 0x40); 759 713 if (ret) 760 - dev_err(component->dev, "%s: register write error %d\n", 714 + dev_err(adcx140->dev, "%s: register write error %d\n", 761 715 __func__, ret); 762 716 } 763 717 ··· 773 727 struct adcx140_priv *adcx140 = snd_soc_component_get_drvdata(component); 774 728 u8 data = 0; 775 729 776 - switch (params_width(params)) { 730 + switch (params_physical_width(params)) { 777 731 case 16: 778 732 data = ADCX140_16_BIT_WORD; 779 733 break; ··· 788 742 break; 789 743 default: 790 744 dev_err(component->dev, "%s: Unsupported width %d\n", 791 - __func__, params_width(params)); 745 + __func__, params_physical_width(params)); 792 746 return -EINVAL; 793 747 } 794 748 ··· 1121 1075 return ret; 1122 1076 } 1123 1077 1078 + static int adcx140_pwr_off(struct adcx140_priv *adcx140) 1079 + { 1080 + int ret; 1081 + 1082 + regcache_cache_only(adcx140->regmap, true); 1083 + regcache_mark_dirty(adcx140->regmap); 1084 + 1085 + /* Assert the reset GPIO */ 1086 + gpiod_set_value_cansleep(adcx140->gpio_reset, 0); 1087 + 1088 + /* 1089 + * Datasheet - TLV320ADC3140 Rev. B, TLV320ADC5140 Rev. A, 1090 + * TLV320ADC6140 Rev. 
A 8.4.1: 1091 + * wait for hw shutdown (25ms) + >= 1ms 1092 + */ 1093 + usleep_range(30000, 100000); 1094 + 1095 + /* Power off the regulators, `avdd` and `iovdd` */ 1096 + ret = regulator_bulk_disable(ARRAY_SIZE(adcx140->supplies), 1097 + adcx140->supplies); 1098 + if (ret) { 1099 + dev_err(adcx140->dev, "Failed to disable supplies: %d\n", ret); 1100 + return ret; 1101 + } 1102 + 1103 + return 0; 1104 + } 1105 + 1106 + static int adcx140_pwr_on(struct adcx140_priv *adcx140) 1107 + { 1108 + int ret; 1109 + 1110 + /* Power on the regulators, `avdd` and `iovdd` */ 1111 + ret = regulator_bulk_enable(ARRAY_SIZE(adcx140->supplies), 1112 + adcx140->supplies); 1113 + if (ret) { 1114 + dev_err(adcx140->dev, "Failed to enable supplies: %d\n", ret); 1115 + return ret; 1116 + } 1117 + 1118 + /* De-assert the reset GPIO */ 1119 + gpiod_set_value_cansleep(adcx140->gpio_reset, 1); 1120 + 1121 + /* 1122 + * Datasheet - TLV320ADC3140 Rev. B, TLV320ADC5140 Rev. A, 1123 + * TLV320ADC6140 Rev. A 8.4.2: 1124 + * wait >= 10 ms after entering sleep mode. 1125 + */ 1126 + usleep_range(10000, 100000); 1127 + 1128 + regcache_cache_only(adcx140->regmap, false); 1129 + 1130 + /* Flush the regcache */ 1131 + ret = regcache_sync(adcx140->regmap); 1132 + if (ret) { 1133 + dev_err(adcx140->dev, "Failed to restore register map: %d\n", 1134 + ret); 1135 + return ret; 1136 + } 1137 + 1138 + return 0; 1139 + } 1140 + 1124 1141 static int adcx140_set_bias_level(struct snd_soc_component *component, 1125 1142 enum snd_soc_bias_level level) 1126 1143 { 1127 1144 struct adcx140_priv *adcx140 = snd_soc_component_get_drvdata(component); 1145 + enum snd_soc_bias_level prev_level 1146 + = snd_soc_component_get_bias_level(component); 1128 1147 1129 1148 switch (level) { 1130 1149 case SND_SOC_BIAS_ON: 1131 1150 case SND_SOC_BIAS_PREPARE: 1151 + if (prev_level == SND_SOC_BIAS_STANDBY) 1152 + adcx140_pwr_ctrl(adcx140, true); 1153 + break; 1132 1154 case SND_SOC_BIAS_STANDBY: 1133 - adcx140_pwr_ctrl(adcx140, true); 1155 + if (prev_level == SND_SOC_BIAS_PREPARE) 1156 + adcx140_pwr_ctrl(adcx140, false); 1157 + if (prev_level == SND_SOC_BIAS_OFF) 1158 + return adcx140_pwr_on(adcx140); 1134 1159 break; 1135 1160 case SND_SOC_BIAS_OFF: 1136 - adcx140_pwr_ctrl(adcx140, false); 1161 + if (prev_level == SND_SOC_BIAS_STANDBY) 1162 + return adcx140_pwr_off(adcx140); 1137 1163 break; 1138 1164 } 1139 1165 ··· 1271 1153 adcx140->phase_calib_on = false; 1272 1154 adcx140->dev = &i2c->dev; 1273 1155 1156 + for (int i = 0; i < ADCX140_NUM_SUPPLIES; i++) 1157 + adcx140->supplies[i].supply = adcx140_supply_names[i]; 1158 + 1159 + ret = devm_regulator_bulk_get(&i2c->dev, ADCX140_NUM_SUPPLIES, 1160 + adcx140->supplies); 1161 + if (ret) { 1162 + dev_err_probe(&i2c->dev, ret, "Failed to request supplies\n"); 1163 + return ret; 1164 + } 1165 + 1274 1166 adcx140->gpio_reset = devm_gpiod_get_optional(adcx140->dev, 1275 1167 "reset", GPIOD_OUT_LOW); 1276 1168 if (IS_ERR(adcx140->gpio_reset)) 1169 + return dev_err_probe(&i2c->dev, PTR_ERR(adcx140->gpio_reset), 1170 + "Failed to get Reset GPIO\n"); 1171 + if (!adcx140->gpio_reset) 1277 1172 dev_info(&i2c->dev, "Reset GPIO not defined\n"); 1278 1173 1279 1174 adcx140->supply_areg = devm_regulator_get_optional(adcx140->dev, ··· 1315 1184 ret); 1316 1185 return ret; 1317 1186 } 1187 + 1188 + regcache_cache_only(adcx140->regmap, true); 1318 1189 1319 1190 i2c_set_clientdata(i2c, adcx140); 1320 1191
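The power on/off helpers added above are the standard regmap cache sequence for a part that loses its register state when the supplies drop. A distilled sketch with the driver-specific delays and reset-GPIO handling elided (the regmap and regulator calls are the real APIs; the wrapper names are illustrative):

#include <linux/regmap.h>
#include <linux/regulator/consumer.h>

/* Going down: stop bus traffic, remember that the hardware will
 * forget everything, then cut the supplies. */
static int codec_power_off(struct regmap *map,
			   struct regulator_bulk_data *supplies, int num)
{
	regcache_cache_only(map, true);	/* writes now hit only the cache */
	regcache_mark_dirty(map);	/* every register needs replaying */

	return regulator_bulk_disable(num, supplies);
}

/* Coming up: restore the supplies, re-enable bus access, replay the
 * cached values into the freshly reset part. */
static int codec_power_on(struct regmap *map,
			  struct regulator_bulk_data *supplies, int num)
{
	int ret = regulator_bulk_enable(num, supplies);

	if (ret)
		return ret;

	regcache_cache_only(map, false);
	return regcache_sync(map);	/* rewrite the dirty registers */
}

The writable-ranges table added at the top of the file complements this flow: regcache_sync() issues normal writes, so restricting writable registers keeps the restore from touching read-only or status registers.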
-5
sound/soc/codecs/wcd937x.c
··· 2763 2763 wcd937x->sdw_priv[AIF1_CAP] = dev_get_drvdata(wcd937x->txdev); 2764 2764 wcd937x->sdw_priv[AIF1_CAP]->wcd937x = wcd937x; 2765 2765 wcd937x->tx_sdw_dev = dev_to_sdw_dev(wcd937x->txdev); 2766 - if (!wcd937x->tx_sdw_dev) { 2767 - dev_err(dev, "could not get txslave with matching of dev\n"); 2768 - ret = -EINVAL; 2769 - goto err_put_txdev; 2770 - } 2771 2766 2772 2767 /* 2773 2768 * As TX is the main CSR reg interface, which should not be suspended first.
+3 -3
sound/soc/intel/boards/sof_sdw_common.h
··· 46 46 #define SOC_SDW_NO_AGGREGATION BIT(14) 47 47 48 48 /* BT audio offload: reserve 3 bits for future */ 49 - #define SOF_BT_OFFLOAD_SSP_SHIFT 15 50 - #define SOF_BT_OFFLOAD_SSP_MASK (GENMASK(17, 15)) 49 + #define SOF_BT_OFFLOAD_SSP_SHIFT 18 50 + #define SOF_BT_OFFLOAD_SSP_MASK (GENMASK(20, 18)) 51 51 #define SOF_BT_OFFLOAD_SSP(quirk) \ 52 52 (((quirk) << SOF_BT_OFFLOAD_SSP_SHIFT) & SOF_BT_OFFLOAD_SSP_MASK) 53 - #define SOF_SSP_BT_OFFLOAD_PRESENT BIT(18) 53 + #define SOF_SSP_BT_OFFLOAD_PRESENT BIT(21) 54 54 55 55 struct intel_mc_ctx { 56 56 struct sof_hdmi_private hdmi;
-4
sound/soc/sdw_utils/soc_sdw_utils.c
··· 1428 1428 } 1429 1429 1430 1430 slave = dev_to_sdw_dev(sdw_dev); 1431 - if (!slave) { 1432 - ret = -EINVAL; 1433 - goto put_device; 1434 - } 1435 1431 1436 1432 /* Make sure BIOS provides SDCA properties */ 1437 1433 if (!slave->sdca_data.interface_revision) {
+13 -1
sound/soc/sof/intel/hda.c
··· 1549 1549 * name string if quirk flag is set. 1550 1550 */ 1551 1551 if (mach) { 1552 + const struct sof_intel_dsp_desc *chip = get_chip_info(sdev->pdata); 1552 1553 bool tplg_fixup = false; 1553 1554 bool dmic_fixup = false; 1554 1555 ··· 1599 1598 sof_pdata->tplg_filename = tplg_filename; 1600 1599 } 1601 1600 1601 + if (tplg_fixup && mach->mach_params.bt_link_mask && 1602 + chip->hw_ip_version >= SOF_INTEL_ACE_4_0) { 1603 + int bt_port = fls(mach->mach_params.bt_link_mask) - 1; 1604 + 1605 + tplg_filename = devm_kasprintf(sdev->dev, GFP_KERNEL, "%s-ssp%d-bt", 1606 + sof_pdata->tplg_filename, bt_port); 1607 + if (!tplg_filename) 1608 + return NULL; 1609 + 1610 + sof_pdata->tplg_filename = tplg_filename; 1611 + } 1612 + 1602 1613 if (mach->link_mask) { 1603 1614 mach->mach_params.links = mach->links; 1604 1615 mach->mach_params.link_mask = mach->link_mask; ··· 1622 1609 if (tplg_fixup && 1623 1610 mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_SSP_NUMBER && 1624 1611 mach->mach_params.i2s_link_mask) { 1625 - const struct sof_intel_dsp_desc *chip = get_chip_info(sdev->pdata); 1626 1612 int ssp_num; 1627 1613 1628 1614 if (hweight_long(mach->mach_params.i2s_link_mask) > 1 &&
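fls() returns the 1-based index of the most significant set bit (with fls(0) == 0), so fls(mask) - 1 converts the BT link mask into the highest-numbered SSP port for the topology-name suffix. A quick runnable check with a userspace stand-in:

#include <stdio.h>

static int fls_u32(unsigned int x)	/* userspace stand-in for fls() */
{
	return x ? 32 - __builtin_clz(x) : 0;
}

int main(void)
{
	unsigned int bt_link_mask = 0x04;	/* only SSP2 set */

	printf("bt port = %d\n", fls_u32(bt_link_mask) - 1);	/* 2 */
	return 0;
}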
+2
sound/soc/sunxi/sun4i-spdif.c
··· 171 171 * @reg_dac_txdata: TX FIFO offset for DMA config. 172 172 * @has_reset: SoC needs reset deasserted. 173 173 * @val_fctl_ftx: TX FIFO flush bitmask. 174 + * @mclk_multiplier: ratio of the internal MCLK divider. 175 + * @tx_clk_name: name of the TX module clock on split-clock designs. 174 176 struct sun4i_spdif_quirks { 175 177 unsigned int reg_dac_txdata;
+1 -1
tools/include/uapi/linux/perf_event.h
··· 2 2 /* 3 3 * Performance events: 4 4 * 5 - * Copyright (C) 2008-2009, Thomas Gleixner <tglx@linutronix.de> 5 + * Copyright (C) 2008-2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 6 6 * Copyright (C) 2008-2011, Red Hat, Inc., Ingo Molnar 7 7 * Copyright (C) 2008-2011, Red Hat, Inc., Peter Zijlstra 8 8 *
-1
tools/net/ynl/Makefile
··· 51 51 @echo -e "\tINSTALL pyynl" 52 52 @pip install --prefix=$(DESTDIR)$(prefix) . 53 53 @make -C generated install 54 - @make -C tests install 55 54 56 55 run_tests: 57 56 @$(MAKE) -C tests run_tests
+1 -1
tools/perf/builtin-list.c
··· 4 4 * 5 5 * Builtin list command: list all event types 6 6 * 7 - * Copyright (C) 2009, Thomas Gleixner <tglx@linutronix.de> 7 + * Copyright (C) 2009, Linutronix GmbH, Thomas Gleixner <tglx@kernel.org> 8 8 * Copyright (C) 2008-2009, Red Hat Inc, Ingo Molnar <mingo@redhat.com> 9 9 * Copyright (C) 2011, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> 10 10 */
+2 -2
tools/testing/selftests/drivers/net/hw/lib/py/__init__.py
··· 22 22 NlError, RtnlFamily, DevlinkFamily, PSPFamily 23 23 from net.lib.py import CmdExitFailure 24 24 from net.lib.py import bkg, cmd, bpftool, bpftrace, defer, ethtool, \ 25 - fd_read_timeout, ip, rand_port, wait_port_listen, wait_file 25 + fd_read_timeout, ip, rand_port, wait_port_listen, wait_file, tool 26 26 from net.lib.py import KsftSkipEx, KsftFailEx, KsftXfailEx 27 27 from net.lib.py import ksft_disruptive, ksft_exit, ksft_pr, ksft_run, \ 28 28 ksft_setup, ksft_variants, KsftNamedVariant ··· 37 37 "CmdExitFailure", 38 38 "bkg", "cmd", "bpftool", "bpftrace", "defer", "ethtool", 39 39 "fd_read_timeout", "ip", "rand_port", 40 - "wait_port_listen", "wait_file", 40 + "wait_port_listen", "wait_file", "tool", 41 41 "KsftSkipEx", "KsftFailEx", "KsftXfailEx", 42 42 "ksft_disruptive", "ksft_exit", "ksft_pr", "ksft_run", 43 43 "ksft_setup", "ksft_variants", "KsftNamedVariant",
+59
tools/testing/selftests/drivers/net/netdevsim/peer.sh
··· 52 52 ip netns del nssv 53 53 } 54 54 55 + is_carrier_up() 56 + { 57 + local netns="$1" 58 + local nsim_dev="$2" 59 + 60 + test "$(ip netns exec "$netns" \ 61 + cat /sys/class/net/"$nsim_dev"/carrier 2>/dev/null)" -eq 1 62 + } 63 + 64 + assert_carrier_up() 65 + { 66 + local netns="$1" 67 + local nsim_dev="$2" 68 + 69 + if ! is_carrier_up "$netns" "$nsim_dev"; then 70 + echo "$nsim_dev's carrier should be UP, but it isn't" 71 + cleanup_ns 72 + exit 1 73 + fi 74 + } 75 + 76 + assert_carrier_down() 77 + { 78 + local netns="$1" 79 + local nsim_dev="$2" 80 + 81 + if is_carrier_up "$netns" "$nsim_dev"; then 82 + echo "$nsim_dev's carrier should be DOWN, but it isn't" 83 + cleanup_ns 84 + exit 1 85 + fi 86 + } 87 + 55 88 ### 56 89 ### Code start 57 90 ### ··· 145 112 cleanup_ns 146 113 exit 1 147 114 fi 115 + 116 + # netdevsim carrier state consistency checking 117 + assert_carrier_up nssv "$NSIM_DEV_1_NAME" 118 + assert_carrier_up nscl "$NSIM_DEV_2_NAME" 119 + 120 + echo "$NSIM_DEV_1_FD:$NSIM_DEV_1_IFIDX" > "$NSIM_DEV_SYS_UNLINK" 121 + 122 + assert_carrier_down nssv "$NSIM_DEV_1_NAME" 123 + assert_carrier_down nscl "$NSIM_DEV_2_NAME" 124 + 125 + ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" down 126 + ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" up 127 + 128 + assert_carrier_down nssv "$NSIM_DEV_1_NAME" 129 + assert_carrier_down nscl "$NSIM_DEV_2_NAME" 130 + 131 + echo "$NSIM_DEV_1_FD:$NSIM_DEV_1_IFIDX $NSIM_DEV_2_FD:$NSIM_DEV_2_IFIDX" > $NSIM_DEV_SYS_LINK 132 + 133 + assert_carrier_up nssv "$NSIM_DEV_1_NAME" 134 + assert_carrier_up nscl "$NSIM_DEV_2_NAME" 135 + 136 + ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" down 137 + ip netns exec nssv ip link set dev "$NSIM_DEV_1_NAME" up 138 + 139 + assert_carrier_up nssv "$NSIM_DEV_1_NAME" 140 + assert_carrier_up nscl "$NSIM_DEV_2_NAME" 148 141 149 142 # send/recv packets 150 143
+17 -1
tools/testing/selftests/ftrace/test.d/00basic/trace_marker_raw.tc
··· 89 89 # The id must be four bytes, test that 3 bytes fails a write 90 90 if echo -n abc > ./trace_marker_raw ; then 91 91 echo "Too small of write expected to fail but did not" 92 + echo ${ORIG} > buffer_size_kb 92 93 exit_fail 93 94 fi 94 95 ··· 100 99 101 100 if write_buffer 0xdeadbeef $size ; then 102 101 echo "Too big of write expected to fail but did not" 102 + echo ${ORIG} > buffer_size_kb 103 103 exit_fail 104 104 fi 105 105 } 106 106 107 + ORIG=`cat buffer_size_kb` 108 + 109 + # test_multiple_writes test needs at least 12KB buffer 110 + NEW_SIZE=12 111 + 112 + if [ ${ORIG} -lt ${NEW_SIZE} ]; then 113 + echo ${NEW_SIZE} > buffer_size_kb 114 + fi 115 + 107 116 test_buffer 108 - test_multiple_writes 117 + if ! test_multiple_writes; then 118 + echo ${ORIG} > buffer_size_kb 119 + exit_fail 120 + fi 121 + 122 + echo ${ORIG} > buffer_size_kb
+2
tools/testing/selftests/hid/Makefile
··· 184 184 185 185 CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG)) 186 186 BPF_CFLAGS = -g -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \ 187 + -Wno-microsoft-anon-tag \ 188 + -fms-extensions \ 187 189 -I$(INCLUDE_DIR) 188 190 189 191 CLANG_CFLAGS = $(CLANG_SYS_INCLUDES) \
+14
tools/testing/selftests/hid/tests/conftest.py
··· 5 5 # Copyright (c) 2017 Benjamin Tissoires <benjamin.tissoires@gmail.com> 6 6 # Copyright (c) 2017 Red Hat, Inc. 7 7 8 + from packaging.version import Version 8 9 import platform 9 10 import pytest 10 11 import re ··· 13 12 import subprocess 14 13 from .base import HIDTestUdevRule 15 14 from pathlib import Path 15 + 16 + 17 + @pytest.fixture(autouse=True) 18 + def hidtools_version_check(): 19 + HIDTOOLS_VERSION = "0.12" 20 + try: 21 + import hidtools 22 + 23 + version = hidtools.__version__ # type: ignore 24 + if Version(version) < Version(HIDTOOLS_VERSION): 25 + pytest.skip(reason=f"have hidtools {version}, require >={HIDTOOLS_VERSION}") 26 + except Exception: 27 + pytest.skip(reason=f"hidtools >={HIDTOOLS_VERSION} required") 16 28 17 29 18 30 # See the comment in HIDTestUdevRule, this doesn't set up but it will clean
+48 -13
tools/testing/selftests/hid/tests/test_multitouch.py
··· 9 9 from . import base 10 10 from hidtools.hut import HUT 11 11 from hidtools.util import BusType 12 + import enum 12 13 import libevdev 13 14 import logging 14 15 import pytest ··· 233 232 return 0 234 233 235 234 235 + class HIDButtonType(enum.IntEnum): 236 + CLICKPAD = 0 237 + PRESSUREPAD = 1 238 + DISCRETE_BUTTONS = 2 239 + 240 + 236 241 class PTP(Digitizer): 237 242 def __init__( 238 243 self, 239 244 name, 240 - type="Click Pad", 245 + buttontype=HIDButtonType.CLICKPAD, 241 246 rdesc_str=None, 242 247 rdesc=None, 243 248 application="Touch Pad", ··· 251 244 max_contacts=None, 252 245 input_info=None, 253 246 ): 254 - self.type = type.lower().replace(" ", "") 255 - if self.type == "clickpad": 256 - self.buttontype = 0 257 - else: # pressurepad 258 - self.buttontype = 1 247 + self.buttontype = buttontype 248 + 259 249 self.clickpad_state = False 260 250 self.left_state = False 261 251 self.right_state = False ··· 979 975 assert libevdev.InputEvent(libevdev.EV_ABS.ABS_MT_ORIENTATION, 90) in events 980 976 981 977 class TestPTP(TestWin8Multitouch): 978 + def test_buttontype(self): 979 + """Check for the right ButtonType.""" 980 + uhdev = self.uhdev 981 + assert uhdev is not None 982 + evdev = uhdev.get_evdev() 983 + 984 + # If libevdev.so is not yet compiled with INPUT_PROP_PRESSUREPAD 985 + # python-libevdev won't have it either, let's fake it 986 + if not getattr(libevdev, "INPUT_PROP_PRESSUREPAD", None): 987 + prop = libevdev.InputProperty(name="INPUT_PROP_PRESSUREPAD", value=0x7) 988 + libevdev.INPUT_PROP_PRESSUREPAD = prop 989 + libevdev.props.append(prop) 990 + 991 + if uhdev.buttontype == HIDButtonType.CLICKPAD: 992 + assert libevdev.INPUT_PROP_BUTTONPAD in evdev.properties 993 + elif uhdev.buttontype == HIDButtonType.PRESSUREPAD: 994 + assert libevdev.INPUT_PROP_PRESSUREPAD in evdev.properties 995 + else: 996 + assert libevdev.INPUT_PROP_PRESSUREPAD not in evdev.properties 997 + assert libevdev.INPUT_PROP_BUTTONPAD not in evdev.properties 998 + 982 999 def test_ptp_buttons(self): 983 1000 """check for button reliability. 984 - There are 2 types of touchpads: the click pads and the pressure pads. 985 - Each should reliably report the BTN_LEFT events. 1001 + There are 3 types of touchpads: click pads + pressure pads and 1002 + those with discrete buttons. Each should reliably report the BTN_LEFT events. 
986 1003 """ 987 1004 uhdev = self.uhdev 988 1005 evdev = uhdev.get_evdev() 989 1006 990 - if uhdev.type == "clickpad": 1007 + if uhdev.buttontype in [HIDButtonType.CLICKPAD, HIDButtonType.PRESSUREPAD]: 991 1008 r = uhdev.event(click=True) 992 1009 events = uhdev.next_sync_events() 993 1010 self.debug_reports(r, uhdev, events) ··· 1020 995 self.debug_reports(r, uhdev, events) 1021 996 assert libevdev.InputEvent(libevdev.EV_KEY.BTN_LEFT, 0) in events 1022 997 assert evdev.value[libevdev.EV_KEY.BTN_LEFT] == 0 1023 - else: 998 + elif uhdev.buttontype == HIDButtonType.DISCRETE_BUTTONS: 1024 999 r = uhdev.event(left=True) 1025 1000 events = uhdev.next_sync_events() 1026 1001 self.debug_reports(r, uhdev, events) ··· 1943 1918 def create_device(self): 1944 1919 return PTP( 1945 1920 "uhid test dell_044e_1220", 1946 - type="pressurepad", 1921 + buttontype=HIDButtonType.DISCRETE_BUTTONS, 1947 1922 rdesc="05 01 09 02 a1 01 85 01 09 01 a1 00 05 09 19 01 29 03 15 00 25 01 75 01 95 03 81 02 95 05 81 01 05 01 09 30 09 31 15 81 25 7f 75 08 95 02 81 06 09 38 95 01 81 06 05 0c 0a 38 02 81 06 c0 c0 05 0d 09 05 a1 01 85 08 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 af 04 75 10 55 0e 65 11 09 30 35 00 46 e8 03 95 01 81 02 26 7b 02 46 12 02 09 31 81 02 c0 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 05 0d 09 56 81 02 09 54 25 05 95 01 75 08 81 02 05 09 19 01 29 03 25 01 75 01 95 03 81 02 95 05 81 03 05 0d 85 09 09 55 75 08 95 01 25 05 b1 02 06 00 ff 85 0a 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 c0 06 01 ff 09 01 a1 01 85 03 09 01 15 00 26 ff 00 95 1b 81 02 85 04 09 02 95 50 81 02 85 05 09 03 95 07 b1 02 85 06 09 04 81 02 c0 06 02 ff 09 01 a1 01 85 07 09 02 95 86 75 08 b1 02 c0 05 0d 09 0e a1 01 85 0b 09 22 a1 02 09 52 15 00 25 0a 75 08 95 01 b1 02 c0 09 22 a1 00 85 0c 09 57 09 58 75 01 95 02 25 01 b1 02 95 06 b1 03 c0 c0", 1948 1923 ) 1949 1924 ··· 2043 2018 def create_device(self): 2044 2019 return PTP( 2045 2020 "uhid test elan_04f3_313a", 2046 - type="touchpad", 2021 + buttontype=HIDButtonType.DISCRETE_BUTTONS, 2047 2022 input_info=(BusType.I2C, 0x04F3, 0x313A), 2048 2023 rdesc="05 01 09 02 a1 01 85 01 09 01 a1 00 05 09 19 01 29 03 15 00 25 01 75 01 95 03 81 02 95 05 81 03 05 01 09 30 09 31 15 81 25 7f 75 08 95 02 81 06 75 08 95 05 81 03 c0 06 00 ff 09 01 85 0e 09 c5 15 00 26 ff 00 75 08 95 04 b1 02 85 0a 09 c6 15 00 26 ff 00 75 08 95 04 b1 02 c0 06 00 ff 09 01 a1 01 85 5c 09 01 95 0b 75 08 81 06 85 0d 09 c5 15 00 26 ff 00 75 08 95 04 b1 02 85 0c 09 c6 96 80 03 75 08 b1 02 85 0b 09 c7 95 82 75 08 b1 02 c0 05 0d 09 05 a1 01 85 04 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 05 09 09 02 09 03 15 00 25 01 75 01 95 02 81 02 05 0d 95 01 75 04 25 0f 09 51 81 02 05 01 15 00 26 d7 0e 75 10 55 0d 65 11 09 30 35 00 46 44 2f 95 01 81 02 46 12 16 26 eb 06 26 eb 06 09 31 81 02 05 0d 15 00 25 64 95 03 c0 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 09 56 81 02 09 54 25 7f 95 01 75 08 81 02 25 01 75 01 95 08 81 03 09 c5 75 08 95 02 81 03 05 0d 85 02 09 55 09 59 75 04 95 02 25 0f b1 02 85 07 09 60 75 01 95 01 15 00 25 01 b1 02 95 0f b1 03 06 00 ff 06 00 ff 85 06 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 c0 05 0d 09 0e a1 01 85 03 09 22 a1 00 09 52 15 00 25 0a 75 10 95 01 b1 02 c0 09 22 a1 00 85 05 09 57 09 58 75 01 95 02 25 01 b1 02 95 0e b1 03 c0 c0 05 01 09 02 a1 01 85 2a 09 01 a1 00 05 09 19 01 29 03 15 00 25 01 75 01 95 03 81 02 95 05 81 03 05 01 09 30 09 31 15 81 25 7f 35 81 45 7f 55 00 65 13 75 08 95 02 81 06 
75 08 95 05 81 03 c0 c0", 2049 2024 ) ··· 2080 2055 rdesc="05 01 09 02 a1 01 85 02 09 01 a1 00 05 09 19 01 29 02 15 00 25 01 75 01 95 02 81 02 95 06 81 01 05 01 09 30 09 31 15 81 25 7f 75 08 95 02 81 06 c0 c0 05 0d 09 05 a1 01 85 03 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 75 01 95 03 81 03 05 01 15 00 26 1b 04 75 10 55 0e 65 11 09 30 35 00 46 6c 03 95 01 81 02 46 db 01 26 3b 02 09 31 81 02 05 0d c0 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 09 56 81 02 09 54 25 7f 95 01 75 08 81 02 05 09 09 01 25 01 75 01 95 01 81 02 95 07 81 03 05 0d 85 08 09 55 09 59 75 04 95 02 25 0f b1 02 85 0d 09 60 75 01 95 01 15 00 25 01 b1 02 95 07 b1 03 85 07 06 00 ff 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 c0 05 0d 09 0e a1 01 85 04 09 22 a1 02 09 52 15 00 25 0a 75 08 95 01 b1 02 c0 09 22 a1 00 85 06 09 57 09 58 75 01 95 02 25 01 b1 02 95 06 b1 03 c0 c0 06 00 ff 09 01 a1 01 85 09 09 02 15 00 26 ff 00 75 08 95 14 91 02 85 0a 09 03 15 00 26 ff 00 75 08 95 14 91 02 85 0b 09 04 15 00 26 ff 00 75 08 95 1a 81 02 85 0c 09 05 15 00 26 ff 00 75 08 95 1a 81 02 85 0f 09 06 15 00 26 ff 00 75 08 95 01 b1 02 85 0e 09 07 15 00 26 ff 00 75 08 95 01 b1 02 c0", 2081 2056 max_contacts=5, 2082 2057 input_info=(0x3, 0x06CB, 0x2968), 2058 + ) 2059 + 2060 + 2061 + class Testven_0488_108c(BaseTest.TestPTP): 2062 + def create_device(self): 2063 + return PTP( 2064 + "uhid test ven_0488_108c", 2065 + rdesc="05 01 09 02 a1 01 85 06 09 01 a1 00 05 09 19 01 29 03 15 00 25 01 95 03 75 01 81 02 95 01 75 05 81 03 05 01 09 30 09 31 09 38 15 81 25 7f 75 08 95 03 81 06 c0 c0 05 0d 09 05 a1 01 85 01 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 81 03 05 01 15 00 26 ba 0d 75 10 55 0e 65 11 09 30 35 00 46 d0 05 95 01 81 02 26 d0 06 46 bb 02 09 31 81 02 05 0d 95 01 75 10 26 ff 7f 46 ff 7f 09 30 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 81 03 05 01 15 00 26 ba 0d 75 10 55 0e 65 11 09 30 35 00 46 d0 05 95 01 81 02 26 d0 06 46 bb 02 09 31 81 02 05 0d 95 01 75 10 26 ff 7f 46 ff 7f 09 30 81 02 c0 05 0d 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 95 01 75 03 25 05 09 51 81 02 81 03 05 01 15 00 26 ba 0d 75 10 55 0e 65 11 09 30 35 00 46 d0 05 95 01 81 02 26 d0 06 46 bb 02 09 31 81 02 05 0d 95 01 75 10 26 ff 7f 46 ff 7f 09 30 81 02 c0 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 05 0d 09 56 81 02 09 54 25 05 95 01 75 08 81 02 05 09 09 01 25 01 75 01 95 01 81 02 95 07 81 03 05 0d 85 02 09 55 75 08 95 01 25 05 b1 02 09 59 b1 02 06 00 ff 85 03 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 05 0e 09 01 a1 02 85 13 09 23 15 00 25 64 75 08 95 01 b1 02 c0 c0 05 0d 09 0e a1 01 85 04 09 22 a1 02 09 52 15 00 25 0a 75 08 95 01 b1 02 c0 09 22 a1 00 85 05 09 57 09 58 75 01 95 02 25 01 b1 02 95 06 b1 03 c0 c0 06 01 ff 09 02 a1 01 09 00 85 07 15 00 26 ff 00 75 08 96 12 02 b1 02 c0 06 00 ff 09 01 a1 01 85 0d 15 00 26 ff 00 75 08 95 11 09 01 81 02 09 01 91 02 c0 05 0e 09 01 a1 01 85 11 09 35 15 00 26 ff 00 75 08 95 17 b1 02 c0 06 81 ff 09 01 a1 01 09 20 85 17 15 00 26 ff 00 75 08 95 3f 09 01 81 02 09 01 91 02 c0", 2066 + input_info=(0x18, 0x0488, 0x108C), 2067 + buttontype=HIDButtonType.PRESSUREPAD, 2083 2068 ) 2084 2069 2085 2070 ··· 2145 2110 def create_device(self): 2146 2111 return PTP( 2147 2112 "uhid test sipodev_0603_0002", 2148 - type="clickpad", 2113 + buttontype=HIDButtonType.CLICKPAD, 2149 2114 rdesc="05 01 09 02 a1 01 85 03 09 01 a1 00 05 09 19 01 29 02 25 01 75 01 95 02 81 02 95 06 81 03 05 
01 09 30 09 31 15 80 25 7f 75 08 95 02 81 06 c0 c0 05 0d 09 05 a1 01 85 04 09 22 a1 02 15 00 25 01 09 47 09 42 95 02 75 01 81 02 75 01 95 02 81 03 95 01 75 04 25 05 09 51 81 02 05 01 15 00 26 44 0a 75 0c 55 0e 65 11 09 30 35 00 46 ac 03 95 01 81 02 46 fe 01 26 34 05 75 0c 09 31 81 02 05 0d c0 55 0c 66 01 10 47 ff ff 00 00 27 ff ff 00 00 75 10 95 01 09 56 81 02 09 54 25 0a 95 01 75 04 81 02 75 01 95 03 81 03 05 09 09 01 25 01 75 01 95 01 81 02 05 0d 85 0a 09 55 09 59 75 04 95 02 25 0f b1 02 85 0b 09 60 75 01 95 01 15 00 25 01 b1 02 95 07 b1 03 85 09 06 00 ff 09 c5 15 00 26 ff 00 75 08 96 00 01 b1 02 c0 05 0d 09 0e a1 01 85 06 09 22 a1 02 09 52 15 00 25 0a 75 08 95 01 b1 02 c0 09 22 a1 00 85 07 09 57 09 58 75 01 95 02 25 01 b1 02 95 06 b1 03 c0 c0 05 01 09 0c a1 01 85 08 15 00 25 01 09 c6 75 01 95 01 81 06 75 07 81 03 c0 05 01 09 80 a1 01 85 01 15 00 25 01 75 01 0a 81 00 0a 82 00 0a 83 00 95 03 81 06 95 05 81 01 c0 06 0c 00 09 01 a1 01 85 02 25 01 15 00 75 01 0a b5 00 0a b6 00 0a b7 00 0a cd 00 0a e2 00 0a a2 00 0a e9 00 0a ea 00 95 08 81 02 0a 83 01 0a 6f 00 0a 70 00 0a 88 01 0a 8a 01 0a 92 01 0a a8 02 0a 24 02 95 08 81 02 0a 21 02 0a 23 02 0a 96 01 0a 25 02 0a 26 02 0a 27 02 0a 23 02 0a b1 02 95 08 81 02 c0 06 00 ff 09 01 a1 01 85 05 15 00 26 ff 00 19 01 29 02 75 08 95 05 b1 02 c0", 2150 2115 ) 2151 2116
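The long rdesc strings above are raw HID report descriptors: each short item is a one-byte prefix encoding the tag (bits 7:4), the type (bits 3:2, Main/Global/Local) and the data size (bits 1:0), followed by up to four little-endian data bytes. A minimal C sketch of that framing, an illustration only and not part of the patch (it handles short items only):

#include <stdio.h>
#include <stdint.h>

static const char *types[] = { "Main", "Global", "Local", "Reserved" };

/* Walk short HID items: prefix = tag(7:4) | type(3:2) | size(1:0). */
static void dump_items(const uint8_t *d, size_t len)
{
	size_t i = 0;

	while (i < len) {
		uint8_t prefix = d[i];
		uint8_t size = prefix & 0x3;	/* 3 encodes 4 data bytes */
		uint32_t data = 0;

		if (size == 3)
			size = 4;
		for (uint8_t j = 0; j < size && i + 1 + j < len; j++)
			data |= (uint32_t)d[i + 1 + j] << (8 * j);
		printf("tag %#x type %-8s data %#x\n",
		       prefix >> 4, types[(prefix >> 2) & 0x3],
		       (unsigned int)data);
		i += 1 + size;
	}
}

int main(void)
{
	/* Opening bytes of the descriptors above: 05 01 = Usage Page
	 * (Generic Desktop), 09 02 = Usage (Mouse), a1 01 = Collection
	 * (Application).
	 */
	const uint8_t rdesc[] = { 0x05, 0x01, 0x09, 0x02, 0xa1, 0x01 };

	dump_items(rdesc, sizeof(rdesc));
	return 0;
}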
+2 -2
tools/testing/selftests/net/lib/py/__init__.py
··· 13 13 from .netns import NetNS, NetNSEnter 14 14 from .nsim import NetdevSim, NetdevSimDev 15 15 from .utils import CmdExitFailure, fd_read_timeout, cmd, bkg, defer, \ 16 - bpftool, ip, ethtool, bpftrace, rand_port, wait_port_listen, wait_file 16 + bpftool, ip, ethtool, bpftrace, rand_port, wait_port_listen, wait_file, tool 17 17 from .ynl import NlError, YnlFamily, EthtoolFamily, NetdevFamily, RtnlFamily, RtnlAddrFamily 18 18 from .ynl import NetshaperFamily, DevlinkFamily, PSPFamily 19 19 ··· 26 26 "NetNS", "NetNSEnter", 27 27 "CmdExitFailure", "fd_read_timeout", "cmd", "bkg", "defer", 28 28 "bpftool", "ip", "ethtool", "bpftrace", "rand_port", 29 - "wait_port_listen", "wait_file", 29 + "wait_port_listen", "wait_file", "tool", 30 30 "NetdevSim", "NetdevSimDev", 31 31 "NetshaperFamily", "DevlinkFamily", "PSPFamily", "NlError", 32 32 "YnlFamily", "EthtoolFamily", "NetdevFamily", "RtnlFamily",
+1
tools/testing/selftests/net/mptcp/Makefile
··· 3 3 top_srcdir = ../../../../.. 4 4 5 5 CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES) 6 + CFLAGS += -I$(top_srcdir)/tools/include 6 7 7 8 TEST_PROGS := \ 8 9 diag.sh \
+2 -1
tools/testing/selftests/net/mptcp/mptcp_connect.c
··· 33 33 #include <linux/tcp.h> 34 34 #include <linux/time_types.h> 35 35 #include <linux/sockios.h> 36 + #include <linux/compiler.h> 36 37 37 38 extern int optind; 38 39 ··· 141 140 exit(1); 142 141 } 143 142 144 - static void xerror(const char *fmt, ...) 143 + static void __noreturn xerror(const char *fmt, ...) 145 144 { 146 145 va_list ap; 147 146
+2 -1
tools/testing/selftests/net/mptcp/mptcp_diag.c
··· 5 5 #include <linux/rtnetlink.h> 6 6 #include <linux/inet_diag.h> 7 7 #include <linux/netlink.h> 8 + #include <linux/compiler.h> 8 9 #include <sys/socket.h> 9 10 #include <netinet/in.h> 10 11 #include <linux/tcp.h> ··· 88 87 89 88 #define rta_getattr(type, value) (*(type *)RTA_DATA(value)) 90 89 91 - static void die_perror(const char *msg) 90 + static void __noreturn die_perror(const char *msg) 92 91 { 93 92 perror(msg); 94 93 exit(1);
+3 -2
tools/testing/selftests/net/mptcp/mptcp_inq.c
··· 28 28 29 29 #include <linux/tcp.h> 30 30 #include <linux/sockios.h> 31 + #include <linux/compiler.h> 31 32 32 33 #ifndef IPPROTO_MPTCP 33 34 #define IPPROTO_MPTCP 262 ··· 41 40 static int proto_tx = IPPROTO_MPTCP; 42 41 static int proto_rx = IPPROTO_MPTCP; 43 42 44 - static void die_perror(const char *msg) 43 + static void __noreturn die_perror(const char *msg) 45 44 { 46 45 perror(msg); 47 46 exit(1); ··· 53 52 exit(r); 54 53 } 55 54 56 - static void xerror(const char *fmt, ...) 55 + static void __noreturn xerror(const char *fmt, ...) 57 56 { 58 57 va_list ap; 59 58
+3 -2
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
··· 25 25 #include <netinet/in.h> 26 26 27 27 #include <linux/tcp.h> 28 + #include <linux/compiler.h> 28 29 29 30 static int pf = AF_INET; 30 31 ··· 128 127 #define MIN(a, b) ((a) < (b) ? (a) : (b)) 129 128 #endif 130 129 131 - static void die_perror(const char *msg) 130 + static void __noreturn die_perror(const char *msg) 132 131 { 133 132 perror(msg); 134 133 exit(1); ··· 140 139 exit(r); 141 140 } 142 141 143 - static void xerror(const char *fmt, ...) 142 + static void __noreturn xerror(const char *fmt, ...) 144 143 { 145 144 va_list ap; 146 145
+44 -1
tools/testing/selftests/net/netfilter/nft_concat_range.sh
··· 29 29 net6_port_net6_port net_port_mac_proto_net" 30 30 31 31 # Reported bugs, also described by TYPE_ variables below 32 - BUGS="flush_remove_add reload net_port_proto_match avx2_mismatch doublecreate" 32 + BUGS="flush_remove_add reload net_port_proto_match avx2_mismatch doublecreate insert_overlap" 33 33 34 34 # List of possible paths to pktgen script from kernel tree for performance tests 35 35 PKTGEN_SCRIPT_PATHS=" ··· 410 410 411 411 TYPE_doublecreate=" 412 412 display cannot create same element twice 413 + type_spec ipv4_addr . ipv4_addr 414 + chain_spec ip saddr . ip daddr 415 + dst addr4 416 + proto icmp 417 + 418 + race_repeat 0 419 + 420 + perf_duration 0 421 + " 422 + 423 + TYPE_insert_overlap=" 424 + display reject overlapping range on add 413 425 type_spec ipv4_addr . ipv4_addr 414 426 chain_spec ip saddr . ip daddr 415 427 dst addr4 ··· 1962 1950 err "Could not flush and re-create element in one transaction" 1963 1951 return 1 1964 1952 fi 1953 + 1954 + return 0 1955 + } 1956 + 1957 + add_fail() 1958 + { 1959 + if nft add element inet filter test "$1" 2>/dev/null ; then 1960 + err "Returned success for add ${1} given set:" 1961 + err "$(nft -a list set inet filter test )" 1962 + return 1 1963 + fi 1964 + 1965 + return 0 1966 + } 1967 + 1968 + test_bug_insert_overlap() 1969 + { 1970 + local elements="1.2.3.4 . 1.2.4.1" 1971 + 1972 + setup veth send_"${proto}" set || return ${ksft_skip} 1973 + 1974 + add "{ $elements }" || return 1 1975 + 1976 + elements="1.2.3.0-1.2.3.4 . 1.2.4.1" 1977 + add_fail "{ $elements }" || return 1 1978 + 1979 + elements="1.2.3.0-1.2.3.4 . 1.2.4.2" 1980 + add "{ $elements }" || return 1 1981 + 1982 + elements="1.2.3.4 . 1.2.4.1-1.2.4.2" 1983 + add_fail "{ $elements }" || return 1 1965 1984 1966 1985 return 0 1967 1986 }
+47
tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
··· 1098 1098 "teardown": [ 1099 1099 "$TC qdisc del dev $DUMMY root" 1100 1100 ] 1101 + }, 1102 + { 1103 + "id": "4ed9", 1104 + "name": "Try to redirect to self on egress with clsact", 1105 + "category": [ 1106 + "filter", 1107 + "mirred" 1108 + ], 1109 + "plugins": { 1110 + "requires": [ 1111 + "nsPlugin" 1112 + ] 1113 + }, 1114 + "setup": [ 1115 + "$IP link set dev $DUMMY up || true", 1116 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 1117 + "$TC qdisc add dev $DUMMY clsact", 1118 + "$TC filter add dev $DUMMY egress protocol ip prio 10 matchall action mirred egress redirect dev $DUMMY index 1" 1119 + ], 1120 + "cmdUnderTest": "ping -c1 -W0.01 -I $DUMMY 10.10.10.1", 1121 + "expExitCode": "1", 1122 + "verifyCmd": "$TC -j -s actions get action mirred index 1", 1123 + "matchJSON": [ 1124 + { 1125 + "total acts": 0 1126 + }, 1127 + { 1128 + "actions": [ 1129 + { 1130 + "order": 1, 1131 + "kind": "mirred", 1132 + "mirred_action": "redirect", 1133 + "direction": "egress", 1134 + "index": 1, 1135 + "stats": { 1136 + "packets": 1, 1137 + "overlimits": 1 1138 + }, 1139 + "not_in_hw": true 1140 + } 1141 + ] 1142 + } 1143 + ], 1144 + "teardown": [ 1145 + "$TC qdisc del dev $DUMMY clsact" 1146 + ] 1101 1147 } 1148 + 1102 1149 ]
+32
tools/testing/vsock/vsock_test.c
··· 2192 2192 close(fd); 2193 2193 } 2194 2194 2195 + static void test_stream_accepted_setsockopt_client(const struct test_opts *opts) 2196 + { 2197 + int fd; 2198 + 2199 + fd = vsock_stream_connect(opts->peer_cid, opts->peer_port); 2200 + if (fd < 0) { 2201 + perror("connect"); 2202 + exit(EXIT_FAILURE); 2203 + } 2204 + 2205 + close(fd); 2206 + } 2207 + 2208 + static void test_stream_accepted_setsockopt_server(const struct test_opts *opts) 2209 + { 2210 + int fd; 2211 + 2212 + fd = vsock_stream_accept(VMADDR_CID_ANY, opts->peer_port, NULL); 2213 + if (fd < 0) { 2214 + perror("accept"); 2215 + exit(EXIT_FAILURE); 2216 + } 2217 + 2218 + enable_so_zerocopy_check(fd); 2219 + close(fd); 2220 + } 2221 + 2195 2222 static struct test_case test_cases[] = { 2196 2223 { 2197 2224 .name = "SOCK_STREAM connection reset", ··· 2397 2370 .name = "SOCK_SEQPACKET ioctl(SIOCINQ) functionality", 2398 2371 .run_client = test_seqpacket_unread_bytes_client, 2399 2372 .run_server = test_seqpacket_unread_bytes_server, 2373 + }, 2374 + { 2375 + .name = "SOCK_STREAM accept()ed socket custom setsockopt()", 2376 + .run_client = test_stream_accepted_setsockopt_client, 2377 + .run_server = test_stream_accepted_setsockopt_server, 2400 2378 }, 2401 2379 {}, 2402 2380 };